Relics

Gilad Bracha recently posted an article “Bits of History, Words of Advice” that talks about the incredible influence of Smalltalk but bemoans the fact that:

…today Smalltalk is relegated to a small niche of true believers.  Whenever two or more Smalltalkers gather over drinks, the question is debated: Why? 

(All block quotes are from Gilad’s article unless otherwise noted.)

Smalltalk actually had a surge of commercial popularity in the first half of the 1990s, but that interest evaporated almost instantaneously in 1996. Most of Gilad’s article consists of his speculations on why that happened. I agree with many of Gilad’s takes on this subject, but his involvement with Smalltalk started relatively late in the commercial Smalltalk lifecycle. I was there pretty much at the beginning, so it seems appropriate to add some additional history and my own personal perspective. I’ll be directly responding to some of Gilad’s points, so you should probably read his post before continuing.

Let’s Start with Some History

Gilad worked on the Strongtalk implementation of Smalltalk that was developed in the mid-1990s. His post describes the world of Smalltalk as he saw it during that period. But to get a more complete understanding we will start with an overview of what had occurred over the previous twenty years.

1970-1979: Creation

Starting in the early 1970s the early Smalltalkers and other Xerox PARC researchers invented the concepts and mechanisms of Personal Computing, most of which are still dominant today. Guided by Alan Kay’s vision, Dan Ingalls along with Ted Kaehler, Adele Goldberg, Larry Tesler, and other members of the PARC Learning Research Group created Smalltalk as the software for the Dynabook, their aspirational model of a personal computer. Smalltalk wasn’t just a programming language—it was, using today’s terminology, a complete software application platform and development environment running on the bare metal of dedicated personal supercomputers. During this time, the Smalltalk language and system evolved through at least five major versions.

1980-1984: Dissemination and Frustration

In the 1970s, the outside world only saw hints and fleeting glimpses of what was going on within the Learning Research Group. Their work influenced the design of the Xerox Star family of office machines, and after Steve Jobs got a Smalltalk demo he hired away Larry Tesler to work on the Apple Lisa. But the LRG team (later called SCG, for Software Concepts Group) wanted to directly expose their work (Smalltalk) to the world at large. They developed a version of their software, Smalltalk-80, that would be suitable for distribution outside of Xerox and wrote about it in a series of books and a special issue of the widely read Byte magazine. They also made a memory snapshot of their Smalltalk-80 implementation available to several companies that they thought had the hardware expertise to build the microcoded custom processors that were believed necessary to run Smalltalk. The hope was that the companies would design and sell Smalltalk-based computers similar to the one they were using at Xerox PARC.

But none of the collaborators did this. Instead of building microcoded Smalltalk processors they used economical mini-computers or commodity microprocessor-based systems and coded what were essentially simple software emulations of the Xerox supercomputer Smalltalk machines. The initial results were very disappointing. They could nominally run the Xerox-provided Smalltalk memory image, but at speeds 10–100 times slower than the machines being used at PARC. This was extremely frustrating, as their implementations were too slow to do meaningful work with the Smalltalk platform. Most of the original external collaborators eventually gave up on Smalltalk.

But a few groups, such as my colleagues and me at Tektronix, L Peter Deutsch and Alan Schiffman at Xerox, and Dave Ungar at UC Berkeley, developed new techniques leading to drastically better Smalltalk-80 performance using commodity microprocessors with 32-bit architectures.

At the same time Jim Anderson, George Bosworth, and their colleagues, working exclusively from the Byte articles, independently developed a new Smalltalk that could usefully run on the original IBM PC with its Intel 8088 processor.

1985-1989: Productization

By 1985, it was possible to actually use Smalltalk on affordable hardware, and various groups went about releasing Smalltalk products. In January 1985 my group at Tektronix shipped the first of a series of “AI Workstations” that were specifically designed to support Smalltalk-80. At about the same time Digitalk, Anderson’s and Bosworth’s company, shipped Methods—their first IBM PC Smalltalk product. This was followed by their Smalltalk/V products, whose name suggested a linkage to Alan Kay’s Vivarium project. Servio Logic introduced GemStone, an object-oriented database system based on Smalltalk. In 1987 most of the remaining Xerox PARC Smalltalkers, led by Adele Goldberg, spun off into a startup company, ParcPlace Systems. Their initial focus was selling Smalltalk-80 systems for workstation-class machines.

Dave Thomas’ Object Technology International (OTI) worked with Digitalk to create a Macintosh version of Smalltalk/V and then started to develop the Smalltalk technology that would eventually be used in IBM’s VisualAge products. Tektronix, working with OTI, started developing two new families of oscilloscopes that used embedded Smalltalk to run its user interface. Both OTI and Instantiations, a Tektronix spin-off, started working on tools to support team-scale development projects using Smalltalk.

1990-1995: Enterprise Smalltalk

As many software tools vendors discovered over that period, it takes too many $99 sales to support anything larger than a “lifestyle” business. In 1990, if you wanted to build a substantial software tools business you had to find the customers who spent substantial sums on such tools. Most tools vendors who stayed in business discovered that most of those customers were enterprise IT departments. Those organizations, used to being IBM mainframe customers, often had hundreds of in-house developers and were willing to spend several thousand dollars per initial developer seat plus substantial yearly support fees for an effective application development platform.

At that time, many enterprise IT organizations were trying to transition from mainframe-driven “green-screen” applications to “fat clients” with modern Mac/Windows-style GUI interfaces. The major Smalltalk vendors positioned their products as solutions to that transition problem. This was reflected in their product branding: ParcPlace Systems’ ObjectWorks was renamed VisualWorks, Digitalk’s Smalltalk/V became VisualSmalltalk Enterprise, and IBM used OTI’s embedded Smalltalk technologies as the basis of VisualAge. Smalltalk was even being positioned in the technical press and books as the potential successor to COBOL:

(Smalltalk and Object Orientation: An Introduction, John Hunt, 1998, p. 44)

Dave Thomas’ 1995 article “Travels with Smalltalk” is an excellent survey of the then current and previous ten years of commercial Smalltalk activity. Around the time it was published, IBM consolidated its Smalltalk technology base by acquiring Dave’s OTI, and Digitalk merged with ParcPlace Systems. That merger was motivated by their mutual fear of IBM Smalltalk’s competitive power in the enterprise market.

1996-Present: Retreating to the Fringe

Over six months in 1996, Smalltalk’s place in the market changed from the enterprise darling COBOL replacement to yet another disappointing tool with a long tail of deployed applications that needed to be maintained.

The World Wide Web diverted enterprise attention away from fat clients and suddenly web thin clients were the hot new technology. At the same time Sun Microsystems’ massive marketing campaign for Java convinced enterprises that Java was the future for both web and standalone client applications. Even though Java never delivered on its client-side promises, the commercial Smalltalk vendors were unable to counter the Java hype cycle and development of new Smalltalk-based enterprise applications stopped. The merged ParcPlace-Digitalk rapidly declined and only survived for a couple more years. IBM quickly pivoted its development attention to Java and used the development environment expertise it had acquired from OTI to create Eclipse.

Niche companies assumed responsibility for support of the legacy enterprise Smalltalk customers with deployed applications. Cincom picked up support for VisualWorks and Visual Smalltalk Enterprise customers, while Instantiations took over supporting IBM’s VisualAge Smalltalk. Today, twenty-five years later, there are still hundreds of deployed legacy enterprise Smalltalk applications.

In 1996, Dan Ingalls, then at Apple, used an original Smalltalk-80 virtual image to bootstrap Squeak Smalltalk. Dan, Alan Kay, and their colleagues at Apple, Disney, and some universities used Squeak as a research platform, much as they had used the original Smalltalk at Xerox PARC. Squeak was released as open source software and has a community of users and contributors throughout the world. In 2008, Pharo was forked from Squeak, with the intent of being a streamlined, more industrialized, and less research-focused version of Smalltalk. In addition, several other Smalltalks have been created over the last 25 years. Today Smalltalk still has a small, enthusiastic, and somewhat fragmented base of users.

Why Didn’t Smalltalk Win?

In his article, Gilad lists four major areas of failure of execution by the Smalltalk community that led to its lack of broad long-term adoption. Let’s look at each of them and see where I agree and disagree.

Lack of a Standard

But a standard for what? What is this “Smalltalk” thing that wasn’t standardized? As Gilad notes, there was a reasonable degree of de facto agreement on the core language syntax, semantics and even the kernel class protocols going back to at least 1988. But beyond that kernel, things were very different.

Each vendor had a slightly different version – not so much a different language, as a different platform.

And these platforms weren’t just slightly different. They were very different in terms of everything that made them a platform. The differences between the Smalltalk platforms were not like those that existed at the time between compilers like GCC and Microsoft’s C/C++. A more apt analog would be the differences between platforms like Windows, Solaris, various Linux distributions, and Mac OS X. A C program using the standard C library can be easily ported among all these platforms, but porting a highly interactive GUI C-based application usually required a major rewrite.

The competing Smalltalk platform products were just as different. It was easy enough to “file out” basic Smalltalk language code that used the common kernel classes and move it to another Smalltalk product. But just like the C-implemented platforms, porting a highly interactive GUI application among Smalltalk platforms required a major rewrite. The core problem wasn’t the reflective manner in which Smalltalk programs were defined (although I strongly agree with Gilad that this is undesirable). The problem was that all the platform services that built upon the core language and kernel classes—the things that make a platform a platform—were different. By 1995, there were at least three major and several niche Smalltalk platforms competing in the client application platform space. But by then there was already a client platform winner—it was Microsoft Windows.

Business Model

Smalltalk vendors had the quaint belief in the notion of “build a better mousetrap and the world will beat a path to your door”. Since they had built a vastly better mousetrap, they thought they might charge money for it. 

…Indeed, some vendors charged not per-developer-seat, but per deployed instance of the software. Greedy algorithms are often suboptimal, and this approach was greedier and less optimal than most.

This may be an accurate description of the business model of ParcPlace Systems, but it isn’t a fair characterization of the other Smalltalk vendors. In particular, Digitalk started with a $99 Smalltalk product for IBM PCs and between 1985 and 1990 built a reasonable small business around sub-$500 Smalltalk products for PCs and Macs. Then, with VC funding, it transitioned to a fairly standard enterprise-focused $2000+ per seat plus professional services business model. Digitalk never had deployment fees. I don’t believe that IBM did either.

The enterprise business model worked well for the major Smalltalk vendors—but it became a trap. When you acquire such customers you have to respond to their needs. And their immediate needs were building those fat-client applications. I remember with dismay the day at Digitalk when I realized that our customers didn’t really care about Smalltalk, or object-based programming and design, or live programming, or any of the unique technologies Smalltalk brought to the table. They just wanted to replace those green-screens with something that looked like a Windows application. They bought our products because our (quite successful) enterprise sales group convinced them that Smalltalk was necessary to build such applications.

Supporting those customer expectations diverted engineering resources to visual designers and box-and-stick visual programming tools rather than important downstream issues such as team collaborative development, versioning, and deployment. Yes, we knew about these issues and had plans to address them, but most resources went to the immediate customer GUI issues. So much so that ParcPlace and Digitalk were blindsided and totally unprepared to compete when their leading enterprise customers pivoted to web-based thin clients.

Performance

Smalltalk execution performance had been a major issue in the 1980s. But by the mid 1990s every major commercial Smalltalk had a JIT-based virtual machine and a multi-generation garbage collector. When “Strongtalk applied Self’s technology to Smalltalk” it was already practical.

While those of us working on Smalltalk VMs loved to chase C++ performance our actual competition was PowerBuilder, Visual Basic, and occasionally Delphi. All the major Smalltalk VMs had much better execution performance than any of those. Microsoft once even made a bid to acquire Digitalk, even though they had no interest in Smalltalk. They just wanted to repurpose the Smalltalk/V VM technology to make Visual Basic faster.

But as Gilad points out, raw speed is seldom an issue. Particularly for the fat client UIs that were the focus of most commercial Smalltalk customers. Smalltalk VMs also had much better performance than the other dynamic languages that emerged and gained some popularity during the 1990s. Perl, Python, Ruby, PHP all had, and as far as I know still have, much poorer execution performance than 1995 Smalltalks running on comparable hardware.

Memory usage was a bigger issue. It was expensive for customers to have to double the memory in PCs to effectively run commercial Smalltalk. But Moore’s law quickly overcame that issue.

It’s also worth dwelling on the fact that raw speed is often much less relevant than people think. Java was introduced as a client technology (anyone remember applets?). The vision was programs running in web pages. Alas, Java was a terrible client technology. In contrast, even a Squeak interpreter, let alone Strongtalk, had much better start up times than Java, and better interactive response as well. It also had much smaller footprint. It was a much better basis for performant client software than Java. The implications are staggering.

Would Squeak (or any mid-1990s version of Smalltalk) have really fared better than Java in the browser? You can try it yourself by running a 1998 version of Squeak right now in your browser: https://squeak.js.org/demo/simple.html. Is this what web developers needed at that time?

Java’s problem as a web client was that it wanted to be its own platform. Java wasn’t well integrated into the HTML-based architecture of web browsers. Instead, Java treated the browser as simply another processor to host the Sun-controlled “write once, run [the same] everywhere” Java application platform. Its goal wasn’t to enhance native browser technologies—its goal was to replace them.

Interaction with the Outside World

Because of their platform heritage, commercial Smalltalk products had similar problems to those Java had. There was an “impedance mismatch” between the Smalltalk way of doing things and the dominant client platforms such as Microsoft Windows. The non-Smalltalk-80 derived platforms (Digitalk and OTI/IBM) did a better job than ParcPlace Systems at smoothing that mismatch.

Windowing. Smalltalk was the birthplace of windowing. Ironically, Smalltalks continued to run on top of their own idiosyncratic window systems, locked inside a single OS window.
Strongtalk addressed this too; occasionally, so did others, but the main efforts remained focused on their own isolated world, graphically as in every other way.

This describes ParcPlace’s VisualWorks, but not Digitalk’s Visual Smalltalk or IBM’s VisualAge Smalltalk. Applications written for those Smalltalks used native platform windows and widgets, just like other languages. Each Smalltalk vendor supplied their own higher-level GUI frameworks. Those frameworks were different from each other and from frameworks used with other languages. Digitalk’s early Smalltalk products for MSDOS supplied their own windowing systems, but Digitalk’s Windows and OS/2 products always used the native windows and graphics.

Source control. The lack of a conventional syntax meant that Smalltalk code could not be managed with conventional source control systems. Instead, there were custom tools. Some were great – but they were very expensive.

Both IBM and Digitalk bundled source control systems into their enterprise Smalltalk products, which were competitively priced with other enterprise development platforms. OTI/IBM’s Envy source control system successfully built upon Smalltalk’s traditional reflective syntax and stored code in a multiuser object database. OTI also sold Envy for ParcPlace’s VisualWorks but lacked the pricing advantage of bundling. Digitalk’s Team/V unobtrusively introduced a non-reflective syntax and versioned source code using RCS. Team/V could forward- and backward-migrate versions of Smalltalk “modules” within a running virtual image.

Deployment. Smalltalk made it very difficult to deploy an application separate from the programming environment.

As Gilad describes, extracting an application from its development environment could be tricky. But both Team/V and Envy included tools to help developers do this. Team/V supported the deployment of applications as Digitalk Smalltalk Link Libraries (SLLs), which were separable virtual image segments that could be dynamically loaded. Envy, which was originally designed to generate ROM-based embedded applications, had rich tools for tree-shaking a deployable application from a development image.

Gilad also mentioned that “unprotected IP” was a deployment concern that hindered Smalltalk acceptance. This is presumably because, even if source code wasn’t deployed with an application, it was quite easy to decompile a Smalltalk bytecoded method back into human-readable source code. Indeed, potential enterprise customers would occasionally raise that as a concern. But it was usually a “what about” issue. Out of the hundreds of enterprise customers we dealt with at Digitalk, I can’t recall a single instance where unprotected IP killed a deal. If it had been a real obstacle, we would have done something about it, as we knew of various techniques that could be used to obfuscate Smalltalk code.

“So why didn’t Smalltalk take over the world?”

Smalltalk did something more important than take over the world—it defined the shape of the world! Alan Kay’s Dynabook vision of personal computing didn’t come with a detailed design document describing all of its features and how to create it. To make the vision concrete, real artifacts had to be invented.

Alan Kay Dynabook sketch, 1972

Smalltalk was the software foundation for “inventing the future” Dynabook. The Smalltalk programming language went through at least five major revisions at Xerox PARC and evolved into the software platform of the prototype Dynabook. Using the Smalltalk platform, the fundamental concepts of a personal graphic user interface were first explored and refined. Those initial concepts were widely copied and further refined in the market resulting in the systems we still use today. Smalltalk was also the vector by which the concepts of object-oriented programming and design were introduced to a world-wide programming community. Those ideas dominated for at least 25 years. From the perspective of its original purpose, Smalltalk was a phenomenal success.

But, I think Gilad’s question really is: Why didn’t the Smalltalk programming language become one of the most widely used languages? Why isn’t it in the top 10 of the Tiobe Index?

Gilad’s entire article is essentially about answering that question, and he summarizes his conclusions as:

With 20/20 hindsight, we can see that from the pointy-headed boss perspective, the Smalltalk value proposition was:

 Pay a lot of money to be locked in to slow software that exposes your IP, looks weird on screen and cannot interact well with anything else; it is much easier to maintain and develop though!

On top of that, a fair amount of bad luck.

There are elements of truth in all of those observations, but I think they miss what really happened with the rise and fall of commercial Smalltalk:

  1. Smalltalk wasn’t just a language; it was a complete personal computing platform. But by the time affordable personal computing hardware could run the Smalltalk platform, other less demanding platforms had already dominated the personal computing ecosystem.
  2. Xerox Smalltalk was built for experimentation. It was never hardened for wide use or reliable application deployment.
  3. Adapting Smalltalk for production use required large engineering teams, and in the late 1980s few deep-pocketed companies were willing to make that investment. Other than IBM and HP, the large computing companies that could have helped ready Smalltalk for production use took other paths.
  4. Small companies that wanted to harden Smalltalk needed to find a business model that would support that large engineering effort. The obvious one was the enterprise application development market.
  5. To break into that market required providing a solution to an important customer problem. The problem the companies latched on to was enterprise migration from green-screens to more modern looking fat-client applications.
  6. Several companies had considerable success in taking Smalltalk to the enterprise market. But these were demanding and fickle customers and the Smalltalk companies became hyper focused on fulfilling their fat-client commitments.
  7. With that focus, the Smalltalk companies were completely unprepared in 1996 to deal with the sudden emergence of the web browser platform and its use within enterprises. At the same time, Sun, a much larger company than any of the Smalltalk-technology driven companies spent vast sums to promote Java as the solution for both web and desktop client applications.
  8. Enterprises diverted their development tool purchases from Smalltalk companies to new businesses who promised web and/or Java based solutions.
  9. Smalltalk revenues rapidly declined and the Smalltalk focused companies failed.
  10. Technologies that have failed in the marketplace seldom get revived.

Smalltalk still has its niche uses and devotees. Sometimes interesting new technologies emerge from that community. Gilad’s Newspeak is a quite interesting rethinking of the language layer of traditional Smalltalk.

Gilad mentioned Smalltalk’s bad luck. But I think it had better than average luck compared to most other attempts to industrialize innovative programming technologies. It takes incredible luck and timing to become one of the world’s most widely used programming languages. It’s such a rare event that no language designer should start out with that expectation. You really should have some other motivation to justify your effort.

But my question for Newspeak and any similar efforts is: What is your goal? What drives the design? What impact do you want to have?

Smalltalk wasn’t created to rule the software world, it was created to enable the invention of a new software world. I want to hear about compelling visions for the next software world and what we will need to build it. Could Newspeak have such a role?

JavaScript: The First 20 Years  by Allen Wirfs-Brock and Brendan Eich

Our HOPL paper is done and submitted to the ACM for June 2020 publication in PACMPL (Proceedings of the ACM on Programming Languages)  and presentation at the HOPL 4 conference whenever it actually occurs. PACMPL is an open access journal so there won’t be a paywall preventing people from reading our paper.  You can access the paper at https://zenodo.org/record/3707007. But before you run off and start reading this 190 page “paper” I want to talk a bit about HOPL.

The History of Programming Languages Conferences

HOPL is a unique conference and the foremost conference relating to the history of programming languages. HOPL-IV will be only the 4th HOPL. Previous HOPLs occurred in 1978, 1993, and 2007. The History of HOPL web page provides an overview of the conference’s history and which languages were covered at each of the three previous HOPLs. HOPL papers can be quite long. As the HOPL-IV call for papers says, “Because of the complex nature of the history of programming languages, there is no upper bound on the length of submitted papers—authors should strive for completeness.” HOPL papers are often authored by the original designers of an important language or individuals who have made significant contributions to the evolution of a language.

As the HOPL-IV call for papers describes, writing a HOPL paper is an arduous multi-year process. Initial submissions were due in September 2018 and reviewed by the program committee.  For papers that made it through that review, the second major review draft was due September 2019.  The final “camera ready” manuscripts were due March 13, 2020.  Along the way, each paper received extensive reviews from members of  the program  committee and each paper was closely monitored by one or more program committee “shepherds” who worked very closely with the authors. One of the challenges for most of the authors was to learn what it meant to write a history paper rather than a traditional technical paper.  Authors were encouraged  to learn to think and write  like a professional historian.

I’ve long been a fan of HOPL and have read most of the papers from the first three HOPLs. But I’d never actually attended one. I first heard about HOPL-IV on July 7, 2017 when I received an invitation from Guy Steele and Richard Gabriel to serve on the program committee. I immediately checked whether PC members could submit and, because the answer was yes, I accepted. I knew that JavaScript needed to be included in a HOPL and that I probably was best situated to write it. But my direct experience with JS only dates to 2007, so I knew I would need Brendan Eich’s input in order to cover the early history of the language, and he agreed to sign on as coauthor. My initial outline for the paper is dated July 20, 2017 and was titled “JavaScript: The First 25 Years” (we decided to cut it down to the first 20 years after the first round of reviews). The outline was seven pages long. I hadn’t looked at it since sometime in 2018, but looking at it today, I found it remarkably close to what is in the final paper. I knew the paper was going to be long. But I never thought it would end up at 190 pages. Many thanks to Richard Gabriel for repeatedly saying “don’t worry about the length.”

There is a lot I have to say about gathering primary source materials (like a real historian) but I’m going to save that for another post in a few days. So, if you’re interested in the history of JavaScript, start reading.

Our HOPL paper is done—all 190 pages of it. The preprint will be posted this week.  In the meantime, here’s a little teaser.

JavaScript: The First 20 Years
By Allen Wirfs-Brock and Brendan Eich

Introduction

In 2020, the World Wide Web is ubiquitous with over a billion websites accessible from billions of Web-connected devices. Each of those devices runs a Web browser or similar program which is able to process and display pages from those sites. The majority of those pages embed or load source code written in the JavaScript programming language. In 2020, JavaScript is arguably the world’s most broadly deployed programming language. According to a Stack Overflow [2018] survey it is used by 71.5% of professional developers making it the world’s most widely used programming language.

This paper primarily tells the story of the creation, design, and evolution of the JavaScript language over the period of 1995–2015. But the story is not only about the technical details of the language. It is also the story of how people and organizations competed and collaborated to shape the JavaScript language which dominates the Web of 2020.

This is a long and complicated story. To make it more approachable, this paper is divided into four major parts—each of which covers a major phase of JavaScript’s development and evolution. Between each of the parts there is a short interlude that provides context on how software developers were reacting to and using JavaScript.

In 1995, the Web and Web browsers were new technologies bursting onto the world, and Netscape Communications Corporation was leading Web browser development. JavaScript was initially designed and implemented in May 1995 at Netscape by Brendan Eich, one of the authors of this paper. It was intended to be a simple, easy to use, dynamic language that enabled snippets of code to be included in the definitions of Web pages. The code snippets were interpreted by a browser as it rendered the page, enabling the page to dynamically customize its presentation and respond to user interactions.

Part 1, The Origins of JavaScript, is about the creation and early evolution of JavaScript. It examines the motivations and trade-offs that went into the development of the first version of the JavaScript language at Netscape. Because of its name, JavaScript is often confused with the Java programming language. Part 1 explains the process of naming the language, the envisioned relationship between the two languages, and what happened instead. It includes an overview of the original features of the language and the design decisions that motivated them. Part 1 also traces the early evolution of the language through its first few years at Netscape and other companies.

A cornerstone of the Web is that it is based upon non-proprietary open technologies. Anybody should be able to create a Web page that can be hosted by a variety of Web servers from different vendors and accessed by a variety of browsers. A common specification facilitates interoperability among independent implementations. From its earliest days it was understood that JavaScript would need some form of standard specification. Within its first year Web developers were encountering interoperability issues between Netscape’s JavaScript and Microsoft’s reverse-engineered implementation. In 1996, the standardization process for JavaScript was begun under the auspices of the Ecma International standards organization. The first official standard specification for the language was issued in 1997 under the name “ECMAScript.”

Two additional revised and enhanced editions, largely based upon Netscape’s evolution of the language, were issued by the end of 1999.

Part 2, Creating a Standard, examines how the JavaScript standardization effort was initiated, how the specifications were created, who contributed to the effort, and how decisions were made.

By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual “Benevolent Dictator for Life,” the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the ECMAScript committee tried to find its own path forward evolving the language. All the while, actual usage of JavaScript rapidly grew, often using implementation-specific extensions. This created a huge legacy of unmaintained JavaScript-dependent Web pages and revealed new interoperability issues. Web developers began to create complex client-side JavaScript Web applications and were asking for standardized language enhancements to support them.

Part 3, Failed Reformations, examines the unsuccessful attempts to revise the language, the resulting turmoil within the standards committee, and how that turmoil was ultimately resolved.

In 2008 the standards committee restored harmonious operations and was able to create a modestly enhanced edition of the standard that was published in 2009.  With that success, the standards committee was finally ready to successfully undertake the task of compatibly modernizing the language. Over the course of seven years the committee developed major enhancements to the language and its specification. The result, known as ECMAScript 2015, is the foundation for the ongoing evolution of JavaScript. After completion of the 2015 release, the committee again modified its processes to enable faster incremental releases and now regularly completes revisions on a yearly schedule.

Part 4, Modernizing JavaScript, is the story of the people and processes that were used to create both the 2009 and 2015 editions of the ECMAScript standard. It covers the goals for each edition and how they addressed evolving needs of the JavaScript development community. This part examines the significant foundational changes made to the language in each edition and important new features that were added to the language.

Wherever possible, the source materials for this paper are contemporaneous primary documents. Fortunately, these exist in abundance. The authors have ensured that nearly all of the primary documents are freely and easily accessible on the Web from reliable archives using URLs included in the references. The primary document sources were supplemented with interviews and personal communications with some of the people who were directly involved in the story. Both authors were significant participants in many events covered by this paper. Their recollections are treated similarly to those of the third-party informants.

The complete twenty-year story of JavaScript is long and so is this paper. It involves hundreds of distinct events and dozens of individuals and organizations. Appendices A through E are provided to help the reader navigate these details. Appendices A and B provide annotated lists of the people and organizations that appear in the story. Appendix C is a glossary that includes terms which are unique to JavaScript or used with meanings that may be different from common usage within the computing community in 2020 or whose meaning might change or become unfamiliar for future readers. The first use within this paper of a glossary term is usually italicized and marked with a “g” superscript. Appendix D defines abbreviations that a reader will encounter. Appendix E contains four detailed timelines of events, one for each of the four parts of the paper.

Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this, and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m taking a try at blogging back to him.

What To Call It?

The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues, dating back to 1996, the specification could not use the name JavaScript.  So they coined the name ECMAScript instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.

Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262.  Each time an update to the specification is approved as “the standard” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”.  So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6”.  ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification and the preferred shorthand name is ECMAScript 2015 or just ES2015.

So, why the year-based designation? The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates. That meant a new edition of ECMA-262 every year with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, or ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title. We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8 and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.

But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably is unnecessary for most situations.  The common name of the language really is JavaScript and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations.  The big change in the language and its specification occurred with  ES2015.  The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015.  So, here is my recommendation.  Generally you should  just say “JavaScript” meaning the language as it is used in browsers, Node.js, and other environments.  If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015 say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features say “modern JavaScript”.

Can You Use It Yet?

Except for modules almost all of ES2015-ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge). Also in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.

Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it.  Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime @std/esm enables use of ECMAScript modules in current Node releases.

Scoped Declarations (let and const)

The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but in fact are hoisted to the top of the current function, and hence each event handler defined in the loop uses the last value assigned to such variables.

Replacing var with let gives each iteration of the loop a distinct variable binding.  So each event handler captures different variables with the values that were current when the event handler was installed:
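Here is a minimal sketch of the hazard and the let fix. The buttons collection and the handler bodies are hypothetical stand-ins; the point is the difference between var and let:

```javascript
// With var there is a single binding of i for the whole function,
// so every handler installed in the loop sees the final value of i.
for (var i = 0; i < buttons.length; i++) {
  buttons[i].addEventListener("click", function () {
    console.log("clicked button " + i); // always logs buttons.length
  });
}

// With let each iteration gets its own binding of i, so every handler
// captures the index that was current when it was installed.
for (let i = 0; i < buttons.length; i++) {
  buttons[i].addEventListener("click", function () {
    console.log("clicked button " + i); // logs the expected index
  });
}
```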

The hardest part about adding block scoped declarations to ECMAScript was coming up with a rational set of rules for how the new declarations should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try to never do. But we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:
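As a rough sketch of how those rules play out in practice (my own summary rather than the committee’s official wording, and all names here are illustrative):

```javascript
function example() {
  var x = 1;
  var x = 2;        // OK: var may redeclare a var in the same scope

  let y = 1;
  // let y = 2;     // SyntaxError: let/const may not be redeclared in the same scope
  // var y = 3;     // SyntaxError: var may not redeclare a name already bound by let

  {
    let y = 10;     // OK: an inner block may shadow an outer let binding
    // console.log(z); // ReferenceError: a let binding cannot be read before its
    let z = 20;        // declaration (the "temporal dead zone")
  }

  if (true) {
    var w = 3;      // var ignores block boundaries: w belongs to the whole function
  }
  return w;         // 3
}
```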


Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early aughts. Firefox implemented block scoped let declarations (but not with exactly the same semantics as ES2015) in 2006. By the time TC39 started seriously working on what ultimately became ES2015, the keywords const and let had become so ingrained in our minds that we didn’t really consider any other alternatives. I regret that. In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed. A language with only let and var would have been simpler than what we ended up with using const, let, and var.

Arrow Functions

One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem that occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression. Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
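A minimal sketch of the hazard and the arrow-function fix (the counter object and its button are hypothetical):

```javascript
const counter = {
  count: 0,
  attach(button) {
    // With a function expression, `this` inside the handler is whatever the
    // event system supplies (for DOM events, the element), not the counter.
    button.addEventListener("click", function () {
      this.count++;               // wrong `this`: counter.count is never updated
    });

    // An arrow function has no `this` of its own; it uses the `this` of the
    // enclosing attach() call, so the handler updates the counter as intended.
    button.addEventListener("click", () => {
      this.count++;               // `this` is the counter object
    });
  }
};
```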

I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that the feature grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.

Modules

Actually, ES modules weren’t inspired by Node modules. But a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules. The big difference is that in the ES design (and Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code. With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules. Or stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon heuristics that analyze modules as if they had static interfaces. Such analysis can be wrong if the actual dynamic interface construction does things that the heuristics didn’t account for.
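A small sketch of the contrast (file names and the runtime conditions are made up): an ES module declares its interface statically, while a CommonJS/Node module builds its interface by running code.

```javascript
// math.mjs -- ES module: exports and imports are declared statically,
// so tools can check the dependency graph without executing anything.
export function square(x) { return x * x; }

// main.mjs
import { square } from "./math.mjs";   // a linter or linker can verify this link
console.log(square(3));

// math.js -- CommonJS (Node) module: the interface is constructed at run time.
function square(x) { return x * x; }
if (process.env.EXPOSE_SQUARE) {       // exports can depend on runtime state
  module.exports.square = square;
}

// main.js
const name = process.argv[2];          // even the module name can be computed,
const math = require("./" + name);     // so tools can only guess what is loaded
```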

The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that are more like what Node adopted.  But TC39 made an early decision that declarative static module interfaces were a better design, for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6” we might have taken the dynamic module interface path. I’m glad we didn’t and I think it is becoming clear that we made the right choice.

Promises

Strictly speaking, the legacy JavaScript language didn’t do async at all.  It was host environments such as  browsers and Node that defined the APIs that introduced async programming into JavaScript.

ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing incompatible promise libraries or of a browser-defined promise API that didn’t take other host environments into consideration.

The real benefit of ES2015 promises is that they provided a foundation for better async abstractions that do bury more of the BS within the runtime. Async functions, introduced in ES2017, are the “better way” to do async. In the pipeline for the near future is Async Iteration, which further simplifies a common async use case.
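A small sketch of that progression, using a hypothetical fetchJSON() helper that returns a promise:

```javascript
// ES2015 promises: the asynchronous steps are explicit in a .then() chain.
function loadDisplayName(id) {
  return fetchJSON("/users/" + id)
    .then(user => fetchJSON(user.profileUrl))
    .then(profile => profile.displayName);
}

// ES2017 async functions: the same logic reads like ordinary sequential code;
// the promise plumbing is handled by the language runtime.
async function loadDisplayNameAsync(id) {
  const user = await fetchJSON("/users/" + id);
  const profile = await fetchJSON(user.profileUrl);
  return profile.displayName;
}
```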

Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?

Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era. Concepts that were then realized in commercial products over the subsequent two decades.

So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.


Look Twenty Years Into the Future

If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.

Extrapolate Technologies

What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant  technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.

Focus on People

Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?

Create a Vision

Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn’t, then invention won’t be required to achieve it.

A Team of Dreamers and Doers

Inventing a future requires a team with a mixture of skills.  You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.

Prototype the Vision

Try to create a high fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.

Live Within the Prototype

It’s not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn’t? Use your experience with the prototype to iteratively refine the vision and the prototypes.

Make It Useful to You

You’re a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn’t there yet, keep iterating until it is.

Amaze the World

If you are successful in applying these patterns you will invent things that are truly amazing.  Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.


The chaotic early days of a new computing era is an extended period of product innovation and experimentation. But both the form and function of new products are still strongly influenced by the norms and transitional technologies of the waning era. New technologies are applied to new problems but often those new technologies are not yet mature enough to support early expectations. The optimal form-factors, conceptual metaphors, and usage idioms of the new era have yet to be fully explored and solidified. Looking back from the latter stages of a computing era, early era products appear crude and naive.

This is a great time to be a product innovator or an enthusiastic early adopter. But don’t get too comfortable with the present. These are still the early days of the Ambient Computing Era and the big changes are likely still to come.


How do we know when we are entering a new computing era? One signal is a reemergence of grassroots innovation. Early in a computing era most technical development resources are still focused on sustaining the mature applications and use cases from the waning era or on exploiting attractive transitional technologies.

The first explorers of the technologies of a new era are rebels and visionaries operating at the fringes. These explorers naturally form grassroots organizations for sharing and socializing their ideas and accomplishments. Such grassroots organizations serve as incubators for the technologies and leaders of the next era.

The Homebrew Computer Club was a grassroots group out of which emerged many leaders of the Personal Computing Era. Now, as the Ambient Computing Era progresses, we see grassroots organizations such as the Nodebots movement and numerous collaborative GitHub projects serving a similar role.


At the beginning of a new computing era, it’s fairly easy to sketch a long-term vision of the era. All it takes is knowledge of current technical trajectories and a bit of imagination. But it’s impossible to predict any of the essential details of how it will actually play out.

Technical, business, and social innovation is rampant in the early years of a new era. Chaotic interactions drive the churn of innovation. The winners that will emerge from this churn are unpredictable. Serendipity is as much a factor as merit. But eventually, the stable pillars of the new era will emerge from the chaos. There are no guarantees of success, but for innovators right now is your best opportunity for impacting the ultimate form of the Ambient Computing Era.


In the Ambient Computing Era humans live in a rich environment of communicating computer enhanced devices interoperating with a ubiquitous cloud of computer mediated information and services. We don’t even perceive most of the computers we interact with. They are an invisible and indispensable part of our everyday life.