
JavaScript: The First 20 Years  by Allen Wirfs-Brock and Brendan Eich

Our HOPL paper is done and submitted to the ACM for June 2020 publication in PACMPL (Proceedings of the ACM on Programming Languages) and presentation at the HOPL 4 conference, whenever it actually occurs. PACMPL is an open access journal, so there won’t be a paywall preventing people from reading our paper. Regardless, starting right now you can access the preprint at https://zenodo.org/record/3707007. But before you run off and start reading this 190-page “paper,” I want to talk a bit about HOPL.

The History of Programming Languages Conferences

HOPL is a unique conference, the foremost venue for the history of programming languages. HOPL-IV will be only the fourth HOPL; previous HOPLs occurred in 1978, 1993, and 2007. The History of HOPL web page provides an overview of the conference’s history and of which languages were covered at each of the three previous HOPLs. HOPL papers can be quite long. As the HOPL-IV call for papers says, “Because of the complex nature of the history of programming languages, there is no upper bound on the length of submitted papers—authors should strive for completeness.” HOPL papers are often authored by the original designers of an important language or by individuals who have made significant contributions to the evolution of a language.

As the HOPL-IV call for papers describes, writing a HOPL paper is an arduous multi-year process. Initial submissions were due in September 2018 and were reviewed by the program committee. For papers that made it through that review, a second major review draft was due in September 2019. The final “camera ready” manuscripts were due March 13, 2020. Along the way, each paper received extensive reviews from members of the program committee, and each paper was closely monitored by one or more program committee “shepherds” who worked very closely with the authors. One of the challenges for most of the authors was learning what it means to write a history paper rather than a traditional technical paper. Authors were encouraged to learn to think and write like professional historians.

I’ve long been a fan of HOPL and have read most of the papers from the first three HOPLs, but I’d never actually attended one. I first heard about HOPL-IV on July 7, 2017, when I received an invitation from Guy Steele and Richard Gabriel to serve on the program committee. I immediately checked whether PC members could submit and, because the answer was yes, I accepted. I knew that JavaScript needed to be included in a HOPL and that I was probably best situated to write about it. But my direct experience with JS only dates to 2007, so I knew I would need Brendan Eich’s input to cover the early history of the language, and he agreed to sign on as coauthor. My initial outline for the paper is dated July 20, 2017 and was titled “JavaScript: The First 25 Years” (we decided to cut it down to the first 20 years after the first round of reviews). The outline was seven pages long. I hadn’t looked at it since sometime in 2018, but looking at it today, I found it remarkably close to what is in the final paper. I knew the paper was going to be long, but I never thought it would end up at 190 pages. Many thanks to Richard Gabriel for repeatedly saying “don’t worry about the length.”

There is a lot I have to say about gathering primary source materials (like a real historian) but I’m going to save that for another post in a few days. So, if you’re interested in the history of JavaScript, start reading.

Our HOPL paper is done—all 190 pages of it. The preprint will be posted this week.  In the meantime, here’s a little teaser.

JavaScript: The First 20 Years
By Allen Wirfs-Brock and Brendan Eich

Introduction

In 2020, the World Wide Web is ubiquitous with over a billion websites accessible from billions of Web-connected devices. Each of those devices runs a Web browser or similar program which is able to process and display pages from those sites. The majority of those pages embed or load source code written in the JavaScript programming language. In 2020, JavaScript is arguably the world’s most broadly deployed programming language. According to a Stack Overflow [2018] survey it is used by 71.5% of professional developers, making it the world’s most widely used programming language.

This paper primarily tells the story of the creation, design, and evolution of the JavaScript language over the period of 1995–2015. But the story is not only about the technical details of the language. It is also the story of how people and organizations competed and collaborated to shape the JavaScript language which dominates the Web of 2020.

This is a long and complicated story. To make it more approachable, this paper is divided into four major parts—each of which covers a major phase of JavaScript’s development and evolution. Between each of the parts there is a short interlude that provides context on how software developers were reacting to and using JavaScript.

In 1995, the Web and Web browsers were new technologies bursting onto the world, and Netscape Communications Corporation was leading Web browser development. JavaScript was initially designed and implemented in May 1995 at Netscape by Brendan Eich, one of the authors of this paper. It was intended to be a simple, easy to use, dynamic language that enabled snippets of code to be included in the definitions of Web pages. The code snippets were interpreted by a browser as it rendered the page, enabling the page to dynamically customize its presentation and respond to user interactions.

Part 1, The Origins of JavaScript, is about the creation and early evolution of JavaScript. It examines the motivations and trade-offs that went into the development of the first version of the JavaScript language at Netscape. Because of its name, JavaScript is often confused with the Java programming language. Part 1 explains the process of naming the language, the envisioned relationship between the two languages, and what happened instead. It includes an overview of the original features of the language and the design decisions that motivated them. Part 1 also traces the early evolution of the language through its first few years at Netscape and other companies.

A cornerstone of the Web is that it is based upon non-proprietary open technologies. Anybody should be able to create a Web page that can be hosted by a variety of Web servers from different vendors and accessed by a variety of browsers. A common specification facilitates interoperability among independent implementations. From its earliest days it was understood that JavaScript would need some form of standard specification. Within its first year Web developers were encountering interoperability issues between Netscape’s JavaScript and Microsoft’s reverse-engineered implementation. In 1996, the standardization process for JavaScript was begun under the auspices of the Ecma International standards organization. The first official standard specification for the language was issued in 1997 under the name “ECMAScript.” Two additional revised and enhanced editions, largely based upon Netscape’s evolution of the language, were issued by the end of 1999.

Part 2, Creating a Standard, examines how the JavaScript standardization effort was initiated, how the specifications were created, who contributed to the effort, and how decisions were made.

By the year 2000, JavaScript was widely used on the Web but Netscape was in rapid decline and Eich had moved on to other projects. Who would lead the evolution of JavaScript into the future? In the absence of either a corporate or individual “Benevolent Dictator for Life,” the responsibility for evolving JavaScript fell upon the ECMAScript standards committee. This transfer of design responsibility did not go smoothly. There was a decade-long period of false starts, standardization hiatuses, and misdirected efforts as the ECMAScript committee tried to find its own path forward evolving the language. All the while, actual usage of JavaScript rapidly grew, often using implementation-specific extensions. This created a huge legacy of unmaintained JavaScript-dependent Web pages and revealed new interoperability issues. Web developers began to create complex client-side JavaScript Web applications and were asking for standardized language enhancements to support them.

Part 3, Failed Reformations, examines the unsuccessful attempts to revise the language, the resulting turmoil within the standards committee, and how that turmoil was ultimately resolved.

In 2008 the standards committee restored harmonious operations and was able to create a modestly enhanced edition of the standard that was published in 2009.  With that success, the standards committee was finally ready to successfully undertake the task of compatibly modernizing the language. Over the course of seven years the committee developed major enhancements to the language and its specification. The result, known as ECMAScript 2015, is the foundation for the ongoing evolution of JavaScript. After completion of the 2015 release, the committee again modified its processes to enable faster incremental releases and now regularly completes revisions on a yearly schedule.

Part 4, Modernizing JavaScript, is the story of the people and processes that were used to create both the 2009 and 2015 editions of the ECMAScript standard. It covers the goals for each edition and how they addressed evolving needs of the JavaScript development community. This part examines the significant foundational changes made to the language in each edition and important new features that were added to the language.

Wherever possible, the source materials for this paper are contemporaneous primary documents. Fortunately, these exist in abundance. The authors have ensured that nearly all of the primary documents are freely and easily accessible on the Web from reliable archives using URLs included in the references. The primary document sources were supplemented with interviews and personal communications with some of the people who were directly involved in the story. Both authors were significant participants in many events covered by this paper. Their recollections are treated similarly to those of the third-party informants.

The complete twenty-year story of JavaScript is long and so is this paper. It involves hundreds of distinct events and dozens of individuals and organizations. Appendices A through E are provided to help the reader navigate these details. Appendices A and B provide annotated lists of the people and organizations that appear in the story. Appendix C is a glossary that includes terms which are unique to JavaScript, or used with meanings that may differ from common usage within the computing community in 2020, or whose meaning might change or become unfamiliar to future readers. The first use within this paper of a glossary term is usually italicized and marked with a “g” superscript. Appendix D defines abbreviations that a reader will encounter. Appendix E contains four detailed timelines of events, one for each of the four parts of the paper.

Dave Winer recently blogged about his initial thoughts after dipping his toes into using some modern JavaScript features. He ends by suggesting that I might have some explanations and stories about the features he is using. I’ve given talks that cover some of this, and normally I might just respond via some terse tweets. But Dave believes that blog posts should be responded to by blog posts, so I’m going to try blogging back to him.

What To Call It?

The JavaScript language is defined by a specification maintained by the Ecma International standards organization. Because of trademark issues dating back to 1996, the specification could not use the name JavaScript, so the name ECMAScript was coined instead. Contrary to some myths, ECMAScript and JavaScript are not different languages. “ECMAScript” is simply the name used within the specification where it would really like to say “JavaScript”.

Standards organizations like to identify documents using numbers. The ECMAScript specification’s number is ECMA-262. Each time an update to the specification is approved as “the standard,” a new edition of ECMA-262 is released. Editions are sequentially numbered. Dave said “ES6 is the newest version of JavaScript”. So, what is “ES6”? ES6 is colloquial shorthand for “ECMA-262, Edition 6.” ES6 was published as a standard in 2015. The actual title of the ES6 specification is ECMAScript 2015 Language Specification, and the preferred shorthand name is ECMAScript 2015 or just ES2015.

So, why the year-based designation? The 6th edition of ECMA-262 took a long time to develop, arguably 15 years. As ES6 was approaching publication, TC39 (the Technical Committee within Ecma International that develops the ECMAScript specifications) already knew that it wanted to change its process in a way that enabled yearly maintenance updates. That meant a new edition of ECMA-262 every year with a new edition number. After a few years we would be talking about ES6, ES7, ES8, ES9, ES10, ES11, etc. Those numbers quickly lose any context for people who aren’t deeply involved in the standards development process. Who would know whether the current standard is ES7, or ES8, or ES9? Was some feature introduced in ES6 or ES7? TC39 couldn’t eliminate the actual edition numbers (standards organizations love their document numbers) but it could change the document title. We decided that TC39 would incorporate the year of release into the document’s title and encourage people to use the year when referring to a specific edition. So, the “newest version of JavaScript” is ECMA-262, Edition 8, and its title is ECMAScript 2017 Language Specification. Some people still refer to it as ES8, but the preferred shorthand name is ECMAScript 2017 or just ES2017.

But saying “ECMAScript” or mentioning specific ECMAScript editions is confusing to many people and probably unnecessary in most situations. The common name of the language really is JavaScript, and unless you are talking about the actual specification document you probably don’t need to utter “ECMAScript”. But you may need to distinguish between old versions of JavaScript and what is implemented by newer, modern implementations. The big change in the language and its specification occurred with ES2015. The subsequent editions make relatively small incremental extensions and corrections to what was standardized in 2015. So, here is my recommendation. Generally you should just say “JavaScript”, meaning the language as it is used in browsers, Node.js, and other environments. If you need to specifically talk about JavaScript implementations that are based upon ECMAScript specifications published prior to ES2015, say “legacy JavaScript”. If you need to specifically talk about JavaScript that includes ES2015 (or later) features, say “modern JavaScript”.

Can You Use It Yet?

Except for modules, almost all of ES2015–ES2017 is implemented in the current versions of all the major evergreen browsers (Chrome, Firefox, Safari, Edge), as well as in current versions of Node.js. If you need to write code that will run on non-evergreen browsers such as IE, you can use Babel to pre-compile modern JavaScript code into legacy JavaScript code.
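For example, here is a modern snippet next to the kind of legacy code a compiler like Babel produces for it (a hand-written approximation, not Babel’s actual output):

```javascript
// Modern JavaScript (ES2015 arrow function, ES2016 exponentiation):
const squares = [1, 2, 3].map(n => n ** 2);

// Roughly equivalent legacy JavaScript:
var squaresLegacy = [1, 2, 3].map(function (n) {
  return Math.pow(n, 2);
});
```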

Module support exists in all of the evergreen browsers, but some of them still require setting a flag to use it. Native ECMAScript module support will hopefully ship in Node.js in spring 2018. In the meantime, @std/esm enables the use of ECMAScript modules in current Node releases.

Scoped Declarations (let and const)

The main motivation for block scoped declarations was to eliminate the “closure in loop” bug hazard that many JavaScript programmers have encountered when they set event handlers within a loop. The problem is that var declarations look like they should be local to the loop body but are in fact hoisted to the top of the current function, and hence each event handler defined in the loop sees the last value assigned to such variables.
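A minimal sketch of the hazard (a hypothetical page full of buttons):

```javascript
var buttons = document.querySelectorAll('button');
for (var i = 0; i < buttons.length; i++) {
  buttons[i].onclick = function () {
    // By the time any click happens the loop has finished, so every
    // handler sees the same `i`, now equal to buttons.length.
    console.log('clicked button ' + i);
  };
}
```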

Replacing var with let gives each iteration of the loop a distinct variable binding, so each event handler captures a different variable holding the value that was current when the event handler was installed.
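Continuing the sketch, with let:

```javascript
const buttons = document.querySelectorAll('button');
for (let i = 0; i < buttons.length; i++) {
  buttons[i].onclick = function () {
    console.log('clicked button ' + i); // logs this iteration's `i`
  };
}
```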

The hardest part about adding block scoped declarations to ECMAScript was coming up with a rational set of rules for how the new declarations should interact with the already existing var declaration form. We could not change the semantics of var without breaking backwards compatibility, which is something we try never to do. But we didn’t want to introduce new WTF surprises in programs that use both var and let. Here are the basic rules we eventually arrived at:
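In brief, as they ended up in ES2015 (my summary, not the spec’s wording): var still hoists to the top of the enclosing function; let and const are scoped to the enclosing block and cannot be read before their declarations execute; and declaring the same name with both forms in the same scope is a syntax error. A sketch:

```javascript
function example() {
  console.log(v); // undefined: `var` hoists to the function top, as always
  var v = 1;

  {
    // console.log(b); // ReferenceError: `b` is in the "temporal dead zone"
    let b = 2;
    console.log(b); // 2
  }
  // console.log(b); // ReferenceError: `b` only exists inside its block

  // var v2; let v2; // SyntaxError: same name, both forms, one scope
  var v = 3;         // still legal: `var` may be redeclared with `var`
  console.log(v);    // 3
}
example();
```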


Most browsers, except for IE, had implemented const declarations (but without block scoping) starting in the early aughts. Firefox implemented block scoped let declarations (though not with exactly the same semantics as ES2015) in 2006. By the time TC39 started serious work on what ultimately became ES2015, the keywords const and let had become so ingrained in our minds that we didn’t really consider any other alternatives. I regret that. In retrospect, I think we should have used let in place of const for declaring immutable variable bindings because that is the most common use case. In fact, I’m pretty sure that many developers use let instead of const for variables they don’t intend to change, simply because let has fewer characters to type. If we had used let in place of const then perhaps var would have been adequate for the relatively rare cases where a mutable variable binding is needed. A language with only let and var would have been simpler than what we ended up with using const, let, and var.

Arrow Functions

One of the primary motivations for arrow functions was to eliminate another JavaScript bug hazard: the “wrong this” problem, which occurs when you capture a function expression (for example, as an event handler) but forget that this used inside the function expression will not be the same value as this in the context where you created the function expression. Conciseness was a consideration in the design of arrow functions, but fixing the “wrong this” problem was the real driver.
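A minimal sketch of the hazard and the fix (the counter object and the setInterval usage are my own illustration):

```javascript
const counter = {
  count: 0,
  startLegacy() {
    setInterval(function () {
      this.count++; // BUG: `this` here is not `counter`
    }, 1000);
  },
  startModern() {
    setInterval(() => {
      this.count++; // OK: an arrow has no `this` of its own, so it sees
                    // the `this` of startModern, i.e. `counter`
    }, 1000);
  },
};
```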

I’ve heard several JS programmers comment that at first they didn’t like arrow functions but that the functions grew on them over time. Your mileage may vary. Here are a couple of good articles that address arrow function reluctance.

Modules

Actually, ES modules weren’t inspired by Node modules, although a lot of work went into making them feel familiar to people who were used to Node modules. In fact, ES modules are semantically more similar to the Pascal modules that Dave remembers than they are to Node modules. The big difference is that in the ES design (and in Pascal modules) the interfaces between modules are statically defined, while in the Node modules design module interfaces are dynamically defined. With static module interfaces, the inter-dependencies between a set of modules are precisely defined by the source code prior to executing any code. With dynamic modules, the module interfaces cannot be fully understood without actually executing the code of the modules. Or, stated another way, ES module interfaces are declaratively defined while Node module interfaces are imperatively defined. Static module systems better support the creation of ahead-of-time tools such as accurate module dependency linters or module linkers. Such tools for dynamic module interfaces usually depend upon heuristics that analyze modules as if they had static interfaces. That analysis can be wrong if the actual dynamic interface construction does things the heuristics didn’t account for.
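To make the contrast concrete, here is a minimal sketch (the file names and the FAST_MATH flag are hypothetical):

```javascript
// ES module (static): the interface is fixed by the source text, so
// tools can determine inter-module dependencies before any code runs.

// math.mjs
export function square(n) {
  return n * n;
}

// main.mjs
import { square } from './math.mjs';
console.log(square(4)); // 16
```

```javascript
// Node/CommonJS module (dynamic): the interface is an ordinary object
// built while the module's code executes, so it cannot be fully known
// without running that code.

// math.js
const api = {};
if (process.env.FAST_MATH) {
  api.square = n => n * n; // exported only under this runtime condition
}
module.exports = api;
```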

The work on the ES module design actually started before the first release of Node. There were early proposals for dynamic module interfaces that are more like what Node adopted. But TC39 made an early decision that declarative static module interfaces were the better design for the long term. There has been much controversy about this decision. Unfortunately, it has created issues for Node which have been difficult for them to resolve. If TC39 had anticipated the rapid adoption of Node and the long time it would take to finish “ES6,” we might have taken the dynamic module interface path. I’m glad we didn’t, and I think it is becoming clear that we made the right choice.

Promises

Strictly speaking, the legacy JavaScript language didn’t do async at all.  It was host environments such as  browsers and Node that defined the APIs that introduced async programming into JavaScript.

ES2015 needed to include promises because they were being rapidly adopted by the developer community (including by new browser APIs) and we wanted to avoid the problem of competing, incompatible promise libraries or of a browser-defined promise API that didn’t take other host environments into consideration.

The real benefit of ES2015 promises is that they provided a foundation for better async abstractions that bury more of the BS within the runtime. Async functions, introduced in ES2017, are the “better way” to do async. In the pipeline for the near future is Async Iteration, which further simplifies a common async use case.
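As a sketch of why async functions are the “better way” (fetchUser here stands in for any promise-returning API):

```javascript
// ES2015: chaining promises explicitly.
function getUserName(fetchUser) {
  return fetchUser()
    .then(user => user.name)
    .catch(() => 'anonymous');
}

// ES2017: an async function with the same behavior; the promise
// plumbing is buried in the runtime and the control flow reads like
// ordinary synchronous code.
async function getUserNameAsync(fetchUser) {
  try {
    const user = await fetchUser();
    return user.name;
  } catch (e) {
    return 'anonymous';
  }
}
```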

Alan Kay famously said “The best way to predict the future is to invent it.” But how do we go about inventing a future that isn’t a simple linear extrapolation of the present?

Kay and his colleagues at Xerox PARC did exactly that over the course of the 1970s and early 1980s. They invented and prototyped the key concepts of the Personal Computing Era, concepts that were then realized in commercial products over the subsequent two decades.

So, how was PARC so successful at “inventing the future”? Can that success be duplicated or perhaps applied at a smaller scale? I think it can. To see how, I decided to try to sketch out what happened at Xerox PARC as a pattern language.


Look Twenty Years Into the Future

If your time horizon is short you are doing product development or incremental research. That’s all right; it’s probably what most of us should be doing. But if you want to intentionally “invent the future” you need to choose a future sufficiently distant to allow time for your inventions to actually have an impact.

Extrapolate Technologies

What technologies will be available to us in twenty years? Start with the current and emerging technologies that already exist today. Which relevant  technologies are likely to see exponential improvement over the next twenty years? What will they be like as they mature over that period? Assume that as the technical foundation for your future.

Focus on People

Think about how those technologies may affect people. What new human activities do they enable? Is there a human problem they may help solve? What role might those technologies have in everyday life? What could be the impact upon society as a whole?

Create a Vision

Based upon your technology and social extrapolations, create a clearly articulated vision of your desired future. It should be radically different from the present in some respects. If it isn’t, then invention won’t be required to achieve it.

A Team of Dreamers and Doers

Inventing a future requires a team with a mixture of skills.  You need dreamers who are able to use their imagination to create and refine the core future vision. You also need doers who are able to take ill-defined dreams and turn them into realities using available technologies. You must have both and they must work closely together.

Prototype the Vision

Try to create a high-fidelity functional approximation of your vision of the future. Use the best of today’s technology as stand-ins for your technology extrapolations. Remember that what is expensive and bulky today may be cheap and tiny in your future. If the exact technical combination you need doesn’t exist today, build it.

Live Within the Prototype

It’s not enough to just build a prototype of your envisioned future. You have to use the prototype as the means for experiencing that future. What works? What doesn’t? Use your experience with the prototype to iteratively refine the vision and the prototypes.

Make It Useful to You

You’re a person who hopes to live in this future, so prototype things that will be useful to you. You will know you are on to something when your prototype becomes an indispensable part of your life. If it isn’t there yet, keep iterating until it is.

Amaze the World

If you are successful in applying these patterns you will invent things that are truly amazing.  Show those inventions to the world. Demonstrate that your vision of the future is both compelling and achievable. Inspire other people to work towards that same future. Build products and businesses if that is your inclination, but remember that inventing the future takes more than a single organization or project. The ultimate measure of your success will be your actual impact on the future.


The chaotic early days of a new computing era are an extended period of product innovation and experimentation. But both the form and function of new products are still strongly influenced by the norms and transitional technologies of the waning era. New technologies are applied to new problems, but often those new technologies are not yet mature enough to support early expectations. The optimal form factors, conceptual metaphors, and usage idioms of the new era have yet to be fully explored and solidified. Looking back from the latter stages of a computing era, early-era products appear crude and naive.

This is a great time to be a product innovator or an enthusiastic early adopter. But don’t get too comfortable with the present. These are still the early days of the Ambient Computing Era and the big changes are likely still to come.


How do we know when we are entering a new computing era? One signal is a reemergence of grassroots innovation. Early in a computing era most technical development resources are still focused on sustaining the mature applications and use cases from the waning era or on exploiting attractive transitional technologies.

The first explorers of the technologies of a new era are rebels and visionaries operating at the fringes. These explorers naturally form grassroots organizations for sharing and socializing their ideas and accomplishments. Such grassroots organizations serve as incubators for the technologies and leaders of the next era.

The Homebrew Computer Club was a grassroots group out of which emerged many leaders of the Personal Computing Era. Now, as the Ambient Computing Era progresses, we see grassroots organizations such as the Nodebots movement and numerous collaborative GitHub projects serving a similar role.


At the beginning of a new computing era, it’s fairly easy to sketch a long-term vision of the era. All it takes is knowledge of current technical trajectories and a bit of imagination. But it’s impossible to predict any of the essential details of how it will actually play out.

Technical, business, and social innovation is rampant in the early years of a new era. Chaotic interactions drive the churn of innovation. The winners that will emerge from this churn are unpredictable. Serendipity is as much a factor as merit. But eventually, the stable pillars of the new era will emerge from the chaos. There are no guarantees of success, but for innovators right now is your best opportunity for impacting the ultimate form of the Ambient Computing Era.


In the Ambient Computing Era humans live in a rich environment of communicating computer enhanced devices interoperating with a ubiquitous cloud of computer mediated information and services. We don’t even perceive most of the computers we interact with. They are an invisible and indispensable part of our everyday life.


A transitional technology is a technology that emerges as a computing era settles into maturity and which is a precursor to the successor era. Transitional technologies are firmly rooted in the “old” era but also contain important elements of the “new” era. It’s easy to think that what we experience using transitional technologies is what the emerging era is going to be like. Not likely! Transitional technologies carry too much baggage from the waning era. For a new computing era to fully emerge we need to move “quickly through” the transition period and get on with the business of inventing the key technologies of the new era.