
In 2011 I wrote a blog post where I presented the big-picture model I use for thinking about what some people were calling the “post-PC computing era”. Since then I’ve written other related posts, given talks, and had conversations with many people. Time appears to be validating my model, so it seems like a good time to do an updated version of the original post.

There seems to be broad agreement that the Personal Computing Era has ended. But what does that really mean? What happens when one era ends and another begins? What is a “computing era”?

e·ra /ˈirə,ˈerə/



  1. a long and distinct period of history with a particular feature or characteristic.

Digital computing emerged in the years immediately following World War II, and by the early 1950s computers started to become commercially available. So the first era of computing must have started about 1950. But how many other eras have passed since then? There are many ways that people slice up the history of modern computing. By the late 1960s the electronics foundation of computers had reached its “3rd generation” (vacuum tubes, transistors, integrated circuits). Some people consider that the emergence of software technologies such as time-sharing, relational databases, or the web corresponds to distinct eras.

I don’t think any of those ways of slicing up computing history represent periods that are long enough or distinctive enough to match the dictionary definition of “era” given above. I think we’ve only passed through two computing eras and have just recently entered the third. This picture summarizes my perspective:


The most important idea from this picture is that, in my view, there have only been three major “eras” of computing. Each of these eras spans thirty or more years and represents a major difference in the primary role computers play in human life and society. The three eras also correspond to major shifts in the dominant form of computing devices and software. What is pictured is a conceptual timeline, not a graph of any actual data. The y-axis is intended to represent something like the overall impact of computing upon average individuals, but it can also be seen as an abstraction of other relevant factors such as the socioeconomic impact of computing technologies.

The first era was the Corporate Computing Era. It was focused on using computers to enhance and empower large organizations such as commercial enterprises and governments. Its applications were largely about collecting and processing large amounts of schematized data. Databases and transaction processing were key technologies.

During this era, if you “used a computer” it would have been in the context of such an organization. However, the concept of “using a computer” is anachronistic to that era. Very few individuals had any direct contact with computing, and for most of those who did, the contact was only via corporate information systems that supported some aspects of their jobs.

The Corporate Computing Era started in the earliest days of computing in the 1950s, and obviously corporate computing still is and will continue to be an important sector of computing. This is an important aspect of my model of computing eras. When a new era emerges, the computing applications and technologies of the previous eras don’t disappear. They continue and probably even grow. However, the overall societal impact of those previous forms of computing becomes relatively small in comparison to the scope of computing’s impact in the new era.

Around 1980 the primary focus of computing started to rapidly shift away from corporate computing. This was the beginning of the Personal Computing Era. The Personal Computing Era was about using computers to enhance and empower individuals. Its applications were largely task-centric and focused on enabling individuals to create, display, manipulate, and communicate relatively unstructured information. Software applications such as word processors, spreadsheets, graphic editors, email, games, and web browsers were key technologies.

We are currently still in the early days of the third era. A change in the dominant form of computing is occurring that will be at least as dramatic as the transition from the Corporate Computing Era to the Personal Computing Era. This new era of computing is about using computers to augment the environment within which humans live and work. It is an era of smart devices, perpetual connectivity, ubiquitous information access, and computer-augmented human intelligence.

We don’t yet have a universally accepted name for this new era. Some common names are post-PC, pervasive, or ubiquitous computing. Others focus on specific technical aspects of the new era and call it cloud, mobile, or web computing. But none of these terms seem to capture the breadth and essence of the new era. They are either too focused on a specific technology or on something that is happening today rather than something that characterizes a thirty-year span of time. The name that I prefer, and which seems to be gaining some traction, is “ambient computing”.

am·bi·ent /am-bee-uh nt/



  1. of the surrounding area or environment
  2. completely surrounding; encompassing

In the Ambient Computing Era humans live in a rich environment of communicating computer enhanced devices interoperating with a ubiquitous cloud of computer mediated information and services. We don’t even perceive most of the computers we interact with. They are an invisible part of our everyday things and activities. In the Ambient Computing Era we still have corporate computing and task-oriented personal computing style applications. But the defining characteristic of this era is the fact that computing is shaping the actual environment within which we live and work.

The early years of a new era are an exciting time to be involved in computing. We all have our immediate goals, and much of the excitement and opportunity is focused on shorter-term objectives. But while we work to create the next great app, machine learning model, smart IoT device, or commercially successful site or service, we should occasionally step back and think about something bigger: What sort of ambient computing environment do we want to live within, and is our current work helping or hindering its emergence?

I’ve written before about a transition period to a new era of computing. Earlier this month I gave a keynote talk at the Front-Trends conference in Warsaw. In preparing this talk I discovered a very interesting graphic created by Asymco for an article about the Rise and Fall of Personal Computing. It was so interesting that I used it to frame my talk. Here is what my first slide looked like, incorporating the Asymco visualization:


This graph shows the market share of various computing platforms since the very first emergence of what can be characterized as a personal computer. I urge you to read the Asymco article if you are interested in the details of this visualization. Keep in mind that it is showing percentage share of a rapidly expanding market. Over on the left edge we are talking about a total worldwide computer population that could be measured in the low hundreds of thousands. On the right we are talking about a market size in the high hundreds of millions of computers.

For my talk, I used the graph as an abstraction of the entire Personal Computing Era. The important thing was that there was a period of around ten years before the Windows/Intel PC platform really began to dominate. I remember those days. I was a newly graduated software engineer, and those were exciting times. We knew something big was happening; we just didn’t know for sure what it was or how it was all going to shake out. Each year there were one or more new technologies and companies that seemed to be establishing themselves as the dominant platform. But then something would change, and within a year or two somebody else seemed to be winning. It wasn’t until the latter part of the 1980s that the Wintel platform could be identified as the clear winner. That was the beginning of a 20+ year period that, based upon this graph, I’m calling the blue bubble.

While many interesting things (for example, the Web) happened during the period of the blue bubble, overall it was a much less exciting time to be working in the software industry. For most of us, there was no option other than to work within the confines of the Wintel platform. There were good aspects to this, as a fixed and relatively stable platform provided a foundation for the evolution of PC-based applications, and ultimately the applications were what was most important from a user perspective. But as a software developer, it just wasn’t the same as that earlier period before the bubble formed. To those of us who were around for the first decade of the PC era, there were just too many constraints inside the bubble. There were still plenty of technical challenges, but there wasn’t the broad sense that we were all collectively changing the world. But then, the blue bubble became normal. Until very recently, most active software developers had never experienced a professional life outside that bubble.

The most important thing for today is what is happening on the right-hand side of this graph. Clearly, the big blue bubble is coming to an end. This coincides with what I call the transition from the Personal Computing Era to the Ambient Computing Era. Many people think we are already inside the next blue bubble. That Apple, or Google, or maybe even “the Web” has already won platform dominance for the next computing era. Maybe so, but I doubt it. Here is a slide I used at the end of my recent talk:


It’s the same graphic. I only removed the platform legend and changed the title and timeline. The key point is that we probably aren’t yet inside the next blue bubble. Instead, we are most likely in a period that is more similar to the first ten years of the PC Era. It’s a time of chaotic transition. We don’t know for sure which companies and technologies map to the colors in the graph. We also don’t know the exact time scale; 2013 isn’t necessarily equivalent to 1983. It’s probably the case that the dominant platform of the Ambient Computing Era is not yet established. The ultimate winner may already be out there along with several other contenders. We just don’t know with certainty how it’s all going to come out.

Things are really exciting again. Times of chaos are times of opportunity. The constraints of the last blue bubble are gone and the next blue bubble isn’t set yet. We all need to drop our blue bubble habits and seize the opportunity to shape the new computing era. It’s a time to be aggressive and to take risks. It’s a time for new thinking and new perspectives.  This is the best of times to be a software developer. Don’t get trapped by blue bubble thinking and don’t wait too long. The window of opportunity will probably only last a few years before the next blue bubble is firmly set. After that it will be decades  until the next such opportunity.

We’re all collectively creating a new era of computing.  Own it and enjoy the experience!

My plan is for this to be the first in a series of posts that talk about specific medium term challenges facing technologists as we move forward in the Ambient Computing Era.  The challenges will concern things that I think are inevitable but which may not be getting enough attention right now. But with attention, we should see significant progress towards solutions over the next five years.

Here’s the first challenge. I have too many loosely coordinated digital devices and digital services. Every day, I spend hours using my mobile phone, my tablet, and my desktop Mac PC. I also regularly use a laptop, a FirefoxOS test phone, and my DirecTV set-top box/DVR. Less regularly, I use the household iPad, an Xbox/Kinect in our family room, and a couple of Denon receivers with network access. Then, of course, there are various other active digital devices like cameras, a Fitbit, runner’s watches, an iPod Shuffle, etc. My car is too old to have much user-facing intelligence, but I’m sure that won’t be the case with the next one.

Each of these devices is connected (at least indirectly) to the Internet, and most of them have some sort of web browser. Each of them locally holds some of my digital possessions. I try to configure and use services like Dropbox and Evernote to make sure that my most commonly used possessions are readily available on all my general-purpose devices, but sometimes I still resort to emailing things to myself.

I also try to similarly configure all my MacOS devices and all my Android devices. But even so, everything I need isn’t always available on the device I’m using at any given moment, even in cases where the device is perfectly capable of hosting it.

Even worse, each device is different in non-essential but impossible-to-ignore ways. I’m never just posting a tweet or reading my favorite news streams. I’m always doing it on my tablet, or at my desk, or with my phone, and the experience is different on each of them in some ways. In every case, I have to focus as much attention on the device I’m physically using and how it differs from my other devices as I do on the actual task I’m interested in accomplishing. And it’s getting worse. Each new device I acquire may give me some new capability, but it also adds to the chaos.

Now, I have the technical skills that enable me to deal with this chaos and get a net positive benefit from most of these devices. But it isn’t where I really want to be investing my valuable time.

I simply want to think about all my “digital stuff” as things that are always there and always available, no matter where I am or which device I’m using. When I get a new device, I don’t want to spend a day installing apps and configuring it. I just want to identify myself and have all my stuff immediately available. I want my stuff to look and operate familiarly. The only differences should be those that are fundamental to the specific device and its primary purpose. My attention should always be on my stuff. Different devices and different services should fade into the background. “Digital footprint” was the term I used in “A Cloud on Your Ceiling” to refer to all this digital stuff.

Is any progress being made towards achieving this? Cloud hosted services from major industry players such as Google and Apple may feel like they are addressing some of these needs. But, they generally force you to commit all your digital assets to a single corporate caretaker and whatever limitations they choose to impose upon you.  Sometimes such services are characterized as “digital lockers”.  That’s not really what I’m looking for. I don’t want to have to go to a locker to get my stuff; I just want it to appear to always be with me and under my complete control.

The Locker Project is something that I discovered while researching this post that sounded like relevant work, but it appears to be moribund. However, it led me to discover an inspirational short talk by one of its developers, Jeremie Miller, who paints a very similar vision to mine. The Locker Project appears to have morphed into the Singly AppFabric product, which seems to be a cloud service for integrating social media data into mobile apps. This is perhaps a step in the right direction, but not really the same vision. I suspect there is a tension between achieving the full vision and the short-term business realities of a startup.

So, that’s my first Ambient Computing challenge. Create the technology infrastructure and usage metaphors that make individual devices and services fade into the background and allow us all to focus our attention on actually living our digitally enhanced lives.

I’m interested in hearing about other relevant projects that readers may know about and other challenges you think are important.

(Photo by “IndyDina with Mr. Wonderful”, Creative Commons Attribution License. Sculpture by Tom Otterness)

Recently a friend of mine asked this question. His theory was that the open web, as described in the Mozilla Mission, no longer mattered because much of what we used to do using web browsers is now rapidly shifting to “apps”. Why worry about the open web if nobody is going to be using it?

To me, this is really a question about what do we mean by “the web”. If by  “the web” we are just referring to the current worldwide collection of information made available by http servers and accessed most commonly using desktop browsers, then maybe he’s right.  While I use it all the time, I don’t think very much about the future of that web. Much about it will surely  change over the next decade. The 1995 era technologies do not necessarily need to be protected and nourished. They aren’t all good.

What I think about is the rapidly emerging pervasive and ambient information ecology that we are living within. This includes every digital device we regularly interact with. It includes devices that provide access to information but also devices that collect information. Some devices are “mobile”; others are built into the physical infrastructure that surrounds us. It includes the sort of high-production-value creative works that we see today “on the web” and still via pre-web media. But it also includes every trivial digital artifact that I create while going about my daily life and work.

Is this “the web”? I’m perfectly happy to call it that. It certainly encompasses the web as we know it today. But we need to be careful using that term to ensure that our thinking and actions aren’t over-constrained by our perception of yesterday’s “web”. This is why I like to tell people we are still in the very early stages of the next digital era. I believe that the web we have today is, at most, the Apple II or TRS-80 of this new era. If we are going to continue to use “the web” as a label, then it needs to represent a 20+ year vision that transcends http and web browsers.

Technology generally evolves incrementally. Almost all of us spend almost all of our time working on things that are just “tactical” from the perspective of a twenty-year vision. We are responding to what is happening today and working for achievement and advantage over the next 1-3 years. I think that the shift from “websites” to “apps” that my friend mentioned is just one of these tactical technology evolutionary vectors, a point on the road to the future. The phenomenon isn’t necessarily any more or less important than other point-in-time alternatives such as Flash vs. HTML or iOS vs. Android. I think it would be a mistake to assume that “apps” are a fundamental shift. We’ll know better in five years.

While everybody has to be tactical, a long-term vision still has a vital role. A vision of a future that we yearn to achieve is an important influence upon our day-to-day tactical work. It’s the star that we steer by. A personal concern of mine is that we are severely lacking in this sort of long-term vision of “the web”. That’s why my plan for this year is to write more posts like “A Cloud on Your Ceiling” that explore some of these longer-term questions. I encourage you to also spend some time thinking long term. What sort of digitally enhanced world do you want to be living in twenty years from now? What are you doing to help us get there?

(Photo by “mind_scratch”, Creative Commons Attribution License)

I’ve previously written that we are in the early stages of a new era of computing that I call “The Ambient Computing Era”.  If we are truly entering a new era then it is surely the case that the computers we will be using twenty or more years from now will exist in forms that are quite unlike the servers, desktop PCs, phones, and tablets we use today.  We can at best speculate or dream about what that world may be like.  But some of my recent readings about emerging technologies have inspired me to think about how things might evolve.

This week I learned about “WiGig”, which is WiFi operating on 60GHz radio frequencies. WiGig router chip sets already exist and support a theoretical throughput of 7Gbps. The catch is that 60GHz radio waves won’t penetrate walls or furniture. So if you want to have really high-bandwidth wireless communications from something in your lap or on your sleeve to that wall-size display, you are probably going to want to hang a WiGig router on your ceiling. If your room is large or has a lot of furniture, you may need several.
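To put that theoretical 7Gbps number in rough perspective, here is a quick back-of-the-envelope calculation (the Blu-ray payload is just an illustrative assumption; real-world throughput would be well below the theoretical maximum):

```javascript
// Back-of-the-envelope math for WiGig's 7 Gbps theoretical throughput.
const gigabitsPerSecond = 7;
const bytesPerSecond = gigabitsPerSecond * 1e9 / 8; // 875,000,000 bytes/s

// Hypothetical workload: moving a 25 GB single-layer Blu-ray image.
const payloadBytes = 25 * 1e9;
const transferSeconds = payloadBytes / bytesPerSecond;

console.log(transferSeconds.toFixed(1) + " seconds"); // ≈ 28.6 seconds
```

In other words, at the theoretical rate an entire movie crosses the room in under half a minute, which is the kind of bandwidth that makes an untethered wall-size display plausible.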

This got me thinking about what other sorts of intelligent devices we may be hanging on our ceilings. The first thing that came to mind was LED lighting. Until very recently, I was one of those people who would make jokes about assigning IP addresses to light bulbs. But recently I was at a friend’s house where I saw exactly that: network-addressable smart LED light bulbs. It turns out that a little intelligence is actually useful in producing optimal room light with LEDs, and when you have digital intelligence you really want to control it with something more sophisticated than a simple on/off switch. So get ready for lighting with IPv6 addresses. But they probably won’t be bulb shaped.

Both WiGig routers and networked LED room lighting are still too expensive for wide adoption, but like all solid-state electronic devices we can expect their actual cost to approach zero over the next twenty or so years. So there we have at least two kinds of intelligent devices that we probably will have hanging from our ceilings. But will they really be separate devices? I could easily imagine a standardized ceiling panel, let’s say a half meter square, consisting of LED lighting, a WiGig router, and other room electronics. A standardized form factor would allow our homes and offices to be built (or updated) to include the infrastructure (power, external connectivity, physical mounting) that lets us easily service and evolve these devices. In honor of one of the most important web memes, I suggest that we call such a panel a “CAT”, or Ceiling Attached Technology.

So, what other functionality might be integrated on a CAT? Certainly we can expect sensors including cameras that allow the panel to “see” into the room. A 256 or 512 core computing cluster with several terabytes of local storage also seems very plausible.  Multiple CATs in the same or adjoining rooms would presumably participate in a mesh network that ultimately links to the rest of the digital world via high-speed wired or wireless “last mile” connections. Basically, our ceilings and walls could become what we think of today as “cloud” data centers.

What sort of computing would be taking place in those ceiling clouds? One possibility is that our entire digital footprint (applications, services, active digital archives) might migrate to and be cached in the CATs that are physically closest to us.  As we move about or from location to location, our digital footprint just follows us.  No need to make long latency round trips to massive data centers in eastern Oregon or contend for resources with millions of other active users.

Of course, there are tremendous technology challenges standing between what we have today and this vision. How do we maintain the integrity of our digital assets as they follow us around from CAT to CAT? How do we keep them secure and maintain our personal privacy? What programs get to migrate into our CATs? How do we make sure they’re not malicious? How do we keep our homes from becoming massive botnets? That’s why I think it’s important for some of us to start thinking about where this new computing era is heading and how we want to shape it. We can start inventing the ambient computing world just like Alan Kay and his colleagues at Xerox PARC started in the early 1970s with the vague concept of a “Dynabook” and went on to invent most of the foundational concepts that define personal computing.

If you find yourself thinking about “Post-PC Computing” keep in mind that the canonical computer twenty years from now will probably look nothing like a cell phone or tablet. It may look like a ceiling tile. I hope this warps your thinking.

(Photo by “Suicine”, Creative Commons Attribution License)

In my post, The Browser is a Transitional Technology, I wrote that I thought  web browsers were really Personal Computing Era applications and that browsers were unlikely to continue to exist as such as we move deeply into the Ambient Computing Era. However,  I expect browser technologies to have a key role in the Ambient Computing Era. In Why Mozilla, I talked about the inevitable emergence of a universal application platform for the Ambient Era and how open web technologies could serve that role. Last month I gave a talk where I tried to pull some of these ideas together:

In slides 14-19, I talked about how, when you remove the PC application facade from a modern browser, you have essentially an open web-based application platform that is appropriate for all classes of ambient computing devices.

Today Mozilla announced an embryonic project that is directed towards that goal. B2G (Boot to Gecko) is about showing that the open web application platform can be the primary platform for running native-grade applications. As the project page says:

Mozilla believes that the web can displace proprietary, single-vendor stacks for application development. To make open web technologies a better basis for future applications on mobile and desktop alike, we need to keep pushing the envelope of the web to include — and in places exceed — the capabilities of the competing stacks in question.

One of the first steps is to directly boot devices into running Gecko, Mozilla’s core browser engine.  Essentially the devices will boot directly into the browser platform, but without the baggage and overhead of a traditional PC based web browser.  This is essentially the vision of slide 17 of my presentation.  The “G” in B2G comes from the use of Gecko, but the project is really about the open web. Any other set of browser technologies could potentially be used in the same way.  As the project web site says: “We aren’t trying to have these native-grade apps just run on Firefox, we’re trying to have them run on the web.”

This project is just starting, so nobody yet knows all the details or how successful it will be. But, like all Mozilla projects, it will take place in the open and with an open invitation for your involvement.

Recently I’ve had some conversations with colleagues about how Web IDL is used to specify the APIs that browsers support for web applications. I think our discussions raised some interesting questions about the fundamental nature of the web app platform, so I wanted to raise those same questions here.

Basically, is the browser web app platform an application framework, or is it really something that is more like an operating system? Stated more concretely, is the web app platform most similar to the Java or .Net platforms, or is it more similar to Linux or Windows? In the long term this is probably a very important question. It makes a difference in the sort of capabilities that can be made available to a web app and also in the integrity expectations concerning the underlying platform.

In a framework, client code directly integrates with and extends the platform code. This allows client code to do very powerful things, but the cost is that client code can do things that result in platform-level errors or even failures. Modern frameworks are pretty much all defined in terms of object-oriented concepts because those concepts permit the client extensibility that is the primary motivation for building a framework. Frameworks generally have to trust their clients because they frequently have to pass control into client code, and there is no way they can anticipate or validate everything client code might do. Frameworks are great from the perspective of what they allow developers to create; they are less great in terms of robustness and integrity.
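The framework pattern can be sketched in a few lines of JavaScript (all names here are hypothetical, invented for illustration): the platform invokes a client override directly, so it has no way to guard against whatever that override does.

```javascript
// Platform-provided base class; render() is a hook meant to be overridden.
class Widget {
  render() { return "<widget/>"; }
}

// Platform code that must call directly into client code. A throwing
// client override would propagate up and take the whole loop down.
function platformRenderAll(widgets) {
  return widgets.map(w => w.render()).join("");
}

// Client code extends the platform; the platform cannot validate
// what this render() does before executing it.
class ClientWidget extends Widget {
  render() { return "<client/>"; }
}

const page = platformRenderAll([new Widget(), new ClientWidget()]);
```

The power and the fragility come from the same place: `platformRenderAll` transfers control to arbitrary client code.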

In an operating system, client code almost never directly integrates with the platform code. Client code is limited to a fixed set of actions that can be requested via a fairly simple system call interface. In the absence of platform bugs, client code can’t cause platform-level errors or crash the platform, because the platform carefully validates every aspect of every system call request and never directly executes untrusted client code. Operating systems don’t trust their clients. Successful operating system APIs are pretty much all expressed in terms of procedure calls that only accept scalars and simple structs as arguments, because such arguments can be fully validated before the platform uses them to perform any action. Operating systems are great from a robustness and integrity perspective, but they don’t offer much direct help to clients that need to do complex things.
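By contrast, an OS-style call boundary looks roughly like this (again, a hypothetical sketch, not any real system's API): the platform only ever inspects plain data, fully validating each request before acting on it, and never executes client code.

```javascript
// A fixed set of requests with scalar/simple-struct arguments.
const syscalls = {
  open(req) {
    // Validate every aspect of the request before doing any work.
    if (typeof req.path !== "string" || typeof req.mode !== "number") {
      return { error: "EINVAL" };
    }
    return { fd: 3 }; // stand-in for a real file descriptor
  }
};

// The trap handler dispatches on a name; unknown requests are rejected,
// and client code is never called back into.
function trap(name, args) {
  const handler = syscalls[name];
  if (!handler) return { error: "ENOSYS" };
  return handler(args);
}

const ok = trap("open", { path: "/tmp/x", mode: 0 });
const bad = trap("open", { path: 42, mode: 0 });
```

Because arguments are just data, a malformed request yields an error code rather than a platform crash, which is exactly the integrity property frameworks give up.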

Historically, there have been various attempts to create operating systems that use framework-style object-oriented client interfaces. All the major attempts at doing this that I am aware of have been dismal failures. Taligent and Windows Longhorn are two notorious examples. The problem seems to be that the power and extensibility that come with framework-style interfaces are in direct conflict with the robustness and integrity requirements of an OS. It is very difficult, and perhaps impossible, to find a compromise that provides sufficient power, extensibility, robustness, and integrity all at the same time. Systems like Taligent and Longhorn also had significant durability issues, because one of the ways they tried to balance power and integrity was by describing their APIs in terms of static recursive object-oriented typing, which is very hard to evolve in a backwards-compatible fashion over multiple versions.

This begins to sound a lot like the way Web IDL is being used to describe web app APIs. The web app platform has framework-style APIs, but browser implementers would like to have OS-style system integrity and robustness.

One way OSes have addressed this issue is by using a kernel. The kernel is a small part of the overall platform that is very robust, has high integrity, and exposes very stable APIs. The majority of the platform is outside the kernel. In general, bugs or misuse of non-kernel code may crash an application, but they can’t crash the entire system. One way to think about large application frameworks like Java and .Net is that they are the low-integrity but high-leverage outermost layer of such a kernelized design.

So what is the web app platform? Is it a framework or is it an OS? I think it needs to be designed mostly like a framework. However, there probably is a kernel of functionality that needs to be treated more like an OS. That kernel is not yet well identified. It probably needs to be. Otherwise, the designers of the web application platform run the risk of going down the same dead-end paths that were taken by the designers of “object-oriented” OSes like Taligent and Longhorn.

(Photo Attribution Some rights reserved by Pink Sherbet Photography)

In my last couple of posts I introduced the idea of using Mirrors for JavaScript reflection and took a first look at the introspection interfaces of my jsmirrors prototype. In this post I’m going to look at the other reflection interfaces in jsmirrors and how they are mixed together to provide various levels of reflection privilege.

When building this prototype I knew that I wanted to have a number of separable sets of reflection capabilities that I could mix and match in various ways. I also knew that the implementation was likely to change several times as I experimented with the prototype. I wanted to make sure that, as I evolved the implementation, I could keep track of what belonged in each separable piece. The way I ultimately accomplished this was by maintaining a file of interface definitions that is separate from the actual code that implements jsmirrors. The interface specifications are contained in the file mirrorsInterfaceSpec.js. I look at the interface file when I need to remind myself how to use one of the specific reflection interfaces and as a specification as I make changes to the implementation. Also, whenever I perform a major refactoring of the implementation I check it against the interface specification. Here is the interface specification of the basic object introspection interface that I demonstrated in the Looking into Mirrors post:

//Mirror for introspection upon all objects
var objectMirrorInterface = extendsInterface(objectBasicMirrorInterface, {
   prototype:  getAccess(objectMirrorInterface|null),
     //return a mirror on the reflected object's [[Prototype]]
   extensible: getAccess(Boolean),
     //return true if the reflected object is extensible
   ownProperties: getAccess(array(propertyMirrorInterface)),
     //return an array containing property mirrors
     //on the reflected object's own properties
   ownPropertyNames: getAccess(array(String)),
     //return an array containing the string names
     //of the reflected object's own properties
   keys: getAccess(array(String)),
     //return an array containing the string names of the
     //reflected object's enumerable own properties
   enumerationOrder: getAccess(array(String)),
     //return an array containing the string names of the
     //reflected object's enumerable own and inherited properties
   prop: method({name:String}, returns(propertyMirrorInterface|undefined)),
     //return a mirror on an own property
   lookup: method({name:String},returns(propertyMirrorInterface|undefined)),
     //return mirror on the result of a property lookup. It may be inherited 
   has: method({name:String}, returns(Boolean)),
     //return true if the reflected object has a property named 'name'
   hasOwn: method({name:String}, returns(Boolean)),
     //return true if the reflected object has an own property named 'name'
   specialClass: getAccess(String)
     //return the value of the reflected object's [[Class]] internal property
});

I used JavaScript object literals and a few helper functions to describe these interfaces. Here are the definitions of the helper functions used for this interface:

function getAccess(returnInterface) {}; //a "get-able" property
function method(arguments,returnInterface){}; // a method property
function extendsInterface(supers,members) {};//an interface adding to supers
function returns(returnInterface) {};   //return value of a method
function array(elementInterface) {};//array elements all support an interface

The JavaScript code of the interface definitions doesn’t actually do anything, but I find that being able to parse the interface specification using JavaScript forces me to apply some useful structuring discipline that I might skip if I was just writing prose descriptions. Plus, I think it is going to be quite useful to have these interface specifications in a form that is easily processed. Now that I have an initial implementation of jsmirrors, I may use it to create a little tool that can reflect upon the objects created by the interface specifications and perform useful tasks, such as generating unit test stubs for implementations of the interfaces, or directly validating the completeness of my implementations.
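Such a completeness check could be as simple as walking a spec object and confirming that each named member exists on an implementation. A minimal hypothetical sketch (checkImplements is my own illustrative name, not part of jsmirrors):

```javascript
// Hypothetical sketch: report which members named in an interface spec
// are missing from an implementation object.
function checkImplements(spec, impl) {
  return Object.keys(spec).filter(function (name) {
    return !(name in impl);  // accept own or inherited members
  });
}

var spec = {has: null, hasOwn: null, keys: null};
var impl = {has: function () {}, keys: []};
var missing = checkImplements(spec, impl);  // → ["hasOwn"]
```

A real tool would also want to check that each member has the right kind (accessor vs. method), which is exactly the information the getAccess and method helpers record.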

In factoring the jsmirrors functionality for reflecting upon objects I divided it into three primary interfaces. objectMirrorInterface, shown above, is the basic introspection interface. objectMutationMirrorInterface allows changes to be made to a reflected object such as adding or removing properties or changing the object’s prototype. objectEvalMirrorInterface allows various forms of evaluation upon reflected objects such as doing “puts” and “gets” (which may invoke accessor property functions) to access property values of a reflected object or to invoke a method property. There are also corresponding introspection, mutation, and evaluation interfaces for function object mirrors and also for property mirrors.
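To make the division concrete, here is a hypothetical sketch of what an evaluation-style mirror might look like. The names get, put, and invoke are my own illustration, not the actual members of objectEvalMirrorInterface:

```javascript
// Hypothetical evaluation mirror: property access and method invocation
// happen through the mirror, which may trigger accessor functions.
function makeEvalMirror(target) {
  return {
    get: function (name) { return target[name]; },          // may run a getter
    put: function (name, value) { target[name] = value; },  // may run a setter
    invoke: function (name) {                               // call a method property
      var args = Array.prototype.slice.call(arguments, 1);
      return target[name].apply(target, args);
    }
  };
}

var em = makeEvalMirror({x: 1, double: function (n) { return 2 * n; }});
// em.get("x") → 1;  em.invoke("double", 21) → 42
```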

In the actual implementation, these interfaces are combined in various ways to produce five different kinds of concrete mirrors on local objects. These various kinds of mirrors are accessible via factory functions that are accessed as properties of the Mirrors module object. The five local object mirror factories are:

  • Mirrors.introspect – supports only introspection using objectMirrorInterface.
  • Mirrors.evaluation – supports only evaluation using objectEvalMirrorInterface.
  • Mirrors.introspectEval – supports introspection and evaluation using objectMirrorInterface and objectEvalMirrorInterface.
  • Mirrors.mutate – supports introspection and mutation using objectMirrorInterface and objectMutationMirrorInterface.
  • Mirrors.fullLocal – supports introspection, mutation, and evaluation using all three interfaces.
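The way such factories combine separable interfaces can be sketched with a simple mixin helper. This is illustrative only, not the actual jsmirrors code:

```javascript
// Sketch: compose a concrete mirror from separable capability objects.
function mix() {
  var result = {};
  for (var i = 0; i < arguments.length; i++) {
    var cap = arguments[i];
    Object.getOwnPropertyNames(cap).forEach(function (name) {
      Object.defineProperty(result, name,
        Object.getOwnPropertyDescriptor(cap, name));
    });
  }
  return result;
}

// Two hypothetical capability sets:
var introspection = {
  has: function (name) { return name in this.target; }
};
var mutation = {
  remove: function (name) { delete this.target[name]; }
};

// A "fullLocal"-style mirror gets both capability sets:
var fullMirror = mix(introspection, mutation);
fullMirror.target = {a: 1};
```

A factory that produces introspection-only mirrors would simply omit the mutation and evaluation capability sets from the mix.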

I demonstrated the use of Mirrors.introspect in my previous post. The other mirror factories are used in exactly the same manner and, except for Mirrors.evaluation, could be used to run all the same examples. However, the other factories expose additional functionality that isn’t available using Mirrors.introspect. Take a look at the actual interface specifications in mirrorsInterfaceSpec.js to see which capabilities are provided by the mirror objects produced by each of these factories.

The reason for providing multiple mirror factories is to demonstrate that by using mirror-based reflection we can decide exactly how much reflection capability we will make available to any specific client or tool. We might allow one tool to use the full range of reflective interfaces. For another we may only expose introspection or evaluation capabilities or perhaps introspection and mutation capabilities without the ability to actually do reflective evaluation. However, so far, I’ve only shown mirrors that know how to reflect upon local objects that exist in the same heap as the mirror objects. In my next post I’ll look at how to use the same interfaces to reflect upon non-local objects that might be encoded in a file or exist in a remote environment.

(Photo by “Metro Centric”, Creative Commons Attribution License)

In my last post I introduced the programming language concept of Mirrors and mentioned jsmirrors, the prototype I’ve been working on to explore using mirrors to support reflection within JavaScript.  In this post I’m going to take a deeper look into jsmirrors itself.  I had three goals for my first iteration of jsmirrors:

  1. Define basic mirror-based interfaces for reflection upon JavaScript objects and properties.
  2. Demonstrate that jsmirrors  can support different levels of reflection privilege.
  3. Demonstrate that the jsmirrors interface can work with both local and external objects.

In this post I’m going to concentrate on showing details of the basic interfaces I designed to meet the first goal. In subsequent posts I’ll talk about the other two goals.

The actual implementation of jsmirrors is contained in the file mirrors.js.  Note that jsmirrors requires an ECMAScript 5 compatible JavaScript implementation. The jsmirrors implementation is structured using the module pattern and when loaded defines a single global named Mirrors whose properties are factory functions that can be used to create various kinds of mirror objects. The most basic mirror factory is called introspect and creates a mirror on a local object that only supports introspection (examination without modification):

//create a test object
var obj = {a:1, get b() {return "b value"}, c: undefined};
obj.c = {back: obj};  //make a circular reference to obj

//create an introspection mirror on obj
var m=Mirrors.introspect(obj);
console.log(m);   //output:  "Object Introspection Mirror #0"

In the above example, the first two statements create a couple of test objects and the Mirrors.introspect call creates an introspection mirror on one of them. We see from the console.log output how such mirror objects identify themselves using the toString method. Once we have such a mirror, we can use it to examine the structure and state of its reflected object:

console.log(m.ownPropertyNames);  //output:  "a,b,c"
console.log(m.extensible); //output:  true
console.log(m.has("toString")); //output:  true
console.log(m.hasOwn("toString")); //output:  false
var p=m.prototype;
console.log(p); //output:  "Object Introspection Mirror #3"
console.log(p.hasOwn("toString")); //output:  true

The first four console.log calls query various characteristics of the object reflected by the mirror m, such as a list of its own property names, whether or not additional properties may be added, and whether it locally defines or inherits a specific property. The m.prototype access queries for the object that is the prototype object of the reflected object. Note from the output that the value returned is also an introspection mirror. This is one of the important characteristics of this style of mirror interface: when an object value is accessed, a mirror on the object is always returned rather than the actual object. You may be curious why the mirror p is “Mirror #3” rather than “Mirror #1”. The reason is that some of the preceding method calls generated Mirrors #1-2 as part of their internal implementation.

Mirror objects aren’t unique. Multiple mirror objects may simultaneously exist that reflect on the same underlying object. The sameAs method can be used to determine if two mirrors are reflecting the same object:

console.log(m.sameAs(p));  //output:  false
var opm = Mirrors.introspect(Object.prototype);
console.log(p.sameAs(opm)); //output:  true
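One plausible way to implement sameAs for local mirrors is for each mirror to privately close over its reflected object and compare identities. This is just a sketch of the idea, not the actual jsmirrors implementation:

```javascript
// Sketch: each mirror closes over its target; sameAs asks the other
// mirror whether it reflects the same underlying object, without the
// target object itself ever being exposed to clients.
function makeMirror(target) {
  return {
    sameAs: function (other) { return other.reflects(target); },
    reflects: function (obj) { return obj === target; }
  };
}

var o = {};
var m1 = makeMirror(o);
var m2 = makeMirror(o);   // a second mirror on the same object
var m3 = makeMirror({});  // a mirror on a different object
// m1.sameAs(m2) → true;  m1.sameAs(m3) → false
```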

Introspection mirrors support several other methods. The complete list can be seen by looking at the objectMirrorInterface specification in mirrorsInterfaceSpec.js. Some of the most important methods provide access to information about specific properties. Property mirrors are returned to enable introspection of actual property definitions:

var pmb = m.lookup("b");
console.log(pmb);
  //output: "Accessor Property Introspection Mirror name: b #6"

Here the lookup method of a mirror object is used to retrieve the property named “b”. What is returned in this case is a property introspection mirror. The interface specifications propertyMirrorInterface, dataPropertyMirrorInterface, and accessorPropertyMirrorInterface in mirrorsInterfaceSpec.js describe the operations that can be performed on property introspection mirrors. For example:

console.log(pmb.isData);  //output: false
console.log(pmb.isAccessor); //output: true
console.log(pmb.enumerable); //output: true
Object.defineProperty(obj,"b",{enumerable: false});
console.log(pmb.enumerable); //output: false

The first two console.log calls test whether the reflected property is a data property or an accessor property, and the third reports the state of the property’s enumerable attribute. The last two lines demonstrate that the mirror presents a live view of the reflected object. The Object.defineProperty call modifies the enumerable attribute of the “b” property of the reflected object, and when the mirror is used again we see that the reported state of the enumerable attribute has changed to false. Note that we had to use a built-in reflection function to change the enumerable attribute because the mirrors we are using in these examples only support introspection and don’t allow any changes to the reflected objects to be made through the mirrors.

console.log(pmb.definedOn.sameAs(m)); //output: true
var fm=pmb.getter;
console.log(fm);  //output: "Function Introspection Mirror #8"
console.log(fm.source); //output: "function () {return \"b value\";}"

Property mirrors know what object “owns” the reflected property. The definedOn accessor returns a mirror on the owning object, and we use sameAs to verify that this mirror is actually reflecting the same object as our original mirror m. Because the property we are reflecting upon is an accessor property it has getter and setter functions. When we use the property mirror to access the property’s getter function, the result is yet another kind of mirror, a “Function Introspection Mirror”. As specified by the functionMirrorInterface in mirrorsInterfaceSpec.js, this is a kind of object mirror that adds reflection capabilities specific to function objects. For example, we can use the function mirror to retrieve the source code of the getter function.

The above examples provide just a quick overview of the capability of jsmirrors introspection mirrors and how they are used. But these mirrors only allow the inspection of objects. In many situations that is the only kind of reflection you need or that you will want to permit. However, there are situations where reflection needs to be able to perform other operations such as modifying the definitions of properties or calling reflected functions. In my next post, I’ll explore how jsmirrors supports those kinds of reflection and how it can be used to control or limit access to them.

A common capability of many dynamic languages, such as JavaScript, is the ability of a program to inspect and modify its own structure. This capability is generally called reflection. Examples of reflective capabilities of JavaScript include things like the hasOwnProperty and isPrototypeOf methods. ECMAScript 5 extended the reflection capabilities of JavaScript via functions such as Object.defineProperty and Object.getOwnPropertyDescriptor. There are many reasons you might use reflection, but two very common uses are for creating development/debugging tools and for meta-programming.

There are many different ways you might define a reflection API for a programming language. For example, in JavaScript hasOwnProperty is a method defined by Object.prototype so it is, in theory, available to be called as a method on all objects. But there is a problem with this approach. What happens if an application object defines its own method named hasOwnProperty? The application object definition will override the definition of hasOwnProperty that is normally inherited from Object.prototype. Unexpected results are likely to occur if such an object is passed to code that expects to do reflection using the built-in hasOwnProperty method. This is one of the reasons that the new reflection capabilities in ES5 are defined as functions on Object rather than as methods of Object.prototype.
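The shadowing hazard is easy to demonstrate (the customer property and the shadowing method below are just illustrative names):

```javascript
// An object that shadows hasOwnProperty breaks naive reflective callers:
var order = {
  hasOwnProperty: function () { return "gotcha"; },
  customer: "Pat"
};

order.hasOwnProperty("customer");  // "gotcha" -- not the Boolean we wanted

// Robust callers invoke the built-in method directly:
Object.prototype.hasOwnProperty.call(order, "customer");  // true

// ES5's function-based API sidesteps shadowing for the same reason:
Object.getOwnPropertyDescriptor(order, "customer").value; // "Pat"
```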

Another issue that arises with many reflection APIs is that they typically only work with local objects. Consider a tool that gives application developers the ability to graphically browse and inspect the objects in an application. If such a tool is effective, developers might want to use it in other situations. For example, they might want to inspect the objects on a remote server-based JavaScript application or to inspect a diagnostic JSON dump of objects produced when an application crashed. If JavaScript’s existing reflection APIs were used to create the tool there is no direct way it can be used to inspect such objects because the JavaScript reflection APIs only operate upon local objects within the current program.

There is also a tension between the power of reflection and security concerns within applications. Many of the reflection capabilities that are most useful to tool builders and meta-programmers can also be exploited for malicious purposes. Reflection API designers sometimes exclude potentially useful features in order to eliminate the potential of such exploits.

Mirrors is the name of an approach to reflection API design that attempts to address many of the issues that have been encountered with various programming languages that support reflection. The basic idea of mirrors is that you never perform reflective operations directly upon application objects. Instead, all such operations are performed upon distinct “mirror” objects that “reflect” the structure of corresponding application objects. For example, instead of coding something like:

if (someObj.hasOwnProperty('customer')) {...

you might accomplish the same thing via mirrors via something like:

if (Mirror.on(someObj).hasOwnProperty('customer')) {...

Mirrors don’t have the sort of issues I discussed above because when using them you never directly reflect on application objects. There is never any problem if the application just happens to define a method that has the same name as a reflection API method. Because reflection-based tools interact with the underlying objects only indirectly via mirror objects, it is possible to create different mirrors that use a common interface to access local objects, remote objects, or static objects stored in a file. Similarly, it is possible to have mirrors that present a common interface but differ in how much reflection they allow. A trusted tool might be given access to a mirror that supports the most powerful reflective operations while an untrusted plug-in might be restricted to mirrors that support only a limited set of reflective operations.

Gilad Bracha and David Ungar are the authors of a paper that explains the principles behind mirror-based reflection: Mirrors: Design Principles for Meta-level Facilities of Object-Oriented Programming Languages. I highly recommend it if you are interested in the general topic of reflection.

Mirrors were originally developed for the Self programming language, one of the languages that influenced the original design of JavaScript. Recently, I’ve been experimenting with defining a mirror-based reflection interface for JavaScript. An early prototype of this interface named jsmirrors is now up on github. It uses a common interface to support reflection on both local JavaScript objects and on a JSON-based object encoding that could be used for remote or externally stored objects. It also supports three levels of reflection privilege.

In my next post I’ll explain more of the usage and design details of jsmirrors.  In the meantime, please feel free to take a look at the prototype.

(Photo by “dichohecho”, Creative Commons Attribution License)