[Slide: transitional technologies and moving “quickly through” the transition]

A transitional technology is a technology that emerges as a computing era settles into maturity and which is a precursor to the successor era. Transitional technologies are firmly rooted in the “old” era but also contain important elements of the “new” era. It’s easy to think that what we experience using transitional technologies is what the emerging era is going to be like. Not likely! Transitional technologies carry too much baggage from the waning era. For a new computing era to fully emerge we need to move “quickly through” the transition period and get on with the business of inventing the key technologies of the new era.

[Slide: it’s not the hardware]

Computing “generations” used to be defined by changing computer hardware. Not anymore. The evolution of computing hardware (and software) technologies may enable the transition to a new era of computing. But it isn’t the hardware that really defines such an era. Instead, a new computing era emerges when hardware and software innovations result in fundamental changes to the way that computing impacts people and society. A new computing era is about completely rethinking what we do with computers.

Over the last several years, a lot of my ideas about the future of computing have emerged as I prepared talks and presentations for various venues. For such talks, I usually try to illustrate each key idea with an evocative slide. I’ve been reviewing some of these presentations for material that I should blog about. But one thing I noticed is that some of these slides really capture the essence of an idea. They’re worth sharing and shouldn’t be buried deep within a presentation deck where few people are likely to find them.

So, I’m going to experiment with a series of short blog posts, each consisting of an image of one of my slides and at most a paragraph or two of supplementary text. But the slide is the essence of the post. One nice thing about this form is that the core message can be captured in a single tweet. A lot of reading time isn’t required. And if it isn’t obvious, “slide bite” is a play on “sound bite”.

Let me know (tweet me at @awbjs) what you think about these slide bites. I’m still going to write longer form pieces but for some ideas I may start with a slide bite and then expand it into a longer prose piece.

In 2011 I wrote a blog post where I presented the big-picture model I use for thinking about what some people were calling the “post-PC computing era”. Since then I’ve written other related posts, given talks, and had conversations with many people. Time appears to be validating my model, so it seems like a good time to do an updated version of the original post.

There seems to be broad agreement that the Personal Computing Era has ended. But what does that really mean? What happens when one era ends and another begins? What is a “computing era”?

e·ra /ˈirə,ˈerə/

google.com

noun

  1. a long and distinct period of history with a particular feature or characteristic.

Digital computing emerged in the years immediately following World War II and by the early 1950s computers started to be commercially available. So, the first era of computing must have started about 1950. But how many other eras have passed since then? There are many ways that people slice up the history of modern computing. By the late 1960s the electronics foundation of computers had reached its “3rd generation” (vacuum tubes, transistors, integrated circuits). Some people consider the emergence of software technologies such as time-sharing, relational databases, or the web to mark distinct eras.

I don’t think any of those ways of slicing up computing history represent periods that are long enough or distinctive enough to match the dictionary definition of “era” given above. I think we’ve only passed through two computing eras and have just recently entered the third. This picture summarizes my perspective:

[Figure: conceptual timeline of the three major computing eras]

The most important idea from this picture is that, in my view, there have only been three major “eras” of computing. Each of these eras spans thirty or more years and represents a major difference in the primary role computers play in human life and society. The three eras also correspond to major shifts in the dominant form of computing devices and software. What is pictured is a conceptual timeline, not a graph of any actual data. The y-axis is intended to represent something like the overall impact of computing upon average individuals, but it can also be seen as an abstraction of other relevant factors such as the socioeconomic impact of computing technologies.

The first era was the Corporate Computing Era. It was focused on using computers to enhance and empower large organizations such as commercial enterprises and governments. Its applications were largely about collecting and processing large amounts of schematized data. Databases and transaction processing were key technologies.

During this era, if you “used a computer” it would have been in the context of such an organization. However, the concept of “using a computer” is anachronistic to that era. Very few individuals had any direct contact with computing, and for most of those who did, the contact was only via corporate information systems that supported some aspects of their jobs.

The Corporate Computing Era started with the earliest days of computing in the 1950s, and obviously corporate computing still is, and will continue to be, an important sector of computing. This is an important aspect of my model of computing eras. When a new era emerges, the computing applications and technologies of the previous eras don’t disappear. They continue and probably even grow. However, the overall societal impact of those previous forms of computing becomes relatively small in comparison to the scope of impact of computing in the new era.

Around 1980 the primary focus of computing started to rapidly shift away from corporate computing. This was the beginning of the Personal Computing Era. The Personal Computing Era was about using computers to enhance and empower individuals. Its applications were largely task-centric and focused on enabling individuals to create, display, manipulate, and communicate relatively unstructured information. Software applications such as word processors, spreadsheets, graphic editors, email, games, and web browsers were key technologies.

We are currently still in the early days of the third era. A change to the dominant form of computing is occurring that will be at least as dramatic as the transition from the Corporate Computing Era to the Personal Computing Era. This new era of computing is about using computers to augment the environment within which humans live and work. It is an era of smart devices, perpetual connectivity, ubiquitous information access, and computer-augmented human intelligence.

We still don’t have a universally accepted name for this new era. Some common names are post-PC, pervasive, or ubiquitous computing. Others focus on specific technical aspects of the new era and call it cloud, mobile, or web computing. But none of these terms seems to capture the breadth and essence of the new era. They are either too focused on a specific technology or on something that is happening today rather than something that characterizes a thirty-year span of time. The name that I prefer, and which seems to be gaining some traction, is “ambient computing”:

am·bi·ent /am-bee-uhnt/

dictionary.com

adjective

  1. of the surrounding area or environment
  2. completely surrounding; encompassing

In the Ambient Computing Era humans live in a rich environment of communicating computer enhanced devices interoperating with a ubiquitous cloud of computer mediated information and services. We don’t even perceive most of the computers we interact with. They are an invisible part of our everyday things and activities. In the Ambient Computing Era we still have corporate computing and task-oriented personal computing style applications. But the defining characteristic of this era is the fact that computing is shaping the actual environment within which we live and work.

The early years of a new era are an exciting time to be involved in computing. We all have our immediate goals, and much of the excitement and opportunity is focused on shorter-term objectives. But while we work to create the next great app, machine learning model, smart IoT device, or commercially successful site or service, we should occasionally step back and think about something bigger: What sort of ambient computing environment do we want to live within, and is our current work helping or hindering its emergence?

I’ve written before about a transition period to a new era of computing. Earlier this month I gave a keynote talk at the Front-Trends conference in Warsaw. In preparing this talk I discovered a very interesting graphic created by Asymco for an article about the Rise and Fall of Personal Computing. It was so interesting that I used it to frame my talk. Here is what my first slide looked like, incorporating the Asymco visualization:

[Figure: Asymco chart of the rise and fall of personal computing platform market share]

This graph shows the market share of various computing platforms since the very first emergence of what can be characterized as a personal computer. I urge you to read the Asymco article if you are interested in the details of this visualization. Keep in mind that it is showing percentage share of a rapidly expanding market. Over on the left edge we are talking about a total worldwide computer population that could be measured in the low hundreds of thousands. On the right we are talking about a market size in the high hundreds of millions of computers.

For my talk, I used the graph as an abstraction of the entire personal computing era. The important thing was that there was a period of around ten years before the Windows/Intel PC platform really began to dominate. I remember those days. I was a newly graduated software engineer and those were exciting times. We knew something big was happening; we just didn’t know for sure what it was or how it was all going to shake out. Each year there were one or more new technologies and companies that seemed to be establishing themselves as the dominant platform. But then something changed, and within a year or two somebody else seemed to be winning. It wasn’t until the latter part of the 1980s that the Wintel platform could be identified as the clear winner. That was the beginning of a 20+ year period that, based upon this graph, I’m calling the blue bubble.

While many interesting things (for example, the Web) happened during the period of the blue bubble, overall it was a much less exciting time to be working in the software industry. For most of us, there was no option other than to work within the confines of the Wintel platform. There were good aspects to this, as a fixed and relatively stable platform provided a foundation for the evolution of PC-based applications, and ultimately the applications were what mattered most from a user perspective. But as a software developer, it just wasn’t the same as that earlier period before the bubble formed. To those of us who were around for the first decade of the PC era there were just too many constraints inside the bubble. There were still plenty of technical challenges, but there wasn’t the broad sense that we were all collectively changing the world. But then, the blue bubble became normal. Until very recently, most active software developers had never experienced a professional life outside that bubble.

The most important thing for today is what is happening on the right-hand side of this graph. Clearly, the big blue bubble is coming to an end. This coincides with what I call the transition from the Personal Computing Era to the Ambient Computing Era. Many people think we are already inside the next blue bubble. That Apple, or Google, or maybe even “the Web” has already won platform dominance for the next computing era. Maybe so, but I doubt it. Here is a slide I used at the end of my recent talk:

[Figure: the same Asymco chart, relabeled for the transition to the Ambient Computing Era]

It’s the same graphic. I only removed the platform legend and changed the title and timeline. The key point is that we probably aren’t yet inside the next blue bubble. Instead, we are most likely in a period that is more similar to the first ten years of the PC era. It’s a time of chaotic transition. We don’t know for sure which companies and technologies map to the colors in the graph. We also don’t know the exact time scale; 2013 isn’t necessarily equivalent to 1983. It’s probably the case that the dominant platform of the Ambient Computing Era is not yet established. The ultimate winner may already be out there along with several other contenders. We just don’t know with certainty how it’s all going to come out.

Things are really exciting again. Times of chaos are times of opportunity. The constraints of the last blue bubble are gone and the next blue bubble isn’t set yet. We all need to drop our blue bubble habits and seize the opportunity to shape the new computing era. It’s a time to be aggressive and to take risks. It’s a time for new thinking and new perspectives.  This is the best of times to be a software developer. Don’t get trapped by blue bubble thinking and don’t wait too long. The window of opportunity will probably only last a few years before the next blue bubble is firmly set. After that it will be decades  until the next such opportunity.

We’re all collectively creating a new era of computing.  Own it and enjoy the experience!

My plan is for this to be the first in a series of posts that talk about specific medium term challenges facing technologists as we move forward in the Ambient Computing Era.  The challenges will concern things that I think are inevitable but which may not be getting enough attention right now. But with attention, we should see significant progress towards solutions over the next five years.

Here’s the first challenge. I have too many loosely coordinated digital devices and digital services. Every day, I spend hours using my mobile phone, my tablet, and my desktop Mac. I also regularly use a laptop, a FirefoxOS test phone, and my DirecTV set-top box/DVR. Less regularly, I use the household iPad, an Xbox/Kinect in our family room, and a couple of Denon receivers with network access. Then, of course, there are various other active digital devices like cameras, a FitBit, runner’s watches, an iPod shuffle, etc. My car is too old to have much user-facing intelligence, but I’m sure that won’t be the case with the next one.

Each of these devices is connected (at least indirectly) to the Internet and most of them have some sort of web browser. Each of them locally holds some of my digital possessions. I try to configure and use services like Dropbox and Evernote to make sure that my most commonly used possessions are readily available on all my general-purpose devices, but sometimes I still resort to emailing things to myself.

I also try to similarly configure all my MacOS devices and all my Android devices. But even so, everything I need isn’t always available on the device I’m using at any given moment, even in cases where the device is perfectly capable of hosting it.

Even worse, each device is different in non-essential, but impossible to ignore, ways. I’m never just posting a tweet or reading my favorite news streams. I’m always doing it on my tablet, or at my desk, or with my phone, and the experience is different on each of them in some ways. In every case, I have to focus as much attention on the device I’m physically using, and how it differs from my other devices, as I do on the actual task I’m interested in accomplishing. And it’s getting worse. Each new device I acquire may give me some new capability, but it also adds to the chaos.

Now, I have the technical skills that enable me to deal with this chaos and get a net positive benefit from most of these devices. But it isn’t where I really want to be investing my valuable time.

I simply want to think about all my “digital stuff” as things that are always there and always available, no matter where I am or which device I’m using. When I get a new device, I don’t want to spend a day installing apps and configuring it. I just want to identify myself and have all my stuff immediately available. I want my stuff to look and operate familiarly. The only differences should be those that are fundamental to the specific device and its primary purpose. My attention should always be on my stuff. Different devices and different services should fade into the background. “Digital footprint” was the term I used in A Cloud on Your Ceiling to refer to all this digital stuff.

Is any progress being made towards achieving this? Cloud hosted services from major industry players such as Google and Apple may feel like they are addressing some of these needs. But, they generally force you to commit all your digital assets to a single corporate caretaker and whatever limitations they choose to impose upon you.  Sometimes such services are characterized as “digital lockers”.  That’s not really what I’m looking for. I don’t want to have to go to a locker to get my stuff; I just want it to appear to always be with me and under my complete control.

The Locker Project is something I discovered while researching this post; it sounded like relevant work, but it appears to be moribund. However, it led me to discover an inspirational short talk by one of its developers, Jeremie Miller, who paints a very similar vision to mine. The Locker Project appears to have morphed into the Singly AppFabric product, which seems to be a cloud service for integrating social media data into mobile apps. This is perhaps a step in the right direction, but not really the same vision. I suspect there is a tension between achieving the full vision and the short-term business realities of a startup.

So, that’s my first Ambient Computing challenge. Create the technology infrastructure and usage metaphors that make individual devices and services fade into the background and allow us all to focus our attention on actually living our digitally enhanced lives.

I’m interested in hearing about other relevant projects that readers may know about and other challenges you think are important.

(Photo by “IndyDina with Mr. Wonderful”, Creative Commons Attribution License. Sculpture by Tom Otterness)

Recently a friend of mine asked whether the open web still matters. His theory was that the open web, as described in the Mozilla Mission, no longer mattered because much of what we used to do using web browsers is now rapidly shifting to “apps”. Why worry about the open web if nobody is going to be using it?

To me, this is really a question about what do we mean by “the web”. If by  “the web” we are just referring to the current worldwide collection of information made available by http servers and accessed most commonly using desktop browsers, then maybe he’s right.  While I use it all the time, I don’t think very much about the future of that web. Much about it will surely  change over the next decade. The 1995 era technologies do not necessarily need to be protected and nourished. They aren’t all good.

What I think about is the rapidly emerging pervasive and ambient information ecology that we are living within. This includes every digital device we regularly interact with. It includes devices that provide access to information but also devices that collect information. Some devices are “mobile”; others are built into the physical infrastructure that surrounds us. It includes the sort of high production-value creative works that we see today “on the web” and still via pre-web media. But it also includes every trivial digital artifact that I create while going about my daily life and work.

Is this “the web”? I’m perfectly happy to call it that. It certainly encompasses the web as we know it today. But we need to be careful using that term, to ensure that our thinking and actions aren’t over-constrained by our perception of yesterday’s “web”. This is why I like to tell people we are still in the very early stages of the next digital era. I believe that the web we have today is, at most, the Apple II or TRS-80 of this new era. If we are going to continue to use “the web” as a label then it needs to represent a 20+ year vision that transcends http and web browsers.

Technology generally evolves incrementally. Almost all of us spend almost all of our time working on things that are just “tactical” from the perspective of a twenty-year vision. We are responding to what is happening today and working for achievement and advantage over the next 1-3 years. I think that the shift from “websites” to “apps” that my friend mentioned is just one of these tactical evolutionary vectors, a point on the road to the future. The phenomenon isn’t necessarily any more or less important than other point-in-time alternatives such as Flash vs. HTML or iOS vs. Android. I think it would be a mistake to assume that “apps” are a fundamental shift. We’ll know better in five years.

While everybody has to be tactical, a long-term vision still has a vital role. A vision of a future that we yearn to achieve is an important influence upon our day-to-day tactical work. It’s the star that we steer by. A personal concern of mine is that we are severely lacking in this sort of long-term vision of “the web”. That’s why my plan for this year is to write more posts like “A Cloud on Your Ceiling” that explore some of these longer-term questions. I encourage you to also spend some time thinking long term. What sort of digitally enhanced world do you want to be living in twenty years from now? What are you doing to help us get there?

(Photo by “mind_scratch”, Creative Commons Attribution License)

I’ve previously written that we are in the early stages of a new era of computing that I call “The Ambient Computing Era”.  If we are truly entering a new era then it is surely the case that the computers we will be using twenty or more years from now will exist in forms that are quite unlike the servers, desktop PCs, phones, and tablets we use today.  We can at best speculate or dream about what that world may be like.  But some of my recent readings about emerging technologies have inspired me to think about how things might evolve.

This week I learned about “WiGig”, which is WiFi operating on 60 GHz radio frequencies. WiGig router chip sets already exist and support a theoretical throughput of 7 Gbps. The catch is that 60 GHz radio waves won’t penetrate walls or furniture. So if you want really high-bandwidth wireless communications from something in your lap or on your sleeve to that wall-size display, you are probably going to want to hang a WiGig router on your ceiling. If your room is large or has a lot of furniture you may need several.
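
As a rough back-of-the-envelope illustration (my numbers, not anything from the WiGig specification), the standard free-space path loss formula shows how much harder 60 GHz is to work with than the familiar 5 GHz WiFi band, even before walls enter the picture:

\[ \mathrm{FSPL}(d, f) \;=\; 20\log_{10}(d) + 20\log_{10}(f) + 32.44\ \mathrm{dB} \qquad (d\ \text{in meters},\ f\ \text{in GHz}) \]

\[ \Delta \;=\; 20\log_{10}\!\left(\frac{60\ \mathrm{GHz}}{5\ \mathrm{GHz}}\right) \;\approx\; 21.6\ \mathrm{dB} \]

So at the same distance a 60 GHz link gives up roughly 21.6 dB relative to 5 GHz, and on top of that the millimeter waves are largely absorbed by walls and furniture. Line-of-sight, in-room placement, like a router hanging from the ceiling, follows naturally.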

This got me thinking about what other sorts of intelligent devices we may be hanging on our ceilings. The first thing that came to mind was LED lighting. Until very recently, I was one of those people who would make jokes about assigning IP addresses to light bulbs. But recently I was at a friend’s house where I saw exactly that: network-addressable smart LED light bulbs. It turns out that a little intelligence is actually useful in producing optimal room light with LEDs, and when you have digital intelligence you really want to control it with something more sophisticated than a simple on/off switch. So get ready for lighting with IPv6 addresses. But they probably won’t be bulb shaped.
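
To make the “more sophisticated than a simple on/off switch” point concrete, here is a minimal, purely hypothetical TypeScript sketch of the kind of interface a network-addressable light might expose. The names and parameters are mine, invented for illustration; they don’t correspond to any real product’s API.

// Hypothetical sketch only; not a real lighting API.
interface NetworkedLight {
  address: string;                                     // e.g. the fixture's IPv6 address
  setPower(on: boolean): Promise<void>;
  setBrightness(percent: number): Promise<void>;       // 0-100
  setColorTemperature(kelvin: number): Promise<void>;  // e.g. 2700-6500 K
}

// A "scene" coordinates several fixtures to produce one room-level result,
// which is exactly the kind of thing a wall switch can't express.
async function eveningScene(lights: NetworkedLight[]): Promise<void> {
  await Promise.all(
    lights.map(async (light) => {
      await light.setPower(true);
      await light.setBrightness(40);
      await light.setColorTemperature(2700); // warm light for the evening
    })
  );
}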

Both WiGig routers and networked LED room lighting are still too expensive for wide adoption, but like all solid-state electronic devices we can expect their actual cost to approach zero over the next twenty or so years. So there we have at least two kinds of intelligent devices that we probably will have hanging from our ceilings. But will they really be separate devices? I could easily imagine a standardized ceiling panel, let’s say a half meter square, consisting of LED lighting, a WiGig router, and other room electronics. A standardized form factor would allow our homes and offices to be built (or updated) to include the infrastructure (power, external connectivity, physical mounting) that lets us easily service and evolve these devices. In honor of one of the most important web memes, I suggest that we call such a panel a “CAT”, or Ceiling Attached Technology.

So, what other functionality might be integrated on a CAT? Certainly we can expect sensors including cameras that allow the panel to “see” into the room. A 256 or 512 core computing cluster with several terabytes of local storage also seems very plausible.  Multiple CATs in the same or adjoining rooms would presumably participate in a mesh network that ultimately links to the rest of the digital world via high-speed wired or wireless “last mile” connections. Basically, our ceilings and walls could become what we think of today as “cloud” data centers.

What sort of computing would be taking place in those ceiling clouds? One possibility is that our entire digital footprint (applications, services, active digital archives) might migrate to and be cached in the CATs that are physically closest to us.  As we move about or from location to location, our digital footprint just follows us.  No need to make long latency round trips to massive data centers in eastern Oregon or contend for resources with millions of other active users.
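
Here is a small, entirely speculative TypeScript sketch of what “your footprint follows you” might look like from a CAT’s point of view. Everything here (the CatPanel interface, the presence event, the prefetch call) is invented for illustration; it is a sketch of the idea, not a design.

// Speculative sketch: all names are invented for illustration.
interface CatPanel {
  id: string;
  roomId: string;
  freeStorageGB: number;
  prefetch(userId: string, manifest: string[]): Promise<void>; // pull footprint items into local storage
  evict(userId: string): Promise<void>;                        // reclaim space once the user has moved on
}

interface PresenceEvent {
  userId: string;
  roomId: string; // the room the user just entered, as reported by CAT sensors
}

// When a user is sensed entering a room, warm the local CATs with the items
// they use most, so later requests never round-trip to a distant data center.
async function followUser(
  event: PresenceEvent,
  panels: CatPanel[],
  footprintManifest: string[]
): Promise<void> {
  const local = panels.filter((p) => p.roomId === event.roomId);
  local.sort((a, b) => b.freeStorageGB - a.freeStorageGB); // most free space first
  for (const panel of local) {
    await panel.prefetch(event.userId, footprintManifest);
  }
}

The hard parts, of course, are exactly the ones raised next: integrity, security, and privacy as all of this migrates around us.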

Of course, there are tremendous technology challenges standing between what we have today and this vision. How do we maintain the integrity of our digital assets as they follow us around from CAT to CAT? How do we keep them secure and maintain our personal privacy? What programs get to migrate into our CATs? How do we make sure it’s not malicious? How do we keep our homes from becoming massive botnets? That’s why I think it’s important for some of us to start thinking about where this new computing era is heading and how we want to shape it. We can start inventing the ambient computing world just like Alan Kay and his colleagues at Xerox PARC started in the early 1970s with the vague concept of a “Dynabook” and went on to invent most of the foundational concepts that define personal computing.

If you find yourself thinking about “Post-PC Computing” keep in mind that the canonical computer twenty years from now will probably look nothing like a cell phone or tablet. It may look like a ceiling tile. I hope this warps your thinking.

(Photo by “Suicine”, Creative Commons Attribution License)

In my post, The Browser is a Transitional Technology, I wrote that I thought  web browsers were really Personal Computing Era applications and that browsers were unlikely to continue to exist as such as we move deeply into the Ambient Computing Era. However,  I expect browser technologies to have a key role in the Ambient Computing Era. In Why Mozilla, I talked about the inevitable emergence of a universal application platform for the Ambient Era and how open web technologies could serve that role. Last month I gave a talk where I tried to pull some of these ideas together:

For slides 14-19, I talked about how, when you remove the PC application facade from a modern browser, you have essentially an open web-based application platform that is appropriate for all classes of ambient computing devices.

Today Mozilla announced an embryonic project that is directed towards that goal. B2G, or Booting to the Web, is about showing that the open web application platform can be the primary platform for running native-grade applications. As the project page says:

Mozilla believes that the web can displace proprietary, single-vendor stacks for application development. To make open web technologies a better basis for future applications on mobile and desktop alike, we need to keep pushing the envelope of the web to include — and in places exceed — the capabilities of the competing stacks in question.

One of the first steps is to boot devices directly into running Gecko, Mozilla’s core browser engine. Essentially, the devices will boot directly into the browser platform, but without the baggage and overhead of a traditional PC-based web browser. This is the vision of slide 17 of my presentation. The “G” in B2G comes from the use of Gecko, but the project is really about the open web. Any other set of browser technologies could potentially be used in the same way. As the project web site says: “We aren’t trying to have these native-grade apps just run on Firefox, we’re trying to have them run on the web.”

This project is just starting, so nobody yet knows all the details or how successful it will be. But, like all Mozilla projects, it will take place in the open and with an open invitation for your involvement.

Recently I’ve had some conversations with some colleagues about how Web IDL is used to specify the APIs that browsers support for web applications.  I think our discussions raised some interesting questions about  the fundamental nature of the web app platform so I wanted to raise those same questions here.

Basically, is the browser web app platform an application framework, or is it really something more like an operating system? Stated more concretely, is the web app platform most similar to the Java or .Net platforms, or is it more similar to Linux or Windows? In the long term this is probably a very important question. It makes a difference in the sort of capabilities that can be made available to a web app and also in the integrity expectations concerning the underlying platform.

In a framework, client code directly integrates with and extends the platform code. This allows client code to do very powerful things, but the cost is that client code can do things that result in platform-level errors or even failures. Modern frameworks are pretty much all defined in terms of object-oriented concepts because those concepts permit the client extensibility that is the primary motivation for building a framework. Frameworks generally have to trust their clients because they frequently have to pass control into client code, and there is no way they can anticipate or validate everything client code might do. Frameworks are great from the perspective of what they allow developers to create; they are less great in terms of robustness and integrity.

In an operating system, client code almost never directly integrates with the platform code. Client code is limited to a fixed set of actions that can be requested via a fairly simple system call interface. In the absence of platform bugs, client code can’t cause platform-level errors or crash the platform, because the platform carefully validates every aspect of every system call request and never directly executes untrusted client code. Operating systems don’t trust their clients. Successful operating system APIs are pretty much all expressed in terms of procedure calls that only accept scalars and simple structs as arguments, because such arguments can be fully validated before the platform uses them to perform any action. Operating systems are great from a robustness and integrity perspective, but they don’t offer much direct help to clients that need to do complex things.
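
A toy TypeScript sketch of the contrast, with names invented purely for illustration, might look like this:

// Framework style: the platform calls back into client-supplied objects,
// so it must run code it cannot fully validate.
interface Widget {
  render(ctx: unknown): void; // the platform will invoke this client code directly
}
class Framework {
  private widgets: Widget[] = [];
  register(w: Widget): void { this.widgets.push(w); }
  paintAll(ctx: unknown): void {
    // A buggy or hostile render() runs with the platform on the call stack.
    for (const w of this.widgets) w.render(ctx);
  }
}

// OS style: clients can only request a fixed set of actions, described with
// scalars and simple structs that are validated before anything happens.
interface WriteRequest { fd: number; offset: number; bytes: Uint8Array; }
function sysWrite(req: WriteRequest): number {
  if (!Number.isInteger(req.fd) || req.fd < 0) return -1;         // reject a bad descriptor
  if (!Number.isInteger(req.offset) || req.offset < 0) return -1; // reject a bad offset
  // ... the platform performs the write itself; no client code ever runs here.
  return req.bytes.length;
}

The framework version is far more expressive, but everything it does depends on trusting render(); the OS version can be made robust precisely because it accepts so little.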

Historically, there have been various attempts to create operating systems that use framework-style object-oriented client interfaces. All the major attempts at doing this that I am aware of have been dismal failures. Taligent and Windows Longhorn are two notorious examples. The problem seems to be that the power and extensibility that come with framework-style interfaces are in direct conflict with the robustness and integrity requirements of an OS. It is very difficult, and perhaps impossible, to find a compromise that provides sufficient power, extensibility, robustness, and integrity all at the same time. Systems like Taligent and Longhorn also had significant durability issues because one of the ways they tried to balance power and integrity was by describing their APIs in terms of static, recursive, object-oriented typing, which is very hard to evolve in a backwards-compatible fashion over multiple versions.

This begins to sound a lot like the way Web IDL is being used to describe web app APIs. The web app platform has framework-style APIs, but browser implementers would like to have OS-style system integrity and robustness.

One way OSes have addressed this issue is by using a kernel. The kernel is a small part of the overall platform that is very robust, has high integrity, and exposes very stable APIs. The majority of the platform is outside the kernel. In general, bugs or misuse of non-kernel code may crash an application, but they can’t crash the entire system. One way to think about large application frameworks like Java and .Net is that they are the low-integrity but high-leverage outermost layer of such a kernelized design.
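
A brief sketch of that layering, again with invented names: a narrow kernel interface that the platform fully controls, and a richer framework layer that can only reach the system through it.

// Illustrative layering sketch, not a real API.
// The kernel: a small, stable, fully validated surface.
interface Kernel {
  read(resourceId: number): Uint8Array | null;
  write(resourceId: number, bytes: Uint8Array): boolean;
}

// The framework: a rich, extensible, lower-integrity layer built entirely on
// top of the kernel. If it misbehaves it can break the app that uses it, but
// it has no way to corrupt the system underneath.
class DocumentStore {
  constructor(private kernel: Kernel) {}
  save(id: number, text: string): boolean {
    return this.kernel.write(id, new TextEncoder().encode(text));
  }
  load(id: number): string | null {
    const bytes = this.kernel.read(id);
    return bytes ? new TextDecoder().decode(bytes) : null;
  }
}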

So what is the web app platform: a framework or an OS? I think it needs to be designed mostly like a framework. However, there probably is a kernel of functionality that needs to be treated more like an OS. That kernel is not yet well identified. It probably needs to be. Otherwise, the designers of the web application platform run the risk of going down the same dead-end paths that were taken by the designers of “object-oriented” OSes like Taligent and Longhorn.

(Photo by Pink Sherbet Photography, some rights reserved)