{"feed":"Benedict-Evans","feedTitle":"Benedict Evans","feedLink":"/feed/Benedict-Evans","catTitle":"Business","catLink":"/cat/bussiness"}

In 1999, when WAP was the future of mobile, the industry group behind SIM cards worked out a way to use the programmable space on a SIM to build a complete WAP browser. This meant that instead of having to wait for consumers to buy new phones with WAP built-in, mobile operators could push a WAP browser onto every phone already in use over the air and get people to start using these services straight away. 

This looked like genius - if you worked for the SIM industry group. The problem was that any phone that hadn't shipped with a WAP browser also, ipso facto, had no kind of dedicated data network access (GPRS at the time) and so would be accessing these services over dial-up at something under 9.6 Kbits/second (and paying per minute for call time), and also almost certainly only had a one or two line character-based screen. Adding WAP to such a phone would be almost totally pointless.  

This is an extreme example of a bridge product. A bridge product says 'of course x is the right way to do this, but the technology or market environment to deliver x is not available yet, or is too expensive, and so here is something that gives some of the same benefits but works now.'

Hence, retrofitting a WAP browser to existing phones was a bridge and, indeed, WAP itself was a bridge. It was self-evident even by 1999 that the ‘right’ approach was to put the web...

A few weeks ago I spent several days marching around CES in Las Vegas (along with close to 200,000 other people), and as in previous years I saw 'smart' versions of just about anything you can imagine and many you can't. I also heard just about any thesis you can imagine, from 'this is all nonsense' to 'this is the next platform and voice-based AI will transform our homes and replace the smartphone.'

I'm not quite sure what my grand unified thesis on 'smart home' is, but I think there are some building blocks to try to get closer to one:

  1. Will people buy 'smart' anything at all? Will people buy a whole lot of smart things, or just one or two (for example, a door lock, a thermostat and nothing else)? Why?
  2. If they do buy more than a handful of things, will they all be connected into one system, with a voice front end?
  3. Finally, if lots of people do have three dozen smart things all connected to Alexa (or Siri, or Google), does that change the broader tech environment? Does it result in massive...

Two months ago I gave a presentation talking about the fundamental structural trends in tech - on the one hand, we talk about what we can build on the billions-scale platform that the smartphone gives us now, and who can compete there with GAFA, and on the other, we wonder what the next decade-scale, billions-scale platforms or trends might be - machine learning, autonomous cars and so on. 

However, there are also some important current trends that don’t necessarily fit into those over-arching narratives, but might have almost as much impact in the next, say, five years. I’ve written some long pieces about what might happen to TV, and to discovery in general, but I haven’t written a single unified theory of the future of retail (and I’m not sure anyone could), nor advertising. Yet there is a set of accelerating and interlocking changes happening in TV, advertising and retail that could lead to some interesting discontinuous and cascading effects. So, in no particular order...

TV viewing is finally starting to unlock and move away from linear and cable bundles, especially in the USA. That ought in due course to have some effect on TV ad inventory and rates: some viewing will go to places without ads, or where ads are sold very differently, overall viewing might fall (though this seems unlikely), and viewing will be distributed in different ways - probably, as tends to happen with digital, the curve will get much steeper. The hits are bigger and the...

When you look at large manufacturing companies, it becomes very clear that the machine that makes the machine is just as important as the machine itself. There’s a lot of work in the iPhone, but there’s also a lot of work in the machine that can manufacture over 200m iPhones in a year. Equally, there’s a lot of work in a Tesla Model 3, but Tesla has yet to build a machine that can manufacture Model 3s efficiently, reliably, quickly and at quality at the scale of the incumbent car industry.

More than any of the other big tech platform companies, Amazon is a machine that makes the machine. People tend to talk about the famous virtuous circle diagram - more volume, lower costs, lower prices, more customers and so more volume. However, I think the operating structure of Amazon - the machine - is just as important, and perhaps less often talked about.

Amazon at its core is two platforms - the physical logistics platform and the ecommerce platform. Sitting on top of those, there is radical decentralization. Amazon is hundreds of small, decentralized, atomized teams sitting on top of standardised common internal systems. If Amazon decides that it’s going to do (say) shoes in Germany, it hires half a dozen people from very different backgrounds, maybe with none of them having anything to do with shoes or ecommerce, and it gives them those platforms, with internal transparency of the metrics of every other team, and of course, other people...

This autumn I gave the keynote at Andreessen Horowitz's annual 'Tech Summit' conference, talking about the state of tech today and what's likely to happen in the next decade: mobile, Google / Apple / Facebook / Amazon, innovation, machine learning, autonomous cars, mixed reality and crypto-currencies. 

(I had a cold). 

This is in part an expansion of some of the things I wrote about in this post in the spring: 'Ten Year Futures'.  

This is the 'New Look', created by Christian Dior in 1947. It was a very conscious shift away from the restrictions and sumptuary constraints of the war, and a move to a very different way of feeling about how you looked and how you lived. It was a move away from narrow profiles, limited use of cloth, 'make do and mend' and women's clothes designed for working in munitions factories. It used twenty metres of fabric for an outfit instead of two.

This was a big change - many people were furious at the 'waste' of fabric. Indeed, it was so different that some outraged Parisiennes physically attacked a woman wearing the clothes. 

There's a common idea...

There's a pretty common narrative that Google & Facebook have a lot of control of the internet, in that they choose where you go and what you see. While this is true in an obvious sense, it also misses something important: Google and Facebook don't have fundamental control over what's actually in your search results or your news feed.

This is pretty clear for Google - it doesn't control what you search for. It does decide what results you get, but that decision is also in some sense out of its hands, because it has to give you the best results it can. So while Google is always making decisions around search, and those can create or uncreate companies, they are in essence technical, mechanistic judgements (or ought to be, at any rate), and not editorial ones. Google search is and has to be a mirror of the internet - both the content of the internet and human behaviour on the internet. It's useful to compare it with an index fund, which doesn't have opinions about individual stocks but only makes technical, mechanistic adjustments to how well its holdings reflect the index: so too, Google can only adjust how well it reflects the internet. That's a very partial kind of control. 

While this is explicit for Google, it's implicit for Facebook. You tell Google explicitly what you want and you don't think you tell Facebook, but actually you've spent months and years telling it, through everything you've interacted with or ignored. Facebook makes technical,...

In 2004, ten years after Netscape launched, Tim O'Reilly launched the 'Web 2.0' conference, proposing (or branding) a generational shift in how the web worked. There were lots of trends, and none of them really started in 2004, but to me, looking back, the key thing was that people said 'if we forget about dial-up and forget about supporting old and buggy web browsers, and presume that lots of people are online and have got used to this stuff now, what can we build now that we couldn't build before?'

Not everyone had broadband and not everyone had a new computer with a modern browser, but enough people did that you could think about setting aside the constraints of a 14.4k modem and a table-based static web page and start building something new. And enough people were online, and knew lots of other people that were too, for social models to start working. Flickr had no less than 1.5m users when Yahoo bought it in 2005, which seemed like a lot at the time. 

Today, ten years after the iPhone launched, I have some of the same sense of early constraints and assumptions being abandoned and new models emerging. If in 2004 we had 'Web 2.0', now there's a lot of 'Mobile 2.0' around. If Web 2.0 said 'lots of people have broadband and modern browsers now', Mobile 2.0 says 'there are a billion people with high-end smartphones now'*. So, what assumptions are being left behind? What do you do differently if you assume not...

This is a photo of my grandfather, Will Jenkins. It was taken in 1909, when he was 13. He made the glider himself and took it to Cape Henry, about 17 miles by trolley from Norfolk, where his first flight took him eight feet, and his last that day took him 40 feet and broke one of his uprights. They made 13-year-olds differently then, I think. 

He built the glider, incidentally, with a gift of $5 sent to him by an American Civil War veteran after a school essay he'd written about Robert E. Lee was published in the local paper.  The war, after all, had ended only 44 years earlier. 

In 1946, by which time he'd become a notable writer of science fiction, he published a...

When I moved to Silicon Valley from London, in 2014, I bought a second-hand German car from 2009. The dashboard reminds me very much of using a Nokia in 2000 - it's perfect, and clear, and easy to understand, and there's no software at all. There are features, some of which are shown on a monochrome screen, and powered by firmware, but no software.

Then, a few weeks ago, it needed to be serviced and the dealer lent me a brand new top-of-the-line version of the same model. This one was like using a Nokia from 2007 - they've added all the smart stuff, badly. There are so many buttons that even the buttons have buttons, and though each particular feature makes sense on its own, and might even be implemented quite well, when they're all added together the effect is absurd. 

My new favorite site on the internet shows this extremely well, if unintentionally. 'My Car Does What?' is an attempt by the car industry to educate the public about the safety features that have been added to their cars over the past decade or so (I saw it advertised on a video screen at a gas pump). Unfortunately, what it really shows is that a proliferation of features has overwhelmed the 'job to be done'. The job is to stop the car crashing (or rather, stop the user from crashing the car), but the implementation is 'give the user 37 different icons on their dashboard'. Indeed, it's not...

As we pass 2.5bn smartphones on earth and head towards 5bn, and mobile moves from creation to deployment, the questions change. What's the state of the smartphone, machine learning and 'GAFA', and what can we build as we stand on the shoulders of giants?

Slides embedded above - video version with talk track below. 

Mobile means that, for the first time, pretty much everyone on earth will have a camera, taking vastly more images than were ever taken on film ('How many pictures?'). This feels like a profound change on a par with, say, the transistor radio making music ubiquitous.

Then, the image sensor in a phone is more than just a camera that takes pictures - it’s also part of new ways of thinking about mobile UIs and services ('Imaging, Snapchat and mobile'), and part of a general shift in what a computer can do ('From mobile first to mobile native'). 

Meanwhile, image sensors are part of a flood of cheap commodity components coming out of the smartphone supply chain, that enable all kinds of other connected devices - everything from the Amazon Echo and Google Home to an August door lock or Snapchat Spectacles (and of course a botnet of hacked IoT devices). When combined with cloud services and, increasingly, machine learning, these are no longer just cameras or microphones but new endpoints or distribution for services - they’re unbundled pieces of apps. ('Echo, interfaces and friction') This process is only just beginning - it now seems that some machine learning use cases can be embedded into very small and cheap devices. You might train an ‘is there a person in this image?’ neural network in the cloud with a vast image set - but to run it, you can put it on a cheap DSP with a...
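To make that last point concrete, here is a minimal sketch of the train-in-the-cloud, run-on-the-device pattern, assuming TensorFlow and its Lite converter; the model shape, dataset and file names are illustrative, not anything from the post:

```python
# A minimal sketch, assuming TensorFlow: train a tiny binary
# 'is there a person in this image?' classifier in the cloud,
# then quantise it so it can run on a cheap, low-power chip.
import tensorflow as tf

# Tiny convolutional net - small enough, once quantised, to fit
# in the memory of a microcontroller-class device.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 1)),        # low-res greyscale input
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # person / no person
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Train in the cloud on a vast labelled image set (placeholder here):
# model.fit(train_images, train_labels, epochs=10)

# Convert and quantise for the edge: weights shrink to 8-bit
# integers, so inference needs no GPU and very little RAM.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("person_detector.tflite", "wb") as f:
    f.write(converter.convert())
```

The expensive part (training on a vast image set) happens once, in the cloud; the cheap part (running the frozen, quantised network) is what ships on the device.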

A couple of years ago internet companies moved from having a mobile team and a mobile strategy to what they called ‘mobile first’. Instead of building a product and deciding how and if it would work on mobile, new things are built for mobile by default, and don’t necessarily make their way back to the desktop. 

Now, though, I think we can see an evolution beyond ‘mobile first’. What happens if you just forget about the PC altogether? But also, what happens if you forget about featurephones? What happens if you presume all of the sophistication that a modern smartphone has and a PC does not, and if you also presume that, with 650m iPhones in use and 2.5bn smartphones in total, you can build a big company without thinking about the low end anymore?

There are a couple of building blocks to think about here. 

  • There is the image sensor as a primary input method, not just a way to take photographs, especially paired with touch. That image sensor is now generally the best ‘camera’ most people have ever owned, in absolute image quality, and is also presumed to be good for capture more or less anywhere 
  • There’s the presumption of a context that makes sound OK, both for listening and talking to your device - we’re not in an open-plan office anymore. 
  • There’s bandwidth (either LTE or wifi, which is half of smartphone use) that makes autoplaying video - indeed, video that might not even have a ‘play’ button...

Mobile phones and then smartphones have been swallowing other products for a long time - everything from clocks to cameras to music players has been turned from hardware into an app. But that process also runs in reverse sometimes - you take part of a smartphone, wrap it in plastic and sell it as a new thing. This happened first in a very simple way, with companies riding on the smartphone supply chain to create new kinds of product with the components it produced, most obviously the GoPro. Now, though, there are a few more threads to think about. 

First, sometimes we're unbundling not just components but apps, and especially pieces of apps. We take an input or an output from an app on a phone and move it to a new context. So where a GoPro is an alternative to the smartphone camera, an Amazon Echo is taking a piece of the Amazon app and putting it next to you as you do the laundry. In doing so, it changes the context but also changes the friction. You could put down the laundry, find your phone, tap on the Amazon app and search for Tide, but then you’re doing the computer’s work for it - you’re going through a bunch of intermediate steps that have nothing to do with your need. Using Alexa, you effectively have a deep link directly to the task you want, with none of the friction or busywork of getting there. 
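As a toy sketch of that 'deep link' idea (this is not the real Alexa SDK; every name here is hypothetical): one utterance routes straight to one task, with none of the navigation a phone app would require.

```python
# Hypothetical toy, not the Alexa Skills Kit: the point is that a
# single utterance maps directly to a single task, with no
# intermediate steps of finding, opening and navigating an app.

def add_to_basket(item: str) -> str:
    # A real voice endpoint would call the ecommerce platform's API here.
    return f"OK, I've added {item} to your basket."

def handle_utterance(utterance: str) -> str:
    """Route a spoken request directly to the task it names."""
    if utterance.lower().startswith("reorder "):
        return add_to_basket(utterance[len("reorder "):])
    return "Sorry, I didn't catch that."

print(handle_utterance("Reorder Tide"))
# -> OK, I've added Tide to your basket.
```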

Next, and again removing friction,...

For the first time, pretty much everyone on earth is going to have a camera. Over 5bn people will have a mobile phone; almost all of those phones will be smartphones, and almost all will have cameras. Far more people will be taking far more photos than ever before - even today, maybe 50-100 times more photos are taken each year than were taken on film. 
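A back-of-envelope version of that 50-100x claim, with loudly illustrative inputs (the ~80bn/year figure is the one often cited for film's peak around 2000; the smartphone-side numbers are assumptions, not data from the post):

```python
# Illustrative arithmetic only - the inputs are assumptions.
peak_film_photos_per_year = 80e9   # often-cited figure for film's peak

smartphone_users = 3e9             # assumed: people actively taking photos
photos_per_user_per_day = 4        # assumed: a handful of shots a day

digital_photos_per_year = smartphone_users * photos_per_user_per_day * 365
print(f"{digital_photos_per_year / peak_film_photos_per_year:.0f}x film's peak")
# -> roughly 55x; nudge the assumptions and you land anywhere in the 50-100x range
```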

Talking about 'cameras' taking 'photos', though, is a pretty narrow way to think about this - rather like calling those internet-connected pocket supercomputers 'phones'. Yes, the sensor can capture something that looks like the prints you got with a 35mm camera, or that looks like the footage a video camera could take. And, yes, it's easier to show those images to your friends on the internet than by post, and easier to edit or crop them, or adjust the colours, so it's a better camera. But what else? Terms like camera or photo, like phone, are inherently limiting - they specify one particular use for underlying technology that can do many things. Using a smartphone camera just to take and send photos is a little like using Word for memos that you used to create on a typewriter - you're using a new tool to fit into old forms. Pretty soon you work out that new forms are possible. 

So, you break up your assumptions about the models that you have to follow. You don't have to save the photos - they can disappear. You're not paying to...

People in tech and media have been saying that ‘content is king’ for a long time - perhaps since the VHS/Betamax battle of the early 1980s, and perhaps longer. Content and access to content was a strategic lever for technology. I’m not sure how much this is still true.  Music and books don’t matter much to tech anymore, and TV probably won’t matter much either. 

Most obviously, subscription streaming has more or less ended the strategic importance of music to tech companies. In the past, any music you bought for your iPod had DRM and could only be played on Apple devices, and the same was true in reverse for music from any other service. Even if you’d just encoded your own CDs (or downloaded pirated tracks, but in either case without DRM), physically transferring them to a different device with different software was a barrier. Your music library kept you on a device. With streaming these issues mostly go away. All the major services are cross-device (even Apple’s), and if you do switch to a different service you’re not giving up tracks you’ve paid money for, just a list of your favourites. Switching became easy. 

Since music no longer stops people from switching between platforms, it’s gone from being a moat (especially for Apple, the one platform company that actually had a strong position) to a low-margin check-box feature. That doesn’t mean that these services are exactly commodities - each builds its own recommendation tools, some experiment with routes to...

There's a pretty common argument in tech that though of course there are billions more smartphones than PCs, and will be many more still, smartphones are not really the next computing platform, just a computing platform, because smartphones (and the tablets that derive from them) are only used for consumption where PCs are used for creation. You might look at your smartphone a lot, but once you need to create, you'll go back to a PC. 

There are two pretty basic problems with this line of thinking. First, the idea that you cannot create on a smartphone or tablet assumes both that the software on the new device doesn't change and that the nature of the work won't change. Neither are good assumptions. You begin by making the new tool fit the old way of working, but then the tool changes how you work. More importantly though, I think the whole idea that people create on PCs today, with today's tools and tasks, is flawed, and, so, I think, is the idea that people aren't already creating on mobile. It's the other way around. People don't create on PCs - they create on mobile. 

There are around 1.5bn PCs on earth today (using the term 'PC' in the broad sense covering Wintel, Mac and Linux). Maybe as many as 100m PCs are being used for some kind of embedded product: elevators, points of sale, ATMs, machine tools, security systems etc. Setting those aside, the rest are split roughly evenly between corporate and consumer, and many...

There’s a story told of the theoretical physicist Wolfgang Pauli that a friend showed him the paper of a young physicist that he suspected was not very good but on which he wanted Pauli's views. Pauli remarked sadly, "It is not even wrong". For a theory even to be wrong, it must be predictive and testable and falsifiable. If it cannot be falsified - if it does not make some prediction that could in theory be tested and proven false - then it does not count as science. 

I've always liked this quote in its own right, but it's also very relevant to talking about new technology and the way that people tend to dismiss and defend it. For as long as people have been creating technology, people have been saying it'll never amount to anything. As we create more and more - as 'software eats the world' - the urge to dismiss seems only to get stronger, and so does the urge to defend. However, these conversations tend to follow a fairly predictable sequence, and quickly become unhelpful:

  1. That’s just a toy
  2. Successful things often started out looking like toys
  3. That’s just survivor bias - this one really is a toy
  4. You can't know that
  5. So tech is just a lottery?

The problem with both of these lines of argument is that they have no predictive value. It is unquestionably true that many of the most important technology advances looked like toys at first - the web, mobile phones, PCs, aircraft, cars and even hot and cold...

Now that mobile is maturing and its growth is slowing, everyone in tech turns to thinking about what the Next Big Thing will be. It's easy to say that 'machine learning is the new mobile' (and everyone does), but there are other things going on too. 

On one hand, we have a set of profound changes coming as a result of new primary technology. Electric and autonomous cars will change cities, virtual and mixed reality will change the entire computing experience, and machine learning is changing the kind of questions that computers can answer. But each of these is also just beginning, especially relative to their potential - they are at the bottom of the S-Curve where smartphones are now getting towards the top. On the other hand, I think we can see a set of changes that come not so much from any new technology as from shifts in consumer behaviour and operating economics. These changes are potentially just as big, and might be starting sooner.  

Electric and autonomous cars are just beginning - electric is happening now but will take time to grow, and autonomy is 5-10 years away from the first real launches. As they happen, each of these destabilises the car industry, changing what it means to make or own a car, and what it means to drive. Gasoline is half of global oil demand and car accidents kill 1.25m people a year, and each of those could go away. But as I explored here, that's...

In February 2006, Jeff Han gave a demo of an experimental 'multitouch' interface, as a 'TED' talk. I've embedded the video below. Watching this today, the things he shows seem pretty banal - every $50 Android phone does this! - and yet the audience, mostly relatively sophisticated and tech-focused people, gasps and applauds. What is banal now was amazing then. And a year later, Apple unveiled the iPhone and the tech industry was reset to zero around multitouch. 

Looking back at this a decade later, there were really four launches for multitouch. There was a point at which multitouch became an interesting concept in research labs, a point at which the first demos of what this might actually do started appearing in public, a point at which the first really viable consumer product appeared in the iPhone, and then, several years later, a point at which sales really started exploding, as the iPhone evolved and Android followed it. You can see some of that lag in the chart below - it took several years after the 2007 launch of the iPhone for sales to take off (even after the pricing model changed). Most revolutionary technologies emerge like this, in stages - it's rare for anything to spring into life fully formed. And in the meantime, there were parallel tracks that turned out to be the wrong approach - both Symbian in the west and iMode et al in Japan.