This essay is an edited version of my TEDx talk from 2015. Everything here is still accurate and relevant.
How many of you like asking tough, creative questions? What if I told you that you are already active in the field of AI?
What if I told you that in the future, all we’ll need to do is ask really tough, wicked questions, and that the answers will always be there?
To understand what I mean by that, we first need to revisit the founding thinkers of AI and IA, namely Minsky, McCarthy, Engelbart, and Kay. Then, we can engage in speculative design thinking about the future and ultimately return with some valuable insights.
But first, let’s look at the basic unit of our relationship with a machine: a machine on one side, a human on the other, and there’s an interaction point—an interface point—that could be your phone, a website, a kiosk.
The AI approach is to make the machine intelligent and bring that intelligence to humans.
John McCarthy, who co-founded the MIT AI Lab with Marvin Minsky in 1959, once said that you can approach AI from the perspective of biology or of computer science: you can imitate the nervous system as far as you understand the nervous system, or imitate human psychology as far as you understand human psychology.
Marvin Minsky, his counterpart at MIT, was the first person to develop a neural network simulator—a system modeled after the human brain, with neural connections—back in 1951.
He once said that for a machine to be intelligent, we have to give it several different kinds of thinking. When it switches from one of those to another, we will say that it is changing emotions. Emotion in itself is not a very profound thing; it is just a switch between different modes of operation.
So to these thinkers (and some thinkers today as well), our entire being, with all of its ambitions and the magic of being alive and human, is a computer problem, quantifiable as buckets of knowledge, equations, and mathematics.
The other school of thought is intelligence augmentation (IA), which posits that tools can be made smarter by integrating intelligence into machines and moving the interface point closer to the human. The tools become smarter and more intuitive, so we have to stretch less to reach the machine.
In this school of thought, the two leading thinkers are Doug Engelbart and Alan Kay.
Doug Engelbart is known for numerous contributions to the field, almost all of which were showcased in a single iconic 1968 presentation, aptly named “the mother of all demos”: the first mouse, windows, teleconferencing, live document editing, basically everything we use daily today.
He once said, “The better we get at getting better, the faster we will get better.” Engelbart’s philosophy was that, if you pair groups of thoughtful people with purposeful tools, you get something bigger than just efficiency improvements.
To complement that, Alan Kay’s work at Xerox PARC was similarly versatile, spanning educational devices, music synthesizers, and software paradigms. Kay was one of the first to think seriously about the user interface, a field that is now one of the most prominent in design.
Alan Kay once said, “Reality is a reconstruction based on our beliefs of the world.” It is that reconstruction that allows for theatrical performance. It was his understanding that the user interface is the theater that advances us.
What makes him stand out is his ability to design tools that weren’t yet possible to build at the time: the Dynabook, a hybrid of a laptop and an iPad, meant as an early-education device running software named Smalltalk, intuitive and easy to use.
He invited Trygve Reenskaug, a Norwegian computer scientist, who conceived the idea of a software architecture that matches user mental models to computer operations, mapping the way we think onto the way the machine operates.
That was something that philosophers and psychologists long debated, but these technologists had the foresight to incorporate it into everyday tools.
On the left, mental models; on the right, computer models; in the middle, controller and view—essentially, the tool. The modern manifestation of Model-View-Controller (MVC) is a system where a database is on the left, a view (such as an app or website) is on the right, and controllers facilitate communication between the two.
The user only interacts with the view.
For example, think of all of Wikipedia’s entries as stored in one massive table (the computer model, or database); wikipedia.org is the view. Editing is done via the controller, which updates the database.
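To make the split concrete, here is a minimal, purely illustrative Python sketch of a toy wiki organized as MVC. The class names and the in-memory dictionary are assumptions for the example, not how Wikipedia is actually built.

```python
# A toy, in-memory illustration of the MVC split described above.
# WikiModel, WikiView, and WikiController are illustrative names.

class WikiModel:
    """The 'massive table' of entries: maps a title to article text."""
    def __init__(self):
        self.entries = {"MVC": "A pattern separating data, presentation, and control."}

    def get(self, title):
        return self.entries.get(title, "(no such entry)")

    def update(self, title, text):
        self.entries[title] = text


class WikiView:
    """What the user actually sees: a rendered page."""
    def render(self, title, text):
        return f"== {title} ==\n{text}"


class WikiController:
    """Mediates between user actions and the model, then refreshes the view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def show(self, title):
        return self.view.render(title, self.model.get(title))

    def edit(self, title, text):
        self.model.update(title, text)  # the controller updates the database
        return self.show(title)         # the user only ever sees the view


controller = WikiController(WikiModel(), WikiView())
print(controller.show("MVC"))
print(controller.edit("MVC", "Model holds the data, the view presents it, the controller connects them."))
```

The point of the pattern is the one Reenskaug had in mind: the model stays close to how the machine stores things, the view stays close to how the user thinks, and the controller translates between the two.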
MVC was popular, but it really took off around 1993, when the internet opened up indexable systems, the foundation of the early internet’s popularity.
Now think about all the sites in the world, their knowledge and their utilities: they all fit this structure, stationary databases and proprietary interface points waiting to be accessed. Over the past couple of hundred years we entered the Information Age, marked by new communication tools, printing, and other innovations; we transitioned from hand tools to factories, and then from a material economy to a data economy.
Are we too efficient now? What’s next, with all this quick movement of data?
Consider a scenario: I want to start a business, a system, a tool. What should I ask myself? Is it faster, better-looking? What data am I protecting? Am I being wasteful by copying data?
We are producing far more data, far faster than ever before; soon there won’t be time for all of it to reach a database or the cloud. Phones and wearables produce data constantly, and some of it may never make it online. We will have to do something with that offline data.
In the future, interface points bigger than our phones will be everywhere. Utilities will come to us instead of us having to find them.
We’ll need to build tools for data that is not online. Utilities will travel instead of data. Currently, we upload data from our devices, but in the future, utilities will locate us.
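As a rough illustration of “utilities travel instead of data,” here is a small Python sketch. The device readings and the summarize utility are hypothetical stand-ins; a real system for shipping utilities to devices would be far more involved.

```python
# A sketch of "utilities travel instead of data": rather than uploading raw
# readings to the cloud, a small function is shipped to the device and only
# its result leaves. The readings and the summarize utility are made up.

def run_on_device(utility, readings):
    """Execute a utility where the data lives and return only the result."""
    return utility(readings)


def summarize(readings):
    """The traveling utility: reduce raw samples to a tiny summary."""
    return {
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }


# Data produced by a wearable that may never reach the cloud,
# e.g. heart-rate samples collected while offline.
device_readings = [72, 75, 71, 90, 88, 69]

summary = run_on_device(summarize, device_readings)
print(summary)  # only this small result would ever need to go online
```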
Lastly, data and interfaces will merge. Right now, there’s a clear boundary between app, website, database—like a form and stack of papers—but in the future, that’ll change.
Data is cheap; tools create themselves.
Kevin Kelly, a co-founder of Wired, said that cognition should soon be as accessible as AWS or electricity: we could plug cognition right into any system.
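As a toy sketch of what “plugging cognition into any system” might look like, assuming a hypothetical hosted cognition service (nothing here is a real API):

```python
# A toy sketch of cognition consumed like a utility. The cognition()
# function is a placeholder for a hypothetical hosted intelligence
# service; it is not a real API.

def cognition(prompt: str) -> str:
    """Stand-in for a hosted model, called the way an app today
    calls a cloud database or a payment service."""
    return f"[answer to: {prompt}]"


def smart_search(query: str) -> str:
    # An ordinary system becomes "intelligent" by plugging cognition in
    # at the point where it used to return raw data.
    return cognition(f"Summarize the best answer to: {query}")


print(smart_search("What should I ask next?"))
```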
Endless knowledge, but not necessarily endless utility—if you had a blank piece of paper with all the answers, what question would you ask?
I see the future as a convergence of humans and machines, communicating through data and intelligence, rendering tools of purpose and function.
The future of universal interfaces will be the question. What would we ask if everything is possible? What would you ask if everything is known?
You can watch the talk below.
Other talks, essays, and recommended books are at https://library.in-process.net/everything-will-happen