The Third Wave



I think it’s time to talk about the way we communicate with our computers. To do that, we need to look at the ways we have communicated in the past. Setting aside punch cards and switches, I think it’s fair to look at the technology that most people recognize as input devices. Then I want to look at what’s next – how will we interact with a computer in ten years? I believe this is something that’s starting to take shape right now – and I like to call it the Third Wave.

See what I did there? Wave? I know – brilliant – but I’ll move on. Before I get to the third wave, though, let me outline what I feel defines the first two.

The First Wave

The first wave was the device we talked to that was attached to the device we wanted to talk to – what we once knew as wired devices, though most of these are now wireless. These include keyboards, mice, and less obvious input devices like scanners, barcode readers, and so on. They are external to the thing you’re working with (the screen, the CPU), and they take data from the outside world and bring it into the computer’s world, usually with a bit of translation.

The keyboard is sometimes considered indispensable (I’m sure there are more than a few people over at BlackBerry who wish they’d thought that one through), and the reason is worth exploring. We falsely treat written language as sacrosanct. There’s a great comic (which I can’t find right now) that shows a guy holding a scroll, looking at a pile of books and saying, “Sure, those are great, but people will always use scrolls.” It might be scary to think of the written word as itself an input device, itself an abstraction, but what is more human – reading a story or hearing it? What if the written word were not as sacred – at least when communicating with your computer – as we thought? What if we saw it for what it really is – a secondary input required by technical limitations?

What if the computer’s endpoint is a human interaction?

If it is, then just as we don’t communicate with each other using a keyboard, so would we just talk, gesture, and even emote to our computer. If you can think of the computer that way, you can see all of these peripherals as secondary, dated, and eventually redundant. If you can’t, then the first wave may never end.

The Second Wave

If you don’t believe the first wave is doomed to extinction, you might not have been around technology for the last few years, because the second wave is us talking directly to the device. This is Siri, the touch screen, and even the camera. Think how substantial the difference is between attaching a flatbed scanner and running it, and simply taking a picture of a document with your built-in camera. That’s the second wave.

The most important piece of the second wave, so far, has been the touch screen. Want to zoom an object? Touch the object. Want to open a program? Tap its icon on the screen itself. This is momentous because we have removed yet another layer that stands between us and the thing we’re talking to, the computer. Of course, it could be said that the most momentous aspect of the second wave is the voice-activated assistant – Siri, Vlingo, etc. While not yet pitch-perfect – and certainly limited by the need to process speech on remote servers rather than on the device itself – this is a truly human interaction: speech. And even if you truly believe humans will always communicate through written language, you have to admit that when you want to set your alarm, it’s easier to ask Siri than to open an app, choose the time, and turn the alarm on. In fact, Siri has done something that turns back the clock on technological interaction to the time when you could talk to your butler and literally just ask them to do whatever you want. (Remember that time before Obama destroyed the economy by being elected after it crashed?)

The second wave is here – just tap your phone or tablet and wait for that little beep…

The Third Wave

The third wave, however, doesn’t need a beep. I’m going to start with an interaction that I’ve just said is the crown jewel of the second wave – voice automation. But I’m going to take away the little beep. I’m going to take away the tap and maybe even the tablet. The third wave is when you don’t even need the CPU to talk to the CPU. This is the truest natural state of computing. There may be a screen to express data, just as there may be speakers. But why should you have to interact directly with these objects? We love our tablets and smartphones because they are always with us. But they aren’t always with us. We have to bring them with us. The things that are always with us are:

1. Our Hands, Arms, Bodies
2. Our Voices
3. Our Thoughts

Let’s just say it: someday we should be able to communicate with our technology using brain waves. It sounds over-the-top, but let’s not forget, we already have cat ears we can control with our minds. Until then, we’re left with our hands and our voices. The third wave of interaction arrives when we can, with no device in our hands and no peripheral at our fingertips, communicate with and make demands of our technology.

This leads us to the Leap Motion Controller. For those looking for Minority Report–style screen interaction – yes, we have that, now. But not just that: I believe the next big thing in computing is ubiquitous control without peripherals, and a built-in Leap (or Kinect) sensor that can track your gestures throughout a room is essentially all you’d need for a computer with no inputs. Sure, you might point out that the Leap is itself an input – but the difference is a matter of where you are. Sitting at your desk? Sure. Walking to the desk? Sure. Lying in bed? Sure. The difference is that the computer isn’t with you. The tablet isn’t with you.
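To make that idea a little more concrete, here’s a minimal sketch of what happens once a sensor like the Leap hands your software a stream of palm positions. This is not the actual Leap API – the function, the coordinate convention, and the 80 mm threshold are all illustrative assumptions – it just shows how a raw trajectory of hand positions might be turned into a command:

```python
# Hypothetical sketch: a sensor reports the palm's (x, y, z) position
# each frame; a third-wave interface must turn that raw stream into
# commands. Threshold and names are made up for illustration.

def classify_swipe(positions, min_distance=80.0):
    """Classify a series of (x, y, z) palm positions as a swipe.

    Returns 'swipe-right' or 'swipe-left' if the hand travelled far
    enough along the x axis between the first and last frame,
    otherwise None.
    """
    if len(positions) < 2:
        return None
    dx = positions[-1][0] - positions[0][0]  # net horizontal travel
    if dx >= min_distance:
        return "swipe-right"
    if dx <= -min_distance:
        return "swipe-left"
    return None

# A hand drifting 120 mm to the right across several frames:
frames = [(0, 200, 50), (40, 202, 48), (90, 199, 51), (120, 201, 49)]
print(classify_swipe(frames))  # swipe-right
```

A real implementation would of course smooth the data, handle multiple hands, and recognize far richer gestures – the point is only that once the sensor is ambient, the “input device” reduces to code like this watching you move.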

As a Leap developer, I eagerly await my own third-wave device. But be warned, be enlightened – you heard it here first. This is the future.
