Take a moment to reflect on how you experience the world. Your eyes don't photograph every moment the way a camera does. They notice when something is out of the ordinary: when a shadow shifts, when someone moves, when a light switches on. More often than not, you don't even register the things that stay the same. That's how you can scan a crowded street, spot an approaching vehicle, and react instantly without getting lost in detail.
Compare that to how most machines "see." Cameras feed them frame after frame, dozens per second, whether anything changed or not, and then large neural networks churn through all that data. It's powerful, yes, but it's also cumbersome and inefficient. A robot arm or a pair of smart glasses never needs to reprocess an entire scene when only one thing moved. Worse, those systems need bulky hardware and bulky batteries to keep up. For devices that need to be wearable, light, or fast-moving, that's a big problem.
That's where it gets exciting. There's a new breed of camera, the event-based camera, that works much like your eyes. Instead of capturing full frames, each pixel emits a signal only when it detects a change in brightness. Picture every pixel as a small sentinel that speaks up only when it sees activity. The result is a stream of just what actually changed, delivered at incredible speed. Machines no longer have to sift through oceans of redundant data; they can attend directly to the activity.

Seeing change is one thing, though; understanding it is another. That's where spiking neural networks come in. Instead of processing large blocks of data like conventional neural networks, they communicate in brief pulses, spikes, that fire only when something happens, much like neurons in the human brain. That makes them not only faster but dramatically more power-efficient. Paired with small, low-power neuromorphic chips, this opens the door to vision systems that run directly on wearable devices or small robots, with nothing gargantuan running in the background.
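To make the two ideas concrete, here's a minimal sketch in Python. It's not any real camera's or chip's API: the event generation approximates what an event camera's pixels do in analog hardware by simply differencing two frames, and the neuron is the textbook leaky integrate-and-fire model used in spiking networks. Names like `events_from_frames` and `contrast_threshold` are invented for illustration.

```python
# Sketch of the two ideas above: per-pixel change detection, as in an
# event camera, and a leaky integrate-and-fire neuron, the basic unit
# of a spiking neural network. Illustrative only, not a real device API.

import numpy as np

def events_from_frames(prev, curr, contrast_threshold=0.15):
    """Emit (y, x, polarity) events only where log-brightness changed enough.

    Real event cameras do this in analog circuitry at each pixel; here we
    approximate it by differencing two frames.
    """
    diff = np.log1p(curr.astype(np.float64)) - np.log1p(prev.astype(np.float64))
    ys, xs = np.nonzero(np.abs(diff) > contrast_threshold)
    polarities = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(ys, xs, polarities))

class LIFNeuron:
    """Leaky integrate-and-fire: the membrane potential decays over time,
    accumulates incoming current, and the neuron spikes at a threshold."""

    def __init__(self, decay=0.9, threshold=1.0):
        self.decay = decay
        self.threshold = threshold
        self.potential = 0.0

    def step(self, input_current):
        self.potential = self.decay * self.potential + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # spike
        return False

# Toy demo: a static scene emits no events; a small bright patch that
# appears between two frames emits events, which drive the neuron.
rng = np.random.default_rng(0)
frame_a = rng.uniform(0, 1, size=(8, 8))
frame_b = frame_a.copy()
frame_b[2:4, 2:4] += 0.5  # something moved / lit up

events = events_from_frames(frame_a, frame_b)
neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in events]  # each event injects a little current
print(f"{len(events)} events, neuron fired {sum(spikes)} time(s)")
```

On these toy frames, only the four changed pixels emit events, and the neuron fires once their input accumulates past the threshold; a fully static scene would produce no events and therefore no work at all, which is exactly the efficiency argument above.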
But what would all this mean in everyday life? Imagine a prosthetic hand that senses a glass slipping and adjusts its grip before it falls. Imagine smart glasses for blind users that silently warn of a step ahead or a bike whizzing past. Imagine drones that weave through forests or pick through rubble after a disaster, with the kind of reflexes that otherwise exist only in nature.
The idea, then, isn't necessarily to make AI smarter. It's to make it more natural, more alive: machines that don't just record the world but react to it, immediately, gracefully, without draining their batteries. If we can pull that off, robotics and wearable AI will stop feeling mechanical. They'll feel like partners that respond to the world around them the way all of us do every day.