Enter Project Soli from Google. At its core, it's a fingernail-sized radar chip paired with a set of algorithms that interpret the data the sensor array feeds back to a connected device. Those algorithms analyze the fine-grained motions of your hands and fingers: slide your thumb against your forefinger, and the device knows what you're trying to do, because the embedded radar chip processes its readings in real time and uses those algorithms to work out exactly what your fingers are doing.
It's a tiny chip, one I hope will soon be remarkably easy to add to nearly any device: inside the frame of a VR helmet, the bezel of a smartwatch, the chassis of your phone. It can detect movement of less than a millimeter; hold your hand as still as possible, and it still sees huge motion. 3,000 times a second, it collects information about where your hand and fingers are and what they're doing. It only cares about motion. And through Google's machine-learning algorithms, it's beginning to see gestures in what it captures. More importantly, it can recognize the tiniest, most specific movements and translate them into commands for your gadgets.
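To make that pipeline concrete, here is a toy sketch of the idea: a stream of radar frames is reduced to frame-to-frame motion energy, and a threshold separates a moving hand from a still one. This is purely illustrative; Google's actual system uses trained machine-learning models on real radar returns, and every name and number below (the frame format, the threshold, the `classify` helper) is an assumption of mine, not Soli's API.

```python
import math
import random

# Hypothetical constant: Soli reportedly samples about 3,000 times a second.
SAMPLE_RATE_HZ = 3000

def motion_energy(prev_frame, frame):
    """Mean squared difference between two consecutive radar frames."""
    return sum((a - b) ** 2 for a, b in zip(prev_frame, frame)) / len(frame)

def classify(frames, threshold=0.01):
    """Crude stand-in for Soli's ML stage: label the stream 'gesture'
    when average frame-to-frame motion energy exceeds a threshold."""
    energies = [motion_energy(a, b) for a, b in zip(frames, frames[1:])]
    return "gesture" if sum(energies) / len(energies) > threshold else "still"

# Simulated data: a still hand yields near-identical frames (tiny sensor noise),
# while a thumb slide shifts the returned signal from frame to frame.
random.seed(0)
still = [[0.5 + random.gauss(0, 0.001) for _ in range(8)] for _ in range(10)]
slide = [[math.sin(0.4 * t + i) for i in range(8)] for t in range(10)]

print(classify(still))  # still
print(classify(slide))  # gesture
```

A real pipeline would replace the threshold with a classifier trained on labeled micro-gestures, but the shape is the same: high-rate frames in, a gesture label out.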
This is one of those things that could mark a real leap in how consumer electronics are designed and how they evolve. It reminds me of my Minority Report and Tony Stark fantasies. Didn't I tell you I live in the future?