Next big thing for Smartphones: Perceptual Computing And Vision Processing

What does a world look like when our device can see and hear us?


Over the last seventy years, computers aimed to augment human thinking and memory. However, the new wave of computers seeks to heighten our sensory experiences—improving sight, hearing, touch, smell, and taste.

This evolution is encapsulated by the rising concept of spatial computing, which includes augmented and virtual reality. Yet, I perceive an even wider shift: the advent of sensory computing. Initially, our phones offered glimpses into this realm, but now, we’re unlocking its full potential.

Our trajectory leads us to a suite of technologies that will not only broaden but also customize our individual sensory worlds.

With perceptual computing and vision processing, a smartphone can see and understand its environment, becoming a device that adapts to its surroundings as readily as we do.

Project Tango is Google ATAP's (Advanced Technology and Projects) attempt to give mobile devices a human-like understanding of space and motion.

Computer vision is a rich field of academic, commercial and industrial research, the implications of which extend tendrils into virtually every aspect of our lives. Anticipating needs before they arise, and understanding needs that a person might not even be aware they have, make up the broader blue sky opportunities for a tech like this.

Perceptual computing encompasses the whole field of analyzing data captured by sensors and imaging systems such as cameras and infrared light. Computer vision is one component of that field: it allows a device to 'see' the area around it much as a human does, or better. Getting 3D sensing onto phones, along with spatial and topographic contextual awareness, is table stakes for the next generation of devices. Take Project Ara, for example: a modular smartphone that lets you snap together a handset from swappable components in custom combinations, almost like LEGO. The advantage of this design is that you could upgrade individual components, such as the camera (if you want better computer vision) or the processor, as they become obsolete, and easily swap them out if they became problematic.
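To make "3D sensing" concrete, here is a minimal sketch of the standard pinhole-camera math that turns a depth image into a 3D point cloud, the basic building block behind this kind of spatial awareness. The intrinsic parameters (`fx`, `fy`, `cx`, `cy`) and the tiny depth map are illustrative values, not those of any real phone sensor.

```python
# Sketch: unproject a depth map into a 3D point cloud with the pinhole
# camera model. Intrinsics here are illustrative, not from a real device.

def unproject(depth, fx, fy, cx, cy):
    """Convert a 2D grid of depth values (meters) to (x, y, z) points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:  # zero means no depth reading at this pixel
                continue
            # Back-project pixel (u, v) at depth z into camera coordinates.
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny synthetic 2x2 depth map: a flat surface one meter away.
cloud = unproject([[1.0, 1.0], [1.0, 1.0]], fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Real depth sensors produce hundreds of thousands of such points per second; everything a phone does with spatial data (mapping, object tracking, measurement) starts from clouds like this.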



This is technology that allows a phone to perform motion detection and tracking, depth mapping, and the recording and interpretation of spatial and motion data in 3D space. It could be used for a variety of things, but one prime example is indoor mapping. Imagine reading not just the size and shape of a room, but also all of the items in it, treating each as a discrete object.
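As a hedged sketch of what "treating items as discrete objects" could mean: given a top-down occupancy grid of a scanned room (1 = an obstacle cell, 0 = free floor), adjacent occupied cells can be grouped into separate objects with a simple flood fill. The grid values and room layout below are invented for illustration.

```python
# Sketch: group adjacent occupied cells of a room scan into discrete objects
# using flood fill (connected-component labeling). Data is illustrative.

def find_objects(grid):
    """Return a list of objects, each a set of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    seen, objects = set(), []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and (r, c) not in seen:
                # Flood-fill this connected region of occupied cells.
                stack, cells = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    cells.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == 1:
                            stack.append((ny, nx))
                objects.append(cells)
    return objects

# A 4x5 room scan with two separate obstacles (say, a table and a chair).
room = [
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]
found = find_objects(room)  # two distinct objects
```

A real system would work on 3D point clouds rather than a 2D grid, but the idea is the same: cluster nearby measurements, then treat each cluster as one object.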

When you think about it, smartphones haven't changed dramatically in a long time. Sure, they have gotten faster, more powerful, and thinner. They have far better sound, displays, and cameras. But at the end of the day, we're all still using our smartphones the same way we always have: by tapping a glass screen. That's frustrating, because there's a world of other ways we could interact with our devices.

For a moment in time, resisting new platforms can fit into a corporate strategy. Over time, however, companies must keep asking hard questions, reassess the growth of emergent platforms, and avoid making small moves too late. Often these platforms get so big that they turn into unstoppable movements (à la Android). Mobile, as a platform, is so massive that it has incapacitated companies that resisted it (Microsoft) or waited too long (Nokia, BlackBerry), and these emergent platforms could be just as big.

It will be interesting to see how today's companies think about and react to potentially large platform shifts like these, whether as opportunities or as threats, and how they handle the disruptive effects of this new platform.


We are living in a time when technology is riding high and showing little signs of stopping, and underneath this swell of activity are newer, emergent platforms, which could dramatically alter industries and the profits of corporations who dismiss them. Expect to see development come fast and furious on this new frontier in mobile tech of spatial awareness, computer vision, and perceptual computing, and expect every major player to claim a seat at the table.
