This week a Harvard Business School student challenged me to name a startup capable of producing an intelligent robot – TODAY! At first, I did not understand the question, as artificial intelligence (AI) is a tool like any other in a roboticist’s toolbox. The student persisted, demanding to know whether I thought the co-bots working in factories today could one day evolve to perceive the world as humans do. It was a better question than I appreciated at the time: even with deep learning systems, robots are best deployed for specific, repeatable tasks. Mortals, by contrast, comprehend their surroundings (and other organisms) using a sixth sense: intuition.
As an avid tennis player, I also enjoyed meeting Tennibot this week. The autonomous ball-gathering robot sweeps the court like a Roomba sucking up dust off a rug. To accomplish this task without knocking over players, it navigates around the cage using six cameras on each side. This is a perfect example of the type of job an unmanned system excels at, freeing athletes from wasting precious court time on tedious cleanup. Yet, at the end of the day, Tennibot is a dumb appliance. While it gobbles up balls quicker than any person, it cannot discern the quality of the game or the health of the players.
No one expects Tennibot to save Roger Federer’s life, but what happens when a person has a heart attack inside a self-driving car on a two-hour journey? While autonomous vehicles are packed with sensors to identify obstacles and safely steer through cities and highways, few are able to perceive human intent. As Ann Cheng of Hyundai explains, “We [drivers] think about what that other person is doing or has the intent to do. We see a lot of AI companies working on more classical problems, like object detection [or] object classification. Perceptive is trying to go one layer deeper—what we do intuitively already.” Hyundai joined Jim Adler’s Toyota AI Ventures this month in investing in Perceptive Automata, an “intuitive self-driving system that is able to recognize, understand, and predict human behavior.”
As stated in Adler’s Medium post, Perceptive’s technology uses “behavioral science techniques to characterize the way human drivers understand the state-of-mind of other humans and then train deep learning models to acquire that human ability. These deep learning models are designed for integration into autonomous driving stacks and next-generation driver assistance systems, sandwiched between the perception and planning layers. These deep learning, predictive models provide real-time information on the intention, awareness, and other state-of-mind attributes of pedestrians, cyclists and other motorists.”
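To make that architecture concrete, the toy sketch below shows where such a state-of-mind layer could sit in a driving stack, consuming detections from the perception layer and feeding intent estimates to the planner. Every class, function, and threshold here is invented for illustration; it is not Perceptive Automata’s model or API.

```python
# Hypothetical sketch of a state-of-mind stage sandwiched between perception
# (detections) and planning in an autonomous-driving stack. All names and
# numbers are illustrative assumptions, not a real product's interface.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    track_id: int
    kind: str                    # "pedestrian", "cyclist", "vehicle"
    position: Tuple[float, float]  # (x, y) in the ego frame, meters
    velocity: Tuple[float, float]  # (vx, vy) in meters/second

@dataclass
class StateOfMind:
    track_id: int
    intends_to_cross: float      # probability the agent will enter the lane
    aware_of_vehicle: float      # probability the agent has noticed the car

def estimate_state_of_mind(detections: List[Detection]) -> List[StateOfMind]:
    """Stand-in for a learned model; here a crude heuristic on heading."""
    estimates = []
    for d in detections:
        if d.kind in ("pedestrian", "cyclist"):
            moving_toward_lane = d.velocity[1] < -0.2   # toy assumption
            estimates.append(StateOfMind(
                track_id=d.track_id,
                intends_to_cross=0.8 if moving_toward_lane else 0.1,
                aware_of_vehicle=0.5,
            ))
    return estimates

def plan_speed(base_speed: float, minds: List[StateOfMind]) -> float:
    """Planner consumes the state-of-mind estimates and slows down early."""
    risk = max((m.intends_to_cross * (1.0 - m.aware_of_vehicle)
                for m in minds), default=0.0)
    return base_speed * (1.0 - 0.7 * risk)

detections = [Detection(1, "pedestrian", (12.0, 3.0), (0.0, -1.2))]
print(plan_speed(13.0, estimate_state_of_mind(detections)))  # slows from 13 m/s
```

The point of the sketch is only the placement: intent estimation adds a layer of meaning on top of raw detections before the planner decides how assertively to drive.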
While Perceptive Automata is creating “predictive models” for the world outside the vehicle, few companies are focused on the conditions inside the cabin. The closest implementations are eye-tracking cameras that alert drivers to distracted driving. While these technologies observe the general condition of passengers, they rely on direct eye contact to distinguish between states such as fatigue, excitability, and stress, which is impossible if an occupant is passed out. Furthermore, none of these vision systems can predict human actions before they become catastrophic.
Isaac Litman, formerly of Mobileye, understands full well the dilemma computer vision systems face in delivering on the promise of autonomous travel. Speaking with Litman this week about his newest venture, Neteera, he declared that in today’s automotive landscape “the only unknown variable is the human.” Unfortunately, the recent wave of Tesla and Uber autopilot crashes has glaringly illustrated the importance of tracking occupant attention when handing off between autopilot systems and human drivers. Litman further explains that Waymo and others are collecting data on occupant comfort, as AI-enabled drivers have reportedly induced high levels of nausea by driving too consistently. Litman calls this the indigestion problem: after eating a big meal, one may want to be driven more slowly than on an empty stomach. In the future, Litman professes, autonomous cars will be marketed “not by the performance of their engines, but on the comfort of their rides.”
Litman’s view is further endorsed by the patent application filed this summer by Apple’s Project Titan team for developing “Comfort Profiles” for autonomous driving. According to AppleInsider, the application “describes how an autonomous driving and navigation system can move through an environment, with motion governed by a number of factors that are set indirectly by the passengers of the vehicle.” The Project Titan system would utilize a fusion of sensors (LIDAR, depth cameras, and infrared) to monitor the occupants’ “eye movements, body posture, gestures, pupil dilation, blinking, body temperature, heartbeat, perspiration, and head position.” The application details how the data would feed into the vehicle systems to automatically adjust acceleration, turning rate, performance, suspension, traction control, and other factors to the personal preferences of the riders. While Project Titan is taking a first step toward an autonomous comfort system, Litman argues that it is limited by the inherent shortcomings of vision-based systems, which are susceptible to light, dust, line of sight, condensation, motion, resolution, and safety concerns.
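As a rough illustration of how such a comfort profile might close the loop, the sketch below maps a few monitored occupant signals to a comfort score that caps the planner’s acceleration limits. The signal names, weights, and thresholds are assumptions made for this example, not details from Apple’s filing.

```python
# Illustrative "comfort profile" loop: occupant signals from fused sensors
# are reduced to a comfort score, which then scales the planner's
# acceleration and cornering limits. All field names, weights, and
# thresholds are invented for this sketch, not taken from Apple's patent.
from dataclasses import dataclass

@dataclass
class OccupantSignals:
    heart_rate_bpm: float
    blink_rate_hz: float
    posture_shift_rate: float    # larger values suggest restlessness or nausea

@dataclass
class DrivingLimits:
    max_accel_mps2: float
    max_lateral_accel_mps2: float

def comfort_score(s: OccupantSignals, resting_hr: float = 65.0) -> float:
    """Return 0.0 (very uncomfortable) to 1.0 (fully comfortable)."""
    hr_penalty = max(0.0, (s.heart_rate_bpm - resting_hr) / 60.0)
    motion_penalty = min(1.0, s.posture_shift_rate)
    return max(0.0, 1.0 - 0.6 * hr_penalty - 0.4 * motion_penalty)

def adjust_limits(base: DrivingLimits, score: float) -> DrivingLimits:
    """Scale driving aggressiveness down as comfort drops, never below half."""
    factor = 0.5 + 0.5 * score
    return DrivingLimits(base.max_accel_mps2 * factor,
                         base.max_lateral_accel_mps2 * factor)

signals = OccupantSignals(heart_rate_bpm=92, blink_rate_hz=0.4,
                          posture_shift_rate=0.3)
print(adjust_limits(DrivingLimits(2.5, 2.0), comfort_score(signals)))
```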
Unlike vision sensors, Neteera’s sensor is a cost-effective micro-radar on a chip that leverages a suite of proprietary algorithms to provide “the first contact-free vital sign detection platform.” Its FDA-level accuracy is being utilized not only by the automotive sector, but also by healthcare systems across the United States to monitor such elusive conditions as sleep apnea and sudden infant death syndrome. To date, the challenge of monitoring vital signs through micro-skin motion in the automotive industry has been the displacement caused by moving vehicles. However, Litman’s team has developed a patent-pending “motion compensation algorithm” that tracks “quasi-periodic signals in the presence of massive random motions,” providing near-perfect accuracy (see tables below).
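Neteera’s patent-pending algorithm is not public, but the general idea of pulling a quasi-periodic vital sign out of a displacement signal dominated by bulk motion can be sketched with a simple detrend-plus-band-pass pipeline, as below. The sample rate, signal amplitudes, and filter band are assumptions chosen only to make the toy example run.

```python
# Generic illustration (not Neteera's proprietary method) of recovering a
# quasi-periodic vital-sign component from a radar displacement signal that
# is swamped by large, slow vehicle/body motion. Here "compensation" is just
# detrending plus a band-pass around typical respiration frequencies.
import numpy as np
from scipy.signal import butter, filtfilt, detrend

fs = 100.0                        # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)      # 30 seconds of data

respiration = 0.5e-3 * np.sin(2 * np.pi * 0.25 * t)    # ~15 breaths/min, 0.5 mm
vehicle_motion = 50e-3 * np.sin(2 * np.pi * 0.02 * t)  # slow 50 mm sway
noise = 0.05e-3 * np.random.randn(t.size)
measured = respiration + vehicle_motion + noise          # what the radar sees

# Remove the slow bulk-motion trend, then isolate the respiration band (0.1-0.5 Hz).
compensated = detrend(measured)
b, a = butter(2, [0.1 / (fs / 2), 0.5 / (fs / 2)], btype="band")
vital_sign = filtfilt(b, a, compensated)

# Estimate breathing rate from the dominant spectral peak.
spectrum = np.abs(np.fft.rfft(vital_sign))
freqs = np.fft.rfftfreq(vital_sign.size, 1 / fs)
print(f"estimated respiration rate: {freqs[np.argmax(spectrum)] * 60:.1f} breaths/min")
```

A simple fixed filter like this would fail under the “massive random motions” of a real cabin, which is precisely the gap the company’s adaptive motion compensation claims to close.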
While the automotive industry races to launch fleets of autonomous vehicles, Litman estimates that the most successful players will be the ones that build empathic engines into their machines’ frameworks. Unlike the crowded field of AI and computer vision startups enabling robocars to safely navigate city streets, Neteera’s “intuition on a chip” may be one of the only mechatronic ventures that actually reports on the physiological and psychological state of drivers and passengers. Litman’s innovation has wider societal implications as social robots begin to augment humans in the workplace and support the infirm and elderly in coping with the fragility of life.
As scientists improve artificial intelligence, it is still unclear how ordinary people will react to such “emotional” robots. In the words of writer Adam Williams, “Emotion is something we reserve for ourselves: depth of feeling is what we use to justify the primacy of human life. If a machine is capable of feeling, that doesn’t make it dangerous in a Terminator-esque fashion, but in the abstract sense of impinging on what we think of as classically human.”