While the five senses have been known and pondered for millennia, AI's efforts to emulate them are relatively new.
Hearing, sight, touch, smell, and taste: these were logical starting places when researchers first proposed in the 1960s that machines could mimic humans. At the time, optimistic researchers believed a computer as intelligent as a human could be built within twenty-five years.
Over sixty years later, and despite considerable progress, there are still many opportunities for AI to capitalize on the full depth and range of these complex sensory systems.
Giving AI a charming persona (Siri, Alexa, OK Google) is not new. Speech recognition first emerged in 1952, when Bell Laboratories created "Audrey."
The system recognized only digits spoken by a single voice, but IBM furthered the technology to recognize words in the 1960s. Google's voice recognition effort began in 2001, featuring an English Voice Search system that benefitted from 230 billion words drawn from user searches.
Today we have tools that make creative use of audio recognition, such as Shazam, an app that identifies music and movies, as well as hearing aids that use AI to adjust automatically to surrounding noise levels.
Object recognition requires more than simply connecting to a camera because the computer needs to understand and interpret the raw visual input it is receiving.
Earlier technology, such as face detection from the early 2000s, was based on processing images and assessing pieces of the image in sections.
Tools like machine learning have accelerated these capabilities by allowing computers to review massive amounts of data in order to continually learn and recognize patterns.
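The section-by-section scanning described above can be sketched in a few lines. This is a hypothetical illustration, not any specific detector: it slides a fixed-size window over a grayscale image and scores each patch with a stand-in classifier (here just mean brightness); a real system would use learned features instead.

```python
# Hypothetical sketch of section-by-section image scanning, in the spirit of
# early face detectors: slide a fixed-size window over a 2D grayscale image
# and score each patch. The scoring function is a toy stand-in.

def sliding_windows(image, win=2, step=1):
    """Yield (row, col, patch) for every win x win patch of a 2D list."""
    rows, cols = len(image), len(image[0])
    for r in range(0, rows - win + 1, step):
        for c in range(0, cols - win + 1, step):
            patch = [row[c:c + win] for row in image[r:r + win]]
            yield r, c, patch

def toy_score(patch):
    """Stand-in for a real classifier: mean brightness of the patch."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def detect(image, threshold=0.5):
    """Return top-left corners of windows whose score clears the threshold."""
    return [(r, c) for r, c, p in sliding_windows(image)
            if toy_score(p) >= threshold]
```

Swapping `toy_score` for a trained model is essentially how machine learning accelerated these pipelines: the scanning stays the same, but the per-patch decision is learned from data rather than hand-coded.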
Companies like Integra Sources use machine learning and computer vision development to create technology that interprets scenes for a wide range of applications, from AI that triggers an ambulance call after visually detecting abnormal behavior to real-time detection that guides an autonomous robotic lawnmower.
Touch screens have become the norm, but a machine responding to our touch is much different than a machine recognizing what it is touching.
Humans have tactile feedback and sense multiple features, including pressure, temperature, vibration, and tension, as well as textures such as roughness and smoothness.
Emulating this level of intricacy is no small task: a glove developed by MIT researchers to capture pressure signals made use of 550 sensors across the entire hand.
These pressure readings let computers compile data from various objects into a dataset used to learn and predict which object is being touched.
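The learn-and-predict step can be sketched with a nearest-neighbor lookup over stored pressure readings. Everything here is illustrative: the sensor counts, object labels, and class names are invented, and the MIT work used far richer models, but the core idea of matching a new reading against a labeled dataset is the same.

```python
# Hypothetical sketch of learning from tactile pressure data: store labeled
# pressure vectors (one value per sensor) and predict an unseen object by
# finding the nearest stored reading. All names and data are illustrative.
import math

def distance(a, b):
    """Euclidean distance between two equal-length pressure vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class TouchClassifier:
    def __init__(self):
        self.examples = []  # list of (pressure_vector, label) pairs

    def add(self, pressures, label):
        """Record one labeled pressure reading."""
        self.examples.append((pressures, label))

    def predict(self, pressures):
        """Return the label of the closest stored reading."""
        return min(self.examples,
                   key=lambda ex: distance(ex[0], pressures))[1]
```

With many sensors and many grasps per object, the same scheme scales into the kind of dataset the glove was built to collect.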
Smell becomes challenging for AI because of its subjective nature.
Earlier this year, researchers from Google's Brain Team used machine learning to develop a robot that accurately categorized different smells, but only by identifying objective details associated with each smell.
By using a database of 5,000 molecules, researchers worked with perfume makers to instruct AI to recognize different smells based on molecular structures of certain scents. Still, AI capabilities are limited because certain smells, such as caraway and spearmint, have similar or identical molecular structures.
AI has been making recommendations based on music and movie tastes for years, but physical taste involves a level of complexity and subjectivity that becomes difficult to quantify and therefore difficult to emulate.
For example, a wine-recommending AI would need objective criteria such as purchase data, since the experiential factor of taste varies too widely from person to person and is purely subjective.
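A minimal sketch of recommending from objective purchase data, assuming nothing beyond invented example baskets: count which wines appear in the same purchase and suggest the most frequent co-purchase, with no taste judgment anywhere in the loop.

```python
# Hypothetical sketch of recommending wine from objective purchase data
# rather than subjective taste: count co-purchases and suggest the wine
# most often bought alongside the query. Data is invented for illustration.
from collections import Counter
from itertools import combinations

def co_purchase_counts(baskets):
    """Count how often each pair of wines appears in the same purchase."""
    counts = Counter()
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(wine, baskets):
    """Wine most often purchased alongside the given wine, or None."""
    best, best_count = None, 0
    for (a, b), n in co_purchase_counts(baskets).items():
        if wine in (a, b) and n > best_count:
            best, best_count = (b if a == wine else a), n
    return best
```

This is the sense in which such systems sidestep taste: the signal is behavior (what was bought together), not flavor.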
These types of technology are ubiquitous to the point where we are reminded of AI's progress daily.
However, despite nearly 70 years of rapid progress, many areas remain where AI has plenty of room to grow: room that serves as a testament to the complexity and depth of the five senses we take for granted every day.