Improved speech, voice, image and video recognition will change the way we interact with our devices. Over the next few years, we will continue to see vast improvements in the quality and fidelity of speech, voice, image, and video recognition, and our ability to classify results will improve significantly. Cheap and omnipresent sensors and cameras will provide ever-increasing streams of data for processing in real-time. This real-time requirement, paired with cheap and available processing power and storage, will make it much more cost-effective and efficient to process the data at the point of collection and, eventually, to learn and act upon the data locally. We will see these systems widely adopted in industrial automation systems, factory operations, security systems, agriculture, traffic and transportation, and many other domains.
Personal assistants will become more acceptable to us as they become more personalized to our needs and better able to understand the context of our requests, which in turn will enable them to broker an ever wider range of capabilities. Whether conversationally driven AI assistants will end up completely displacing more traditional GUI interfaces to our daily activities remains an open question.
Beyond current command-and-control style personal assistant systems, improvements in conversational systems will be the catalyst that finally brings robots into general use as household items. Automation is already routine in other domains: each time you fly, much of the journey is handled by machine, not the pilot. Self-driving cars and autonomous drones seem inevitable.
AI is being, and will continue to be, quietly adopted by enterprises, allowing them to extract knowledge from all the data that is being generated – and not just the structured data.
AI will continue to move toward taking on decision-making tasks. Automated fleet management, inventory management, and candidate resume screening are but a few examples.
Each stride forward in core AI research is opening up our abilities to solve new classes and scales of problems, which in turn enables the acceleration of research in almost every scientific domain, to the betterment of humanity.
Systems that can learn on their own, whether on a supervised or even unsupervised basis, will lead to successful deployments in a range of specialized application areas. AI will grow beyond its role as curator and analyzer of content and become much more important in generating and augmenting content in the first place. Such systems could be used in education: imagine a tutor that learns alongside the student.
Fast forward and we will start to see hyper-personalized hypothesis generation systems, which will operate on our background data like our genomics, paired with measurements from our wearables and other biological monitors, to provide each of us and our doctors a highly accurate lens – and a crystal ball – offering valuable insights into environmental and behavioral impacts on our health.
AI will also be used to interpret human brain activity in a way that can decipher intent, enabling augmentation to overcome physical challenges and new methods of communication for and with disabled patients.
With AI moving to control more devices and sources of content, collaboration amongst these semi-autonomous AI agents will drive great benefit.
AI will impact designers and programmers too, automating much of the process involved and mapping their desires, whether explicitly communicated or merely implied, to creations that fulfill those requirements. In parallel, the people who interact with these AI-automated designs and programs will be more satisfied: the system will surprise and delight them by continuously morphing the design or program as it incorporates what it learns from interactions with other users.
AI has the potential to greatly improve areas such as healthcare, education, poverty reduction, and security. AI machines can already do some very beneficial things that humans will simply never be able to do. If we leverage that to augment what humans do well, AI could positively impact society, business, and culture on the order of magnitude of the internet itself. The goal is to use AI to scale the human mind, not replace it.
Many of the answers lie in the vast amount of medical data already collected. Ayasdi applies AI techniques such as deep learning to help doctors and hospitals better analyze their data. Through this work, medical practitioners have identified previously unknown diabetes sub-types, which could lead to a better understanding of which therapies work best for particular types of patients. Enlitic and IBM are using similar AI algorithms to detect tumors in radiology scans more accurately and efficiently, potentially even accelerating the search for a cure for cancer.