From filtering spam emails to forecasting traffic patterns, you interact with artificial intelligence (AI) daily – even if you don’t realize it. Once considered little more than science fiction, AI now permeates nearly every industry. With seemingly endless potential, it’s no wonder this technology is widely considered to be the spark of the next Industrial Revolution.
Across The Corridor, several technology pioneers are experiencing the value of AI as they harness the power of this technology to enhance experiences in airports, courtrooms, hospitals and more. They prove the region’s ability to remain abreast of industry trends as this rapidly evolving technology becomes increasingly sophisticated. Plus, Corridor researchers are helping the industry answer tough questions about workforce implications and ethics.
Ask 10 experts to explain “artificial intelligence,” however, and you may hear 10 slightly different answers. For a technology so deeply ingrained in our lives, its definition is surprisingly vague.
Merriam-Webster describes AI as a branch of computer science dealing with the simulation of intelligent behavior in computers and the capability of a machine to imitate intelligent human behavior. To be considered “intelligent,” AI systems must be able to gather information, learn and adjust. Because of this, the foundation of most AI systems is machine-learning algorithms that are programmed to understand and analyze data, and make predictions or generate outputs based on patterns.
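That learn-from-patterns loop can be sketched in a few lines. This toy example (not any specific product’s algorithm) fits a straight line to past observations, then uses the learned pattern to predict a value the system never saw:

```python
# Minimal sketch of a machine-learning step: learn a pattern from data,
# then predict an unseen output. (Illustrative only.)

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Learn" from past observations (say, hour of day vs. traffic volume)...
slope, intercept = fit_line([1, 2, 3, 4], [10, 20, 30, 40])

# ...then "predict" an output the system was never shown.
print(slope * 5 + intercept)  # 50.0
```

Real systems use far richer models, but the shape is the same: data in, pattern out, prediction from the pattern.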
The “artificial” aspect of AI technology comes into play when machines automatically correct outputs based on what they “learn” through data analysis. A famous illustration of this concept is Facebook’s photograph-tagging feature, which automatically recognizes faces based on a user’s past behavior.
Applications of AI extend well beyond social media. Gartner’s report released in April projects global business value derived from AI will top $1.2 trillion in 2018 and will reach nearly $4 trillion by 2022.
Disrupting Manual Processes
If you’re not friends with Ana yet, you should really meet her.
This AI-powered consumer feedback analyst developed by Datanautix father-and-son team, Sanjay and Neel Patel, can digest millions of company reviews and deliver actionable insights in less than 15 minutes. Her human competition once spent more than a week performing the same task and read only a thousand reviews.
A client of the University of Central Florida (UCF) business incubator in Winter Springs, Datanautix has built Ana to help clients improve operational efficiencies and transform customer experiences while using fewer resources. Already, Ana has proven effective for Orlando International Airport, Orlando Magic and UCF.
“AI allows you to focus on the things that have a higher impact,” Sanjay said. “Before, you needed to do the precursor work to get to that high-value stuff. Now, we eliminate the effort of the precursor work so you can dedicate more time to get happier customers.”
Ana is just one example of how AI can allow business decision-makers to work smarter by managing complex data analysis. Also in Central Florida, Orlando’s Jury Lab is applying AI to help legal professionals more easily predict outcomes.
“Your facial expressions don’t lie,” said Jury Lab CEO Susan Constantine. “Expressions are based on your subconscious mind. Whatever you’re thinking and feeling is going to be exhibited through your facial expressions.”
Susan and her team are revolutionizing the mock trial process, allowing attorneys to more quickly and accurately predict results using facial recognition technology to analyze and interpret jurors’ expressions. Historically, lawyers relied on subjective interpretation conducted by humans.
The Jury Lab’s AI-powered technology works by objectively interpreting, quantifying and reporting upon jurors’ facial responses to different arguments presented in a mock trial case. Attorneys can then learn which points resonate or inspire contempt and apply that knowledge in court to elicit desired responses from a real jury.
In the northern part of The Corridor, Newberry’s Convergent Engineering is applying AI in the health care industry to change the way we care for hospital patients who need breathing support.
The company’s VentAssist is an automated system that collects data from sensors and other inputs attached to a patient’s ventilator. It quickly analyzes the patient’s vital signs and breathing effort, then produces a set of recommendations for improved care. The clinician can then adjust the ventilator and other equipment as needed. Work is also underway on a closed-loop version of VentAssist that does not require the clinician to make these changes.
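VentAssist’s actual logic is proprietary, but the open-loop versus closed-loop distinction can be illustrated with a deliberately simplified sketch. Every name, threshold and rule below is hypothetical; a decision-support system hands a recommendation to the clinician, while a closed-loop system applies it automatically:

```python
# Hypothetical illustration only -- not VentAssist's actual logic.
# Open loop: analyze readings and return a recommendation for a clinician.
# Closed loop: apply the recommended adjustment automatically.

def recommend(readings):
    """Toy rule: if breathing effort is high, suggest more support."""
    if readings["work_of_breathing"] > 0.7:      # arbitrary threshold
        return {"pressure_support": readings["pressure_support"] + 2}
    return {}

def closed_loop_step(ventilator, readings):
    """Apply the recommendation without waiting for a clinician."""
    ventilator.update(recommend(readings))
    return ventilator

vent = {"pressure_support": 8}
closed_loop_step(vent, {"work_of_breathing": 0.9, "pressure_support": 8})
print(vent)  # {'pressure_support': 10}
```

Removing the human from that loop is exactly what makes regulators cautious, as the next paragraphs discuss.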
“Often, clinicians have way more data than they know what to do with,” said Neil Euliano, Ph.D., president of Convergent Engineering. “AI can help clinicians by analyzing a lot of data faster and helping find things in the data that the clinician either doesn’t have time to find or might miss altogether.”
Despite AI’s growing popularity in the health care industry, Neil expects technology advancements to plateau while industry leaders and regulators determine how to ensure this technology is safe. By definition, AI’s machine-learning algorithms continue to learn from new data, which raises those concerns.
“Many new forms of AI are training all the time, getting smarter with the analysis of new data – especially when the AI recognizes better decisions could have been made,” Neil said. “This type of continuous learning is unlikely to be accepted by the FDA anytime soon, since the training algorithm cannot be proven safe.”
Challenges and Trends
In an industry where human lives are at stake, concerns about the safety and ethics of AI are amplified, but these concerns are also being raised in the global conversation as AI permeates all industries.
Amanda Hicks, Ph.D., assistant professor of health outcomes and biomedical informatics in the University of Florida College of Medicine, experiences some of the ethical dilemmas facing AI innovators in her work developing semantic networks and ontologies.
“Ontologies provide knowledge and context that needs to be spelled out in very explicit terms for the computer,” Amanda said. “For example: humans know that plants don’t get headaches. We know that to have a headache, you must have a head. A computer doesn’t necessarily understand that. Ontologies make this knowledge explicit and give the computer rules for working with and inferring more information with that knowledge.”
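The kind of explicit rule Amanda describes can be approximated in a few lines. Real ontologies use far richer languages such as OWL; this dictionary-based sketch is purely illustrative:

```python
# Toy version of the inference Amanda describes: spell out "to have a
# headache, you must have a head" so a program can rule out nonsense
# like plants getting headaches. (Illustrative only; real ontologies
# use formal languages such as OWL.)

has_part = {
    "human": {"head", "arms"},
    "plant": {"leaves", "roots"},
}

# Explicit rule: a headache requires a head.
requires = {"headache": "head"}

def can_have(kind, condition):
    """Infer whether a kind of thing can have a condition."""
    return requires[condition] in has_part.get(kind, set())

print(can_have("human", "headache"))  # True
print(can_have("plant", "headache"))  # False
```

The point is that knowledge humans take for granted must be written down explicitly before a computer can reason with it.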
While creating the next generation of machine-learning algorithms that would enable AI systems to have “common sense,” Amanda runs into the issue of avoiding stereotypes, especially when it comes to the concept of identity.
“How do you convey common sense to a computer without generating stereotypes? You have to do this in a way that overcomes potential bias rather than confirming existing biases.”
In most cases, machine-learning output is only as good as its input. According to Lawrence Hall, Ph.D., distinguished university professor at the University of South Florida, it’s the human element that brings bias to datasets. To achieve unbiased output, human users must take extreme care when selecting training data and developing the algorithm. As the people classifying and labeling data are trained to better recognize their own unconscious biases, machines will be able to pull from cleaner, more unbiased datasets.
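How human bias flows straight through to a model’s output can be shown with a deliberately crude example. The data and labels below are invented; a classifier that simply learns the majority label for each group reproduces whatever bias the labelers put in:

```python
# Toy illustration of "output is only as good as its input": a model
# that learns the majority label per group faithfully reproduces the
# bias in its human-labeled training data. (Invented data.)

from collections import Counter, defaultdict

def train(examples):
    """examples: (group, label) pairs assigned by human labelers."""
    by_group = defaultdict(Counter)
    for group, label in examples:
        by_group[group][label] += 1
    # The "model" is just each group's most common human label.
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Biased labelers approved group A far more often than group B...
biased_data = [("A", "approve")] * 9 + [("A", "deny")] * 1 + \
              [("B", "approve")] * 2 + [("B", "deny")] * 8
model = train(biased_data)

# ...and the trained model inherits exactly that bias.
print(model)  # {'A': 'approve', 'B': 'deny'}
```

Cleaner labeling practices change the input, and with it, the learned output.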
Lawrence’s counterpart at UCF agrees. Gita Reese Sukthankar, Ph.D., professor and director of the Intelligent Agents Lab at UCF, forecasts an industry-wide shift toward more advanced machine learning that reduces the need for human input, thus reducing the influence of human biases.
While much of today’s AI runs on “supervised” machine-learning algorithms, this more advanced kind of machine learning is “unsupervised.” Whereas supervised machine learning relies on data labeled by humans, unsupervised machine learning needs no labeling assistance – essentially, it’s “smart” enough to analyze data without any guidelines or variables. While advances in AI technology trend toward unsupervised machine learning, most researchers would agree consumers should temper their expectations, since this likely won’t become the norm for another five to 10 years.
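The supervised/unsupervised distinction can be made concrete with two tiny functions. All names and data here are illustrative: the first leans entirely on human-provided labels, while the second must find structure (two clusters) in unlabeled numbers on its own:

```python
# Sketch of supervised vs. unsupervised learning. (Illustrative only.)

def supervised_predict(labeled, x):
    """Nearest labeled neighbor -- relies entirely on human labels."""
    return min(labeled, key=lambda pair: abs(pair[0] - x))[1]

def unsupervised_cluster(values, passes=5):
    """Tiny 2-means: group unlabeled values around two centers."""
    lo, hi = min(values), max(values)
    for _ in range(passes):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return sorted(a), sorted(b)

# Supervised: a human told us which readings are "small" or "large".
labeled = [(1.0, "small"), (1.2, "small"), (9.0, "large")]
print(supervised_predict(labeled, 1.1))           # small

# Unsupervised: no labels at all -- the algorithm finds the groups itself.
print(unsupervised_cluster([1.0, 1.2, 9.0, 9.5]))
# ([1.0, 1.2], [9.0, 9.5])
```

The labeling step is exactly the human effort (and the source of human bias) that unsupervised approaches aim to remove.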
The workforce will undoubtedly experience a shake-up as businesses continue to adopt and advance AI, but The Corridor’s researchers and industry leaders would agree the effects won’t all be negative.
“My personal feeling is that it’s going to kill some jobs, but it’s going to create new jobs,” explained Lawrence.
He predicts many new jobs created by AI will involve curating and fine-tuning data to maximize the accuracy of machine-learning systems.
As AI-enabled technology becomes more integrated into our daily routines, human input and guidance will still be critical, but perhaps won’t be needed forever. Rather than recording and analyzing data manually, for example, humans might someday learn to program systems to do this work for them.
The processes that enable us to ask smart personal assistants like Siri and Alexa about the weather, deposit a check and mark email as spam – processes that enable us to work smarter, not harder – are continuously learning, improving and advancing without signs of slowing down.
“We’re just going to have to wait and see what’s next for AI,” Amanda said. “It all depends on organizational forces and on people’s creativity – and how those two things interact never ceases to surprise me.”
There is plenty of speculation as to what’s next for this burgeoning discipline, but one thing remains constant: The Corridor’s researchers and entrepreneurs will be at the forefront as the future unfolds.