From chess computers to self-driving cars, here's where AI is heading

Written by Andreas Rekdal
Published on Mar. 10, 2016

On Wednesday, a Google deep-learning program beat the world’s best player at Go — an ancient Chinese game once considered too complex and nuanced for a computer to ever master. The victory comes 20 years after IBM’s Deep Blue computer first beat reigning world champion Garry Kasparov in a game of chess.

Though pitting AIs against humans in games of strategy does offer some insight into how the field of machine learning is progressing, the increasing presence of AI in our daily lives shows that the technology is reaching a point where it will soon be hard to imagine what the world used to be like. Machine learning systems already perform mundane tasks for us — like completing our sentences or finding the fastest route home. But they also handle tasks we’re incapable of doing ourselves, like sorting through incredibly large and complex data sets and spotting patterns within them.

Chicago is home to a number of companies doing big things in the AI and machine learning field. We spoke with the leaders of some of them to hear about what they’re doing, and where they think the field is heading.

Motion AI makes artificial intelligence technology more accessible with a platform that lets users without programming experience set up AI bots. These bots can process language to perform tasks that have traditionally been performed by humans, like handling customer service requests, or taking pizza orders by text message.

Answers from CEO David Nelson

What are the greatest challenges to working with AI?

The entire field is still nascent, which is simultaneously invigorating and challenging. It means we get to build a lot of very cool things in-house and tackle problems you can’t simply find the answer to on Stack Overflow. Most of the time, that just makes things more fun — but there are always exceptions!

Where do you see machine learning going in the next five years?

On the messaging front, I think we will see a great deal of progress on, and adoption of, conversational commerce in the next year alone. As more users begin interacting with services through bots, the data sets we can analyze to understand how and why users talk to machines will grow exponentially. As we all know, knowledge — in this case, data — is power. And that will help push innovation ahead very quickly.

And how about the next 10?

I believe that while true singularity is still quite distant, we’ll enter a phase of pseudo-singularity relatively soon as a result of the sheer amount of conversation data that will be retained in the coming years. That is to say, while computers may not be able to “think” for themselves anytime soon, there comes a point when even hard-coded data alone will create a sense of quasi-singularity, just because of its sheer volume and scope.

Why did you decide to get into AI?

Over the last couple of years there has been a great deal of interest and development in AI, and more specifically the way that natural language processing and machine learning may be able to help streamline the way we interact with companies and services.

The prospect of a “post-application” world, where we use and consume services primarily by messaging them, was a very exciting idea for me.

Each major iteration in tech has been characterized by making things easier to use (with less required of users to learn), and more broadly appealing. But even today, you still have to “learn” to use the UI of each application you download. Granted, the learning curve is drastically reduced from the days of command-line computing. But what’s more universal than language? When we focus more on messaging and language, we can all but remove the learning curve from using any service. Just say or type what you want, and let technology do the heavy lifting.

Rippleshot uses machine learning to understand credit card users’ behavior patterns. Drawing on those insights, the company spots suspicious changes in behavior to detect breaches and assess whether other cardholders may be affected. This focus on consumers rather than stores allows banks and other card issuers to act proactively to prevent fraud.

Answers from co-founder and chief scientist Randal Cox

What are the greatest challenges to working in the machine learning space?

The two most painful difficulties we face seem to be shared by everyone: data quality and the infrastructure needed to deploy our models. Far and away the majority of our modeling time is spent in data validation and cleaning. Some of this is automatable and testable, but there are always unexpected irregularities in any new data set. Worse, data feeds can and do change in unexpected ways, so we run daily statistical tests to see that our data sources do not drift. And if they do, we need to know early.
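
Cox doesn’t say what those daily statistical tests look like at Rippleshot, but the underlying idea of a drift check can be sketched in a few lines of Python. The example below (the column choice, threshold and distributions are all hypothetical) compares today’s values of a numeric field against a reference sample with a two-sample Kolmogorov-Smirnov test and flags a significant shift:

```python
# Minimal sketch of a daily data-drift check (not Rippleshot's actual pipeline).
# Compare today's values of a numeric field against a reference sample using a
# two-sample Kolmogorov-Smirnov test and flag statistically significant drift.
import numpy as np
from scipy.stats import ks_2samp

def check_drift(reference: np.ndarray, today: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if today's distribution differs significantly from the reference."""
    statistic, p_value = ks_2samp(reference, today)
    drifted = p_value < alpha
    if drifted:
        print(f"Drift detected: KS statistic={statistic:.3f}, p={p_value:.2e}")
    return drifted

# Simulated example: a reference sample of transaction amounts vs. a shifted feed.
rng = np.random.default_rng(0)
reference = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)
today = rng.lognormal(mean=3.4, sigma=1.0, size=2_000)  # upward shift in amounts
check_drift(reference, today)
```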

The second problem is implementation. Complicated models often require more resources at a client’s site than they have available. We spend a great deal of time optimizing models to meet those requirements.

Where do you see machine learning going in the next five years?

Some modeling technologies are still expensive to scale and don’t make good use of available concurrency (e.g., tree models scale in random forests, but not in single trees), and a great deal of work is being done to make this more efficient.
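
Cox doesn’t name a library, but the concurrency point is easy to illustrate with scikit-learn (our assumption, not Rippleshot’s stack): a random forest parallelizes naturally because each of its trees is trained independently, while a single decision tree offers no equivalent option.

```python
# Illustration only (not Rippleshot's stack): a random forest is embarrassingly
# parallel because its trees train independently, so scikit-learn can spread
# them across all cores; a single decision tree has no comparable knob.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=50_000, n_features=30, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0).fit(X, y)  # one core only
forest = RandomForestClassifier(
    n_estimators=200, n_jobs=-1, random_state=0  # n_jobs=-1: train trees on all cores
).fit(X, y)
```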

Most statistical models use data referring to a single snapshot in time, but there is great power in having access to previous states. For example, knowing that this credit card user has visited this exact store three times in the last week can completely change fraud detection. The infrastructure to implement this approach is expensive, but well worth the investment.
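
Cox doesn’t describe Rippleshot’s feature pipeline, so the sketch below is only an illustration of the idea over a hypothetical transaction table: for each transaction, count the cardholder’s visits to the same merchant in the trailing week. A naive pass like this is fine at toy scale; computing such features over billions of transactions is where the expensive infrastructure comes in.

```python
# Hypothetical schema: count, for each transaction, the cardholder's visits to
# the same merchant over the preceding seven days (current transaction included).
import pandas as pd

txns = pd.DataFrame({
    "card_id":     ["A", "A", "A", "B", "A"],
    "merchant_id": ["m1", "m1", "m2", "m1", "m1"],
    "timestamp":   pd.to_datetime(
        ["2016-03-01", "2016-03-03", "2016-03-04", "2016-03-05", "2016-03-06"]
    ),
})

def visits_last_7d(row: pd.Series) -> int:
    """Count same-card, same-merchant transactions in the trailing 7-day window."""
    window_start = row["timestamp"] - pd.Timedelta(days=7)
    in_window = (
        (txns["card_id"] == row["card_id"])
        & (txns["merchant_id"] == row["merchant_id"])
        & (txns["timestamp"] > window_start)
        & (txns["timestamp"] <= row["timestamp"])
    )
    return int(in_window.sum())

txns["visits_last_7d"] = txns.apply(visits_last_7d, axis=1)
print(txns)  # card A reaches 3 visits to merchant m1 within a week by 2016-03-06
```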

And how about the next 10?

I think we will see some movement from descriptive models (this is fraud) to prescriptive models (you should reissue this card, but just change the spending limit on the other three [and decline certain transactions]). Early on, this will be very human-designed, but I expect to see machine exploration guided by simple optimization functions. For example, knowing that it costs $30 to reissue a card and that a fraudulent transaction costs $100, an algorithm might optimize its strategy to minimize those combined costs.
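
The dollar figures are Cox’s, but the decision rule below is our own hypothetical rendering of the simplest possible expected-cost optimization: reissue a card whenever the expected fraud loss from leaving it active exceeds the cost of reissuing it.

```python
# Toy prescriptive rule (hypothetical; figures taken from the interview):
# reissue when the expected fraud loss exceeds the cost of reissuing the card.
REISSUE_COST = 30.0   # cost of reissuing a potentially compromised card
FRAUD_COST = 100.0    # expected loss if a compromised card stays active

def should_reissue(p_fraud: float) -> bool:
    """Reissue when p(fraud) * fraud cost exceeds the reissue cost."""
    return p_fraud * FRAUD_COST > REISSUE_COST

for p in (0.10, 0.30, 0.75):
    print(f"p(fraud)={p:.2f} -> reissue: {should_reissue(p)}")
```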

Why did you decide to get into machine learning?

My PhD is in molecular biology and genomics. Most of my papers are about combining big genomics data sets in ways that reveal hidden gems of biological insight. My first startup sequenced bacterial genomes, and I had to extend sequence alignment, quality control and genomic assembly processes to scale. After the dot-com bubble, I carried those skills to financial informatics, where I have been working for almost a decade.

Thoughtly’s machine learning software helps clients analyze huge amounts of text in real time. For instance, if you’re a researcher, financier, or consultant, you could upload hundreds of documents to Thoughtly’s Ellipse platform. The software would then analyze, visualize and summarize the data for you, and direct you to the sections that are most relevant to what you’re looking for. The company’s clients include BBC Worldwide.
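
Thoughtly doesn’t disclose how Ellipse ranks passages, but the general idea of pointing a reader to the most relevant sections can be shown with a toy TF-IDF ranking (a sketch under our own assumptions, not the company’s method):

```python
# Toy relevance ranking (not Thoughtly's Ellipse): score document sections
# against a query with TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sections = [
    "Quarterly revenue grew 12 percent on strong subscription sales.",
    "The board approved a new share buyback program.",
    "Litigation risk remains elevated in the European market.",
]
query = ["revenue and subscription sales growth"]

vectorizer = TfidfVectorizer()
section_vectors = vectorizer.fit_transform(sections)   # fit vocabulary on sections
query_vector = vectorizer.transform(query)             # reuse the same vocabulary

scores = cosine_similarity(query_vector, section_vectors).ravel()
for score, text in sorted(zip(scores, sections), reverse=True):
    print(f"{score:.2f}  {text}")
```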

Answers from CEO Chase Perkins

What are the greatest challenges to working with AI?

There are rather significant capital and knowledge requirements to offering products that are useful to enterprise clients…. Most startups offer APIs or only generalized products. It’s not necessarily due to a lack of tenacity, but there are — practically speaking — limitations to the adoption of new tools at scale without forward-thinking clients that appreciate the technology you are providing.

Where do you see machine learning going in the next five years?

While people generally think of AI as a singular sentient being, like the ones in Ex Machina or Age of Ultron, the next five years will begin to resemble sci-fi in very limited ways. Self-driving cars and other applications of machine learning that require large capital and resource investments today will be an everyday reality for the masses tomorrow.

In finance, we’ve experienced augmentation of professional and consumer services. As trading systems become more intelligent and expand from signal- and behavior-based models to include, for example, rapid natural language processing analysis, the sheer volume of data being considered will further compound the automation race.

But other, more operational components of finance are beginning to utilize machine learning as well. We are currently working with financial institutions to offer solutions to better understand and de-risk client exposure. There is an endless number of core operational tasks affecting enterprise clients that will benefit deeply from machine learning technology.

There is a lot of hyperbole in the press focusing on the negatives of AI, particularly of general AI. I’m far more concerned with the implications of dumb automation than a singularity or the emergence of an organically nefarious self-driven system. We’ll need something tantamount to the AI theory of everything before that’s a practical concern.

How about the next 10?

The world will in all likelihood look noticeably different in ten years from a social and economic perspective. Not unlike how mobile phones have disproportionately impacted social connectivity, commerce and information dissemination in only ten years — AI will fundamentally alter the basic human experience.

As systems become more intelligent, our expectations of and reliance on them will drive social and commercial decisions in unpredicted ways. When you wake up and your email client has responded to all your missed email in contextually appropriate and seemingly personal ways, your day will start off very differently. When your car is out generating income all day, yet still anticipates your needs and is waiting to take you to your appointments, our possessions will no longer be single-purpose, depreciating assets. When our virtual assistant is also a personal teacher that identifies and informs us of our own knowledge gaps, we will usher in an era of continuous learning.

The only certainty about the next ten years is that they will not resemble the previous ten.

Why did you decide to get into the AI field?

Artificial intelligence is a highly delineated field. While its subfields share many aspects, it turns out that the algorithms and processes that enable machine vision, for example, are different from the approaches taken to natural language processing, predictive analytics or voice-to-text. Remaining hyper-focused on natural language processing has allowed Thoughtly to compete with the likes of IBM Watson and traditionally entrenched parties.

Most machine learning startups offer algorithms for rent (APIs), not products that can be easily integrated by clients to perform specific functions. We founded Thoughtly to solve specific problems and, by providing off-the-shelf NLP tools for the enterprise at scale, we are doing just that.

In recent years, a lot of innovation has been done in natural language processing — that is, teaching computers how to understand the way people speak and write. Narrative Science turns that model on its head, teaching computers how to analyze data and communicate their findings in natural language. The company’s platform, Quill, has written finance stories for Forbes, and can generate anything from written alerts to extensive investment performance reports. Its clients include Deloitte, USAA, Nuveen and Credit Suisse.
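
Narrative Science doesn’t detail how Quill works, but the core data-to-text concept can be illustrated with a toy template over made-up performance figures (far simpler than anything the company ships):

```python
# Toy data-to-text generation (hypothetical fields; not how Quill works):
# turn a row of structured performance data into a readable sentence.
def performance_sentence(fund: str, quarter: str, return_pct: float, benchmark_pct: float) -> str:
    gap = return_pct - benchmark_pct
    direction = "beating" if gap > 0 else "trailing"
    return (
        f"{fund} returned {return_pct:.1f}% in {quarter}, "
        f"{direction} its benchmark by {abs(gap):.1f} percentage points."
    )

print(performance_sentence("Example Growth Fund", "Q4 2015", 4.2, 2.9))
```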

Answers from CEO Stuart Frankel

What are the greatest challenges to working with AI?

One of the most common challenges for all companies building technology driven by AI/machine learning (ML) is the customer adoption curve. While AI technologies have been around for many years, we’re finally at a point where commercially viable, AI-powered technologies are available and practical. It’s early days, and enterprises are still trying to figure out how to fit these technologies into their business operations.

Where do you see machine learning going in the next five years?

People tend to use AI and ML interchangeably when, in fact, machine learning is a subfield of AI. We are at the very beginning of a revolution that will be defined by AI, and ML will certainly be a big part of that.

Right now, ML can be used to predict outages on factory floors, traffic problems related to weather, crime and accident rates at the neighborhood level, and logistical problems before they occur. Over the next five years, ML systems will gain more access to data about our homes (e.g. Nest) and our health (e.g. Fitbit), which will drive rapid innovation in intelligent devices and systems. We will see the rise of systems that think and act for us, like Amazon delivering in anticipation of our needs, Grubhub texting us confirmation of a predicted order and Uber showing up at our door just in case we need a ride to work in the morning. The possibilities are endless.

And how about the next 10?

Over the next 10 years, increases in and optimization of computing power will free ML systems from the need to have access to well-crafted models. This will give ML a much wider range of applicability within the data we already have. Likewise, the data available to support learning is expanding at an accelerating rate due to the rise of the Internet of Things, which will extend the reach of these systems to anywhere there are connected devices. That said, I won’t even try to predict where ML will be in ten years, because the pace of innovation is so fast.

Why did you decide to get into the AI/machine learning field?

It actually wasn’t intentional. In late 2008, I started looking around for a new venture. I was introduced to two computer science professors at Northwestern University who showed me some interesting technologies that they were working on, including an AI-powered application that automatically generated natural language stories from baseball game data. I was fascinated by the application and the potential to extend the technology to other domains.

It was abundantly clear six or seven years ago that data would quickly engulf us. We knew that the exponential growth of data would require new technologies enabling people to not only analyze data, but to communicate information gleaned from it in an easy-to-understand format: plain language. Based on this macro thesis, we started Narrative Science.

Some answers have been edited for brevity.

Images via Shutterstock and listed companies.

Have a tip for us or know of a company that deserves coverage? Shoot us an email or follow us on Twitter @BuiltInChicago.
