Inside the AI Trends Every Techie Should Be Watching

Written by Janey Zitomer
Published on Apr. 09, 2020

Narrative Science uses a natural language generation (NLG) engine to help businesses make sense of complex enterprise data and tell clearer stories, especially in uncertain times. The company prides itself on taking a different approach to NLG and natural language processing than larger organizations working in the space, such as Google and Salesforce. That said, Nate Nichols, distinguished principal of product strategy and architecture, sees the increased buy-in across the board as nothing but good news.

“Their work is helping to make the idea of computers creating language or stories more mainstream than we’ve seen before,” Nichols said. 

As advanced data processing techniques become more commonplace, so do the errors associated with them. Such errors include implicit bias that surfaces as a lack of fair prediction: predicting outcomes for one group well and another group poorly, according to Stats Perform Director of Computer Vision Sujoy Ganguly.

It’s a side effect that he and Relativity Senior Data Scientist Rebecca BurWei are looking to avoid as trends like learned user trust gain steam. 

“Building AI that understands and responds to user trust could help us build systems that are more accurate and less biased,” BurWei said. 

 

Nate Nichols
Distinguished Principal of Product Strategy and Architecture • Narrative Science

Nichols is seeing a push for transparency in how businesses use AI. He said that Narrative Science takes an algorithmic approach to generating language from data, rather than a black box-style model. This decision is especially significant given data’s role in educating the public during unprecedented health events such as COVID-19, a role he predicts will increasingly rely on AI-based solutions.
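
As a rough illustration of what an algorithmic, data-to-text approach looks like (a minimal sketch only; this is not Narrative Science’s engine, and the thresholds and phrasing below are hypothetical), language can be generated directly from structured data with deterministic rules:

```python
# Minimal, hypothetical sketch of rule-based data-to-text generation.
# Illustrative only -- not Narrative Science's proprietary engine.

def describe_metric(name: str, current: float, previous: float) -> str:
    """Turn one metric's change into a plain-English sentence."""
    change = current - previous
    pct = (change / previous) * 100 if previous else 0.0
    if abs(pct) < 1:                      # hypothetical threshold
        trend = "held roughly steady"
    elif pct > 0:
        trend = f"rose {pct:.1f}%"
    else:
        trend = f"fell {abs(pct):.1f}%"
    return f"{name} {trend}, ending at {current:,.0f}."

print(describe_metric("Weekly signups", 12450, 11800))
# -> "Weekly signups rose 5.5%, ending at 12,450."
```

Because every sentence is computed directly from the underlying numbers, the output can be audited, which is what distinguishes this style of NLG from free-form generative models.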
 

What AI trends within your industry are you watching at the moment?

I’m watching a few trends specifically. First, I’m watching the recent trend around natural language generation and processing. I’m seeing companies like Google, Microsoft, Salesforce and others producing impressive results in generating language. They’re optimizing to generate hopefully sensible text for any domain, while we’re focused on generating stories from business data that are guaranteed to be accurate. That said, their work is helping to make the idea of computers creating language or stories more mainstream than we’ve seen before. 

Global issues like COVID-19 have refocused everyone on data, the decisions that need to be made from data, and how high the stakes can really be. We’ve always believed that making data not only available, but also digestible helps people make better decisions. Now, we are seeing the general public believe that more than ever. 

 

How is your team applying these trends in their work or leveraging AI in the products they’re building?

We are taking further steps to ensure that we can use our products to help keep everyone in the world informed. For example, no one knows what will happen tomorrow with COVID-19. But we believe that data stories, written using our proprietary AI, can help you understand what is happening today.

We recently wired our software straight to data sources at Johns Hopkins, the World Health Organization and others. And we’re making our coronavirus stories free to everyone. The goal isn’t advertising or sales. Rather, we are using our AI to help people understand the biggest global challenge we’ve faced in 70 years.

I expect to see an explosion in interest in AI-driven medicine creation.’’  

What’s one trend you’re watching that other people in the industry aren’t talking about?

I expect to see an explosion in interest in AI-driven medicine creation. People have thought of AI in the medical space as primarily doing one of two things: automating what medical professionals do today, like viewing CT scans, or tailoring medicine to a patient through optimization. 

In February, researchers at MIT identified a new antibiotic through AI. The AI was trained on 2,500 different molecules with known effects. Given 6,000 new compounds, it identified one as highly likely to have antibiotic properties. The researchers experimentally confirmed it. 
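
The workflow behind that result can be sketched at a high level: featurize molecules with known activity, train a model, then rank a library of unseen compounds by predicted activity. The sketch below uses a simple fingerprint-plus-classifier stand-in with placeholder data; the MIT team’s actual model was a message-passing graph neural network.

```python
# Hypothetical sketch of the screening workflow: train on molecules with
# known antibacterial activity, then rank unseen compounds.
# (The MIT work used a message-passing graph neural network; a
# fingerprint + random-forest model is shown here only for brevity.)
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str):
    """Morgan fingerprint (radius 2, 2048 bits) for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# Training set: SMILES strings paired with known activity labels (placeholder data).
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"]
train_labels = [0, 1, 0]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit([featurize(s) for s in train_smiles], train_labels)

# Screen a library of unseen compounds and rank by predicted activity.
library = ["CCN", "CC(C)Cc1ccc(cc1)C(C)C(=O)O"]
scores = model.predict_proba([featurize(s) for s in library])[:, 1]
ranked = sorted(zip(library, scores), key=lambda t: -t[1])
print(ranked)   # top candidates for experimental follow-up
```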

The drug is now called Halicin after HAL, the AI from “2001: A Space Odyssey.” It may be available for prescription in the next few years. As we continue to live in the shadow of the novel coronavirus and pandemics of the future, there will be billions spent on enabling AI to help us find new medicines faster and more safely.

 

Sujoy Ganguly
Director of Computer Vision • Stats Perform

If a business like AI sports company Stats Perform predicts certain statistics for star players well but misses the mark for bench players, it’s running into a failure of “fair prediction.” It’s a problem Ganguly feels is currently underestimated in the industry, and one he’s working to avoid: imbalanced prediction across groups is an indicator of implicit bias.

Fair Prediction

You don't want to accurately predict outcomes for one group and inaccurately predict outcomes for another group. Imbalanced prediction is an issue in medicine and sociology.
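
A simple way to surface that kind of imbalance is to break evaluation metrics out by group instead of reporting a single aggregate number, as in this minimal sketch with made-up labels and groups:

```python
# Minimal sketch: check whether a model is much more accurate for one
# group (e.g., star players) than another (e.g., bench players).
# Labels, predictions and group tags below are placeholders.
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
group  = ["star", "star", "star", "star", "bench", "bench", "bench", "bench"]

for g in sorted(set(group)):
    idx = [i for i, gg in enumerate(group) if gg == g]
    acc = accuracy_score([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(f"{g}: accuracy={acc:.2f} (n={len(idx)})")
# A large accuracy gap between groups is a warning sign worth investigating.
```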

What AI trends in the industry are you watching at the moment?

The emergence of graph neural networks, which apply the techniques of deep learning to data that is naturally structured as a graph. For example, in social networks, nodes represent people and edges represent the relationships between individuals. 

Using graph neural networks, businesses can take this user data and make recommendations based on a user’s features as well as their social context. In sports, we have similar social structures: teams. We can represent each player as a node and their relationships to teammates and opponents as edges. Graph neural networks allow us to understand how players act in the context of their team and opponents, as well as how individuals use teamwork to create better outcomes.

In other words, we can start to understand how player performance is affected by the team and how team performance is affected by the collection of players.
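
A rough sketch of that representation in code, using PyTorch Geometric as one common open-source library (the features and edges below are placeholders, not Stats Perform’s actual model):

```python
# Rough sketch: players as nodes, teammate/opponent relations as edges,
# processed by a small graph neural network (PyTorch Geometric).
# Features and edges are placeholders, not a real Stats Perform model.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# 10 players (5 per team), each with 4 features (e.g., x, y, speed, team flag).
x = torch.randn(10, 4)

# Edges: pairs of player indices; here, a few teammate and opponent links.
edge_index = torch.tensor(
    [[0, 1, 0, 5, 2, 7],
     [1, 0, 5, 0, 7, 2]], dtype=torch.long)

data = Data(x=x, edge_index=edge_index)

class PlayerGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(4, 16)   # node features -> hidden representation
        self.conv2 = GCNConv(16, 2)   # hidden -> e.g., per-player action logits

    def forward(self, data):
        h = torch.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

out = PlayerGNN()(data)     # per-player outputs informed by team context
print(out.shape)            # torch.Size([10, 2])
```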

 

How is your team applying these trends in their work or leveraging AI in the products they’re building? 

Our team is using graph neural networks in many areas of our work. One of the most compelling use cases is to understand collective actions in sports. Many actions or activities in sports require the cooperative movement of whole teams. For example, to get an open shot in basketball, one player may set a screen, another might cut off the screen and another might make a pass. At the same time, the defense is acting as a unit to prevent this action. 

Using a graph representation of the teams allows us to understand how a sequence of player actions leads to an open shot. Since we have tens of thousands of examples of possessions in sports, we are in an ideal situation to use modern AI methods to learn to detect such actions.

 

What’s one trend you’re watching that other people in the industry aren’t?

The issue of fairness in AI is a trend the sports industry isn’t considering. At a high level, fair prediction means you want to avoid predicting outcomes for one group well and another group poorly. This imbalanced prediction is a critical issue in the fields of medicine and sociology. 

A lack of fair prediction is a sign of implicit bias, which we should try to correct. Luckily, we can take inspiration from our colleagues in the medical industry and use the techniques they are developing in our products.

 

Rebecca BurWei
Senior Data Scientist • Relativity

BurWei looks forward to a future where companies build AI models that understand and respond to user trust. That way, systems would take better cues from their surroundings as users become increasingly comfortable with the technology. Relativity leverages machine learning and visualizations to help users identify key issues during litigation, internal investigations and compliance projects. 

 

What AI trends within your industry are you watching at the moment?

At Relativity, we organize large bodies of text for legal applications. So, I follow innovations that require less and less human effort to classify, cluster and structure large text corpuses.

In particular, innovations on transfer learning for text data are reaching maturity. In 2019, AI researchers and engineers developed a rich ecosystem of pre-built models appropriate for transfer learning on text. At a high level, this technology transfers salient information from prior data, so that new models can be built more efficiently. For our clients, this means coding fewer documents to discover new insights. 
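
A minimal sketch of that transfer learning pattern, using the open-source Hugging Face Transformers library with a generic pretrained model and placeholder labels (not Relativity’s internal stack):

```python
# Minimal sketch of transfer learning for text classification:
# start from a pretrained model and fine-tune on a small labeled set.
# Uses the open-source Hugging Face Transformers library as one example;
# this is not Relativity's internal stack.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Tiny placeholder labeled set (e.g., responsive vs. not responsive documents).
texts = ["Please see the attached contract.", "Lunch on Friday?"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
loss = model(**batch, labels=labels).loss   # pretrained weights do most of the work
loss.backward()
optimizer.step()
```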

Recent advances in machine translation are also impressive. While the challenge of building AI that understands hundreds of languages remains great, I’m keeping an eye on creative methods such as cross-lingual transfer. It can be used to build multi-lingual systems without incurring the cost of a dataset in every language.
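
Cross-lingual transfer follows a similar recipe: embed text with a multilingual encoder, train a lightweight classifier on labeled data in one language, and apply it to other languages. A rough sketch, where the model name and examples are illustrative only:

```python
# Rough sketch of cross-lingual transfer: a multilingual encoder lets a
# classifier trained on English labels score documents in other languages.
# Model name and data are illustrative only.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Labeled examples in English only.
en_texts = ["This invoice is overdue.", "See you at the game tonight."]
en_labels = [1, 0]

clf = LogisticRegression().fit(encoder.encode(en_texts), en_labels)

# Score documents in other languages without any labels in those languages.
other = ["Diese Rechnung ist überfällig.", "Nos vemos esta noche."]
print(clf.predict(encoder.encode(other)))
```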

We are actively researching multi-lingual transfer learning architectures.’’

How is your team applying these trends in their work or leveraging AI in the products they’re building?

We are actively researching multi-lingual transfer learning architectures. In addition to the efficiency gains, we anticipate that these architectures will provide a foundation for building new product features such as document segmentation and providing explanations for model predictions.

 

What’s one trend you’re watching that other people in the industry aren’t talking about?

I’m excited for creative AI and UX researchers to design systems where people can express how much trust they have in an AI system and receive insights appropriate to that level of trust. Whether it’s a self-driving car or a volunteer-built encyclopedia, a new technology always takes time to mature. Stakeholders are correct to be wary at first. 

However, as an AI system evolves and “learns,” it would be exciting to give users more control over how the technology and its insights are phased in. Building AI that understands and responds to user trust could help us build systems that are more accurate and less biased.

 

Chris Plenio
VP of Research and Development • CCC Intelligent Solutions

CCC has built mobile apps and SDKs that help users take better-quality, more relevant photos after a vehicle collision. VP of Research and Development Chris Plenio said that an internal AI model interprets the photos and flags those that would degrade AI performance. The team’s model evaluation dashboard ensures that every retrained model is compared against previous versions to check for regressions.

 

What AI trends within your industry are you watching at the moment? 

A few years ago, it was enough to train an AI model with a black-box approach. Today, the focus is shifting toward transparency and explainability of the models. How did the AI come up with its prediction, and what influenced the decision path? 

Another trend we see is the automation of the development, training and testing of AI models. What has long been the status quo for traditional software development is now also becoming a must-have for AI development: automating all aspects of the data science lifecycle. Automate data capture and cleansing, training, calibration, validation and benchmark testing, and finally deployment into highly scalable, flexible environments. Create milestone checkpoints that ensure accuracy metrics are adhered to at every step.
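
A milestone checkpoint can be as simple as a gate that refuses to promote a retrained model unless it meets the currently deployed version’s accuracy on a fixed benchmark set. A hypothetical sketch (the tolerance and function names are assumptions, not CCC’s pipeline):

```python
# Hypothetical sketch of a milestone checkpoint in an automated AI pipeline:
# a retrained model is only promoted if it does not regress against the
# currently deployed version on a fixed benchmark set.
from sklearn.metrics import accuracy_score

TOLERANCE = 0.005  # allowable accuracy drop before the gate fails (assumed value)

def passes_gate(candidate, deployed, X_bench, y_bench) -> bool:
    """Return True if the candidate model clears the regression checkpoint."""
    cand_acc = accuracy_score(y_bench, candidate.predict(X_bench))
    prod_acc = accuracy_score(y_bench, deployed.predict(X_bench))
    print(f"candidate={cand_acc:.3f}  deployed={prod_acc:.3f}")
    return cand_acc >= prod_acc - TOLERANCE

# Usage within a pipeline stage (models and benchmark data supplied upstream):
# if passes_gate(new_model, current_model, X_bench, y_bench):
#     promote(new_model)        # move to the next checkpoint
# else:
#     raise RuntimeError("Model regression: blocking deployment")
```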

Many AI models need to be updated on a regular basis as they become “stale.” The more your training data changes over time, the more your AI models are susceptible to deteriorating performance. This leads to an evolving trend of continuous learning, which allows AI models to constantly learn from new incoming data without forgetting relevant historical data. 
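
One common way to approximate continuous learning in practice is rehearsal: retrain incrementally on new data mixed with a sample of historical data so the model tracks drift without forgetting older patterns. A rough sketch with placeholder data:

```python
# Rough sketch of continuous learning via rehearsal: update the model on
# fresh data mixed with a sample of historical data so older patterns
# are not forgotten. Data and sampling fraction are placeholders.
import random
from sklearn.linear_model import SGDClassifier

historical = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.2, 0.1], 0), ([0.8, 0.9], 1)]
fresh      = [([0.7, 0.3], 1), ([0.3, 0.7], 0)]   # newly labeled examples

replay = random.sample(historical, k=max(1, len(historical) // 2))
batch = fresh + replay

X = [features for features, _ in batch]
y = [label for _, label in batch]

model = SGDClassifier(random_state=0)
model.partial_fit(X, y, classes=[0, 1])   # incremental update, repeated per batch
```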

 

How is your team applying these trends in their work or leveraging AI in the products they're building?

We are fortunate to have been building AI models for over seven years now. Many AI capabilities are integrated into our production solutions. Our industry, like many others, is still going through a learning curve on how to best leverage AI solutions. 

Our CCC smart estimate solution helps insurance appraisers more effectively write estimates that identify the parts and operations required to repair a vehicle. Using photos of a vehicle, our AI predicts in near real-time which parts are damaged and whether the parts can be repaired or have to be replaced. It then populates these items on the preliminary estimate. Our visual tool, which we call a damage heatmap, helps visualize and correlate the damage. It resembles a weather map, showing vehicle damage on every photo of the car. A major benefit of correlating the damage is increased user confidence in the predictions. 

Our mobile app automatically puts photos from an accident through a CCC-built AI, shows and interprets the damages and can show you the likely repair costs. In order to make all of this scalable and train models in a short period of time, we have spent significant resources over the last couple of years building a robust and automated AI pipeline. 

Our end-to-end pipeline addresses data wrangling, merging, data verification, training, model calibration and evaluation. The model has to pass through gates and checkpoints throughout the AI pipeline. At each checkpoint, we have the ability to gain insights on the model’s performance. 

Look at your AI solutions holistically rather than in isolation.’’  

What’s one trend you’re watching that other people in the industry aren’t talking about?

Look at your AI solutions holistically rather than in isolation. Consider the entire solution and identify factors that optimize your AI performance. For example, many of our AI models require photos to evaluate car damages. The better the photos, the better the performance of the AI. 

Consider how your AI will be used and how you can improve not only your training data but also the quality and consistency of your input data at runtime. Further, consider the predictions themselves as an important means to improve your AI models. Incorrect predictions can help your AI continuously learn from its mistakes.

 

Vivek Vaid
CTO • FourKites

At FourKites, CTO Vivek Vaid said the engineering team relies on approaches like data federation to support machine learning initiatives related to cargo operations. The team uses ML to provide customers with insights that help them reduce the number of empty trucks on the road, for example. 

 

What AI trends within your industry are you watching at the moment?

COVID-19 has certainly captured everyone’s attention like no event we’ve ever seen before. It’s shedding light on how supply chains are managed, and it’s differentiating companies that use data to optimize their operations from those that don’t. For instance, we are seeing significant changes in transit times as well as shipment volumes. Without good data, the whole process falls apart. 

At FourKites, we spend a considerable amount of time on predicting on-time delivery, estimating dwell and detention costs, route planning and smart warehousing. Recently, our data scientists have been paying attention to contextual science. With the recent evolution of NLP using efficient deep learning techniques, contextual science has become highly relevant to understanding user needs and providing pointed responses. Supply chain leaders are integrating it into various parts of their workflows, such as customer support, customer engagement and context-aware response.

Another AI technology that is reshaping the industry is autonomous vehicles. For instance, freight volume will increase over time because autonomous vehicles won’t be subject to hours-of-service (HOS) regulations. Loads will be delivered more efficiently and costs will go down. It will be interesting to see the types of intelligence that come from the vehicles themselves, as well as the opportunities for us to leverage or complement that intelligence.

Data Federation involves using data across entities to create outcomes that you could never accomplish within an enterprise boundary.’’ 

How is your team applying these trends in their work or leveraging AI in the products they’re building?

Since its early days, FourKites has recognized the potential for AI in the supply chain visibility domain. We have already invested in building a strong data science team. This team is building new products using existing machine learning and AI techniques and keeping up to date with emerging technologies within machine learning and AI.

We’ve always believed that machine learning will augment our users’ productivity and decisions. Together with our customers, we are building products that are based on predictive and prescriptive AI technology. We are investing in AI-powered, time-series based solutions and neural-network based solutions like smart, forecasted arrival.

As an example, the FourKites recommendation engine was designed to proactively save late loads for our clients’ customers. It not only shows the current status of a shipper’s loads but also surfaces recommendations to drive on-time delivery.

 

What’s one trend you’re watching that other people in the industry aren’t talking about?

Geospatial AI, which refers to the use of machine learning or AI on top of geolocation data to answer complex questions around real-time location updates, route planning, inconsistent tracking, geolocation outlier detection, point-of-interest prediction and recommendation. With more and more companies leveraging real-time visibility, geolocation data is growing exponentially. 
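
As one concrete example of geolocation outlier detection, a simple speed check can flag GPS pings that imply a physically implausible jump between consecutive updates. A minimal sketch, where the threshold is a placeholder rather than a FourKites parameter:

```python
# Minimal sketch of geolocation outlier detection: flag GPS pings that
# imply a physically implausible speed between consecutive updates.
# The speed threshold below is a placeholder, not a FourKites parameter.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

MAX_KMH = 120  # assumed plausible top speed for a truck

def flag_outliers(pings):
    """pings: list of (timestamp_seconds, lat, lon), sorted by time."""
    outliers = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(pings, pings[1:]):
        hours = max(t1 - t0, 1) / 3600
        if haversine_km(la0, lo0, la1, lo1) / hours > MAX_KMH:
            outliers.append((t1, la1, lo1))
    return outliers

pings = [(0, 41.88, -87.63), (600, 41.95, -87.65), (1200, 34.05, -118.24)]
print(flag_outliers(pings))   # the jump to Los Angeles in 10 minutes is flagged
```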

User experience is also an integral part of an AI solution. When a user reviews their shipments in our platform, an insight cannot stand alone. Those insights must be provided with context to promote end-user confidence. For example, our dynamic ETA product includes context like risk levels, route prediction, traffic highlights, weather and more.

 

Responses have been edited for length and clarity. Images via listed companies.
