How to Develop Solutions Using Cognitive Offerings and Distinguish Yourself - Cognitive Computing Takes to the Cloud: Part 3

Arvind Mehrotra
7 min readOct 29, 2019


While Cognitive technologies have been around for a while, it is only recently that we are seeing a push towards wider democratization. Owing to the advent of the Cloud, access to Cognitive has become easier, enabled by Software-as-a-Service (SaaS) models. Part 2 turned the spotlight on the different delivery models and leading vendors to consider. In Part 3, let's turn our attention to implementation frameworks for these offerings and success stories on the Cognitive Cloud.

Consuming Cognitive to Enable the Challenger Customer

The rapid expansion of cloud services in recent years has helped innovative organizations across the globe adopt cognitive solutions. Let us take a deep dive into how technology vendors and service providers worked together to address real problems and unlock new levels of agility and performance.

Innovative Solutions on the Cognitive Cloud: Using Market-Ready Offerings

Here is a brief look at how Google, Microsoft, IBM, and AWS are reimagining Cognitive Computing in the Cloud.

Google’s Cognitive Cloud is a plug-and-play solution — which is expected given that Cognitive is Google’s competitive edge. Google has three distinct products for developers (AI Hub, AI building blocks, and AI Platform) as well as prebuilt AI solutions for business users in contact centers, HR, and other verticals.

Google’s cloud platform has proved to be incredibly popular among businesses of every size and sector. From their endless roster of case studies, let’s consider two that made a major impact on their industry of operation and a lasting difference to society at large.

First, there’s the story of omni.us, a German BFSI firm that leveraged AI on the cloud to simplify insurance processing. Insurance has traditionally been held back by severe policy-based complexities, with effort going into careful claims handling instead of generating value. omni.us co-founder Sofie Quidenus-Wahlforss noted that insurance companies globally spend approximately $250 billion on claims handling, excluding settlement costs.

omni.us moved its AI microservices framework to the Google Cloud Platform, speeding up the claims handling process significantly. The solution can scan handwritten documents with only a 7.25% error rate. One of the key Google Cognitive services at play is TensorFlow, ideal for building large-scale neural networks.
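To put that 7.25% figure in context, document-digitization accuracy is commonly measured as character error rate (CER): the edit distance between the recognized text and the ground truth, divided by the length of the truth. A minimal sketch, with made-up sample strings (this is not omni.us's actual evaluation code):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(recognized: str, truth: str) -> float:
    """Fraction of characters that were misread, inserted, or dropped."""
    return edit_distance(recognized, truth) / len(truth)

# Hypothetical OCR output: "0" misread as the letter "O" in a claim number.
cer = character_error_rate("Claim no. 4O17", "Claim no. 4017")
```

A 7.25% CER on handwriting is notable because handwritten forms vary far more than printed text, which is where large neural networks earn their keep.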

As a result, the time taken to deploy an AI model came down from two days to four hours. Using computer vision, semantic information extraction, microservices, and Google Compute Engine, omni.us was able to modernize its operational capabilities end to end.

The second case involves Alacris Theranostics, a company that develops drug therapies for cancer patients. Given the extreme complexity of cancer symptoms and differences in patient genetic makeup, matching therapy to an ailment is still based on trial and error. Alacris decided to use computer models based on millions of data points to virtualize the clinical trial process.

This resulted in a proprietary ModCell system on the Google Cloud Platform that could predict treatment effectiveness with marked accuracy. The solution could handle five million different models and over 2TB of data, running on a single core. Google Cloud completed the process at a 10x faster pace than the original Alacris computer cluster. The company hopes to achieve increased drug approval rates and reduce costs, thanks to Google’s predictive capabilities.

Microsoft offers standalone Cognitive APIs covering five use cases/functionalities — recommendation engines, speech conversion, language processing, image processing, and enterprise search. Between 2016 and 2017, its revenues from Cognitive grew by a massive 190.3%.

This success is illustrated by the increasing adoption of Microsoft’s Cognitive services across industries and sectors. Let’s consider two case studies: one for increased operational efficiency and productivity, and another to address the critical concern of wildlife preservation.

With over 100,000 employees in 90 countries, Tech Mahindra was struggling to keep up with the rising volume of employee service requests. It was receiving up to 13,000 queries a month, holding back consumer-grade experiences in the workplace. Tech Mahindra leveraged Microsoft’s cognitive services to create a virtual assistant called UVO, which used natural language processing (NLP) to handle requests like leave applications, call routing, and travel expense management. Under the hood, it leveraged Microsoft’s Speech API, Bing Spell Check, and Azure Machine Learning Studio to simplify operations.
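The heart of an assistant like UVO is intent routing: mapping a free-text request to the right handler. Production systems use trained NLP models; the keyword rules and intent names below are purely hypothetical, just to make the pattern concrete:

```python
# Hypothetical intents and trigger keywords for an HR assistant; a real
# deployment would replace this with a trained language-understanding model.
INTENT_KEYWORDS = {
    "leave_application": {"leave", "vacation", "pto"},
    "travel_expenses": {"travel", "expense", "reimburse"},
    "call_routing": {"call", "connect", "transfer"},
}

def route_request(text: str) -> str:
    """Return the intent whose keywords best match the request,
    falling back to a human agent when nothing matches."""
    words = set(text.lower().split())
    scores = {intent: len(words & kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback_to_human"
```

The fallback branch matters: routing only high-confidence matches to automation while escalating the rest is what lets resolution times drop without degrading answer quality.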

And the results were astounding: average query resolution time came down to just 8 seconds (from 8 to 72 hours), along with a 35% reduction in effort.

Cognitive services aren’t just useful for operational efficiency in large-scale enterprise scenarios. This was showcased by the Snow Leopard Trust, an organization devoted to protecting the population of this highly endangered animal.

There are fewer than 7,000 snow leopards left in the wild, making it imperative to understand their natural habitat and the factors influencing survival rates. Unfortunately, rugged terrain and challenging weather conditions mean that a manual survey isn’t an option. The Snow Leopard Trust has deployed motion-sensing cameras that have produced over 1 million images, and Microsoft’s machine learning capabilities are now helping to analyze this image data in a fraction of the time.

Typically, biologists must spend approximately 300 hours and juggle at least 8 different spreadsheets per camera survey to build an adequate database.

The Snow Leopard Trust partnered with Microsoft to find a smarter alternative — leveraging ML to build an image classification model powered by neural networks at scale. The result is more accurate image classifications, support for a wider set of image data sets, and data set augmentation with sequential burst shots.
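The mention of sequential burst shots hints at a typical preprocessing step: motion-triggered cameras fire short bursts, so frames captured within a few seconds of each other usually show the same animal and can be grouped into one "sighting" before classification. A minimal sketch of that grouping, with invented timestamps (this is not the Trust's actual pipeline):

```python
from datetime import datetime, timedelta

def group_bursts(timestamps, gap=timedelta(seconds=5)):
    """Cluster capture times into bursts: frames separated by more
    than `gap` start a new burst."""
    bursts = []
    for t in sorted(timestamps):
        if bursts and t - bursts[-1][-1] <= gap:
            bursts[-1].append(t)   # continue the current burst
        else:
            bursts.append([t])     # start a new burst
    return bursts

# Hypothetical capture times: a three-frame night burst, then one morning shot.
shots = ([datetime(2019, 3, 1, 2, 14, s) for s in (0, 2, 3)]
         + [datetime(2019, 3, 1, 9, 40, 10)])
```

Grouping bursts both shrinks the labeling workload (one label per sighting instead of per frame) and, as the article notes, augments the training set, since each burst supplies several views of the same animal.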

IBM’s Watson is an AI engine-as-a-service with APIs for data management and utilization, knowledge management, and the standard vision, speech, and language packages. It also offers the Watson OpenScale platform for building enterprise AI solutions.

IBM is among the leading names when it comes to AI and cognitive services. That’s why it continues to be the preferred solution for some of the world’s leading companies — let us consider two unique applications, one in data privacy compliance and the other in sports audience experiences.

Thomson Reuters understood that in a climate of changing regulations and tightening data privacy norms, it’s essential to build an up-to-date knowledge base on compliance. To offer users clarity in a complex regulatory landscape, Thomson Reuters used Watson to develop two industry-first solutions.

First, Thomson Reuters created an “Ask Watson a Question” feature to help data privacy professionals view natural language answers to their queries. From simple concepts like “What is an organization?” to more intricate queries in legal and compliance-related domains, the interface used IBM’s AI and ML services to mimic a human subject matter expert.

The next solution, called Related Concepts, predicts the future requirements of data privacy professionals and recommends additional topics they might want to explore. The biggest differentiator of the framework is its ability to evolve in tandem with the global data privacy and compliance environment. For example, every answer returned carries a confidence score, helping build trust among users and encouraging continued search until full accuracy is achieved.
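The confidence-score pattern described above can be sketched in a few lines: each candidate answer carries a score, and low-confidence answers are flagged so users know to keep exploring. The answers, scores, and 0.8 threshold below are hypothetical, not Watson's actual API output:

```python
def present_answers(candidates, threshold=0.8):
    """Sort (text, confidence) pairs by confidence, flagging answers
    below the threshold so the user keeps searching."""
    results = []
    for text, score in sorted(candidates, key=lambda c: c[1], reverse=True):
        flag = "" if score >= threshold else " (low confidence: keep searching)"
        results.append(f"{text} [{score:.0%}]{flag}")
    return results

# Invented example answers for the "What is an organization?" query.
answers = [("An organization is any legal entity that processes data.", 0.92),
           ("See the regulation's definitions section.", 0.55)]
```

Surfacing the score rather than hiding it is the trust-building move the article describes: users can judge for themselves when an answer is good enough.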

The next application was the use of AI to transform audience experiences at the US Open.

Every year, the US Open produces a vast repository of unstructured data, including around 370 hours of video footage with over 31,000 data points. This offers an incredible opportunity to apply AI and gain insights into audience experiences.

At the US Open, IBM Watson and other Cognitive services resulted in game-changing innovations. Watson assessed video clips to gauge their “highlight worthiness” and promote effective social sharing.

Next, it analyzed the crowd’s roar to assign a reaction score to every clip. Based on a player’s celebratory gestures, facial expressions, and the sound of the tennis ball, an overall “excitement level” was computed. This is a first in the global sports entertainment arena, transforming how live and streaming audiences experience the medium.
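Combining those signals (crowd roar, gestures, expressions, ball sound) into one excitement score per clip can be sketched as a weighted average. The weights and per-signal scores below are invented for illustration; they are not IBM's actual model:

```python
# Hypothetical relative importance of each signal, summing to 1.0.
WEIGHTS = {"crowd_roar": 0.4, "gestures": 0.25,
           "expressions": 0.2, "ball_sound": 0.15}

def excitement_level(signals: dict) -> float:
    """Weighted average of per-signal scores, each assumed in [0, 1];
    missing signals contribute zero."""
    return round(sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS), 3)

# Invented per-signal scores for one clip of a winning point.
clip = {"crowd_roar": 0.9, "gestures": 0.7,
        "expressions": 0.6, "ball_sound": 0.8}
```

Ranking clips by such a score is what lets a highlights reel be assembled automatically, moments after the point is played.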

AWS boasts a rich AI service library, with offerings such as Lex for conversational interfaces, Polly for multilingual text-to-speech, and Rekognition for image analysis. AWS also has a strong IoT focus, with DeepRacer for smart cars and DeepLens for video.

Combining this focus on IoT and machine learning, AWS helped GE Healthcare to improve decision-making and reduce readmission rates. GE Healthcare partnered with clinicians at the University of California to assemble a library of algorithms that would augment traditional X-ray imaging technologies.

Patient records, sensor (IoT) data, and other sources were factored into the X-ray scanning process to boost accuracy. The solution was built on the AWS Cloud using Amazon SageMaker to deploy machine learning at scale. This has already resulted in better quality of care according to 82% of healthcare decision-makers, and reduced readmission according to 63%.

While healthcare is possibly the most promising use case for cognitive services today, this technology is also making massive ripples in less widely known areas such as language learning. Consider the story of Duolingo, a highly popular learning platform used by over 3 million customers around the world.

Duolingo’s AI-based learning platform covers over 32 courses, often veering into endangered languages like Hawaiian and Navajo. According to the US State Department, it takes around 600 hours to learn a category 1 language — Duolingo makes this possible in just 15 minutes per day.

Duolingo runs on the AWS Cloud, using the PyTorch deep learning framework, with models moved into production on Amazon’s high-performance GPU instances. The solution uses Amazon DynamoDB, Amazon EMR, Amazon S3, and Spark to manage its machine learning data pipelines, while Amazon Polly handles text-to-speech. By partnering with Amazon, Duolingo saw the number of returning users jump by 12%.

And this is no surprise, since Amazon’s bundle of Cognitive services has made Duolingo’s deep learning models scalable enough to support anywhere from 100,000 to 30 million data points at a time, making 300+ million predictions every day.

Getting Started with Cognitive Cloud Implementation

Companies looking to embrace the power of Cognitive must ask themselves a few key questions: Who is the intended user? What is the dominant technology behind the existing IT environment? And is Cognitive integral to the company’s value proposition? The answers form the starting point of your journey to the Cognitive Cloud.

In the next part of this blog series, we consider best practices for Cognitive Cloud adoption and companies that have already led the way.


Written by Arvind Mehrotra

Board Advisor, Strategy, Culture Alignment and Technology Advisor
