Welcome to yet another chapter of AI for managers. In this chapter, we will briefly discuss well-known implementations of AI algorithms in real-world projects.
We will look into 4 familiar industries that heavily use AI.
We will see how Alexa has changed the way we interact with intelligent agents,
how product recommendation engines are revolutionizing eCommerce,
how AI is being used in self-driving cars,
and how AI is helping to lower data center power consumption costs.
Alexa has quickly become a household appliance, ranking among the top 5 items sold on amazon.com continuously for the last couple of years.
This is mainly because of the ease with which it performs 3 key tasks - voice recognition, responding to your queries, and connecting to other IoT devices.
Alexa is a powerful little piece of hardware capable of recording your voice, sending it to the cloud, and playing back the response it receives from the cloud.
Records your speech
If the speech contains the invocation word "Alexa", sends it to Amazon servers
Breaks the voice down into individual words and their context
Converts voice to text
Understands context and intent (keywords / Amazon Lex)
Responds with information or executes a task
The first part of Alexa is the voice-to-text engine. It is basically a model that has been trained on voice samples along with text labels. Its role is to convert voice to text.
The second part is understanding the text. This again is a model that has been trained to infer the context from text and map it to the relevant answer.
The third part is the text-to-speech converter. This converter is again a model, one that has been trained to generate speech from text.
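Put together, the three stages form a simple pipeline. Here is a minimal sketch in Python; every function below is a hypothetical stand-in with dummy output, not Amazon's actual services or APIs:

```python
# Illustrative sketch of Alexa's three-stage pipeline.
# All functions are hypothetical placeholders with dummy return values.

WAKE_WORD = "alexa"

def speech_to_text(audio: bytes) -> str:
    """Stage 1: a model trained on voice samples with text labels converts audio to text."""
    return "alexa what is the weather in seattle"  # dummy transcript

def infer_intent(text: str) -> dict:
    """Stage 2: a model trained to infer context maps the text to an intent."""
    return {"intent": "GetWeather", "city": "seattle"}  # dummy intent

def text_to_speech(text: str) -> bytes:
    """Stage 3: a model trained to generate speech synthesizes the reply audio."""
    return text.encode()  # dummy stand-in for synthesized audio

def handle_utterance(audio: bytes):
    text = speech_to_text(audio)
    if WAKE_WORD not in text.lower():        # only wake-word utterances proceed
        return None
    intent = infer_intent(text)
    reply = f"Executing {intent['intent']}"  # answer the query or trigger a task
    return text_to_speech(reply)

print(handle_utterance(b"raw microphone audio"))
```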
The next example is recommender systems.
Some of the recommendation engines that we come across daily -
Facebook Friends Suggestion
Amazon Product recommendation
Netflix Movie Recommendation
Broadly, there are two methods of choosing which products to recommend - collaborative filtering and content-based filtering.
Collaborative filtering finds users similar to the customer and recommends the products those users bought or interacted with. By interactions, we refer to either viewing, buying or rating a product.
To use collaborative filtering effectively, the system needs a large dataset of active users who have rated products before, in order to make accurate predictions.
In content-based filtering, recommendations are made based on the information the customer has already generated on the system. In this method, products similar to the ones the customer has interacted with in the past are recommended.
Let's understand collaborative filtering with an example from Amazon's product recommendation engine. Imagine that you bought a GoPro camera. The system finds other customers similar to you (in terms of buying and browsing habits, i.e. they either bought a GoPro or were actively searching for one) and recommends the next product to you. The assumption here is that you are likely to buy the recommended product, since other users who are similar to you in their buying behavior also bought it.
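As a rough illustration of the mechanics (not Amazon's actual algorithm; the matrix and products below are invented), user-based collaborative filtering can be sketched with a user-item interaction matrix and cosine similarity:

```python
import numpy as np

# Toy user-item matrix (rows = users, columns = products).
# 1 = bought/viewed/rated, 0 = no interaction. Real matrices are huge and sparse.
interactions = np.array([
    [1, 1, 0, 0],  # you: bought a GoPro (col 0) and a memory card (col 1)
    [1, 1, 1, 0],  # a similar user, who also bought a tripod (col 2)
    [0, 0, 1, 1],  # a dissimilar user
], dtype=float)

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

you = interactions[0]

# Weight every other user's interactions by their similarity to you.
scores = np.zeros(interactions.shape[1])
for other in interactions[1:]:
    scores += cosine_similarity(you, other) * other

scores[you > 0] = 0  # never recommend items you already have
print("recommend product index:", int(np.argmax(scores)))  # -> 2, the tripod
```

The intuition matches the GoPro example: the user most similar to you also bought a tripod, so the tripod receives the highest score.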
In content-based filtering, Amazon's recommendation engine shows you similar products based on your past behavior. Imagine you bought a book called "Equations in Science". Since Amazon has all the details of the books you have bought or searched for in the past, it recommends books that are closely related to "Equations in Science", along with books related to the others you have bought or searched for.
Let's get into the details of what the AI algorithms actually do when performing a recommendation. Let's take the case of content-based filtering. Imagine that you bought a book. Every product in Amazon's catalog is populated with a list of features or metadata - in our case, for example, the price, title, author, category, topic, and the keywords used in its reviews.
A scoring algorithm then ranks other books by a similarity index, based on how similar their features are to those of the book that was already bought.
The recommendation engine lists the highest-scoring matches. By definition, the higher the score, the more similar a book is to the one you bought. This method relies solely on the object's features, not on the preferences of other users.
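Here is a minimal sketch of that scoring idea, assuming each book's metadata is reduced to a set of keywords, with Jaccard overlap as the similarity index (titles and keywords are made up; a production system would use far richer features and learned weights):

```python
# Content-based scoring sketch: represent each book by its metadata keywords
# and rank the rest of the catalog by similarity to the purchased book.
catalog = {
    "Equations in Science":    {"science", "math", "equations", "reference"},
    "The Math of Physics":     {"science", "math", "physics", "equations"},
    "A History of Algebra":    {"math", "history", "equations"},
    "Gardening for Beginners": {"hobby", "plants", "outdoors"},
}

def jaccard(a: set, b: set) -> float:
    """Similarity index: keyword overlap (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b)

bought = "Equations in Science"
scores = {
    title: jaccard(catalog[bought], features)
    for title, features in catalog.items()
    if title != bought
}

# The highest-scoring matches become the recommendations.
for title, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {title}")
```

Note that only the book's own features enter the score - no other user's preferences are involved, which is exactly what distinguishes this method from collaborative filtering.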
On a side note, when Amazon open-sourced its proprietary deep learning library DSSTNE (pronounced "destiny"), we got a glimpse of how its recommendation engine can achieve such good accuracy. DSSTNE works well with large, sparse feature vectors - cases where other solutions, like Apache Spark ML, fail to perform. It is written in C++, with Python support planned.
Virtually every automotive company is in the race to put self-driving cars on the street. But some non-automotive companies are contributing to the ecosystem in unconventional ways. One of them is NVIDIA - the company that makes GPU cards.
We can divide the self-driving car into three parts: eyes, brain, and backbone.
The eyes stand for the sensors, the backbone includes the IoT connectivity, and the brain represents the AI models - the software, you could say.
A self-driving car can be considered a giant data-collection machine. It constantly scans the environment around it using its various sensors and builds a virtual image of its surroundings.
By the way, this is how a 3D virtual image created by the LIDAR looks. The car then combines this 3D virtual image with its view of the real world.
Training the model in a real-world scenario, i.e. on the road, is neither feasible nor cheap, so the model is trained on hours of simulated data. (In simple terms, a simulation is a computer model whose run is constrained by a certain number of variables.) A human then closely monitors the system's actions as it acts in the real world, providing corrective feedback whenever the system makes a mistake.
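A highly simplified sketch of this train-in-simulation, correct-on-the-road loop; the `Simulator`, `DrivingModel`, and `human_feedback` interfaces below are invented for illustration:

```python
# Human-in-the-loop training sketch. All classes and functions here are
# hypothetical stand-ins, not any vendor's actual API.

class Simulator:
    """Yields synthetic sensor frames paired with the action a safe driver would take."""
    def frames(self, n: int):
        for _ in range(n):
            yield {"lidar": [], "camera": []}, "keep_lane"

class DrivingModel:
    def train(self, frame, correct_action):
        pass                # update model weights toward the correct action
    def act(self, frame):
        return "keep_lane"  # choose an action from the sensor frame

model = DrivingModel()

# Phase 1: cheap, safe training on hours of simulated driving.
for frame, correct_action in Simulator().frames(n=10_000):
    model.train(frame, correct_action)

# Phase 2: on the road, a human supervisor watches each action, and every
# correction becomes a fresh training example.
def human_feedback(frame, action):
    return None             # returns a corrected action when the model errs

def supervised_step(frame):
    action = model.act(frame)
    correction = human_feedback(frame, action)
    if correction is not None:
        model.train(frame, correction)
```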
Taking the discussion forward, let's look at a case study from a different domain - power optimization in a data center.
Recently, Google announced that using its own AI - from DeepMind - it was able to reduce the power consumption in its data centers by almost 40%. This is a significant step forward. Where traditional rule-based approaches fail, AI works well in these environments for three main reasons.
The large pieces of industrial equipment used in data centers interact with the environment and with each other in very complex, non-linear ways, where traditional rule-based systems fail to adapt.
Traditional systems cannot envision every possible scenario and change - like rapid shifts in weather and other internal and external factors - and it is difficult to customize a system to cater to every possibility.
Every data center is unique in its equipment, requirements, and environment, so a generalized model is needed - one that adapts without endless customization.
All these factors helped DeepMind's algorithms achieve what traditional industrial systems could not.
DeepMind's work can be roughly summarized in three steps (a code sketch follows the list) -
Collect all the historical data from the thousands of sensors and input devices, along with other internal and external operating variables of the data center.
Train an ensemble of neural networks on all possible variations of the input features.
Predict the 2 key external factors (temperature and pressure) for the next hour.
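A rough sketch of these three steps, using synthetic stand-in sensor data and scikit-learn (DeepMind's actual models and features are not public in this form):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Step 1 stand-in: synthetic "historical sensor data". Each row is one hour
# of readings (cooling load, pump speeds, outside weather, ...); the targets
# are the next hour's temperature and pressure.
X = rng.normal(size=(2000, 20))
y = np.column_stack([
    X[:, :5].sum(axis=1),    # stand-in for next-hour temperature
    X[:, 5:10].sum(axis=1),  # stand-in for next-hour pressure
]) + rng.normal(scale=0.1, size=(2000, 2))

# Step 2: train an ensemble of neural networks on varied views of the data
# (here, bootstrap resamples of the historical records).
ensemble = []
for seed in range(5):
    idx = rng.choice(len(X), size=len(X), replace=True)
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=seed)
    net.fit(X[idx], y[idx])
    ensemble.append(net)

# Step 3: average the ensemble's outputs for the latest readings to
# forecast the 2 key factors for the next hour.
latest = X[-1:]
pred = np.mean([net.predict(latest) for net in ensemble], axis=0)
print(f"next hour -> temperature: {pred[0, 0]:.2f}, pressure: {pred[0, 1]:.2f}")
```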
What DeepMind did is very similar to a traditional train-and-predict machine learning model.
But the important takeaway is the demonstration that a very complex traditional system can be modeled and controlled by an AI model, achieving very significant performance improvements.
This graph summarizes the performance improvements that were obtained from implementing the AI system.
Notice how the power usage drops significantly when the AI system takes over operations in the data center.
Thank you for watching the "Generalized AI" chapter.