Natural Languages and AI

During one of the keynote speeches in India, an elderly person asked a question: why don’t we use Sanskrit for coding in AI? Though this question might look very strange to researchers at first, it has a deep background to it.

Long back, when people were trying to build language translators, the main idea was to have an intermediate language into and out of which every other language could be translated. If we build direct translators from each language A to each language B, there will be too many permutations: with 10 languages, we would have to build 90 (10*9) such translators. With an intermediate language, we would only need 10 encoders (one from each language into the intermediate language) and 10 decoders (one from the intermediate language back into each language), so there would be only 20 models in total.

So, it was obvious that there was definitely a need for an intermediate language. The question was what that intermediate language should be. Some scientists proposed Sanskrit as the intermediate language because it has a well-defined grammar. Others thought a programming language that could be dynamically loaded would be better, and designed programming languages such as Lisp. Soon enough, they all realized that neither natural languages nor programming languages such as Lisp would suffice, for multiple reasons: First, there may not be enough words to represent every emotion across different languages. Second, all of this would have to be coded manually.

The approach that became successful was the one in which the intermediate representation is a list of numbers, along with another bunch of numbers that represent the context. Instead of manually coding the meaning of each word, the idea that worked out was to learn, for each word or sentence, the bunch of numbers that represents it. This approach has been fairly successful. The idea of representing words as lists of numbers has brought a revolution in natural language understanding, and there is humongous research happening in this domain today. Please check out GPT-3, DALL-E, and Imagen.

If you subtract woman from Queen and add man, what should be the result? It should be King, right? This can easily be demonstrated using word embeddings.

Queen — woman + man = King

Similarly, Emperor — man + woman = Empress

Yes, this works. Each of these words is represented by a list of numbers, so we are truly able to represent the meaning of words with a bunch of numbers. If you think about it, we learned the meaning of each word in our mother tongue without using a dictionary. Instead, we figured the meaning out from the context.
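You can check such analogies yourself with pre-trained word vectors. Below is a minimal sketch using gensim and publicly available GloVe vectors; the specific model name is just one readily available choice, not the only option.

import gensim.downloader as api

# Load pre-trained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-100")

# queen - woman + man  ->  expected to be close to "king"
print(vectors.most_similar(positive=["queen", "man"], negative=["woman"], topn=1))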

In our minds, we have some sort of representation of a word which is definitely not in the form of some other natural language. Based on the same principle, the algorithms also figure out the meaning of words in terms of a bunch of numbers. It is very interesting to understand how these algorithms work; they work on principles similar to how humans learn. They go through a large corpus of data, such as Wikipedia or news archives, and figure out the numbers with which each word can be represented. The problem is one of optimization: come up with numbers to represent each word such that the distance between words appearing in similar contexts is very small compared to the distance between words appearing in different contexts.

The word Cow is closer to Buffalo than to Cup because Cow and Buffalo usually appear in similar contexts in sentences.
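The same pre-trained vectors let us measure this closeness directly as a similarity score. A rough sketch follows; the exact numbers depend on the embedding used.

import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# Cow should score higher with buffalo than with cup.
print(vectors.similarity("cow", "buffalo"))   # relatively high
print(vectors.similarity("cow", "cup"))       # noticeably lower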

So, in summary, it is unreasonable to argue that we should still be using a natural language to represent the meaning of a word or sentence.

I hope this makes sense to you. Please post your opinions in the comments.

Writing Custom Optimizer in TensorFlow Keras API

Recently, I came up with an idea for a new optimizer (an algorithm for training neural networks). In theory, it looked great, but when I implemented it and tested it, it didn’t turn out to be good.

Some of my learnings are:

  1. Neural networks are hard to predict.
  2. Figuring out how to customize TensorFlow is hard because the main documentation is messy.
  3. Theory and practice are two different things. The more hands-on you are, the higher your chances of trying out an idea and thus iterating faster.

I am sharing my algorithm here. Even though this algorithm may not be of much use to you, it should give you ideas on how to implement your own optimizer using TensorFlow Keras.

A neural network is basically a set of neurons connected to input and output. We need to adjust the connection strengths (weights) such that the network gives the least error for a given set of inputs. To adjust the weights we use training algorithms. One brute-force approach could be to try all possible combinations of weights, but that would be far too time-consuming. So, we usually use greedy algorithms, most of which are variants of Gradient Descent. In this article, we will write our own custom algorithm to train a neural network. In other words, we will learn how to write our own custom optimizer using TensorFlow Keras.
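Before getting to the custom optimizer itself, here is a minimal sketch of the Gradient Descent idea described above: compute the gradient of the loss with respect to the weights and nudge each weight against its gradient. This is just an illustration, not my custom algorithm; the model and data below are dummies.

import tensorflow as tf

# A tiny dummy model and batch, just to have some weights to adjust.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
loss_fn = tf.keras.losses.MeanSquaredError()
learning_rate = 0.01

x = tf.random.normal((32, 3))
y = tf.random.normal((32, 1))

# Compute the loss and its gradients with respect to the weights.
with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)

# The "adjust the connection strengths" step: move each weight a little
# against its gradient to reduce the error.
for var, grad in zip(model.trainable_variables, grads):
    var.assign_sub(learning_rate * grad)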

Continue reading “Writing Custom Optimizer in TensorFlow Keras API”

Understanding Computer Vision with Deep Learning – Free Webinar

CloudxLab conducted a successful webinar on “Introduction to Machine Learning” on the 15th of October, 2019. It was a 2-hour session in which the instructor explained the concepts of Understanding Computer Vision with Deep Learning.

More than 250 learners from around the globe attended the webinar. The participants were from countries including the United States, Canada, Australia, Indonesia, India, Thailand, Philippines, Malaysia, Macao, Japan, Hong Kong, Singapore, United Kingdom, Saudi Arabia, Nepal, and New Zealand.

Continue reading “Understanding Computer Vision with Deep Learning – Free Webinar”

Fashion-MNIST using Deep Learning with TensorFlow Keras

A few months back, I presented the results of my experiments with Fashion-MNIST using Machine Learning algorithms, which you can find in the blog post below:

https://cloudxlab.com/blog/fashion-mnist-using-machine-learning/

In the current article, I am presenting the results of my experiments with Fashion-MNIST using Deep Learning (Convolutional Neural Network – CNN), which I have implemented using the TensorFlow Keras APIs (version 2.1.6-tf).
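To give a flavour of the setup, here is a minimal sketch of a CNN on Fashion-MNIST with the TensorFlow Keras API. It is not the exact architecture or hyperparameters used in my experiments; for those, please see the full article.

import tensorflow as tf

# Load Fashion-MNIST, add a channel dimension and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))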

Continue reading “Fashion-MNIST using Deep Learning with TensorFlow Keras”

Things to Consider While Managing Machine Learning Projects

Generally, Machine Learning (or Deep Learning) projects are quite unique and quite different from traditional web application projects due to the inherent complexity involved in them.

The goal of this article is not to go through the full project management life cycle, but to discuss a few complexities and finer points which may impact different project management phases and aspects of a Machine Learning (or Deep Learning) project, and which should be taken care of to avoid any surprises later.

Below is a quick ready reckoner for the topics that we will be discussing in this article.

Continue reading “Things to Consider While Managing Machine Learning Projects”

Conference on Computer Vision at Google Asia, Singapore

Deep learning algorithms and frameworks have changed the approach to computer vision entirely. With recent developments in computer vision using Convolutional Neural Networks, such as YOLO, a new era has begun. It will open doors to new industries as well as personal applications.

After the successful bootcamps held at IIT Bombay, NUS Singapore, RV College of Engineering, etc., CloudxLab, in collaboration with IoTSG and Google Asia, conducted a successful conference on Understanding Computer Vision with AI using TensorFlow on May 11, 2019, at the Google Asia office in Singapore.

Continue reading “Conference on Computer Vision at Google Asia, Singapore”

Creating AI Based Cameraman

Whenever we have live talks at CloudxLab, whether in presentations or at a conference, we want to live stream and record them. The main challenge is that the presenter goes out of focus as they move around, and hiring a cameraman for a three-hour session is not a viable option for us. So, we thought of creating an AI-based pan-and-tilt platform which keeps the camera focused on the speaker.

Here are the step-by-step instructions to create such a camera, along with the code needed.
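To give an idea of the core tracking logic, below is a minimal, hypothetical sketch (not the project's actual code): detect the speaker's face with OpenCV and compute how far it is from the centre of the frame, an error a pan-and-tilt controller could then correct. The real setup may use a different detection model.

import cv2

# A simple Haar-cascade face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # default webcam

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Offset of the face centre from the frame centre; a real rig would
        # feed these errors to the pan and tilt servos.
        dx = (x + w / 2) - frame.shape[1] / 2
        dy = (y + h / 2) - frame.shape[0] / 2
        print("pan error:", dx, "tilt error:", dy)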

Continue reading “Creating AI Based Cameraman”

Regression with Neural Networks using TensorFlow Keras API

As part of this blog post, I am going to walk you through how an Artificial Neural Network figures out a complex relationship in data by itself, without much hand-holding from us. You can modify the data generation function and observe whether the network is still able to predict the result correctly. I am going to use the Keras API of TensorFlow, which makes it really easy to create Deep Learning models.

Machine learning is about the computer figuring out relationships in data by itself, as opposed to programmers figuring them out and writing code/rules. Machine learning is generally categorized into two types: supervised and unsupervised. In supervised learning, the training data comes with the answers (the supervision), and it is further classified into regression and classification. In classification, we have training data with features and labels, and the machine should learn from this data how to label a record. In regression, the computer/machine should be able to predict a value, mostly numeric. An example of regression is predicting the salary of a person based on various attributes: age, years of experience, domain of expertise, gender.

The notebook containing all the code is available on GitHub as part of the cloudxlab repository at the location deep_learning/tensorflow_keras_regression.ipynb. I am going to walk you through the code from this notebook here.

Generate Data: Here we are going to generate some data using our own function. This function is non-linear, so a usual line fitting may not work for it.

def myfunc(x):
    # Piecewise function: the multiplier jumps at x = 30 and x = 60,
    # so a single straight line cannot fit the output.
    if x < 30:
        mult = 10
    elif x < 60:
        mult = 20
    else:
        mult = 50
    return x * mult
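With data generated by myfunc above, a small Keras network can be fitted to it. The sketch below is an assumed, simplified version of what the notebook does, not its exact code:

import numpy as np
import tensorflow as tf

# Generate training data from the non-linear function defined above.
x = np.random.uniform(0, 100, (1000, 1))
y = np.array([myfunc(v) for v in x[:, 0]])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),   # single numeric output: regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, verbose=0)

# Predict for a few sample inputs in each of the three regions.
print(model.predict(np.array([[25.0], [45.0], [75.0]])))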
Continue reading “Regression with Neural Networks using TensorFlow Keras API”

Deploying Machine Learning model in production

In this article, I am going to explain the steps to deploy a trained and tested Machine Learning model in a production environment.

Though this article talks about a Machine Learning model, the same steps apply to a Deep Learning model too.

Below is a typical setup for deployment of a Machine Learning model, details of which we will be discussing in this article.

Process to build and deploy a REST service (for the ML model) in production

The complete code for creating a REST service for your Machine Learning model can be found at the below link:

https://github.com/cloudxlab/ml/tree/master/projects/deploy_mnist

Let us say you have a trained, fine-tuned, and tested Machine Learning (ML) model – sgd_clf – which was trained and tested using an SGD classifier on the MNIST dataset. Now you want to deploy it in production so that consumers of this model can use it. What different options do you have to deploy your ML model in production?
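As one illustration, here is a minimal sketch (assumed, not the repository's exact code) of exposing such a model as a REST service with Flask; the file name sgd_clf.pkl and the request format are hypothetical.

import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("sgd_clf.pkl")   # hypothetical path to the saved model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"image": [784 pixel values]}.
    pixels = np.array(request.get_json()["image"]).reshape(1, -1)
    prediction = model.predict(pixels)
    return jsonify({"digit": int(prediction[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)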

Continue reading “Deploying Machine Learning model in production”

One-on-one discussion on Gradient Descent

Usually, the learners from our classes schedule 1-on-1 discussions with the mentors to clarify their doubts. So, I thought of sharing the video of one of these 1-on-1 discussions, which one of our CloudxLab learners – Leo – had with Sandeep last week.

Below are the questions from the same discussion.

You can go through the detailed discussion that happened around these questions in the video attached below.

One-on-one discussion with Sandeep on Gradient Descent
Continue reading “One-on-one discussion on Gradient Descent”