Why AI and ML are new buzzwords again

April 9th, 2018 | Gangadhar Heralgi

For the most part, Artificial Intelligence was a buzzword from the 1960s until almost 2010. AI itself has changed a great deal from what it was then to what we refer to today. Most of the AI in those earlier phases depended on “Predicate Logic”; the major languages used to build inferencing systems with predicate logic include LISP and Prolog.

What is Predicate Logic? Predicate Logic comes from first-order logic and follows a declarative model: a truth table of assertions is built as a database of training material, which an inference engine then consults to see whether an inference can be drawn for a given problem. This required a lot of work in building up the assertions in the truth table. Let us look at how a simple truth table and inference can be built.

Let us say “A implies B” and “B implies C” are two assertions. Then we can infer that “A implies C”, and a simple lookup inferencing engine can be built that walks over the truth table and asserts “A implies C”.

Now let me give an example to make it even more concrete. Take two assertions, “All animals have 4 legs” and “Humans are animals”. From these two statements it can be derived that “Humans have 4 legs”, which is again an assertion. And if a new assertion is given that “X” is human, then any inferencing engine will be able to conclude that “X” is an animal and that “X” has 4 legs.
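To make the idea concrete, here is a minimal sketch (in Python, with made-up facts and rules, not any particular inference library) of how such a lookup-style inferencing engine could walk over a set of assertions and keep deriving new ones until nothing more follows:

```python
# Illustrative facts and "A implies B" style rules (all names are invented).
facts = {("human", "X")}                       # known assertions
rules = [
    (("human", "?x"), ("animal", "?x")),       # humans are animals
    (("animal", "?x"), ("has_4_legs", "?x")),  # all animals have 4 legs
]

# Forward chaining: apply every rule to every known fact, and repeat
# until no new assertion can be derived.
changed = True
while changed:
    changed = False
    for (pred_in, _), (pred_out, _) in rules:
        for pred, subj in list(facts):
            if pred == pred_in:
                new_fact = (pred_out, subj)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(facts)
# {('human', 'X'), ('animal', 'X'), ('has_4_legs', 'X')}
```

Even in this tiny sketch, every conclusion is only as good as the assertions fed in, which is exactly the weakness discussed next.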

As we can see from the above example, this was a very primitive AI engine with many drawbacks:

  • One needs to build a large number of assertions before an inferencing engine can derive anything useful
  • Walking through such truth tables and arriving at an inference is computationally very intensive and requires huge processing power
  • Truth-table inferencing results are not always accurate (as seen above, we derived “Humans have 4 legs”), and it is not easy to correct such results in a truth table
  • The programming model of building truth tables, only to end up with inaccurate inferences, is a big drawback
  • It is humanly impossible to build a sufficiently large set of truth-table assertions
  • All assertions have to be very accurate for the inferred results to be accurate

The evolution of a new approach to AI:

Instead of using truth tables and Predicate Logic, the new programming models depend on a statistical model, which itself comes from a very simple mathematical idea: “regression”, or “line fitting” algorithms. Even a simple Excel sheet can work on a large chunk of data and come up with a regression line, i.e. the equation of a line that describes how the data is spread. This mathematical model maps nicely onto the way humans think, learn, and interpret, which we can apply to machine learning as well.
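As a simple illustration of what “line fitting” means here, the sketch below fits an ordinary least-squares regression line to a small, made-up data set using only the Python standard library:

```python
# Made-up observations: each y is roughly twice its x, plus noise.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); the intercept follows from the means.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

print(f"y = {slope:.2f} * x + {intercept:.2f}")  # prints: y = 1.96 * x + 0.14
```

The fitted line summarises how the data is spread, and any new point can then be judged by how close it falls to that line.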

When a child is growing up, most of what she is doing is collecting a lot of data; for any new input she receives, she compares it and fits it against the nearest match she has learnt previously. Say the child, as she is learning, is looking at various dogs. Her brain is constantly and continuously storing images of these dogs, and when she sees a new animal, she matches its image against her previously stored images of dogs. If there is a near match, she concludes that what she has seen is a dog and keeps the new image as learned. If she interprets it wrongly, her parents guide her; this becomes feedback, and she learns one more way of pattern matching and uses it in her next interpretation of what a dog is.

Now, when we convert this into a mathematical model, we can see that all the dog images can be stored in a huge database, and when we get a new animal image, we can check whether it fits the “regression line” of all the images stored so far. Depending on how close the new image is to the regression line, one can interpret whether the animal is a dog. If the interpretation turns out to be wrong, then with feedback the learning can be corrected so that future interpretations are accurate.
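A minimal sketch of that “compare against stored examples, then learn from feedback” loop is shown below. The feature vectors and labels are entirely invented, and nearest-neighbour matching stands in for the image comparison described above:

```python
import math

# Stored "training material": (feature vector, label). Features are invented.
examples = [
    ((0.9, 0.8), "dog"),
    ((0.85, 0.75), "dog"),
    ((0.2, 0.3), "cat"),
]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    # Pick the label of the closest stored example.
    return min(examples, key=lambda ex: distance(ex[0], features))[1]

new_animal = (0.7, 0.9)
guess = classify(new_animal)
print(guess)  # "dog"

# Feedback: if the guess was wrong, store the corrected example so the
# next classification benefits from it.
correct_label = "dog"
if guess != correct_label:
    examples.append((new_animal, correct_label))
```

The more examples accumulated, the better the matching gets, which is the point made next.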

So the complexity of the Predicate Logic of the previous era has been replaced with the simplicity of a regression line and a measure of how closely the item in question fits that line. The learning is now all about big data and neural networks: all the data collected can be used as training material, and the more data you collect, the more accurate your match-or-no-match decision in a line-fitting algorithm becomes. As the system evolves, it accumulates more data, so the accuracy of its results improves with each passing day and the learning keeps increasing.

In today’s world, we have super-fast GPUs that help us do the line fitting and matching easily. The world has also changed so much that we are always dealing with large amounts of data. So the two most complex problems of predicate logic are gone: we have the training material in big data, and we have the computational power to match and decide quickly. Correcting an interpretation is also very easy compared to correcting a truth table, making AI a far simpler world.

The new-age AI, based on pattern matching, line fitting (regression), neural networks (data as training material), and machine learning (feedback), lets us work very much the way a human brain interprets and evolves over a period of time, making AI the new buzzword again, with endless possibilities.

