Humankind’s quest to explore and understand the phenomena around it has brought us to this stage, where we have started expecting answers from machines. We want machines to process information and give us the answers that we cannot see or comprehend. What is this? Has the planet’s smartest animal finally lost the battle with nature, or are we planning our next big leap? In this article, I will try to explain what I have observed and understood from my analysis.
Humankind has loved to question things since time immemorial. Sometimes our needs, and at other times our inquisitiveness, have led us to discover phenomena, tools, technologies, and theories about the nature around us. We started this quest for survival, reached a point where we could live in harmony with nature, then progressed to questioning it, and finally to manipulating it for our own good.
I believe there is no layer left that we haven’t explored or tried to explore, no matter its magnitude. We have studied everything from galaxies thousands of light-years away to sub-atomic particles. Our constant endeavors have challenged physicists to re-scale their measuring instruments and build new technology so that we could fathom the scale of our discoveries.
Now, where does this upcoming era of Artificial Intelligence, aka AI, fit into the grand scheme of things?
At first, AI appeared to be just another fantasy created by the technological world, and I still regret that first impression. If I look back at the past two hundred years of documented history, this very stage, where today we are afraid of as well as excited about AI, was inevitable.
We wanted to count how many times certain things appear; hence, we discovered numbers. We wanted to tell different kinds of things apart; therefore, we started classifying them. When we realized that simple geometrical shapes could not measure the entities found in nature, we derived complex formulas.
Then came the big stage: at some point in the 1800s, we started understanding patterns and automating repetitive tasks. Surprisingly, a hand-loom became an inspiration for the first computer and for the great Charles Babbage. Our love for maths didn’t stop there. As our knowledge grew through parallel discoveries in other fields, the urge to find a logical pattern behind various phenomena became stronger.
We started understanding that most of the natural phenomena we observe can be described by a mathematical equation; the only issue, other than deriving a proper equation, was to predict or discover the variables that control the phenomenon.
For some phenomena, there is only one such variable; for some, there are two; and yet others depend on many more.
For example, if we want to increase the speed of a vehicle, we can explain and understand this using a couple of equations like:
(I am not giving the formulas for force, work, and energy, hoping that these two equations alone make my point clear.)
F = ma
v = u + at
A second example: Current = Voltage / Resistance (I = V / R)
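To make these equations concrete, here is a minimal Python sketch (the numbers are invented purely for illustration):

def final_velocity(u, a, t):
    # v = u + at: final velocity from initial velocity, acceleration, and time
    return u + a * t

def current(voltage, resistance):
    # Ohm's law: I = V / R
    return voltage / resistance

print(final_velocity(u=10.0, a=2.0, t=5.0))   # 20.0 (m/s)
print(current(voltage=12.0, resistance=4.0))  # 3.0 (amperes)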
So, all in all, where the variables number no more than ten or so, our mathematical skills are competent enough to work out the effect of those variables on the target quantity.
However, what if, along with the number of variables, the degree of those variables starts coming into the picture?
Area = π × radius²
You see where this is going.
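To see why the degree matters, here is a small sketch (values picked arbitrarily) contrasting a first-degree relationship with a second-degree one:

import math

# Linear (degree 1): doubling the time doubles the velocity gained (v = u + at, with u = 0)
print(0.0 + 2.0 * 5.0)     # 10.0
print(0.0 + 2.0 * 10.0)    # 20.0 (input doubled, output doubled)

# Quadratic (degree 2): doubling the radius quadruples the area (Area = pi * r^2)
print(math.pi * 2.0 ** 2)  # ~12.57
print(math.pi * 4.0 ** 2)  # ~50.27 (input doubled, output quadrupled)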
Not all phenomena we see in nature have linear relationships. Sometimes, even when the relationship is linear, the number of variables affecting the event is so large that either we cannot work out the weightage of those variables, or we miss some variables altogether. Our statistical tools come in handy, but at some point the data is so enormous that it is simply not possible to visualize the relevance of every given variable.
Let me explain it to you.
Take the example of cancer. In the case of cancer (take any common cancer), the number of genes it depends on sometimes runs into the hundreds. Then comes the expression level of those genes, which can range from picograms to micrograms.
So, for simplicity, I will present some imaginary but plausible numbers.
Suppose a given cancer X depends on some 120 genes, and each of these genes can be expressed at 20 different levels.
Now, to establish with reasonable certainty the cause of cancer X, we need to go through 120 × 20 = 2,400 different combinations of gene and expression level.
Let me tell you, the numbers are more complex than this. Some genes, when expressed at a certain level, will prove to be healthy, while at other levels they are fatal. I hope, in all this, you can see the complexity behind the single phenomenon that underlies this dreaded disease.
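Here is a rough sketch of that search space in Python (the gene names and counts are imaginary, not real biology): checking every gene at every expression level already gives 2,400 cases, and the moment genes start interacting in pairs, the count explodes:

from itertools import product
from math import comb

genes = [f"gene_{i}" for i in range(120)]  # 120 imaginary genes
levels = range(20)                         # 20 imaginary expression levels

# Every (gene, expression level) pair: 120 * 20 = 2400 cases to examine
pairs = list(product(genes, levels))
print(len(pairs))  # 2400

# If genes interact in pairs, each pair at each combination of levels:
# (120 choose 2) * 20 * 20 combinations
print(comb(120, 2) * 20 * 20)  # 2856000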
Take another example: the share market.
How many variables can an ordinary person observe without getting involved in the technicalities behind the market? (I am no expert either!)
I will give you an example. The Indian share market can be affected by a change of government, policy changes in a fiscal year, the budget, the monsoon forecast, global markets, the value of the Rupee vs. the US Dollar, and the situation in the Middle East oil market. There are many more, but even if I consider only these, it is tough to come up with a model that can explain all the ups and downs of the market.
The Internet generates roughly 10^18 bytes (an exabyte) of data every day. We have evolved big-data technology to deal with data on such a scale, but analyzing it needs something more.
Data Science comes to our rescue: it helps in cleaning the data and organizing it to some extent. It helps us find the variables that have some relationship with the target entity we are interested in. Data Science lets us see with the naked eye that, yes, there is some degree of correlation between these variables, or, if not, that we can dispose of a variable without any issue.
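As a rough illustration of that step, here is a minimal sketch using pandas (the column names and numbers are invented): we measure how strongly each variable correlates with the target and discard the ones that show almost no relationship:

import pandas as pd

# Invented data: which variables move together with the target?
df = pd.DataFrame({
    "target": [1.0, 2.1, 3.2, 4.1, 5.0],
    "var_a":  [2.0, 4.0, 6.1, 8.2, 10.0],  # tracks the target closely
    "var_b":  [5.0, 1.0, 4.0, 2.0, 3.0],   # essentially noise
})

# Correlation of every variable with the target
corr = df.corr()["target"].drop("target")
print(corr)

# Keep variables with a visible relationship, dispose of the rest
relevant = corr[corr.abs() > 0.5].index.tolist()
print(relevant)  # ['var_a']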
What we are left with is beautifully organized data, which, even though large in shape and size, still carries meaning for us. Now, our human brain, however intelligent, cannot sit and digest data on this scale. Hence we need some assistance from machines: machines that are good at repetitive work, excellent at applying a given equation, and able to provide us a result that may still be hard to comprehend but is reduced in scale.
Here, machine learning helps us reach a place where things start becoming evident. After that, either the variables fall into some model that explains the phenomenon, or we come to realize that a still higher level of understanding is required.
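Here is a minimal sketch of that hand-off using scikit-learn (the data is invented): we supply only examples, and the machine works out the weights of a simple model for us:

import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data generated by a hidden rule: y = 3*x1 + 2*x2 + 1
X = np.array([[1, 1], [2, 1], [2, 3], [4, 2], [5, 5]])
y = 3 * X[:, 0] + 2 * X[:, 1] + 1

# The machine recovers the weights from the examples alone
model = LinearRegression().fit(X, y)
print(model.coef_)              # ~[3. 2.]
print(model.intercept_)         # ~1.0
print(model.predict([[3, 3]]))  # ~[16.] since 3*3 + 2*3 + 1 = 16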
There may be some variables that affect other variables, which in turn decide the final result.
This situation, where a variable (let’s call it a node) starts affecting another node while being affected by still others, falls under the subject of deep learning, where multiple layers of nodes influence each other to reach a conclusion.
At this stage, our work of feeding the machine with equations or models ends. Now the onus is on the computer to learn and re-learn from the data. In the end, it comes up with a model that can predict the phenomenon with some confidence.
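Here is a rough sketch of such a layered learner in plain numpy (the network size, data, and learning rate are arbitrary choices for illustration): two layers of nodes, each feeding the next; we supply no equation, and the weights are adjusted again and again until the predictions fit the data:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a pattern no single-layer (linear) model can capture
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of nodes: 2 inputs -> 4 hidden nodes -> 1 output node
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer's output feeds the next layer
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to reduce the error
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.round(2))  # typically close to [[0], [1], [1], [0]]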
This is the stage where the phenomenon-understanding system we are trying to build starts acting like a brain: taking various factors into consideration, giving them weights, and then calculating the target. ANNs, CNNs, and DNNs start coming into the picture. Although we can see what the variables are, our human mind cannot make the prediction, as the underlying complexity has reached an altogether different level. These systems, while far from the intelligence of the human brain, still give us machines with enough ability to start mirroring some capabilities of human intelligence. This is what we call Artificial Intelligence, or AI for short.