
Evolution of AI - A brief journey back in time





The idea of Artificial Intelligence is anchored in how we leverage the collective intelligence of humans and computers. In simple terms, Artificial Intelligence can be defined as
AI = Machines acting in ways that seem intelligent
But how do we define intelligence in the context of computers? To understand how we got here, where we are going, and where we might go, let us take a brief journey back in time.

1950's : Alan Turing published his famous paper "Computing Machinery and Intelligence" and proposed what came to be known as the "Turing Test" to define intelligence. The premise of the test was:
If a human could not tell within five minutes whether they were talking to a computer or a person, then the computer passed the test for intelligence.




Rather than trying to prove intelligence outright, Turing was mostly focused on overcoming the objections of his time, arguing in effect that if humans can do it, machines could too.


1960's (The First Wave): Later on, in the 60's, mathematicians like Marvin Minsky entered the field. Minsky called intelligence a "suitcase word": like a big suitcase, the word is stuffed with many different things, an analogy for why it is too hard to precisely define intelligence.

Many eminent mathematicians of the 60's paved the way for the first wave of Artificial Intelligence. While some focused on applying logic to specify what computers need to do, others focused on modeling human thinking to solve simple puzzles. The field advanced by writing programs built around problem reduction: taking hard problems and breaking them into smaller pieces.


1970's (Second Wave): As research progressed, Turing's successors built systems reminiscent of modern-day Siri and Alexa, where computers were able to understand drawings and learn from examples. This period turned out to be the second wave of AI evolution, in which AI became more about representing the problem correctly:
If you get the representation right, then you are almost done.
Representations of the problem can be sounds, words, symbols, maps, pictures, etc.



As time progressed, AI evolved toward a more modern definition:
AI = Models of Thinking + Models of Perception + Models of Action
where the idea of a model is to behave the same way as the real thing,
and a model = understand, explain, predict, control.

Combining the representations with the models framework, we can define AI as follows:

AI = Representations that support models of thinking, perception & action
where representations = sounds, words, symbols, maps, pictures, etc.
where models = understand, explain, predict, control



Anchoring on these learnings, Prof. Patrick Winston at MIT lays out the modern definition of AI as follows:
AI = Architectures that deploy methods for computers to not just do, but also learn to do, where the computers are enabled by constraints and exposed by representations that support models of thinking, perception, and action
where methods = statistical, analytical, mathematical, computational, etc.
where constraints are defined by the problem we are trying to solve and the data we anchor on,
where representations = sounds, words, symbols, maps, pictures, etc.
where models = understand, explain, predict, control
The field of AI advanced further in the 1970's when Ed Shortliffe at Stanford developed the MYCIN system for diagnosing a class of infectious diseases. Essentially, he created a rule-based expert system: a collection of if-then rules representing expert knowledge.
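
To make the idea concrete, here is a minimal sketch of a rule-based expert system in Python. The rules and facts are invented for illustration; MYCIN's actual rule base was far larger and weighed evidence with certainty factors.

# A minimal forward-chaining rule engine.
# The medical rules and facts below are hypothetical, not MYCIN's actual rules.

rules = [
    # (premises, conclusion): if all premises are known facts, assert the conclusion
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "gram_negative_stain"}, "suspect_gram_negative_infection"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are satisfied until nothing new is learned."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck", "gram_negative_stain"}, rules))
# The engine deduces both suspect_meningitis and suspect_gram_negative_infection.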


1974: Then in 1974, a new sub-field called Machine Learning, fueled by lots of data and lots of computing, began to emerge. In his Harvard PhD thesis, Paul Werbos described training multi-layer (deep) neural networks using the backpropagation algorithm.

Machine Learning (a subset of AI) = Lots of Data + Lots of Computing
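
As a minimal sketch of the idea, and not Werbos's original formulation, here is a tiny two-layer network trained by backpropagation to learn XOR, using only NumPy:

import numpy as np

# A tiny two-layer network trained with backpropagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)        # hidden activations
    out = sigmoid(h @ W2)      # network output

    # Backward pass: propagate the error gradient layer by layer
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient pushed back to the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]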

Neural networks can perform classification tasks such as picture classification, but a network does not know what the job is about; it does not feel, and it lacks the context of the real world, having never seen one. This has brought a lot of criticism of the use of AI/ML in real-world applications: there were clear examples of false positives, and the lack of explainability in AI/ML led to even more questions than answers.

Nevertheless, the evolution of Machine Learning (ML) and deep neural networks drew a lot of excitement as well as fear, with legends and myths prevalent in the mainstream:

"Once a Computer get control, we might never get it back. We should survive at their sufferance. If we are lucky, they might keep us as pets" --- Marvin Minsky

"With AI, we are summoning the demon. AI is our biggest existential threat" --- Elon Musk

AI Winter (Mid 1970's - Mid 1980's): Advancements in AI took a break between the mid 70's and mid 80's, a period that came to be called the AI winter. During this time very few computer programs replaced human experts, and the advancement of AI in the business world was met with skepticism, with VC's dismissively tagging companies using AI as "yet another AI".

1990's: Following the AI winter, Rodney Brooks had yet another answer for advancing the field: the subsumption architecture, applied in robotics. The idea was to stack several layers of behavior, with the lowest layer focused on obstacle avoidance, the next layer on wandering, the next on exploration, and so on, with layers overriding one another as needed.
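
Here is a minimal sketch of that layered control idea in Python; the sensor readings, behaviors, and priority ordering are invented for illustration and greatly simplify Brooks's design:

# A subsumption-style controller: behaviors are checked in priority order,
# and the first behavior that wants control decides the action each cycle.
# Sensor fields and actions are hypothetical.

def avoid_obstacles(sensors):
    if sensors.get("distance_ahead", 99.0) < 0.3:
        return "turn_away"
    return None  # pass control to the next layer

def explore(sensors):
    if sensors.get("unexplored_heading") is not None:
        return "head_to_unexplored"
    return None

def wander(sensors):
    return "random_walk"  # default behavior when nothing else claims control

layers = [avoid_obstacles, explore, wander]  # safety reflex gets top priority

def control_step(sensors):
    for behavior in layers:
        action = behavior(sensors)
        if action is not None:
            return action
    return "idle"

print(control_step({"distance_ahead": 0.2}))                            # turn_away
print(control_step({"distance_ahead": 2.0, "unexplored_heading": 90}))  # head_to_unexplored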


During this time, rule-based expert systems proliferated in scheduling problems at airlines and airports. AI was used not to replace experts or humans, but to do things that could not be done by computers or humans alone.

AI then went dormant, with computers used mostly for routine work. Few paid attention until the arrival of Apple's Siri in 2011 paved the way for the third wave.


2011 - Present (Third Wave): Following the launch of Apple's Siri in 2011, IBM's Watson program played Jeopardy! and beat the human champions. With this breakthrough, IBM started investing heavily in cognitive computing. By this time, the other major tech companies (Google, Amazon, Microsoft) had also seen an explosion of data and analytics and the need to invest in AI/ML research.

In the modern era, AI has evolved around goals such as the following:


1. Reasoning/Problem Solving : Solving puzzles and making logical deductions, often by searching through a space of possible states, as in the sketch below.
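
As a small illustration, a breadth-first search over states solves the classic two-jug measuring puzzle (a toy example chosen for brevity):

from collections import deque

# Breadth-first search over puzzle states: measure exactly 2 liters
# using a 3-liter and a 5-liter jug.

def successors(state):
    a, b = state  # liters currently in the 3L and 5L jugs
    moves = [(3, b), (a, 5), (0, b), (a, 0)]                  # fill or empty a jug
    pour = min(a, 5 - b); moves.append((a - pour, b + pour))  # pour 3L jug into 5L jug
    pour = min(b, 3 - a); moves.append((a + pour, b - pour))  # pour 5L jug into 3L jug
    return moves

def solve(start=(0, 0), goal=2):
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if goal in path[-1]:
            return path
        for nxt in successors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

print(solve())  # [(0, 0), (0, 5), (3, 2)]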


2. Knowledge Representation : Answering questions intelligently and making deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas; a tiny example follows.
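
One common formal representation stores knowledge as subject-predicate-object triples and deduces new facts from them. A minimal sketch, with invented facts and a single transitivity rule:

# Knowledge as subject-predicate-object triples, with one deduction rule:
# "is_a" is transitive (if X is_a Y and Y is_a Z, then X is_a Z).
# The facts are illustrative, not drawn from a real knowledge base.

triples = {
    ("penicillin", "is_a", "antibiotic"),
    ("antibiotic", "is_a", "drug"),
    ("penicillin", "treats", "bacterial_infection"),
}

def deduce(triples):
    """Compute the transitive closure of the is_a relation."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, p1, y) in list(triples):
            for (y2, p2, z) in list(triples):
                if p1 == p2 == "is_a" and y == y2 and (x, "is_a", z) not in triples:
                    triples.add((x, "is_a", z))
                    changed = True
    return triples

kb = deduce(triples)
print(("penicillin", "is_a", "drug") in kb)  # True: deduced rather than stated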


3. Planning and Decision Making : An "agent" is anything that takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision making, the agent has preferences: there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility, as in the sketch below.
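
A minimal sketch of that calculation; the actions, outcome probabilities, and utilities below are made up for illustration:

# Choose the action with maximum expected utility.
# Each action maps to (probability, utility) pairs over its possible outcomes.
# All numbers are hypothetical.

actions = {
    "take_umbrella":  [(0.3, 60), (0.7, 80)],   # (P(rain), utility), (P(no rain), utility)
    "leave_umbrella": [(0.3, 0),  (0.7, 100)],  # soaked if it rains, unburdened if not
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))   # take_umbrella 74.0, leave_umbrella 70.0

best = max(actions, key=lambda a: expected_utility(actions[a]))
print("best action:", best)                     # take_umbrella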


4. Learning or Machine Learning (ML) : Lots of Data + Lots of Computing
Key advances in AI include supervised and unsupervised machine learning, both sketched after the definitions below.

Supervised Machine Learning : Supervised learning, also known as supervised machine learning, is a subcategory of machine learning and artificial intelligence. It is defined by its use of labeled datasets to train algorithms to classify data or predict outcomes accurately.
Unsupervised Machine Learning : Unsupervised learning, also known as unsupervised machine learning, uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention.
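
A minimal sketch of both, assuming scikit-learn is installed; the data here is synthetic, generated on the fly:

from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Supervised learning: fit a classifier on labeled data, then score it on held-out data.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: cluster the same points with the labels withheld.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(2)])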


5. Natural Language Processing : Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.

Modern deep learning techniques for NLP include word embeddings (vectors that represent a word by the company it keeps, often learned from how often one word appears near another), transformers (a deep learning architecture that finds patterns in text using attention), and others. In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023 these models were able to achieve human-level scores on the bar exam, the SAT, the GRE, and many other real-world tests. A small embedding sketch follows.
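
As a minimal sketch of the co-occurrence idea behind classic count-based word embeddings (the corpus is tiny and invented; real embeddings are trained on billions of words):

import numpy as np

# Count-based word vectors: represent each word by how often it appears
# next to every other word, then compare words by cosine similarity.

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Build a co-occurrence matrix with a window of one word on each side.
counts = np.zeros((len(vocab), len(vocab)))
for i, word in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            counts[index[word], index[corpus[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "cat" and "dog" occur in identical contexts here, so their vectors match.
print(cosine(counts[index["cat"]], counts[index["dog"]]))  # 1.0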



6. Perception : Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input. The field includes speech recognition, image classification, facial recognition, object recognition, and robotic perception.


7. Robotics : The field of building machines that sense and act in the physical world, often to automate repetitive tasks.



8. Social Intelligence : Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process or simulate human feeling, emotion and mood.


9. General Intelligence : A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.





