Introduction to Machine Learning: A Brief Overview for Everyone

With automation and self-learning technologies evolving rapidly, it is important to stay in the loop with what is happening in the Artificial Intelligence (AI) and Machine Learning (ML) world. These technologies are steadily making their way into almost every work domain. Today we start with the very basics, i.e., understanding the foundational concepts of Machine Learning. But before that, let us look at how automated computing developed.

Quick history of computers and programming:

Earlier, humans performed tasks such as documentation and calculation manually. Then, in the 1830s, Charles Babbage (often called the father of the computer) designed the 'Analytical Engine,' which could perform complex arithmetic calculations; his design laid the groundwork for the general-purpose computer. Building upon this foundation, Alan Turing introduced the world to the 'Universal Turing Machine' (UTM) in 1936. The UTM was a theoretical machine that could simulate any other machine's computation, given the right instructions on an input tape. Then, in 1945, John von Neumann described the 'von Neumann architecture,' on which most computers are still based today. Modern computers are highly efficient and advanced, and we have many programming languages (C, C++, Java, Python, etc.) in which to frame our rules for computers to process and execute. Think of your program's rules as the recipe for a dish: a chef follows the recipe and executes it. Similarly, a computer, like that chef, processes the program's commands (the recipe) and generates output (the dish!).

Why wasn't programming enough? What led to Machine Learning?

Performing tasks based on fixed rules could not keep up with the evolving complexity and randomness of the world. It is simply not practical to accommodate that complexity with hand-written rules; we cannot code every exception, right? What we needed was a way to teach machines the fundamentals of our data, much like the way humans are trained. Let us take the example of learning about leaves. We are not taught about 'all' the leaves in the world. We are shown some examples of leaves, and we generalize from them to figure out what is a leaf and what is not! Exceptions always exist, but generally we can deduce. How do we do it?

Here is a general logic flow to this:

  1. We are shown various pictures of leaves.
  2. We notice the basic shapes of the leaves. We also note the diverse shapes.
  3. We memorize their range of colors.
  4. We also observe varied leaves in our environment and build a basic mental model of a leaf.
  5. We match every new structure with our mental model of the leaf and decide whether it is a leaf or not.
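The steps above can be sketched in code. Here is a minimal, purely illustrative Python example: it 'remembers' a few labelled samples and classifies a new one by matching it against the closest remembered example (a nearest-neighbour approach). The features, numbers, and labels are all made up for illustration.

```python
import math

# Toy "leaf vs. not leaf" examples: (length_cm, width_cm) -> label.
# These features and labels are invented purely for illustration.
training_data = [
    ((6.0, 3.0), "leaf"),
    ((8.0, 4.0), "leaf"),
    ((5.0, 2.5), "leaf"),
    ((1.0, 9.0), "not leaf"),    # e.g. a long, thin twig
    ((12.0, 12.0), "not leaf"),  # e.g. a square object
]

def predict(sample):
    """Step 5 above: match a new structure against our 'mental model'
    (here, simply the closest remembered example) and decide."""
    def distance(example):
        (x, y), _label = example
        return math.hypot(sample[0] - x, sample[1] - y)
    _point, label = min(training_data, key=distance)
    return label

print(predict((7.0, 3.5)))  # close to the leaf examples -> "leaf"
```

Real systems use far richer features and models, but the idea is the same: generalize from examples rather than hand-code a rule for every case.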

If only we could make the computer smart enough to do this, right? Well, this is where ML comes into play. Machine Learning makes the computer build a model of the data so that it can predict outcomes automatically, without dedicated programming for each case. ML not only saves development effort and computational cost, but also automates tasks like classifying emails as spam, predicting stock prices, forecasting the weather, and much more.

To define formally: 

Machine Learning (ML) is the field of computer science that enables machines to 'learn' from data without being explicitly programmed for each task.

The three types of Machine Learning:

  • Supervised Learning: Here, we teach the computer by giving it labelled examples. 'Labelled' means that each input sample comes with its desired outcome. We let the machine identify the underlying relationship between the samples and their outputs so that it can predict the desired outcomes on new, unseen data it will encounter in the future. The leaf example we discussed above is supervised learning: we input sample images (leaf or not) along with the desired outcomes and let the machine build a model that can correctly classify any new image we input in the future.

          Applications of supervised learning:

             a) Classification: Categorizing data into two or more classes or labels based on its features.
             b) Regression: Predicting continuous values, such as future trends or prices.
  • Unsupervised Learning: Here, we input unlabelled samples. 'Unlabelled' means without any mapping to a desired outcome. We let the machine identify the patterns in the data on its own and build a model for tasks such as clustering.

        Applications of unsupervised learning:

             a) Clustering: Grouping similar data points together (into clusters) based on their
                  features.
             b) Dimensionality reduction: Reducing the size of the data by keeping only the vital
                  parameters (features) needed for accurate outputs.
  • Reinforcement Learning: In reinforcement learning, the machine builds its model through trial and error. It takes actions, receives feedback (a reward or penalty) on how good or correct each action was, and refines its model based on that feedback.

        Applications of reinforcement learning: 

             a) Self-driving cars: Through trial and error, the machine develops a model that avoids
                  accidents and generates safe movements.
             b) Learning games: The machine makes moves in the game and, based on the feedback
                  (the goodness or correctness of each move), builds a model that learns the game
                  strategy.
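To make the unsupervised case concrete, here is a minimal sketch of k-means-style clustering in Python. The data points and the choice of two clusters are invented for illustration; no labels are given, yet the loop discovers the two natural groups on its own.

```python
# Minimal k-means sketch: group 1-D points into k clusters
# without any labels. Data and k=2 are invented for illustration.

def kmeans_1d(points, k=2, iterations=10):
    # Start with the first k points as the cluster centres.
    centres = points[:k]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return clusters

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]  # two obvious groups
print(kmeans_1d(data))
```

The machine was never told which points belong together; it inferred the grouping purely from the structure of the data, which is exactly what unsupervised learning means.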

These were the basic ML terms you might have heard of. This article is just the first in The Syntax Systems' ML blog series, meant to briefly acquaint you with Machine Learning. In the upcoming articles, we will dive deeper into ML concepts and learn how ML technically works. Stay tuned!

         -Lakshya Sinha Kashyap, Intern-Corporate & Tech Content, The Syntax Systems, India

