Topic 3: Machine Learning: Practical examples


Applications of machine learning are vast in our daily lives, from robotic voice answering in customer-support calls and voice-to-text conversion in WhatsApp messaging to text recommendations in Google Search, chatbots in customer-support chats, and many more.

Below are some more practical examples:

  • Face Detection: A computer technology used in a variety of applications to identify human faces in digital images. Face-detection algorithms focus on detecting frontal human faces. It is analogous to image matching, in which the image of a person is compared bit by bit with the images stored in a database; any change to the facial features stored in the database will invalidate the matching process.
  • Bioinformatics: Gene prediction in genomics is a problem with an increasing need for machine learning systems that can automatically determine the location of protein-encoding genes within a given DNA sequence. Machine learning has also been applied to multiple sequence alignment, which involves aligning many DNA or amino-acid sequences in order to determine regions of similarity that could indicate a shared evolutionary history. It can also be used to detect and visualize genome rearrangements.
  • Fraud Detection: Fraud is a billion-dollar business, and it is growing every year. Through statistical techniques and artificial intelligence, fraud can be detected whether it happens in e-commerce or banking transactions.
  • Space Exploration: Space exploration is the discovery and exploration of celestial structures in outer space by means of evolving and growing space technology. While the study of space is carried out mainly by astronomers with telescopes, the physical exploration of space is conducted by unmanned robotic space probes, radio astronomy, and human spaceflight.
  • Robotics: A self-driving car combines a variety of techniques to perceive its surroundings, including radar, laser light (lidar), GPS, odometry, and computer vision. Its advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage.
  • Information Extraction: Information extraction (IE) is the task of automatically extracting structured information from unstructured and/or semi-structured machine-readable documents. In most cases this activity concerns processing human-language texts by means of natural language processing (NLP). Recent activities in multimedia document processing, such as automatic annotation and content extraction from images, audio, and video, can also be seen as information extraction.
  • Document Classification: Document classification, or document categorization, is a problem in library science, information science, and computer science. The task is to assign a document to one or more classes or categories. Algorithmic classification of documents is used mainly in information science and computer science. The documents to be classified may be texts, images, music, etc., and each kind of document poses its own classification problems. Documents may be classified according to their subjects or according to other attributes (such as document type, author, printing year, etc.).
  • Classification of Images: A topic of pattern recognition in computer vision, image classification is an approach based on contextual information in images. “Contextual” means this approach focuses on the relationship between nearby pixels, also called the neighbourhood. The goal is to classify images by using this contextual information. If only a small portion of an image is shown, it is very difficult to tell what the image is about; however, as we take more of the image’s context into account, recognition becomes easier.
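To make one of these examples concrete, here is a minimal sketch of document classification in Python. The documents and category names are invented for illustration; a simple bag-of-words profile is built for each category, and a new text is assigned to whichever category’s vocabulary it overlaps most:

```python
from collections import Counter

# Toy training data: documents labeled by category (made-up examples).
training_docs = {
    "sports": ["the team won the match", "a great goal in the final game"],
    "finance": ["the bank raised interest rates", "stock market prices fell today"],
}

# Build a word-count profile for each category.
profiles = {
    label: Counter(word for doc in docs for word in doc.split())
    for label, docs in training_docs.items()
}

def classify(text):
    """Assign text to the category whose vocabulary it overlaps most."""
    words = text.split()
    scores = {label: sum(profile[w] for w in words)
              for label, profile in profiles.items()}
    return max(scores, key=scores.get)

print(classify("the match had a late goal"))             # -> sports
print(classify("interest rates and the stock market"))   # -> finance
```

Real document classifiers use far richer features and models, but the core idea is the same: patterns learned from labeled examples decide the category of new documents.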

The only thing that matters is what your domain of interest is and how you could use machine learning in that domain.

Topic 2: History of Machine Learning


Even though the name “machine learning” was coined in 1959 by Arthur Samuel, its history is much older.

It was in the 1940s that the first manually operated computer system, ENIAC, was invented. At that time the word “computer” was used as a name for a human with intensive numerical-computation capabilities, so ENIAC was called a numerical computing machine! Well, you may say it had nothing to do with learning?! WRONG: from the beginning the idea was to build a machine able to emulate human thinking and learning. In 1952, Arthur Samuel wrote the first computer learning program. The program played the game of checkers, and the IBM computer improved at the game the more it played, studying which moves made up winning strategies and incorporating those moves into its program.

This program helped checkers players a lot in improving their skills! Around the same time, in 1957, Frank Rosenblatt designed the first neural network for computers (the perceptron), which simulated the thought processes of the human brain. On its own it was a very, very simple classifier, but when combined in large numbers, in a network, it became a powerful monster. Well, “monster” is relative to the time, and back then it was a real breakthrough.

And the discoveries in neural networks didn’t stop there.

1967 — The “nearest neighbor” algorithm was written, allowing computers to begin using very basic pattern recognition. This could be used to map a route for traveling salesmen, starting at a random city but ensuring they visit all cities during a short tour.

1979 — Students at Stanford University invent the “Stanford Cart” which can navigate obstacles in a room on its own.

1981 — Gerald Dejong introduces the concept of Explanation Based Learning (EBL), in which a computer analyses training data and creates a general rule it can follow by discarding unimportant data.

1985 — Terry Sejnowski invents NetTalk, which learns to pronounce words the same way a baby does.

1990s — Work on machine learning shifts from a knowledge-driven approach to a data-driven approach.  Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions — or “learn” — from the results.

1997 — IBM’s Deep Blue beats the world champion at chess.

Thanks to statistics, machine learning became very popular in the 1990s. The intersection of computer science and statistics gave birth to probabilistic approaches in AI, shifting the field further toward data-driven approaches. With large-scale data available, scientists started to build intelligent systems that were able to analyze and learn from large amounts of data. As a highlight, IBM’s Deep Blue system beat the world champion of chess, the grand master Garry Kasparov.

It continued at an even faster pace.

2006 — Geoffrey Hinton coins the term “deep learning” to explain new algorithms that let computers “see” and distinguish objects and text in images and videos.

2010 — The Microsoft Kinect can track 20 human features at a rate of 30 times per second, allowing people to interact with the computer via movements and gestures.

2012 – Google’s X Lab develops a machine learning algorithm that is able to autonomously browse YouTube videos to identify the videos that contain cats.

2014 – Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals on photos to the same level as humans can.

2015 – Amazon launches its own machine learning platform.

2015 – Microsoft creates the Distributed Machine Learning Toolkit, which enables the efficient distribution of machine learning problems across multiple computers.

2015 – Over 3,000 AI and Robotics researchers, endorsed by Stephen Hawking, Elon Musk and Steve Wozniak (among many others), sign an open letter warning of the danger of autonomous weapons which select and engage targets without human intervention.

2016 – Google’s artificial intelligence algorithm beats a professional player at the Chinese board game Go, which is considered the world’s most complex board game and is many times harder than chess. The AlphaGo algorithm developed by Google DeepMind managed to win five games out of five in the Go competition.

We can consider the 90s as one of the golden eras of machine learning. During the decade there were significant contributions to the field. In addition to the developments being made on the algorithm side, the hardware and the technology were also improving dramatically! For years, you saw computers becoming not only more powerful but smaller in size! This huge advancement also contributed a lot to scientific progress in general, and to AI in particular.

Dear buddies, do not worry about all the scientific words mentioned here. We will learn everything step by step.

Machine Learning

Topic 1: What is Machine Learning?

Think of a day when the sky is full of dark clouds and thunderstorms. The first thing that comes to our mind is that it’s going to rain today.

How do you know that it’s going to rain?

Because, in our lives, whenever we have seen the sky behave this way, it has rained. That’s what machine learning is.

Machine learning is a branch of Artificial Intelligence that provides systems the ability to automatically learn by using statistical techniques and improve from experience without being explicitly programmed.

For a given problem, machine learning (ML) models learn from past recorded problems and their solutions, use an algorithm to understand the solution, and then apply this learning experience to give solutions in the future.

Technically speaking, there is a huge database containing years of data from a particular domain. The process of learning begins with observations or data, looking for patterns in the data in order to make better decisions in the future based on the examples we provide. An algorithm is run on this data to analyze it and recognize the hidden patterns. When a similar situation arises later, the computer gives a solution that falls within the range of the recognized pattern. The primary aim is to allow computers to learn automatically, without human intervention or assistance, and to adjust their actions accordingly.
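This process can be sketched in a few lines of Python. Here is a hypothetical, minimal example in the spirit of the rain story above: given past recorded weather observations and their outcomes (the numbers are invented for illustration), we predict the outcome for a new day by majority vote among the most similar past days:

```python
# Past recorded observations: (cloud_cover %, humidity %) -> outcome.
history = [
    ((90, 85), "rain"),
    ((80, 90), "rain"),
    ((20, 40), "no rain"),
    ((10, 30), "no rain"),
    ((70, 75), "rain"),
    ((30, 50), "no rain"),
]

def predict(observation, k=3):
    """Predict by majority vote among the k most similar past days."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda item: distance(item[0], observation))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# A dark, humid sky matches the pattern of past rainy days.
print(predict((85, 80)))   # -> rain
print(predict((15, 35)))   # -> no rain
```

No rule about rain was ever written by hand; the “pattern” lives entirely in the recorded examples, which is exactly the shift described above.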

Writing software is the bottleneck: we don’t have enough good developers. Let the data do the work instead of people. Machine learning is the way to make programming scalable; machine learning is getting computers to program themselves.

Traditional Programming: Data and a program are run on the computer to produce the output.


Machine Learning: Data and the output are run on the computer to create a program.

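The contrast can be made concrete with a toy example (a minimal sketch, with invented data). To convert Celsius to Fahrenheit, traditional programming writes the rule by hand; machine learning is given input/output pairs and recovers the rule itself, here by fitting a straight line with ordinary least squares:

```python
# Traditional programming: we write the rule (the "program") by hand.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: we supply data and outputs, and recover the rule.
data = [0.0, 10.0, 20.0, 30.0, 40.0]      # inputs (Celsius)
output = [c_to_f(c) for c in data]        # observed outputs (Fahrenheit)

# Fit output = slope * input + intercept by ordinary least squares.
n = len(data)
mean_x = sum(data) / n
mean_y = sum(output) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(data, output)) \
        / sum((x - mean_x) ** 2 for x in data)
intercept = mean_y - slope * mean_x

# The learned "program": multiply by 1.8 and add 32.
print(round(slope, 3), round(intercept, 3))   # -> 1.8 32.0
```

The computer was never told the conversion formula; it produced the program (the slope and intercept) from data and outputs alone.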

Machine learning is like farming or gardening: the seeds are the algorithms, the nutrients are the data, the gardener is you, and the plants are the programs.

Any technology user today has encountered machine learning in their day-to-day life. Facial recognition technology allows social media platforms to help users tag and share photos of friends. Recommendation engines, powered by machine learning, suggest which movies or television shows to watch next based on user preferences. A self-driving car is a vehicle that is capable of sensing its environment and navigating with little human input.