Artificial Intelligence

Demystifying Artificial Intelligence

With all the buzz around Artificial Intelligence (AI), people are curious to learn more about it. Some people's ideas of AI, shaped by movies and literature, lean toward fantasy. Some are excited about its prospects, while others remain skeptical. The business world, however, is highly enthusiastic because of the transformational power of AI.


Intelligence has long been of great interest to researchers, and there is a long history of research and debate; however, there is still no standard definition of intelligence accepted by all.

Generally, intelligence is understood to involve the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, learn from experience, and comprehend one's surroundings broadly and deeply. Intelligence provides the ability to adapt to different environments and to succeed or profit with respect to some goal or objective.

History of Artificial Intelligence

Like intelligence itself, Artificial Intelligence (AI) does not have a universally accepted definition.

In 1955, John McCarthy, one of the pioneers of AI, coined the term artificial intelligence, defining it roughly as: “The goal of AI is to develop machines that behave as though they were intelligent.”

John Haugeland defined it in his book titled “Artificial Intelligence: The Very Idea”, published in 1985, as “The exciting new effort to make computers think.”

Elaine Rich and Kevin Knight defined it in their book titled “Artificial Intelligence”, published in 1991, as “The study of how to make computers do things at which, at the moment, people are better.”

Richard E. Bellman defined it in his book titled “An Introduction to Artificial Intelligence: Can Computers Think?”, published in 1978, as “The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, and learning.”

Gartner defines AI as “Artificial intelligence (AI) applies advanced analysis and logic-based techniques, including machine learning, to interpret events, support and automate decisions, and take actions.”[1]

A highly technical definition of Artificial Intelligence can be “Algorithms enabled by Constraints exposed by Representations that support Models targeted at Thinking, Perception and Action”[2]



Some of these definitions focus on the faculty of thinking and reasoning, others on behavior or action, and a few address both.

However, for all practical purposes, we can define Artificial Intelligence as “the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, and problem solving.”

Strong AI and Narrow AI

Strong AI is the ability of a machine to perform any task that a human can perform. It is also called Full AI or Artificial General Intelligence (AGI). It is still a concept, and people hold varied views about its feasibility.

Narrow AI does not attempt the full range of human cognitive abilities; instead, it is limited to solving a specific problem or performing a specific reasoning task. Most commercial applications of AI fall into this category. It is also called Weak AI.

Forms of AI

AI may take many forms; the most popular and widely discussed are:

  • Machine Learning
  • Deep Learning
  • Computer Vision
  • Natural Language Processing (NLP)
  • Robotics
  • Autonomous Vehicle

Machine learning, computer vision, and natural language processing have wide applications across many business functions and processes, and have the highest adoption levels in enterprises.

Machine Learning

Traditional computer systems use programs with explicit instructions to perform certain tasks. These programs encode an identified process, method, or set of business rules, assuming certainty of events with no randomness involved. Such systems can be classified as deterministic.

Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed.[1] Instead of explicit instructions, as in deterministic systems, the system is provided with examples or experiences in the form of training data; machine learning algorithms detect patterns in these examples and then produce output (predictions) even for unseen data. Because the system learns from examples and gains a generalized understanding of the process, it can provide output even for data with some randomness.

There are three types of machine learning scenarios:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning

These are employed to answer forward-looking business questions such as:

  • What will the price of a commodity be in the future?
  • What will the sales volume of an SKU be in a specific geography for, say, the next 6 months?
  • What will the lifetime value of a customer be?
  • When will a machine fail?
  • What is the propensity of receiving a payment?
  • Will a lead convert into a customer?
  • What distinct types of customers exist?
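As a minimal sketch of the supervised learning scenario, the following plain-Python example fits a least-squares line to synthetic month/price data and predicts the price for an unseen month. The data and function names are illustrative only; real projects would typically use a library such as scikit-learn.

```python
# A minimal supervised-learning sketch: learn a trend from examples,
# then predict for unseen inputs (synthetic, perfectly linear data).

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Training examples: month index -> observed price
months = [1, 2, 3, 4, 5]
prices = [100, 102, 104, 106, 108]

a, b = fit_line(months, prices)

def predict(month):
    return a * month + b

print(predict(6))  # the learned trend extrapolates to 110.0
```

The point is the shift in approach: no rule "add 2 per month" was programmed; the pattern was learned from the examples.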

Deep Learning

Deep learning is a type of machine learning based on artificial neural networks with many layers. It is a scalable form of machine learning that extracts useful patterns from data in an automated way with as little human effort as possible: it automates feature extraction, eliminating the dependency on human experts. It also does not require some of the data pre-processing that classical machine learning typically needs.

Deep learning algorithms are highly effective with unstructured data like image, video, text, or audio and hence are used in the computer vision and Natural language processing applications.

Along with standard neural network architecture, Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) are most widely used architectures of deep learning.
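To make the "many layers" idea concrete, here is a plain-Python sketch of a forward pass through two fully connected layers. The weights are arbitrary illustrative values, not a trained network; real deep learning stacks many such layers and learns the weights from data.

```python
# Forward pass through a tiny two-layer network. Each layer multiplies
# its inputs by weights, adds a bias, and applies a non-linearity (ReLU).

def relu(vector):
    return [max(0.0, v) for v in vector]

def dense(inputs, weights, biases):
    # weights[i][j] connects input i to output j
    return [sum(x * w for x, w in zip(inputs, column)) + b
            for column, b in zip(zip(*weights), biases)]

x = [1.0, 2.0]                                             # input features
h = relu(dense(x, [[0.5, -0.2], [0.3, 0.8]], [0.1, 0.0])) # hidden layer
y = dense(h, [[1.0], [1.0]], [0.0])                        # output layer
print(y)  # a single output derived from the stacked layers
```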

Computer Vision

Computer vision is about the ability of computer systems to extract meaningful information from digital images or videos. It enables a system to identify different segments of an image and to recognize the image or a segment of it.

Computer vision generally uses Convolutional Neural Networks (CNNs) for object recognition and Recurrent Neural Networks (RNNs) for understanding sequences of images such as video frames. Optical Character Recognition (OCR) can be considered one of the oldest and best-known applications of computer vision.
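The convolution at the heart of a CNN can be sketched in plain Python: a small filter slides over an image and responds strongly where its pattern appears, here a vertical edge. The image and kernel values are illustrative; real CNNs learn their filters from data.

```python
# Sliding a 2x2 filter over a tiny grayscale "image" to detect a
# vertical edge (the boundary between the dark and bright halves).

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
edge_kernel = [[-1, 1],
               [-1, 1]]

print(convolve2d(image, edge_kernel))  # [[0, 18, 0], [0, 18, 0]]
```

The filter output peaks exactly where the dark-to-bright transition occurs, which is how early CNN layers localize edges and textures.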

License plate recognition, physical intrusion detection, detection of traffic rule violations through CCTV camera feeds, and facial recognition for security purposes are a few of its applications. It also has immense applications in healthcare diagnosis, such as detecting cancer from images faster than traditional methods, which can save lives. All industrial visual inspections can be enhanced with computer vision.

Natural Language Processing

Natural-language processing (NLP) technology involves the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology.[1]

NLP is about the interactions between computers and human languages, particularly how to program computers to process and analyze large amounts of natural language data. NLP makes computers or systems able to analyze and understand human language in written and spoken form.

Ubiquitous chatbots are among the most popular applications of NLP, an area called Conversational AI. Intelligent chatbots attempt to make humans believe they are interacting with another human. Voicebots and emailbots are other Conversational AI applications.

Natural language understanding, which identifies keywords, entities, sentiments, tones, and emotions in a piece of text or audio, is another area of NLP application.
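As a deliberately simplified illustration of sentiment extraction, the sketch below scores text against small hand-picked positive and negative word lists. Production NLP relies on trained models rather than fixed lexicons; the word lists here are invented for the example.

```python
# Toy lexicon-based sentiment scoring: count positive minus negative words.
POSITIVE = {"great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "poor", "unhappy", "hate"}

def sentiment_score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("The support was excellent and I love the product"))  # 2
print(sentiment_score("poor quality and I am unhappy"))                     # -2
```

Even this crude scheme conveys the core idea: mapping unstructured text to a structured signal (a number) that downstream systems can act on.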

Psycholinguistics is another area of NLP application, where a psychological profile of a person can be derived from self-authored text. Text summarization is yet another: the computer reads a longer text and creates a summary of it. Abstractive summarization uses Natural Language Generation (NLG) to compose a new summary, whereas extractive summarization selects the most important sentences from the original text.


Robotics

Robotics is related to the design, construction, and use of machines that perform tasks mimicking human motions and intelligence. A robot is a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through variable programmed motions for the performance of a variety of tasks.

An intelligent robot can have vision capability through the application of computer vision technologies. It can recognize voice commands or converse with humans using NLP technologies. It can take the form of an Autonomous Mobile Robot (AMR), which can move around without any human guidance.

Autonomous Vehicle

An autonomous vehicle is one that can drive itself from a starting point to a predetermined destination in “autopilot” mode using various in-vehicle technologies and sensors, including adaptive cruise control, active steering (steer by wire), anti-lock braking systems (brake by wire), GPS navigation technology, lasers and radar.

It is a vehicle that can sense its environment and move safely with little or no human input. There are various components of autonomous driving like perception, localization, mapping, control, planning, driver state, etc., and technologies like Reinforcement learning, CNNs and RNNs are employed in these components.

Enterprise AI

Enterprise AI is the implementation of AI as a strategic initiative aligned with business strategy and goals, deployed across the enterprise. It incorporates the information architecture and data strategy, and is an integrated, holistic business initiative to acquire a leadership position in the market and gain competitive advantage by becoming a data superpower.

Benefits of AI

Artificial Intelligence provides businesses with several distinct abilities: knowing leading indicators in the form of predictions, which answer questions about what will happen; understanding the class or category of an entity, such as a customer segment; and intelligently automating repetitive processes such as invoice processing and customer support. AI helps organizations make data-driven decisions, improve business processes, and transform their products and business models, resulting in revenue growth, improved profitability, increased efficiency, reduced waste, increased resilience, reduced risk, and deeper customer engagement.

Future of AI

Businesses worldwide have realized the transformational benefits of AI and most of the major businesses have either implemented AI in some form or have included it in their strategy. Many businesses are thinking of transforming their business model with implementation of AI in their processes and products. Businesses are also focusing on improving their data maturity so that they can reap the benefit of AI.

On the technology front, innovations are in progress on algorithms, compute abilities, cloud services, etc. to make AI more powerful and efficient.


Frequently Asked Questions

1. What is Enterprise AI?

Enterprise AI is a category of enterprise software that harnesses advanced artificial intelligence techniques to drive digital transformation, addressing complex, modern business problems. A key distinction between enterprise AI and AI in general is that enterprise AI focuses on specific high-value use cases at large scale, where AI embedded into business processes can produce meaningful value. Large organizations will build and operate dozens or hundreds of enterprise AI applications to address numerous use cases across their business.

2. What are the benefits of Artificial Intelligence?

AI is a powerful tool for addressing a variety of challenges that are difficult to model or solve with traditional methods. For the end user, AI seamlessly offers enhanced experiences, personal assistance, and automation of repetitive tasks. In addition, AI can make devices more energy efficient and allow us to interact with them in more convenient ways.

3. How do we use AI in everyday life?

Artificial Intelligence (AI) and its multiple sub-domains are being increasingly employed in various industries and businesses to aid in repetitive processes.
AI in Healthcare: AI has become a key driver of innovation in healthcare. Healthcare is primarily driven by anatomical knowledge and, to a certain degree, human intuition, but AI can assist healthcare professionals. AI-driven data processing can smooth the journey for patients and reduce the financial and time pressure on clinics, making them more efficient.
AI in Manufacturing: Manufacturing is seeing significant development with AI, for example in inventory management, where AI alerts employees at the right time to replenish the supplies required to make the final product.
AI in Retail: The retail sector can benefit immensely by leveraging AI. Enterprise AI can help with pricing decisions, as AI can accurately forecast demand for the coming season, festive sales, and similar events.
AI in Banking: Banking is a sector that is rapidly deploying AI. One key innovation is fraud detection, which studies thousands of data points at a time, something only possible with technology. AI can also help tailor the best investment portfolio by studying the many data points that make up a customer's financial status.
AI in Education: As we progress, we need better education models to strengthen the knowledge economy. AI will not replace the classroom instruction model but will enhance it with better personalization, knowing the strengths and weaknesses of each student and meeting their learning needs. It can also help with grading: not only can AI grade objective exams, it can also assist with abstract assessments.

4. What is Predictive Analytics?

Predictive Analytics is the use of mathematical and statistical methods, including artificial intelligence and machine learning, to predict the value or status of something of interest. The benefits of predictive analytics are wide-ranging and potentially game-changing for a company. Predictive analytics powered by AI/machine learning embedded in business processes can drive significant improvements such as reduced costs, increased margins and profitability, better safety and reliability, and lower environmental impact. Predictive analytics is a core function of enterprise AI. The foundational purpose of enterprise AI applications is to serve as “prediction engines,” providing insights to drive actions that improve business operations and performance.

5. What is the difference between Big data and Wide data?

Big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Wide data applies to the typical organization, which is more often concerned with tying together data from disparate sources, often a wide range of them.
As big data is locked up in different silos, it is either largely unusable or requires substantial effort to manually analyze each data set and then tie it together. Conversely, wide data breaks down the boundaries of those data silos to combine data from multiple sources. The resulting insights offer a detailed analysis that leads to better data-driven decisions and, in turn, reduced risk.