Artificial Intelligence

The Beginning of a New Era

Introduction

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals including humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.

The term "artificial intelligence" had previously been used to describe machines that mimic and display "human" cognitive skills that are associated with the human mind, such as "learning" and "problem-solving". This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.

AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go). As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.

Key Categories of AI

Categorization

AI requires a lot of data that is relevant to the problem being solved. The first step to building an AI solution is creating what I call “design intent metrics,” which are used to categorize the problem. Whether users are trying to build a system that can play Jeopardy, help a doctor diagnose cancer, or help an IT administrator diagnose wireless problems, users need to define metrics that allow the problem to be broken into smaller pieces. In wireless networking, for example, key metrics are user connection time, throughput, coverage, and roaming. In cancer diagnosis, key metrics are white cell count, ethnic background, and X-ray scans.
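
To make this concrete, below is a minimal sketch of how the wireless design intent metrics above might be captured in code. The field names, units, and sample values are hypothetical, not from a real deployment.

```python
# Hypothetical "design intent metrics" for the wireless example: each field
# is a measurable axis along which the problem can be broken into pieces.
from dataclasses import dataclass

@dataclass
class WirelessMetrics:
    connection_time_s: float  # how long the user took to connect
    throughput_mbps: float    # observed client throughput
    coverage_dbm: float       # signal strength at the client
    roaming_events: int       # number of roams during the session

sample = WirelessMetrics(connection_time_s=2.4, throughput_mbps=180.0,
                         coverage_dbm=-62.0, roaming_events=3)
print(sample)
```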

How does AI work?

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
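
As a toy illustration of that ingest, analyze, predict loop, the sketch below trains a tiny chatbot-style text classifier on a handful of labeled lines. The library choice (scikit-learn) and the example data are assumptions made for illustration, not taken from the article.

```python
# Ingest labeled examples, learn the correlations, predict on new input.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labeled training data: example chat lines and their intents.
texts = ["hi there", "hello!", "my order never arrived", "where is my package"]
labels = ["greeting", "greeting", "complaint", "complaint"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # analyze the data for patterns

print(model.predict(["hello, where is my order?"]))  # predict a new case
```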

AI programming focuses on three cognitive skills: learning, reasoning and self-correction. A short sketch after the three descriptions below shows how they fit together.

Learning processes

This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes

This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes

This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
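
Taken together, these three skills show up even in a very small pipeline. The sketch below, which assumes scikit-learn and synthetic data, fits candidate models (learning), picks the algorithm that best reaches the desired outcome on held-out data (reasoning), and refits the winner as new labeled data arrives (self-correction).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# Learning: acquire data and turn it into fitted rules.
candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier()]
for m in candidates:
    m.fit(X_train, y_train)

# Reasoning: choose the algorithm that best reaches the desired outcome.
best = max(candidates, key=lambda m: m.score(X_new, y_new))
print("chosen:", type(best).__name__)

# Self-correction: fine-tune by retraining on all the data seen so far.
best.fit(np.vstack([X_train, X_new]), np.concatenate([y_train, y_new]))
```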

Classification

Once users have the problem categorized into different areas, the next step is to have classifiers for each category that will point users in the direction of a meaningful conclusion. For example, when training an AI system to play Jeopardy, users must first classify a question as being literal in nature or a play on words, and then classify by time, person, thing, or place. In wireless networking, once users know the category of a problem (e.g. a pre- or post-connection problem), users need to start classifying what is causing the problem: association, authentication, dynamic host configuration protocol (DHCP), or other wireless, wired, and device factors.
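
As a sketch of such a classifier for the wireless case: given coarse success/failure flags for each stage of a connection, a small decision tree can point to the likely cause. The features, training rows, and labels below are illustrative, not drawn from a real dataset.

```python
# Each row records whether a connection stage succeeded (1) or failed (0):
# [association_ok, authentication_ok, dhcp_ok]
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
y = ["association", "authentication", "dhcp", "other"]

clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1, 0, 0]]))  # association succeeded, authentication failed
```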

Machine Learning

Now that the problem is divided into domain-specific chunks of metadata, users are ready to feed this information into the magical and powerful world of machine learning. There are many machine learning algorithms and techniques, with supervised machine learning using neural networks (i.e. deep learning) now becoming one of the most popular approaches. The concept of neural networks has been around since 1949, and I built my first neural network in the 1980s. But with the latest increases in compute and storage capabilities, neural networks are now being trained to solve a variety of real-world problems, from image recognition and natural language processing to predicting network performance. Other applications include anomaly feature discovery, time series anomaly detection, and event correlation for root cause analysis.
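
For a feel of what training a small neural network looks like today, here is a minimal sketch using scikit-learn's MLPClassifier on a synthetic dataset. Real image-recognition or language networks are vastly larger; this only stands in for the idea.

```python
# Train a tiny multi-layer perceptron on a toy two-class dataset.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=300, noise=0.2, random_state=1)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=1)
net.fit(X, y)
print("training accuracy:", round(net.score(X, y), 3))
```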

Collaborative filtering

Most people experience collaborative filtering when they pick a movie on Netflix or buy something from Amazon and receive recommendations for other movies or items they might like. Beyond recommenders, collaborative filtering is also used to sort through large sets of data and put a face on an AI solution. This is where all the data collection and analysis is turned into meaningful insight or action. Whether used in a game show, or by a doctor, or by a network administrator, collaborative filtering is the means to providing answers with a high degree of confidence. It is like a virtual assistant that helps solve complex problems.
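
Underneath a recommender like Netflix's or Amazon's is usually some variant of the idea sketched below: find users with similar tastes and suggest what they liked. This bare-bones user-based version, with made-up ratings, illustrates the concept only; production systems typically use matrix factorization or learned embeddings at scale.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],  # user 0
    [4, 5, 1, 0],  # user 1 (tastes similar to user 0)
    [1, 0, 5, 4],  # user 2
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = 0
others = [u for u in range(len(ratings)) if u != target]
# Find the most similar other user.
neighbor = max(others, key=lambda u: cosine(ratings[target], ratings[u]))

# Recommend the unrated item that the neighbor rated highest.
unrated = np.where(ratings[target] == 0)[0]
best_item = unrated[np.argmax(ratings[neighbor][unrated])]
print(f"recommend item {best_item} to user {target}")
```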

AI is still very much an emerging space, but its impact is profound and will be felt even more keenly as it becomes an ever larger part of our daily lives. When choosing an AI solution, like when buying a car, we’ll need to understand what is under the hood to make sure we are buying the best product for our needs.

Why is artificial intelligence important?

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players in a range of online services by using machine learning to understand how people use those services and then improving them. In 2017, the company's CEO, Sundar Pichai, declared that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

What are the advantages and disadvantages of artificial intelligence?

Artificial neural networks and deep learning technologies are evolving quickly, primarily because AI processes large amounts of data much faster and makes predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

Advantages

AI completes repetitive, detail-oriented tasks quickly and with relatively few errors, and it can turn volumes of data that would bury a human researcher into actionable information.

Disadvantages

The primary disadvantage is cost: AI programming requires large amounts of data, and processing that data depends on a foundation of specialized hardware and software that is expensive to build and run.

Conclusion

AI is at the centre of a new enterprise to build computational models of intelligence. The main assumption is that intelligence (human or otherwise) can be represented in terms of symbol structures and symbolic operations which can be programmed in a digital computer. There is much debate as to whether such an appropriately programmed computer would be a mind, or would merely simulate one, but AI researchers need not wait for the conclusion to that debate, nor for the hypothetical computer that could model all of human intelligence. Aspects of intelligent behaviour, such as solving problems, making inferences, learning, and understanding language, have already been coded as computer programs, and within very limited domains, such as identifying diseases of soybean plants, AI programs can outperform human experts.

Now the great challenge of AI is to find ways of representing the commonsense knowledge and experience that enable people to carry out everyday activities such as holding a wide-ranging conversation, or finding their way along a busy street. Conventional digital computers may be capable of running such programs, or we may need to develop new machines that can support the complexity of human thought.