The term artificial intelligence refers to the branch of computer science concerned with the design and implementation of computer systems that imitate elements of human behavior implying even rudimentary intelligence: learning, adaptability, inference, contextual understanding, problem solving, and so on. John McCarthy defined the field as "the science and engineering of making intelligent machines".
Artificial intelligence lies at the intersection of several disciplines, such as computer science, psychology, philosophy, neuroscience, linguistics and engineering, with the aim of synthesizing intelligent behavior, with elements of reasoning, learning and adaptation to the environment, usually in specially constructed machines or computers. It is divided into symbolic artificial intelligence, which attempts to simulate human intelligence algorithmically using symbols and high-level logical rules, and subsymbolic artificial intelligence, which attempts to reproduce human intelligence with elementary numerical models. The latter synthesize intelligent behavior inductively through the successive self-organization of simpler structural components ("behavioral artificial intelligence"), simulate real biological processes such as the evolution of species and the functioning of the brain ("computational intelligence"), or apply statistical methodologies to AI problems.
In the science fiction film 2001: A Space Odyssey (1968), an intelligent computer plays a central role in the plot. The image shows the artificial "eye" (a video camera) with which the computer spies on the human crew of the spacecraft where it is installed.
The distinction between symbolic and subsymbolic approaches concerns the nature of the tools used, and it is not uncommon to combine several approaches (different symbolic, subsymbolic, or even symbolic and subsymbolic methods together) when tackling a problem. Depending on the desired scientific goal, AI is also divided into broad areas such as problem solving, machine learning, knowledge discovery, knowledge systems, and so on. There is also overlap with related scientific fields such as computer vision, natural language processing and robotics, which, although independent fields, can be placed within the broader context of modern artificial intelligence.
Science fiction literature and cinema, from the 1920s to the present, have given the general public the impression that AI is about building mechanical androids or self-aware computer programs (strong AI), and have even influenced the first researchers in the field. In reality, most AI scientists strive to build software or complete machines that can solve realistic computational problems of any kind with acceptable results (weak AI), although many believe that emulating or simulating genuine intelligence, strong AI, should be the ultimate goal.
Modern artificial intelligence is one of the most "mathematical" and rapidly evolving fields of computer science. Today the field relies more on subsymbolic methods and tools drawn from applied mathematics and the engineering sciences than on theoretical computer science and mathematical logic, as was the case before 1990. At the academic level, artificial intelligence is also studied within electrical and electronic engineering, and it constitutes one of the most important foundations of the interdisciplinary field of cognitive science. Modern graphics processing units (GPUs), such as those produced by Nvidia, AMD and others, built with advanced semiconductor materials, offer the high computing power required to develop complex machine learning algorithms.
A diagram of the structure and function of a simple artificial neuron.
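To make the neuron's function concrete, the following is a minimal sketch in Python (an illustration added here, not part of the original article): a single artificial neuron computes a weighted sum of its inputs plus a bias and passes the result through a sigmoid activation function; the input, weight and bias values below are arbitrary examples.

    import math

    def artificial_neuron(inputs, weights, bias):
        """A single artificial neuron: weighted sum of the inputs plus a bias,
        passed through a sigmoid activation function."""
        # Net input: sum of input-weight products plus the bias term
        net = sum(x * w for x, w in zip(inputs, weights)) + bias
        # Sigmoid activation squashes the net input into the range (0, 1)
        return 1.0 / (1.0 + math.exp(-net))

    # Example with arbitrary values: three inputs, three weights, one bias
    output = artificial_neuron(inputs=[0.5, -1.0, 2.0],
                               weights=[0.8, 0.2, -0.5],
                               bias=0.1)
    print(output)  # a single activation value between 0 and 1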
In 1950, the mathematician Alan Turing, the father of the theory of computation and a forefather of artificial intelligence, proposed the Turing test, a simple test intended to determine whether a machine possesses intelligence. Artificial intelligence was formally established as a field at a 1956 meeting of prominent American scientists working in the area (John McCarthy, Marvin Minsky, Claude Shannon and others). The same year saw the debut of the Logic Theorist, a program that relied on formal logic inference rules and heuristic search algorithms to prove mathematical theorems.
The next important milestones were the development of the LISP programming language by McCarthy in 1958, the first functional programming language, the emergence of genetic algorithms in the same year with Friedberg's work, and the presentation of the improved perceptron neural network by Rosenblatt in 1962; these played a very important role in the creation of AI applications in the following decades. However, in the late 1960s the AI winter began, a period of criticism, disappointment and underfunding of research programs, as all the tools in the field up to that time were suitable only for solving extremely simple problems.

In the mid-1970s, however, interest in the field revived thanks to the commercial applications of expert systems: AI systems with stored knowledge of a specialized domain and the ability to draw logical conclusions quickly, which behave like a human expert in that domain. At the same time the logic programming language Prolog appeared, giving new impetus to symbolic AI, while in the early 1980s much more powerful and more widely used neural networks began to be implemented, such as multilayer perceptrons and Hopfield networks. In the same period, genetic algorithms and related methodologies were being developed jointly under the umbrella of evolutionary computation.
During the 1990s, with the growing importance of the Internet, intelligent agents were developed: autonomous AI software placed in an environment with which it interacts, which found a wide range of applications thanks to the spread of the Internet. Agents usually aim to assist their users, to collect or analyze huge data sets, or to automate repetitive tasks (see, for example, web bots), and their construction and operation draw on all of the AI methodologies developed over time. Thus today AI is not infrequently defined as the science that studies the design and implementation of intelligent agents.
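To make the notion of an agent concrete, here is a minimal sketch in Python (an illustration added here, not part of the original article) of the basic perceive-decide-act cycle shared by most agent architectures; the ReflexAgent class, its rule table and the percept values are hypothetical placeholders.

    class ReflexAgent:
        """A hypothetical simple reflex agent: it maps each percept
        directly to an action using a fixed rule table."""

        def __init__(self, rules):
            self.rules = rules  # e.g. {"dirty": "clean", "clean": "move"}

        def act(self, percept):
            # Choose the action associated with the current percept,
            # or do nothing if no rule applies.
            return self.rules.get(percept, "no-op")

    # Example run on a toy sequence of percepts from the environment
    agent = ReflexAgent(rules={"dirty": "clean", "clean": "move"})
    for percept in ["dirty", "clean", "dirty"]:
        print(percept, "->", agent.act(percept))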
Also in the 1990s, AI, mainly machine learning and knowledge discovery, began to be greatly influenced by probability theory and statistics. Belief networks were the starting point of this new movement, which eventually connected AI with the more rigorous mathematical tools of statistics and engineering science, such as hidden Markov models and Kalman filters. This new probabilistic approach is strictly subsymbolic in nature, as are the three methodologies that are categorized under the label of computational intelligence: neural networks, evolutionary computation, and fuzzy logic.
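As an illustration of this probabilistic approach, the following is a minimal sketch in Python (an illustration added here, not part of the original article) of a toy belief network with three binary variables and exact inference by enumeration; the network structure and all probability values are invented example numbers.

    # A tiny belief (Bayesian) network: Rain -> Sprinkler,
    # and both Rain and Sprinkler -> WetGrass. All numbers are made up.
    P_rain = {True: 0.2, False: 0.8}
    P_sprinkler = {  # P(Sprinkler | Rain)
        True:  {True: 0.01, False: 0.99},
        False: {True: 0.40, False: 0.60},
    }
    P_wet = {  # P(WetGrass | Rain, Sprinkler)
        (True, True):   {True: 0.99, False: 0.01},
        (True, False):  {True: 0.80, False: 0.20},
        (False, True):  {True: 0.90, False: 0.10},
        (False, False): {True: 0.00, False: 1.00},
    }

    def joint(rain, sprinkler, wet):
        # Chain rule along the network's structure
        return P_rain[rain] * P_sprinkler[rain][sprinkler] * P_wet[(rain, sprinkler)][wet]

    # Infer P(Rain = True | WetGrass = True) by summing out Sprinkler
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    print(num / den)  # posterior probability of rain given wet grass (about 0.36)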