The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956.[1] Attendees of the workshop became the leaders of AI research for decades. Many of them predicted that machines as intelligent as humans would exist within a generation. The U.S. government provided millions of dollars to make this vision come true.[2]
Eventually, it became obvious that researchers had grossly underestimated the difficulty of the project.[3] In 1974, criticism from James Lighthill and pressure from the U.S. Congress led the U.S. and British governments to stop funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government and the success of expert systems reinvigorated investment in AI, and by the late 1980s the industry had grown into the billions of dollars. However, investors' enthusiasm waned in the 1990s, and the field was criticized in the press and avoided by industry (a period known as the "AI winter"). Nevertheless, research and funding continued to grow under other names.
In the early 2000s, machine learning was applied to a wide range of problems in academia and industry. The success was due to the availability of powerful computer hardware, the collection of immense data sets and the application of solid mathematical methods. In 2012, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications. Investment in AI boomed in the 2020s.