Artificial Intelligence

A Brief History of Artificial Intelligence

Eddie Huffman | August 23, 2022

The notion of artificial intelligence has been around for centuries, but not until the advent of modern computing could the dream start to become reality.


Artificial intelligence has transformed science fiction into everyday reality. Advancements spawned by AI range from vacuuming the dirt beneath our feet (the Roomba) to collecting soil samples on another planet (Mars Rovers). AI allows us to ask questions or issue commands to our phone or smart speaker and get appropriate responses. It protects your financial and medical data online.

AI helps accelerate vaccine development and improve disease diagnoses. A 2019 project used AI technology to analyze CT lung scans, reducing false-positive results by 11 percent. Thanks to AI, machines have beaten Grandmaster Garry Kasparov at chess and won on “Jeopardy!” AI has brought us to the early days of self-driving cars and trucks. There are robot dogs and robot dance troupes. Bomb-disposal robots saved lives in Iraq and Afghanistan.

The technology can translate speech from one language to another in real time, forecast floods and recommend songs on your favorite music app based on your listening history. We get closer every day to a machine that can pass the Turing Test by convincing someone that they’re communicating with a living, breathing human being.

How did we get here? This brief history of artificial intelligence helps answer that question.

What is artificial intelligence?

In a nutshell, AI is a thinking machine—technology that can solve problems, learn and change. One of AI’s founders, Stanford computer scientist John McCarthy, called it “the science and engineering of making intelligent machines, especially intelligent computer programs.” AI involves “using computers to understand human intelligence,” he added, but “does not have to confine itself to methods that are biologically observable.”

Problem-solving is at the heart of AI. A primary advantage is its ability to study and analyze massive quantities of data at a rate far beyond the capacity of even the most skilled human brain. It applies that knowledge to a vast battery of algorithms at a similarly dizzying speed.

Artificial intelligence vs. machine learning

AI and machine learning are related concepts, but they are not interchangeable. Machine learning is actually a type of AI, enabling computer systems to learn by gathering data beyond their original programming. AI encompasses “any system that can perform capabilities that typically require human intelligence,” according to Greg Allen, chief of strategy and communications for the U.S. Department of Defense. Machine learning derives its behavior from data, he explains, whereas handcrafted-knowledge AI follows “programmed rules given by human experts.”

Machine learning is used for a variety of purposes, such as detecting online fraud, filtering spam and recognizing speech.
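The distinction Allen draws can be made concrete with a toy spam filter. The sketch below is purely illustrative (the words, messages and helper names are invented for this example, not taken from any real system): the first function encodes a rule written by a human expert, while the second derives its word list from labeled training examples instead.

```python
# Handcrafted knowledge: a human expert writes the rule directly.
def rule_based_is_spam(message: str) -> bool:
    banned = {"winner", "prize", "free"}
    return any(word in banned for word in message.lower().split())

# Machine learning (in miniature): the rule is derived from labeled data.
def train_spam_words(examples: list[tuple[str, bool]]) -> set[str]:
    spam_counts: dict[str, int] = {}
    ham_counts: dict[str, int] = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            bucket = spam_counts if is_spam else ham_counts
            bucket[word] = bucket.get(word, 0) + 1
    # Keep only words that appear in spam more often than in legitimate mail.
    return {w for w, c in spam_counts.items() if c > ham_counts.get(w, 0)}

training_data = [
    ("claim your free prize now", True),
    ("you are a winner", True),
    ("meeting moved to friday", False),
    ("free lunch in the break room", False),
]
learned_spam_words = train_spam_words(training_data)

def learned_is_spam(message: str) -> bool:
    return any(word in learned_spam_words for word in message.lower().split())
```

Note how the learned version quietly drops “free” from its spam list, because the training data shows it in legitimate mail too; the handcrafted rule would flag “free lunch in the break room” as spam. That adaptability to data is what separates machine learning from expert-written rules.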



A brief history of artificial intelligence

AI’s story is nearly a century in the making, technology-wise, with the concept of AI floating around for centuries before that. Ancient philosophers pondered the possibility of mechanical men, while thinkers from the 1600s onward began to grasp the concept of mechanized thinking. René Descartes, Monsieur “I think, therefore I am,” imagined machines that “bore a resemblance to our body and imitated our actions.” Mechanical humanoids began to appear in books and movies in the first two decades of the 20th century, though the word “robot” would not come along until 1920.

When Alan Turing wasn’t busy breaking Nazi codes during World War II, he and his pals began to lay the groundwork for AI. His 1950 paper “Computing Machinery and Intelligence” proposed the Imitation Game, which considered whether machines could think. The proposal evolved into the Turing Test for measuring machine intelligence. In 1952, Arthur Samuel created a computer program that could learn to play checkers. Logic Theorist, the first AI program, came along three years later.

Stanford’s John McCarthy and one of his counterparts at MIT, Marvin Minsky, set the modern AI movement in motion in 1956 when they brought a bunch of early computer geeks together for the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). They all agreed that AI was attainable.

Early AI efforts

AI took off in the wake of the Dartmouth conference, boosted by cheaper and faster computers with greater storage capacity. Programming languages proliferated–including McCarthy’s Lisp, the most popular one for AI research, still widely used many decades later. The world’s first industrial robot went to work on a General Motors assembly line in 1961. Early AI programs tackled algebra and calculus challenges.

Another MIT professor, Joseph Weizenbaum, created an interactive program in 1965 called ELIZA that could carry on a conversation. A year later came Shakey the Robot, the first mobile general-purpose robot and one capable of formulating its own actions based on broad commands. Moviegoers in 1968 were introduced to HAL in “2001: A Space Odyssey,” an AI system that controlled a spacecraft, conversed with crew members and went rogue.

The 1970s were a mixed bag for AI. The decade started off with a bang when Japanese programmers built WABOT-1, the world’s first anthropomorphic robot. But the technology took a hit in 1973, when a negative report on AI’s progress delivered to the British Science Research Council precipitated a cut in funding. Another robot closed out the decade in 1979, when the Stanford Cart, an early autonomous vehicle, crossed a room full of chairs without human assistance. (It took five hours to do so, but baby steps, amirite?)

Slow progress continued in the 1980s, from WABOT-2 to a driverless van developed by Mercedes-Benz. The 1990s saw A.L.I.C.E., a chatbot inspired by ELIZA; Long Short-Term Memory (LSTM), a neural network architecture later used for handwriting and speech recognition; and Deep Blue, IBM’s chess-playing computer. The most humanlike robots to date came along in the 2000s, including Kismet, which had a face that could mimic human expressions, and ASIMO, Honda’s AI robot. Roombas and the Mars Rovers followed soon after, as did Google’s first driverless car in 2009.

AI comes of age

AI has increasingly become a part of everyday life since 2010. Smartphones with voice assistants have become ubiquitous worldwide. Apple introduced Siri in 2011, with Google Home and other smart speakers and virtual assistants following in its wake. Driverless vehicles have already hit the highways, though designers still have safety concerns to work out before the cars become common.

Microsoft brought AI to gaming systems in 2010 with Kinect for Xbox 360, which uses infrared detection and a 3D camera to track body movements. The past decade-plus has also brought Watson, the IBM computer that handily defeated two “Jeopardy!” champions in 2011 and conversed with Bob Dylan in 2016; Sophia, the most realistic humanoid robot to date; and a language-processing program from Alibaba that narrowly outscored humans on a Stanford reading-comprehension test in 2018.

AI has become a standard tool for combating cybersecurity threats such as financial fraud, ransomware and behavior manipulation via social networks. It checks to make sure you’re not a robot when you log into your bank, and it scans countless online events to spot suspicious behavior and report it to corporations and government entities.

Technology that was a mere glimmer in Alan Turing’s eye 80 years ago has become integral to the way the world works, with new AI applications coming online every day.

Building a career in artificial intelligence

For a career path with good salaries and zero unemployment, look no further than AI. Almost everyone is hiring AI professionals, from financial institutions and healthcare systems to government agencies and supply-chain logistics. They need researchers, developers and engineers for data analysis, programming and protection against fraud and theft. And they’re paying good money to get them.

Jobs requiring AI skills pay about $144,000 per year by one estimate, and more in tech hubs in New York and California. ZipRecruiter reports a range of positions and salaries:
· $109,000 for a natural language processor
· $121,446 for a machine learning engineer, increasing to about $147,000 with experience
· $125,993 for an AI scientist
· $148,136 for senior artificial intelligence engineers
· $148,993 for machine learning research scientists
· $149,704 for intelligence researchers
· $164,769 for artificial intelligence engineers

Advanced AI degrees

Most AI jobs require, at a minimum, a bachelor’s degree in a field such as computer science or engineering. An advanced degree can give you a competitive edge in terms of job opportunities and higher salaries. You don’t have to relocate to study AI at a top institution; many offer online degrees in the field. They include:

  • Southern Methodist University: SMU’s highly interactive online learning program offers an MS in Computer Science with a specialization in artificial intelligence. Its specialized curriculum takes a deep dive into topics like neural networks, data mining, and information storage and retrieval.
  • Johns Hopkins University: The prestigious Baltimore university has faculty members at the leading edge of AI. Its online master’s program features elective courses such as Applied Game Theory and Integrating Humans and Technology.
  • Drexel University: Drexel offers a variety of specialization options, with the option of stacking multiple graduate certificates to earn an MS. Students can earn a dual degree in Artificial Intelligence and Machine Learning.
  • Stevens Institute of Technology: Stevens’ award-winning engineering programs include a pioneering cross-disciplinary graduate program in AI and engineering.


About the Author

Eddie Huffman is the author of John Prine: In Spite of Himself and a forthcoming biography of Doc Watson. He has written for Rolling Stone, the New York Times, Utne Reader, All Music Guide, Goldmine, the Virgin Islands Source, and many other publications.

About the Editor

Tom Meltzer spent over 20 years writing and teaching for The Princeton Review, where he was lead author of the company's popular guide to colleges, before joining Noodle.


