by Rockwell Anyoha

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers for whom the concept of artificial intelligence (or AI) was culturally assimilated. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept came in the form of Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term he coined at the very event. Sadly, the conference fell short of McCarthy’s expectations; people came and went as they pleased, and attendees failed to agree on standard methods for the field. Despite this, everyone wholeheartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language, as well as in high-throughput data processing. Optimism was high and expectations were even higher. In 1970, Marvin Minsky told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.
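To get a feel for how simple these early demonstrations were under the hood, here is a toy, ELIZA-style exchange: a handful of hand-written patterns that reflect a user’s statement back as a question. This is an illustrative sketch in modern Python, not Weizenbaum’s actual script, and the two rules shown are invented examples.

```python
import re

# Each rule pairs a pattern with a template that turns the captured
# phrase back into a question -- the core trick behind ELIZA.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's question, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."

print(respond("I need a vacation"))    # Why do you need a vacation?
print(respond("The weather is nice"))  # Please tell me more.
```

The striking part, then as now, is that such shallow pattern matching was enough to convince some users they were conversing with something intelligent.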

[Figure 2: A timeline of AI milestones]

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.” As patience dwindled, so did the funding, and research slowed to a crawl for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques, which allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.
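The decision-making loop of an expert system can be sketched as a set of if-then rules applied repeatedly until no new conclusions follow (so-called forward chaining). The rules below are invented for illustration and are far simpler than anything in Feigenbaum’s actual systems.

```python
# Each rule: if all conditions are known facts, add the conclusion.
# These example rules are hypothetical, not from a real medical system.
EXPERT_RULES = [
    ({"fever", "cough"}, "suspect flu"),
    ({"suspect flu", "short of breath"}, "refer to doctor"),
]

def infer(facts: set) -> set:
    """Forward chaining: fire rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in EXPERT_RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short of breath"}))
```

Note how the second rule can only fire after the first has added “suspect flu” to the fact base; chains of such rules are what let these systems imitate an expert’s multi-step reasoning.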

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great step forward, but in the direction of the spoken language interpretation endeavor. It seemed that there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds

We haven’t gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem. Moore’s Law, which estimates that the memory and speed of computers double roughly every two years, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
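The scale of that catch-up is easy to underestimate, because the growth compounds. Taking the common statement of Moore’s Law as a doubling roughly every two years (the doubling period here is an illustrative parameter, not a precise empirical claim), the gains can be computed directly:

```python
# Compound growth: capacity multiplier after `years` if it doubles
# every `period` years (2-year period assumed for illustration).
def growth_factor(years: float, period: float = 2.0) -> float:
    return 2 ** (years / period)

print(round(growth_factor(10)))  # ~32x per decade
print(round(growth_factor(40)))  # ~1,000,000x over forty years
```

A million-fold gain over four decades is why a goal that was hopeless in the 1970s could fall to largely similar algorithms in the 1990s and 2010s.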

Artificial Intelligence is Everywhere

We now live in the age of “big data,” an age in which we have the capacity to collect huge volumes of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries, such as technology, banking, marketing, and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s Law is slowing down a tad, but the increase in data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential ways through the ceiling of Moore’s Law.

The Future

So what is in store for the future? In the immediate future, AI language looks like the next big thing. In fact, it’s already underway. I can’t remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this will be accomplished in the next 50 years. Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes (but better even before it comes), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects), but for now, we’ll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

Complete Historical Overview

Dartmouth Summer Research Project on Artificial Intelligence

Future of AI

Discussion on Future Ethical Challenges Facing AI

Detailed Review of Ethics of AI

