Michael Paulyn

The History of Artificial Intelligence (AI)

Updated: Mar 9


The Age-Old Question "Can Machines Think?"

During the first half of the 20th century, science fiction introduced the idea of artificially intelligent robots to the public. From the heartless Tin Man in "The Wizard of Oz" to the humanoid robot in "Metropolis," society was exposed to the concept of machines that could think. By the 1950s, a generation of scientists, mathematicians, and philosophers had assimilated the idea of artificial intelligence into their culture.


Among them was Alan Turing, a British polymath who explored the mathematical possibility of artificial intelligence. Turing's 1950 paper, "Computing Machinery and Intelligence," presented a logical framework for building intelligent machines and testing their intelligence. He argued that if humans can use available information and reason to solve problems and make decisions, machines should be capable of doing the same.


Image: AI-Generated using Lexica Art

Next Stages

At the time, the biggest hindrance to Turing's vision was that computers needed a significant overhaul. Before 1949, computers lacked a key prerequisite for intelligence: they could only execute commands, not store them. In other words, a computer could be told what to do, but it could not remember what it had done.


Additionally, computing was exorbitantly expensive. In the early 1950s, leasing a computer could cost up to $200,000 per month, a price that only top-tier universities and large technology companies could bear. Winning over funding sources therefore required both a proof of concept and advocacy from high-profile figures to show that machine intelligence was a pursuit worth backing.


Initiating the Proof of Concept

Despite Alan Turing's groundwork in the 1950s, progress in artificial intelligence (AI) remained stalled by those two barriers: computers' inability to store commands and the exorbitant cost of computing, which kept AI research confined to the most prestigious institutions.


It wasn't until 1956 that AI research gained real momentum. The Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky, brought together top researchers from various fields to discuss AI. At this historic conference, McCarthy coined the term "artificial intelligence." The event fell short of McCarthy's expectations, however: attendees failed to agree on standard methods for the field. Nonetheless, the conference paved the way for the next two decades of AI research.


A Bumpy Road Ahead

Between 1957 and 1974, AI made significant progress: computers became faster, cheaper, and more accessible, and machine learning algorithms improved, leading to impressive early demonstrations of problem-solving and spoken-language interpretation. These successes, coupled with the advocacy of leading researchers, convinced government agencies to fund AI research.


The Defense Advanced Research Projects Agency (DARPA) funded AI research at several institutions, particularly those interested in developing machines that could transcribe and translate spoken language and process high volumes of data. Despite the optimism, however, the end goals of natural language processing, abstract thinking, and self-recognition remained a long way off. The proof of principle existed, but the technology had yet to advance far enough to approach human-level intelligence.


The Later Years

The initial challenges in AI were significant, and the most daunting was the lack of computational power: put simply, computers could not store or process enough information quickly enough. Comprehending and communicating in language, for instance, requires knowing a vast number of words and their combinations. Hans Moravec, then a doctoral student of John McCarthy, observed that "computers were still millions of times too weak to exhibit intelligence." This realization led to decreased funding, and research slowed for about a decade.


In the 1980s, however, AI research was reignited by two factors: an expansion of the algorithmic toolkit and increased funding. John Hopfield and David Rumelhart popularized "deep learning" techniques that allowed computers to learn from experience, while Edward Feigenbaum introduced expert systems that emulated the decision-making process of human experts.


Image: AI-Generated using Lexica Art

Expert systems went on to find applications in many industries. Such a system would ask a domain expert how to respond to a given situation, and once those responses had been captured for virtually every situation, non-experts could receive advice from the program.
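
To make that idea concrete, here is a minimal, hypothetical sketch in Python. The rules and domain are invented purely for illustration; real expert systems of the era, such as MYCIN or XCON, were far larger and used dedicated inference engines over hundreds of rules.

```python
# Toy rule-based "expert system": knowledge elicited from an expert is
# stored as (conditions -> advice) rules; non-experts then query it.
RULES = {
    ("fever", "cough"): "Possible flu: recommend rest and fluids.",
    ("engine_cranks", "no_spark"): "Check the ignition coil.",
}

def advise(observations: set) -> str:
    """Return the first piece of advice whose conditions all hold."""
    for conditions, advice in RULES.items():
        if set(conditions) <= observations:  # all conditions observed?
            return advice
    return "No matching rule: consult a human expert."

print(advise({"fever", "cough", "headache"}))  # -> flu advice
```

The key design point, which the sketch preserves, is the separation between the knowledge base (rules captured from experts) and the inference procedure that applies those rules to a new situation.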


The Japanese government heavily funded expert systems and other AI-related efforts as part of its Fifth Generation Computer Project (FGCP) from 1982 to 1990. Most of the project's ambitious goals went unmet, but its indirect effects inspired a new generation of engineers and scientists. Nevertheless, FGCP funding ceased, and AI fell out of the public eye.


Major Hurdles to Overcome

Even so, AI made significant progress in the absence of government funding and public hype. The 1990s and 2000s saw a series of landmark achievements. In 1997, for example, IBM's Deep Blue defeated grandmaster Garry Kasparov, marking the first time a computer beat a reigning world chess champion.


This highly publicized match was a significant step toward artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows, another significant achievement in interpreting spoken language. Machines even seemed able to handle human emotion, as demonstrated by Cynthia Breazeal's robot Kismet, which could recognize and display emotions.


Time Goes On

The saying "time heals all wounds" may not apply to how we code artificial intelligence, but something else has changed. The computer-storage limitation that held us back 30 years ago is no longer a problem, thanks to Moore's Law, which observes that the number of transistors on a chip, and with it computer memory and speed, doubles roughly every two years. That growth has caught up with, and in many cases exceeded, our needs.
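
For a rough sense of scale, here is a back-of-the-envelope sketch in Python. The two-year doubling period is the commonly cited figure and is treated here as an assumption; Moore's original observation concerned transistor counts, not speed directly.

```python
# Back-of-the-envelope Moore's Law compounding.
DOUBLING_PERIOD_YEARS = 2  # assumed doubling period

def growth_factor(years: float) -> float:
    """How many times more capable hardware is after `years` of doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

# Thirty years of doubling every two years is 15 doublings:
print(f"{growth_factor(30):,.0f}x")  # prints "32,768x"
```

Fifteen doublings over thirty years yields hardware roughly 32,768 times more capable, which helps explain why raw computational growth, rather than cleverer code alone, unlocked so much of AI's recent progress.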


This growth explains why machines like Deep Blue and AlphaGo could defeat human champions. Progress in AI research tends to follow a rollercoaster pattern: we push AI capabilities to the limits of current computational power, then wait for Moore's Law to catch up again.


Where We're At Now

We now live in the "big data" age, where we can collect vast amounts of information too complex for humans to process. Artificial intelligence has proven helpful in several industries, including technology, banking, marketing, and entertainment. Even if algorithms don't improve much, big data and massive computing allow AI to learn through brute force. While Moore's Law may be slowing down, breakthroughs in computer science, mathematics, and neuroscience can provide a way to surpass its limitations.


Moving Forward

In the near future, AI language looks like the next big thing. We already interact with machines instead of humans in customer service, and machines even call us. Soon, we may hold fluid conversations with expert systems or get real-time translation between languages. We can also expect to see driverless cars on the road within the next 20 years. The long-term goal, however, is general intelligence: machines that surpass human cognitive abilities across all tasks.


This idea is similar to the sentient robots of the movies. While it's difficult to imagine achieving this in the next 50 years, we need to start discussing machine policy and ethics sooner rather than later. For now, we'll let AI continue to improve and integrate into society.


Stay Tuned for More!

If you want to learn more about the dynamic and ever-changing world of AI, well, you're in luck! stoik AI is all about examining this exciting field of study and its potential future applications. Stay tuned for more AI content coming your way. In the meantime, check out all the past posts on the stoik AI blog!


