
From Concept to Reality: A Deep Dive into the History of AI

Artificial Intelligence (AI) has journeyed from imaginative speculation to a transformative reality with a growing influence on the way humans work, live and play. This blog is the first instalment of our two-part series exploring the past, present and future of AI. From its origins in the mid-20th century to the advancements that have reshaped entire industries, we'll uncover the milestones and breakthroughs that defined AI's development. Join us as we trace how this technology moved from concept to reality.

What is Artificial Intelligence?

Artificial intelligence, more commonly referred to as 'AI', is a machine's ability to perform tasks that historically only a human could do. These tasks include powering online chatbots, editing text, and unlocking your phone with Face ID. The term also applies to any machine that exhibits traits of a human mind, such as learning and problem-solving.

You may have also heard the term 'machine learning'. Machine learning is the method by which computers learn from data and make decisions based on it without direct human intervention. It is the part of AI that handles learning, allowing machines to gain, process and apply knowledge without being explicitly programmed for each step.
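
To make that idea concrete, here is a minimal sketch in Python of a program that 'learns' from labelled examples rather than being told the rule directly. The daylight-hours data and the single-threshold 'model' are made up purely for illustration.

# A minimal sketch of "learning from data": the program is never given the
# rule; it infers a decision threshold from labelled examples.
def learn_threshold(examples):
    """Pick the threshold that best separates the two classes."""
    values = sorted(v for v, _ in examples)
    best_threshold, best_correct = None, -1
    for i in range(len(values) - 1):
        threshold = (values[i] + values[i + 1]) / 2
        correct = sum((v > threshold) == label for v, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Hypothetical training data: (hours of daylight, is_summer)
data = [(9.0, False), (10.5, False), (11.0, False),
        (13.5, True), (14.0, True), (15.5, True)]

t = learn_threshold(data)
print(f"Learned threshold: {t:.2f} hours")        # 12.25
print("Prediction for a 14.2-hour day:", 14.2 > t)  # True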

Early Beginnings of AI

The McCulloch-Pitts Legacy in AI

In 1943, American neurophysiologist Warren McCulloch and logician Walter Pitts wrote a seminal paper that introduced the first mathematical model of a neural network. The model suggested that such networks could, in theory, perform any computable function, essentially mirroring certain functions of the human brain.

The McCulloch-Pitts neuron model provided the first step toward developing artificial neural networks, which would later become central to machine learning and AI.
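
As a rough illustration of the idea, here is a minimal sketch of a McCulloch-Pitts style neuron in Python: binary inputs, fixed weights and a hard threshold. The weights and threshold are illustrative values chosen so the unit behaves like a logical AND gate.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) only if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron only fires when both inputs are active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron((a, b), weights=(1, 1), threshold=2))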

The Turing Test

By the 1950s, a generation of scientists, mathematicians, and philosophers was devoting its time to exploring the concept of AI. One such person was computer scientist Alan Turing, who reasoned that if humans can use available information to solve problems and make decisions, machines should be able to do the same.

In 1950, Alan Turing proposed a practical test to evaluate a machine's intelligence. The Turing Test requires a machine to engage in conversation with a human without being identified as a machine. The first widely reported claim of a machine passing the Turing Test came during a competition in June 2014 marking the 60th anniversary of Turing's death, when a chatbot named Eugene Goostman convinced 33% of the judges that it was human. There have been many other contests like this one, but we'll save those for another blog.

Scientists in 1955. Photo by Margaret Minsky.

The Birth of AI

In 1955, the term 'artificial intelligence' was coined by computer scientist John McCarthy, who is often considered the 'father of artificial intelligence'. The following year, McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon held the Dartmouth Conference, the first-ever conference devoted to AI.

The Dartmouth Conference was run as an extended summer workshop that aimed to explore and define the concept of artificial intelligence, laying the groundwork for AI as a scientific discipline. The discussions and ideas that emerged from it shaped the next 20 years of AI research.

Source: SRI International

The Maturation of AI

From the late 1950s through the 1960s, the concept of AI entered the mainstream. The era produced groundbreaking creations such as LISP, a programming language still in use today, and the first 'chatterbot'.

1958: John McCarthy developed LISP (List Processing), one of the earliest programming languages designed for AI research.

1959: Arthur Samuel coined the term 'machine learning' while describing his checkers-playing program, which learned to outplay its own creator.

1961: The first industrial robot, Unimate, began operating on General Motors' assembly line in New Jersey, performing tasks too dangerous for humans.

1965: Edward Feigenbaum and Joshua Lederberg began work on DENDRAL, the first 'expert system', which mimicked the reasoning and decision-making of a human expert.

1966: Joseph Weizenbaum introduced ELIZA, the first 'chatterbot', which simulated a psychotherapist using simple pattern matching and scripted responses (see the short sketch after this timeline).

1968: Soviet mathematician Alexey Ivakhnenko published the "Group Method of Data Handling," pioneering what would evolve into deep learning. 

1969: Researchers at Stanford Research Institute introduced Shakey the robot, the first general-purpose mobile robot capable of perceiving its environment, planning its actions, moving around, and interacting with objects.
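
For a flavour of how ELIZA-style chatterbots worked, here is a minimal Python sketch using pattern matching and pronoun 'reflection'. The rules below are illustrative and are not Weizenbaum's original script.

import re

# Map first-person words to second-person ones so replies read naturally.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# Each rule is a regular expression plus a response template; the last rule is a catch-all.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Please tell me more."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am worried about my exams"))
# -> How long have you been worried about your exams?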

The AI Winters

The term "AI Winter" describes a phase when funding and interest in artificial intelligence research significantly declined. Historically, there have been two distinct AI Winters, with the first one occurring in the 1970s. These periods of reduced enthusiasm and investment in AI were punctuated by a brief resurgence or boom in AI research.

The First AI Winter

The first AI Winter occurred in the 1970s and was triggered by the high expectations set during the 1950s and 1960s not being met within the predicted time frames. The 1973 Lighthill Report, by British scientist Sir James Lighthill, critically assessed these shortcomings, noting in particular that AI had succeeded only in limited, narrowly defined areas.

This led to significant funding cuts in the UK and the U.S., where DARPA shifted its focus towards more targeted, application-driven AI research. The result was fewer AI projects, budget constraints in academic settings, slower development of AI expertise, and stalled progress in related areas such as machine learning and robotics.

The AI Boom in the 1980s

After the stagnation of the 1970s, the 1980s marked a period of resurgence, fuelled by advances in technology and renewed interest in the potential of AI. Key milestones that earned this decade the label of 'AI Boom' include:

Fifth Generation Computer Systems Project (1982): A Japanese government initiative that aimed to create the next wave of computers, capable of processing information much like the human brain.

Rise of Expert Systems: Throughout the early 1980s, commercial expert systems such as XCON (R1) for computer configuration, alongside earlier research systems like MYCIN for medical diagnosis, showcased AI's practical potential in industries such as medicine, finance, and technology.

Introduction of the Connection Machine (1986): This massively parallel supercomputer was designed to enhance pattern recognition tasks, proving instrumental for AI research requiring large-scale data processing.

Backpropagation Algorithm (1986): David Rumelhart, Geoffrey Hinton, and Ronald Williams popularised backpropagation for training neural networks, showing how a network's weights can be adjusted effectively based on its output errors and paving the way for later deep learning advances (see the short sketch after this list).

Founding of AI Companies: The mid-to-late 1980s saw the emergence of numerous AI-focused companies and increased interest from tech giants, which began investing in AI technologies and developing hardware optimised for AI applications, most notably specialised 'Lisp machines' built to run the Lisp programming language.
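
As promised above, here is a minimal backpropagation sketch in Python: a tiny two-layer network trained on the XOR problem with NumPy. The network size (two inputs, four hidden units, one output), learning rate, random seed and iteration count are illustrative choices rather than the original 1986 formulation, and a different seed may need more training steps.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights and biases
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 1.0

for _ in range(20000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through each layer
    d_out = (out - y) * out * (1 - out)     # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)      # error signal at the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically close to [0, 1, 1, 0]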

The Second AI Winter 

Unfortunately, after a period of great strides, a second AI Winter hit. It spanned from the late 1980s to around 1993 and, like the first, featured a significant downturn in both funding and interest in AI research. The slump was caused by a mix of factors: high costs paired with low returns, overinflated expectations, and a scarcity of practical AI applications.

Source: IBM

The Resurgence of AI 

The period from the mid-1990s to the early 2000s marked significant advancements in artificial intelligence, characterised by both technological progress and increasing mainstream recognition of AI's potential. This era is particularly noted for the development and application of more sophisticated algorithms, improved computational power, and greater availability of data.

Key Advances in AI Technology

Machine Learning and Data Mining: As the internet expanded and more industries digitised, an abundance of data became available, enabling AI researchers to refine and train more complex models.

Neural Networks: Though neural networks were not new, they began to see more practical applications thanks to increased computational power and better understanding of how to train these models effectively. This period laid the groundwork for later developments in deep learning.

Natural Language Processing (NLP): Advances in NLP began to accelerate with more sophisticated linguistic models that could analyse, understand, and generate human language with better accuracy. 

Chess Competition of 1997

One of the most publicised events highlighting AI's capabilities was the 1997 chess match between IBM's Deep Blue computer and the reigning world chess champion, Garry Kasparov. Deep Blue's victory marked the first time a computer had beaten a reigning world champion in a match played under standard tournament time controls.

This event not only captured the public's imagination, altering perceptions of AI's capabilities across society, but also fuelled media discussions about AI's impact on human intellect and creativity. Furthermore, Deep Blue's success inspired further AI research, particularly in game theory, decision-making, and strategic planning, and encouraged the exploration of AI applications in other complex human activities.

The Growth of AI in the 21st Century

In the early to mid-2000s, the growth of AI was fuelled by major advancements in technology and its increasing integration into both industry and everyday life. The era saw significant progress in NLP and computer vision, facilitated by improved techniques and faster processors. Major tech companies like Google, IBM, and Microsoft began investing heavily in AI, driving forward innovations such as IBM's Watson, which famously won the "Jeopardy!" game show in 2011 and demonstrated AI's practical potential for parsing natural language and retrieving information.

During the 2010s and into the 2020s, artificial intelligence has undergone remarkable growth and transformation, deeply impacting society, the economy, and daily life. The rise of deep learning has propelled AI to excel in complex tasks such as image and speech recognition, as seen in technologies like Apple's Face ID and voice assistants like Amazon's Alexa and Google Assistant. In transportation, AI helps companies like Tesla and Waymo develop safer autonomous vehicles by interpreting real-time road conditions. In healthcare, AI-driven tools enhance diagnostic accuracy and speed in detecting diseases like cancer, in some studies matching or even outperforming human radiologists.

Additionally, AI has changed education by enabling personalised learning experiences, automating administrative tasks, and introducing interactive technologies like virtual and augmented reality, particularly as digital and remote learning became more prevalent during the COVID-19 pandemic. While not without its challenges and concerns, AI continues to offer profound opportunities for innovation and improvement across multiple sectors, promising an even brighter future as its potential is further realised.

To Be Continued...

From its theoretical beginnings in the mid-20th century to its current applications across diverse sectors, AI's evolution has been both rapid and revolutionary. As we've explored significant developments and iconic milestones, the impact of AI on industries is undeniable. Stay tuned for the second instalment of CD Soft's AI blog series, where we'll focus on how AI is used in the education industry and the impact it has on educators and students.
