History of Robots: THE Definitive Guide

This is THE definitive guide to the history of robots.

From alarm clocks to C-3PO.

So if you want to know all about the history of robots, then this is the article for you.

Let’s jump right in!

THE Definitive Guide to the History of Robots

If someone said to you that they have just seen a robot, what would be the first thing that comes into your head? 

A humanoid robot like C-3PO?  

What about a self-driving car?  

Or the Mars Rover? 

Or a self-propelled vacuum or lawnmower like Roomba or Robomow? 

Maybe a personal assistant as in Alexa or Siri? 

Would you consider your smartphone a robot? 

Believe it or not, all of these are technically robots. 

Arthur C. Clarke’s Third Law:

“Any sufficiently advanced technology is indistinguishable from magic.”

[Wikipedia]

What Is a Robot?

So what is a robot anyway? 

Well, that depends on when in history you ask the question, and whom you ask. 

Vintage tin toy robot with purple background.

I say that because the word “robot” is only about one hundred years old. It was first used by Czech playwright Karel Čapek in 1921 in his play R.U.R., or Rossum’s Universal Robots.

The term he used was robota, translated into English from Czech as slave, worker, or forced laborer.

And as with most stories of this kind, the robota rebel against their cruel masters. 

RUR has a happy ending of sorts when two of the robots fall in love, and the last human alive declares them “Adam” and “Eve” so they will be able to procreate. 

Their cinematic counterpart, the Maschinenmensch (actually a robotic double of the heroine Maria) in the 1927 movie Metropolis by Fritz Lang, convinces the ill-treated human workers to rebel. 

The movie ends with the not-so-subtle message: The mediator between the head [the bosses] and the hands [the workers] must be the heart. 

These themes were not new, and they continue in stories told in books, plays, and movies even today. 

Questions about life, about moral responsibility to others, and about whether we risk everything when we humans play God by creating creatures in our likeness have been the constant questions asked in stories with robots as their main characters.  

Automatons and Automata

Before Capek, people called replicas of animals, humans, and self-propelled machines automatons or automata. 

Maskelyne-Cook's 1875 automaton, Psycho.

The ancients worldwide recognized the dilemma of creating these “beings”: the desire to recreate nature, the fear of doing so, and the sense of attempting to play God. 

Some of the most profound examples are the stories told by the ancient Greeks, who recognized this challenge when they created two Titan characters, with names meaning forethought (Prometheus) and afterthought (Epimetheus), who would play important roles in the story of Pandora. 

You may not be aware that Pandora was not human, nor any other kind of natural being. She was manufactured by Hephaestus for Zeus. 

She was an automaton (an early word for a robot). It is she who would bring evil into the world as a tool for Zeus to punish humans for possessing fire. She was a warning to not step too far out of our human limitations.    

She would not be the only automaton in Greek myth. There was also Talos, a bronze automaton programmed to protect the island of Crete. 

He was “killed” by Jason and Medea by draining his lifeblood, his ichor, through a bolt hole in his ankle. This story would open up questions about the meaning of life and what it means to be alive. 

Myth or Reality?

Other stories reach beyond myth and may or may not describe actual created machines that seem to replicate real life. 

For example, there are stories of statues carved with such lifelike realism that they appeared ready to walk out of the wall. 

Some could make sounds like a horn blast, talk (priests hiding behind the statue and talking through a metal tube), or even sing. 

According to ancient reports that continued well into the 200s CE, the northern statue of the Colossi of Memnon in Thebes, Egypt, could sing at dawn, supposedly consoling his mother Eos (the dawn).

The Colossi of Memnon, two massive stone statues of Pharaoh Amenhotep III in Egypt.

Some ancient stories sit on the border between myth and truth. Homer speaks of self-propelled bronze tripods that served the gods on Mount Olympus. 

Other stories bordered on the fantastical, such as crews of female automata servants that could pour drinks and serve food for guests at a royal banquet. 

Other cultures, such as the Buddhist, Hindu, and Chinese traditions, had similar tales of these kinds of automatons. 

For example, one of the earliest and most famous stories is found in a fifth-century BCE Daoist text, the Lieh-tzu, which recounts a story about King Mu of the Zhou Dynasty. 

His engineer, named Yan Shi, presented the king with a life-sized, human-shaped machine that could interact with people naturally. 

When it started getting a little too friendly with the ladies, well, the king insisted on knowing whether it was a man or a machine. As legend states, it was a machine; no little person was hiding inside. 

And other stories are more truth than myth. Alexandria, Egypt, served as an intellectual center of the ancient world from the fourth century into the first century BCE. 

There are numerous descriptions of automatons created by inventors, mechanics, and scientists working there, usually mechanical models of animals. One early inventor in this tradition, Archytas, who lived in the fourth century BCE, created a mechanical bird. 

The Arabic world carried on the Alexandrian tradition of preserving knowledge and scientific exploration. 

One of the most important records showing efforts to create automata is the account of the robotic inventions of Al-Jazari in his Book of Knowledge of Ingenious Mechanical Devices, published in 1206, which discussed all kinds of robotic tools such as:

  • A robotic fountain
  • Alarm clocks
  • Musical instruments

An even earlier book was created in the ninth century by the Banu Musa brothers, who lived in Baghdad, called the Book of Ingenious Devices. They described a programmable automated flute and how it worked. 

Renaissance and the Enlightenment

During the Renaissance and the Enlightenment eras, it became common for scientists and inventors to create mechanical replicas of life forms to learn more about how the body worked. 

Leonardo da Vinci, of course, had to get into the act. So, in 1495, he designed a human automaton that supposedly looked and acted like a knight in armor. 

By the eighteenth century, automata creation had moved from science into the realm of exhibition. In fact, many remarked that science had become an exhibition. 

Probably one of the most famous exhibitors was a man by the name of Jacques de Vaucanson (1709-1782), who created various automated machines, such as a life-sized flute player that could play twelve songs and a tambourine player, both totally mechanical. 

Vaucanson's 1738 automata.

His most famous automatons were probably his anatomically correct animals, especially his digesting duck, used to demonstrate natural functions to the delight, or disgust, of his wealthy patrons. 

Another very famous example of automaton intelligence was The Turk, created by Wolfgang von Kempelen in 1770 to impress Empress Maria Theresa of Austria. 

He presented The Turk as an automaton that had mastered the game of chess. It went up against some of the best chess players in Europe. 

However, the real brain was not gears and levers but an actual chess master (a whole series of them over time) hidden in a compartment who played the game using mirrors and special levers to move pieces. It was a masterful example of deception that continued well into the nineteenth century. 

In 1774, Pierre Jaquet-Droz created a figure of a little boy called the Writer, who could scribe messages up to 40 characters long. Jaquet-Droz and his son created a Draughtsman and a Musicienne as well.   

But scientific entertainment was not the only realm for automatons. Scientists also imagined the day when robots could provide a variety of important services and help humanity. 

Gottfried Leibniz (1646-1716) and Blaise Pascal (1623-1662) thought that machines, because they required logic to create and operate, might be developed to serve as reasoning devices to help settle disputes.  

Philosophers as varied as Rene Descartes (1596-1650) and Etienne Bonnot de Condillac (1714-1780) discussed the idea of mechanical men or machines containing all of the world’s knowledge. 

A pipe dream for them, a regular occurrence for us with the advent of the Internet. 

Mechanical Men and the Three Laws of Robotics

Writers throughout the eighteenth, nineteenth, and early twentieth centuries such as Jules Verne, Mark Twain, L. Frank Baum, Jonathan Swift, Samuel Butler, and Mary Shelley continued to imagine “mechanical men” or machines with the potential for consciousness. They asked their readers to consider some tough moral questions that resonate even more loudly today.  

In his science fiction short story “Runaround,” published in Astounding Science Fiction magazine in March 1942, Isaac Asimov went so far as to lay out the idea of the “Three Laws of Robotics”: 

  • First Law:  A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law:  A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  • Third Law:  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

The story is also considered the first time anyone used the term “robotics.”  
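The Three Laws form a strict priority ordering, which can be sketched in code. To be clear, this is a hypothetical illustration invented for this article; Asimov specified only the prose of the laws, not any implementation.

```python
# A toy illustration of Asimov's Three Laws as a strict priority ordering.
# The Action fields and the permitted() logic are invented for illustration;
# Asimov gave only the prose of the laws.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would executing this action injure a human?
    ordered_by_human: bool     # was this action commanded by a human?
    endangers_robot: bool      # would this action damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human (highest priority).
    if action.harms_human:
        return False
    # Second Law: obey human orders (orders conflicting with the First Law
    # were already rejected by the check above).
    if action.ordered_by_human:
        return True
    # Third Law: protect itself, as long as the higher laws are satisfied.
    return not action.endangers_robot

print(permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))
print(permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))
```

Note how the ordering does all the work: a human order overrides self-preservation, and the prohibition on harm overrides everything.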

A Robot Is…

Admittedly, I haven’t actually answered the initial question: What is a robot? That is a touchy question because the term “robot” is really a catchword for any machine with the ability to think, reason, and act upon its understanding independently of human intervention. 

And, as things would develop, it did not necessarily mean something humanoid; that is, something that looks like a human. 

For example, have you ever experienced your smartphone filling in the wrong word when typing an e-mail or a text message to a friend? 

Serious guy taking off eyewear while reading messages on smartphone.

It is deciding for you what you really want to say, even though that choice was incorrect. Is it thinking? 

Or is it just performing a mathematical calculation through algorithms to determine what the most probable answer might be?  
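That “mathematical calculation” can be illustrated with a toy word predictor: count which word most often follows the previous one in some training text, then suggest it. The tiny corpus and the `suggest` function below are invented for illustration; real keyboards use far larger models, but the principle is the same.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real phone keyboard learns from vastly more text.
corpus = "see you soon see you later see you soon talk to you soon".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(prev_word: str) -> str:
    """Return the word most often seen after prev_word, or "" if unseen."""
    candidates = following[prev_word]
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest("you"))  # "soon" follows "you" 3 times, "later" only once
```

The predictor is not “thinking”; it is picking the highest-count continuation, which is exactly why it sometimes fills in a word you did not want.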

In other words, robots are tools used to complete some kind of work, such as helping you send a text message. 

As we will see, discussions in the twentieth century did not require anthropomorphizing the machine itself, only its capabilities. 

And as technological development began to speed up and scientists began to create high-functioning computers, the possibility of actually creating a self-actuated robot started to seem like a real possibility. 

Big Dreams and a Reality Check

By the 1950s, early computer pioneers thought that creating a “thinking” machine would be a very simple thing to do. 

Some even gave wildly optimistic predictions along with tantalizing intelligence tests that proved to be, in reality, a low bar for what would actually be required to create a fully automated “thinking” machine. 

By the 1970s, as this reality began to sink in, the catchphrase or mantra for those in robotics and the artificial intelligence field became, to quote Steve Jobs, “Simple can be harder than complex.” 

Here are some examples of key developments in robotics since the 1960s: 

1960s

  • UNIMATE: Considered to be the first industrial robot. It was first conceived when George Devol, an inventor, and Joe Engelberger, a business executive interested in robotics, met at a cocktail party in 1956. In 1961, George Devol obtained the patent for his invention, which could follow step-by-step instructions stored in a metal drum for repetitive actions such as working on an assembly line—similar to the Jacquard loom. UNIMATE took on its first job at General Motors stacking hot pieces of die-cast metal. 
  • The Rancho Arm: Researchers at Rancho Los Amigos Hospital in Downey, California, wanted to create an artificial prosthetic arm using computer technology. They created a six-jointed mechanical arm that was as flexible as a human arm. Stanford University acquired it in 1963. It was the first robotic arm controlled by a computer. 
  • ELIZA: Computer Scientist Joseph Weizenbaum finished ELIZA in 1965. ELIZA was a computer program that could functionally converse with humans by answering questions on a computer terminal. While very brittle and, at times, nonsensical, it did mark an important step in natural language research.  
  • The Tentacle Arm: Developed by Marvin Minsky in 1968. It moved like an octopus with twelve joints and was powered by hydraulic fluids. 
  • Victor Scheinman’s Stanford Arm: One of the first robotic arms to be both electrically powered and computer-controlled, developed in 1969. Designs like it soon came to permeate industrial applications. 
  • Shakey the Robot: Created by the Stanford Research Institute through studies done from 1966 to 1972. Shakey became one of the first robots controlled by artificial intelligence systems, using a three-tiered software architecture. While it did not use neural networks, it did use algorithms that mimicked how the human brain worked. It had television cameras to see and “whiskers” to sense the world around it. It would use this data to determine actions, responding to its environment in a simple way using its planning system, called STRIPS. While it had no direct use outside of its controlled environment, it did serve as a foundation for future artificial intelligence experiments and developments.  
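ELIZA’s conversational trick was largely keyword spotting plus canned response templates. A minimal sketch in that spirit follows; the rules here are invented for illustration and are not Weizenbaum’s actual DOCTOR script.

```python
import re

# A few keyword-triggered response templates, loosely in the spirit of
# ELIZA. These particular rules are invented for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching template's response, else a stock reply."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about robots"))
print(respond("The weather is nice"))
```

This is why ELIZA felt brittle: input that triggers no rule falls through to a stock phrase, and the program has no understanding of what was said.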

1970s

  • “Uncanny Valley”: A term coined in 1970 by the famous Japanese robotics engineer Masahiro Mori while attempting to make lifelike prosthetics. It names the eerie, almost repellent sensation we experience when the artificial comes close to reality: familiar, but not quite. 
  • The Silver Arm: Created by David Silver at MIT in 1974. Using pressure sensors, the arm could mimic the dexterity of human fingers. 
  • The Soft Gripper: Shigeo Hirose’s Soft Gripper took this refinement one step further with grippers that could conform to the shape of the object being picked up. It was designed in 1977 at the Tokyo Institute of Technology. This system served as the foundation for many modern versions of robotic “hands.”

1980s

  • Kunihiko Fukushima (1982): An engineer who developed the Cognitron and Neocognitron, systems that translated how an actual eye sees into programs so computers could “see” through layered feature-detecting algorithms. These efforts led to the concept of Convolutional Neural Networks, or ConvNets, which proved key in creating visual recognition programs.    
  • DaVinci System (1987): Philip S. Green of the Stanford Research Institute, Joseph M. Rosen, MD, and Jon Bowersox, an army surgeon, created one of the first robotic-assisted surgery systems, sometimes called a “telepresence surgery system.” 
  • Fifth Generation Computer Project: Throughout the 1980s, the Japanese government poured over $400 million into a platform meant to foster the growth of artificial intelligence systems and create “thinking” machines. While the project never reached its lofty goals, it did spur a new generation of AI scientists, leading to important innovations in the 1990s.  
  • The Stanford Cart, in 1980, demonstrated applications that led to improvements in autonomous mobility and navigation.  
  • Starting in 1981, innovations in robotic arm construction included motors housed inside the arm joints (direct drive, or DD), eliminating the need for wires and chains to control joint movement. 
  • Some robots like the Denning Sentry robot (1983) improved the effectiveness of security systems.
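The convolution at the heart of the convolutional networks mentioned above slides a small filter across an image and responds strongly wherever the filter’s pattern appears. A bare-bones version is below; the image and filter values are invented for illustration.

```python
# A bare-bones 2D convolution (strictly, cross-correlation), the core
# operation of convolutional neural networks. Values are invented
# for illustration.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A 2x2 vertical-edge detector: responds where dark meets bright.
kernel = [
    [-1, 1],
    [-1, 1],
]

def conv2d(img, ker):
    """Slide ker over img and return the grid of filter responses."""
    kh, kw = len(ker), len(ker[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            row.append(sum(
                img[i + di][j + dj] * ker[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

print(conv2d(image, kernel))  # peaks in the middle column, where the edge is
```

A network stacks many such learned filters in layers, which is what lets it build up from edges to shapes to whole objects.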

1990s

  • Carbon Nanotube (CNT): In 1991, Sumio Iijima of NEC discovered tubular carbon structures, now known as carbon nanotubes. Soon applications for these tubes began to be explored in areas as diverse as electronics, multifunctional fabrics, photonics, biology, and communication.  
  • CyberKnife (1992): Medical technology moved into the realm of increased precision with a surgical robot developed by neurosurgeon John R. Adler that used x-rays to locate tumors and deliver a set dose of radiation. It was an important development in medical science’s fight against cancer.
  • Nanocrystal Synthesis: This method was invented by Moungi Bawendi of MIT. Its products are best known to many of us as quantum dots. Soon applications were found in computing, biology, high-efficiency photovoltaics, and lighting. 
  • Checkers Benchmark: Marion Tinsley had been the world’s best checkers player for forty years, at least until he squared off in 1994 against Chinook, a program that computer scientist Jonathan Schaeffer had developed over several years.  
  • Chess Benchmark: Back in 1958, Allen Newell and Herbert Simon wrote, “If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavor.” (Mitchell, 156.) IBM’s Deep Blue beat Garry Kasparov, a world champion chess player, in 1997. This was an incredibly important milestone. However, when used for other applications such as medical prognostication, Deep Blue proved not as effective without substantial reprogramming. In other words, did it actually penetrate “the core of human intellectual endeavor”? Could it actually “think” as humans do? Many scientists in artificial intelligence felt that it did not, because it did not have the full rational and informational flexibility that humans do. 
  • Dragon Systems (1990): Starting as early as the 1950s, various researchers studied ways to help machines recognize and comprehend human speech. One of the first was the AUDREY system, which used vacuum tube circuitry and could comprehend numerical digits about 97% of the time. The IBM Shoebox machine, presented at the 1962 World’s Fair in Seattle, did only slightly better. Using a database to retrieve information to improve comprehension, Carnegie Mellon developed HARPY in the 1970s. By the 1980s, systems were using predictive methods (the Hidden Markov Model, or HMM) rather than trying to match the sound exactly. The first commercial use of this technology was “Julie” in 1987, which could understand basic phrases and answer back. In 1990, a company by the name of Dragon Systems released a dictation product called DragonDictate, followed in 1997 by “Dragon NaturallySpeaking,” which could hear, understand, and transcribe continuous human speech. One of its key applications was in medical dictation. In 1997, BellSouth released its VAL system. Most of us are familiar with what this sounds like since it is the computer that speaks to you in voice-activated menus today.   
  • Yann LeCun (1998): Geoffrey Hinton, in the 1980s, made back-propagation work for training neural networks. Yann LeCun, who had worked in Hinton’s lab, used it to create a convolutional network that could read handwritten numbers accurately. They called this network LeNet. 
  • Kismet (1998): Kismet, a robotic head created by Dr. Cynthia Breazeal, was the first robot that could read and express emotion, although some might say that humans are simply projecting emotions onto it. 
  • Dip-pen Nanolithography: Invented by Chad Mirkin at Northwestern University in 1999, this allowed for the “writing” of electronic circuits as well as the creation of microscopic biomaterials and even encryption applications. 
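The Hidden Markov Model approach mentioned under Dragon Systems finds the most likely sequence of hidden states (say, phonemes) behind noisy observations, usually with the Viterbi algorithm. Here is a minimal Viterbi decoder over a made-up two-state model; all states, observation labels, and probabilities are invented for illustration.

```python
# Minimal Viterbi decoding over a toy Hidden Markov Model.
# States, observations, and probabilities are invented for illustration.
states = ["s", "t"]                      # two hypothetical phoneme states
start = {"s": 0.6, "t": 0.4}             # initial state probabilities
trans = {"s": {"s": 0.7, "t": 0.3},      # state transition probabilities
         "t": {"s": 0.4, "t": 0.6}}
emit = {"s": {"hiss": 0.8, "tap": 0.2},  # observation probabilities per state
        "t": {"hiss": 0.1, "tap": 0.9}}

def viterbi(observations):
    """Return the most probable hidden state sequence for the observations."""
    # best[state] = (probability of best path ending here, that path)
    best = {s: (start[s] * emit[s][observations[0]], [s]) for s in states}
    for obs in observations[1:]:
        nxt = {}
        for s in states:
            # Choose the predecessor state that maximizes the path probability.
            prob, path = max(
                ((best[p][0] * trans[p][s] * emit[s][obs], best[p][1])
                 for p in states),
                key=lambda pair: pair[0],
            )
            nxt[s] = (prob, path + [s])
        best = nxt
    return max(best.values(), key=lambda pair: pair[0])[1]

print(viterbi(["hiss", "tap", "tap"]))  # ['s', 't', 't']
```

The key idea is prediction rather than exact matching: even noisy observations get mapped to the state sequence that best explains them.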

2000s

  • National Nanotechnology Initiative (NNI): In 2000, President Bill Clinton created the NNI to coordinate efforts to promote the development of nanotechnologies. These technologies had already hit consumer markets in clear sunscreens, stain- and bacteria-resistant clothing, better screen interfaces for electronics, and scratch-resistant coatings. Three years later, Congress would pass the 21st Century Nanotechnology Research and Development Act to promote development. The European Commission in 2003 also adopted the communication “Towards a European Strategy for Nanotechnology” to encourage members of the European Union to promote research and development in nanotechnology. 
  • Drones: Although drones have existed since as early as 1849, when Austria used incendiary bombs attached to balloons to attack Venice, the first use of a robotic drone in military conflict to specifically target a military combatant was in 2002, when the CIA used a Predator drone to try to kill Osama bin Laden. By 2006, drones used GPS technology and became readily available to anyone who wanted one. In that year, the public use of drones had grown to such a point that the Federal Aviation Administration had to begin issuing permits and setting regulations for use.  
  • Ray Kurzweil: A former student of Marvin Minsky who became a foremost innovator, with inventions ranging from music synthesizers to one of the first text-to-speech machines to optical character recognition. However, he is best known for his unfailing certainty in the ability of computers to exceed human intelligence. In his most famous work, The Singularity Is Near: When Humans Transcend Biology (2005), he makes several startling predictions: artificial intelligence will reach human levels by 2029 (that is, pass the Turing Test); non-biological intellectual capacity will exceed human intelligence by the early 2030s; and, by 2045, we will reach the Singularity, the point at which there will be a profound and disruptive change in our intellectual capabilities, implying we may no longer be fully biological but part machine. 
  • Google Voice Search App: In 2008, Google introduced its Voice Search App for iPhones. It was an important demonstration of just how far things had come in the field of voice recognition.  
  • ImageNet: Fei-Fei Li created this image database in 2009 using convolutional networking. It has millions of images curated (i.e., labeled) through crowdsourcing. 
  • Robots had become more lifelike, such as the SONY AIBO (1999), a robot that acted like a dog and could respond to voice commands, or Honda’s Advanced Step in Innovative Mobility (ASIMO, 2002), a humanoid robot that could walk, climb stairs, recognize faces and objects, and respond to voice commands.
  • One of the first mobile robotic cleaning machines, the iRobot Roomba, was introduced in 2002. The vacuum robot was able to detect and avoid obstacles using an insect-like reflex behavior, giving it more flexibility than a more centralized “brain” would. 
  • Other robots, like the Centibots (2003) created by the Defense Advanced Research Projects Agency (DARPA), could enter dangerous areas in groups, communicating with each other to coordinate efforts without human intervention. If one of the units failed, another could move into its place. They could be used for mapping or to find items as well. 
  • And probably some of the most famous robots, the Mars rovers, landed on Mars in 2004 and continued to return messages and data to Earth until 2010. The information they beamed back to Earth opened up a greater understanding of the Martian surface than ever before. 
  • In 2004, DARPA introduced the first “Grand Challenge” for autonomous vehicles with the goal of encouraging research in self-driving cars. In 2005, Stanford’s entry, named “Stanley,” drove an off-road, 142-mile race course well within the ten-hour time limit set by the contest organizers, finishing in just under seven hours.
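The Roomba’s “insect-like reflex behavior” mentioned above is often described in terms of subsumption-style control: a stack of simple prioritized reflexes rather than a central plan. A minimal sketch follows; the sensor fields and behavior names are invented for illustration, not iRobot’s actual firmware.

```python
# A toy subsumption-style controller in the spirit of the Roomba's
# reflex-based design. Sensor fields and commands are invented
# for illustration.
from dataclasses import dataclass

@dataclass
class Sensors:
    bumper_hit: bool      # did we just bump into an obstacle?
    cliff_detected: bool  # is there a drop-off (e.g., stairs) ahead?

def decide(sensors: Sensors) -> str:
    """The highest-priority reflex that fires wins; no central world model."""
    if sensors.cliff_detected:   # safety reflex has top priority
        return "back_up"
    if sensors.bumper_hit:       # obstacle reflex: turn away and continue
        return "turn_right"
    return "drive_forward"       # default behavior: keep cleaning

print(decide(Sensors(bumper_hit=False, cliff_detected=False)))  # drive_forward
print(decide(Sensors(bumper_hit=True, cliff_detected=False)))   # turn_right
```

Because each reflex is independent, no single failure (or unexpected room layout) can crash a central planner; the robot simply keeps reacting.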

2010s

  • Watson: In 2011, IBM’s Watson, a computer programmed for natural language and communication, beat Ken Jennings and Brad Rutter (two of the show’s most successful former champions) in a televised episode of Jeopardy!. Yet Watson raised the same question as Deep Blue: was it actually able to “think” and process information as humans do? Many still believe we have a long way to go before we reach that point. 
  • Siri: In 2011, Apple took personal assistant and voice recognition technology to a new level with the introduction of Siri. 
  • AlexNet: A convolutional network that in 2012 won the visual recognition challenge built on ImageNet, which by then contained over fourteen million images. Visual recognition had improved immensely.
  • Nanotube Computer: In 2013, the first carbon nanotube computer was developed by Stanford Researchers. They named it Cedric. The goal was to determine if it was possible to build a computer out of carbon rather than silicon to try to help improve energy efficiency.  
  • Passed the Turing Test? In 2014, the Royal Society in London hosted an event following the “Imitation Game” format described by Turing back in 1950. There were several groups of contestants. A Russian/Ukrainian group presented Eugene Goostman, which was in reality a chatbot; it won, with much press fanfare and excitement that the Turing Test had finally been beaten. But was it? Did this computer actually interact with the judges, or was it just programmed well to perform this one task? Most experts felt it was just programmed well. It could not “think” outside of this activity. 
  • Joint Letter: In 2015, Stephen Hawking, Elon Musk, Steve Wozniak, Bill Gates, and about one hundred others sent a joint letter to the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, issuing a warning about the risk of artificial intelligence to humanity, especially if used for military purposes. 
  • AlphaGo: In March 2016, the computer program AlphaGo beat Lee Sedol, a world-renowned Go player, in a five-game match. It was the first time a machine ever beat a top human professional in the game of Go.  

Today Robots are Everywhere

Today, robots are everywhere in our lives: 

  • They work in factories and hospitals
  • Serve food in lunch lines 
  • Assist with security
  • Plant and harvest crops
  • Milk cows
  • Improve military reconnaissance
  • Provide entertainment in a variety of ways, such as computerized chess games
  • Enter situations too dangerous for humans, such as nuclear reactors or outer space

Most of these robots do not look like humanoid robots from ancient Greece or even movies or television. Instead, most look like very specialized machines.

Automatic system pneumatic input to robot handle in a factory.

And if they look like anything found in nature, pictures of beetles and spiders come to mind. 

Current Status of Robotics

Should we fear them? 

Will they take over the world? 

Well, if you talk with robotic engineers today, they will tell you a thing or two about what they have learned since those heady days of the 1950s when all things seemed possible. 

At this point, robots do not really “think.” They can be programmed to do complicated tasks like Deep Blue, but they can’t do anything else unless they are reprogrammed. 

They also do not have any common sense, and they definitely cannot plan for the unexpected. It is one of the reasons why self-driving cars are still in the testing phase. These robots are prisoners to their programming.

Automation engineer uses laptop for programming robotic arm.

Scientists have also made remarkable progress in creating robots that can grip things and move on “legs.” But, these robots cannot do those activities very well—mostly acting like someone with all thumbs or two left feet. 

According to Manuela Veloso, a robotics engineer, “It’s crazy how sophisticated our bodies are as machines. We’re very good at handling gravity, dealing with forces as we walk, being pushed and keeping our balance. It’s going to be many years before a bipedal robot can walk as well as a person.”

In other words, they are not going to be artificial people any time soon.  

The other big challenge is emotions.

Humans project their feelings onto these machines, personifying them.

While scientists are experimenting with giving robots emotions so they can respond in kind, what they are more concerned with is the human response to the robots themselves.

Basically, we need to adapt to them, not the other way around. They are machines and therefore are emotionless.

And, despite what some fear, robots will not be stealing away our jobs any time soon. Basically, there are some things that only humans can do. Yes, some types of work will disappear.

Still, new ones will open up. A robot may install a power-generating windmill, for example, but it is a human who will maintain it. Basically, the jobs will shift. 

Yet, great human thinkers like Stephen Hawking warned us that:

“AI could spell the end of the human race.”

Is he right? Are we overreaching?

According to classicist Adrienne Mayor:

“That unsettling oscillation between techno-nightmares and grand futuristic dreams—that is timeless. The ancient Greeks understood that the quintessential attribute of humankind is always to be tempted to reach ‘beyond human’ and to neglect to envision the consequences. We mirror Epimetheus, who accepted the gift of Pandora and only later realized his error.”

While I don’t expect we will reject the benefits technology brings us, we do need to find ways to control those tools, so they don’t turn on us like HAL in the movie 2001: A Space Odyssey.

Hopefully, we will be able to keep the evil in the jar.