The Turing Test
These wartime computers proved that machines could be more powerful than first imagined. In 1950, at the dawn of the digital age, Mind published Alan Turing’s seminal paper ‘Computing Machinery and Intelligence’, in which he posed the question, ‘Can machines think?’ Instead of trying to define the terms ‘machine’ and ‘think’, Turing outlined a different method, derived from a Victorian parlour amusement called the imitation game. This later became known as the Turing test, and it demonstrated how a human may be unable to tell a machine apart from another human being.
The Turing test involves three participants: a computer that answers questions, a human who answers questions and another human who acts as the interrogator. Using only a keyboard and a display screen, the interrogator asks both the human and the computer a series of wide-ranging questions to determine which is the computer. If the interrogator is unable to distinguish the human from the computer, then the computer is considered an intelligent, thinking entity and has passed the test.
I subscribe to the belief that the Turing test doesn’t actually assess whether a machine is intelligent, more that we’re willing to accept it as intelligent. As Turing himself said in ‘Intelligent Machinery’, a report he wrote for the National Physical Laboratory, ‘The idea of intelligence is itself emotional rather than mathematical. The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration.’ Turing’s paper had a huge impact in a variety of fields as questions about machine intelligence snowballed into those about a machine having emotions or a soul. It prompted further lines of enquiry in philosophy, engineering, sociology, psychology and religion.
In Chapter Seven, we’ll explore how, just as Turing predicted, modern-day AI products are built to kid you into thinking they are real humans. You’ll find some pointers about how to develop the skills needed to better spot the difference!
Grace Hopper (1906–1992)
Meanwhile, back in America, computing pioneers such as John von Neumann were developing ever more complex machines, and Harvard’s Mark I was among the most ambitious of them. One of that project’s first programmers was Grace Hopper. As well as serving in the US Navy Reserve during the Second World War, Hopper was excellent with numbers, and she used her PhD in mathematics to become a ground-breaking programmer.
It was Grace who came up with the idea of using English words in computer programming languages, rather than the complicated numbers and symbols that had, until then, been used as commands. At first, her mostly male colleagues scoffed and thought it was impossible. But Grace persisted, determined to make programming more accessible. She’s been quoted as saying, ‘If it’s a good idea, go ahead and do it. It’s much easier to apologise than it is to get permission’, and I can’t help but admire this attitude from a woman who knew that some people would see her gender as a reason to hold her back. In 1952, she successfully released her first compiler, called A-0 (a way to translate between human-friendly language and the computer’s own language). The compiler is an idea that most coding languages still rely on now: the letters and numbers a programmer writes are compiled into a version that the computer can make sense of. Then, in 1959, she led the development of one of the earliest coding languages, COBOL.
What I think is so critical about Grace’s role in AI is that she was trying to make programming more accessible by effectively acting as a translator of computer language, removing it from the clutches of the few men who knew the programming languages. She described those programmers as ‘high priests’ who regarded themselves as the gatekeepers between ordinary people and computers. This meant that while she was making all these incredible breakthroughs in programming, she was not popular among her male peers. Later she said that ‘Someone learns a skill and works hard to learn that skill, and then if you come along and say, “You don’t need that, here’s something else that’s better”, they are going to be quite indignant.’ But once more, she persisted. Hopper also went on to become one of the key people building and developing the standards for other programming languages like FORTRAN – paving the way for my own personal hero Dorothy Vaughan (more on her in a moment).
Hopper played a critical role in computation and AI. In 1969, she received the Computer Science Man of the Year Award from the Data Processing Management Association, and throughout her life she went on to collect a whole host of other medals and awards that no woman had ever received before. She was even featured in a 1967 issue of Cosmopolitan magazine about women in computing – no mean feat for that time.
LISP, John McCarthy (1927–2011) & Phyllis Fox (1923– )
Although Turing was already asking ‘Can machines think?’, it wasn’t until 1955, the year after Turing’s untimely death, that a young computer scientist, John McCarthy, coined the term Artificial Intelligence.
A year later, McCarthy brought together a group for the first conference on AI, and this is really when the field as we understand it began to take shape. Sadly, this is also the point at which the AI world became overwhelmingly male. The conference was organised and attended only by men. (It’s not that women were not still making waves in this new AI arena – they were – but they were often sidelined and rarely recognised.)
In 1958, McCarthy created the LISP computer language, which is still in use today. But what many people don’t know is that the first LISP manual was written by a woman named Phyllis Fox. As Fox humbly commented in an interview with the Society for Industrial and Applied Mathematics, ‘Now, this was not because I was a great LISP programmer, but they never documented or wrote down anything, especially McCarthy. Nobody in that group ever wrote anything down. McCarthy was furious that they didn’t document the code, but he wouldn’t do it, either. So I learned enough LISP that I could write it and ask them questions and write some more. One of the people in the group was a student named Jim Slagel, who was blind. He learned LISP sort of from me, because I would read him what I had written and he would tell me about LISP and I would write some more. His mind was incredible.’
I’ve often been the only person in the room who would just get on with it and write something down to preserve it when the men wouldn’t. Phyllis Fox is an inspiration, not only as a woman coder, but also for reminding us of the importance of translators, preservers and teachers to the history of AI.
The 1960s and 1970s
DENDRAL & Georgia L. Sutherland
In the mid-1960s, a problem-solving AI project called DENDRAL was set up. It used ideas from artificial intelligence to help chemistry labs and researchers identify unknown molecules: it stored information about the compounds chemists did know and then analysed the ones they didn’t, looking for patterns and similarities. DENDRAL ran for decades, and a lot of the software and theory it used ended up being the basis for the expert systems of the 1980s (more on those later).
If you research DENDRAL, the names of four men (Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg and Carl Djerassi) are repeatedly affiliated with the project. But it was a woman, Georgia L. Sutherland, who, as you can see from the published notes, wrote the program in LISP and was in fact the lead author on many of the papers.
Dorothy Vaughan (1910–2008)
Remember Grace Hopper and the standards she helped design and build for FORTRAN? Well, not only did FORTRAN become an incredibly important language for businesses and industries, it also played a huge part in the Space Race at NASA.
If you’ve watched the film or read the book Hidden Figures, which I suggest you do if you haven’t, you might already be familiar with the name Dorothy Vaughan. Vaughan worked at NASA during the segregated Jim Crow years, where women were employed as ‘computers’, calculating complicated maths formulae by hand. When Vaughan saw that NASA were about to implement new technology that would do her team’s work faster, thereby making their jobs obsolete, she decided to teach herself FORTRAN. She got the system working and then passed on everything she’d learned to her team. In doing so, she was able to make sure these women were skilled enough to be re-deployed to new jobs working on the most important part of the system.
Vaughan was the first ever African-American woman to be promoted to supervisor of the West Area Computing group, the segregated female computing team. She led the electronic computing department there and stayed at NASA until 1971.
Rather than feeling threatened by the machines, Dorothy embraced them. With the confidence and foresight to predict the next wave of technology, she ensured that she and her team were primed with the skills to keep their jobs and stay ahead of the machines.
Unimate
Tech wasn’t only being used for space. In 1961, Unimate was the first robot built for manufacturing and was used to automate repetitive tasks at a General Motors plant. This was the first time the idea of stored memory (early AI) was used in industry, instead of in a lab or a research setting. It prompted a wave of robots being built through the 1960s and 1970s, notably the WABOT in Japan, which was the first robot built to resemble a human.
Kathleen Booth (1922– ) & Karen Spärck Jones (1935–2007)
The late 1960s and early 1970s was a real boomtime for AI, and Karen Spärck Jones was part of it. Instead of focusing on the growing field of coding, she taught computers to understand human language. This is called natural language processing (NLP) and it is critical to the idea of computer intelligence. Before Jones, little work had been undertaken to get computers to understand spoken or written English, although as early as the 1950s a brilliant woman named Kathleen Booth had first imagined some of these concepts. Jones, however, was responsible for one of the biggest breakthroughs in NLP with something called IDF (inverse document frequency), a way of weighting how important a word is by how rarely it appears across a collection of documents. Her 1972 paper laid the groundwork for IDF, and it became the basis of the ranking behind almost all the search engines that we use now.
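If you’re curious what that looks like in practice, here’s a minimal sketch in Python of the idea. The three ‘documents’ and the exact weighting are my own toy illustration rather than Jones’s original formulation, but the principle is hers: a word that appears in only a few documents gets a high score, while a word that appears in nearly all of them scores close to zero.

import math

def inverse_document_frequency(term, documents):
    # Count how many documents contain the term at least once.
    containing = sum(1 for doc in documents if term in doc.lower().split())
    if containing == 0:
        return 0.0  # the term never appears, so it can't help us rank anything
    # Rare terms (small 'containing') get a large weight; ubiquitous terms get ~0.
    return math.log(len(documents) / containing)

docs = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "quantum computing is the future",
]
print(inverse_document_frequency("the", docs))      # common word -> weight 0.0
print(inverse_document_frequency("quantum", docs))  # rare word  -> weight ~1.1

A search engine combining these weights with how often each word appears in a page gets a surprisingly good ranking of results, which is why Jones’s insight is still at work every time you search the Web.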
Like so many of the women featured in this chapter, what I admire and think is so cool about Jones is that she was a self-taught programmer – her background was in philosophy and history. I can’t help but think that her immersion in these disciplines sparked her approach to computer science, which focused not only on how to make computers understand humans, but also on the technology’s social impact.
She also came up with the phrase: ‘Computing is too important to be left to men.’ Karen Spärck Jones, I salute you!
The First AI Winter
Okay, so I’ve rounded up some of the prime events that led towards the development of AI, but in truth it’s not as streamlined as it might appear on the page. Once the quest for artificial intelligence was truly underway, expectations snowballed, making the lack of advancement and results by the end of the 1970s dispiriting. The field was held back by two basic limitations: computers simply didn’t have enough memory or processing speed, and there wasn’t enough suitable data with which to train the machines. Progress slowed and research money was pulled. What followed came to be known as the ‘AI Winter’, which we can roughly date as beginning in the mid-1970s and ending in 1980.
The 1980s
Expert Systems
The emergence of expert systems in the 1980s went some way to address this. An expert system is a computer program with an in-built decision-making ability: it reasons its way through complex problems using a pre-programmed body of knowledge, usually captured as ‘if-then’ rules gathered from human experts. These were the first truly successful forms of AI software and focused on narrow tasks, such as playing chess.
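To give a flavour of how this works, here’s a minimal sketch in Python of a rule-based system. The medical-style facts and rules are invented purely for illustration – real expert systems of the era held thousands of rules painstakingly gathered from specialists – but the loop of ‘keep applying rules until nothing new can be concluded’ is the essential idea.

# The knowledge base: each rule says 'if all of these facts hold, conclude this'.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_a_doctor"),
]

def forward_chain(known_facts):
    # Keep applying the rules until a full pass adds nothing new.
    facts = set(known_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# -> the known facts plus 'possible_flu' and 'see_a_doctor'

Given the three starting facts, the system first concludes ‘possible_flu’ and then, on the next pass, ‘see_a_doctor’ – it reasons in small steps from the knowledge it has been given, which is exactly what made these systems useful to businesses.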
All around the world, expert systems were developed and quickly adopted by competitive corporations – building on the groundwork laid by projects like Georgia Sutherland’s DENDRAL. This meant that the primary focus of AI research turned to accumulating comprehensive quantities of knowledge from various experts. It didn’t take long before AI’s commercial value started to be realised, and investment began to flow again.
Deep Learning
Alongside the expert systems in the 1980s, another method of AI was being developed. Deep learning is a type of machine learning that tries to mimic the way humans learn through experience. You’ll learn more about all the technical parts of this in the next chapter. Some of the core ideas from machine learning – the idea that a computer can learn to make decisions by identifying patterns in data – can be traced to those early days in the 1950s, but deep learning brought machine learning to a new level. Among its early developers were mathematical psychologist David Rumelhart and physicist John Hopfield, later joined by Geoffrey Hinton, and it was their idea to train machines using neural networks that mimic the design of the brain. Deep learning is still best practice for much of AI in the twenty-first century.
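As a small taster of what’s coming in the next chapter, here’s a minimal sketch in Python of a single artificial ‘neuron’ learning from data. It’s my own toy example – it simply learns the logical AND function, and is nothing like the networks Rumelhart, Hopfield and Hinton built – but the principle of nudging the connection weights a little after every example, until the outputs match the data, is the one deep learning scales up to millions of such units.

import math
import random

# Training data for logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = random.uniform(-1, 1), random.uniform(-1, 1), 0.0
learning_rate = 0.5

def neuron(x1, x2):
    # Weighted sum of the inputs, squashed into the range 0..1.
    return 1.0 / (1.0 + math.exp(-(w1 * x1 + w2 * x2 + bias)))

for epoch in range(5000):
    for (x1, x2), target in data:
        output = neuron(x1, x2)
        # Nudge each weight in the direction that reduces the error
        # (gradient descent on a single sigmoid unit).
        grad = (target - output) * output * (1 - output)
        w1 += learning_rate * grad * x1
        w2 += learning_rate * grad * x2
        bias += learning_rate * grad

for (x1, x2), target in data:
    print((x1, x2), round(neuron(x1, x2), 2), "target:", target)

After training, the printed outputs sit close to 0, 0, 0 and 1 – the neuron has ‘learned’ AND from nothing but examples, rather than from rules written by an expert.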
Deep Thought
Around this time, many people were still engrossed in how AI could work with humans, rather than just for them. One of the most common ways to explore this was through games. Engineers and computer scientists wanted to know: could a computer understand context well enough to play a game with a human? One of the most famous examples of this is Deep Thought, a chess program.
In 1988, Deep Thought beat its first grandmaster (someone ranked at the highest level of chess), marking a real turning point for AI. Within a decade, IBM’s chess-playing computer, Deep Blue, had beaten the reigning world chess champion, Garry Kasparov. The machine was capable of evaluating up to 200 million positions a second. But could it think strategically? The answer was such a resounding YES that Kasparov believed a human being had to be behind the controls. He also thought that the human was, of course, a man. In fact, Kasparov famously said, ‘Women, by their nature, are not exceptional chess players: they are not great fighters.’ Little did he know just how many women had already contributed to the development of the machine that had just shown him up.
By 1985, a billion dollars had been spent on AI. New, faster computers convinced the American and British governments to start funding AI research again. But with the widespread introduction of desktop computers, it soon became apparent that expert systems were too costly to maintain: compared with desktops, they were difficult to update and they still couldn’t learn. In 1987, the Defense Advanced Research Projects Agency (DARPA) concluded that AI would not evolve sufficiently and chose instead to invest its funds in projects it decided would yield better results. A second, though this time much shorter, AI Winter began.
Leslie P. Kaelbling (1961– ) & Cynthia Breazeal (1967– )
In the late 1980s and 1990s, while AI had been toppled from its position as the golden child of the tech world, two women were nonetheless quietly making strides in developing it.
Leslie Pack Kaelbling is widely regarded as one of the leaders of AI in the 1980s and 1990s. She worked on the idea of reinforcement learning, which is one of the three key areas of machine learning (the other two are supervised and unsupervised learning – more on this in Chapter Three). This was a departure from the approaches we’ve seen so far: instead of learning from a fixed set of data, a reinforcement-learning system learns by trial and error, with rewards and penalties guiding its behaviour. It was Kaelbling who figured out how to use this to improve robot navigation – going on to win awards and edit leading journals in AI.
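Here’s a minimal sketch in Python of that trial-and-error idea, using tabular Q-learning in a made-up one-dimensional ‘corridor’ world. It’s my own illustration of the principle rather than Kaelbling’s actual robot-navigation work: the agent is rewarded only when it reaches the goal cell, and over many attempts its estimate of the best move in each position improves until it heads straight for the goal.

import random

N_STATES = 5            # corridor cells 0..4; reaching cell 4 earns a reward of 1
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[(state, action)] is the agent's current guess at how good each move is.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 2                                   # start in the middle of the corridor
    while state != N_STATES - 1:
        # Explore occasionally; otherwise act on the current best guess.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the guess using the reward actually received (trial and error).
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy is to step right (+1) from every cell.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})

No one tells the agent which move is correct; it simply discovers, through rewarded and unrewarded attempts, which behaviour pays off – which is why this approach suits robots exploring the real world.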
Meanwhile, Cynthia Breazeal was following a different line of enquiry. She wanted to know if machines could feel. This was called affective computing and Breazeal put this to the test by building a robot named KISMET. It was designed to recognise and simulate emotions and was one of the first ever robots able to demonstrate social and emotional interactions with humans. KISMET can perceive a variety of our emotions, and even gets uncomfortable when people get too close to it. Breazeal founded the JIBO company and continues to work on personalised robots today. Although Breazeal comes from a computer and engineering background, she also works in media, education and psychology.
Meanwhile, Going Online
You might be wondering at what point the Internet first appeared in the narrative of AI. The history of the Internet is a book in and of itself (see my notes on Claire L. Evans’ Broad Band in Chapter Eight), but the connections between the Internet and AI are difficult to untangle since the two are now so intertwined. Think about how often people building algorithms have to contact other people to problem-solve, innovate and collaborate. But one way to look at this link is to consider how the Internet gave rise to the collection of big data.
At its most simple, big data is the term given to datasets that are so big and complex that trends or patterns cannot be found in them without computational methods. A good example of this is how your Facebook network includes not only your friends, but also your friends’ friends. Big data is critical for AI because AI needs huge amounts of data to learn, and these datasets are often made available through the Web. It’s not that the Internet is big data per se; rather, the Internet is what has made big data so ubiquitous. When you’re browsing online, accepting cookies usually means you’re allowing a webpage to collect your data. So, although you are just one data point, the points from everyone who visits that page are pooled together into a dataset so large it can only be handled by computers.
This can all be traced back to a moment in 1969 when two devices first connected to each other over something called ARPANET. This is the earliest ancestor of what we now know as the Internet.
Before we go any further, I think it’s important to make a quick distinction between what is meant by the Web and the Internet. It’s pretty common to use them interchangeably, but technically the Internet is a collection of global networks while the Web is one way to access the information stored there. The Web can be traced back to 1991, but there are earlier examples of the same kind of systems from the eighties.
Jake Feinler (1931– ) & Radia Perlman (1951– )
Computer scientists have been interested in the idea of computers communicating with one another since the 1950s. This led to the development of networks – LAN and WAN, names you might recognise from connecting your desktop or laptop to Wi-Fi. I find it helpful to think of these networks like the underground roots of trees that spread out and connect to each other, sharing water, sugar and other chemical, hormonal and electrical signals. Computer networks work in the same way, but they share digital information instead. A LAN is a local area network, where computers are linked to each other in a single building or location. A WAN is a wide area network, where multiple LANs connect to each other across a larger geographic space. In the 1980s, both these systems were riddled with issues, but two amazing women set out to conquer them.
Elizabeth Feinler, known as Jake, solved a huge problem with WANs. One of the most famous and successful early WANs was ARPANET, which started out linking several university and military LANs to one another. But in these early days, WANs were very hard to navigate – imagine having no search engines and no way of knowing how to find the person or organisation you need. Along with her team, Jake solved this and advanced the development of the Internet. She went on to be part of the naming authority for the Internet, devising and developing top-level domain names, including .com, .us and .gov.
Next up is Radia Perlman, who devised something called the Spanning Tree Protocol. It allowed LANs to be joined into bigger, more complicated networks without data looping endlessly back along redundant paths, which would ultimately have broken the connection. This protocol still forms the basis of many networks today.
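To illustrate the idea, here’s a minimal sketch in Python that computes a loop-free set of links for a small, made-up network. The real Spanning Tree Protocol works in a distributed way – the switches elect a root and exchange messages to decide which links to block – but the end result is the same kind of tree this simplified, centralised version produces.

from collections import deque

# An invented topology with two loops: A-B-C-A and B-C-D-B.
links = [("A", "B"), ("B", "C"), ("C", "A"),
         ("C", "D"), ("D", "B")]

def spanning_tree(links, root):
    neighbours = {}
    for u, v in links:
        neighbours.setdefault(u, []).append(v)
        neighbours.setdefault(v, []).append(u)
    active, visited, queue = set(), {root}, deque([root])
    while queue:
        u = queue.popleft()
        for v in sorted(neighbours[u]):
            if v not in visited:               # the first path found to v wins
                visited.add(v)
                active.add(frozenset((u, v)))  # keep this link in the tree
                queue.append(v)
    blocked = [l for l in links if frozenset(l) not in active]
    return active, blocked

active, blocked = spanning_tree(links, root="A")
print("forwarding links:", sorted(tuple(sorted(l)) for l in active))
print("blocked links:   ", blocked)

Blocking the redundant links means a message can reach every part of the network by exactly one path, so it can never circle back on itself and flood the connection.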
Margaret Boden (1936– )
Margaret Boden is a Research Professor of Cognitive Science at the University of Sussex and an expert in the intersection of AI, psychology and philosophy.
She published her first article about AI in 1972, and has since led the way when it comes to researching the intersection of computer science and the human mind. Stressing the importance of celebrating creativity, she continues to be a powerful voice advocating for the necessity of interdisciplinary research in AI, which means she is essentially a trailblazer for how I think about interacting with, and getting involved in, AI. She also thinks through the possibilities of machine creativity, with a special focus on what AI might mean for art in the twenty-first century.
Wendy Hall (1952– )
Dame Wendy Hall is one of the people who worked on a pre-Web hypermedia system. A hypermedia system links together different types of information like text or images in a non-linear way. Wendy led a team of computer scientists who collaborated on a hypermedia system called Microcosm at the University of Southampton. Many of the principles used in this system informed the Web as we know it today.
In 1987, Wendy and her colleague Gillian Lovegrove wrote the paper ‘Where have all the girls gone?’ to address the deficit of women working with computers. She continues to work at the forefront of research in multimedia and hypermedia. She is a Professor of Computer Science, Associate Vice President of International Engagement and Executive Director of the Web Science Institute at the University of Southampton, and Managing Director of the Web Science Trust, to name but a few of the roles that she currently undertakes. In these roles, she works tirelessly to ensure that the Web and the Internet are governed responsibly.
As the Web has become more intertwined with AI technologies, Wendy has brought her expertise to AI, and this is how we first met. Wendy was writing a review for the British government on AI. It was an exciting time, the whole community was abuzz, and there was an obvious need to get more people with artificial intelligence skills into the workforce. The many men and women consulted on ways to achieve this suggested that graduates who had completed undergraduate STEM degrees should be the ones encouraged to do a Master’s conversion course in AI. When I first heard this idea, I was uncomfortable. I’d not taken a STEM degree, and I knew that on average fewer than 25 per cent of STEM degrees nationwide were taken by women. I shared my concern with Wendy that this project might reduce the number of women who’d be able to take up this opportunity. Despite her own maths degree and computer science prowess, Wendy had in fact already started campaigning to open up this conversion course to people from other subjects. Since then, through the AI Sector Deal and the Office for AI she’s been able to get funding for non-STEM conversion, meaning that graduates from disciplines such as history, English and philosophy can access the course – richly diversifying the field of AI. There are more details about how you can apply in Chapter Eight.