AI is automating everyday tasks at an unprecedented pace. As we acquire and learn to use ever more digital devices, we’re also deepening our dependence on AI. Few people are oblivious to how firmly this technology has slotted into our lives, but most of us don’t appreciate that its rules are constantly shifting and evolving. We’re now living in an era where machines are taught to learn and adapt without human intervention, and this has some serious ramifications that we’ll explore over the course of the next few chapters.
The many functions of AI will continue to touch every aspect of our daily routines. As well as helping us avoid traffic or introducing us to new music, AI systems will detect disease, reduce energy consumption, decide which of us is approved for a loan, power autonomous, self-driving vehicles and both inform and control our news and advertising feeds. AI has the potential to unlock a future where humans live longer, healthier, happier lives. It could change the nature of much work: taking over many repetitive, boring tasks and freeing humans up to spend time on creative, fulfilling projects. This could dramatically change common conceptions of how much of our lives we need to spend on work, allowing us all to spend more time with each other and on our relationships, or on whatever it is that makes life worth living for you.
But there is a much scarier alternative. The flipside to this technology is that it could make life a lot worse for a lot of us, and especially the most vulnerable: it could widen the poverty gap, further increase inequality, reduce diversity and re-entrench many of the structures that keep some people down, no matter what they do. Anyone who tells you otherwise is not telling you the whole story.
The reality of AI, in its current state, is that it adopts the truths of its creators: humans. AI-driven machines learn from data that humans feed into their systems, meaning that they can also learn the social norms that many of us are desperately trying to escape or eradicate. For example, what if they are programmed to accept existing pay gaps, or the idea that a woman’s place should always be in the home? If misogyny and unconscious or conscious bias are codified into the next wave of technology, we are all exposed to a less fair, less equal future. It gives me the shivers just thinking about it. So, although AI has enormous potential to improve our lives, there is a risk that rather than empowering women it may compound existing stereotypes. And, as we’re going to see below, the evidence we have suggests that the risk is already becoming the reality. We have to protect our rights and fight against oppressive gender constructs becoming codified, because if they are, then whatever progress women have made over the past century could very easily be wiped out. We need to do so much more than become merely competent responders to AI.
The first step here is to accept that to thrive we must learn to live and work alongside machines. I don’t want women to be more at risk of losing their jobs to a machine because there wasn’t enough material out there to prepare them for using it. But don’t worry: as you’ll read time and again from some of the women interviewed here, this does not mean we have to become coders, statisticians, designers or engineers. Of course, we certainly can do all these things; in fact, many of the pioneering early coders were women.
No one will be unaffected by AI. Are you at school deciding what work experience to do? Are you in university and making plans to enter the world of work? Are you in an office or in retail? Maybe you work in a hospital as a nurse or a porter or a doctor. Or are you a banker, an accountant, an advertising executive? Are you a part of the gig economy? AI technology, as you perhaps know all too well, is already permeating your workplace. The challenge now is to work out how to make sure it helps you rather than undermines you.
It’s widely publicised that only 13 per cent of engineers and data scientists working in the West are women, and so I hope some of you go on to change this shocking statistic. But what this book is really about is inspiring you to get interested in tech, feel comfortable working alongside it, leverage it, use it, be enabled by it, know how to be heard and have a stake in how technology is built and deployed. It’s a pragmatic guide for the uninitiated.
There are many ways you can be instrumental in building AI systems. Just think about how many different steps there are when building a product. Companies are going to need people who love languages to give AI a voice; historians and philosophers to give AI context; designers and artists to give AI personality and an interface; and product managers to ensure the AI is fit for purpose. Non-technical roles will become crucial as we try to build machines that think and act like humans. AI should be democratised, not professionalised. All this power should not lie in the hands of the few, nor should it stay only in the hands of the men who currently make up the dominant share of the tech workforce.
OK, sounds good, but what exactly is AI?
In order to answer this question, I called on Karen Hao. Karen is a journalist, storyteller and engineer, and for the past few years she’s been the woman who has explained complicated concepts to me through the pages of MIT Technology Review. She has made what hundreds of other people have tried to explain to me before just click into place. She’s always been good at finding novel ways of communicating: as well as being an AI expert, she’s passionate about the environment and, as an undergraduate, launched a fashion show whose outfits were made entirely of rubbish! She’s a woman after my own heart in more than one way.
What are the fundamentals of AI that everyone should understand – and why does it often seem confusing?
Karen Hao
Well, in the broadest sense, AI refers to a branch of knowledge that strives to recreate human intelligence within machines. In the ideal realisation of this goal, such machines would be able to learn, reason and act for themselves, mimicking the ways we as humans, or a pet dog, might do so. They would take in information from the rapidly evolving world, process it and then figure out how to respond based on prior experience – and they would, at least in theory, be able to do so much faster and on a far greater scale than any individual human could.
This is the dream of artificial intelligence: to make these super capable machines that can put their ‘minds’ together with ours to solve some of the world’s most complex problems: climate change, poverty, hunger – things we haven’t been able to wrap our heads around on our own. I like to think of this dream as Janet from NBC’s comedy series The Good Place. She’s a kick-ass, fully autonomous and highly intelligent agent that helps her human counterparts be better versions of themselves.
Part of the reason why AI gets confusing is that the term can often feel like it refers to two completely different things. We now know one of them – the vision of the field, which evokes something closer to what we see in science fiction. But as you’ve already intuited, today’s AI systems are nowhere near the clever, autonomous agents I just described. They’re simpler and less capable, able only to perform specific tasks such as adding dog ears to your Snapchat photo, ranking your content on Facebook or recommending new songs on Spotify to match the genre you like.
In the field, these two versions of AI have been given different names. The dream is often called ‘artificial general intelligence’, or AGI, while the reality is sometimes referred to as ‘artificial narrow intelligence’ – though in practice, people usually just call artificial narrow intelligence AI. But the two are often conflated in popular culture, causing people to think that the AI we have today is far more advanced than it really is.
There is another element that complicates the whole thing further. Because AI researchers are constantly pushing the boundaries of the technology, the definition of present-day AI also changes over time. What might have been considered AI thirty years ago is no longer really considered AI today. And what we know as AI today has only been the working definition for less than a decade. Certainly, among experts there’s a considerable amount of debate about what constitutes AI and what it will be capable of in the future. In fact, one of the fiercest debates is about whether AGI is even possible!
When you add all that up, it goes without saying that the notion of AI is constantly being tweaked, debated, probed and refined. Don’t let that overwhelm you. Instead, take it as an invitation: what AI is and where it’s going is ultimately shaped by people. That means you can have a role in influencing the technology, which I hope you find as hugely exciting as I do.
So what do you think the most common misconceptions are about AI?
One of the biggest things that confused me when I first began covering the field is the difference between robots and AI. The two are clearly interrelated, yet they are not the same thing. As it stands now, AI specifically deals with software. Robotics, by contrast, deals with hardware. Think of it as your brain versus your body. Your body relies on instructions from the brain to move, but your brain also relies on your body to experience and sense its surroundings so it can learn about the world.
Of course, as mentioned in the introduction, AI doesn’t always come physically embodied in a robot. More often than not, it exists as a hidden algorithm on your favourite websites or apps. Similarly, robots aren’t always powered by AI. Instead, their actions could be dictated by software that executes a series of hard-coded rules. In those instances, the robots can neither learn nor adapt to unexpected circumstances, so they are usually confined to performing rote tasks in unchanging environments such as a manufacturing floor.
When AI and robots combine, that’s when the interesting stuff happens. It’s no longer only about efficiency, but about breaking new ground as well. Self-driving cars, for example, are a product of this union.
What happens if we rely too much on AI?
The most popular products we use are developed and implemented by tech corporations with profit motives. That’s why social media and streaming platforms can be so addictive: the machine-learning algorithms are all pushing us to stay on these platforms for just a little longer. That’s also why ads can sometimes turn predatory: the algorithms get so good at knowing our weaknesses that we end up spending more money than we should. And as algorithms learn more and more about you, the implications for privacy become a serious concern.
It’s important, therefore, to stop every once in a while and think through how algorithms are affecting your life. What do you like about how they’ve changed it? What do you not? Ultimately, you are in control of the way you interact with them. You could choose to opt out where platforms allow it: Facebook, for example, gives users the option to turn off automatic photo-tagging. Or you could learn to hack them so that they do different things: if YouTube is showing you too many similar videos, purposely watch some radically different ones to reteach the algorithm about what you want.
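To make that idea concrete, here is a toy sketch in Python – entirely my own illustration, far simpler than any real platform’s system – of why deliberately watching different videos changes what a simple genre-based recommender suggests.

```python
from collections import Counter

def recommend(watch_history, catalogue, k=1):
    """Toy recommender: score each catalogue video by how often its
    genre appears in the watch history, then suggest the top k."""
    genre_counts = Counter(genre for _, genre in watch_history)
    ranked = sorted(catalogue, key=lambda video: genre_counts[video[1]], reverse=True)
    return [title for title, _ in ranked[:k]]

catalogue = [("video_d", "makeup"), ("video_e", "science"), ("video_f", "cooking")]
history = [("video_a", "makeup"), ("video_b", "makeup"), ("video_c", "makeup")]
print(recommend(history, catalogue))  # ['video_d'] -- yet more makeup

# 'Reteach' it by deliberately watching something different:
history += [("video_w", "science"), ("video_x", "science"),
            ("video_y", "science"), ("video_z", "science")]
print(recommend(history, catalogue))  # ['video_e'] -- science now tops the list
```

Real recommendation systems are vastly more sophisticated, but the principle is the same: the suggestions follow the history you feed them.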
What’s your one piece of advice for the reader?
Command algorithms; don’t let them command you! And you’re already one step ahead of the game because you’re reading this book.
As Karen has explained, AI has been around for a lot longer than it first appears. The next chapter takes a closer look at the history of AI. It might not be what you’re expecting …
2
A POTTED HISTORY OF AI
One of the many fascinating aspects of AI is that although it still seems super futuristic, its roots lie in classical thinking. Since we humans have always been partial to making things easier for ourselves, we’ve long been pretty adept at designing machines to increase efficiency. Combine that disposition with some killer mathematics and it’s little wonder that AI has been a twinkle in our eye for quite some time.
The quest for artificial intelligence as we know it began over seventy years ago with the idea that computers would one day be able to think as humans do. As so much of the history of AI stems from early computers, we’ll start off by looking at where it all began. You’ll soon notice that early AI was influenced by many different disciplines – yes, mathematics and engineering play their part, but so do biology, game theory, psychology and philosophy. AI is not just the domain of computer scientists. It has always been, and will continue to be, an arena that explores what it means to be human and how we live our lives.
AI’s interdisciplinary nature means that there are many ways to trace its history. This chapter is one way and is not exhaustive. Instead, I wanted to focus on some of the major moments and shine a light on some of the women who’ve been so chronically overlooked along the way. There is a long tradition of women translating or communicating science, partly because they were excluded in some way from the mainstream of conducting it. In fact, I’ve searched far and wide for all the women who played a role in the development of AI, but many were simply never recorded in the archives. This is why I’ve called this chapter a potted history: because I’ve chosen the accounts that most inspired me. Consider this just one way of telling a very complicated story. Patricia Fara, a historian of science at the University of Cambridge, summarised this perfectly when she said to me: ‘Broadening what counts as science’s history entails recognising and crediting women’s involvement.’
I’ve ordered this chapter as a timeline of individuals, but it’s really important to note that the history of AI, like any history of innovation, is never that straightforward. The history of computing in particular is often documented as a series of male geniuses appearing one after another, which simply isn’t the case. So often when history is told as ‘a series of geniuses’, women, particularly women of colour, are erased from the narrative. Among the many reasons for this is the big one: traditionally, the people writing history have been the same people who hold the power – white men. When new inventions or ideas shake up our way of thinking or doing, it’s always the result of many people working together, forming communities and pushing things forward – not just one individual, inspired though they may be. So please remember as you read this that it’s networks of people, not individuals on their own, who’ve brought us to where we are today.
Pre-Twentieth Century
The Antikythera Mechanism (205 BCE)
Mechanical machines have been performing complex functions for much longer than you might think. Salvaged from the depths of the Aegean Sea, the Antikythera mechanism has been described as the world’s oldest computer. This strange thing has thirty-odd gearwheels and countless astronomical inscriptions, and it dates to around 205 BCE – over 1,200 years before mechanical clocks first appeared in Europe. International experts have concluded that it must have been part of an intricately engineered machine for crunching the mind-boggling mathematics needed to model the positions of the sun and moon. The fact that it not only tracked but also seems to have forecast solar and lunar cycles is of course exciting for astronomy enthusiasts, but it also makes it part of the long history of artificial intelligence. Experts still don’t know its exact purpose – it could have been a teaching tool, or used for navigation – but I like to think that it was an early version of the Clue app, tracking women’s menstrual cycles. The likelihood of this is, sadly, zilch. As you’ll see, technology developed in patriarchal societies rarely takes the needs of women into account.
The Analytical Engine & Ada Lovelace (1815–1852)
The partnership between Ada Byron, Countess of Lovelace, and Charles Babbage in the 1830s was perhaps the true birth of computer science.
As a child, Ada Lovelace was taught mathematics by her mother, Lady Byron, who hoped to keep her well away from the poetry (and temperament) of her father, Lord Byron. Babbage was the inventor of a machine he called ‘the Analytical Engine’, an early computer designed to carry out mathematical functions. Some years after Lovelace first met Babbage, she translated into English an account of his Analytical Engine written by the future prime minister of Italy. She added copious notes of her own to the paper and proposed an algorithm for the Analytical Engine to calculate a sequence of Bernoulli numbers that blew Babbage away. This algorithm is arguably the first true piece of computer code. Though she was writing a century before the dawn of the modern computer, Lovelace went on to imagine a machine that could be programmed to follow instructions. And as well as calculating, it would also create. She anticipated a machine that ‘weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves’.
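For readers curious about the mathematics Lovelace was wrestling with, here is a minimal sketch in modern Python of the same idea: generating Bernoulli numbers from their classic recurrence. To be clear, this is my own illustration, not a reconstruction of Lovelace’s actual program, which she expressed as a table of operations for the Engine.

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """Return Bernoulli numbers B_0..B_n as exact fractions, using the
    recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0 for every m >= 1."""
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        total = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-total / (m + 1))  # solve the recurrence for B_m
    return B

for m, b in enumerate(bernoulli_numbers(8)):
    print(f"B_{m} = {b}")  # e.g. B_2 = 1/6, B_8 = -1/30
```

A dozen lines today; for Lovelace, the same calculation had to be unrolled by hand into individual Engine operations, which is part of what makes her notes so remarkable.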
Although the computer she wrote about was never built, it was Lovelace’s imagination, application and appreciation of maths and mechanics that earned her the title of first computer programmer and the epithet ‘the Enchantress of Numbers’. It’s exactly this kind of imagination that enables researchers, academics and entrepreneurs worldwide to push boundaries and make new discoveries with artificially intelligent machines today. I love knowing that it was Ada Lovelace, a woman well before her time, who sparked this line of enquiry.
Early Twentieth Century
Alan Turing (1912–1954) & Joan Clarke (1917–1996)
Alan Turing was an English mathematician and computer scientist, now considered the father of theoretical computer science and artificial intelligence. Turing studied mathematics at Cambridge, and it was there that he first started thinking about the possibility of an intelligent computer. In the 1930s, Turing went to the USA and worked alongside John von Neumann – remember his name, and that of his wife Klara, as we’ll come back to them shortly! – who was also fascinated by the idea of possible computer intelligence. Shortly before the Second World War broke out, Turing returned to the UK with everything he’d learned and took up a position with the Government Code and Cypher School. You might recognise his name from the 2014 film The Imitation Game, a dramatisation of how during the Second World War he cracked the Enigma code, which was used by the Germans to send commercial, diplomatic and military communications, and so helped the Allies to defeat the Nazis. But he didn’t work alone. At their base, Bletchley Park, Joan Clarke rose from clerical work to deputy head of operations in Hut 8, becoming its longest-serving team member. For Clarke to be promoted and receive her first pay rise, her job title had to be changed to ‘linguist’, as the Civil Service had no protocols in place for a senior female cryptanalyst. Clarke was tasked with breaking naval ciphers in real time, work that resulted in almost immediate military action. The secrecy surrounding Bletchley Park means that many of her achievements remain unknown today.
What we do know is that the Bletchley Park code-breaking operation was made up of nearly 10,000 people, about 75 per cent of whom were women. Very few of those women have been formally recognised as cryptanalysts working at the same level as their male peers. Thankfully, in more recent times, Mavis Lever, Margaret Rock and Ruth Briggs have been named among them. I like to think I would have been friends with Sarah Baring, who combined her work for Vogue with her duties as a linguist in Hut 4 at Bletchley Park.
As Professor Sue Black, the woman who campaigned to keep Bletchley Park open, explains, ‘the lifeblood of some of Britain’s bravest and most inspiring citizens pulsed through Bletchley’s veins at the most crucial turning point of the war. Here, thousands of men and women contributed to the effort that saved our nation and inspired future generations with their work in the fields of computing and technology.’ What’s great is that you can now go and visit these huts and put yourself in those women’s shoes.
ENIAC, John & Klara von Neumann (1903–1957 / 1911–1963) & Jean Bartik (1924–2011)
Around the same time in America, husband-and-wife team John and Klara von Neumann were using their respective skills to build early computers to help the war effort. Despite having only high-school maths, Klara secured a wartime job coding with Princeton’s Office of Population Research. While her husband went to work on the Manhattan Project (wartime research on nuclear weapons), Klara became head of the Statistical Computing Group at Princeton, a post she held until the end of the war. In peacetime, she continued to program many of the earliest computers, including the ENIAC, and trained others to do the same.
Meanwhile, ENIAC, the first general-purpose electronic digital computer, was being built in America for the war effort by John Mauchly and J. Presper Eckert. This innovative machine was, again, powered and programmed by a team of women. The original ENIAC wasn’t a computer like the ones we have today, which store program instructions in electronic memory. Instead, think of it as a machine with a series of huge plugboards rigged up to thousands of wires that needed to be manually pulled in and out. Needless to say, programming it was laborious.
ENIAC was set up to calculate ballistic trajectories for weapons, but it was the hard work and pluck of the original programmers – Jean Bartik, Betty Holberton, Kathleen Antonelli, Marlyn Meltzer, Ruth Teitelbaum and Frances Spence – that got this beast going. What’s more, all these women taught themselves how to operate the vast, room-filling machine, as the engineers had no time for programming manuals or classes. They wrote the program by studying ENIAC’s logical and electrical block diagrams, created their own flowcharts and programming sheets, and then navigated the hundreds of wires and thousands of switches to set the program up on the ENIAC itself. These women were drawing on their mathematical ability, but I think their extra skill was the determination to get these machines to work for them.
Jean Bartik did once say that the ENIAC was a ‘son of a bitch’ to program, but these women were devoted to their work. On Valentine’s Day in 1946, Bartik and Betty Holberton were up late working on the machine ahead of a big press conference the next day. Reflecting on that evening, Bartik wrote: ‘Most people consider Valentine’s Day a romantic day, but we never gave a thought that evening of romantic dinners. What we were thinking of was an all-important demonstration we were to run the next day for the world.’ The day came, but no one was told about the work of these women. The lab introduced only the ENIAC’s hardware inventors – all men – to the press. Photos from the event show the women’s faces in the background, uncredited. After the big reveal, there was a celebratory dinner with many invited guests. The women were sent home. It wasn’t until 1985, when Kathy Kleiman, an undergraduate programmer at Harvard, went searching for female role models, that the story of the women of ENIAC was rediscovered. They had to wait until they were in their seventies to be recognised as the world’s first computer programmers. And yet we still benefit from technologies that they helped to develop: Klara von Neumann, for example, applied her understanding of ENIAC technology to make huge breakthroughs in weather forecasting.