
AN INTELLIGENT FUTURE? Maximising the opportunities and minimising the risks of artificial intelligence in the UK


ACKNOWLEDGMENTS

Written and researched by: Olly Buston, Robert Hart, and Cath Elliston. With thanks to: Amy Barry, Irakli Beridze, Miles Brundage, Kay Firth-Butterfield, Stephen Cave, Jessica Montgomery, Richard Moyes, Huw Price, Nick Purser, Deok Joo Rhee, Jane Rowe, Murray Shanahan, and Katie Ward.

Design and layout: nickpurserdesign.com / Media and communications: digacommunications.com


Endorsements

“Making the best of AI is one of the most important challenges of our century, a challenge we all face together. Future Advocacy are doing a commendable job of encouraging well-informed debate about these crucial issues, in government and in the public sphere.”[Huw Price, Bertrand Russell Professor of Philosophy, Cambridge University & Academic Director of the Leverhulme Centre for the Future of Intelligence]

“This excellent report helps show us how we can ensure Artificial Intelligence delivers on the needs and wants of real people. AI is a powerful and flexible tool that will increasingly transform businesses, governments, and societies. We need to get this right.”[Kay Firth-Butterfield, Former CO, Lucid AI’s Ethics Advisory Panel & Co-Founder, Consortium for Law and Ethics of Artificial Intelligence and Robotics, University of Texas, Austin]

“Artificial intelligence creates great opportunities for improving diagnosis and treatment. At the same time it brings challenges in areas such as privacy and accountability. This report provides great food for thought on how we can get the balance right.” [Dame Sally Davies, UK Chief Medical Officer]

“The development of AI will have profound global consequences and the UN has a vital role to play in making sure the opportunities are maximised and the risks are minimised. The UNICRI Centre on AI and Robotics seeks to enhance understanding through improved coordination, knowledge collection and dissemination, awareness-raising and outreach activities. Future Advocacy is a valuable contributor to these important global efforts.”[Irakli Beridze, Senior Strategy and Policy Advisor, United Nations Interregional Crime and Justice Research Institute]


EXECUTIVE SUMMARY

The Intelligence Revolution

We are in the early stages of an intelligence revolution. Artificial intelligence (AI) already permeates many aspects of our lives. AI systems trade on the stock market, filter our email spam, recommend things for us to buy, navigate driverless cars, and in some places can determine whether you are paid a visit by the police.1

Although AI is not new, there has been a recent explosion of activity and interest in the field, driven largely by advances in machine learning: computer programs that automatically learn and improve with experience.2

Progress in machine learning has allowed more versatile AI systems to be developed that can perform well at a range of tasks, particularly those that involve sorting data, finding patterns, and making predictions.

Opportunities and Risks

The fast-moving development of AI presents huge economic and social opportunities. Over the coming years AI will drive economic productivity and growth, improve public services, and enable scientific breakthroughs.

But there are also risks. The intelligence revolution will cause great disruption to employment markets. Concerns about privacy and accountability will be amplified as AI makes possible increasingly sophisticated analysis of our personal data. And the ability of AI to replace humans in military decision-making raises profound questions.

The more distant future is hard to predict. Oxford University Professor Nick Bostrom and others have speculated about the catastrophic risks of a ‘super-intelligence’ that humans struggle to control. Bostrom’s doomsday scenario may be extraordinarily unlikely, but because the stakes are so high, even a very small probability of it happening warrants attention.

The UK’s Unique Position

The UK was the crucible of the industrial revolution and is one of the key crucibles of the intelligence revolution. It is home to world-leading AI companies and world-leading academic centres of AI research, and is well placed to reap great economic and social benefits from the development of AI.

The UK also hosts world-leading academic centres focused on the safety of AI.3 This, alongside the UK’s membership of key multilateral policymaking fora such as the G7, G20, NATO, UN, and OECD, means the UK could and should play an important role in shaping and directing global debate and ensuring that the opportunities of AI are maximised and its risks are minimised.

Engaging the Public and Politicians

As part of our research for this report we commissioned a YouGov poll to assess British public opinion on a range of issues relating to AI. The results of the poll are featured throughout this report.

1. The Chicago police department have used predictive policing to visit those at a high risk of committing an offence to offer them opportunities to reduce this risk, such as drug and alcohol rehabilitation or counseling. See Saunders, J., Hunt, P., & Hollywood, J. S. (2016). Predictions put into practice: a quasi-experimental evaluation of Chicago’s predictive policing pilot. Journal of Experimental Criminology, 12(3), 347-371 and Stroud, M. (2016, 19 August) Chicago’s predictive policing tool just failed a major test. The Verge (retrieved from http://theverge.com, accessed on 11 October, 2016). Areas of the UK, such as Kent, are beginning to use predictive policing. E.g. see O’Donoghue, R. (2016, 5 April) Is Kent’s Predictive Policing project the future of crime prevention? KentOnline (retrieved from http://kentonline.co.uk, accessed on 11 October, 2016).

2. Mitchell, T. (1997) Machine Learning. London, UK: McGraw-Hill Education.

3. These include Cambridge’s Centre for the Study of Existential Risk and Oxford’s Future of Humanity Institute.


YouGov Poll Graph 1

Which ONE, if either, of the following statements BEST describes your view towards Artificial Intelligence (AI)?

(Women / Men)

AI is more of an opportunity for humanity than a risk: 16% / 26%
AI is more of a risk to humanity than an opportunity: 13% / 15%
Neither of these: 29% / 30%
Don’t know: 43% / 28%

– Significantly fewer women think that AI is more of an opportunity for humanity than a risk.

4. Some commendable work has been undertaken in this area, notable examples including the Royal Society and Nesta, and these will serve as useful starting points for wider public discourse.

5. There is some promising work within the Civil Service in this regard, notably the Government Office for Science’s forthcoming report on AI and governance; and published ethical guidelines on the use of data science tools for government analysts.

6. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

YouGov Poll Graph 2

In general, do you think the UK Government should pay more or less attention to the potential opportunities and risks of Artificial Intelligence, or the same as it does currently?

The UK Government should pay more attention: 42%
The UK Government should pay less attention: 8%
The UK Government should pay the same amount of attention as it does currently: 26%
Don’t know: 24%

– British people think the government should pay more attention to the opportunities and risks of AI.

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2,070 adults. Fieldwork was undertaken between 10th and 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).

We need to have a much deeper and more informed public debate about AI in order to build the trust, understanding, and acceptance that are vital to realise the benefits of this technology and to ensure that AI is developed in ways that fit human wants and needs.4

We also need to have a much deeper and more informed political debate about AI.5 The words ‘artificial intelligence’ have been said only 32 times in the House of Commons since electronic records began, compared to 923 times for ‘beer’ and 564 for ‘tea’. Hopefully the recently published report of the Science and Technology Select Committee into Robotics and AI6 will provide much-needed stimulus.

Our recommendations to the UK Government are summarised below. The Government is not the only important actor in this space, but it does have a vital role to play, alongside industry, AI researchers, the media, and the public.

We welcome discussion as well as critique of our recommendations with the intention that this report will help stretch political horizons and shape an increasingly informed public debate on this dynamic and important subject.


Summary of policy recommendations

The UK Government should:

1. Make the AI opportunity a central pillar of the Prime Minister’s proposed industrial strategy and of the trade deals that the UK must negotiate post-Brexit.

2. Commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender, and then implement a smart strategy to address future job losses through retraining, job creation, financial support, and psychological support.

3. Draft a White Paper on adapting the education system to maximise the opportunities and minimise the risks created by AI.

4. Agree a ‘new deal on data’ between citizens, businesses, and government with policies on privacy, consent, transparency, and accountability through a nation-wide debate led by a respected and impartial public figure.

5. Promote transparency and accountability in AI decision-making by supporting research that facilitates an opening of the ‘black box’ of intelligent algorithms and supporting open data initiatives.

6. Establish systems of liability, accountability, justification, and redress for decisions made on the basis of AI. This would promote fairness and justice, and could encourage companies to invest in more transparent AI systems.

7. Support a ban on Lethal Autonomous Weapons Systems (LAWS) and work with international partners to develop a plan for the enforcement of the ban and for the prevention of the proliferation of LAWS.

8. Give appropriate attention to long-term issues of AI safety: support research into AI safety and horizon scanning; support the institutionalisation of safe AI research conduct in all sectors, including the development of a code of ethics; develop standards and guidelines for whistle-blowers; and ensure students and researchers are trained in the ethical implications of their work.

9. Facilitate a House of Commons debate on maximising the opportunities and minimising the risks of AI in the UK.

10. Establish a Standing Commission on AI to examine the social, ethical, and legal implications of recent and potential developments in AI.

11. Develop mechanisms to enable fast transfers of information and understanding between researchers, industry, and government to facilitate swift and accurate policy-making based on fact.

12. Launch a prize for the application of AI to tackling today’s major social challenges and delivering public goods.



INTRODUCTION

“The dream is finally arriving. This is what it was all leading up to…We’ve made more progress in the last five years than at any time in history.”7 Bill Gates

What is AI?

Defining ‘artificial intelligence’ is a complicated task, mainly because the concept of intelligence itself is hard to pin down. In this paper we use an inclusive definition of intelligence as ‘problem solving’ and consider an ‘intelligent system’ as one that takes the best possible action in a particular situation.8

As early as 1997, IBM’s Deep Blue beat world champion Garry Kasparov at chess, a game associated with high intelligence. That was impressive. But Deep Blue could not play Scrabble. Deep Blue was ‘narrow’ AI: very good at one particular task, but unable to switch between tasks.

In recent years, significant progress in ‘machine learning’ has meant that AI systems are becoming more flexible. ‘Machine learning’ refers to AI systems that improve their performance at a task over time, adapting their own rules and features based on their own output and experience.9 A striking example, built on an approach known as ‘deep learning’, is the way in which Google DeepMind’s AI system became exceptionally good at a wide range of Atari computer games. The system was instructed to maximise its score on various games, and the only input it received was the score and the video game pixels.
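The trial-and-error loop described above can be sketched in miniature. The following is an illustrative toy, not DeepMind’s actual system: a simple ‘epsilon-greedy’ learner that is told nothing about its task except a score, and improves purely from experience (the two actions and their reward probabilities of 0.3 and 0.7 are invented for the example).

```python
import random

# Illustrative sketch of learning from experience alone: the agent chooses
# between two actions, observes only a numeric reward ("the score"), and
# updates its own value estimates. No task-specific rules are hard-coded.

def train(n_rounds=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    payout = [0.3, 0.7]      # hidden reward probabilities (unknown to the agent)
    estimates = [0.0, 0.0]   # the agent's learned value of each action
    counts = [0, 0]
    for _ in range(n_rounds):
        # Mostly exploit the best-looking action; occasionally explore.
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < payout[action] else 0.0
        counts[action] += 1
        # Incremental average: each estimate drifts toward observed rewards.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

estimates = train()
# After training, the agent values action 1 (the better one) more highly.
```

The same principle, scaled up with deep neural networks that estimate values from raw pixels rather than from a two-entry table, is the essence of the learning approach used in the Atari work.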

Flexibility and adaptability are what make current (and potential future) AI such a powerful tool, along with its ability to find patterns in, provide useful insights about, and make predictions from, vast datasets.

The State of Play Today

We are at the start of an intelligence revolution that could herald even greater economic and social change than the industrial revolution, over a shorter timeframe. As the graphic below shows, AI is already being used to perform a wide range of tasks.

7. Prigg, M. (2016, 2 June) Bill Gates claims ‘AI dream is finally arriving’ - and says machines will outsmart humans in some areas within a decade. Daily Mail (retrieved from http://dailymail.co.uk, accessed 12 October, 2016).

8. Russell, S. J., and Norvig, P. (1995) Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice Hall.

9. Mitchell, T. (1997) Machine Learning. London, UK: McGraw-Hill Education.

Garry Kasparov, former World Chess Champion, and considered by many to be the best player of all time, lost to IBM’s Deep Blue in 1997.


10. Gibbs, S. (2016, 28 June) Chatbot lawyer overturns 160,000 parking tickets in London and New York The Guardian (retrieved from https://theguardian.com, accessed 11 October, 2016).

11. Guerrini, F. (2016, 11 May) From Lake Garda to the Thames: Why boat drones are taking to the water. ZDNet (retrieved from http://zdnet.com, accessed 11 October, 2016).

12. E.g. see Simon, M. (2016, 25 May) The Future of Humanity’s Food Supply Is in the Hands of AI. Wired (retrieved from https://wired.com, accessed 11 October, 2016).

13. Snow, J. (2016, 12 June) Rangers Use Artificial Intelligence to Fight Poachers. National Geographic (retrieved from http://news.nationalgeographic.com, accessed 11 October, 2016).

14. WashPostPR (2016, 5 August) The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage. The Washington Post (retrieved from https://washingtonpost.com, accessed 11 October, 2016).

15. Moorfields Eye Hospital (retrieved from http://www.moorfields.nhs.uk/news/moorfields-announces-research-partnership). Executive Office of the President [2016, October] Preparing for the Future of Artificial Intelligence.

Examples of AI use today

Virtual assistants: Siri, Cortana, and Google Now are all driven by AI.

Music and film recommendations: Spotify and Netflix suggest new songs and shows based on what we have previously listened to or watched.

Water management: AI is being used to coordinate drones that test the water quality of a number of European rivers, including the Thames.11

Search engines: Google improves its results using intelligent algorithms.

Legal advice: The chatbot DoNotPay has successfully contested 160,000 parking tickets in London and New York.10

Agriculture: AI is being used to diagnose problems in crop growth; in smart tractors that can selectively spray weeds with herbicide; and in satellite imaging to identify areas where farmers will require more support.12

Health: AI is being used to interpret eye scans; improve treatment of severe combat wounds; and reduce hospital-acquired infections.15

Purchase prediction: Amazon suggests products we may like based on our purchase or search history.

Creating art: AI has been used to compose music, write poetry, and produce paintings.

Transport: AI underpins a number of features in SatNav systems; is used to ease congestion in cities; and is behind recent advances in driverless cars.

Wildlife protection: AI has been successfully deployed to inform rangers’ patrol routes in efforts to combat poaching in Uganda and Malaysia.13

Journalism: AI is being used to draft short articles and reports. The Washington Post deployed AI in its coverage of the Olympics.14


2016 in particular has seen a number of break-throughs. Earlier this year, Google DeepMind’s AlphaGo AI system beat Lee Sedol, the world’s leading player of the ferociously complicated game of ‘Go’, which originated in China.

DeepMind also announced a new partnership with Moorfields Eye Hospital, extending existing partnerships with the NHS.16 And in September, Uber began trialling driverless cars with members of the public in Pittsburgh thanks to AI’s ability to navigate complicated urban environments.17

IBM CEO Ginni Rometty has said that her organisation is betting the company on AI. And according to Google CEO Sundar Pichai, the tech giant is “thoughtfully applying it across all our products, be it search, ads, YouTube, or Play.”18

In the future, increasingly powerful and flexible AI will be deployed in almost every area imaginable. Ahead of us lies enormous potential. AI could turbo-charge productivity, empower citizens, and deliver cheap and safe transport, with innumerable benefits to businesses, society, and individuals. We have the opportunity to develop a fundamentally better society in which AI is used to help solve some of our most pressing problems, including disease and climate change. But these technological advances do not come without risk.

The Structure of this Report

This report is made up of four main sections:

• Section 1 explores the impact of AI on employment.

• Section 2 looks at the interaction of AI with ‘big data’ and how it will amplify current challenges around privacy, fairness, and accountability.

• Section 3 focuses on the military use of AI with an emphasis on autonomy.

• Section 4 looks at the more distant future, including the risk that human beings might lose control of AI that possesses greater-than-human intelligence.

Each section makes concrete recommendations to the UK Government about how it can help maximise the opportunities and minimise the risks of AI.

16. Shead, S. (2016, July 10) ‘Google DeepMind: How, why, and where it’s working with the NHS’ Business Insider UK (Retrieved from http://uk.businessinsider.com, accessed 6 October, 2016).

17. Mui, C. (2016, August 22) ‘Uber Is Positioned To Slingshot Ahead Of Google In Driverless Cars’ Forbes (retrieved from http://forbes.com , accessed 6 October, 2016).

18. Executive Office of the President [2016, October] Preparing for the Future of Artificial Intelligence.


SECTION 1: AI AND EMPLOYMENT

“The business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI.”19

Kevin Kelly, Founding Editor, Wired Magazine

Increasing Productivity and Economic Growth

Artificial intelligence is already enabling a wave of innovation across every sector of the UK economy. It helps businesses use resources more efficiently, allows new approaches to old problems, and enables entirely new business models to be developed, often built around AI’s powerful ability to interrogate large data sets.

The Pessimistic View of AI and Employment

History offers many examples of workers being replaced by new technologies. In the nineteenth century the mass replacement of skilled textile workers by industrial looms provoked the ‘Luddites’ to break the looms that were putting them out of work. More recently, nearly all of the 60,000 jobs in 9,000 Blockbuster video stores worldwide at the company’s peak in 2004 have now disappeared.20

The pessimistic view is that the intelligence revolution will drive a relentless wave of redundancy, leading to increasing inequality. Various economists have predicted that developments in AI, alongside advances in other technologies, will usher in an age of mass unemployment.21,22

One Oxford study predicts that 35% of UK jobs are at high risk of automation over the next 20 years.23

The Bank of England’s Chief Economist Andy Haldane thinks this could be higher, with 15 million (half of all today’s workers) likely to be replaced.24 President Obama’s Chief Economist Jason Furman has suggested that 83% of jobs making less than $20 per hour in the US will face serious pressure from automation. For middle-income work that pays between $20 and $40 per hour, that number is still as high as 31%.25

Most at risk are jobs with routine intellectual components, cutting across all sectors of the economy. This includes many jobs traditionally viewed as ‘safe’ from automation, such as medicine, law, and journalism. AI is already being used to interpret scans and complex medical data, to sort through legal documents, and to write brief articles and sports reports.

19. Kelly, K. [kevin2kelly]. (2016, 7 April) The business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI. #theinevitable [tweet] (retrieved from https://twitter.com/kevin2kelly/status/718166465216512001, accessed 6 October, 2016).

20. Harress, C. (2013, 5 December) The Sad End Of Blockbuster Video: The Onetime $5 Billion Company Is Being Liquidated As Competition From Online Giants Netflix And Hulu Prove All Too Much For The Iconic Brand. International Business Times (retrieved from http://ibtimes.com, accessed 11 October, 2016).

21. Ford, M. (2015). Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books.

22. See Brynjolfsson, E. (2015, 4 June) Open Letter on the Digital Economy. MIT Technology Review (retrieved from https://technologyreview.com, accessed 6 October, 2016) and Brynjolfsson, E., & McAfee, A. (2011). Race Against the Machine. Lexington, MA: Digital Frontier Press.

23. Frey, C. B., and Osborne, M. A. (2013) The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School, University of Oxford. See also: BBC (2015, 11 September). Will a robot take your job? BBC (retrieved from http://www.bbc.co.uk/news/technology-34066941, accessed 6 October, 2016) and Knowles-Cutler, A., Frey, C. B., and Osborne, M. A. (2014). Agile Town: The Relentless March of Technology and London’s Response. Deloitte (retrieved from http://deloitte.com, accessed 6 October, 2016).

24. McGoogan, C. (2015, 13 November) Bank of England: 15 million British jobs at risk from robots. Wired (retrieved from https://wired.co.uk, accessed 6 October, 2016).

25. AI Now (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term: A summary of the AI Now public symposium, hosted by the White House and New York University’s Information Law Institute, July 7th, 2016. AI Now (retrieved from https://artificialintelligencenow.com, accessed 10 October, 2016).


The Modern Transport Bill, announced in this year’s Queen’s Speech, will aim to “put the UK at the forefront of autonomous and driverless vehicles ownership and use.” One must assume that jobs involving driving are very clearly at risk. Uber now operates in over 20 UK cities,26 employing 25,000 people in London alone.27 The company’s stated goal is to replace its drivers entirely, which would drive down costs and accident numbers, but also jobs.28 The competition to lead the transition to driverless vehicles is fierce. Google, Tesla, Baidu, and nuTonomy29 are among the other big players in this race, which will have a profound impact on taxi drivers, bus drivers, lorry drivers, and the transport sector as a whole.

According to Deloitte, the UK sector with the highest number of jobs at high risk of automation is wholesale and retail: 2,168,000 jobs (59% of the sector’s current workforce) have a high chance of being automated in the next two decades. This is followed by the transport and storage sector, where 1,524,000 jobs (74% of the workforce) are likely to be automated, and human health and social work, where 1,351,000 jobs (28% of the workforce) are at risk.30

The development of AI may also affect employment patterns and inequality between countries. In the past, many developing economies achieved growth by exploiting high numbers of low paid workers. This strategy was successful (in terms of growth generation) for the so-called East Asian ‘Tiger’ economies and later for China and India, and has helped them to catch up with richer economies in terms of GDP. It is a strategy that may not be available in future if more and more routine physical and intellectual tasks are automated. Developing countries may therefore need to pursue leapfrogging strategies aimed at being competitive in those areas of employment that will not be impacted by automation.

26. Telegraph Reporters (2016, 16 May) What is Uber and what should I think about the controversies? The Telegraph (retrieved from https://telegraph.co.uk , accessed 6 October, 2016).

27. Titcomb, J. (2016, 2 June) Majority of Uber drivers in London work part time, study says. The Telegraph (retrieved from https://telegraph.co.uk, accessed 6 October, 2016).

28. Newman, J. (2014, 28 May) Uber CEO Would Replace Drivers With Self-Driving Cars. Time (retrieved from https://time.com, accessed on 6 October, 2016).

29. nuTonomy beat Uber to the punch by a matter of weeks when it launched trials for a self-driving taxi in Singapore and aims for a driverless fleet by 2018. See Vasegar, J. (2016, 29 August) nuTonomy looks to beat Uber at its own game. Financial Times (retrieved from https://ft.com, accessed 11 October, 2016).

30. Quoted in Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

Uber began trialling driverless cars in Pittsburgh in September 2016.


The Optimistic View of AI and Employment

A utopian spin on these predictions of AI-fuelled unemployment is that a new era will arrive in which work itself becomes optional and humans are freed from the financial and temporal constraints of employment to pursue other, more fulfilling activities in a world of increasing abundance.31 Assuming this new age of machine workers eliminates scarcity, our chief economic problem will be that of distribution, not production.32

Other economists paint a different picture, with some believing that fears of unemployment stem from the limitations of our imagination.33 Historically, technological advances have brought new demand, creating jobs in previously unimaginable sectors. Enter the words ‘social media jobs’ into any recruitment website and you will find hundreds of jobs that simply did not exist ten years ago. One estimate predicts that 65% of children in primary school today will work in a job that does not yet exist.34 At the same time, while machines may have a comparative advantage in routine tasks, humans will retain an edge in roles that require creativity, lateral thinking, interpersonal skills, caring, and adaptability for many years to come.35

31. E.g. see Wohlsen, M. (2014, 14 August) When Robots Take All the Work, What’ll Be Left for Us to Do? Wired (retrieved from https://wired.co.uk, accessed 7 October, 2016).

32. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3-30.

33. Mokyr, J., Vickers, C., and Ziebarth, N. L. (2015). The history of technological anxiety and the future of economic growth: Is this time different? The Journal of Economic Perspectives, 29(3), 31-50.

34. McLeod, S., and Fisch, K., “Shift Happens”, https://shifthappens.wikispaces.com.

35. Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. The Journal of Economic Perspectives, 29(3), 3-30.

YouGov Poll Graph 3

How worried, if at all, are you that your job will be replaced by Artificial Intelligence (e.g. robots, machines) in the near future?

– British people tend not to be worried that their jobs will be replaced by Artificial Intelligence, robots, or machines in the near future.

Very worried: 2%
Fairly worried: 6%
Not very worried: 20%
Not at all worried: 29%
Don’t know: 2%
Not applicable – not currently working: 41%
Net: Worried: 8%
Net: Not worried: 49%

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).


In the short term, demand for computational and technical literacy will increase, and the ability to interpret data will be a valued skill. Software developers, coders, and data analysts will all be in high demand, at least in the short term, and we will still need mechanics and technicians to maintain and repair automated systems.

Creative roles will also display resilience, and will likely experience an increase in demand. Entrepreneurs, creative writers, and scientists – disciplines requiring complex and creative thinking – are all in this category. Machines are also currently exceptionally poor at tasks involving social skills, meaning carers can expect to remain in demand. Many people may simply prefer to be cared for by human carers.

There are also those jobs that will resist automation. Complex manual jobs that require a great deal of dexterity will endure. It is therefore unlikely that those employed as hairdressers, chefs, gardeners, dentists, and cleaners will be replaced soon.

Other jobs will be transformed rather than replaced, with employees freed from routine tasks to focus on more cognitively demanding areas. This may be the case with higher-level medical, legal, management, and teaching work.

Jobs of the future

Though many jobs will disappear, the intelligence revolution will open up other avenues of employment, increasing demand in some existing areas or creating new demands entirely.

Many new areas will also emerge, a number of these facilitated by developments in AI. There will be handypersons to assist in setting up smart homes as the Internet of Things connects increasing areas of our lives, and traffic monitors for fleets of driverless vehicles. Advances in 3D printing will spur demand for a new sector of designers and innovators, and as our personal data becomes increasingly revealing and difficult to keep confidential, professions will emerge to help us manage our data.


For many tasks it may be that a combination of man and machine will be the most productive. It is heartening to note that although Deep Blue beat Kasparov in 1997, combinations of humans and machines became the world’s best chess players in the mid-2000s, beating the best humans and the best computers by combining tactical support from the computer with strategic guidance from the human.36 The fact that Lee Sedol appears to have become a better ‘Go’ player since being beaten by Google DeepMind’s AlphaGo earlier this year is perhaps further evidence of the power of such human-machine synergy.37 Job growth in the future is likely to be in roles that complement technology rather than those that can be substituted by it.

A Prudent Approach

Ultimately, we should take care not to enter into complacent optimism or pessimism. Our inability to imagine jobs of the future does not mean we are bound to total unemployment. On the other hand, a historical precedent of new jobs offers little by way of assurance that such trends will continue as machines become as good as humans at many physical and intellectual tasks.

What does seem highly likely is that the rate and scope of change in employment markets will be unprecedented with rapid disruption across almost all sectors. It may be, for example, that most of the more than 1 million jobs38 in UK call centres simply disappear once AI-based call centre systems cross a certain quality threshold. This would be a major employment shock and would be concentrated in certain geographical areas (see map) many of which have already been hit hard by de-industrialisation.

As with all change, the impact will be different on different genders, geographic regions,39 and age groups. Improving our understanding of where the impact is likely to be felt is vital if the government is to develop a smart and proactive policy response.

36. Shanahan, M. (2015). The Technological Singularity. Cambridge, MA: MIT Press. P.191.

37. Hassabis, D. [demishassabis]. (2016, 5 May) ‘Lee Sedol has won every single game he has played since the #AlphaGo match inc. using some new AG-like strategies - truly inspiring to see!’ [tweet] (Retrieved from https://twitter.com/demishassabis/status/728020177992945664, accessed 10 October, 2016).

38. Unison (2013) Unison Calling: a guide to organising in call centres. Unison.

39. A Deloitte study reveals that London will be significantly safer in terms of jobs, with 51% at low risk compared to 40% for the UK as a whole; see Knowles-Cutler, A., Frey, C. B., and Osborne, M. A. (2014). Agile town: the relentless march of technology and London’s response. Deloitte. (retrieved from http://deloitte.com, accessed 6 October, 2016). A Nesta study of creative industries, which are likely to resist automation for longer, also shows an uneven distribution, with “a dominant presence in London and the South-East of England”. See Mateos-Garcia, J. and Bakhshi, H. (2016) The Geography of Creativity in the UK. Nesta.

Employed population working as contact centre staff, by region

[Map: regions shaded by share of the employed population working as contact centre staff, in bands from over 6% down to under 2%. Based on 2014 data from ContactBabel.]


Education and Training

Our education system will need to be radically reformed to maximise the opportunities and minimise the risks that AI presents.

Coding skills and STEM (Science, Technology, Engineering, and Mathematics) subjects are already in high demand, and this will only increase in the short term. While coding is now being taught in schools and efforts are underway to increase the uptake of STEM subjects among under-represented groups, we need to continue and expand these initiatives. The recent Science and Technology Select Committee report rightly criticises the Government for its failure to publish its long-awaited Digital Strategy to address the digital skills crisis.40

In the longer term, there should be greater emphasis in the curriculum on skills that are likely to remain uniquely human for longer, such as creativity, lateral thinking, interpersonal skills, and caring. The schools of the future will need to be more like Montessori schools, with less emphasis on knowledge acquisition and rote learning and more on creative and flexible thought.41

Given the likely rate and scope of change in job markets over the coming years, a focus on self-directed, lifelong learning techniques will be essential to creating a flexible and dynamic workforce.

People should also be directed, via educational opportunities, towards jobs, such as caregiving, that are in demand and likely to be resilient to change for longer.42

40. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

41. It is interesting to note that a disproportionately large number of the ‘global creative elite’ went to Montessori schools, including Google founders Larry Page and Sergei Brin, Amazon’s Jeff Bezos, videogame pioneer Will Wright, and Wikipedia founder Jimmy Wales. This has led to the coining of the phrase “Montessori Mafia”. E.g. see Denning, S. (2011, 2 August) Is Montessori The Origin Of Google And Amazon? Forbes (retrieved from http://forbes.com, accessed 12 October, 2016).

42. In some countries with ageing populations robot carers may take on a bigger role sooner. This may be the case in Japan, for example, which has invested heavily in robotics for care. See Hudson, A. (2013, 16 November) ‘A robot is my friend’: Can machines care for elderly? BBC (retrieved from http://bbc.co.uk, accessed 11 October, 2016).

43. Measures could include fostering tech start-ups; strengthening links between universities and industry; exploring collaborations on public infrastructure (e.g. smart cities, driverless cars, hospitals etc.); and exploring opportunities for government to support and fast-track innovative uses of AI. Early adopters are likely to become centres of this new revolution and reap the most rewards. We might look to nations like Singapore, for instance, to see how collaboration can be driven between sectors to foster an environment that is conducive to innovation. E.g. see Daga, A. and Armstrong, R. (2014, 4 April) Singapore targets investment in ‘disruptive’ technologies. Reuters (retrieved from http://reuters.com, accessed on 11 October, 2016).

44. In the USA the White House will conduct such a study and recommend policy responses by the end of 2016: Executive Office of the President [2016, October] Preparing for the Future of Artificial Intelligence.

1. The UK Government should make the AI opportunity a central pillar of the Prime Minister’s proposed industrial strategy43 and of the trade deals that the UK must negotiate post-Brexit.

2. The UK Government should commission UK-specific research to assess which jobs are most at risk by sector, geography, age group, and gender, and then implement a smart strategy to address future job losses through retraining, job creation, financial support, and psychological support.44

3. The Department for Education should draft a White Paper on adapting the education system to maximise the opportunities and minimise the risks created by AI.

POLICY RECOMMENDATIONS


45. Panzarino, M. (2015, 2 June) Apple’s Tim Cook Delivers Blistering Speech On Encryption, Privacy. TechCrunch (retrieved from https://techcrunch.com, accessed 12 October, 2016).

46. E.g. see Manyika, J., Chui, M., Brown, B., Bughin, J., Dobbs, R., Roxburgh, C., and Byers, A. H. (2011). Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute (retrieved from https://mckinsey.com, accessed 10 October, 2016) and Reinsel, D., Chute, C., Schlichting, W., McArthur, J., Minton, S., Xheneti, I, Toncheva, A. and Manfrediz, A. (2007). The Expanding Digital Universe. White paper, IDC.

SECTION 2: PERSONAL DATA: PRIVACY, FAIRNESS, AND ACCOUNTABILITY

“Some of the most prominent and successful companies have built their businesses by lulling their customers into complacency about their personal information. They’re gobbling up everything they can learn about you and trying to monetize it.”45 Tim Cook, Apple CEO

Introduction

As we go about our lives we generate vast quantities of data. We shop online, bank online, date online, and watch TV online. We communicate via email and text message. Increasingly, through the Internet of Things (IoT), we are connecting growing parts of our lives to networks. We produce so much data, in fact, that it is impossible to store,46 let alone analyse, it all. This proliferation of data will continue as our use of digital technology and connectivity expand.

Analysing all this data to derive powerful insights into people’s lives is the sort of task that AI is very good at.

YouGov Poll Graph 4

What do British people think AI should be used for?

Gathering police intelligence: should be used 54% / should not 26% / don’t know 20%
Determining eligibility for mortgages: should be used 30% / should not 47% / don’t know 23%
Child risk assessment by social services: should be used 19% / should not 60% / don’t know 21%
Determining eligibility for jobs: should be used 19% / should not 58% / don’t know 23%
Determining eligibility for insurance: should be used 34% / should not 41% / don’t know 25%
Helping to diagnose diseases: should be used 45% / should not 32% / don’t know 23%

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).


Providing a Better Service

Businesses such as Netflix and Amazon use AI to tailor their offerings to customers, recommending films you may like and the books people like you are reading. The value that this kind of personalisation delivers for customers translates into the profits and giant stock market valuations of data-driven companies like Google and Facebook. Between 2015 and 2020, big data analytics and the IoT alone are expected to add £322 billion to the UK economy.47

Increasingly, AI algorithms are also being used to tailor public services and deliver public goods such as healthcare, improving energy efficiency,48 easing traffic flow,49 and controlling the spread of communicable diseases.50,51 In the future, we can expect AI insights to be applied more widely.

Privacy

The personal and social benefits arising from AI’s ability to interrogate big data may be enormous, but there is also the risk that information people would rather have kept confidential will be revealed. In one example, a father learned that his daughter was pregnant when he opened coupons for baby supplies that the store Target had mailed to her, apparently on the basis of an algorithmic prediction.52

Certain forms of data, such as commercial and medical information, are collected and stored under conditions of anonymity. However, advances in AI make anonymity increasingly fragile, and it may become increasingly possible to re-assign identity to particular sets of information because of AI’s ability to cross-reference between vast quantities of data in multiple data sets.53 These developments worsen existing concerns about privacy and raise new ones.54

Consent

Most of us are clueless about what data is collected about us, by whom, and for what purpose. This is recognised by both parties, with the necessary acquisition of consent literally and figuratively a mere box-ticking exercise. We need to consider whether access to our personal data remains a reasonable condition of use of everyday services, from email to Facebook.

Given the swift advances in data analytics it is impossible to imagine all the uses data may serve in the future, making it hard to assure the protection of data subjects. For example, Moorfields Eye Hospital drew heavy criticism this year for providing Google with sensitive patient information, seemingly without informed patient consent.

Bias

AI systems depend on the data on which they are trained and which they are given to assess, and may reflect back biases in that data in the actions they recommend. Biases may exist in data because of weaknesses in data collection; these may be addressed by ‘cleaning the data’55 or improving data collection.

Bias may also occur when a process being modelled itself exhibits unfairness. For example, if data on job applications was gathered from an industry that systematically hired men over women, and this data was then used to help select likely strong candidates in the future, this could then reinforce sexism in hiring decisions. Addressing this kind of bias may require a combination of common sense along with more

47. Hogan, O., Holdgate, L., and Jayasuriya, R., (2016) The Value of Big Data and the Internet of Things to the UK Economy. Report for SAS. Centre for Economics and Business Research Ltd.

48. Cisco (2014) IoE-Driven Smart Street Lighting Project Allows Oslo to Reduce Costs, Save Energy, Provide Better Service. Cisco (retrieved from https://cisco.com, accessed 10 October, 2016).

49. Morris, D. (2015, 5 August) Big data could improve supply chain efficiency-if companies would let it. Fortune (retrieved from http://fortune.com, accessed 10 October, 2016).

50. Grant, E. (2012) The promise of big data. Harvard T. Chan. School of Public Health (retrieved from https://www.hsph.harvard.edu, accessed 10 October, 2016).

51. Parslow, W. (2014, 17 January) How big data could be used to predict a patient’s future. The Guardian (retrieved from https://theguardian.com, retrieved on 10 October, 2016).

52. Duhigg, C. (2012, 16 February) How Companies Learn Your Secrets. The New York Times (retrieved from http://nytimes.com, accessed 10 October, 2016).

53. See Ohm, P. (2010). Broken promises of privacy: Responding to the surprising failure of anonymization. UCLA law review, 57, 1701.

54. The Digital Economy Bill, currently being read in Parliament, is a case in point. It has been criticised for its ‘thin safeguards’ regarding the sharing of publicly held data, and for a lack of precision in defining data sharing. Advances in AI will only serve to worsen existing tensions over how data is used by government.

55. Data cleaning refers to identifying incomplete, incorrect, inaccurate, irrelevant, etc. parts of a data set and then replacing, modifying, or deleting them.


complex and political kinds of interventions to avoid reinforcing unfair stereotypes and inequalities.56

In spite of these well-researched potential sources of bias in AI decision-making, there remains a tendency to view AI decisions as neutral.57 Concerns about bias are compounded by the severe lack of diversity in the AI field, raising fears that bias may be considered less of a problem or may not be identified when it occurs. With all this potential for bias, discrimination laws may be an important mechanism for ensuring that AI does not make society more unequal and unfair. Of course, concerns about data bias and machine prejudice must be considered alongside the existence of prejudice and bias in the human processes they replace.

Transparency

AI decision-making systems are often deployed as a background process, unknown and unseen by those they impact. Further problems arise from our inability to see how AI arrives at the decisions it makes. This is particularly true of some complicated machine learning algorithms which evolve over time. This ‘black box’ issue is exacerbated by the fact that significant stores of data are not in the public domain, meaning it is impossible to test or challenge results. This is one reason why organisations like the London-based Open Data Institute advocate for this information to be made public.

Governments and businesses need to be able to provide an explanation that people can understand as to why decisions have been made, and

56. AI Now (2016) The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term: A summary of the AI Now public symposium, hosted by the White House and New York University’s Information Law Institute, July 7th, 2016. AI Now (retrieved from https://artificialintelligencenow.com, accessed 10 October, 2016).

57. E.g. see Zarsky, T. (2016). The trouble with algorithmic decisions an analytic road map to examine efficiency and fairness in automated and opaque decision making. Science, Technology & Human Values, 41(1), 118-132.

58. Spice, B. (2015, 7 July) Questioning the Fairness of Targeting Ads Online. Carnegie Mellon University. News (retrieved from http://cmu.edu, accessed 10 October 2016).

59. Ingold, D. and Soper, S., (2016, 21 April) Amazon Doesn’t Consider the Race of Its Customers. Should It? Bloomberg (retrieved from http://bloomberg.com, accessed 10 October, 2016).

60. Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016, 23 May) Machine Bias. Pro Publica (retrieved from https://propublica.org, accessed 10 October, 2016).

61. Barr, A. (2015, 1 July) Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms. The Wall Street Journal (retrieved from http://blogs.wsj.com, accessed 10 October, 2016).

62. Rose, A. (2010, 22 January) Are Face-Detection Cameras Racist? Time (retrieved from http://content.time.com, accessed 10 October 2016).

63. Chen, B. X., (2009, 22 December) HP Investigates Claims of ‘Racist’ Computers. Wired (retrieved from https://wired.com, accessed 10 October, 2016).

Google ads promising help getting jobs paying more than $200,000 were shown to significantly fewer women than men.58

Amazon was criticised when its ‘Prime’ same day delivery service was only available in largely white, affluent areas due to decisions based on customer data.59

Recidivism software widely used by American courts to assess the likelihood of an individual re-offending was found to falsely flag black people at twice the rate it falsely flagged white people60 (this example was especially concerning as the algorithm was protected under intellectual property law and was not open to scrutiny).

Face recognition software has failed offensively on numerous occasions. Google’s photo app classified black people as gorillas;61 Nikon cameras thought Asian people were blinking;62 and Hewlett-Packard computers struggled to recognise black faces.63

BOX 1: EXAMPLES OF DATA BIAS AND MACHINE PREJUDICE


citizens must be able to challenge them if they are unfair. Opening up the ‘black box’ in this way should also make it easier to retain meaningful human control over AI in the long term.

Limits to AI in Data Analytics

It is important to recognise the limitations of data analysis. Correlation does not equal causation. AI today is capable of recognising patterns, and large diverse datasets can throw up many patterns indeed. Some are meaningful, others are not. This should be borne in mind as our use of these insights increases, especially when used to inform public policy.

Google’s ability to predict flu outbreaks, for instance, failed after what seemed like strong initial successes.64

The Urgent Need For Public Debate

All these developments challenge our current understanding of privacy, consent, and accountability, and our ability to make choices about information relating to us. Over time these issues will become more important, as the amount of information held about us grows and the ability to analyse it improves. They must be dealt with sensitively moving forward.

The agreement by the Government to establish a ‘Council of Data Ethics’ to address some of these issues is a welcome step, but this does not obviate the need for deeper public involvement and for further innovative ways to channel public opinion into policy on this very complicated issue.

We need a ‘new deal on data’ between citizens, business, and governments.65 This is in the interests of business and government as it will build trust. If we do not have a deeper public debate we risk undermining public confidence in this new technology, sparking opposition to its uptake.

The government needs to ensure all stakeholders can raise concerns in an open and constructive manner. Greater clarity is needed about who collects what, and for what purpose. People need to understand the rights of various parties and how to access information about how their own personal data is stored and used. Public debate should also focus on the uncertainties around how data might be used in the future.

• Japan is a leader in advanced robotics, and in 2015 the government published an action plan to facilitate the coming ‘robot revolution’ and maintain its status as a ‘robot superpower’. Because of Japan’s ageing population and declining workforce, an emphasis has been placed on robotics applied to manufacturing, service, care, and medicine.66 Japan is modifying its intellectual property and copyright laws to account for AI creations.67

• South Korea announced in 2007 that it was working on a ‘Robot Ethics Charter’ to prevent the misuse of robots and to safeguard human wellbeing.68

BOX 2: AI AND ROBOTICS REGULATION AROUND THE WORLD

64. Lazer, D. and Kennedy, R. (2015, 1 October) What We Can Learn From the Epic Failure of Google Flu Trends. Wired (retrieved from https://wired.com, accessed 11 October, 2016).

65. The case for a ‘new deal on data’ has been made in the USA by Alex “Sandy” Pentland. See for example Harvard Business Review (retrieved from https://hbr.org/2014/11/with-big-data-comes-big-responsibility accessed 10 October 2016).

66. The Headquarters for Japan’s Economic Revitalization (2015) New Robot Strategy: Japan’s Robot Strategy. Ministry of Economy, Trade and Industry (retrieved from http://meti.go.jp/english, accessed 10 October, 2016).

67. Segawa, N. (2016, 15 April) Japan eyes rights protection for AI artwork. Nikkei Asian Review (retrieved from http://asia.nikkei.com, accessed 10 October, 2016).

68. New Scientist (2007, 8 March) South Korea creates ethical code for righteous robots. New Scientist (retrieved from https://newscientist.com, accessed 10 October, 2016).


• The European Parliament this summer released a proposal suggesting that robots be classed as ‘electronic persons’. This is a response to the increasing abilities and uses of robotics and AI systems, necessitating a rethink in how we conceive of taxation, legal liability, and social security.69

• The European Parliament adopted new regulations on data protection in 2016. These regulations, likely to be in effect in 2018, include the ‘right to an explanation’ over algorithmic decisions that ‘significantly affect’ individuals’ lives.

• The US became the first nation to announce a policy on fully autonomous weapons in 2012. Directive Number 3000.09 requires, for up to 10 years, humans to be ‘in the loop’ when decisions are made about lethal force. Effectively, this is a ban on Lethal Autonomous Weapons Systems, though it is fairly weak, being both temporary and capable of being overridden by senior officials.70

• The UK has a liberal driverless cars policy, under which tests can be conducted anywhere in the UK without special permits.71

• Four US states have passed laws allowing driverless cars (Nevada, Florida, California, and Michigan),72 and the National Highway Traffic Safety Administration has said the AI system driving a car could be considered the car’s driver under federal law.73

69. See Prodhan, G. (2016, 21 June) Europe’s robots to become ‘electronic persons’ under draft plan Reuters (retrieved from http://reuters.com, accessed 10 October, 2016) and European Parliament (2016) Draft Report: Motion for a European Parliament Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) European Parliament (retrieved from http://europarl.europa.eu, accessed 10 October, 2016).

70. See Department of Defense (2012) Directive Number 3000.09 United States of America Department of Defense and Human Rights Watch (2013, 16 April) US: Ban Fully Autonomous Weapons. Human Rights Watch (retrieved from https://hrw.org, accessed 10 October, 2016).

71. Murgia, M. (2016, 11 April) Britain leads the world in putting driverless vehicles on the roads. The Telegraph (retrieved from http://telegraph.co.uk, accessed 10 October, 2016).

72. Murphy, M., and Jee, C. (2016, 11 July) The great driverless car race: Where will the UK place? Techworld (retrieved from http://techworld.com, accessed 10 October, 2016).

73. Shepardson, D and Lienert, P. (2016, 10 February) Exclusive: In boost to self-driving cars, U.S. tells Google computers can qualify as drivers. Reuters (retrieved from http://reuters.com, accessed 11 October, 2016).

74. Similar approaches have been adopted when discussing other emerging revolutionary technologies, such as Baroness Warnock’s examination of IVF.

75. These could be built on the proposed European ‘right to explanation’ regarding machine-made decisions over important real-life matters. See Goodman, B., and Flaxman, S. (2016). EU regulations on algorithmic decision-making and a “right to explanation”. arXiv preprint arXiv:1606.08813.

The UK Government should:

1. Agree a ‘new deal on data’ between citizens, businesses, and government with policies on privacy, consent, transparency, and accountability through a nation-wide debate led by a respected and impartial public figure.74

2. Promote transparency and accountability in AI decision-making by supporting research that facilitates an opening of the ‘black box’ of intelligent algorithms and supporting open data initiatives.

3. Establish systems of liability, accountability, justification, and redress for decisions made on the basis of AI. This would promote fairness and justice, and could encourage companies to invest in more transparent AI systems.75

POLICY RECOMMENDATIONS


SECTION 3: MILITARY USES OF AI

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”76 Open Letter, Convened by Future of Life Institute

Introduction

New technologies have always been put to military use. Artificial intelligence is no different, and it is already being applied across a wide range of military contexts.

• Cyber defences: intelligent systems are routinely deployed in defending against cyber attack.

• Cyber attack: information put in the public domain by Edward Snowden suggests that the USA is developing a program called MonsterMind, which can launch retaliatory cyber attacks as well as possessing defensive capabilities.77

• Intelligence: AI is used to analyse vast amounts of data, including satellite images and telephone records, to inform military operations.

• Missile guidance: AI has been used to enhance missile targeting, and could allow for more precise and flexible control.

• In the air: AI is able to automate the processes of drones, including surveillance, patrolling, and targeting. It is possible this will lead to fully autonomous systems in the near future.

• At sea: the US ‘Sea Hunter’ is an anti-submarine vessel capable of significant autonomy.78

• Border defence: the South Korean Super aEgis II turret – deployed along the demilitarised zone – is capable of identifying and firing on targets without human intervention. Safeguards removing its automated firing capability were later added due to fears that it would make mistakes.79

BOX 3: HOW IS AI BEING PUT TO MILITARY USE TODAY?

76. Future of Life Institute (2015) Autonomous weapons: an open letter from AI and robotics researchers. Future of Life Institute (retrieved from http://futureoflife.org, accessed 10 October, 2016).

77. Zetter, K. (2014, 13 August) Meet MonsterMind, the NSA Bot That Could Wage Cyberwar Autonomously Wired (retrieved from https://wired.com, accessed 11 October, 2016).

78. Masunaga, S. (2016, 18 August) Say hello to underwater drones: The Pentagon is looking to extend its robot fighting forces. LA Times (retrieved from http://latimes.com, accessed 11 October, 2016).

79. See Parkin, S. (2015, 16 July) Killer robots: The soldiers that never sleep. BBC (retrieved from http://bbc.com, accessed 11 October, 2016).


Advances in AI make it increasingly possible to take humans out of the loop in military decision-making. This move towards greater autonomy threatens to upend existing concepts of warfare, and has been highlighted by many military powers as an area of key strategic interest.80 Though there are many big issues when it comes to military uses of AI, it is autonomy that perhaps raises the most challenging and novel problems.81

It remains useful to distinguish between the use of AI in cyber warfare and the use of AI in actual physical ‘kinetic’ weapons and systems, although the lines between ‘cyber’ and ‘real-world’ can be very blurred. For example, the Stuxnet cyber attack caused significant physical real-world damage to Iran’s nuclear centrifuges.

Cyber Warfare

A clear indication of the importance that the UK now places on cyber defence was the recent announcement of a new National Cyber Security Centre in London.82

In the cyber domain, the speed of attacks and the quantity of information involved make it impossible for humans to respond effectively without assistance. The growing use of intelligent cyber weapons exacerbates this issue, and makes pre-designed cyber defences alone insufficient to protect digital systems. Automation is required to respond to these attacks in real time.

Automated cyber defences are fairly uncontroversial, codified in the right to defend oneself. Much more controversial are automated cyber defences which are permitted to counter-strike as part of their manoeuvres, a capacity Edward Snowden claims is possessed by the US program ‘MonsterMind’.83

This issue is only magnified by the difficulties one encounters in assigning attribution to cyber attacks, meaning a counter-strike could potentially be launched against an innocent party.

It is imperative that means of regulating autonomous cyber systems are researched before the technology becomes more widely available; this space remains critically under-explored.

Drones

The advent of military Unmanned Vehicles (UVs), or drones, has radically changed modern warfare. UVs already provide a number of strategic advantages in areas where human performance might be suboptimal or undesirable, such as very dangerous operations, or those that might be dull, repetitive, or lengthy. Major military powers have highlighted the importance they place on drones for future operations, and it is likely their use will continue to increase in coming years. This year, as part of the wider, twice-yearly ‘Exercise Joint Warrior’, unmanned drones will take to the stage in a NATO-wide military drill. Held off the coast of Scotland, ‘Unmanned Warrior 2016’ will allow military powers and arms producers to showcase their achievements in this area. This highlights the UK’s efforts to integrate drones into military operations.

Advances in drone-to-drone communication are likely to be particularly transformative, permitting swarming behaviour.84 Swarms would be capable of overwhelming traditional defences, which are geared towards large, singular entities. They would also mark a departure from recent trends towards

80. E.g. see Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race. World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016).

81. Many other areas are worthy of consideration. For instance, some uses of AI-enabled algorithms to analyse data and identify potential terrorists has been built upon questionably small datasets. E.g. see Robbins, M. (2016, 18 February) Has a rampaging AI algorithm really killed thousands in Pakistan? The Guardian (retrieved from https://theguardian.com, accessed 10 October, 2016).

82. Bourke, J., Cecil, N., and Prynn, J. (2016, 30 September) National Cyber Security Centre to lead digital war from new HQ in the heart of London. Evening Standard (retrieved from http://standard.co.uk, accessed 10 October, 2016).

83. Zetter, K. (2014, 13 August) Meet MonsterMind, the NSA Bot That Could Wage Cyberwar Autonomously Wired (retrieved from https://wired.com, accessed 10 October, 2016).

84. E.g. see Arquilla, J., and Ronfeldt, D. (2000) Swarming and the Future of Conflict, Santa Monica, Ca., RAND.


large, expensive equipment, encouraging the use of small, replaceable, and cheaper objects which are capable of acting as an adaptable and complex entity. It is likely that drones and swarms will reconfigure the battlefield, and require novel strategies for attack and defence.

As AI improves, many drone processes are likely to be automated. Automation permits more rapid responses, reduces the human labour required for operations,85 and removes the need for a secure and robust connection to the UV, allowing operations in a wider range of environments.

For many purposes, automation is fairly uncontroversial. Routine, dull, or dangerous operations, such as surveillance, patrolling, or bomb disposal, all benefit from automation and present few challenges that are not already raised by virtue of remote operation. Many of these processes are already automated to varying degrees today.

Lethal Autonomous Weapons Systems

Lethal Autonomous Weapons Systems (LAWS) are broadly defined as systems capable of identifying, targeting, and killing without human intervention. Such systems raise profound ethical, legal and political questions, and have spawned a global movement calling for a pre-emptive ban on their use and development.

In a widely-publicised open letter led by the Future of Life Institute, a number of high-profile figures (including Stephen Hawking and the Google DeepMind founders Demis Hassabis and Mustafa Suleyman) argued that a failure to condemn and prohibit LAWS will lead to a dangerous AI-based arms race,86 the consequences of which would be highly damaging to humanity.87 It is likely that an AI arms race would be inherently less stable than that of the Cold War, and it could upset delicate geopolitical balances. It is also likely that research into safer AI systems will be of lower priority in an arms race situation, thus increasing the long-term risk of AI, which we explore further in the next section.88 Such an arms race is already showing signs of starting.89

Others question whether there are any circumstances under which use of LAWS could comply with international humanitarian law, making them a fundamentally unethical component of any arsenal.90 For many people, giving machines power over human life and death is a fundamental affront to human dignity.91

International support is growing for a ban on LAWS, and currently fourteen countries support this position.92 The UK has explicitly voiced its opposition to a ban, or other international regulation, though it insists weapons ‘will always be under human oversight and control’.93 The current US position is that there should be a human in the loop;94 however, they, and a number of major military powers, are actively developing, and in some instances deploying, weapons with high degrees of autonomy.

85. A single drone, for instance, can require hundreds of remote operatives e.g. see Sloan, E. (2015). Robotics at war. Survival: Global Politics and Strategy. 57(5), 107-120.

86. There are already signals that this may be beginning. For instance, The United States’ Third Offset Strategy places an emphasis on the importance of keeping ahead with advanced technology, and they, along with Russia and China, are investing heavily in AI and robotics. E.g. see Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race. World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016).

87. Future of Life Institute (2015) Autonomous weapons: an open letter from AI and robotics researchers. Future of Life Institute (retrieved from http://futureoflife.org, accessed 10 October, 2016).

88. Also see Armstrong, S., Bostrom, N., and Shulman, C. (2016). Racing to the precipice: a model of artificial intelligence development. AI and Society, 31(2), 201-206.

89. Kaspersen, A. (2016, 15 June) On the Brink of an Artificial Intelligence Arms Race. World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016).

90. E.g. see Rahim, R. A., (2015, 12 November) Ten reasons why it’s time to get serious about banning ‘Killer Robots’. Amnesty International (retrieved from https://amnesty.org, accessed 10 October, 2016).

91. E.g. see Asaro, P., (2012) On banning autonomous weapon systems: human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 94(886). For potential problems with the human dignity argument see Saxton, A., (2016) (Un)Dignified Killer Robots? The Problem with the Human Dignity Argument Lawfare Institute (retrieved from https://lawfareblog.com, accessed 10 October, 2016).

92. In alphabetical order: Algeria, Chile, Costa Rica, Cuba, Bolivia, Ecuador, Egypt, Ghana, Holy See, Mexico, Nicaragua, Pakistan, State of Palestine, and Zimbabwe.

93. Bowcott, O. (2015, 13 April) UK opposes international ban on developing ‘killer robots’ The Guardian (retrieved from http://theguardian.com, accessed 10 October, 2016).

94. Defense Science Board (2016) Report of the Defense Science Board Summer Study on Autonomy. Defense Science Board.


95. World Economic Forum (2016) What if robots go to war? World Economic Forum (retrieved from https://weforum.org, accessed 10 October, 2016).

96. Dean, J. (2016, 10 June) RAF drone could strike without human sanction. The Times (retrieved from http://thetimes.co.uk, accessed 10 October, 2016).

97. Respondents were prompted with this statement: “The following question is about Lethal Autonomous Weapons. Lethal Autonomous Weapons are weapons that can identify and attack human targets without human intervention. At the moment humans have to give the final command to go ahead. There are currently proposals for a pre-emptive international ban on Lethal Autonomous Weapons which could attack targets without this human approval.”

Until recently the official position of the UK’s major arms manufacturer BAE Systems echoed the UK Government’s position that there will “always be a need for a man in the loop”.95 This position appeared to shift somewhat, however, when the company recently revealed that it is pushing ahead with development of armed Taranis drones and proceeding on the basis that an autonomous strike capability could be required in future.96

The notion of meaningful human control is the overarching issue in this debate. Though the matter is regularly debated at the UN, it is unlikely that it will be resolved swiftly. Indeed, it is possible that fully autonomous weapons will be available before any meaningful regulation is in place. 2016 is a critical year for action on this issue, with the five-yearly review conference of the Convention on Conventional Weapons taking place from 12-16 December. Non-governmental organisations, including the Campaign to Stop Killer Robots, are trying to secure action on LAWS in this context.

A pre-emptive ban, especially if supported by major military powers, would likely set a global precedent, and the risk of condemnation would raise the political costs of other nations pursuing LAWS. The UK, as a member of the UN Security Council and NATO, is influential in global military matters, and could sway a number of nations to follow suit. Historically, the UK has taken a leading international role in disarmament, particularly in relation to chemical and biological weapons. It has the opportunity to do the same here. Our YouGov poll finds that 50% of British people think the Government should support a ban, while only 34% think the Government should oppose one (see Graph 5).

A model of BAE Systems’ Taranis drone on display at Farnborough Airshow in 2008.

YouGov Poll Graph 5 – British people support a ban on Lethal Autonomous Weapons.

In general, to what extent do you think the UK Government should support or oppose a pre-emptive ban on Lethal Autonomous Weapons, which could attack targets without human approval?97

Strongly support: 37%
Tend to support: 13%
Tend to oppose: 13%
Strongly oppose: 21%
Don’t know: 16%
Net: Support: 50%
Net: Oppose: 34%

All figures, unless otherwise stated, are from YouGov Plc. Total sample size was 2,070 adults. Fieldwork was undertaken between 10th - 11th October 2016. The survey was carried out online. The figures have been weighted and are representative of all UK adults (aged 18+).


98. The term LAWS is generally taken to be synonymous with the term ‘killer robots’, and much debate is focused around this idea. When developing regulation in this area it is also important to recognise the broader functions of autonomy in weapons systems, which extend beyond simply ‘killer robots’.

The UK Government should:

1. Support a ban on Lethal Autonomous Weapons Systems (LAWS) and work with international partners to develop a plan for the enforcement of the ban and for the prevention of the proliferation of LAWS.98

POLICY RECOMMENDATIONS

Difficulties of Enforcing a Ban on LAWS

The practical difficulties of enforcing a ban on LAWS would be significant. The perceived strategic benefits will create strong incentives for the development of this weaponry. AI-based weapons systems can be built with humans in the loop but with the removal of the human as a very simple final step. It would perhaps take only one real-world situation in future in which the case for taking humans out of the loop is overwhelmingly strong for the use of LAWS to start to become normalised.

The extraordinary difficulties of enforcing a ban on LAWS are compounded by the ease of access to AI, the small physical scale of AI research, the ‘copyability’ of software, and the dual use (military and non-military) of components. Looking for sites of clandestine LAWS development will be much harder than trying to detect large nuclear facilities.

The involvement of private companies in AI research means that much of this technology could be widely available, potentially placing advanced military capabilities in the hands of non-state actors and terrorists.

Matters are further complicated by the current lack of an established definition for LAWS, with groups often talking past one another as a result. Autonomy, control, and harm all occur on a spectrum. To some, an autonomous weapon could be as simple as a missile with greater freedom in its targeting system, while to others they are intelligent learning entities.

Furthermore, military decisions are made in complicated networks. One would need to differentiate between human control at the stage of development and deployment of an autonomous system and human control at the stage of the weapon system’s operation (that is, when it independently selects and attacks a target). Work is needed to define these terms and issues more clearly.

The approach of the UK should be to support a ban on LAWS and, in addition, to work with the international community to develop a plan for the complicated issue of its enforcement and the prevention of LAWS proliferation. For legal, ethical, and military-operational reasons it is vital that human control over weapons systems and the use of force is retained.


SECTION 4: SUPERINTELLIGENCE AND THE MORE DISTANT FUTURE

“It’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.”99 Stephen Hawking

Introduction

Despite recent increases in the power and flexibility of AI, existing AI systems are still only able to accomplish tasks within limited domains. We are yet to realise artificial general intelligence (AGI), which would be capable of attempting more or less any problem a human can.

It is reasonable to expect, however, that given sufficient advances in computer science and processing power100 it will be possible to achieve AGI at some point in the future.101 Experts, when surveyed, predicted a median timeline of 2040 for the arrival of human-level machine intelligence,102 though within this group opinions varied from it being just around the corner to being impossible.103

From AGI to ASI: the Risks of Superintelligence

Recently a number of high-profile figures (Stephen Hawking, Bill Gates, Steve Wozniak, and Elon Musk), as well as a number of leading computer scientists (Murray Shanahan and Stuart Russell), have expressed concern over the risks associated with the development of so-called ‘superintelligence’.

If AGI is capable of broadly matching humans across many domains, it is plausible that it will be capable of turning its abilities towards engineering a smarter computer. This could trigger an ‘intelligence explosion’, resulting in an artificial ‘superintelligence’ (ASI).104 These ideas have been explored most famously by Nick Bostrom in his book “Superintelligence”.105

An ASI will likely have a particular task encoded in its programming. Bostrom argues that, whatever the main task, there are a number of sub-goals which will almost always be helpful in achieving it. These sub-goals include resource acquisition, improved intellect, self-preservation, and subduing competition. Bostrom suggests that such sub-goals could prove disastrous to humans (who represent competition, material resources, and a potential threat to continued existence) if such a system were not aligned with human values.

Though this is a simplified account of a complex argument, it serves to highlight that we have reason to think carefully about the unrestricted development of AGI, and the risks it entails.106

99. Griffin, A. (2015, 8 October) Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants. The Independent (retrieved from http://independent.co.uk, accessed 12 October, 2016).

100. Processing power has doubled roughly every two years for several decades (as per Moore’s law).

101. These are but two possible areas which might facilitate the development of an AGI.

102. Defined as a system ‘that can carry out most human professions at least as well as a typical human’.

103. Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford, OUP, p. 19.

104. The dynamics of such an ‘explosion’, or even its possibility, are contested, and the nature of this would have profound impacts upon the types of risks it poses and our ability to withstand them. For discussion see Hanson, R. (2013). The Hanson-Yudkowsky AI-Foom Debate. Berkeley, CA: Machine Intelligence Research Institute.

105. Bostrom (2014).

106. For a fuller description see Bostrom (2014).


These long-term risks of AI development have prompted Elon Musk to donate $10 million to research aiming to tackle them, as well as to support the OpenAI initiative, which aims to facilitate the safe development of AI and to ensure its benefits are accessible to all. Concern over these risks has also contributed to the launch of the £10 million Leverhulme Centre for the Future of Intelligence, a collaboration between Oxford, Cambridge, Imperial, and the University of California, Berkeley.107 The scale of these efforts remains fairly small compared to the size of the challenge.

We Need to Think about Safety Now

Short-term exigencies make it extremely hard for governments to implement policies where the impact is uncertain and could be felt 20 or 30 years away. However, even if the probability of a dangerous superintelligence emerging is incredibly low, the magnitude of the risk involved justifies its consideration now.

Safety measures may take a significant amount of time to implement, and a failure to consider safety now could lock us into a path of unsafe development. One example is the current lack of transparency in some existing algorithms, which undermines the ability of humans to intervene in and understand AI decision-making.

Much is unknown about the possible dynamics of an intelligence explosion or the development of an AGI. It is unknown, for example, what the necessary components are for developing AGI, making it difficult to predict when or if an AGI might arise. Identifying signposts that would signal its approach would be valuable.108,109

Scenarios are likely to differ if, for instance, only one actor (company or government) manages to develop this technology, as opposed to concurrent breakthroughs in multiple areas. Whether AGI is developed in the military, industrial, or public domain could also have significant implications for its social and political impact. Better mapping of the potential spaces of AGI development would therefore be useful.

As the race to develop increasingly powerful AI intensifies in different sectors it is vital this does not lead to a ‘race to the bottom’ when it comes to AI safety. It is important to instil and institutionalise the need for safe and mindful AI research in all sectors.

We do not advocate attending to these longer-term problems in lieu of present ones. This is not an ‘either-or’ choice: one can consider the risks associated with superintelligence alongside pressing problems today. Indeed, a number of solutions serve a dual purpose. Encouraging the transparency of AI systems, for instance, facilitates fairness and justice, but also preserves meaningful human control and understanding of these systems, which will better enable us to prevent undesirable events in the further future.

The recent publication by Google researchers and others of a paper on ‘concrete problems in AI safety’ is a welcome example of seeking to address present challenges with one eye also on longer-term safety.110

In another example, Google DeepMind was reported, in June 2016, to be working with academics at Oxford University to develop a ‘kill switch’: code that would ensure an AI system could “be repeatedly and safely interrupted by human overseers without [the system] learning how to avoid or manipulate these interventions.”111

107. See University of Cambridge (2015) The future of intelligence: Cambridge University launches new centre to study AI and the future of humanity University of Cambridge (retrieved from http://cam.ac.uk, accessed 10 October, 2016).

108. Examples of such signposts might include: success in mapping and simulating the human brain, or advances in computer processing.

109. In the USA the Executive Office of the President recently recommended that the National Science and Technology Council Subcommittee on Machine Learning and Artificial Intelligence should monitor developments in AI, and report regularly, especially with regard to milestones: Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.

110. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.

111. Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.


Alternative Futures

The future is uncertain and extremely hard to predict. But if current advances in AI research and development, as well as in other areas such as computing power, continue then we should expect AI to become increasingly powerful and prevalent in our society.

An ‘intelligence explosion’, or superintelligence, is not a necessary requirement for AI to have a profound impact in the more distant future.112 Both the opportunities and the risks presented by AI today, a number of which are discussed in this paper, are likely to be amplified significantly. And these will be joined by new opportunities and risks yet to be envisioned.

The UK Government should:

1. Give appropriate attention to long-term issues of AI safety: support research into AI safety and horizon scanning; support the institutionalisation of safe AI research conduct in all sectors including the development of a code of ethics;113 develop standards and guidelines for whistle-blowers; and ensure students and researchers are trained in the ethical implications of their work.

POLICY RECOMMENDATIONS

112. In a recent article, Huw Price suggests that our grandchildren may be living in a different era, ‘perhaps more Machinocene than Anthropocene’. Price, H. (2016, 17 October) Now it’s time to prepare for the Machinocene. Aeon (retrieved from https://aeon.co, accessed 17 October, 2016).

113. This could build on the work of IEEE’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems.


CONCLUSION

The economic, political, military, and social history of the last two and a half centuries can in large part be seen as logical ripple effects of the industrial revolution. The intelligence revolution may well have a more profound impact over shorter timeframes. Research by McKinsey describes AI as contributing to a transformation of society “happening ten times faster and at 300 times the scale, or roughly 3,000 times the impact” of the Industrial Revolution.115 This revolution will create huge opportunities for economic growth, scientific development, and social advancement. However, there are also significant risks ahead.

Citizens have a role to play in preparing themselves and their friends, colleagues, and families for the accelerated changes that AI will bring in areas such as employment patterns. At the same time citizens should demand that businesses and governments take action to maximise the opportunities and minimise the risks of AI development. We also need to find ways of having a much deeper and more informed public debate about AI.

Industry has a critical role to play, especially given the highly technical nature of AI and the challenges this poses to citizens and governments in ensuring that those developing AI act in the best interests of society. The recent launch of the ‘Partnership on AI to Benefit People and Society’ by Amazon, Google, Facebook, IBM, and Microsoft is very welcome in this regard. The Partnership’s stated aims are to create a forum for open discussion around the benefits and challenges of developing cutting-edge AI, to advance public understanding, and to formulate best practices on some of the most important and challenging ethical issues in the field.116

We cannot rely entirely on public engagement and self-regulation from business to guarantee the best outcomes of AI development. The role of government must be central to maximising the opportunities and minimising the risks of AI.

Unfortunately, political horizons have tended to shrink from election cycles, to media cycles, to Twitter cycles in recent years, and there is a danger that some of the challenges of AI do not feel urgent enough. Another obstacle for politicians of all parties is the technical and fast-moving nature of this debate, which makes it difficult for MPs (like everyone else) to understand, and a complicated conversation to have with the electorate. The fact that ‘artificial intelligence’ has only been mentioned 32 times in the House of Commons since electronic records began is not good enough given the profound economic, political, military, and social impact this technology is set to have.

114. Dadich, S. (2016, November) Barack Obama, Neural Nets, Self-Driving Cars, and the Future of the World. Wired (retrieved from https://wired.com, accessed 18 October, 2016).

115. Dobbs, R., Manyika, J., Woetzel, J. (2015) The four global forces breaking all the trends. McKinsey Global Institute (retrieved from https://mckinsey.com, accessed 10 October, 2016).

116. See Suleyman, M. (2016) Announcing the Partnership on AI to Benefit People & Society (retrieved from https://deepmind.com, accessed 28 September, 2016).

“We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.”114

President Barack Obama



The UK risks falling behind other major players in making the most of the AI opportunity, especially the US where the White House recently published a very detailed report on preparing for the future of artificial intelligence.117

We are not powerless in the face of the future: it can be actively shaped. We have the opportunity to create a fundamentally better society, though we also face many risks. The UK government has a responsibility to ensure AI is developed and used in a way that maximises the benefits and minimises the risks, and it is imperative that it acts swiftly and prudently to do so.

117. Executive Office of the President (2016, October) Preparing for the Future of Artificial Intelligence.

118. This was also a recommendation of the recent report of the Science and Technology Committee (2016, 12 September) Robotics and Artificial Intelligence. HC 145 2016-17.

119. CSaP (Centre for Science and Policy) in Cambridge is one effective model in this area.

120. The X Prize, or Nesta's Challenge Prizes, could be models in this area.

The UK Government should:

1. Facilitate a House of Commons debate on maximising the opportunities and minimising the risks of AI in the UK.

2. Establish a Standing Commission on AI to examine the social, ethical, and legal implications of recent and potential developments in AI.118

3. Develop mechanisms to enable fast transfers of information and understanding between researchers, industry, and government to facilitate swift and accurate policy-making based on fact.119

4. Launch a prize for the application of AI to tackling today's major social challenges and delivering public goods.120

POLICY RECOMMENDATIONS




Future Advocacy is a think tank and consultancy working on some of the greatest challenges faced by humanity in the 21st Century.

www.futureadvocacy.org
@FutureAdvocacy

OCTOBER 2016
