Dr. Jean-Marc Rickli about the future of AI

Wednesday, May 23, 2018

Dr. Jean-Marc Rickli, Head of Global Risks and Resilience at GCSP, explains the future of AI, related risks and opportunities.


Jean-Marc Rickli - Head of Global Risks and Resilience, Geneva Centre for Security Policy (Switzerland)

We are excited to welcome Dr. Jean-Marc Rickli as our guest expert to explain Artificial Intelligence and its impact on our society. Dr. Jean-Marc Rickli is responsible for the Geneva Centre for Security Policy's activities related to Global Risk and Resilience, under the umbrella of the Emerging Security Challenges Programme. He is a senior advisor for the AI (Artificial Intelligence) Initiative at the Future Society at Harvard Kennedy School and an expert on lethal autonomous weapons systems for the United Nations, as well as the co-chair of the emerging security challenges working group of the NATO Partnership for Peace Consortium. Prior to his current appointment, he was an assistant professor in the Defence Studies Department of King's College London. He holds a PhD and an MPhil in International Relations from Oxford University, UK, where he was also a Berrow scholar at Lincoln College.

Below are the questions we asked Jean-Marc:

1. What are the most common myths about AI that exist today?

These are not so much myths as different levels of discussion that stem from different definitions of AI. Some talk about the current state of AI, qualified as weak AI: algorithms developed to fulfil specific tasks, such as Apple's Siri virtual assistant. Some talk about artificial general intelligence (AGI), an AI that would be able to match the human brain across all of its tasks. A company like Alphabet's DeepMind is doing research on this. The last group is concerned with superintelligence, a state beyond the technological singularity point where AI will outperform human intelligence. This discussion has been popularized by the books of Nick Bostrom or Ray Kurzweil, for instance. The confusion in current debates comes from the fact that people talk about different levels of AI and mix them. Bostrom's argument is that although AI can bring great benefits to humanity, it also potentially poses a catastrophic risk to it. This vision has also been supported by Elon Musk and Stephen Hawking, but it is considered unrealistic by the proponents of weak AI. Hence confusion arises because different levels of AI are mixed. The perceived myths thus come from different understandings of AI that imply different time horizons: now, the near to medium term, and the distant future.


2. Do you perceive AI as an emerging threat or rather as a unique opportunity for humanity?

Again, it depends on which definition of AI you are considering. Current uses of weak AI in the field of image recognition have a tremendously beneficial impact in medicine, for instance. Algorithms are now much better at spotting patterns in MRIs than radiologists are, which will greatly improve cancer detection. On the other hand, the same technology used in image recognition can be militarized. As a matter of fact, the US Department of Defense introduced Project Maven a year ago, an algorithm developed in cooperation with Google for real-time object identification in drone footage. The idea is then to equip weaponized drones with it to reduce the operator's workload in the kill chain. So the same technology, image recognition, can be used for both commercial and military purposes: AI is in essence a dual-use technology. When it comes to AGI and superintelligence, one can only speculate, but if we assume that a future artificial entity might surpass human cognitive capabilities and skills, then we had better make sure that safeguards are in place in case this entity goes rogue.
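To illustrate how low the barrier is, the sketch below shows a generic off-the-shelf image classifier; whether such a pipeline ends up reading medical scans or drone footage depends only on the data and labels it is trained on. This is a minimal illustration, assuming Python with PyTorch and torchvision installed; the ImageNet-pretrained ResNet and the file name frame.jpg are illustrative assumptions, not details from the interview.

# Minimal image-recognition sketch: the same pipeline serves medical or
# military imagery depending only on the data it is trained on.
import torch
from torchvision import models, transforms
from PIL import Image

# Load an off-the-shelf classifier pretrained on ImageNet (illustrative only).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard preprocessing expected by the pretrained network.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame.jpg").convert("RGB")   # hypothetical input image
batch = preprocess(image).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probabilities = torch.softmax(model(batch), dim=1)

top_prob, top_class = probabilities.topk(1)
print(f"predicted class index {top_class.item()} "
      f"with probability {top_prob.item():.2f}")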


3. When, if ever, will people invent strong AI, capable of replacing humans in almost any area of activity?

There are strong disagreements among experts in the field. For instance, Ray Kurzweil believes that AI will exceed human capabilities by 2029 and states that the technological singularity, where AI will surpass human intelligence, will arrive by 2045. Rodney Brooks, on the other hand, argues that this won't happen in the next 100 or even several hundred years. There is absolutely no agreement among experts. Last year, however, a joint Yale and Oxford University study surveyed 352 AI experts and concluded that researchers believe there is a 50% chance of AI outperforming humans in all tasks within 45 years, so by 2063, and of automating all human jobs within 120 years.


4. Do you think that AI research has to be regulated or controlled somehow?

Of course. We are talking here about a technology that is highly disruptive by nature. Safeguards have to be put in place to prevent malicious uses of this technology. As this technology is dual-use, there is a need to involve all stakeholders in the regulation process: governments, the scientific community, the commercial sector and also civil society.


5. Should civilized countries regulate the export of AI technologies, similar to cryptography for example?

I don’t know what you mean by civilized countries. It is, however, a duty of the international community and ALL states to regulate the development of emerging disruptive technologies such as AI, synthetic biology or nanotechnology. The tricky issue with AI is that algorithms are lines of code. Hence, unlike nuclear material, they can proliferate very quickly: once code has been released it can be used or modified very easily. So here we are facing a problem of both horizontal (among states) and vertical (states to non-state actors) proliferation. This reinforces the need for cooperation between states and the private sector on regulations and safeguards.


6. What are the risks to privacy when Big Data is used to train AI?

As the Cambridge Analytica scandal demonstrated, no data is neutral. The power of AI is that it can aggregate and process an amount of data that is beyond human capabilities and infer patterns from it. So, the more data about yourself, the better an algorithm will be at profiling you. Hence, there is a risk that an algorithm can then be used to manipulate you by sending information your way that appeals to your psychological biases. As Cambridge Analytica demonstrated, this is very powerful. Now, with generative adversarial networks (GANs), basically two neural networks competing in a zero-sum game, you can fake videos or pictures that human eyes cannot distinguish from real ones. These are highly dangerous tools of manipulation. There is also a debate about the biases in the data used to train algorithms: for instance, there are racial and gender biases that are being reproduced by algorithms.
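For readers curious about the mechanics, here is a minimal sketch of the adversarial game described above, assuming Python with PyTorch: a generator learns to produce samples from a toy 1-D distribution that a discriminator can no longer tell apart from real ones. The network sizes, the Gaussian toy data and all variable names are illustrative assumptions; deepfake systems apply the same principle to images and video at far larger scale.

# Toy generative adversarial network: generator G tries to produce samples
# that discriminator D cannot tell apart from "real" data (a 1-D Gaussian).
import torch
import torch.nn as nn

def real_data(n):
    # "Real" samples drawn from a 1-D Gaussian with mean 2.0.
    return torch.randn(n, 1) * 0.5 + 2.0

def noise(n):
    # Random input that the generator turns into fake samples.
    return torch.randn(n, 8)

# G maps noise to candidate samples; D outputs the probability a sample is real.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train D to assign 1 to real samples and 0 to generated ones.
    real = real_data(64)
    fake = G(noise(64)).detach()          # detach: do not update G in this step
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train G to fool D -- the zero-sum game described above.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# After training, the generated samples' mean should approach 2.0,
# the mean of the "real" data.
print("mean of generated samples:", G(noise(1000)).mean().item())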


7. Do you agree with imposing taxation on AI that replaces humans?

I don’t think it is a practical way to look at the problem of job destruction. AI is increasingly used in a lot of different devices and applications. For instance, do you want to impose taxes on GPS systems? Not really. The real question will increasingly be about man-machine interaction and job displacement. Certain jobs will probably disappear (especially those that are repetitive and can be automated easily), and some new jobs will also emerge. Data scientist did not exist as a job a few years ago and is now among the most sought-after positions on the market. Although there are more and more studies about the impact AI will have on the job market, we still really don’t know. What is sure is that humans will increasingly have to team with machines. If entire groups of the population become jobless, then we will probably have to completely redefine our socio-economic system based on wealth creation through jobs and, ultimately, capitalism.


8. Who shall be legally and financially accountable for AI mistakes?

This depends on the level of autonomy of the device you are using: the more autonomous the algorithm you rely upon, the more difficult the question becomes. This is, for instance, a key issue that car manufacturers are facing with the introduction of autonomous cars. I am not a lawyer, but the discussion in the future will increasingly be about the responsibility of the user vs. that of the manufacturer or the programmer.


9. What are the dangers of malicious AI usage, for example by cybercriminals?

Well, there are too many to discuss here. I recommend reading the report The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation, which provides many different scenarios in which AI could be used maliciously. In the field of cybercrime, the use of GANs could lead to cyber attacks of unprecedented destructive power. One could also imagine adaptive malware or ransomware that could have a terrible impact. It is worth noting that my institution, the Geneva Centre for Security Policy, offers short courses (1-3 days) that deal with exactly this topic, the impact of disruptive technologies on security. Feel free to look up our courses in our course catalogue at https://www.gcsp.ch/Courses


10. What would be your advice to governments on AI promotion, usage and regulation?

Again, a very large debate. There is no simple answer, but the bottom line is that governments have to cooperate with the commercial sector to develop adequate regulations. As for AI promotion and usage, the education of the population is key. A lot of effort has to be put into educating people about basic cyber hygiene and then about the potential and myths of artificial intelligence. The problem is that the rate of growth of this technology is exponential, while our education systems are very much linear.


High-Tech Bridge's series of Cybersecurity Leaders Interviews compiles the thoughts of cybersecurity executives, thought leaders, visionaries and eminent technology experts.
