Ethics of AI and Cybersecurity When Sovereignty Is at Stake: Paul Timmers
https://doi.org/10.1007/s11023-019-09508-4
COMMENTARY
Paul Timmers1
Abstract
Sovereignty and strategic autonomy are felt to be at risk today, being threatened by
the forces of rising international tensions, disruptive digital transformations and
explosive growth of cybersecurity incidents. The combination of AI and cybersecu-
rity is at the sharp edge of this development and raises many ethical questions and
dilemmas. In this commentary, I analyse how we can understand the ethics of AI and
cybersecurity in relation to sovereignty and strategic autonomy. The analysis is fol-
lowed by policy recommendations, some of which may appear to be controversial,
such as the strategic use of ethics. I conclude with a reflection on underlying con-
cepts as an invitation for further research. The goal is to inspire policy-makers, aca-
demics and business strategists in their work, and to be an input for public debate.
Over the last few years strategic autonomy and sovereignty have become top politi-
cal priorities. Government leaders feel that national sovereignty is under threat.
The reason is a confluence of pervasive, transformative and even disruptive digital
technologies, explosive growth of cyber incidents, and rising international tensions
between the US and EU on one side and China and Russia on the other, as well
as transatlantic tensions.
There is no doubt that these threats put sovereignty at stake. Kello (2017) argues
that ‘cyber’ creates a ‘sovereignty gap’. Both state and non-state actors are exploit-
ing cybersecurity means. Kello observes a combination of persistent disruption
(‘unpeace’), rogue state actors that misuse cyber technologies, and cyber-enabled
exercise of influence by non-state actors, from state-proxies (Maurer 2018) to terrorists to global platforms, that systemically alter the balance of power in the traditional state-based (Westphalian) system of international relations.

* Paul Timmers, [email protected]
1 University of Oxford, Oxford, UK

Fig. 1 Approaches to address strategic autonomy in relation to cybersecurity
Policy-makers and politicians tend to see strategic autonomy as a means to an
end, namely sovereignty. They often join ‘sovereignty’ or ‘strategic autonomy’ to a term that stands for a critical asset: data sovereignty, digital sovereignty,
technological sovereignty, strategic autonomy in defence and military, financial stra-
tegic autonomy, and so on.
I define strategic autonomy as “the ability, in terms of capacity and capabilities,
to decide and act upon essential aspects of one’s longer-term future in the economy,
society and their institutions” (Timmers 2019a). Contrary to the past, when strategic
autonomy was a term used mostly by France in the military and defence domain and
by India to emphasize its foreign policy independence, strategic autonomy nowadays
concerns much of economy and society, as well as democracy (think of fake news
during elections).
States generally follow three approaches to deal with the challenge of strategic
autonomy in the digital age (see Fig. 1). These are: (1) risk management, i.e. keeping the risks to sovereignty as manageable as possible, which emphasises (cyber-)resilience; (2) strategic partnerships of like-minded states, possibly including private actors, to retain control over the most critical technologies and systems; and (3) promoting global common goods, i.e. developing and protecting certain critical digital assets as a matter of common global interest. A state can pursue one or several of
these approaches at the same time.
A fourth approach, i.e. going it completely alone, is at most feasible for the US or the
People’s Republic of China. This approach appears to be becoming increasingly popular
in these countries despite dire consequences for global trade, as it is inefficient and
requires decoupling of globally interwoven supply chains.
Let’s analyse each of the three approaches from the perspective of sovereignty
being at stake, focusing on the ethical aspects of the use of AI.
A risk management approach seeks to strengthen each of the steps “identify, protect,
detect, defend, recover” in relation to risks, notably of critical infrastructures such
as electricity, water, health, cloud services, etc. The approach involves large-scale
sensing/monitoring of complex assets; big data-based threat detection and analy-
sis; real-time response interpreting business, legal, and ethical rules; and managed
infrastructure recovery.
In each of these, AI is considered an essential aid and is already becoming big
business. Only with AI is it possible to quickly sift through billions of sensor data
points so that the responsible CERT1 can focus on a handful of noteworthy situa-
tions only. The New York Stock Exchange reportedly is attacked half a trillion times
a day, with 30–40 attacks of consequence.2 Providers of AI-based cyber-resilience
solutions are already multi-billion-dollar companies.
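To make the "sift billions of data points down to a handful" idea concrete, here is a minimal, illustrative Python sketch of statistical anomaly flagging over synthetic sensor event counts. It is a toy stand-in for the far richer machine-learning models that commercial cyber-resilience products use; the sensor indices, counts, and spike sizes are invented for the example.

```python
import random
import statistics

def flag_anomalies(event_counts, threshold=8.0):
    """Return indices of sensors whose event counts deviate strongly
    from the population baseline (simple z-score rule)."""
    mean = statistics.fmean(event_counts)
    stdev = statistics.pstdev(event_counts)
    if stdev == 0:
        return []
    return [i for i, count in enumerate(event_counts)
            if abs(count - mean) / stdev > threshold]

# Synthetic telemetry: 100,000 sensors with Gaussian background noise,
# plus three hypothetical compromised sensors producing large spikes.
random.seed(42)
counts = [random.gauss(1000, 50) for _ in range(100_000)]
for i in (17, 4242, 99_000):
    counts[i] += 5_000

suspicious = flag_anomalies(counts)
print(sorted(suspicious))
```

The point is the funnel shape rather than the specific rule: cheap automated scoring runs over the full event stream, so that scarce human CERT attention is spent only on the few outliers that survive the filter.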
What are the ethical challenges in cybersecurity risk management, notably when
making use of AI? Extensive monitoring and pervasive risk-prevention with the help
of AI can be highly intrusive and coercive for people, whether employees or citi-
zens. AI can also be so powerful that people feel that their sense of being in control
is taken away. They may get a false sense of security too. Deep-learning AI is, as of
today, not transparent in how it reaches a decision from so many data points, yet an
operator may blindly trust that decision. AI can also invite free-riding, as it is
tempting to offload responsibility onto ‘the system’.
Risk management is also an approach that accepts a residual risk. Financially this
may be offset by cyber insurance, but a political and sovereignty question is how
many lost lives are acceptable before the internal legitimacy of the state, and thereby
sovereignty, is really at risk (the 2017 WannaCry attack that affected many UK hospitals
may have led to the loss of lives). This political question becomes even more sensitive
when it is an AI system that autonomously invokes a cyber-defensive strategy,
such as shutting down part of the electricity grid, which implies a choice about
which people to put at risk.
Technical experts also argue that systems are so complex that they can never be
fully protected. The fear is that risk management may not detect the presence of
a ‘kill switch’ in a system, which could be activated in an international conflict or by
accident and shut down a critical infrastructure such as telecommunications (such
arguments have been put forward in the 5G/Huawei debate). Alternatively, the fear
is of systematic below-the-radar leakage of intellectual property, eroding
long-term national competitiveness. The role of malicious AI would be to keep such
a kill switch or systematic leakage hidden.
1 CERT = Computer Emergency Response Team, also called CSIRT: Computer Security Incident Response Team.
2 Hacking Our Security: Digital Resilience for the Next Cyber Threat, interview with Ray Rothrock (RedSeal), Nov 20, 2018, https://www.computerhistory.org/atchm/hacking-our-security-digital-resilience-for-the-next-cyber-threat/.
Let’s recall that the strategic partnership approach means working with sufficiently trusted
partners only, and in areas that are the most critical; traditionally, that would mostly
comprise military systems. Today, however, as strategic autonomy concerns much
of economy, society and democracy, the questions are: what is so critical in eco-
nomic, societal and democratic systems that it should be developed with or supplied
by trusted partners only and who are these trusted partners? Recently, Germany pro-
posed a Europeans-only cloud, GAIA-X. Former French Minister Gérard Collomb
talked of Franco-European strategic autonomy.
In strategic partnership thinking, AI and cybersecurity take three forms: (1) AI
as a component for the security and safety of critical infrastructures—think of
telecoms, smart grids, industry 4.0, or democratic and judicial processes; (2) securing
the AI that enables smart critical facilities, such as preventing the hacking of algorithms
that control self-driving cars; and (3) weaponized AI, that is, AI in cyber- or
cyber-kinetic weapons.
Strategic partnerships are with like-minded parties. Such ‘like-mindedness’
extends to ethics in relation to these first two forms of AI and cybersecurity. Recently
the European Commission’s high-level group on AI and ethics put forward AI and
ethics guidelines (European Commission 2019). Adherence to such guidelines will
become part of the political debate on strategic partnerships. This is the kind of dis-
cussion that is familiar from personal data protection and the related EU law, the
General Data Protection Regulation (GDPR). Where Europeans stress personal data
protection as a human right by law (the GDPR is based on the corresponding Article
16 of the Treaty on the Functioning of the EU), other states consider the GDPR a
tool to erect a trade barrier and accuse the EU of using the GDPR for strategic trade
geopolitics.
Likewise, the EU must anticipate that its AI and ethics guidelines and possible
future legislation on AI will not be seen by everyone as an expression of human
rights but rather as a tool of trade politics, as strategic use of ethics. Indeed, we
need a debate on perception and reality of ‘strategic ethics’, even if that may be
controversial.
Pursuing a strategic partnership approach to strategic autonomy is clearly a
highly political matter. Actors must also be able to steer the direction of partnerships
and find common ground: like-mindedness. In doing so, they must be able to embed or
adapt their own values, in this case regarding ethics and AI (Taddeo and Floridi 2018),
and thereby accept a degree of shared or pooled sovereignty.
AI and cybersecurity for (potentially) offensive purposes range from singular
kill switches (in the past also called ‘logic bombs’) to AI-based cyber-attack or
counter-attack software, such as for cyber-deterrence (Taddeo 2018). Such cyber-AI
can be combined with physical, kinetic weapons. The spectrum of weaponized AI
includes Lethal Autonomous Weapon Systems (LAWS). Many of the ethical issues related
to smart weapons are discussed by Brundage et al. (2018).
In conclusion, the focus in the second approach, strategic partnerships, is on the
one hand strategic use of ethics or ‘strategic ethics’, and on the other hand the ethics
of AI-enabled cyber- and cyber-kinetic weapons. While there is much attention to
the latter, the former needs a more serious debate to determine the value and viabil-
ity of a strategic partnership relative to a risk management or global common good
approach.
UN Charter. The 2015 report also advised exploring practical work in the future,
i.e. confidence-building measures (CBMs), such as developing common understandings
of how international law applies to an open, secure, stable, accessible and peaceful
ICT environment, and, conversely, of what the concepts of international peace and
security mean in the ICT domain at the technical, legal and policy levels.
Clearly, the global common good approach to strategic autonomy, including for
AI and cybersecurity, deserves much more attention. It would be in the well-considered
self-interest of states, for their sovereignty, and in the interest of global business;
it has a long tradition; and internationally the UN could give political support
and work with the private sector, the internet community and civil society.
5 Policy Recommendations
Several of the policy recommendations that can be derived from the preceding analysis
are not limited to AI. Mutatis mutandis, the analysis is also applicable to other
digital use cases and even to basic infrastructures such as electronic identification
and authentication.
6 Conclusions and Perspectives
This opens the debate about the primacy, or cost, of state sovereignty, for example
relative to human rights, which is an important element in the polarization on
cybersecurity in the UN. Cost of legitimacy is then the notion underpinning the link
between cyber and ethics in relation to sovereignty. Cyber raises that cost. The question
is what the acceptable cost of maintaining state sovereignty is, i.e. what justifies
plugging the sovereignty gap. This cost can include damage to people’s lives, such as
not getting urgent healthcare (cf. WannaCry) or suppression of freedom of expression
(cf. Uighur surveillance in China).
This gives us three conceptual links related to sovereignty (see Fig. 3): state and
non-state actors are intelligent; the sovereignty gap has a cost; code conditions law
and law conditions code. The focus of the debate then becomes (internal and external)
state legitimacy, which is a well-known notion in the political theory of sovereignty.
State legitimacy is contestable by intelligent actors. Maintaining state legitimacy has
a cost. State legitimacy is imposed on technology, while technology also conditions
state legitimacy. Given the challenges of AI and cybersecurity, further reflection
on ethics and state legitimacy may therefore be a fruitful area of research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 Interna-
tional License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution,
and reproduction in any medium, provided you give appropriate credit to the original author(s) and the
source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Barlow, J.P. (1996). A declaration of the independence of cyberspace. Retrieved September 18, 2019,
from https://www.eff.org/cyberspace-independence.
Biersteker, T. (2012). State, sovereignty and territory. In W. Carlsnaes, et al. (Eds.), Handbook of interna-
tional relations. Thousand Oaks: SAGE Publications Ltd.
Broeders, D. (2017). Aligning the international protection of ‘the public core of the internet’ with state sovereignty and national security. Journal of Cyber Policy, 2(3), 366–376. https://doi.org/10.1080/23738871.2017.1403640.
Brundage, M., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and
mitigation. Retrieved September 18, 2019, from https://maliciousaireport.com/.
Cowhey, P., & Aronson, J. (2017). Digital DNA. Oxford: Oxford University Press.
Drew, A., & Parton, C. (2019). Committing to Huawei for 5G risks establishing a dependency. Financial
Times, Retrieved September 12, 2019.
European Commission. (2016). Regulation setting up a Union regime for the control of exports, transfer,
brokering, technical assistance and transit of dual-use items (recast).
European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved September 18, 2019, from
https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines.
Green, B. (2009). Lessons from the montreal protocol: Guidance for the next international climate change
agreement. Environmental Law, 39(1), 253–283.
Heinl, C. (2019). CBMs: How to build trust and confidence in cyberspace? Lessons and good practices—Cyber Direct Training. European Institute of Security Studies/EU Cyber Direct, to be published.
Hurrell, A., & Macdonald, T. (2012). Ethics and norms in international relations. In W. Carlsnaes, et al.
(Eds.), Handbook of international relations. Thousand Oaks: SAGE Publications Ltd.
Kello, L. (2017). The virtual weapon and international order. New Haven: Yale University Press.
Lessig, L. (2000). Code is law. Harvard Magazine, 1 Jan 2000. Retrieved September 18, 2019, from https://www.harvardmagazine.com/2000/01/code-is-law-html.
Lessig, L. (2006). Code: And other laws of cyberspace (2nd ed.). New York: Basic Books.
Maurer, T. (2018). Cyber mercenaries: The state, hackers, and power. Cambridge: Cambridge University
Press.
Taddeo, M. (2018). Deterrence and norms to foster stability in cyberspace. Philosophy & Technology, 31,
323. https://doi.org/10.1007/s13347-018-0328-0.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Timmers, P. (2019a). Strategic autonomy and cybersecurity. European Institute of Security Studies. Retrieved September 18, 2019, from https://eucyberdirect.eu/content_research/strategic-autonomy-and-cybersecurity/.
Timmers, P. (2019b). Cybersecurity—Cyber direct training. European Institute of Security Studies/EU
Cyber Direct, to be published.
UNIDIR. (2017). The weaponization of increasingly autonomous technologies: Autonomous weapon sys-
tems and cyber operations. UNIDIR Resources, No. 7.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
Minds & Machines is a copyright of Springer, 2019. All Rights Reserved.