UNIT II ETHICAL INITIATIVES IN AI

INTERNATIONAL ETHICAL INITIATIVES:

International laws can facilitate collaboration, prevent conflicts, and ensure that ethical principles are upheld in AI applications. They are also important for establishing common standards for data protection, privacy, accountability, and transparency, and for providing a shared overview of AI compliance.

The Artificial Intelligence Ethics Policy sets out five overarching principles
that are designed to ensure best practice use of AI, focusing on trust,
transparency, customer benefit, fairness, privacy and accountability.

UNESCO's Recommendation on the Ethics of Artificial Intelligence is the first global framework for the ethical use of artificial intelligence. It guides countries on how to maximize the benefits of AI and reduce the risks it entails. To this end, it contains values and principles, but also detailed policy recommendations in all relevant areas.

The Montreal Declaration for Responsible AI:

This initiative was launched in 2017 at the International Conference on Responsible AI in Montreal. It calls for the development of AI systems that prioritize ethical considerations, respect human rights, and promote diversity and inclusion.

The Partnership on AI:

Founded in 2016, the Partnership on AI is a multi-stakeholder initiative that brings together industry leaders, academics, and non-profit organizations to address the ethical, social, and economic impacts of AI. It focuses on areas such as fairness and transparency, accountability, and the societal impacts of AI technologies.

The United Nations (UN) AI for Good Global Summit:

The UN organizes the AI for Good Global Summit, which brings together
policymakers, technologists, and experts from various sectors to explore the
potential of AI for achieving the Sustainable Development Goals (SDGs) while
addressing ethical and societal challenges.

The OECD Principles on Artificial Intelligence:

The Organization for Economic Co-operation and Development (OECD) has developed principles for AI that emphasize human-centered values, transparency, and accountability. These principles serve as a framework for promoting trustworthy AI globally.

ETHICAL HARMS AND CONCERNS:

1. Privacy Violations: With the proliferation of technology and data collection, there are increasing concerns about the misuse of personal information, surveillance, and breaches of privacy.

2. Algorithmic Bias: Algorithms often reflect the biases of their developers or the data they are trained on, leading to discriminatory outcomes in areas like hiring, lending, and criminal justice.

3. Artificial Intelligence and Automation: Concerns arise regarding the potential loss of jobs, the exacerbation of economic inequality, and the ethical implications of autonomous decision-making by AI systems.

4. Environmental Impact: Industries and technologies can have significant environmental consequences, such as pollution, resource depletion, and habitat destruction, raising questions about sustainability and responsibility.

5. Healthcare Ethics: Issues such as patient autonomy, informed consent, access to healthcare, and the equitable distribution of resources are central to ethical debates in healthcare.

6. Bioethics: Ethical dilemmas arise in areas such as genetic engineering, reproductive technologies, organ transplantation, and end-of-life care.

7. Corporate Responsibility: Companies face scrutiny over their practices related to labor rights, environmental sustainability, fair trade, and corporate governance.

8. Social Media and Online Behavior: Concerns include cyberbullying, misinformation, echo chambers, and the impact of social media on mental health and well-being.

9. Ethical Use of Emerging Technologies: Technologies like blockchain, virtual reality, and biometrics raise new ethical questions regarding ownership, security, consent, and potential misuse.

10. Global Justice and Human Rights: Issues such as poverty, inequality, migration, and geopolitical conflicts raise complex ethical questions about responsibility, solidarity, and the protection of human rights.

i. Algorithmic Bias:

1. Types of Bias:

i. Representation Bias: This occurs when the training data used to develop an algorithm does not adequately represent the diversity of the population it is meant to serve. As a result, certain groups may be underrepresented or misrepresented, leading to biased outcomes.

ii. Measurement Bias: This arises when the variables used to train an algorithm are themselves biased or flawed, leading to skewed results. For example, using income as a proxy for creditworthiness can perpetuate socioeconomic biases.

iii. Aggregation Bias: When decisions are made based on aggregated data, individual differences may be obscured, leading to biased outcomes for certain groups.

2. Ethical Implications:

i. Justice and Fairness: Algorithmic bias can lead to unjust outcomes, violating principles of fairness and equality.

ii. Harm and Discrimination: Biased algorithms can perpetuate or exacerbate existing societal inequalities, leading to discrimination and harm against marginalized groups.

iii. Transparency and Accountability: Lack of transparency in algorithmic decision-making can make it difficult to identify and address bias, undermining accountability and trust in AI systems.

3. Mitigation Strategies:

i. Diverse and Representative Data: Ensuring that training data is diverse and representative of the population to mitigate representation bias.

ii. Bias Detection and Correction: Implementing techniques to detect and mitigate bias in algorithms, such as fairness-aware machine learning (a short sketch follows this list).

iii. Transparency and Explainability: Making AI systems more transparent and explainable to stakeholders to enable scrutiny and accountability.
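As a brief illustration of what bias detection might look like in practice, the sketch below computes one simple group-fairness metric (the gap in positive-prediction rates between groups) on a toy set of hiring recommendations. The data, column names, and interpretation threshold are hypothetical assumptions for this example; real fairness audits typically combine several metrics and dedicated tooling.

# Minimal sketch: measuring a simple group-fairness gap on model predictions.
# All data and column names here are hypothetical illustrations.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Difference between the highest and lowest positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical hiring predictions: 1 = recommended for interview, 0 = rejected.
predictions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "recommended": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(predictions, "group", "recommended")
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 here; a large gap flags possible bias

A large gap would prompt closer inspection of the training data and model, in line with the mitigation strategies above.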
ii. Corporate Responsibility:

1. Ethical AI Development: Companies have a responsibility to develop AI systems that prioritize ethical considerations, including fairness, transparency, accountability, and privacy. This involves integrating ethical principles into the design and development process, such as conducting ethical impact assessments and incorporating diverse perspectives.

2. Transparency and Accountability: Companies should strive for transparency in their AI systems, providing clear explanations of how AI algorithms make decisions and ensuring accountability for the outcomes. This includes disclosing the data sources used, the algorithms employed, and any potential biases or limitations of the system.

3. Fairness and Bias Mitigation: Companies must actively work to identify and mitigate biases in their AI systems to ensure fair and equitable outcomes for all users. This involves implementing techniques such as bias detection, fairness-aware machine learning, and diverse and representative training data sets.

4. Privacy Protection: Companies must uphold privacy rights and protect user data when developing and deploying AI systems. This includes obtaining informed consent for data collection and usage, implementing robust data security measures, and adhering to relevant privacy regulations and standards.

5. Responsible Deployment: Companies should consider the potential societal impacts of their AI technologies and take steps to minimize any negative consequences. This may involve conducting risk assessments, considering the broader ethical implications of AI deployment, and proactively addressing potential harms or unintended consequences.

iii. Bioethics:
1. Data Privacy and Confidentiality: In healthcare and biomedical research,
AI often relies on vast amounts of sensitive personal data. Bioethics
principles emphasize the importance of protecting patient privacy, ensuring
confidentiality, and obtaining informed consent for the collection, use, and
sharing of data in AI applications.

2. Informed Consent and Autonomy: Individuals should have the right to make informed decisions about their participation in AI-driven healthcare interventions, clinical trials, or research studies. This includes providing clear information about the purpose, risks, benefits, and alternatives of AI technologies and respecting patients' autonomy and preferences.

3. Clinical Validation and Safety: AI-driven healthcare technologies should undergo rigorous testing, validation, and regulatory approval processes to ensure their safety, effectiveness, and reliability in clinical practice. Bioethics principles emphasize the importance of evidence-based medicine, adherence to ethical standards, and continuous monitoring of AI systems to mitigate potential risks and harms to patients.

4. Bias and Fairness: AI algorithms used in healthcare may be susceptible to bias, leading to disparities in diagnosis, treatment recommendations, and patient outcomes. Bioethics principles emphasize the need to address biases in AI algorithms, ensure fairness and equity in healthcare decision-making, and mitigate the potential impact of bias on vulnerable populations.

5. Professional Integrity and Responsibility: Healthcare professionals and researchers have a professional responsibility to uphold ethical standards, maintain patient trust, and prioritize patient well-being when using AI technologies in clinical practice or research settings. This includes adhering to professional codes of conduct, ethical guidelines, and legal regulations governing the use of AI in healthcare.
Case studies
1) Case study: healthcare robots
Artificial Intelligence and robotics are rapidly moving into the field of healthcare and will increasingly
play roles in diagnosis and clinical treatment. For example, currently, or in the near future, robots will help
in the diagnosis of patients; the performance of simple surgeries; and the monitoring of patients' health and
mental wellness in short and long-term care facilities. They may also provide basic physical interventions, work
as companion carers, remind patients to take their medications, or help patients with their mobility. In some
fundamental areas of medicine, such as medical image diagnostics, machine learning has been proven to
match or even surpass our ability to detect illnesses.

Embodied AI, or robots, are already involved in a number of functions that affect people's physical safety. In
June 2005, a surgical robot at a hospital in Philadelphia malfunctioned during prostate surgery, injuring the
patient. In June 2015, a worker at a Volkswagen plant in Germany was crushed to death by a robot on the
production line. In June 2016, a Tesla car operating in autopilot mode collided with a large truck, killing the
car's passenger (Yadron and Tynan, 2016).

As robots become more prevalent, the potential for future harm will increase, particularly in the case of driverless
cars, assistive robots and drones, which will face decisions that have real consequences for human safety and
well-being. The stakes are much higher with embodied AI than with mere software, as robots have moving
parts in physical space (Lin et al., 2017). Any robot with moving physical parts poses a risk, especially to
vulnerable people such as children and the elderly.
Safety
Again, perhaps the most important ethical issue arising from the growth of AI and robotics in
healthcare is that of safety and avoidance of harm. It is vital that robots should not harm people, and that they
should be safe to work with. This point is especially important in areas of healthcare that deal with vulnerable
people, such as the ill, elderly, and children.

Digital healthcare technologies offer the potential to improve accuracy of diagnosis and treatments, but thoroughly establishing a technology's long-term safety and performance requires investment in clinical trials. The debilitating side-effects of vaginal mesh implants and the continued legal battles against
manufacturers (The Washington Post, 2019), stand as an example against shortcutting testing, despite
the delays this introduces to innovating healthcare. Investment in clinical trials will be essential to safely
implement the healthcare innovations that AI systems offer.
User understanding
The correct application of AI by a healthcare professional is important to ensure patient safety. For instance, the
precise surgical robotic assistant 'the da Vinci' has proven a useful tool in minimising surgical recovery times, but
requires a trained operator (The Conversation, 2018).

A shift in the balance of skills in the medical workforce is required, and healthcare providers are preparing
to develop the digital literacy of their staff over the next two decades (NHS' Topol Review, 2019). With
genomics and machine learning becoming embedded in diagnoses and medical decision-making,
healthcare professionals need to become digitally literate to understand each technological tool and use it
appropriately. It is important for users to trust the AI presented but to be aware of each tool's strengths and
weaknesses, recognising when validation is necessary. For instance, a generally accurate machine learning
study to predict the risk of complications in patients with pneumonia erroneously considered those with
asthma to be at low risk. It reached this conclusion because asthmatic pneumonia patients were taken
directly to intensive care, and this higher-level care circumvented complications. The inaccurate
recommendation from the algorithm was thus overruled (Pulmonology Advisor, 2017).

However, it's questionable to what extent individuals need to understand how an AI system arrived at a
certain prediction in order to make autonomous and informed decisions. Even if an in-depth understanding
of the mathematics is made obligatory, the complexity and learned nature of machine learning algorithms
often prevent the ability to understand how a conclusion has been made from a dataset — a so-called 'black
box' (Schönberger, 2019). In such cases, one possible route
to ensure safety would be to license AI for specific medical procedures, and to 'disbar' the AI if a certain
number of mistakes are made (Hart, 2018).
Data protection
Personal medical data needed for healthcare algorithms may be at risk. For instance, there are worries that
data gathered by fitness trackers might be sold to third parties, such as insurance companies, who could use
those data to refuse healthcare coverage (National Public Radio, 2018). Hackers are another major concern, as
providing adequate security for systems accessed by a range of medical personnel is problematic (Forbes, 2018).

Pooling personal medical data is critical for machine learning algorithms to advance healthcare
interventions, but gaps in information governance form a barrier against responsible and ethical data
sharing. Clear frameworks for how healthcare staff and researchers use data, such as genomics, in a way that safeguards patient confidentiality are necessary to establish public trust and enable advances in healthcare algorithms (NHS' Topol Review, 2019).
Legal responsibility
Although AI promises to reduce the number of medical mishaps, when issues occur, legal liability must be
established. If equipment can be proven to be faulty then the manufacturer is liable, but it is often tricky to
establish what went wrong during a procedure and whether anyone, medical personnel or machine, is to
blame. For instance, there have been lawsuits against the da Vinci surgical assistant (Mercury News, 2017),
but the robot continues to be widely accepted (The Conversation, 2018).

In the case of 'black box' algorithms where it is impossible to ascertain how a conclusion is reached, it is
tricky to establish negligence on the part of the algorithm's producer (Hart, 2018).

For now, AI is used as an aide for expert decisions, and so experts remain the liable party in most cases. For
instance, in the aforementioned pneumonia case, if the medical staff had relied solely on the AI and sent
asthmatic pneumonia patients home without applying their specialist knowledge, then that would be a
negligent act on their part (Pulmonology Advisor, 2017; International Journal of Law and Information
Technology, 2019).

Soon, the omission of AI could be considered negligence. For instance, in less developed countries with a
shortage of medical professionals, withholding AI that detects diabetic eye disease and so prevents
blindness, because of a lack of ophthalmologists to sign off on a diagnosis, could be considered
unethical (The Guardian, 2019; International Journal of Law and Information Technology, 2019).
Bias
Non-discrimination is one of the fundamental values of the EU (see Article 21 of the EU Charter of
Fundamental Rights), but machine learning algorithms are trained on datasets that often have proportionally
less data available about minorities, and as such can be biased (Medium, 2014). This can mean that algorithms
trained to diagnose conditions are less likely to be accurate for ethnic minority patients; for instance, in the dataset
used to train a model for detecting skin cancer, less than 5 percent of the images were from individuals with
dark skin, presenting a risk of misdiagnosis for people of colour (The Atlantic, 2018).

To ensure the most accurate diagnoses are presented to people of all ethnicities, algorithmic biases must be
identified and understood. Even with a clear understanding of model design this is a difficult task
because of the aforementioned 'black box' nature of machine learning. However, various codes of conduct
and initiatives have been introduced to spot biases earlier. For instance, The Partnership on AI, an ethics-focused industry group, was launched by Google, Facebook, Amazon, IBM and Microsoft (The Guardian, 2016) — although, worryingly, this board is not very diverse.
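As an illustration of how a representation audit might be run before training, the following sketch counts how many images in a hypothetical training set fall into each skin-tone category and flags under-represented groups. The category names, counts, and 10% threshold are invented for the example and are not taken from the study cited above.

# Minimal sketch: auditing how well each skin-tone category is represented
# in an image-training set before building a diagnostic model.
# The category labels, counts, and threshold are purely illustrative.
from collections import Counter

image_labels = (
    ["type_I_II"] * 900 +   # lighter skin tones
    ["type_III_IV"] * 70 +
    ["type_V_VI"] * 30      # darker skin tones
)

counts = Counter(image_labels)
total = sum(counts.values())
for category, n in counts.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.10 else ""
    print(f"{category}: {n} images ({share:.1%}){flag}")

An audit of this kind only surfaces imbalance; correcting it requires collecting additional data or reweighting, as discussed above.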
Equality of access
Digital health technologies, such as fitness trackers and insulin pumps, provide patients with the opportunity
to actively participate in their own healthcare. Some hope that these technologies will help to redress health
inequalities caused by poor education, unemployment, and so on. However, there is a risk that individuals who
cannot afford the necessary technologies or do not have the required 'digital literacy' will be excluded, so
reinforcing existing health inequalities (The Guardian, 2019).

The UK's National Health Service's Widening Digital Participation programme is one example of how a
healthcare service has tried to reduce health inequalities, by helping millions of people in the UK who lack the
skills to access digital health services. Programmes such as this will be critical in ensuring equality of
access to healthcare, but also in increasing the data from minority groups needed to prevent the biases in
healthcare algorithms discussed above.
Quality of care
'There is remarkable potential for digital healthcare technologies to improve accuracy of diagnoses
and treatments, the efficiency of care, and workflow for healthcare professionals' (NHS' Topol Review, 2019).

If introduced with careful thought and guidelines, companion and care robots, for example, could improve
the lives of the elderly, reducing their dependence, and creating more opportunities for social interaction.
Imagine a home-care robot that could: remind you to take your medications; fetch items for you if you are too
tired or are already in bed; perform simple cleaning tasks; and help you stay in contact with your family, friends
and healthcare provider via video link.
However, questions have been raised over whether a 'cold', emotionless robot can really substitute for a
human's empathetic touch. This is particularly the case in long-term caring of vulnerable and often lonely
populations, who derive basic companionship from caregivers. Human interaction is particularly important
for older people, as research suggests that an extensive social network offers protection against dementia. At
present, robots are far from being real companions. Although they can interact with people, and even show
simulated emotions, their conversational ability is still extremely limited, and they are no replacement for
human love and attention. Some might go as far as saying that depriving the elderly of human contact is
unethical, and even a form of cruelty.

And does abandoning our elderly to cold machine care objectify (degrade) them, or human caregivers?
It's vital that robots don't make elderly people feel like objects, or as though they have even less control over their lives than when they were dependent on humans — otherwise they may feel like they are 'lumps of dead matter: to be
pushed, lifted, pumped or drained, without proper reference to the fact that they are sentient beings' (Kitwood
1997).

In principle, autonomy, dignity and self-determination can all be thoroughly respected by a machine
application, but it's unclear whether applying them in the sensitive field of medicine will be deemed
acceptable. For instance, a doctor used a telepresence device to give a prognosis of death to a Californian
patient; unsurprisingly the patient's family were outraged by this impersonal approach to healthcare (The
Independent, 2019). On the other hand, it's argued that new technologies, such as health monitoring apps,
will free up staff time for more direct interactions with patients, and so potentially increase the overall quality
of care (The Guardian, Press Association, Monday 11 February 2019).
Deception
A number of 'carebots' are designed for social interactions and are often touted to provide an emotional
therapeutic role. For instance, care homes have found that a robotic seal pup's animal-like interactions with
residents brightens their mood, decreases anxiety and actually increases the sociability of residents with their
human caregivers. However, the line between reality and imagination is blurred for dementia patients, so is
it dishonest to introduce a robot as a pet and encourage a social-emotional involvement? (KALW, 2015) And if so, is it morally justifiable?

Companion robots and robotic pets could alleviate loneliness amongst older people, but this would require them to believe, in some way, that a robot is a sentient being who cares about them and has feelings — a
fundamental deception. Turkle et al. (2006) argue that 'the fact that our parents, grandparents and children
might say 'I love you' to a robot who will say 'I love you' in return, does not feel completely comfortable; it
raises questions about the kind of authenticity we require of our technology'. Wallach and Allen (2009) agree
that robots designed to detect human social gestures and respond in kind all use techniques that are
arguably forms of deception. For an individual to benefit from owning a robot pet, they must continually
delude themselves about the real nature of their relation with the animal. What's more, encouraging elderly
people to interact with robot toys has the effect of infantilising them.
Autonomy
It's important that healthcare robots actually benefit the patients themselves, and are not just designed to
reduce the care burden on the rest of society — especially in the case of care and companion AI. Robots
could empower disabled and older people and increase their independence; in fact, given the choice, some
might prefer robotic over human assistance for certain intimate tasks such as toileting or bathing. Robots could
be used to help elderly people live in their own homes for longer, giving them greater freedom and
autonomy. However, how much control, or autonomy, should a person be allowed if their mental capability
is in question? If a patient asked a robot to throw them off the balcony, should the robot carry out that
command?
Liberty and privacy
As with many areas of AI technology, the privacy and dignity of users need to be carefully
considered when designing healthcare service and companion robots. Working in people's homes means
that robots will be privy to private moments such as bathing and dressing; if these moments are recorded,
who should have access to the information, and how long should recordings be kept? The issue becomes
more complicated if an elderly person's mental state deteriorates and they become confused — someone
with Alzheimer's could forget that a robot was monitoring them, and could perform acts or say things
thinking that they are in the privacy of their own home. Home-care robots need to be able to balance their
user's privacy and nursing needs, for example by knocking and awaiting an invitation before entering a
patient's room, except in a medical emergency.

To ensure their charge's safety, robots might sometimes need to act as supervisors, restricting their freedoms. For
example, a robot could be trained to intervene if the cooker was left on, or the bath was overflowing. Robots
might even need to restrain elderly people from carrying out potentially dangerous actions, such as
climbing up on a chair to get something from a cupboard. Smart homes with sensors could be used to detect
that a person is attempting to leave their room, and lock the door, or call staff — but in so doing the elderly
person would be imprisoned.
Moral agency
'There's very exciting work where the brain can be used to control things, like maybe they've lost the use of an arm…
where I think the real concerns lie is with things like behavioural targeting: going straight to the hippocampus and
people pressing 'consent', like we do now, for data access'. (John Havens)

Robots do not have the capacity for ethical reflection or a moral basis for decision-making, and thus humans
must currently hold ultimate control over any decision-making. An example of ethical reasoning in a robot
can be found in the 2004 dystopian film 'I, Robot', where Will Smith's character disagreed with how the
robots of the fictional time used cold logic to save his life over that of a child. If more automated
healthcare is pursued, then the question of moral agency will require closer attention. Ethical reasoning is
being built into robots, but moral responsibility is about more than the application of ethics — and it is unclear
whether robots of the future will be able to handle the complex moral issues in healthcare (Goldhill, 2016).
Trust
Larosa and Danks (2018) write that AI may affect human-human interactions and relationships within the
healthcare domain, particularly that between patient and doctor, and potentially disrupt the trust we place in
our doctor.

'Psychology research shows people mistrust those who make moral decisions by calculating costs and
benefits — like computers do' (The Guardian, 2017). Our distrust of robots may also come from the number
of robots running amok in dystopian science fiction. News stories of computer mistakes
— for instance, of an image-identifying algorithm mistaking a turtle for a gun (The Verge, 2017) —
alongside worries over the unknown, privacy and safety are all reasons for resistance against the uptake of
AI (Global News Canada, 2016).

Firstly, doctors are explicitly certified and licensed to practice medicine, and their license indicates that they
have specific skills, knowledge, and values such as 'do no harm'. If a robot replaces a doctor for a particular
treatment or diagnostic task, this could potentially threaten patient-doctor trust, as the patient now needs to
know whether the system is appropriately approved or 'licensed' for the functions it performs.

Secondly, patients trust doctors because they view them as paragons of expertise. If doctors were seen as 'mere
users' of the AI, we would expect their role to be downgraded in the public's eye, undermining trust.

Thirdly, a patient's experiences with their doctor are a significant driver of trust. If a patient has an open line of
communication with their doctor, and engages in conversation about care and treatment, then the
patient will trust the doctor. Inversely, if the doctor repeatedly ignores the patient's wishes, then these
actions will have a negative impact on trust. Introducing AI into this dynamic could increase trust — if the
AI reduced the likelihood of misdiagnosis, for example, or improved patient care. However, AI could also
decrease trust if the doctor delegated too much diagnostic or decision-making authority to the AI,
undercutting the position of the doctor as an authority on medical matters.

As the body of evidence grows to support the therapeutic benefits for each technological approach, and as
more robotic interacting systems enter the marketplace, then trust in robots is likely to increase. This has
already happened for robotic healthcare systems such as the da Vinci surgical robotic assistant (The
Guardian, 2014).
Employment replacement
As in other industries, there is a fear that emerging technologies may threaten employment (The Guardian, 2017); for instance, carebots are now available that can perform up to a third of nurses' work (Tech Times, 2018). Despite these fears, the NHS' Topol Review (2019) concluded that 'these technologies will not
replace healthcare professionals but will enhance them ('augment them'), giving them more time to care for
patients'. The review also outlined how the UK's NHS will nurture a learning environment to ensure digitally
capable employees.
2) Case study: Autonomous Vehicles
Autonomous Vehicles (AVs) are vehicles that are capable of sensing their environment and operating
with little to no input from a human driver. While the idea of self-driving cars has been around since at least
the 1920s, it is only in recent years that technology has developed to a point where AVs are appearing on
public roads.

According to automotive standardisation body SAE International (2018), there are six levels of driving
automation:

Level 0 (No automation): An automated system may issue warnings and/or momentarily intervene in driving, but has no sustained vehicle control.

Level 1 (Hands on): The driver and automated system share control of the vehicle. For example, the automated system may control engine power to maintain a set speed (e.g. Cruise Control), engine and brake power to maintain and vary speed (e.g. Adaptive Cruise Control), or steering (e.g. Parking Assistance). The driver must be ready to retake full control at any time.

Level 2 (Hands off): The automated system takes full control of the vehicle (including accelerating, braking, and steering). However, the driver must monitor the driving and be prepared to intervene immediately at any time.

Level 3 (Eyes off): The driver can safely turn their attention away from the driving tasks (e.g. to text or watch a film), as the vehicle will handle any situations that call for an immediate response. However, the driver must still be prepared to intervene, if called upon by the AV to do so, within a timeframe specified by the AV manufacturer.

Level 4 (Minds off): As level 3, but no driver attention is ever required for safety, meaning the driver can safely go to sleep or leave the driver's seat.

Level 5 (Steering wheel optional): No human intervention is required at all. An example of a level 5 AV would be a robotic taxi.

Some of the lower levels of automation are already well-established and on the market, while higher level AVs
are undergoing development and testing. However, as we transition up the levels and put more
responsibility on the automated system than the human driver, a number of ethical issues emerge.
Societal and Ethical Impacts of AVs
'We cannot build these tools saying, 'we know that humans act a certain way, we're going to kill them – here's
what to do'.' (John Havens)

Public safety and the ethics of testing on public roads

At present, cars with 'assisted driving' functions are legal in most countries. Notably, some Tesla models
have an Autopilot function, which provides level 2 automation (Tesla, nd). Drivers are legally allowed to use
assisted driving functions on public roads provided they remain in charge of the vehicle at all times.
However, many of these assisted driving functions have not yet been subject to independent safety certification,
and as such may pose a risk to drivers and other road users. In Germany, a report published by the Ethics
Commission on Automated Driving highlights that it is the public sector's responsibility to guarantee the
safety of AV systems introduced and licensed on public roads, and recommends that all AV driving
systems be subject to official licensing and monitoring (Ethics Commission, 2017).

In addition, it has been suggested that the AV industry is entering its most dangerous phase, with cars being
not yet fully autonomous but human operators not being fully engaged (Solon, 2018). The risks this poses
have been brought to widespread attention following the first pedestrian fatality involving an autonomous car.
The tragedy took place in Arizona, USA, in March 2018, when a level 3 AV being tested by Uber collided with 49-year-old Elaine Herzberg as she was walking her bike across a street one night. Prosecutors determined that Uber was 'not criminally liable' (Shepherdson and Somerville, 2019), and the US National Transportation Safety Board's preliminary report (NTSB, 2018), which drew no conclusions about the cause, said that all elements of the self-driving system were operating normally at the time of the crash. Uber said that the driver is relied upon to intervene and take action in situations requiring emergency braking – leading some commentators to call out the misleading communication to consumers around the terms 'self-driving
cars' and 'autopilot' (Leggett, 2018). The accident also caused some to condemn the practice of testing AV
systems on public roads as dangerous and unethical, and led Uber to temporarily suspend its self-driving
programme (Bradshaw, 2018).

This issue of human safety — of both public and passenger — is emerging as a key issue concerning self-
driving cars. Major companies — Nissan, Toyota, Tesla, Uber, Volkswagen — are developing autonomous
vehicles capable of operating in complex, unpredictable environments without direct human control, and
capable of learning, inferring, planning and making decisions.

Self-driving vehicles could offer multiple benefits: statistics show you're almost certainly safer in a car driven by
a computer than one driven by a human. They could also ease congestion in cities, reduce pollution, reduce
travel and commute times, and enable people to use their time more productively. However, they won't mean
the end of road traffic accidents. Even if a self-driving car has the best software and hardware available, there is
still a collision risk. An autonomous car could be surprised, say by a child emerging from behind a parked
vehicle, and there is always the issue of how: how should such cars be programmed when they must decide
whose safety to prioritise?

Driverless cars may also have to choose between the safety of passengers and other road users. Say that a
car travels around a corner where a group of school children are playing; there is not enough time to stop,
and the only way the car can avoid hitting the children is to swerve into a brick wall — endangering the
passenger. Whose safety should the car prioritise: the children's, or the passenger's?

Processes and technologies for accident investigation

AVs are complex systems that often rely on advanced machine learning technologies. Several serious
accidents have already occurred, including a number of fatalities involving level 2 AVs:

- In January 2016, 23-year-old Gao Yaning died when his Tesla Model S crashed into the back of a
road-sweeping truck on a highway in Hebei, China. The family believe Autopilot was engaged
when the accident occurred and accuse Tesla of exaggerating the system's capabilities. Tesla
state that the damage to the vehicle made it impossible to determine whether Autopilot was
engaged and, if so, whether it malfunctioned. A civil case into the crash is ongoing, with a third-
party appraiser reviewing data from the vehicle (Curtis, 2016).
- In May 2016, 40-year-old Joshua Brown died when his Tesla Model S collided with a truck while
Autopilot was engaged in Florida, USA. An initial investigation by the National Highway Traffic Safety Administration found that the driver, and not Tesla, was at fault (Gibbs, 2016). However, the National Transportation Safety Board later determined that both Autopilot and over-reliance by the motorist on Tesla's driving aids were to blame (Felton, 2017).

- In March 2018, Wei Huang was killed when his Tesla Model X crashed into a highway safety
barrier in California, USA. According to Tesla, the severity of the accident was
'unprecedented'. The National Transportation Safety Board later published a report attributing
the crash to an Autopilot navigation mistake. Tesla is now being sued by the victim's family
(O'Kane, 2018).

Unfortunately, efforts to investigate these accidents have been stymied by the fact that standards, processes, and
regulatory frameworks for investigating accidents involving AVs have not yet been developed or adopted.
In addition, the proprietary data logging systems currently installed in AVs mean that accident investigators
rely heavily on the cooperation of manufacturers to provide critical data on the events leading up to an accident
(Stilgoe and Winfield, 2018).

One solution is to fit all future AVs with industry standard event data recorders — a so-called 'ethical black box'
— that independent accident investigators could access. This would mirror the model already in place for air
accident investigations (Sample, 2017).
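The sketch below illustrates the kind of record such an 'ethical black box' might log around a safety-relevant event. The field names and values are assumptions chosen for illustration; no industry standard for these recorders has yet been agreed.

# Minimal sketch of an "ethical black box" event record for an AV.
# The field names are assumptions chosen for illustration, not an industry standard.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AVEventRecord:
    timestamp_utc: str
    vehicle_id: str
    automation_level: int     # SAE level active at the time
    speed_kmh: float
    control_mode: str         # e.g. "autonomous", "manual", "takeover_requested"
    sensor_summary: dict      # condensed perception output
    driver_input: str         # e.g. "none", "braking", "steering"

record = AVEventRecord(
    timestamp_utc=datetime.now(timezone.utc).isoformat(),
    vehicle_id="TEST-VEHICLE-001",
    automation_level=3,
    speed_kmh=48.0,
    control_mode="takeover_requested",
    sensor_summary={"pedestrian_detected": True, "confidence": 0.62},
    driver_input="none",
)

# Serialised records could be written to tamper-evident storage that
# independent accident investigators are able to access.
print(json.dumps(asdict(record), indent=2))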
Near-miss accidents
At present, there is no system in place for the systematic collection of near-miss accidents. While it is possible
that manufacturers are collecting this data already, they are not under any obligation to do so — or to share the
data. The only exception at the moment is the US state of California, which requires all companies that are
actively testing AVs on public roads to disclose the frequency at which human drivers were forced to take
control of the vehicle for safety reasons (known as 'disengagement').
In 2018, the number of disengagements by AV manufacturer varied significantly, from one
disengagement for every 11,017 miles driven by Waymo AVs to one for every 1.15 miles driven by Apple
AVs (Hawkins, 2019). Data on these disengagements reinforces the importance of ensuring that human
safety drivers remain engaged. However, the Californian data collection process has been criticised, with
some claiming its ambiguous wording and lack of strict guidelines enables companies to avoid reporting
certain events that could be termed near-misses.
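The disengagement statistics above are simple ratios of miles driven to reported disengagements. A minimal sketch of that calculation, using placeholder figures rather than the actual California DMV filings, is shown below.

# Minimal sketch: computing miles driven per disengagement from disclosed totals.
# The miles and disengagement counts below are hypothetical placeholders,
# not the figures actually filed with the California DMV.
reports = {
    "company_a": {"miles_driven": 1_200_000, "disengagements": 110},
    "company_b": {"miles_driven": 80_000, "disengagements": 70_000},
}

for company, data in reports.items():
    rate = data["miles_driven"] / data["disengagements"]
    print(f"{company}: one disengagement every {rate:,.2f} miles")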

Without access to this type of data, policymakers cannot account for the frequency and significance of near-miss
accidents, or assess the steps taken by manufacturers as a result of these near-misses. Again, lessons could be
learned from the model followed in air accident investigations, in which all near misses are thoroughly logged
and independently investigated. Policymakers require comprehensive statistics on all accidents and near-
misses in order to inform regulation.
Data privacy

It is becoming clear that manufacturers collect significant amounts of data from AVs. As these vehicles
become increasingly common on our roads, the question emerges: to what extent are these data
compromising the privacy and data protection rights of drivers and passengers?

Already, data management and privacy issues have appeared, with some raising concerns about the potential
misuse of AV data for advertising purposes (Lin, 2014). Tesla have also come under fire for the unethical use of
AV data logs. In an investigation by The Guardian, the newspaper found multiple instances where the company
shared drivers' private data with the media following crashes, without their permission, to prove that its
technology was not responsible (Thielman, 2017). At the same time, Tesla does not allow customers to see
their own data logs.

One solution, proposed by the German Ethics Commission on Automated Driving, is to ensure that all AV drivers are given full data sovereignty (Ethics Commission, 2017). This would allow them to control how
their data is used.
Employment
The growth of AVs is likely to put certain jobs — most pertinently bus, taxi, and truck drivers — at risk.

In the medium term, truck drivers face the greatest risk as long-distance trucks are at the forefront of AV
technology (Viscelli, 2018). In 2016, the first commercial delivery of beer was made using a self-driving
truck, in a journey covering 120 miles and involving no human action (Isaac, 2016). Last year saw the first fully
driverless trip in a self-driving truck, with the AV travelling seven miles without a single human on board
(Cannon, 2018).

Looking further forward, bus drivers are also likely to lose jobs as more and more buses become driverless.
Numerous cities across the world have announced plans to introduce self-driving shuttles in the future, including
Edinburgh (Calder, 2018), New York (BBC, 2019a) and Singapore (BBC 2017). In some places, this vision has
already become a reality; the Las Vegas shuttle famously got off to a bumpy start when it was involved in a
collision on its first day of operation (Park, 2017), and tourists in the small Swiss town of Neuhausen Rheinfall
can now hop on a self-driving bus to visit the nearby waterfalls (CNN, 2018). In the medium term, driverless
buses will likely be limited to routes that travel along 100% dedicated bus lanes. Nonetheless, the advance of
self-driving shuttles has already created tensions with organised labour and city officials in the USA (Weinberg,
2019). Last year, the Transport Workers Union of America formed a coalition in an attempt to stop
autonomous buses from hitting the streets of Ohio (Pfleger, 2018).

Fully autonomous taxis will likely only become realistic in the long term, once AV technology has been
fully tested and proven at levels 4 and 5. Nonetheless, with plans to introduce self-driving taxis in London by
2021 (BBC, 2018), and an automated taxi service already available in Arizona, USA (Sage, 2019), it is easy
to see why taxi drivers are uneasy.
The quality of urban environments
In the long-term, AVs have the potential to reshape our urban environment. Some of these changes may
have negative consequences for pedestrians, cyclists and locals. As driving becomes more automated, there
will likely be a need for additional infrastructure (e.g. AV-only lanes). There may also be more far-reaching
effects for urban planning, with automation shaping the planning of everything from traffic congestion and
parking to green spaces and lobbies (Marshall and Davies, 2018). The rollout of AVs will also require that
5G network coverage is extended significantly — again, something with implications for urban planning
(Khosravi, 2018).
The environmental impact of self-driving cars should also be considered. While self-driving cars have the
potential to significantly reduce fuel usage and associated emissions, these savings could be counteracted
by the fact that self-driving cars make it easier and more appealing to drive long distances (Worland, 2016).
The impact of automation on driving behaviours should therefore not be underestimated.
Legal and ethical responsibility
From a legal perspective, who is responsible for crashes caused by robots, and how should victims be compensated (if at all) when a vehicle controlled by an algorithm causes injury? If courts cannot resolve this problem, robot manufacturers may incur unexpected costs that would discourage investment. However, if victims are not properly compensated then autonomous vehicles are unlikely to be trusted or accepted by the public.

Robots will need to make judgement calls in conditions of uncertainty, or 'no win' situations. However, which ethical approach or theory should a robot be programmed to follow when there's no legal guidance? As Lin et al. explain, different approaches can generate different results, including the number of crash fatalities.

Additionally, who should choose the ethics for the autonomous vehicle — drivers, consumers, passengers, manufacturers, politicians? Loh and Loh (2017) argue that responsibility should be shared among the engineers, the driver and the autonomous driving system itself.

Ethical dilemmas in development

In 2014, the Open Roboethics initiative (ORi 2014a, 2014b) conducted a poll asking people what they thought an autonomous car in which they were a passenger should do if a child stepped out in front of the vehicle in a tunnel. The car wouldn't have time to brake and spare the child, but could swerve into the walls of the tunnel, killing the passenger. This is a spin on the classic 'trolley dilemma', where one has the option to divert a runaway trolley from a path that would hurt several people onto the path that would only hurt one.

36% of participants said that they would prefer the car to swerve into the wall, saving the child; however, the majority (64%) said they would wish to save themselves, thus sacrificing the child. 44% of participants thought that the passenger should be able to choose the car's course of action, while 33% said that lawmakers should choose. Only 12% said that the car's manufacturers should make the decision. These results suggest that people do not like the idea of engineers making moral decisions on their behalf.

Asking for the passenger's input in every situation would be impractical. However, Millar (2016) suggests a 'setup' procedure where people could choose their ethics settings after purchasing a new car. Nonetheless, choosing how the car reacts in advance could be seen as premeditated harm if, for example, a user programmed their vehicle to always avoid vehicle collisions by swerving into cyclists. This would increase the user's accountability and liability, whilst diverting responsibility away from manufacturers.
However, Millar (2016) suggests that the user of the technology, in this case the passenger in the self-
driving car, should be able to decide what ethical or behavioural principles the robot ought to follow. Using the
example of doctors, who do not have the moral authority to make important decisions on end-of-life care
without the informed consent of their patients, he argues that there would be a moral outcry if engineers
designed cars without either asking the driver directly for their input, or informing the user ahead of time
how the car is programmed to behave in certain situations.
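As a rough sketch of what Millar's 'setup' procedure might look like in software, the snippet below lets an owner pick one of several predefined ethics settings when configuring a vehicle. The setting names and descriptions are hypothetical illustrations, not options offered by any manufacturer.

# Minimal sketch of Millar's suggested "setup" step, where an owner selects an
# ethics setting when configuring a new vehicle. The option names are hypothetical.
ETHICS_SETTINGS = {
    "minimise_total_harm": "Choose the manoeuvre expected to injure the fewest people overall.",
    "protect_occupants": "Prioritise the safety of the vehicle's occupants.",
    "protect_vulnerable_road_users": "Prioritise pedestrians and cyclists over occupants.",
    "defer_to_regulation": "Follow whatever behaviour the local regulator mandates.",
}

def choose_ethics_setting(selection: str) -> str:
    """Record the owner's choice made during vehicle setup."""
    if selection not in ETHICS_SETTINGS:
        raise ValueError(f"Unknown setting: {selection}")
    return selection

owner_choice = choose_ethics_setting("defer_to_regulation")
print(owner_choice, "-", ETHICS_SETTINGS[owner_choice])

As the text notes, making such a choice explicit in advance could shift accountability and liability towards the user and away from the manufacturer.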

3) Case study: Warfare and weaponisation


Although partially autonomous and intelligent systems have been used in military technology since at least
the Second World War, advances in machine learning and AI signify a turning point in the use of
automation in warfare.

AI is already sufficiently advanced and sophisticated to be used in areas such as satellite imagery analysis
and cyber defence, but the true scope of applications has yet to be fully realised. A recent report concludes that
AI technology has the potential to transform warfare to the same, or perhaps even a greater, extent than the
advent of nuclear weapons, aircraft, computers and biotechnology (Allen and Chan, 2017). Some key ways in
which AI will impact militaries are outlined below.

Lethal autonomous weapons
As automatic and autonomous systems have become more capable, militaries have become more willing to
delegate authority to them. This is likely to continue with the widespread adoption of AI, leading to an AI-inspired arms race. The Russian Military Industrial Committee has already approved an aggressive plan
whereby 30% of Russian combat power will consist of entirely remote-controlled and autonomous robotic
platforms by 2030. Other countries are likely to set similar goals. While the United States Department of
Defense has enacted restrictions on the use of autonomous and semi- autonomous systems wielding lethal
force, other countries and non-state actors may not exercise such self-restraint.
Drone technologies
Standard military aircraft can cost more than US$100 million per unit; a high-quality quadcopter Unmanned
Aerial Vehicle, however, currently costs roughly US$1,000, meaning that for the price of a single high-end aircraft, a military could acquire roughly 100,000 drones. Although current commercial drones have limited range, in
the future they could have similar ranges to ballistic missiles, thus rendering existing platforms obsolete.
Robotic assassination
Widespread availability of low-cost, highly-capable, lethal, and autonomous robots could make targeted
assassination more widespread and more difficult to attribute. Automatic sniping robots could assassinate
targets from afar.
Mobile-robotic-Improvised Explosive Devices
As commercial robotic and autonomous vehicle technologies become widespread, some groups will
leverage this to make more advanced Improvised Explosive Devices (IEDs). Currently, the technological
capability to rapidly deliver explosives to a precise target from many miles away is restricted to powerful
nation states. However, if long distance package delivery by drone becomes a reality, the cost of precisely
delivering explosives from afar would fall from millions of dollars to thousands or even hundreds. Similarly,
self-driving cars could make suicide car bombs more frequent and devastating since they no longer require
a suicidal driver.

Hallaq et al. (2017) also highlight key areas in which machine learning is likely to affect warfare. They describe
an example where a Commanding Officer (CO) could employ an Intelligent Virtual Assistant (IVA) within a
fluid battlefield environment that automatically scanned satellite imagery to detect specific vehicle types, helping
to identify threats in advance. It could also predict the enemy's intent, and compare situational data to a stored
database of hundreds of previous wargame exercises and live engagements, providing the CO with access
to a level of accumulated knowledge that would otherwise be impossible to accrue.

Employing AI in warfare raises several legal and ethical questions. One concern is that automated weapon
systems that exclude human judgment could violate International Humanitarian Law, and threaten our
fundamental right to life and the principle of human dignity. AI could also lower the threshold of going to
war, affecting global stability.

International Humanitarian Law stipulates that any attack needs to distinguish between combatants and non-
combatants, be proportional and must not target civilians or civilian objects. Also, no attack should
unnecessarily aggravate the suffering of combatants. AI may be unable to fulfil these principles without
the involvement of human judgment. In particular, many researchers are concerned that Lethal
Autonomous Weapon Systems (LAWS) — a type of autonomous military robot that can independently
search for and 'engage' targets using lethal force — may not meet the standards set by International
Humanitarian Law, as they are not able to distinguish civilians from combatants, and would not be able to
judge whether the force of the attack was proportional given the civilian damage it would incur.

Amoroso and Tamburrini (2016, p. 6) argue that: '[LAWS must be] capable of respecting the
principles of distinction and proportionality at least as well as a competent and conscientious human
soldier'. However, Lim (2019) points out that while LAWS that fail to meet these requirements should not
be deployed, one day LAWS will be sophisticated enough to meet the requirements of distinction and
proportionality. Meanwhile, Asaro (2012) argues that it doesn't matter how good LAWS get; it is a moral
requirement that only a human should initiate lethal force, and it is simply morally wrong to delegate life or
death decisions to machines.

Some argue that delegating the decision to kill a human to a machine is an infringement of basic human
dignity, as robots don't feel emotion, and can have no notion of sacrifice and what it means to take a life. As
Lim et al (2019) explain, 'a machine, bloodless and without morality or mortality, cannot fathom the
significance of using force against a human being and cannot do justice to the gravity of the decision'.

Robots also have no concept of what it means to kill the 'wrong' person. 'It is only because humans can feel
the rage and agony that accompanies the killing of humans that they can understand sacrifice and the use
of force against a human. Only then can they realise the 'gravity of the decision' to kill' (Johnson and Axinn
2013, p. 136).

However, others argue that there is no particular reason why being killed by a machine would be a
subjectively worse, or less dignified, experience than being killed by a cruise missile strike. 'What matters is
whether the victim experiences a sense of humiliation in the process of getting killed. Victims being
threatened with a potential bombing will not care whether the bomb is dropped by a human or a robot' (Lim
et al, 2019). In addition, not all humans have the emotional capacity to conceptualise sacrifice or the relevant
emotions that accompany risk. In the heat of battle, soldiers rarely have time to think about the concept of
sacrifice, or generate the relevant emotions to make informed decisions each time they deploy lethal force.

Additionally, who should be held accountable for the actions of autonomous systems — the
commander, programmer, or the operator of the system? Schmitt (2013) argues that the responsibility
for committing war crimes should fall on both the individual who programmed the AI, and the
commander or supervisor (assuming that they knew, or should have known, the autonomous weapon
system had been programmed and employed in a war crime, and that they did nothing to stop it from
happening).
