Competenta Engleza 2017
Isaac Asimov gave us the basic rules of good robot behaviour: don’t harm humans, obey orders
and protect yourself. Now the British Standards Institute has issued a more official
version aimed at helping designers create ethically sound robots. The document,
BS8611 Robots and robotic devices, is written in the dry language of a health and safety
manual, but the undesirable scenarios it highlights could be taken directly from fiction. Robot
deception, robot addiction and the possibility of self-learning systems exceeding their remits are
all noted as hazards that manufacturers should consider.
Welcoming the guidelines at the Social Robotics and AI* conference in Oxford, Alan Winfield, a
professor of robotics at the University of the West of England, said they represented “the first
step towards embedding ethical values into robotics and AI”. “As far as I know this is the first
published standard for the ethical design of robots,” Winfield said after the event. “It’s a bit more
sophisticated than Asimov’s laws – it basically sets out how to do an ethical risk
assessment of a robot.”
The BSI document begins with some broad ethical principles: “Robots should not be designed
solely or primarily to kill or harm humans; humans, not robots, are the responsible agents; it
should be possible to find out who is responsible for any robot and its behaviour.” It goes on to
highlight a range of more contentious issues, such as whether an emotional bond with a robot is
desirable, particularly when the robot is designed to interact with children or the elderly. Noel
Sharkey, emeritus professor of robotics and AI at the University of Sheffield, said this was an
example of where robots could unintentionally deceive us. “There was a recent study where little
robots were embedded in a nursery school,” he said. “The children loved it and actually bonded
with the robots. But when asked afterwards, the children clearly thought the robots were more
cognitive than their family pet.”
The code suggests designers should aim for transparency, but scientists say this could prove
tricky in practice. “The problem with AI systems right now, especially these deep learning
systems, is that it’s impossible to know why they make the decisions they do,” said Winfield.
Deep learning agents, for instance, are not programmed to do a specific task in a set way.
Instead, they learn to perform a task by attempting it millions of times until they evolve a
successful strategy – sometimes one that their human creators had not anticipated and do not
understand. The guidance even hints at the prospect of sexist or racist robots, warning against
“lack of respect for cultural diversity or pluralism”. “This is already showing up in police
technologies,” said Sharkey, adding that technologies designed to flag up suspicious people to
be stopped at airports had already proved to be a form of racial profiling.
Winfield said: “Deep learning systems are quite literally using the whole of the data on the
internet to train on, and the problem is that that data is biased. These systems tend to favour
white middle-aged men, which is clearly a disaster. All the human prejudices tend to be
absorbed, or there’s a danger of that.”
(www.theguardian.com)
3. The negative developments that the document BS8611 Robots and robotic devices refers to
4. Alan Winfield’s attitude towards the document BS8611 Robots and robotic devices is positive
because
5. The document BS8611 Robots and robotic devices is considered to be more advanced than
Asimov’s laws because
A. it analyses the prospect of robots’ doing harm.
B. it establishes guidelines on how to evaluate robot ethics.
C. it takes ethical concerns one step further.
D. it is the last document to be released on the topic of robot ethics.
6. One of the controversial issues described in the document BS8611 Robots and robotic
devices is
A. the uncertain effects of the emotional connection between people and robots.
B. the emotional bond created between children and robots.
C. whether robots should be allowed to interact with the elderly.
D. the lack of safety concerns when it comes to robots interacting with children and the elderly.
9. The problem with the online data that deep learning systems use in order to evolve is that it is
A. prejudiced.
B. uncontrollable.
C. neutral.
D. uninterpretable.
A. subjective.
B. negative.
C. reassuring.
D. matter-of-fact.
MARKING SCHEME
Variant 2
1-C; 2-C; 3-A; 4-D; 5-B; 6-A; 7-D; 8-B; 9-A; 10-D.
Vocabulary 10 points
• uses vocabulary correctly 5 points
• uses varied vocabulary appropriate to the topic 5 points
–: 0-10 points
A1: 11-30 points
A2: 31-60 points
B1: 61-80 points
B2: 81-100 points
Proba C
for the assessment of linguistic competences in a language of international circulation
studied throughout upper secondary education
June 2017
Ticket no. 1
1. Answer the following question: What is the quality that you appreciate the most in your best
friend?
3. Do you think that setting and managing a budget can be of benefit to a teenager? Use relevant
arguments and examples to support your ideas.
Proba C
for the assessment of linguistic competences in a language of international circulation
studied throughout upper secondary education
MARKING SCHEME
Subject I 20 points
• formulates a short answer appropriate to the subject, using simple expressions/phrases and
linking them with the most frequently used connectors 14 points
• uses an elementary lexical repertoire appropriate to the topic 2 points
• uses very simple grammatical forms and structures relatively correctly 2 points
• pronounces the words used relatively correctly 2 points
Vocabulary: 15 points
• uses varied vocabulary appropriate to the subject 10 points
• uses vocabulary correctly 5 points
Pronunciation: 10 points
• has correct and natural pronunciation and intonation 5 points
• expresses himself/herself fluently 5 points