Framing dissent and poverty as threats to public order can undermine fundamental rights, particularly when that framing is used to justify the deployment of predictive technology.
People are better able to see and correct biases in algorithms’ decisions than in their own – even when the algorithms were trained on those very decisions.
Using technology to screen job applicants might be faster than reading CVs and conducting face-to-face interviews, but the most suitable candidate could be overlooked.
The explosion of generative AI tools like ChatGPT and fears about where the technology might be headed distract from the many ways AI affects people every day – for better and worse.
Large language models have been shown to ‘hallucinate’ entirely false information – but aren’t humans guilty of the same thing? So what’s the difference between the two?
Biased algorithms in health care can lead to inaccurate diagnoses and delayed treatment. Deciding which variables to include to achieve fair health outcomes depends on how you approach fairness.
AI algorithms can reinforce existing biases. Before they are introduced as routine tools in clinical care, we must establish ethical guidelines to reduce the risk of harm.
Powerful new AI systems could amplify fraud and misinformation, leading to widespread calls for government regulation. But doing so is easier said than done and could have unintended consequences.