Artificial intelligence laws needed to protect fundamental rights
by European Agency for Fundamental Rights, agencies

Sep. 2021
Artificial intelligence risks to privacy demand urgent action. (OHCHR)

UN High Commissioner for Human Rights Michelle Bachelet has stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place. She also called for AI applications that cannot be used in compliance with international human rights law to be banned.

“Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.

As part of its work on technology and human rights, the UN Human Rights Office has today published a report that analyses how AI – including profiling, automated decision-making and other machine-learning technologies – affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.

“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” the High Commissioner said.

The report looks at how States and businesses alike have often rushed to incorporate AI applications, failing to carry out due diligence. There have already been numerous cases of people being treated unjustly because of AI, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition.

The report details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Bachelet said.

The inferences, predictions and monitoring performed by AI tools, including seeking insights into patterns of human behaviour, also raise serious questions. The biased datasets relied on by AI systems can lead to discriminatory decisions, and these risks are most acute for already marginalized groups.

“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” Bachelet said. There also needs to be much greater transparency by companies and States in how they are developing and using AI.
“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report says.

“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” Bachelet stressed.
http://www.ohchr.org/EN/Issues/DigitalAge/Pages/DigitalAgeIndex.aspx

Aug. 2021
Computer says No: How the EU's AI laws cause new injustice. (EU Observer, agencies)

Big Tech companies are pushing for self-regulation while their algorithms create new injustice in European public services. Investigation by Alina Yanchur, Camille Schyns, Greta Rosen Fondahn and Sarah Pilz.

When Roger Derikx applied for childcare benefits, he thought he would be receiving money - not losing it. Derikx, 46 years old, was asked to repay the Dutch government €68,000. The problem: he was never told why he had to return the benefits, to which he was entitled. Authorities repossessed his car, and took a 40 percent cut of his salary for years. "You have two little children, and you want to give them everything," Derikx said, "but every time they ask something, you have to say 'no' - that's hard."

Derikx was one of 26,000 parents accused of making fraudulent benefit claims by the Dutch tax authority, which used an algorithm for fraud detection. The system was "unlawful and discriminatory," an official investigation found, eventually leading Dutch prime minister Mark Rutte to resign earlier this year. For Derikx, the investigation came too late. The financial hardship took such a toll on his marriage that he ended up getting a divorce. "That's a big price," Derikx said.

The child benefits scandal in the Netherlands is not unique. Reports of abuse have surfaced across Europe. Danish authorities also used artificial intelligence to identify children in "vulnerable families", a system that disproportionately targeted parents of foreign origin. And the Netherlands, one of Europe's frontrunners in this field, used algorithms to create risk profiles of children under the age of 12.

Critics have long advocated against such uses of AI. In January, 61 advocacy organisations sent an open letter to the EU demanding "red lines" on AI applications that threaten fundamental rights. Meanwhile, industry representatives argued that regulating AI would stifle innovation.

Upon taking office, European Commission president Ursula von der Leyen made regulating artificial intelligence a political priority, and the EU claimed to seek a compromise between addressing the risks and protecting innovation. In April 2021, the EU Commission announced a long-awaited proposal to regulate artificial intelligence.

Data compiled for this investigation shows that commission officials logged 152 official lobby meetings on AI between December 2019 and August 2021. Corporations and business associations accounted for over 100, or two-thirds, of those meetings; less than one third were with civil society, academics and trade unions.
http://euobserver.com/investigations/152695
http://www.bbc.com/news/world-europe-55674146
http://edri.org/
http://www.accessnow.org/
http://www.hrw.org/news/2021/11/10/how-eus-flawed-artificial-intelligence-regulation-endangers-social-safety-net
http://www.theguardian.com/australia-news/2021/jun/11/robodebt-court-approves-18bn-settlement-for-victims-of-governments-shameful-failure

Jan. 2021
Open letter: Civil society call for the introduction of red lines in the upcoming European Commission proposal on Artificial Intelligence. (Extract)

Social scoring and AI systems determining access to social rights and benefits

AI systems have been deployed in various contexts in a manner that threatens the allocation of social and economic rights and benefits. For example, in the areas of welfare resource allocation, eligibility assessment and fraud detection, the deployment of AI systems to predict risk, verify people’s identity and calculate their benefits greatly impacts people’s access to vital public services and has a potentially grave impact on the fundamental right to social security and social assistance. This is due to the likelihood of discriminatory profiling, mistaken results and the inherent fundamental rights risks associated with the processing of sensitive biometric data.

A number of examples demonstrate how automated decision-making systems are negatively impacting and targeting poor, migrant and working class people, including the deployment of SyRI in the Netherlands and the use of data-driven systems in Poland to profile unemployed people, with severe implications for data protection and non-discrimination rights.

Further, uses in the context of employment and education have highlighted highly intrusive worker and student surveillance, including social scoring systems, intensive monitoring for performance targets, and other measures which limit work autonomy, diminish well-being and limit workers’ and students’ privacy and fundamental rights. There have also been cases of discriminatory use of AI technologies against persons with disabilities by state and private entities in the allocation of social benefits and access to education.

The upcoming legislative proposal must legally restrict uses and deployments of AI which unduly infringe upon access to social rights and benefits.
http://edri.org/wp-content/uploads/2021/01/EDRi-open-letter-AI-red-lines.pdf

Dec. 2020
Artificial intelligence laws needed to protect fundamental rights of Europeans, report by the European Agency for Fundamental Rights.

From tracking the spread of COVID-19 to deciding who will receive social benefits, artificial intelligence (AI) affects the lives of millions of Europeans. Automation can improve decision-making, but AI can also lead to mistakes and discrimination, and can be hard to challenge. A new EU Agency for Fundamental Rights (FRA) report reveals confusion about the impact of AI on people’s rights, even among organisations already using it. FRA calls on policymakers to provide more guidance on how existing rules apply to AI and to ensure any future AI laws protect fundamental rights.

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI,” says FRA Director Michael O’Flaherty.
“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

The FRA report ‘Getting the future right – Artificial intelligence and fundamental rights in the EU’ identifies pitfalls in the use of AI, for example in predictive policing, medical diagnoses, social services, and targeted advertising. It calls on the EU and EU countries to:

Make sure that AI respects all fundamental rights - AI can affect many rights, not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.

Guarantee that people can challenge decisions taken by AI - people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.

Assess AI before and during its use to reduce negative impacts - private and public organisations should carry out assessments of how AI could harm fundamental rights.

Provide more guidance on data protection rules - the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.

Assess whether AI discriminates - awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.

Create an effective oversight system - the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The report draws on over 100 interviews with public and private organisations already using AI. These include observations from experts involved in monitoring potential fundamental rights violations. Its analysis is based on real uses of AI in Estonia, Finland, France, the Netherlands and Spain.

The report points out the many sectors in which AI is already widely used, including decisions on who will receive social benefits, predicting criminality and risk of illness, and creating targeted advertising. It highlights that much of the focus in developing AI has been on its "potential to support economic growth", while its impact on fundamental rights has been neglected. It is possible that "people are blindly adopting new technologies without assessing their impact before actually using them," David Reichel, one of the experts behind the report, told the AFP news agency.

Reichel told AFP that even when data sets did not include information linked to gender or ethnic origin, there was still "a lot of information that can be linked to protected attributes."

One example used in the report is the use of facial recognition technology for law enforcement. It says even small error rates could lead to many innocent people being falsely picked out if the technology were used in places where large numbers of people are scanned, such as airports or train stations. "A potential bias in error rates could then lead to disproportionately targeting certain groups in society," the report says.

The report calls for legislation on AI to "create more effective safeguards": the use of AI needs to be more transparent, more accountable and subject to human review.
http://fra.europa.eu/en/news/2020/now-time-ensure-artificial-intelligence-works-europeans
* Fundamental Rights Forum 2021: http://hybrid.fundamentalrightsforum.eu/Programme
Global attack on freedom of expression is having a dangerous impact on public health crisis
by Amnesty International, Article 19, agencies

Attacks on freedom of expression by governments, combined with a flood of misinformation across the world during the Covid-19 pandemic, have had a devastating impact on people’s ability to access accurate and timely information to help them cope with the burgeoning global health crisis, said Amnesty International today in a new report.

Silenced and Misinformed: Freedom of Expression in Danger During Covid-19 reveals how governments’ and authorities’ reliance on censorship and punishment throughout the crisis has reduced the quality of information reaching people. The pandemic has created a dangerous situation in which governments are using new legislation to shut down independent reporting and to attack people who have been directly critical of, or have even attempted to look into, their government’s response to Covid-19.

“Throughout the pandemic, governments have launched an unprecedented attack on freedom of expression, severely curtailing people’s rights. Communication channels have been targeted, social media has been censored, and media outlets have been closed down – with a dire impact on the public’s ability to access vital information about how to deal with Covid-19,” said Amnesty International’s senior director for research, advocacy and policy, Rajat Khosla.

“In the midst of a pandemic, journalists and health professionals have been silenced and imprisoned. As a result, people have been unable to access information about Covid-19, including how to protect themselves and their communities. Approximately five million people have lost their lives to Covid-19, and lack of information is likely to have been a contributory factor.”

The government of China has a long history of controlling freedom of expression. During the early days of the pandemic, health workers and professional and citizen journalists attempted to raise the alarm as early as December 2019. However, they were targeted by the government for reporting on the outbreak of what was then an unknown disease. By February 2020, 5,511 criminal investigations had been opened against individuals who published information about the outbreak for “fabricating and deliberately disseminating false and harmful information”.

In one harrowing case, citizen journalist Zhang Zhan travelled to Wuhan in February 2020 to report on the Covid-19 outbreak. She went missing there in May 2020. It was later revealed that she had been detained by police, charged with “picking quarrels and provoking trouble” and sentenced to four years’ imprisonment.

Numerous other countries, including Tanzania, Russia and Nicaragua, have put in place oppressive laws, restricting the right to freedom of expression and silencing critics under the guise of, or in the context of, the pandemic.

Over the past few years, the Tanzanian government has introduced a raft of laws and used them to silence journalists, human rights defenders and members of the political opposition. Under former President Magufuli’s administration, the Tanzanian government took a denialist stance on Covid-19. From March to May 2020, authorities used laws prohibiting and criminalizing “false news” and other measures to restrict media coverage of the government’s handling of Covid-19.
While initially trying to downplay the impact of the pandemic and intimidate those raising concerns, the Nicaraguan authorities used Covid-19 to introduce the “Special Law on Cybercrimes” in October 2020. In practice, it enables authorities to punish those who criticize government policies and gives them ample discretion to repress freedom of expression.

In April 2020, Russia expanded its existing anti-“fake news” legislation and introduced criminal penalties for “public dissemination of knowingly false information” in the context of emergencies. Although the amendments have been described as part of the authorities’ response to Covid-19, these measures will remain in force beyond the pandemic.

“It’s clear Covid-19 related restrictions on freedom of expression are not just time-bound, extraordinary measures to deal with a temporary crisis. They are part of an onslaught on human rights that has been taking place globally in the last few years – and governments have found another excuse to ramp up their attack on civil society,” said Rajat Khosla.

“Restricting freedom of expression is dangerous and must not become the new normal. Governments must urgently lift such restrictions and guarantee the free flow of information to protect the public’s right to health.”

Amnesty’s report highlights the role of social media companies in facilitating the rapid spread of misinformation around Covid-19. This is because platforms are designed to amplify attention-grabbing content to engage users and have not done enough due diligence to prevent the spread of false and misleading information.

The onslaught of misinformation – whether that be through social media companies or people in a position of power seeking to spread division and confusion for their own gain – is posing a serious threat to the rights to freedom of expression and to health. It is making it increasingly difficult for individuals to have a fully informed opinion and make educated choices about their health based on the best available scientific facts. A variety of sources is key, as is the ability to challenge and debate available information.

“As we are urging governments and pharmaceutical companies to ensure vaccines are distributed and made available to everyone around the world, states and social media companies must also ensure the public has unfettered access to accurate, evidence-based and timely information. This is a crucial step to minimize vaccine hesitancy driven by misinformation,” said Rajat Khosla.

“So far, 6.6 billion doses have been administered globally, yet only 2.5% of people in low-income countries have received at least one dose. With less than 75 days left until the end of the year, we’re calling on states and pharmaceutical companies to drastically change course and to do everything needed to deliver two billion vaccines to low and lower-middle income countries starting now – but they need safe, reliable information to help inform their decisions.”

Amnesty International is calling on states to stop using the pandemic as an excuse to silence independent reporting, lift all undue restrictions on the right to freedom of expression and provide credible, reliable, accessible information so the public can be fully informed about the pandemic. Censorship does not help in dealing with misinformation: free and independent media and strong civil society do. States must overhaul the destructive business model of Big Tech – one of the root causes of the spread of mis/disinformation online.
Social media companies must also stop burying their heads in the sand and take measures to address the viral spread of misinformation, including by ensuring their business models do not endanger human rights.
http://www.amnesty.org/en/latest/news/2021/10/covid-19-global-attack-on-freedom-of-expression-is-having-a-dangerous-impact-on-public-health-crisis/
http://www.amnesty.org/en/latest/news/2021/10/covid-19-time-for-countries-blocking-trips-waiver-to-support-lifting-of-restrictions-2/

July 2021
Protect and promote freedom of expression during the COVID-19 pandemic

The Global Expression Report 2021 (GxR21), ARTICLE 19’s report tracking freedom of expression across the world, finds that two thirds of all countries imposed restrictions on the media in relation to the Covid pandemic. Continuing the downward trend from 2019, the global scores for Freedom of Expression and the Right to Information reached their lowest point since 2010.

Quinn McKew, Executive Director of ARTICLE 19: “The global pandemic has brought the world to a tipping point where governments and private actors face a stark choice. They must either commit to building a world based on rights to expression and information or they must become bystanders to the rapid decline in the freedoms that sustain robust and engaged societies.”

Unlike any other year in recent history, 2020 drove home just how vital access to accurate, reliable and timely information is during a global health crisis. It made it clear that freedom of expression is the fundamental human right enabling us all to demand the highest attainable standard of health. When faced with such crises, governments have a fundamental duty to be transparent about their decisions, and a legal obligation to protect people’s lives.

And yet the GxR21 reveals that, rather than focusing on controlling the virus, protecting public health and improving access to information, a number of governments used the pandemic as an excuse to:
Suppress critical information;
Implement states of emergency without proper limits;
Place unreasonable and unnecessary restrictions on the media.

And by presenting a false choice between human rights and public health, many governments shut down public discussion and any scrutiny of their decisions. In other words, they wasted public money and valuable time using the pandemic to entrench their power.

It has been well documented that the pandemic exposed and deepened cracks in our systems of government. However, the GxR21 demonstrates how widespread the impact has been. We have seen the deployment of security forces and violent police tactics, the deliberate spread of disinformation online, and weak efforts by legislators to respond to the problem. Increased surveillance also posed a threat as millions were asked to download apps that collected data without adequate privacy and data protection assurances.

All over the world during 2020, public participation was dismantled: decisions were made without consultation, oversight was undermined, powers were centralised, and accountability was limited.

“Populist leaders and those who seek to entrench their own power hate accountability,” McKew continued. “Freedom of expression is often the first port of call for autocrats looking to erode democracy, and it must become the means by which we reverse this downward trend.
“As the pandemic recedes, we will not only need to rigorously roll back all the restrictions that have been placed on us and our rights, and roundly reject the surveillance imposed on us during 2020, but also heal the cracks which existed long before.

“That means addressing those failures of economic and political systems that have allowed individuals to take control of resources and institutions, and which have left many by the wayside in terms of economic opportunity and political inclusion.

“In rebuilding our relationships with government, media, academia, and the arts, we must demand our rights to know and our rights to speak – online, on the streets, wherever we feel we need to do so. And we must be heard.”

Highlights from GxR21:

The starkest deterioration in the GxR scores came from data looking at freedom of assembly and public participation in decision-making. Protests continued and were influential, but government responses to them became more brutal and repressive, with governments often using them as an excuse to implement broader crackdowns on opposition. For example, Belarus and Thailand have seen huge drops in GxR scores after protest movements in 2020 were met with repressive state responses both on the streets and in the legislature and courts.

In 2020, 62 journalists were killed and a record 274 were imprisoned. Journalists, bloggers, and whistleblowers were arrested – often arbitrarily – detained and prosecuted for criticising governments’ responses to COVID-19. China, Turkey, and Egypt were the biggest jailers.

Of 620 violations of press freedom recorded globally in the first 14 months of the pandemic, 34% were physical and verbal attacks on journalists; 34% were arrests of journalists, or charges filed against journalists and media organisations by governments; and a further 14% were government-imposed restrictions on access to information. Arrests quadrupled from March to May 2020, and harassment and physical attacks rose across the world – from Brazil to Italy, Kenya, Senegal, and Nigeria.

Journalists, bloggers, human rights defenders (HRDs) and political activists were summoned for questioning and arrested for expressing views on COVID-19 or sharing information, including in Palestine, Poland, Madagascar, Eswatini, India, Tunisia, Niger and Cameroon. Whistleblowers were inadequately protected – and, in many cases, even silenced by governments themselves.

Most of this violence and harassment happened in a context of total impunity. Most murders of journalists do not even reach the headlines in international media. In 2020, three-quarters of women journalists experienced online abuse and harassment.

HRDs are also under attack. At least 331 were killed in 2020, 69% of whom were working on indigenous people’s or land rights. The majority of killings of HRDs took place in Latin America; Colombia alone accounted for 53% of murders of HRDs globally.

Two-thirds of the world’s population – 4.9 billion people – are living in countries that are highly restricted or experiencing a free expression crisis, more than at any time in the last decade. In Asia and the Pacific, 85% of the population lives in countries where free expression is in crisis or highly restricted – a 39% rise since 2010. In the Americas, the regional score for freedom of expression is at its lowest in a decade. Not a single country in Africa has a good GxR score, and more people there are living in countries where free expression is in crisis or highly restricted than at any time in the last decade.
In Europe and Central Asia, 34% of the population live in countries where free expression is in crisis. In the Middle East and North Africa, 72% of the population lives in countries in crisis.
http://www.article19.org/resources/new-global-analysis-in-a-pandemic-protecting-people-means-protecting-expression/
http://www.article19.org/gxr-2021/
http://www.article19.org/gxr2020/
http://www.article19.org/resources/inside-expression-september-2020-right-to-know-right-now/
http://www.article19.org/coronavirus-impacts-on-freedom-of-expression/
http://www.article19.org/what-we-do/

July 2020
Protect and promote freedom of expression during the COVID-19 pandemic, by David Kaye - Special Rapporteur on Freedom of Expression

Governments around the world must take action to protect and promote freedom of expression during the COVID-19 pandemic, which many States have exploited to crack down on journalism and silence criticism, a UN expert said today. Presenting his latest report on freedom of expression and disease pandemics to the Human Rights Council, the UN Special Rapporteur on Freedom of Expression, David Kaye, raised serious concern over new measures restricting and punishing the free flow of information.

"People have died because governments have lied, hidden information, detained reporters, failed to level with people about the nature of the threat, and criminalised individuals under the guise of 'spreading false information'," the Special Rapporteur said. "People have suffered because some governments would rather protect themselves from criticism than allow people to share information, learn about the outbreak, and know what officials are or are not doing to protect them.

"In the past three months, numerous governments have used the COVID-19 pandemic to repress expression in violation of their obligations under human rights law," Kaye said. "Since the earliest days of the pandemic, I have raised concerns about repression of expression that has a direct impact on public health information, including in Belarus, Cambodia, China, Iran, Egypt, India, Myanmar, and Turkey."

The Special Rapporteur urged governments to address five challenges in particular:

Reinforce access to information and share as much as possible about the course of the disease and the tools people should use to protect themselves and their communities.

End the practice of internet shutdowns and other limitations on access to the internet.

Refrain from all attacks on the media and release all journalists detained, whether during or before the pandemic.

Do not treat the so-called infodemic as a problem that criminalisation will solve. Penalties limit the willingness of people to share valuable information and they are often subject to abuse. Governments should work with social media companies, where much disinformation takes place, to ensure that they are transparent enough for governments to take meaningful steps to promote and protect public health.

Ensure that any public health surveillance measures are consistent with fundamental legal standards of necessity and proportionality and are transparent, non-discriminatory, limited in duration and scope, subject to oversight, and never used to criminalise individuals.

"I am further concerned about efforts to repress disinformation using tools of criminal law, which are likely to hamper the free flow of information, such as in Brazil and Malaysia," the Special Rapporteur said. The pandemic has underscored how freedom of expression reinforces public health initiatives.
Governments must ensure that their laws, policies and practices meet their obligations in order to promote human rights and public health, the Special Rapporteur said.
http://undocs.org/A/HRC/44/49
http://www.ohchr.org/EN/Issues/Pages/Webinar-In-the-shadow-of-COVID-19.aspx
http://www.ohchr.org/EN/issues/freedomopinion/pages/opinionindex.aspx
http://www.ohchr.org/EN/NewsEvents/Pages/COVID-19.aspx
http://www.ohchr.org/EN/HRBodies/SP/Pages/COVID-19-and-Special-Procedures.aspx
http://monitor.civicus.org/COVID19/
http://monitor.civicus.org/
http://www.access-info.org/blog/2020/09/28/access-to-information-day-2020/
http://www.amnesty.org/en/latest/news/2020/07/health-workers-rights-covid-report/
http://www.hrw.org/topic/free-speech
http://www.womenpeacesecurity.org/support-civil-society-security-council/