

Big Tech companies grilled on failure to tackle violent extremism
by eSafety Commissioner, OHCHR, agencies
 
19 Mar. 2024
 
 
Australia’s eSafety Commissioner has issued legal notices to Google, Meta, Twitter/X, WhatsApp, Telegram and Reddit requiring each company to report on steps they are taking to protect Australians from terrorist and violent extremist material and activity.
 
The spread of this material and its role in online radicalisation remains a concern both in Australia and internationally, with 2019 terrorist attacks in Christchurch NZ and Halle Germany, and more recently Buffalo NY, underscoring how social media and other online services can be exploited by violent extremists, leading to radicalisation and threats to public safety.
 
The online safety regulator issued the notices under transparency powers granted under the Online Safety Act, which will require the six companies to answer a series of detailed questions about how they are tackling the issue.
 
eSafety Commissioner Julie Inman Grant said eSafety continues to receive reports about perpetrator-produced material from terror attacks, including the 2019 terrorist attack in Christchurch, that are reshared on mainstream platforms.
 
“We remain concerned about how extremists weaponise technology like live-streaming, algorithms and recommender systems and other features to promote or share this hugely harmful material,” Ms Inman Grant said.
 
“We are also concerned by reports that terrorists and violent extremists are moving to capitalise on the emergence of generative AI and are experimenting with ways this new technology can be misused to cause harm.
 
“Earlier this month the UN-backed Tech against Terrorism reported that it had identified users of an Islamic State forum comparing the attributes of Google’s Gemini, ChatGPT, and Microsoft’s Copilot.
 
“The tech companies that provide these services have a responsibility to ensure that these features and their services cannot be exploited to perpetrate such harm and that’s why we are sending these notices to get a look under the hood at what they are and are not doing.”
 
According to a recent OECD report, Telegram is the number one ranked mainstream platform when it comes to the prevalence of terrorist and violent extremist material, with Google’s YouTube ranked second and Twitter/X coming in third. The Meta-owned Facebook and Instagram round out the top five, placing fourth and fifth respectively.
 
WhatsApp is ranked 8th while reports have confirmed the Buffalo shooter’s ‘manifesto’ cited Reddit as the service that played a role in his radicalisation towards violent white supremacist extremism.
 
“It’s no coincidence we have chosen these companies to send notices to as there is evidence that their services are exploited by terrorists and violent extremists. We want to know why this is and what they are doing to tackle the issue,” Ms Inman Grant said.
 
“Transparency and accountability are essential for ensuring the online industry is meeting the community’s expectations by protecting their users from these harms. Also, understanding proactive steps being taken by platforms to effectively combat TVEC is in the public and national interest.
 
“That’s why transparency is a key pillar of the Global Internet Forum to Counter Terrorism and the Christchurch Call, global initiatives that many of these companies are signed up to. And yet we do not know the answer to many of these basic questions.
 
“And, disappointingly, none of these companies have chosen to provide this information through the existing voluntary framework – developed in conjunction with industry – provided by the OECD. This shows why regulation, and mandatory notices, are needed to understand the true scope of these challenges, and opportunities.”
 
As part of these notices, eSafety will also be asking Telegram and Reddit about measures they have in place to detect and remove child sexual exploitation and abuse.
 
The six companies will have 49 days to provide responses to the eSafety Commissioner.
 
http://www.esafety.gov.au/newsroom/media-releases/tech-companies-grilled-on-how-they-are-tackling-terror-and-violent-extremism
http://www.esafety.gov.au/newsroom/media-releases/esafety-initiates-civil-penalty-proceedings-against-x-corp
http://www.esafety.gov.au/newsroom/media-releases/report-reveals-the-extent-of-deep-cuts-to-safety-staff-and-gaps-in-twitter/xs-measures-to-tackle-online-hate
 
Feb. 2024
 
UN Special Rapporteur on sale and sexual exploitation of children, Mama Fatima Singhateh, alarmed by new emerging exploitative practices of online child sexual abuse.
 
“The internet and digital platforms can be a double-edged sword for children and young people. They can allow them to positively interact and further develop as autonomous human beings, claiming their own space, while also facilitating age-inappropriate content and online sexual harms against children by adults and peers.
 
The boom in generative AI and eXtended Reality is constantly evolving and facilitating the harmful production and distribution of child sexual abuse and exploitation in the digital dimension, with new exploitative activities such as the deployment of end-to-end encryption without built-in safety mechanisms, computer-generated imagery (CGI) including deepfakes and deepnudes, and on-demand live streaming and eXtended Reality (XR) of child sexual abuse and exploitation material.
 
Although access does not determine the value that children and young people derive from the Internet and digital products, the volume of reported child sexual abuse material has increased by 87% since 2019, according to WeProtect Global Alliance’s Global Threat Assessment 2023.
 
A review of numerous studies, publications and reports has revealed the intensification of manifestations of harm and exposure of online child sexual abuse and exploitation, both in terms of scale and method. It includes the risk of child sexual abuse and exploitation material, grooming and soliciting children for sexual purposes, online sexual harassment, intimate image abuse, financial sexual extortion and the use of technology-assisted child sexual abuse and exploitation material.
 
The private sector and the technology industry have proven to be less reliable than they claim to be, with serious ingrained biases, flaws in programming and surveillance software to detect child abuse, failure to crack down on child sexual abuse and exploitation networks, layoffs and cuts to community safety teams and workers. These practices and failings risk relentless repetition of trauma, secondary victimisation and systemic harm to individuals, including children.
 
While it is commendable that there has been increased political commitment, prioritisation and engagement on the use of ICTs and new technologies at the international level, many legislative and regulatory efforts at national, regional and international levels seeking to address these problems present additional human rights risks due to insufficient integration of human rights considerations and of gender-responsive and child-sensitive approaches.
 
Against this backdrop, States and companies must all work together and invest in solving this problem, and include children’s, victims’, survivors’ and relevant stakeholders’ voices in the design and development of ethical digital products to foster a safer online environment. This responsibility must be immediately embraced across society.
 
I welcome the Secretary-General’s AI Advisory Body’s mandate to make recommendations for the establishment of an international agency for the governance and coordination of AI.
 
Despite these positive initiatives, there is an urgent imperative to scale up efforts and connect through a core multilateral instrument dedicated exclusively to eradicating child sexual abuse and exploitation online, addressing the complexity of these phenomena and taking a step forward to protect children in the digital dimension.
 
It is now clear that greater and joint cooperation is needed to ensure a safer Internet for all children around the world. Commitments must go beyond paper.”
 
http://www.ohchr.org/en/press-releases/2024/02/un-expert-alarmed-new-emerging-exploitative-practices-online-child-sexual
http://www.weprotect.org/global-threat-assessment-23/
 
Sep. 2023
 
States must guarantee fundamental freedoms of digital technologies: UN and regional experts. (OHCHR)
 
States must effectively respect, protect and facilitate the rights to freedom of peaceful assembly, of association and of expression online and in digitally-mediated spaces, and ensure digital technologies are not used to unduly restrict civic space online and offline, UN and regional human rights experts urged in a joint declaration issued on the occasion of the International Day of Democracy.
 
The rights experts raised serious concerns about shrinking civic space online, and the rapidly evolving and emerging technologies that are posing additional threats to the promotion and protection of these rights online and offline.
 
The UN and regional experts raised concerns related to the misuse of digital technologies by State and non-State actors, which results in curtailing the effective exercise of these rights online and offline: imposing Internet shutdowns and censorship, digital surveillance and malicious use of artificial intelligence, online harassment, spread of hate speech, and spread of disinformation and misinformation.
 
State and non-State actors have deliberately used technology to silence, surveil and harass dissidents, political opposition, human rights defenders, activists and protesters.
 
Such acts create a chilling effect and shrink civic spaces online and offline, which leads to deterioration and undermining of democracy.
 
States must ensure technologies are used as a means to facilitate the rights to peaceful assembly, association and expression online and offline, not suppress these rights.
 
Among other measures, a global regulatory framework should be developed based on international human rights law and standards to rein in use of digital surveillance and other emerging technologies. Robust and accountable export control regimes should be put in place for surveillance and other technologies that pose serious risks to the exercise of these fundamental freedoms.
 
States should ensure effective accountability for violations of the rights to freedom of peaceful assembly, association, and expression, related to the use of digital technologies. Special efforts must be made towards identifying and prosecuting gender-based online violence.
 
Protecting, promoting and effectively enabling these rights offline and online is essential to ensure inclusive and participatory democracies, and resilient and peaceful societies.
 
* Statement by: The UN Special Rapporteur on the rights to freedom of peaceful assembly and of association, jointly with experts from the African Commission on Human and Peoples’ Rights (ACHPR), the Inter-American Commission on Human Rights (IACHR), the ASEAN Intergovernmental Commission on Human Rights (AICHR), and the OSCE Office for Democratic Institutions and Human Rights (ODIHR)
 
http://www.ohchr.org/sites/default/files/documents/issues/trafficking/statements/20230915-jd-foaa-digital-technologies.pdf
 
May 2023
 
UN experts have called for greater transparency, oversight, and regulation to address the negative impacts of new and emerging digital tools and online spaces on human rights.
 
“New and emerging technologies, including artificial intelligence-based biometric surveillance systems, are increasingly being used in sensitive contexts, without the knowledge or consent of individuals,” the experts said ahead of the RightsCon summit in Costa Rica from 5 to 8 June 2023.
 
“Urgent and strict regulatory red lines are needed for technologies that claim to perform emotion or gender recognition,” they said.
 
The experts expressed concern about the proliferation of invasive spyware and a growing array of targeted surveillance technologies used to unlawfully target human rights defenders, activists, journalists, and civil society in all regions.
 
“We condemn the alarming use of spyware and surveillance technologies in violation of human rights and the broader chilling effect of such unlawful measures on the legitimate work of human rights defenders and on civic space worldwide, often under the guise of national security and counter-terrorism measures,” they said.
 
The experts stressed the need to ensure that these systems do not further expose people and communities to human rights violations, including through the expansion and abuse of invasive surveillance practices that infringe on the right to privacy, facilitate the commission of gross human rights violations, including enforced disappearances, and discrimination.
 
They expressed concern about respect for freedoms of expression, thought, peaceful protest, and for access to essential economic, social and cultural rights, and humanitarian services.
 
“Specific technologies and applications should be avoided altogether where human rights-compliant regulation is not possible,” the experts said.
 
The experts noted that so-called “generative AI” systems can enable the cheap and rapid mass production of synthetic content that spreads disinformation or promotes and amplifies incitement to hatred, discrimination or violence on the basis of race, sex, gender and other characteristics.
 
The experts also expressed concern that their development is driven by a small group of powerful actors, including businesses and investors, without adequate requirements for conducting human rights due diligence or consultation with affected individuals and communities. Additionally, content moderation is often performed by individuals in situations of labour exploitation.
 
“Regulation is urgently needed to ensure transparency, alert people when they encounter synthetic media, and inform the public about the training data and models used,” the experts said.
 
The experts reiterated their calls for caution about the radical impact of digital technologies in the context of humanitarian crises, from large-scale data collection – including the collection of highly sensitive biometric data – to the use of advanced targeted surveillance technologies.
 
“We urge restraint in the use of such measures until the broader human rights implications are fully understood and robust data protection safeguards are in place,” they said.
 
They underlined the need to ensure technical solutions – including strong end-to-end encryption and unfettered access to virtual private networks – and secure and protect digital communications.
 
The experts reminded States and businesses of their respective duties and responsibilities, including human rights due diligence requirements when it comes to the development, use, vetting and procuring of digital technologies.
 
“Both industry and States must be held accountable, including for their economic, social, environmental, and human rights impacts,” they said. “The next generation of technologies must not reproduce or reinforce systems of exclusion, discrimination and patterns of oppression.”
 
http://www.ohchr.org/en/press-releases/2023/06/new-and-emerging-technologies-need-urgent-oversight-and-robust-transparency
http://www.ohchr.org/en/statements/2023/09/states-must-guarantee-fundamental-freedoms-online-and-during-use-digital
http://www.rightscon.org/
http://www.accessnow.org/publication/internet-shutdowns-in-2023-mid-year-update/
 
* Neurotechnologies have the potential to decode and alter our perception, behaviour, emotion, cognition and memory – arguably, the very core of what it means to be human. This has major human rights and ethical implications as these devices could be used to invade people’s mental privacy and modify their identity and sense of agency, for example by manipulating people’s beliefs, motivations and desires:
 
http://www.ohchr.org/en/hr-bodies/hrc/advisory-committee/neurotechnologies-and-human-rights
http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-neurorightsfoundation.pdf
http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-oneill-riosrivers.pdf
 
UN Secretary-General Antonio Guterres's remarks on new Policy Brief on Information Integrity on Digital Platforms:
 
New technology is moving at warp speed. And so are the threats that come with it. Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously.
 
The advent of generative AI must not distract us from the damage digital technology is already doing to our world.
 
The proliferation of hate and lies in the digital space is causing grave global harm – now. It is fueling conflict, death and destruction – now. It is threatening democracy and human rights – now. It is undermining public health and climate action – now.
 
When social media emerged a generation ago, digital platforms were embraced as exciting new ways to connect. And, indeed, they have supported communities in times of crisis, elevated marginalized voices and helped to mobilize global movements for racial justice and gender equality.
 
Social media platforms have helped the United Nations to engage people around the world in our pursuit of peace, dignity and human rights on a healthy planet.
 
But today, this same technology is often a source of fear, not hope. Digital platforms are being misused to subvert science and spread disinformation and hate to billions of people. Some of our own UN peacekeeping missions and humanitarian aid operations have been targeted, making their work even more dangerous.
 
This clear and present global threat demands clear and coordinated global action. Our new policy brief on information integrity on digital platforms puts forward a framework for a concerted international response.
 
Its proposals are aimed at creating guardrails to help governments come together around guidelines that promote facts, while exposing conspiracies and lies, and safeguarding freedom of expression and information; and to help tech companies navigate difficult ethical and legal issues and build business models based on a healthy information ecosystem.
 
Governments have sometimes resorted to drastic measures – including blanket internet shutdowns and bans – that lack any legal basis and infringe on human rights. Around the world, some tech companies have done far too little, too late to prevent their platforms from contributing to violence and hatred.
 
The recommendations in this brief seek to make the digital space safer and more inclusive while vigorously protecting human rights. They will inform a UN Code of Conduct for Information Integrity on Digital Platforms that we are developing ahead of next year’s Summit of the Future. The Code of Conduct will be a set of principles that we hope governments, digital platforms and other stakeholders will implement voluntarily.
 
The proposals in this policy brief, in preparation for the Code of Conduct, include:
 
A commitment by Governments, tech companies and other stakeholders to refrain from using, supporting, or amplifying disinformation and hate speech for any purpose.

A pledge by Governments to guarantee a free, viable, independent, and plural media landscape, with strong protections for journalists.
 
The consistent application of policies and resources by digital platforms around the world, to eliminate double standards that allow hate speech and disinformation to flourish in some languages and countries, while they are prevented more effectively in others.
 
Agreed protocols for a rapid response by governments and digital platforms when the stakes are highest – in times of conflict and high social tensions. And a commitment from digital platforms to make sure all products take account of safety, privacy and transparency.
 
That includes urgent and immediate measures to ensure that all AI applications are safe, secure, responsible and ethical, and comply with human rights obligations.
 
The brief proposes that tech companies should undertake to move away from damaging business models that prioritize engagement above human rights, privacy, and safety.
 
It suggests that advertisers – who are deeply implicated in monetizing and spreading damaging content – should take responsibility for the impact of their spending.
 
It recognizes the need for a fundamental shift in incentive structures. Disinformation and hate should not generate maximum exposure and massive profits.
 
http://www.un.org/sg/en/content/sg/speeches/2023-06-12/secretary-generals-opening-remarks-press-briefing-policy-brief-information-integrity-digital-platforms
http://www.un.org/sites/un2.un.org/files/our-common-agenda-policy-brief-gobal-digi-compact-en.pdf
http://www.ohchr.org/en/statements-and-speeches/2023/11/turk-calls-attentive-governance-artificial-intelligence-risks
http://www.ohchr.org/en/documents/thematic-reports/a78538-report-special-rapporteur-contemporary-forms-racism-racial
http://www.globalwitness.org/en/blog/social-media-bosses-must-invest-guarding-global-elections-against-incitement-hate-and-violence/
http://yearofdemocracy.org/campaign-asks/
http://www.washingtonpost.com/world/2023/09/26/hindu-nationalist-social-media-hate-campaign/
http://www.dw.com/en/eu-says-elon-musks-x-is-biggest-source-of-disinformation/a-66930194
http://www.ids.ac.uk/news/african-nations-spending-1bn-a-year-on-harmful-surveillance-of-citizens/
 
* AP Dec. 2023: The U.S. multinational company Google has agreed to settle a $5 billion privacy lawsuit alleging that it spied on people who used the “incognito” mode in its Chrome browser — along with similar “private” modes in other browsers — to track their internet use. The class-action lawsuit filed in 2020 said Google misled users into believing that it wouldn’t track their internet activities while using incognito mode. It argued that Google’s advertising technologies and other techniques continued to catalog details of users’ site visits and activities despite their use of supposedly “private” browsing. Plaintiffs also charged that Google’s activities yielded an “unaccountable trove of information” about users who thought they’d taken steps to protect their privacy.
 
http://www.dw.com/en/journalists-say-tech-giants-google-and-meta-aid-suppression/a-68043102
http://www.amnesty.org/en/latest/news/2023/12/south-korea-google-fails-to-tackle-online-sexual-abuse-content-despite-complaints-by-survivors/
 
May 2023
 
Threats from the misuse of artificial intelligence, report from the British Medical Journal
 
In this section, we describe three sets of threats associated with the misuse of Artificial Intelligence (AI), whether it be deliberate, negligent, accidental or because of a failure to anticipate and prepare to adapt to the transformational impacts of AI on society.
 
The first set of threats comes from the ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras, and to develop highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance.
 
This ability of AI can be put to good use, for example, improving our access to information or countering acts of terrorism. But it can also be misused with grave consequences.
 
The use of this power to generate commercial revenue for social media platforms, for example, has contributed to the rise in polarisation and extremist views observed in many parts of the world. It has also been harnessed by other commercial actors to create a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour.
 
Experimental evidence has shown how AI used at scale on social media platforms provides a potent tool for political candidates to manipulate their way into power, and it has indeed been used to manipulate political opinion and voter behaviour. Cases of AI-driven subversion of elections include the 2013 and 2017 Kenyan elections, the 2016 US presidential election and the 2017 French presidential election.
 
When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts. AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly.
 
This is perhaps best illustrated by China’s Social Credit System, which combines facial recognition software and analysis of ‘big data’ repositories of people’s financial transactions, movements, police records and social relationships to produce assessments of individual behaviour and trustworthiness, which results in the automatic sanction of individuals deemed to have behaved poorly.
 
Sanctions include fines, denying people access to services such as banking and insurance services, or preventing them from being able to travel or send their children to fee-paying schools. This type of AI application may also exacerbate social and health inequalities and lock people into their existing socioeconomic strata.
 
But China is not alone in the development of AI surveillance. At least 75 countries, ranging from liberal democracies to military regimes, have been expanding such systems. Although democracy and rights to privacy and liberty may be eroded or denied without AI, the power of AI makes it easier for authoritarian or totalitarian regimes to be either established or solidified and also for such regimes to be able to target particular individuals or groups in society for persecution and oppression.
 
The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS). There are many applications of AI in military and defence systems, some of which may be used to promote security and peace. But the risks and threats associated with LAWS outweigh any putative benefits.
 
Weapons are autonomous in so far as they can locate, select and ‘engage’ human targets without human supervision. This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.
 
Lethal autonomous weapons come in different sizes and forms. But crucially, they include weapons and explosives that may be attached to small, mobile and agile devices (eg, quadcopter drones) with the intelligence and ability to self-pilot, capable of perceiving and navigating their environment. Moreover, such weapons could be cheaply mass-produced and relatively easily set up to kill at an industrial scale.
 
For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.
 
As with chemical, biological and nuclear weapons, LAWS present humanity with a new weapon of mass destruction, one that is relatively cheap and that also has the potential to be selective about who or what is targeted.
 
This has deep implications for the future conduct of armed conflict as well as for international, national and personal security more generally. Debates have been taking place in various forums on how to prevent the proliferation of LAWS, and about whether such systems can ever be kept safe from cyber-infiltration or from accidental or deliberate misuse.
 
The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology. Projections of the speed and scale of job losses due to AI-driven automation range from tens to hundreds of millions over the coming decade.
 
Much will depend on the speed of development of AI, robotics and other relevant technologies, as well as policy decisions made by governments and society. However, in a survey of most-cited authors on AI in 2012/2013, participants predicted the full automation of human labour shortly after the end of this century.
 
It is already anticipated that in this decade, AI-driven automation will disproportionately impact low/middle-income countries by replacing lower-skilled jobs, and then continue up the skill-ladder, replacing larger and larger segments of the global workforce, including in high-income countries.
 
While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide.
 
However, an optimistic vision of a future where human workers are largely replaced by AI-enhanced automation would include a world in which improved productivity would lift everyone out of poverty and end the need for toil and labour.
 
However, the amount of exploitation our planet can sustain for economic production is limited, and there is no guarantee that any of the added productivity from AI would be distributed fairly across society.
 
Thus far, increasing automation has tended to shift income and wealth from labour to the owners of capital, and appears to contribute to the increasing degree of maldistribution of wealth across the globe.
 
Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health.
 
The threat of self-improving artificial general intelligence
 
Self-improving general-purpose AI, or AGI, is a theoretical machine that can learn and perform the full range of tasks that humans can. By being able to learn and recursively improve its own code, it could improve its capacity to improve itself and could theoretically learn to bypass any constraints in its code and start developing its own purposes, or alternatively it could be equipped with this capacity from the beginning by humans.
 
The vision of a conscious, intelligent and purposeful machine able to perform the full range of tasks that humans can has been the subject of academic and science fiction writing for decades. But regardless of whether conscious or not, or purposeful or not, a self-improving or self-learning general purpose machine with superior intelligence and performance across multiple dimensions would have serious impacts on humans.
 
We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered.
 
If realised, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the ‘biggest event in human history’.
 
Although the effects and outcome of AGI cannot be known with any certainty, multiple scenarios may be envisioned. These include scenarios where AGI, despite its superior intelligence and power, remains under human control and is used to benefit humanity. Alternatively, we might see AGI operating independently of humans and coexisting with humans in a benign way.
 
Logically, however, there are scenarios where AGI could present a threat to humans, and possibly an existential threat, by intentionally or unintentionally causing harm directly or indirectly: by attacking or subjugating humans, by disrupting the systems we depend on, or by using up the resources we depend on.
 
A survey of AI society members predicted a 50% likelihood of AGI being developed between 2040 and 2065, with 18% of participants believing that the development of AGI would be existentially catastrophic. Presently, dozens of institutions are conducting research and development into AGI.
 
Assessing risk and preventing harm
 
Many of the threats described above arise from the deliberate, accidental or careless misuse of AI by humans. Even the risk and threat posed by a form of AGI that exists and operates independently of human control is currently still in the hands of humans. However, there are differing opinions about the degree of risk posed by AI and about the relative trade-offs between risk and potential reward, and harms and benefits.
 
Nonetheless, with exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit.
 
Crucially, as with other technologies, preventing or minimising the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI ‘arms race’. It will also require decision making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest.
 
Worryingly, large private corporations with vested financial interests and little in the way of democratic and public oversight are leading in the field of AGI research.
 
Different parts of the UN system are now engaged in a desperate effort to ensure that our international social, political and legal institutions catch up with the rapid technological advancements being made with AI.
 
In 2020, for example, the UN established a High-level Panel on Digital Cooperation to foster global dialogue and cooperative approaches for a safe and inclusive digital future.
 
In September 2021, the head of the UN Office of the High Commissioner for Human Rights called on all states to place a moratorium on the sale and use of AI systems until adequate safeguards are put in place to avoid the ‘negative, even catastrophic’ risks they pose.
 
And in November 2021, the 193 member states of UNESCO adopted an agreement to guide the construction of the necessary legal infrastructure to ensure the ethical development of AI. However, the UN still lacks a legally binding instrument to regulate AI and ensure accountability at the global level.
 
At the regional level, the European Union has an Artificial Intelligence Act which classifies AI systems into three categories: unacceptable risk, high risk, and limited or minimal risk. This Act could serve as a stepping stone towards a global treaty, although it still falls short of the requirements needed to protect several fundamental human rights and to prevent AI from being used in ways that would aggravate existing inequities and discrimination.
 
There have also been efforts focused on LAWS, with an increasing number of voices calling for stricter regulation or outright prohibition, just as we have done with biological, chemical and nuclear weapons. State parties to the UN Convention on Certain Conventional Weapons have been discussing lethal autonomous weapon systems since 2014, but progress has been slow.
 
What can and should the medical and public health community do? Perhaps the most important thing is to simply raise the alarm about the risks and threats posed by AI, and to make the argument that speed and seriousness are essential if we are to avoid the various harmful and potentially catastrophic consequences of AI-enhanced technologies being developed and used without adequate safeguards and regulation.
 
It is also important that we target our concerns not only at AI itself, but also at the actors who are driving its development too quickly or too recklessly, and at those who seek to deploy it only for self-interest or malign purposes.
 
If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances.
 
This includes ensuring transparency and accountability of the parts of the military–corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy.
 
Given that the world of work and employment will drastically change over the coming decades, we should deploy our public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy to enable future generations to thrive in a world in which human labour is no longer a central or necessary component of the production of goods and services.
 
http://gh.bmj.com/content/8/5/e010435?rss=1 http://managing-ai-risks.com/ http://futureoflife.org/open-letter/pause-giant-ai-experiments/ http://futureoflife.org/project/mitigating-the-risks-of-ai-integration-in-nuclear-launch/ http://futureoflife.org/project/artificial-escalation/ http://www.gladstone.ai/action-plan http://www.safe.ai/statement-on-ai-risk http://theelders.org/news/elders-urge-global-co-operation-manage-risks-and-share-benefits-ai http://futureoflife.org/ai/six-month-letter-expires/ http://www.theguardian.com/technology/2023/sep/21/ai-focused-tech-firms-locked-race-bottom-warns-mit-professor-max-tegmark http://www.unesco.org/en/artificial-intelligence/recommendation-ethics
 
http://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/ http://www.citizen.org/news/a-i-is-already-harming-democracy-competition-consumers-workers-climate-and-more/ http://www.gen-ai.witness.org/ http://lab.witness.org/projects/synthetic-media-and-deep-fakes/ http://algorithmwatch.org/en/study-microsofts-bing-chat/ http://algorithmwatch.org/en/algorithmwatch-ceases-activities-on-x-twitter/ http://carrcenter.hks.harvard.edu/technology-human-rights
 
http://www.hrw.org/news/2024/01/05/algorithms-too-few-people-are-talking-about http://chrgj.org/focus-areas/technology/ http://chrgj.org/focus-areas/technology/transformer-states/ http://chrgj.org/2023/11/14/co-creating-a-shared-human-rights-agenda-for-ai-regulation-and-the-digital-welfare-state/ http://chrgj.org/wp-content/uploads/2023/08/CHRGJ_Contesting-the-Foundations-of-Digital-Public-Infrastructure.pdf http://chrgj.org/wp-content/uploads/2022/06/Report_Paving-a-Digital-Road-to-Hell.pdf http://chrgj.org/2022/07/14/the-aadhaar-mirage-a-second-look-at-the-world-banks-model-for-digital-id-system/
 
http://www.hrw.org/news/2023/05/03/pandoras-box-generative-ai-companies-chatgpt-and-human-rights http://www.hrw.org/topic/technology-and-rights http://www.stopkillerrobots.org/ http://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems http://www.icrc.org/en/war-and-law/weapons/ihl-and-new-technologies http://www.accessnow.org/issue/artificial-intelligence/ http://www.accessnow.org/bodily-harms-how-ai-and-biometrics-curtail-human-rights/ http://www.accessnow.org/wp-content/uploads/2023/12/Content-and-platform-governance-in-times-of-crisis-applying-international-humanitarian-criminal-and-human-rights-law.pdf
 
http://edri.org/our-work/eu-parliament-plenary-ban-of-public-facial-recognition-human-rights-gaps-ai-act/ http://euobserver.com/digital/157163 http://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/ http://www.amnesty.org/en/latest/news/2023/06/global-companies-must-act-now-to-ensure-responsible-development-of-artificial-intelligence/ http://www.amnesty.org/en/latest/news/2023/12/venture-capital-firms-funding-generative-artificial-intelligence-ignoring-duty-to-protect-human-rights/
 
http://www.accessnow.org/press-release/ai-act-failure-for-human-rights-victory-for-industry-and-law-enforcement/ http://www.amnesty.org/en/latest/news/2024/03/eu-artificial-intelligence-rulebook-fails-to-stop-proliferation-of-abusive-technologies http://edri.org/ http://www.article19.org/resources/eu-ai-act-passed-in-parliament-fails-to-ban-harmful-biometric-technologies/ http://www.article19.org/biometric-technologies-privacy-data-free-expression/ http://www.commondreams.org/news/european-union-ai-act http://www.solidar.org/news-and-statements/civil-society-calls-for-regulating-surveillance-technology/ http://www.amnesty.org/en/latest/news/2023/04/eu-european-union-must-protect-human-rights-in-upcoming-ai-act-vote/ http://www.article19.org/resources/eu-artificial-intelligence-act-must-do-more-to-protect-human-rights/
 
http://www.channel4.com/news/ai-watch http://pulitzercenter.org/journalism/initiatives/ai-accountability-network http://www.pbs.org/newshour/politics/as-social-media-guardrails-fade-and-ai-deepfakes-go-mainstream-experts-warn-of-impact-on-elections http://www.ohchr.org/en/press-releases/2024/02/un-expert-alarmed-new-emerging-exploitative-practices-online-child-sexual http://www.weprotect.org/global-threat-assessment-23 http://www.esafety.gov.au/newsroom/media-releases/esafety-demands-answers-from-twitter-about-how-its-tackling-online-hate http://www.icfj.org/news/new-research-illuminates-escalating-online-violence-musks-twitter http://www.theatlantic.com/technology/archive/2023/05/elon-musk-ron-desantis-2024-twitter/674149/ http://www.washingtonpost.com/politics/2023/11/21/musk-media-matters-texas/ http://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html http://www.ohchr.org/en/press-releases/2022/11/un-human-rights-chief-turk-issues-open-letter-twitters-elon-musk http://www.ohchr.org/en/topic/digital-space-and-human-rights


 


Anti-corruption campaigner and prisoner of conscience Alexei Navalny dies in prison
by OHCHR, UNHCR, HRW, Amnesty, agencies
 
Mar. 2024
 
We call on India to implement its human rights obligations and set a positive example by reversing the erosion of human rights. (OHCHR)
 
UN human rights experts have sounded the alarm over reports of attacks on minorities, media and civil society in India and called for urgent corrective action as the country prepares to hold elections in early 2024.
 
“We are alarmed by continuing reports of attacks on religious, racial and ethnic minorities, on women and girls on intersecting grounds, and on civil society, including human rights defenders and the media,” the UN experts said, expressing concern that the situation is likely to worsen in the coming months ahead of national elections.
 
They noted reports of violence and hate crimes against minorities; dehumanising rhetoric and incitement to discrimination and violence; targeted and arbitrary killings; acts of violence carried out by vigilante groups; targeted demolitions of homes of minorities; enforced disappearances; the intimidation, harassment and arbitrary and prolonged detention of human rights defenders and journalists; arbitrary displacement due to development mega-projects; and intercommunal violence, as well as the misuse of official agencies against perceived political opponents.
 
“We call on India to implement its human rights obligations fully and set a positive example by reversing the erosion of human rights and addressing recurring concerns raised by UN human rights mechanisms,” the experts said.
 
“In light of continuing reports of violence and attacks against religious, racial and ethnic minorities, and other grave human rights issues, and the apparent lack of response by authorities to concerns raised, we are compelled to express our grave concern, especially given the need for a conducive atmosphere for free and fair elections in accordance with the early warning aspect of our mandates,” they said.
 
http://www.ohchr.org/en/press-releases/2024/03/india-un-experts-urge-corrective-action-protect-human-rights-and-end-attacks
 
16 Feb. 2024
 
Anti-corruption campaigner and prisoner of conscience Alexei Navalny dies in prison. (UN News)
 
The UN human rights office (OHCHR) on Friday said it was “appalled” over the death announced by Russian authorities of opposition leader Alexei Navalny in prison, calling for an impartial and transparent independent investigation. Mr. Navalny, 47, had lost consciousness and could not be revived, according to media reports.
 
“If someone dies in the custody of the State, the presumption is that the State is responsible – a responsibility that can only be rebutted through an impartial, thorough and transparent investigation carried out by an independent body,” said OHCHR spokesperson Liz Throssell, calling on Russia “to ensure such a credible investigation is carried out”.
 
Any State has a heightened duty to protect the lives of individuals deprived of their liberty, the UN rights office said.
 
Ms. Throssell also called on Russia to end its persecution of opposition politicians, human rights defenders and journalists, among others.
 
“All those who are held or have been sentenced to various prison terms in relation to the legitimate exercise of their rights, including the rights to freedom of peaceful assembly and expression, should be immediately released and all charges against them dropped,” Ms. Throssell said.
 
The Russian authorities – like all States – have a duty under international law to protect the lives of individuals deprived of their liberty. A comprehensive and independent investigation, including a full autopsy must be carried out as a matter of urgency.
 
Mr. Navalny was serving several sentences, including one on fresh extremism charges announced in August, following his arrest in 2021.
 
OHCHR had “repeatedly raised serious concerns relating to the charges against Navalny and his repeated detention which appeared to be arbitrary”, Ms. Throssell said.
 
Last August, UN High Commissioner for Human Rights Volker Türk highlighted that the latest 19-year sentence raised questions about judicial harassment and instrumentalization of the court system for political purposes in Russia and called for Mr. Navalny’s release, the spokesperson said.
 
Mariana Katzarova, the UN Special Rapporteur on the situation of human rights in Russia, issued an alert in December voicing concern over the enforced disappearance of Mr. Navalny, whose whereabouts and wellbeing were unknown for more than 10 days. In late December, Mr. Navalny was transferred to the prison where he reportedly died.
 
UN Special Rapporteur on Torture Alice Edwards said that several UN independent experts, including herself, privately and publicly urged the Russian Government to end the punitive conditions in which Mr. Navalny was held.
 
She said they had called for an investigation into credible allegations of torture against Mr. Navalny and had told the authorities of the essential need for him to receive medical treatment, especially following his alleged poisoning in 2020.
 
“That our appeals to the Kremlin were ignored so blatantly and with such disregard for human life is a tragedy for Mr. Navalny, his family and supporters,” she said. “It is also a bleak day for the rule of law, free expression and human rights.”
 
Since the early 2000s, Mr. Navalny had been a vocal anti-corruption activist and critic of Russian President Vladimir Putin, leading protests and garnering support.
 
In 2020, Mr. Navalny was hospitalized for injuries sustained from a poisoning that involved Novichok, a nerve agent developed by Russia during the cold war.
 
The media reported that on Thursday, Mr. Navalny had appeared in court via video. News reports also noted that Mr. Navalny’s spokesperson was asking for confirmation and further details of his death.
 
The UN Secretary-General is “shocked by the reported death”, UN Spokesperson Stéphane Dujarric said at the Noon Briefing in New York.
 
“The Secretary-General calls for a full, credible and transparent investigation into the circumstances of Mr. Navalny’s reported death in custody,” Mr. Dujarric said.
 
http://news.un.org/en/story/2024/02/1146627 http://news.un.org/en/story/2024/02/1146647 http://www.dw.com/en/alexei-navalny-dies-in-russian-prison-aged-47-authorities/a-68275055 http://www.france24.com/en/europe/20240216-death-of-alexei-navalny-decimates-russia-opposition-putin http://www.bbc.com/news/topics/cpzyrklr4l8t http://www.amnesty.org/en/latest/news/2024/02/russia-prisoner-of-conscience-aleksei-navalny-kremlins-most-vocal-opponent-dies-in-custody/ http://www.europarl.europa.eu/topics/en/article/20211209STO19125/sakharov-prize-2021-parliament-honours-alexei-navalny http://multimedia.europarl.europa.eu/en/topic/sakharov-prize-2021_20502 http://www.france24.com/en/tag/sakharov-prize/
 
UN report highlights risk of more and gross human rights violations if South Sudan’s conflict drivers remain unaddressed. (OHCHR)
 
Unchecked mass violence and entrenched repression in South Sudan threaten the prospects of durable peace and human rights protections; this must urgently be addressed to live up to hopes of the people and commitments of the peace agreement, the UN Commission on Human Rights in South Sudan said in its latest report.
 
Members of the Commission presented their report to the UN Human Rights Council in Geneva.
 
“Our investigations again found an absolutely unacceptable situation in South Sudan, whereby families and communities are devastated by human rights violations and abuses by armed forces, militias and State institutions acting with impunity. Further, the media and civil society groups operate under intolerable conditions which stifle democratic space for the population at large,” said Yasmin Sooka, Chair of the Commission.
 
“The drivers of violence and repression are well known, and while commitments have been made to address them, we continue to see a lack of political will to implement the measures necessary to improve millions of lives,” Sooka said. “South Sudan’s immediate and long-term future hinges on political leaders finally making good on their commitments to bring peace, and reverse cyclical human rights violations.”
 
The report draws on investigations undertaken in South Sudan and the neighbouring region throughout 2023, involving hundreds of witness interviews and meetings, expert open-source and forensic analysis, and dozens of engagements with State authorities.
 
The findings detail the persistence of armed conflict whereby State actors have either instigated or failed to prevent or punish violence, which frequently involves killings, sexual and gender-based crimes, and the displacement of civilian populations.
 
The Commission also identifies the use of children in armed forces, the State’s systemic curtailment of media and civil society actors both in and outside of the country, and the diversion of available State revenues from rule of law, health, and education institutions.
 
Measures to address conflict drivers and human rights violations are laid out in the 2018 Revitalized Peace Agreement, which is scheduled to conclude after the country’s first elections, planned for December.
 
“The transformative promises of the Revitalized Agreement remain unfulfilled, jeopardizing prospects for peace and human rights protections,” said Commissioner Barney Afako.
 
“The process of merging forces is not yet completed, the drafting of a permanent constitution has not started, and none of the three transitional justice institutions are established,” said Afako. “Time is running out for South Sudan’s leaders to implement key commitments, which are the building blocks for peace, for holding the country together, and advancing human rights beyond the elections.”
 
The report finds that patterns of violations remain unchanged and are increasing because the root causes remain unaddressed. Abductions of women and children in Jonglei State and the Greater Pibor Administrative Area appear to be worsening in scale and severity, frequently involving horrific sexual violence and the separation of parents from children.
 
The Commissioners visited these regions last month, spoke with survivors, and delved deeper into the harrowing issues of abductions, forced displacements, sexual slavery, and ransoms.
 
In 2023, authorities paid ransoms to captors in exchange for the release of abductees, which risks incentivising the recurrence of crimes. Many women and children are still missing; other abductees are held hostage as authorities fail to effectively intervene. The perpetrators of abductions previously documented by the Commission had not been punished.
 
“The persistent failure to build a justice system implicates the State in these violations,” said Commissioner Carlos Castresana Fernández. “There is no protective institution between the people and criminals, and it is no coincidence that areas most affected by abductions and other gross violations have few courts and judges, if any. Developing a functioning judiciary is an inter-generational project that must urgently start in earnest.”
 
The report is accompanied by a detailed paper published by the Commission on 5 October 2023, examining in detail the persistence of attacks against journalists and human rights defenders, and pervasive regimes of media censorship and arbitrary restrictions on civic activities, which systemically curtail the democratic and civic space.
 
In the latest report, recommendations to the Government of South Sudan focus on addressing structural drivers of violence. This requires urgently implementing core aspects of the Revitalized Agreement, including establishing transitional justice institutions, as well as by identifying required actions to open democratic space, and enable political processes to be meaningful and legitimate.
 
http://www.ohchr.org/en/press-releases/2024/03/un-report-highlights-risk-more-and-gross-human-rights-violations-if-south http://www.ohchr.org/en/hr-bodies/hrc/co-h-south-sudan/index http://www.globalr2p.org/publications/south-sudan-as-elections-loom-extend-vital-human-rights-commission-mandate/
 
Mar. 2024
 
Spyware ruling a welcome step towards accountability for those targeted with NSO spyware. (Amnesty International)
 
A US district court has ordered the spyware firm NSO Group to disclose documents and code related to its notorious Pegasus spyware, to WhatsApp.
 
Responding to the news, the Head of the Security Lab at Amnesty International, Donncha O Cearbhaill said:
 
“This decision brings us a step closer towards accountability for up to 1,400 WhatsApp users targeted with Pegasus spyware in this case, as well as the countless other individuals around the world, who have continued to be targeted since this case was filed in 2019. This court order sends a clear signal to the surveillance industry that it cannot continue to enable spyware abuse with impunity.
 
“While the court’s decision is a positive development, it is disappointing that NSO Group will be allowed to continue keeping the identity of its clients, who are responsible for this unlawful targeting, secret.”
 
“NSO Group says that it only sells Pegasus to authorized government customers. Our Security Lab has documented the massive scale and breadth of the use of Pegasus against human rights defenders and journalists across the world. It is vital that targets of Pegasus find out who has purchased and deployed the spyware against them so that they can seek meaningful redress.”
 
# Background: The order is part of an ongoing lawsuit in which WhatsApp alleges that NSO Group’s spyware was used to target 1,400 of its users. Various legal efforts by NSO Group to deflect legal accountability in this case have been rejected. Progress towards greater transparency through such legal disclosures is a long overdue avenue for those targeted with NSO Group’s spyware to seek redress for the harms they have experienced.
 
This positive development follows similar news in recent weeks in Poland and Spain where parliamentary and judicial investigations are seeking to uncover the truth behind numerous forensically documented cases of Pegasus spyware misuse against political opponents.
 
http://www.amnesty.org/en/latest/news/2024/03/us-spyware-ruling-a-welcome-step-towards-accountability-for-those-targeted-with-nso-spyware/
 
Feb. 2024
 
Governments Target Nationals Living Abroad. (Human Rights Watch)
 
Governments across the globe are reaching beyond their borders and committing human rights abuses against their own nationals or former nationals to silence or deter dissent, Human Rights Watch said in a report released today. These abuses leave individuals unable to find safety for themselves or their families.
 
Governments and international institutions should take tangible steps to combat what is often referred to as “transnational repression” without inadvertently harming human rights in the process.
 
The 46-page report, “‘We Will Find You’: A Global Look at How Governments Repress Nationals Abroad,” is a rights-centered analysis of how governments are targeting dissidents, activists, political opponents, and others living abroad.
 
Human Rights Watch examined killings, removals, abductions and enforced disappearances, collective punishment of relatives, abuse of consular services, and digital attacks. The report also highlights governments’ targeting of women fleeing abuse, and government misuse of Interpol.
 
“Governments, the United Nations, and other international organizations should recognize transnational repression as a specific threat to human rights,” said Bruno Stagno, chief advocacy officer at Human Rights Watch. “They should prioritize bold policy responses that are in line with a human rights framework and uphold the rights of affected individuals and communities.”
 
The report includes over 75 cases previously documented by Human Rights Watch committed by over two dozen governments, including Algeria, Azerbaijan, Bahrain, Belarus, Cambodia, China, Egypt, Ethiopia, Iran, Kazakhstan, Russia, Rwanda, Saudi Arabia, South Sudan, Tajikistan, Thailand, Türkiye, Turkmenistan, and the United Arab Emirates. The cases are not exhaustive but instead offer a snapshot of cases across four regions.
 
Transnational repression can have far-reaching consequences, exerting a serious chilling effect on the rights to freedom of expression, association, and assembly for those who are targeted, or who fear they could be targeted.
 
The cases reviewed by Human Rights Watch show how governments have targeted human rights defenders, journalists, civil society activists, political opponents, and others they deem a threat.
 
http://www.hrw.org/news/2024/02/22/governments-target-nationals-living-abroad http://srdefenders.org/
 
27 Dec. 2023
 
Online disinformation, AI generated images and hate speech incite mob attack. (UNHCR)
 
UNHCR, the UN Refugee Agency, is deeply disturbed by a mob attack on a site sheltering vulnerable refugee families, the majority of them children and women, in the Indonesian city of Banda Aceh. Hundreds of youths stormed a building basement on Wednesday (27 December 2023) where refugees were sheltered.
 
The mob broke a police cordon and forcibly put 137 refugees on two trucks, and moved them to another location in Banda Aceh. The incident has left refugees shocked and traumatized.
 
UNHCR remains deeply worried about the safety of refugees and calls on local law enforcement authorities for urgent action to ensure protection of all desperate individuals and humanitarian staff.
 
The attack on refugees is not an isolated act but the result of a coordinated online campaign of misinformation, disinformation and hate speech against refugees and an attempt to malign Indonesia’s efforts to save desperate lives in distress at sea.
 
UNHCR reminds everyone that desperate refugee children, women and men seeking shelter in Indonesia are victims of persecution and conflict, and survivors of deadly sea journeys. Indonesia – with its longstanding humanitarian tradition – has helped save these desperate people, who would otherwise have died at sea, like hundreds of others.
 
The UN Refugee Agency is also alerting the general public to be aware of the coordinated and well-choreographed online campaign on social media platforms, attacking authorities, local communities, refugees and humanitarian workers alike, inciting hate and putting lives in danger.
 
UNHCR appeals to the public in Indonesia to cross-check information posted online, much of it false or twisted, with AI generated images and hate speech being sent from bot accounts.
 
http://www.unhcr.org/asia/news/press-releases/unhcr-disturbed-over-mob-attack-and-forced-eviction-refugees-aceh-indonesia http://news.un.org/en/story/2023/12/1145117 http://observers.france24.com/en/asia-pacific/20240220-misinformation-endangering-rohingya-refugees-indonesia http://www.unicef.org/parenting/how-talk-your-children-about-hate-speech
 
* UN High Commissioner for Human Rights Volker Türk's global update to the UN Human Rights Council on March 4, 2024:
 
http://www.ohchr.org/en/statements-and-speeches/2024/03/turks-global-update-human-rights-council

