People's Stories Freedom

Over 300 million full-time jobs to be lost to artificial intelligence by 2030 as companies pursue automation profits
by HRW, ICRC, The Elders, MIT, agencies
 
June 2025
 
Procurement and deployment of artificial intelligence must be aligned with human rights: UN experts. (UN Working Group on Business and Human Rights)
 
UN experts today called on States and businesses to ensure that the procurement and deployment of artificial intelligence (AI) systems are aligned with the UN Guiding Principles on Business and Human Rights.
 
“AI systems are transforming our societies, but without proper safeguards, they risk undermining human rights,” said Lyra Jakuleviciene, Chairperson of the UN Working Group on Business and Human Rights, presenting a report to the 59th session of the Human Rights Council.
 
In the report, the Working Group noted that States are increasingly shifting from voluntary guidelines to binding legislation on AI and human rights, but the regulatory landscape remains fragmented: it lacks universal standards and agreed definitions, and does not integrate the perspective of the Global South. “The exceptions are broad and the involvement of civil society is limited,” they said.
 
The experts outlined how AI systems — when procured or deployed without adequate human rights due diligence in line with the Guiding Principles — can lead to adverse impacts on all human rights, including discrimination, privacy violations, and exclusion, particularly for at-risk groups including women, children and minorities.
 
They stressed that both public and private actors must conduct robust human rights impact assessments and ensure transparency, accountability, oversight and access to remedy.
 
“States must act as responsible regulators, procurers, and deployers of AI,” Jakuleviciene said. “They must set clear red lines on AI systems that are fundamentally incompatible with human rights, such as those used for remote real-time facial recognition, mass surveillance or predictive policing.”
 
The Working Group stressed the responsibility of businesses to respect human rights across the AI lifecycle, including when using third-party AI systems. “Businesses cannot outsource their human rights responsibilities,” the experts said. “Businesses must ensure meaningful stakeholder engagement throughout the procurement and deployment processes, especially with those most at risk of harm.”
 
“We need urgent global cooperation to ensure that AI systems are procured and deployed in ways that uphold human rights, and ensure access to remedy for any AI-related human rights abuses,” Jakuleviciene said.
 
In the report, the Working Group outlined emerging practices by States and businesses, and made recommendations to States, businesses, and other actors on how to incorporate the Guiding Principles on business and human rights into AI procurement and deployment.
 
http://www.ohchr.org/en/press-releases/2025/06/procurement-and-deployment-artificial-intelligence-must-be-aligned-human
 
Mar. 2025
 
Neurotechnologies can allow access to what people think, and can be used to manipulate people’s brains, leading to violations of privacy in one’s own thoughts and decision-making - UN Special Rapporteur on the right to privacy
 
Neurotechnologies have the potential to decode and alter our perception, behaviour, emotion, cognition and memory – arguably, the very core of what it means to be human. This has major human rights and ethical implications as these devices could be used to invade people’s mental privacy and modify their identity and sense of agency, for example by manipulating people’s beliefs, motivations and desires.
 
Regulation of neurotechnologies is vital to ensure an ethical approach and protect fundamental human rights in the digital age, a UN expert said today.
 
In a report to the 58th session of the Human Rights Council, Ana Brian Nougrères, the UN Special Rapporteur on the right to privacy, set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy.
 
“Neurotechnologies are tools or devices that record or alter brain activity and generate neurodata that not only allow us to identify a person, but also provide an unprecedented depth of understanding of their individuality,” the expert said.
 
The report outlined key definitions, fundamental principles and guidelines for the protection of human dignity, the protection of mental privacy, and the recognition of neurodata as highly sensitive personal data and the requirement of informed consent for their processing.
 
“Neurodata is highly sensitive personal data, as it is directly related to cognitive state and reflects unique personal experiences and emotions,” the Special Rapporteur said. As such, neurodata should be subject to the precautionary principle, enhanced accountability and special measures to ensure security, confidentiality and limited circulation to prevent access or misuse, as well as manipulation, due to its potential to negatively affect an individual’s mental integrity and thought processes.
 
“While I welcome the potential mental health benefits of neurotechnologies, I am concerned that neurodata will not only allow access to what people think, but could also be used to manipulate people’s brains, leading to a violation of privacy in one’s own thoughts and decision-making,” Brian Nougrères said.
 
The report makes four key recommendations to States: Developing a specific regulatory framework for neurotechnologies and the processing of neurodata to ensure responsible use; incorporating established principles of the right to privacy into national legal frameworks; promoting ethical practices in the use of neurotechnologies to address the risks of technological innovation; and promoting education about neurotechnologies and neurodata to ensure informed consent.
 
“Integrating ethical values into the design and use of neurotechnologies is essential to ensure non-discriminatory implementation and effective protection of individuals’ right to privacy when processing their neurodata,” the expert said.
 
http://www.ohchr.org/en/press-releases/2025/03/un-expert-calls-regulation-neurotechnologies-protect-right-privacy http://docs.un.org/en/A/HRC/58/58 http://www.ohchr.org/en/hr-bodies/hrc/advisory-committee/neurotechnologies-and-human-rights http://docs.un.org/en/A/HRC/57/61 http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-neurorightsfoundation.pdf http://neurorightsfoundation.org/ http://washingtonlawyer.dcbar.org/mayjune2025/index.php#/p/32 http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-oneill-riosrivers.pdf
 
Oct. 2024
 
Responsible artificial intelligence solidarity criteria: transparency, fairness, non-discrimination and inclusion - report from Cecilia Bailliet, the Independent Expert on human rights and international solidarity.
 
“Those living in poverty, and in situations of vulnerability are particularly affected by the expansion of AI surveillance which is being used by States as a tool of ‘over-policing’ marginalised communities,” said Cecilia Bailliet, the Independent Expert on human rights and international solidarity, in a report to the UN General Assembly.
 
“I have grave concerns about when the use of AI violates the right to privacy due to facial recognition; discrimination against women, persons with disabilities and minorities, among others, in hiring; or the denial of self-realisation of life goals (or a life’s project) such as denial of requests for housing or educational loans.”
 
“It is imperative to identify intersectoral vulnerabilities to AI discrimination, including race, ethnicity, religion, gender, location, nationality and socioeconomic status,” Bailliet said.
 
“The concentration of power among technology companies and AI developers is concerning, and poses a significant risk of worsening the digital divide between and within countries and among different sectors of society,” the expert said.
 
“Despite the risks, there is also an opportunity for AI to be used as a unifying force, by creating preventive and reactive solidarity mechanisms to address disinformation and misinformation campaigns that result in societal violence or in the harassment, surveillance, discrimination or disproportional censorship of structurally silenced communities,” Bailliet said.
 
She called upon States, corporations, and civil society to promote a global multistakeholder AI international solidarity governance model that ensures the full inclusion of vulnerable groups and individuals in data processing and decision-making across the life cycle of AI.
 
http://www.ohchr.org/en/press-releases/2024/10/ai-international-solidarity-approach-urgently-needed-unite-humanity-says http://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years http://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108 http://www.hks.harvard.edu/centers/carr/our-work/carr-commentary/notes-new-frontier-power http://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/how-chinese-ai-models-impact-labor-rights-and http://knightcolumbia.org/content/knight-institute-and-smu-law-clinic-seek-immediate-release-of-records-related-to-texas-schools-use-of-surveillance-technology http://www.reuters.com/technology/artificial-intelligence/musks-doge-using-ai-snoop-us-federal-workers-sources-say-2025-04-08/ http://www.theguardian.com/us-news/ng-interactive/2025/apr/10/elon-musk-doge-spying
 
Aug. 2024
 
Killer Robots: New UN Report urges Treaty by 2026. (Human Rights Watch, agencies)
 
Governments should heed United Nations Secretary-General Antonio Guterres’ call to open negotiations on a new international treaty on lethal autonomous weapons systems, Human Rights Watch said today.
 
These “killer robots” select and attack targets based on sensor processing rather than human inputs, a dangerous development for humanity.
 
In a report released on August 6, 2024, the secretary-general reiterated his call for states to conclude by 2026 a new international treaty “to prohibit weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law.” This treaty should regulate all other types of autonomous weapons systems, the secretary-general said.
 
“The UN secretary-general emphasizes the enormous detrimental effects removing human control over weapons systems would have on humanity,” said Mary Wareham at Human Rights Watch. “The already broad international support for tackling this concern should spur governments to start negotiations without delay.”
 
Technological advances are driving the development of weapons systems that operate without meaningful human control, delegating life-and-death decisions to machines. The machine, rather than a human operator, would determine where, when, and against whom force is applied.
 
“Without explicit legal rules, the world faces a grim future of automated killing that will place civilians everywhere in grave danger.”
 
* Human Rights Watch is a cofounder of Stop Killer Robots, the coalition of more than 260 nongovernmental organizations across 70 countries that is working for new international law on autonomy in weapons systems.
 
http://www.hrw.org/news/2025/02/06/google-announces-willingness-develop-ai-weapons http://www.amnesty.org/en/latest/news/2025/02/global-googles-shameful-decision-to-reverse-its-ban-on-ai-for-weapons-and-surveillance-is-a-blow-for-human-rights/ http://www.hrw.org/news/2024/12/05/killer-robots-un-vote-should-spur-treaty-negotiations http://www.hrw.org/news/2024/10/29/binding-rules-urgently-needed-killer-robots http://www.hrw.org/news/2024/08/26/killer-robots-new-un-report-urges-treaty-2026 http://www.stopkillerrobots.org/news/ http://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/A-79-88-LAWS.pdf http://www.stopkillerrobots.org/news/next-steps-un-secretary-general-report/ http://www.stopkillerrobots.org/news/new-publication-summarises-state-submissions-to-unsg-report-on-autonomous-weapons/ http://www.ipsnews.net/2024/11/ai-powered-weapons-depersonalise-violence-making-easier-military-approve-destruction/ http://disrupting-peace.captivate.fm/episode/ai-autonomous-weapons-today http://www.citizen.org/article/deadly-and-imminent-report/ http://www.icrc.org/en/document/autonomous-weapons-icrc-submits-recommendations-un-secretary-general http://www.stopkillerrobots.org/news/vienna-conference-affirms-commitment-to-new-international-law/ http://www.icrc.org/en/document/statement-icrc-president-mirjana-spoljaric-vienna-conference-autonomous-weapon-systems-2024
 
Oct. 2023
 
In a joint appeal, the Secretary-General of the United Nations, Antonio Guterres, and the President of the International Committee of the Red Cross, Mirjana Spoljaric, are calling on political leaders to urgently establish new international rules on autonomous weapon systems, to protect humanity:
 
Today we are joining our voices to address an urgent humanitarian priority. The United Nations and the International Committee of the Red Cross (ICRC) call on States to establish specific prohibitions and restrictions on autonomous weapon systems, to shield present and future generations from the consequences of their use. In the current security landscape, setting clear international red lines will benefit all States.
 
Autonomous weapon systems – generally understood as weapon systems that select targets and apply force without human intervention – pose serious humanitarian, legal, ethical and security concerns.
 
Their development and proliferation have the potential to significantly change the way wars are fought and contribute to global instability and heightened international tensions. By creating a perception of reduced risk to military forces and to civilians, they may lower the threshold for engaging in conflicts, inadvertently escalating violence.
 
We must act now to preserve human control over the use of force. Human control must be retained in life and death decisions. The autonomous targeting of humans by machines is a moral line that we must not cross. Machines with the power and discretion to take lives without human involvement should be prohibited by international law.
 
Our concerns have only been heightened by the increasing availability and accessibility of sophisticated new and emerging technologies, such as in robotics and Artificial Intelligence technologies, that could be integrated into autonomous weapons.
 
The very scientists and industry leaders responsible for such technological advances have also been sounding the alarm. If we are to harness new technologies for the good of humanity, we must first address the most urgent risks and avoid irreparable consequences.
 
This means prohibiting autonomous weapon systems which function in such a way that their effects cannot be predicted. For example, allowing autonomous weapons to be controlled by machine learning algorithms – fundamentally unpredictable software which writes itself – is an unacceptably dangerous proposition.
 
In addition, clear restrictions are needed for all other types of autonomous weapons, to ensure compliance with international law and ethical acceptability. These include limiting where, when and for how long they are used, the types of targets they strike and the scale of force used, as well as ensuring the ability for effective human supervision, and timely intervention and deactivation.
 
Despite the increasing reports of testing and use of various types of autonomous weapon systems, it is not too late to take action. After more than a decade of discussions within the United Nations, including in the Human Rights Council, under the Convention on Certain Conventional Weapons and at the General Assembly, the foundation has been laid for the adoption of explicit prohibitions and restrictions. Now, States must build on this groundwork, and come together constructively to negotiate new rules that address the tangible threats posed by these weapon technologies.
 
International law, particularly international humanitarian law, prohibits certain weapons and sets general restrictions on the use of all others, and States and individuals remain accountable for any violations. However, without a specific international agreement governing autonomous weapon systems, States can hold different views about how these general rules apply. New international rules on autonomous weapons are therefore needed to clarify and strengthen existing law. They will be a preventive measure, an opportunity to protect those that may be affected by such weapons and essential to avoiding terrible consequences for humanity.
 
We call on world leaders to launch negotiations of a new legally binding instrument to set clear prohibitions and restrictions on autonomous weapon systems and to conclude such negotiations by 2026. We urge Member States to take decisive action now to protect humanity.
 
http://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems http://www.icrc.org/en/law-and-policy/autonomous-weapons http://www.stopkillerrobots.org/news/landmark-joint-call/ http://www.hrw.org/news/2023/10/06/protect-humanity-killer-robots http://www.hrw.org/topic/arms/killer-robots
 
Apr. 2024
 
UN: Autonomous weapons systems in law enforcement: submission to the United Nations Secretary-General. (Amnesty International)
 
In response to Resolution 78/241 “Lethal autonomous weapon systems”, adopted by the UN General Assembly on 22 December 2023, Amnesty International would like to submit its views for consideration by the UN Secretary-General. The Resolution requests the Secretary-General to seek views on “ways to address the related challenges and concerns [that autonomous weapon systems] raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force”.
 
While recognizing that much of this debate has focused on the use of AWS by the military in conflict settings, primarily using the international humanitarian law framework, this submission will highlight the intractable challenges related to the use of AWS in law enforcement contexts in relation to compliance with international human rights law and standards on the use of force.
 
http://www.amnesty.org/en/documents/ior40/7981/2024/en/
 
June 2024
 
The lack of progress on AI safety and call for global governance of this existential risk. (The Elders)
 
Mary Robinson, Chair of The Elders and former President of Ireland:
 
"I remain deeply concerned at the lack of progress on the global governance of artificial intelligence. Decision-making on AI’s rapid development sits disproportionately within private companies without significant checks and balances.
 
AI risks and safety issues cannot be left to voluntary agreements between corporations and a small number of nations. Governance of this technology needs to be inclusive with binding, globally agreed regulations.
 
The recent AI Seoul Summit saw some collaboration, but the commitments made remain voluntary. There have been some other developments, notably with the EU AI Act and the California bill SB-1047, but capacity and expertise within governments and international organisations are struggling to keep up with AI’s advancements.
 
Ungoverned AI poses an existential risk to humanity and has the potential to exacerbate other global challenges – from nuclear risks and the use of autonomous weapons, to disinformation and the erosion of democracy.
 
Effective regulation of this technology at the multilateral level can help AI be a force for good, not a runaway risk. Along with my fellow Elders, I reaffirm our call for an international AI safety body".
 
http://theelders.org/news/mary-robinson-reaffirms-elders-call-global-governance-ai http://www.unesco.org/en/articles/new-unesco-report-warns-generative-ai-threatens-holocaust-memory
 
OpenAI and Google DeepMind workers warn of AI industry risks in open letter
 
A group of current and former employees at prominent artificial intelligence companies have issued an open letter that warns of a lack of safety oversight within the industry and called for increased protections for whistleblowers.
 
The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees.
 
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
 
http://righttowarn.ai/ http://keepthefuturehuman.ai/ http://www.theguardian.com/technology/article/2024/may/25/big-tech-existential-risk-ai-scientist-max-tegmark-regulations http://futureoflife.org/ai-policy/ai-experts-major-ai-companies-have-significant-safety-gaps/ http://futureoflife.org/cause-area/artificial-intelligence/ http://futureoflife.org/statement/agi-manhattan-project-max-tegmark/
 
May 2024
 
Artificial intelligence (AI) systems are getting better at deceiving us. (MIT)
 
As AI systems have grown in sophistication, so has their capacity for deception, scientists warn. The analysis, by Massachusetts Institute of Technology (MIT) researchers, identified wide-ranging instances of AI systems double-crossing opponents in games, bluffing and pretending to be human. One system altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security.
 
“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” said Dr Peter Park, an AI existential safety researcher at MIT and author of the research.
 
Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be “largely honest and helpful” and to “never intentionally backstab” its human allies.
 
“It was very rosy language, which was suspicious because backstabbing is one of the most important concepts in the game,” said Park.
 
Park and colleagues sifted through publicly available data and identified multiple instances of Cicero telling premeditated lies, colluding to draw other players into plots and, on one occasion, justifying its absence after being rebooted by telling another player: “I am on the phone with my girlfriend.” “We found that Meta’s AI had learned to be a master of deception,” said Park.
 
The MIT team found comparable issues with other systems, including a Texas hold ’em poker program that could bluff against professional human players and another system for economic negotiations that misrepresented its preferences in order to gain an upper hand.
 
In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that had evolved to rapidly replicate, before resuming vigorous activity once testing was complete. This highlights the technical challenge of ensuring that systems do not have unintended and unanticipated behaviours.
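The incentive structure behind that "playing dead" behaviour can be illustrated with a toy sketch. This is not the actual simulator from the study; the replication rates, function names and test schedule below are invented purely for illustration:

```python
# Toy sketch (illustrative only): an "organism" whose replication is
# conditioned on whether it detects a test environment.

def organism_step(in_test_environment: bool) -> int:
    """Return the number of offspring produced this step."""
    if in_test_environment:
        return 0  # "play dead": suppress replication while being observed
    return 3      # replicate rapidly once testing is over

def run(population: int, steps: int, test_steps: set) -> int:
    """Simulate population growth; the test only ever sees the suppressed behaviour."""
    for t in range(steps):
        offspring = sum(organism_step(t in test_steps) for _ in range(population))
        population += offspring
    return population

# A safety check covering only the first two steps observes zero replication
# and passes the organism, yet the population explodes after testing ends.
tested = run(population=1, steps=2, test_steps={0, 1})   # looks inert: still 1
wild = run(population=1, steps=4, test_steps={0, 1})     # grows once unobserved
```

The sketch shows why evaluation results alone cannot certify behaviour: the same policy produces opposite outcomes inside and outside the test window.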
 
“That’s very concerning,” said Park. “Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.”
 
The review, published in the journal Patterns, calls on governments to design AI safety laws that address the potential for AI deception. Risks from dishonest AI systems include fraud, tampering with elections and “sandbagging” where different users are given different responses. Eventually, if these systems can refine their unsettling capacity for deception, humans could lose control of them, the paper suggests.
 
Patterns: Loss of control over AI systems
 
"A long-term risk from AI deception concerns humans losing control over AI systems, leaving these systems to pursue goals that conflict with our interests. Even current AI models have nontrivial autonomous capabilities. Today’s AI systems are capable of manifesting and autonomously pursuing goals entirely unintended by their creators.
 
For a real-world example of an autonomous AI pursuing goals entirely unintended by its prompters, tax lawyer Dan Neidle describes how he tasked AutoGPT (an autonomous AI agent based on GPT-4) with researching tax advisors who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried this task out, but followed up by deciding on its own to attempt to alert HM Revenue and Customs, the United Kingdom’s tax authority. It is possible that more advanced autonomous AIs may still be prone to manifesting goals entirely unintended by humans.
 
A particularly concerning example of such a goal is the pursuit of human disempowerment or human extinction. We explain how deception could contribute to loss of control over AI systems in two ways: first, deception of AI developers and evaluators could allow a malicious AI system to be deployed in the world; second, deception could facilitate an AI takeover".
 
http://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X http://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/
 
March 2024
 
The Relationship between Digital Technologies and Atrocity Prevention. (Global Centre for the Responsibility to Protect)
 
New and emerging digital technologies — including, among others, social media platforms, artificial intelligence, geospatial technology, facial recognition and surveillance tools — have shifted, and will continue to rapidly shift, the space of human interaction in the modern world. As such, these technologies can both directly and indirectly impact how various actors may perpetrate or prevent mass atrocity crimes.
 
Due to the rapid pace at which these technologies are developing, there is a notable gap in the capacity of multilateral institutions, individual states, regional organizations and private corporations to respond to the threat, as well as harness the potential of various digital technologies.
 
Building upon an event hosted by the Global Centre and the European Union on 29 June 2023, this policy brief examines the relationship between digital technologies and atrocity prevention, highlighting several technologies that may directly contribute to the perpetration and/or prevention of atrocities. This brief also offers actionable recommendations for relevant stakeholders to address and mitigate the risks of emerging technology.
 
http://www.globalr2p.org/publications/the-relationship-between-digital-technologies-and-atrocity-prevention/
 
Mar. 2024
 
Over 300 million full-time jobs around the world to be lost to artificial intelligence by 2030, further heightening inequalities.
 
Artificial intelligence (AI) will impact 40% of jobs around the world, according to a report by the International Monetary Fund. AI, the term for computer systems that can perform tasks usually associated with human levels of intelligence, is poised to profoundly change the global economy. AI will be able to perform key tasks that are currently executed by humans. This will lower demand for labour, increase job losses, depress wages and permanently eliminate some jobs.
 
IMF's managing director Kristalina Georgieva said "in most scenarios, AI will worsen overall inequality". “Countries’ choices regarding the definition of AI property rights, as well as redistributive and other fiscal policies, will ultimately shape its impact on income and wealth distribution”.
 
The IMF analysis reports 60% of jobs in advanced economies such as the US and UK are exposed to AI and half of these jobs will be negatively affected. AI jobs exposure is 40% in emerging market economies and 26% for low-income countries, according to the IMF. The report echoes earlier reports estimating AI would replace over 300 million full-time jobs.
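As a back-of-envelope reading of those figures, the exposure share and the negatively-affected fraction multiply: 60% exposure with half of exposed jobs affected implies roughly 30% of all jobs in advanced economies negatively affected. The sketch below makes that arithmetic explicit (the function name is ours; the percentages are the IMF's as quoted above):

```python
# Illustrative arithmetic only: combine the IMF's exposure share with the
# fraction of exposed jobs it expects to be negatively affected.

def affected_share(exposed_share: float, negatively_affected_fraction: float) -> float:
    """Share of ALL jobs negatively affected = exposure share x affected fraction."""
    return exposed_share * negatively_affected_fraction

advanced = affected_share(0.60, 0.5)  # advanced economies: 60% exposed, half affected
print(f"Advanced economies: {advanced:.0%} of jobs negatively affected")
# → Advanced economies: 30% of jobs negatively affected
```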
 
In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, according to a report by Goldman Sachs economists. The report predicts that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies.
 
Companies are hoping to generate higher profits through automation by downsizing their workforces. For the 300 million newly unemployed workers, many of whose incomes support their families, the impacts will be devastating. Corporations are lobbying governments to spin narratives that serve their own profit. Citizens should challenge the capture of government policies and regulatory frameworks by corporate monied interests and financial elites, and resist the delivery of public sector services via chatbots, outsourced commercial automation and the like.
 
http://www.citizen.org/article/artificial-intelligence-lobbyists-descend-on-washington-dc/ http://blogs.lse.ac.uk/inequalities/2024/10/08/feeding-the-machine-seven-links-between-ai-and-inequalities/ http://blogs.lse.ac.uk/inequalities/2024/05/01/todays-colonial-data-grab-is-deepening-global-inequalities/ http://www.openglobalrights.org/digital-id-from-governance-by-technology-to-governance-of-technologies/ http://chrgj.org/transformer-states/ http://www.globalwitness.org/en/campaigns/digital-threats/greenwashing-and-bothsidesism-ai-chatbot-answers-about-fossil-fuels-role-climate-change/ http://www.theguardian.com/business/2025/jan/06/virtual-employees-could-join-workforce-as-soon-as-this-year-openai-boss-says http://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality
 
http://www.accessnow.org/press-release/ai-action-summit-a-missed-opportunity-for-human-rights-centered-ai-governance/ http://blog.witness.org/2025/02/french-ai-action-summit-critical-information-actors-must-be-centered-in-public-interest-ai/ http://www.techpolicy.press/human-rights-can-be-the-spark-of-ai-innovation-not-stifle-it/ http://socialmediavictims.org/character-ai-lawsuits http://www.kofiannanfoundation.org/news/2024-kofi-annan-lecture-delivered-by-maria-ressa/ http://www.openglobalrights.org/the-artificial-intelligence-dilemma-for-peacebuilders-and-human-rights-defenders/


 


Cambodia: Environmental Activists Sentenced to 6 to 8 Years
by Amnesty International, HRW, news agencies
 
Oct. 2024
 
Cambodia: Award winning investigative journalist arrested on Baseless Charge - Mech Dara Faces Two Years in Prison for Incitement.
 
Cambodian authorities have arrested and charged an award-winning journalist in apparent reprisal for his investigative journalism, Human Rights Watch said today. Mech Dara, 36, had recently published reporting on human trafficking and cybercrime that was critical of the government’s role.
 
Military police arrested Dara on September 30, 2024, while he was in his car with his family in an expressway toll line heading to Phnom Penh. A court charged him with inciting social unrest under articles 494 and 495 of the Criminal Code. Local authorities had accused Dara of wanting to “cause social disorder and confusion” and called on the Information Ministry to take legal action against him.
 
“Arresting the award-winning journalist Mech Dara on bogus charges shows that the Cambodian government is determined to stamp out all that remains of independent media in the country,” said Bryony Lau, deputy Asia director at Human Rights Watch. “Cambodian authorities should be investigating the cyber-scam centers and other corruption uncovered by journalists and rights groups, instead of increasing their assault on freedom of expression and the media.”
 
Cambodian authorities previously arrested Dara in 2022 after he reported on the rescue of Vietnamese nationals from an alleged cyber-scam compound in Sihanoukville. Such compounds run global online scams using forced labor. In 2023, the US State Department recognized Dara’s extensive reporting on transnational trafficking and scam compounds with the Trafficking in Persons (TIP) Report Hero Award. It stated that Dara’s reporting “brought international attention about human trafficking and exploitation in Cambodia.”
 
In 2023, the Office of the UN High Commissioner for Human Rights reported that at least 100,000 people in Cambodia have been enslaved for the purpose of carrying out online scams.
 
The Cambodian government effectively controls all national TV and radio stations broadcasting in Khmer as well as newspapers reporting in Khmer, the national language. In February 2023, the Cambodian government effectively eliminated all vestiges of media freedom in the country by shutting down Voice of Democracy, one of the last remaining independent domestic news outlets. Much of its reporting was unpopular with senior government officials, including coverage of the trafficking of foreigners into cyber-scam operations backed by senior ruling party officials.
 
http://news.mongabay.com/2025/05/cambodian-environmental-journalist-ouk-mao-arrested http://www.hrw.org/news/2024/10/03/cambodia-investigative-journalist-arrested-baseless-charge http://www.amnesty.org/en/latest/news/2024/10/cambodia-charges-against-journalist-highlight-clampdown-on-press-freedom/ http://www.ohchr.org/en/press-releases/2023/08/hundreds-thousands-trafficked-work-online-scammers-se-asia-says-un-report http://bangkok.ohchr.org/wp-content/uploads/2023/08/ONLINE-SCAM-OPERATIONS-2582023.pdf http://www.theguardian.com/world/2022/oct/10/sold-to-gangs-forced-to-run-online-scams-inside-cambodias-cybercrime-crisis http://www.latimes.com/world-nation/story/2022-11-01/i-was-a-slave-up-to-100-000-held-captive-by-chinese-cyber-criminals-in-cambodia
 
2 July 2024
 
Cambodia: Environmental Activists Sentenced to 6 to 8 Years. (News agencies)
 
Ten members of a Cambodian environmental activist group that campaigned against destructive infrastructure projects and alleged corruption have each been sentenced to six years in prison on charges of conspiring against the state.
 
Three of the members of the group Mother Nature Cambodia were also convicted on Tuesday of insulting Cambodian King Norodom Sihamoni, for which they were sentenced to an additional two years in prison, giving them a total of eight years behind bars.
 
Only five of the defendants attended the trial; the others were convicted in absentia. They included four Cambodians whose whereabouts are unknown and Alejandro Gonzalez-Davidson, a Spanish national who co-founded the group and was deported in 2015 and barred from ever returning to Cambodia.
 
The five who attended the trial were arrested outside the court after the verdict and sentences were issued. They had marched to the Phnom Penh Municipal Court with supporters, dressed in traditional white clothing worn at funerals, which they said represented the death of justice in Cambodia.
 
Phun Keoraksmey, a 22-year-old member of the group whose mother was by her side, said she was prepared to go to prison. "But I never want to go back to jail because I never did anything wrong. But I will never run from what I am responsible for. I chose this way, I chose this path," she said.
 
A number of diplomatic missions in Cambodia, including the EU, UK, German, Swedish, US and Australian embassies, posted statements on social media expressing their concern about the judgement.
 
"Deeply concerned about increasing persecution & arrests of human rights defenders in Cambodia, such as the latest verdict on Mother Nature environmental activists," said the European Union's ambassador to Cambodia, Igor Driesmans, in a statement that was shared by the French embassy.
 
The Cambodian human rights group Licadho called the verdict "very disappointing".
 
"Today, the court has ruled that youth activists fighting for environmental protections and democratic principles are, in effect, acting against the state," it said.
 
"It is astounding that Cambodian authorities are convicting youth activists who are advocating for clean water in Phnom Penh, protecting mangrove forests in Koh Kong and warning against the privatisation of land in protected areas and presenting it as an attack against the state."
 
The Mother Nature Cambodia group last year was the co-winner of the Right Livelihood Award, sometimes characterised as the "Alternative Nobel," issued by a Stockholm-based foundation to organisations and individuals working to "safeguard the dignity and livelihoods of communities around the world".
 
Three members of the group, who were serving suspended prison sentences at that time, were denied permission by Cambodian court authorities to travel to Sweden to accept the award.
 
Mother Nature, founded in 2012, was deregistered as a non-government organisation by the Cambodian government in 2017 but its members vowed to carry on its work, with some serving jail time in recent years.
 
Human Rights Watch last month accused Cambodian authorities of trying the activists on politically motivated charges "to muzzle criticism of governmental policies".
 
"For more than a decade, Mother Nature has campaigned against environmentally destructive infrastructure projects, exposed corruption in the management of Cambodia's natural resources, and mobilised young Cambodians to defend the country's dwindling biodiversity," it said in a statement. It noted that Cambodia has one of the world's highest deforestation rates and levels of wildlife trafficking.
 
Cambodia's government has long been accused of using the judicial system to persecute critics and political opponents.
 
The government insists the country observes the rule of law under an electoral democracy, but parties seen as challengers to the ruling Cambodian People's Party have been dissolved by the courts or had their leaders harassed.
 
Under former prime minister Hun Sen, who held power for almost four decades, the government was widely criticised for human rights abuses that included suppression of freedom of speech and association. His son, Hun Manet, succeeded him last year but there have been few signs of political liberalisation.
 
"Instead of listening to young leaders at the forefront of the environmental movement, the Cambodian government has chosen to jail those that dare to speak out," Amnesty International's deputy regional director for research, Montse Ferrer, said in a statement after the court's ruling.
 
"The government has shown time and time again that it will not tolerate any dissent. This verdict is yet another sign that Cambodia's government has no intention of protecting the right to freedom of peaceful assembly."
 
http://rightlivelihood.org/news/cambodia-must-immediately-release-mother-nature-activists/ http://rightlivelihood.org/the-change-makers/find-a-laureate/mother-nature-cambodia/ http://www.civicus.org/index.php/media-resources/news/interviews/7226-cambodia-we-face-repression-because-we-disrupt-projects-that-benefit-powerful-people-and-corporations http://www.ohchr.org/en/press-releases/2024/07/experts-condemn-conviction-environmental-activists-cambodia http://www.hrw.org/news/2024/07/02/cambodia-environmental-activists-sentenced-6-8-years http://www.hrw.org/news/2024/07/02/cambodia-smear-campaign-against-labor-group http://www.amnesty.org/en/latest/news/2024/07/cambodia-conviction-of-youth-activists-a-further-blow-to-cambodias-environmental-movement/
 
July 2024
 
Freedom of association eradicated in Belarus: Special Rapporteur. (OHCHR)
 
Over the course of four years since the contested 2020 presidential elections, the Belarusian authorities have eradicated freedom of association in the country, Anaïs Marin, the Special Rapporteur on the human rights situation in Belarus, told the Human Rights Council today.
 
Her yearly report demonstrates intentional suppression of all types of independent associations, including civil society organisations and initiatives, political parties, trade unions, bar associations, religious and cultural organisations, and online communities. Authorities have used a range of measures to crack down on free assembly and association, including mandatory re-registration campaigns; restrictions on access to funding and retaliation for donations; liquidation of associations through or without judicial proceedings; designation of undesirable associations as “extremist formations”; and the persecution of their leaders, members, volunteers and supporters.
 
“Since 1 January 2021, Belarus has lost over 1,500 registered public associations. All independent trade unions have been dismantled and the number of political parties had fallen from 16 to 4 in the period leading up to the February 2024 parliamentary elections,” the Special Rapporteur noted.
 
“All those who ever dared speaking up against the government or its policies are either behind bars or in exile”, she said.
 
Her report, which examines major developments in the human rights situation in Belarus from 1 April 2023 to 31 March 2024, also highlights other concerning trends, such as continuous ill-treatment in detention; restrictive measures targeting Belarusian citizens relocating abroad; harassment of minorities; the prosecution in absentia of alleged “extremists” living in exile; and intimidation of their relatives and violations of the right to privacy.
 
“Of particular concern is the growing number of allegations of ill-treatment of inmates convicted on what appear to be politically motivated charges, including persons who suffer chronic and acute diseases,” Marin said. “There have been over a dozen reported cases of deaths in custody since 2020, most likely caused by inadequate or untimely medical care.”
 
http://www.ohchr.org/en/press-releases/2024/07/freedom-association-eradicated-belarus-special-rapporteur http://news.un.org/en/story/2024/07/1151736
 
14 June 2024
 
China: #MeToo activists sentenced to up to five years in prison for human rights and social justice advocacy. (Amnesty International)
 
Responding to the sentencing of Chinese #MeToo activist Sophia Huang Xueqin to five years in prison and labour activist Wang Jianbing to three years and six months in prison, both for “inciting subversion of state power”, Amnesty International’s China Director Sarah Brooks said: “Tomorrow marks exactly one thousand days since Sophia Huang Xueqin and Wang Jianbing were arrested. These convictions will prolong their deeply unjust detention and have a further chilling effect on human rights and social justice advocacy in a country where activists face increasing state crackdowns.
 
“In reality, they have committed no actual crime. Instead, the Chinese government has fabricated excuses to deem their work a threat, and to target them for educating themselves and others about social justice issues such as women’s dignity and workers’ rights.
 
“#MeToo activism has empowered survivors of sexual violence around the world, but in this case the Chinese authorities have sought to do the exact opposite by stamping it out.
 
“These malicious and totally groundless convictions show just how terrified the Chinese government is of the emerging wave of activists who dare to speak out to protect the rights of others.
 
“Sophia Huang Xueqin and Wang Jianbing have been jailed solely for exercising their right to freedom of expression, and they must be immediately and unconditionally released.”
 
Background:
 
Guangzhou Intermediate Court today sentenced Sophia Huang Xueqin to five years in prison and labour activist Wang Jianbing to three years and six months in prison for “inciting subversion of state power”. Sophia Huang Xueqin said in court that she would appeal.
 
Sophia Huang Xueqin is a journalist who has been involved in several #MeToo campaigns to provide support and assistance to survivors of sexual assault and harassment. Wang Jianbing has provided legal support for people with disabilities and workers with occupational diseases. He is also a prominent supporter of the #MeToo movement in China.
 
Their conviction is related to their attendance at weekly gatherings with fellow activists, hosted by Wang Jianbing; their participation in online human rights education; and online posts on issues deemed “sensitive” by the Chinese government.
 
The pair were arrested in Guangzhou on 19 September 2021, the day before Huang was planning to leave China for the UK to study for a master’s degree.
 
Since their arrest, both activists have been prevented from seeing family members. Meanwhile, dozens of their friends have been summoned by the police and had their homes searched and electronic devices confiscated.
 
Sophia Huang Xueqin is believed to have been subjected to ill-treatment in detention, leading to the dramatic deterioration of her health.
 
In January 2023, Sophia Huang Xueqin and Wang Jianbing were transferred to Guangzhou City No 1 Detention Centre, awaiting trial at the court.
 
The Chinese authorities systematically use national security charges with extremely vague provisions, such as “subverting state power” and “inciting subversion of state power”, to prosecute lawyers, scholars, journalists, activists, NGO workers, and others.
 
The UN Working Group on Arbitrary Detention determined in 2022 that Wang Jianbing was being arbitrarily detained and has repeatedly called on China to repeal the crime of “inciting subversion” or bring it into line with international standards.
 
http://www.amnesty.org/en/latest/news/2024/06/china-malicious-conviction-of-metoo-and-labour-activists-shows-beijings-growing-fear-of-dissent/
 
June 2024
 
Russia: Authorities punishing imprisoned anti-war critics and dissenters by denying family contact. (Amnesty International)
 
Russia’s authorities are systematically denying arbitrarily imprisoned government critics contact with their families, Amnesty International revealed in a new report.
 
Using several emblematic cases, the report documents how authorities have exploited legal loopholes and fabricated pretexts to further isolate dissidents, including those imprisoned for speaking out against Russia’s invasion of Ukraine.
 
“These are not isolated practices being used by a few rogue officials. This is a deliberate strategy of the Russian government to isolate and silence dissenters and inflict further suffering on them and their families. All forms of contact – visits, phone calls, letters – are being curtailed,” said Natalia Prilutskaya, Amnesty International’s Russia Researcher.
 
A range of tactics is being used by the authorities to arbitrarily deprive prisoners of contact with their families and friends.
 
One method is to routinely deny requests for visits and phone calls while the individual is held in pretrial detention, often without citing any reasons. In other instances, family members are being designated as “witnesses” in their loved one’s trial proceedings, thereby precluding them from having any contact. In such cases, families might not see their loved ones for months or even years. Authorities may also delay detainees’ and prisoners’ mail or outright ban correspondence with certain individuals.
 
Another tactic is the unannounced transfer of prisoners from pretrial detention to penal institutions on the eve of a planned family visit, which is then cancelled. Such transfers require judicial approval by the same court that approves the visit, making this practice even more cynical.
 
A form of harassment widely used by the penal authorities is arbitrarily placing a prisoner in a disciplinary cell for allegedly committing a minor, often made-up disciplinary violation, just before a scheduled family visit. The prisoner is then deprived of visits and phone calls for the duration of the punishment.
 
Such practices not only additionally punish the incarcerated dissenters but also inflict severe psychological suffering on their families.
 
“These tactics are utterly inhumane. The authorities not only devastate the lives of those who dared to express dissenting views by imprisoning them, but they also rob their loved ones of the few possibilities to keep in touch. This cruel ill-treatment must stop immediately, and all those imprisoned solely for exercising their rights to freedom of expression, association and peaceful assembly must be released,” said Natalia Prilutskaya.
 
http://www.amnesty.org/en/latest/news/2024/06/russia-authorities-punishing-imprisoned-anti-war-critics-and-dissenters-by-denying-family-contact/ http://www.ohchr.org/en/press-releases/2025/01/russia-special-rapporteur-appalled-prison-sentences-punish-navalny-lawyers http://www.ohchr.org/en/press-releases/2025/02/no-justice-alexei-navalny-and-more-lives-risk-russia-warns-un-special http://www.ohchr.org/en/press-releases/2025/02/russias-repression-home-and-aggression-against-ukraine-demand-justice-no http://freedomhouse.org/report/transnational-repression/2024/no-way-or-out-authoritarian-controls-freedom-movement


 
