Human Rights Watch World Report 2026
by Philippe Bolopion, Executive Director

The global human rights system is in peril. Under relentless pressure from US President Donald Trump, and persistently undermined by China and Russia, the rules-based international order is being crushed, threatening to take with it the architecture human rights defenders have come to rely on to advance norms and protect freedoms. To defy this trend, governments that still value human rights, alongside social movements, civil society, and international institutions, need to form a strategic alliance to push back.

To be fair, the downward spiral predated Trump’s re-election. The democratic wave that began over 50 years ago has given way to what scholars term a “democratic recession.” Democracy is now back to 1985 levels according to some metrics, with 72 percent of the world’s population now living under autocracy. Russia and China are less free today than 20 years ago. And so is the United States. Of course, democracy is not a panacea for human rights violations; the US and other longtime democracies have their own histories of colonial crimes, racism, abusive justice systems, and wartime atrocities. More recently, authoritarian leaders have exploited public mistrust and anger to win elections and then dismantled the very institutions that brought them to power. Democratic institutions are crucial to represent the will of the people and keep power in check. It’s no surprise that whenever democracy is undermined, rights are too, as evident in recent years in India, Turkiye, the Philippines, El Salvador, and Hungary.

In this context, 2025 may be seen as a tipping point. In just 12 months, the Trump administration has carried out a broad assault on key pillars of US democracy and the global rules-based order, which the US, despite its inconsistencies, was instrumental, alongside other states, in helping to establish.
In short order, Trump’s second-term administration has undermined trust in the sanctity of elections, reduced government accountability, gutted food assistance and healthcare subsidies, attacked judicial independence, defied court orders, rolled back women’s rights, obstructed access to abortion care, undermined remedies for racial harm, terminated programs mandating accessibility for people with disabilities, punished free speech, stripped protections from trans and intersex people, eroded privacy, and used government power to intimidate political opponents, the media, law firms, universities, civil society, and even comedians. Claiming a risk of “civilizational erasure” in Europe and leaning on racist tropes to cast entire populations as unwelcome in the US, the Trump administration has embraced policies and rhetoric that align with white nationalist ideology. Immigrants and asylum seekers have been subjected to inhumane conditions and degrading treatment; 32 died in US Immigration and Customs Enforcement custody in 2025, and as of mid-January 2026, an additional 4 have died. Masked immigration enforcement agents have targeted people of color, using excessive force, terrorizing communities, wrongfully arresting scores of citizens, and, most recently, unjustifiably killing two people in Minneapolis, whose deaths Human Rights Watch has documented. The US president of course has the authority to tighten US borders and enforce stricter immigration policies. The administration is not, however, entitled to deny legal process to asylum seekers, mistreat undocumented migrants, or unlawfully discriminate. In a well-functioning democracy, no electoral mandate should supersede domestic legislation, constitutional protections, or international human rights law. Trump’s team has repeatedly bypassed these guardrails. The violations have not stopped at the border. 
The Trump administration used a 1798 law to send hundreds of Venezuelan migrants to an infamous prison in El Salvador, where they were tortured and sexually abused. Its blatantly unlawful strikes on boats in the Caribbean and the Pacific extrajudicially killed more than 120 people whom Trump claims were drug traffickers. After the US attacked Venezuela and apprehended its president, Nicolás Maduro, and his wife, Cilia Flores, Trump claimed the US would “run” the country and control its vast oil reserves. Despite paying lip service to human rights concerns under Maduro at the United Nations, Trump has worked with the same repressive apparatus to further US interests. Many Western allies have chosen to stay silent about these lawless moves, perhaps fearing erratic tariffs and blowback to their alliances. Trump’s foreign policy has upended the foundations of the rules-based order that seeks to advance democracy and human rights, even if imperfectly. Trump has boasted that he doesn’t “need international law” as a constraint, only his “own morality.” His administration has politicized the US State Department’s annual human rights report, stepped away from the global prohibition on antipersonnel landmines, voiced support for rewriting international rules on asylum, and skipped the UN’s Universal Periodic Review of the US’ human rights record. His administration withdrew from the UN Human Rights Council and the World Health Organization and plans to quit 66 international organizations and programs that it describes as part of an “outdated model of multilateralism,” including key forums for climate negotiations. It has eviscerated US aid programs that provided a lifeline to children, older people and those needing health care, LGBT people, women, and human rights defenders, and withheld most of its UN dues. Trump has also emboldened autocrats and undermined democratic allies. 
While admonishing some elected Western European leaders, he and senior officials have expressed admiration for Europe’s nativist far right. He has favored autocrats such as Hungary’s Prime Minister Viktor Orban, Turkiye’s President Recep Tayyip Erdogan, and El Salvador’s President Nayib Bukele, while continuing decades of US support to Saudi Crown Prince Mohammed bin Salman and Egypt’s President Abdel Fattah al-Sisi. His administration has unjustifiably imposed sanctions to punish respected Palestinian human rights organizations, the International Criminal Court’s (ICC) prosecutor and many of its judges, a UN special rapporteur, and, for several months, a Brazilian Supreme Court judge and his wife.

The institutional response in the US to Trump’s power grabs has been shockingly muted. Much of Congress, controlled by his own party, has not challenged his supercharged expansion of executive power. The leaders of the US’ most powerful technology companies have made significant donations and sought to placate the president. Some big law firms and prestigious universities have made deals rather than assert their independence, and some media organizations seem afraid to attract the president’s ire.

Has the US switched sides on the human rights playing field? While US engagement with human rights institutions has always been selective, China and Russia have long pursued an illiberal agenda. They stand to gain much from a US government that now expresses open hostility to universal rights. China and Russia remain strategic rivals of the US, but all three countries are now led by leaders who share open disdain for norms and institutions that could constrain their power. Together, they wield considerable economic, military, and diplomatic power. If they were to consistently act as allies of convenience to erode global rules, they could threaten the entire system.
Already, a loose international network of countries such as North Korea, Iran, Venezuela, Myanmar, Cuba, and Belarus work in concert with Russia and China. These leaders share very little ideologically but align in undermining human rights and promoting a regressive international agenda. In word and in practice, the US government is now helping them in this endeavor. The US’ weakening of multilateral institutions also dealt a serious blow to global efforts to prevent or stop grave international crimes. The “never again” movement, born from the horrors of the Holocaust and reignited by the Rwandan and Bosnian genocides, spurred the UN General Assembly to embrace the Responsibility to Protect (R2P) in 2005. Meant to guide international intervention to prevent and stop atrocities in tandem with efforts to prosecute and punish serious crimes, R2P made a real difference in places like the Central African Republic and Kenya. Today, R2P is rarely invoked and the ICC is under siege. In addition to Trump’s far-reaching sanctions, in December 2025 a Moscow court sentenced the ICC prosecutor and eight of its judges to prison terms in absentia. Moreover, despite being ICC fugitives, in 2025, Russia’s President Vladimir Putin was welcomed by Donald Trump in Alaska, and Israel’s Prime Minister Benjamin Netanyahu traveled to Hungary, an ICC member state at the time, at Orban’s invitation. Twenty years ago, the US government and civil society were instrumental in galvanizing a response to mass atrocities in Darfur. Sudan is burning again, but this time under Trump, with relative impunity. Sudan’s Rapid Support Forces (RSF), which emerged from the militias that led the prior ethnic cleansing campaign, are again committing murder and rape on a mass scale. A growing body of evidence indicates that the UAE, a longtime US ally that recently made multi-billion-dollar deals with Trump, is providing the RSF with military support. 
In the Occupied Palestinian Territory, the Israeli armed forces have committed acts of genocide, ethnic cleansing, and crimes against humanity, killing over 70,000 people since the October 2023 Hamas-led attacks on Israel and displacing the vast majority of Gaza’s population. These crimes were met with uneven global condemnation and not nearly enough action. Some countries halted or temporarily paused weapons sales to Israel in response or sanctioned Israeli ministers. Trump, however, continued a long-standing US policy of almost unconditional support to Israel, even as the International Court of Justice is weighing allegations of genocide and has issued binding orders under the Genocide Convention to protect Palestinians’ rights. Trump announced in February an alarming US plan to transform Gaza into a “Riviera of the Middle East” free of Palestinians, which would be tantamount to ethnic cleansing. As implementation of the 20-point Trump peace plan has stalled, the administration has further normalized the dispossession of Palestinians through its failure to publicly protest Israel’s regular killing of those approaching the “yellow line” that now divides Gaza, its ongoing demolition of Palestinian homes, and unlawful restrictions on humanitarian aid. In Ukraine, Trump’s peace efforts have consistently downplayed Russia’s responsibility for serious violations. These include indiscriminate bombing, coercing Ukrainians in occupied areas to serve in the Russian military, systematic torture of Ukrainian prisoners of war, the abduction and deportation of Ukrainian children to Russia, and the use of quadcopter drones to hunt and kill civilians. Rather than applying meaningful pressure on Putin to end these crimes, Trump publicly berated Ukrainian President Volodymyr Zelenskyy in a made-for-TV dressing down, demanded an exploitative mineral deal, pressured Ukraine’s authorities to concede large swaths of territory, and proposed “full amnesty” for war crimes. 
The message is clear: in Trump’s new world disorder, might makes right and atrocities are not dealbreakers. With the US undermining the global human rights system, who will rise in its defense? Despite rhetorical flourishes, many governments treat rights and the rule of law as a hindrance, rather than a benefit, to security and economic growth. The European Union, Canada, and Australia appear to hold back out of fear of antagonizing the US and China. Others are weakened by the way political parties displaying illiberal tendencies have skewed their domestic politics and discourse away from a rights-respecting approach. In many parts of Western Europe, including the United Kingdom, Germany, and France, many voters gladly accept limits on the rights of “others,” whether immigrants, women, racial and ethnic minorities, LGBT people, or other marginalized communities. But as history shows, would-be autocrats never stop at “others.” To fill this vacuum, there is an urgent need for a new global alliance to support international human rights within a rules-based order. Individually, these countries may be easily overwhelmed by the global influence of the US and China. But together, they could become a powerful political force and substantial economic bloc. The obvious participants in such a cross-regional alliance would be established democracies with significant economic and geopolitical clout, including, but not limited to, Australia, Brazil, Canada, Japan, South Africa, South Korea, and the UK, as well as the EU as an institution and many of its member states. It’s critical to look beyond the usual suspects. The multilateral order was built brick by brick by states from all regions over decades. Countries such as Costa Rica, Ghana, Malaysia, Mexico, Senegal, Sierra Leone, and Vanuatu have played important roles on specific human rights initiatives in key international forums. 
Creative diplomats from smaller states such as Liechtenstein and The Gambia have been instrumental in advancing international justice. And it should be recognized that support for human rights has never come just from powerful democracies or countries with the strongest domestic rights records. In theory, India, long considered the world’s largest democracy, could be a key member of this global alliance, considering its prior role in opposing apartheid in South Africa and defending minority rights in Tibet and Sri Lanka. Unfortunately, under a Narendra Modi administration that actively promotes Hindu majoritarianism, India can hardly hold itself out as a human rights champion. As the Indian authorities oppress political opponents, target minorities, especially Muslims and Christians, censor independent voices, ban books, and commit atrocities in counterinsurgency operations, it is unlikely, for now, to see value in bolstering a system that might one day be used against it. However, India has also been targeted by the Trump administration for its purchase of Russian oil and regards China, with which it has clashed over their shared border, as a strategic competitor. The Indian government, which has historically chosen “nonaligned” status, might find that cleaning up its human rights record to join with other democracies could help protect it from the aggressive great powers. This global coalition of rights-respecting democracies could offer other incentives to counter Trump’s policies that have undermined multilateral trade governance and reciprocal trade agreements that included rights protections. Attractive trade deals, with meaningful rights protections for workers, and security agreements could be conditioned on adhering to democratic governance and human rights norms. Democracy already comes with benefits. 
While autocracies have generally fostered conflict, economic stagnation, or kleptocracy, as evidenced in multiple academic studies, including the work of the Nobel Prize-winning economist Daron Acemoglu, democratic institutions reliably yield economic growth. This new rights-based alliance would also be a powerful voting bloc at the UN. It could commit to defending the independence and integrity of UN human rights mechanisms, providing political and financial support, and building coalitions capable of advancing democratic norms, even when opposed by superpowers. Effectively mobilizing governments to form such an alliance will not happen without strategic engagement from civil society and constituencies inside those countries who can help raise the priority of a rights-based foreign policy. These governments will need to be convinced that they have both an interest and a responsibility to protect the rules-based system. Projects of this nature are bubbling up. Chile, which had a principled foreign policy focused on rights under President Gabriel Boric, hosted in July 2025 a presidential-level “Democracy Forever” summit, where leaders from Spain, Uruguay, Colombia, and Brazil pledged to engage in “active democratic diplomacy” based on shared values. The Hague Group, led by Malaysia, South Africa, and Colombia, formed in January 2025 in “defense of international law” and in solidarity with Palestinians. Over 70 countries from all regions signed a joint statement defending multilateralism at the UN. 
Earlier, in 2017, former Danish Prime Minister Anders Fogh Rasmussen set up the Alliance of Democracies Foundation to rally the dwindling ranks of democratic countries to “support each other against authoritarian pressures.” Whatever its precise contours, an alliance of rights-respecting democracies would offer a hopeful counterpoint to the authoritarian trope of China’s and Russia’s leaders standing alongside North Korea’s Kim Jong Un, observing military hardware in a parade in Beijing’s Tiananmen Square in September.

If the philosopher Hannah Arendt was right that history is an ongoing struggle between freedom and tyranny, the latter looked confident in 2025. Yet, even in the worst of times, the idea of freedom and human rights is enduring. People power remains an engine for change. In the US, “No Kings” marches have drawn millions, protesters in Chicago, Minneapolis, Los Angeles, and around the country have stood up against the deployment of the National Guard and ICE abuses, and students are still organizing for Palestine on university campuses despite draconian crackdowns and visa revocations. Buoyed by popular resistance, South Korean parliamentarians impeached their president to prevent him from grabbing power through martial law. Grassroots aid efforts by Sudan’s emergency response rooms, Hong Kong’s fire relief, Sri Lanka’s cyclone relief community kitchens, and Ukrainian mutual aid and solidarity collectives represent the best of this trend. In 2025, Gen Z protests against corruption, inadequate public services, and poor governance in Nepal, Indonesia, and Morocco brought to the forefront the need for governments to listen to their youth and tackle corruption and inequality. But as the difficulties of restoring rights in Bangladesh after years under an authoritarian government illustrate, gains won through public mobilization can easily be lost unless democratic participation and free expression remain unassailable.
In this more hostile world, civil society is more critical than ever. It’s also increasingly endangered, particularly in an environment where funding is scarce. In 2025, Human Rights Watch was labeled “undesirable” and banned from operating in Russia. For partners in Egypt, Hong Kong, and India, these tactics are all too familiar. Restrictions on civil society and protest have become more commonplace in Europe, including the UK and France. And now, for the first time, many worry about risks associated with their operational presence in the US, where the Open Society Foundations, a major donor, have already been threatened, and the administration is preparing a list of “domestic terrorists” under overbroad guidance that could be interpreted to include the work of many progressive groups.

Breaking the authoritarian wave and standing up for human rights is a generational challenge. In 2026, it will play out most acutely in the US, with far-reaching consequences for the rest of the world. Fighting back will require a determined, strategic, and coordinated reaction from voters, civil society, multilateral institutions, and rights-respecting governments around the globe.

http://www.hrw.org/world-report/2026
Accountability for harms arising from algorithmic systems
by Amnesty International, agencies

27 Feb. 2026

This week, the Pentagon issued an ultimatum to the AI company Anthropic to drop two key guardrails regarding the use of its AI system, Claude: one barring “mass domestic surveillance,” and another prohibiting the Pentagon from using its tech to build AI-powered weapons that can kill without a human operator. Worker organizations and unions representing 700,000 tech employees issued the following demand:

Amazon, Google, Microsoft must reject the Pentagon’s demands: We are speaking out today because the Pentagon is demanding that Anthropic abandon two major safety guardrails for Claude, which is the only frontier AI model currently deployed in classified Department of War operations. This intimidation is an ultimatum: AI companies can either agree to the Pentagon’s terms, be designated a “supply chain risk,” or be forced to provide the technology through the Defense Production Act. Those guardrails, which the Pentagon originally agreed to in its contract with Anthropic, are 1) no mass domestic surveillance, and 2) no fully autonomous agents, which means no AI-powered weaponry that can kill people without human oversight. The Pentagon set a “deadline” for Anthropic to submit to its demands by Friday. As of Thursday afternoon, Anthropic issued a statement saying it will reject the Pentagon’s demands and uphold these guardrails. How the Pentagon reacts remains to be seen, but we know it will rapidly seek to onboard other models without these guardrails in place, regardless of whether it tries to force Anthropic to comply. We are writing to urge our own companies to also refuse to comply should they or the frontier labs they invest in enter into further contracts with the Pentagon.
If any tech company caves to the Pentagon’s demands, War Secretary Pete Hegseth will have won the ability to surveil our communities, here and abroad, en masse, at an unprecedented level. He will have the power to build and deploy AI-powered drones that kill people without the approval of any human. Our employers are already complicit in providing their technologies to power mass atrocities and war crimes; capitulating to the Pentagon’s intimidation will only further implicate our labor in violence and repression. It is not a given that our companies will do the right thing. xAI just signed a contract with the Pentagon to deploy Grok (which recently referred to itself as “Mecha Hitler”) in classified environments, as far as we know without any guardrails. Our own companies are also on the brink of accepting similar contract terms. Google is in negotiations with the Pentagon to deploy Gemini, its own frontier model, for classified uses. Gemini is already being deployed by the War Department through “GenAI.mil.” Amazon and Microsoft are heavily invested in Anthropic and OpenAI, while OpenAI is also in negotiations with the Department of War. All three companies already host government data through Google Cloud, Microsoft Azure, and Amazon Web Services (AWS). We need Congress to pass federal regulation that prohibits the irresponsible and unconstitutional use of AI for violence and mass surveillance. In the absence of federal oversight, we are taking matters into our own hands. As workers who make these companies run, here are our demands: Executive leadership at Google, Microsoft, and Amazon must reject the Pentagon’s advances and provide workers with transparency about contracts with other repressive state agencies, including DHS, CBP, and ICE. We invite workers to join us in organizing to ensure our leadership does not use our labor for mass surveillance, weaponry, and war.
http://medium.com/@notechforapartheid/jointstatement-5561f1572e46
http://notdivided.org/
http://futurism.com/artificial-intelligence/ai-workers-pentagon-anthropic
http://www.hrw.org/news/2026/03/03/us-militarys-dangerous-slide-toward-fully-autonomous-killing
http://www.france24.com/en/tv-shows/a-propos/20260310-radical-acceleration-in-targenting-cycle-through-ai-points-to-lack-of-human-oversight
http://www.democracynow.org/2026/3/18/ai_warfare
http://www.france24.com/en/americas/20260308-openai-robotics-chief-quits-over-ai-potential-use-for-war-and-surveillance-artificial-intelligence-pentagon
http://www.theguardian.com/commentisfree/2026/mar/06/moltbook-risk-ai-agents-artificial-life
http://www.nature.com/articles/d41586-026-00834-z
http://counterhate.com/research/killer-apps/
http://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/governance-procurement-how-ai-rights-became
http://www.bbc.com/news/articles/c4g7k7zdd0zo

Future of Life Institute: Anthropic vs. DoW: The U.S. Department of War gave AI company Anthropic an ultimatum last week: allow the military unrestricted access to their AI, or lose a $200M contract with the Pentagon and be blacklisted from all government work. Anthropic stood firm that AI shouldn't control weapons or be used for mass surveillance of Americans, which the Pentagon wouldn't concede to. Vocal support for Anthropic's commitment to their (bare minimum) redlines emerged from employees at rivals such as OpenAI and Google. After the Friday deadline imposed by Secretary of War Pete Hegseth came and went, Anthropic’s work with the government was suspended and they were slapped with a national security “supply chain risk” label from the White House, usually reserved for foreign adversaries, which could critically disrupt Anthropic’s other business partnerships.
Hegseth also threatened that the government could invoke the Defense Production Act to force Anthropic to provide its systems tailored “to the military's needs,” though that threat hasn’t yet materialized. Just hours later on Friday, OpenAI announced it had struck a deal with the Pentagon allowing its models to be used across the Pentagon's classified network. While OpenAI CEO Sam Altman had expressed support for Anthropic’s redlines and claims that OpenAI shares them, it’s still unclear how, if at all, the Pentagon will respect them when using OpenAI’s models. The move has been met with further skepticism from the public, with a call for ChatGPT users to cancel their subscriptions spreading across social media.

Anthropic drops safety pledge: The same week as its showdown with Hegseth, news broke that Anthropic is dropping a central pillar of its Responsible Scaling Policy (RSP), in which it pledged never to train an AI system unless it could guarantee in advance that its safety measures were adequate. While the company insists it is not abandoning safety, it is replacing firm pre-deployment guarantees with looser commitments to transparency, risk reports, and “Frontier Safety Roadmaps,” essentially switching from an offensive to a defensive strategy. Not a reassuring move from the AI company that has built its brand on safety.

AI goes nuclear: Especially relevant given the Anthropic-DoW battle, King’s College London ran war game simulations with AI systems from Anthropic, OpenAI, and Google, and found that in 95% of scenarios they chose to deploy nuclear weapons. The LLMs frequently escalated conflicts to nuclear strikes, showing little hesitation even after being reminded of the catastrophic human consequences. None of them chose full de-escalation or surrender; in fact, when facing defeat they tended to escalate instead.

27 Feb. 2026

Stop Killer Robots responds to the Anthropic-Pentagon standoff: Machines must not be allowed to make life and death decisions.
It is time for international regulation. Stop Killer Robots, the global coalition of 270+ civil society organisations calling for new international law on autonomous weapons, has responded to reports that the U.S. Department of Defense pressured Anthropic to adjust the terms of use for the company’s frontier large language model, Claude. Anthropic’s terms currently stipulate that the AI model may not be used in fully autonomous weapon systems – machines capable of selecting and killing targets without human control. The company has refused to acquiesce to the Pentagon’s demands, on the grounds that “frontier AI systems are simply not reliable enough to power fully autonomous weapons” and that the oversight mechanisms needed to protect civilian lives and military personnel “don’t exist today.” Although the dispute here is about a large language model AI system, the same reservations should apply to all AI-enabled targeting systems. Stop Killer Robots warns that the fact that Anthropic has made this caveat about the reliability of frontier AI models is a signal the world cannot afford to ignore. This is a technology company telling its government that it cannot responsibly provide what is being asked of it, because the capability to do so safely does not exist, and neither do the guardrails to prevent catastrophic harm if it did. If, as Anthropic acknowledges, the guardrails don’t exist for autonomous weapons, then the same is true for AI-enabled decision support systems, against which the company has not drawn a line. The focus on fully autonomous weapons deflects from the realities of how militaries and companies are already using AI technologies. AI-enabled decision support systems, in which humans remain only nominally in the decision-making process, reduce human beings to proxy data points – weight, gender, age, social associations, and so on – sorted and processed by machines to generate targeting recommendations.
When operators defer to those outputs or accept them in incredibly short timeframes, rather than exercising independent judgement, the line between a system that recommends a strike and one that executes it becomes dangerously thin. Automation bias is not a theoretical concern; it is an operational reality that diminishes human decision making and accountability. Diplomatic discussions on autonomous weapons have been ongoing for over a decade. Later this year, at the Convention on Conventional Weapons (CCW) Review Conference at the United Nations in Geneva, States will have a concrete opportunity to act, to put guardrails in place on autonomous weapons systems and commence negotiations on new international law. That opportunity must not be squandered. The vast majority of States have now explicitly expressed their support for a new legally binding instrument, and the urgency to move forward cannot be overstated. It is also essential that there is a strong international response to the integration of AI into the use of force in all its forms. Anything less will leave the most dangerous developments ungoverned and the most vulnerable people unprotected.

Nicole Van Rooijen, Executive Director of Stop Killer Robots, explains: “We are at a pivotal moment for humanity. When the companies building these technologies are themselves refusing to deploy them on safety grounds, it must raise alarm bells for governments and people everywhere. The standards Anthropic has chosen to maintain are a bare minimum of responsible conduct, not cause for celebration. And yet even those basic standards are already under pressure from the most powerful military in the world.”

“The rate of technological development will not wait for diplomacy to find its feet. Every year without binding international law, the gap between what these systems can do and our ability to govern them grows wider.
Those who will pay the price are ordinary people: civilians in conflict zones and citizens in both authoritarian and democratic States who will face these dehumanising technologies as their use becomes normalised and human rights and democratic values are eroded.”

“This moment demands political and moral leadership of the highest order. States must come to the table this year not just to talk, but to act. The time for kicking the can down the road has passed; the moment has arrived, and what is needed now is new law.”

http://www.stopkillerrobots.org/news/press-release-stop-killer-robots-responds-to-the-anthropic-pentagon-standoff/

Feb. 2026

India AI Impact Summit failed to rein in destructive practices of governments and technology companies. (Amnesty International)

“To date, AI summits have failed to advance the necessary regulations for a digitally safe future. If there is one clear takeaway from the India AI Impact Summit, it is that these gatherings have time and again proven largely irrelevant and ineffective at advancing binding rights protections or the safeguards necessary in the context of immense AI investment. Each year and at each summit, the gulf between state action to safeguard people’s rights and wellbeing, and an increasingly unchecked and powerful AI industry, keeps growing. These summits have advanced techno-solutionist narratives and soft governance instruments, where industry and government deepen their alliances.

“States must urgently course-correct the current AI trajectory and adopt binding guardrails that draw clear prohibitions around technologies that are incompatible with human rights.

“The Summit’s push on sovereignty, innovation and ‘democratisation’ feeds a global trend of turning AI into a race predicated on power accumulation and economic growth at all costs, rather than the collective global action needed to interrupt this.
Achieving such a goal would only be possible if the Summit included strong civil society and impacted community engagement on rights concerns, which was woefully absent from the start. “While India was lauded by world leaders for its technological progress, the human rights concerns arising out of technology deployment in the country were papered over. Amnesty International’s own research has shown that the deployment of harmful technologies such as facial recognition and automation in the public sector has threatened the right to privacy and social protection in India and has led to discrimination and exclusion of marginalized communities. Systems of mass surveillance are being expanded in an already pernicious context of rights abuses. * Massive job losses from the adoption of AI and automation are predicted to reach the hundreds of millions globally, yet remain absent from international discussions. AI industry PR agencies promote "universal basic incomes" as a supposed antidote to such concerns. Such promotions are farcical: the World Inequality Report 2026 reports that the richest 10% of the world’s population owns 75% of wealth and the bottom half just 2%, with the top 1% wealthier than the bottom 90% combined. Governments and financial institutions will continue to pursue austerity for the working classes and largesse for economic elites, and the millions impacted by AI job losses will be left impoverished for the sake of Wall Street investors’ profits. * The International Monetary Fund estimates at least 300 million full-time jobs globally will be affected by AI-related automation by 2030. Goldman Sachs has predicted that as many as half of all jobs could be fully automated by 2045, driven by generative AI and robotics. Amazon believes it can use robots to avoid adding more than half a million jobs in the next few years, the New York Times reports. 800 million workers to lose their jobs because of automation. 
That's the alarming conclusion of a 2017 report by global management consultants McKinsey. 23 Feb. 2026: Ethan Mollick - Ralph J. Roberts Distinguished Faculty Scholar, Co-Director, Generative AI Labs at the Wharton School, University of Pennsylvania: "The CEOs of the AI labs have spent the last two years ominously discussing massive future job losses even as they continued AI development. As AI becomes more salient outside of the “AI bubble,” workers and policymakers are going to start taking that kind of talk very seriously." * The environmental impact of the datacenters required to power AI continues to raise alarm. The International Energy Agency projects that global electricity consumption by AI datacenters will increase 15% each year from 2024 to 2030, more than four times faster than the growth of electricity consumption from all other sectors. “The demand for new datacenters cannot be met in a sustainable way,” says Noman Bashir, a climate impact fellow at the Massachusetts Institute of Technology’s climate and sustainability consortium. “The pace at which companies are building new datacenters means the bulk of the electricity to power them must come from fossil fuel-based power plants.” Speaking at the India AI Impact Summit, OpenAI boss Sam Altman was condemned for comparing the power consumed by artificial intelligence models to the energy required for human development. “People talk about how much energy it takes to train an AI model – but it also takes a lot of energy to train a human,” Altman told the Indian Express. “It takes about 20 years of life – and all the food you consume during that time – before you become smart.” Altman’s remarks generated a widespread backlash online, with many describing his comments as a dystopian disregard for human life, a common condition amongst tech billionaires. More than 230 environmental groups have called for a moratorium on building AI datacenters in the US. 
“The rapid, largely unregulated rise of datacenters to fuel the AI and crypto frenzy is disrupting communities across the country and threatening Americans’ economic, environmental, climate and water security,” their letter states. http://www.amnesty.org/en/latest/news/2026/02/global-india-ai-impact-summit-failed-to-reign-in-destructive-practices-of-governments-and-technology-companies/ http://www.amnesty.org/en/latest/news/2026/04/eu-simplification-laws http://www.accessnow.org/press-release/ai-action-summit-a-missed-opportunity-for-human-rights-centered-ai-governance/ http://www.accessnow.org/issue/artificial-intelligence/ http://www.ituc-csi.org/unions-call-for-strong-ai-guardrails-to-protect-workers http://www.hks.harvard.edu/centers/carr-ryan/publications/banality-global-algorithmic-violence-global-digital-transformations http://www.ids.ac.uk/publications/smart-city-surveillance-in-africa-mapping-chinese-ai-surveillance-across-11-countries/ http://theelders.org/news/governments-must-act-now-manage-ai-public-good http://www.foodandwaterwatch.org/wp-content/uploads/2025/12/Org-Letter_-National-Data-Center-Moratorium.pdf Head In The Cloud. (IPES Food) Challenging the false promise of digital agriculture and cultivating innovation from the ground up. Today, ‘innovation’ has become synonymous with the rapid development of AI, precision agriculture, bioengineering, and automation. Governments and donors are investing billions in corporate-led digitalization of farming, promoted as essential for climate resilience and productivity. Head In The Cloud examines how this shift is reshaping power in food systems – concentrating control in the hands of major technology and agribusiness firms, increasing farmer dependency, and reinforcing high-cost, high-input production models. 
At the same time, the report documents farmer-led and community-based innovations that are strengthening soil health, conserving agrobiodiversity, adapting crops to climate change, and building resilient local food systems. These bottom-up approaches prioritize autonomy, ecological sustainability, and knowledge-sharing – yet remain underfunded and marginalized in policy and investment decisions. http://ipes-food.org/report/head-in-the-cloud/ 13 Jan. 2026 Malaysia and Indonesia block Elon Musk’s Grok over sexualized AI images. (agencies) Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s company xAI, as concerns grow among authorities that it is being misused to generate sexually explicit and nonconsensual images. There is growing scrutiny of generative AI tools that can produce realistic images, sound and text, and concern that existing safeguards are failing to prevent their abuse. The Grok chatbot, accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children. “The government sees nonconsensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesian Communication and Digital Affairs Minister Meutya Hafid said in a statement. Scrutiny of Grok is growing, including in the European Union, India, France and the United Kingdom, which said Monday it was moving to criminalize “nudification apps.” Britain’s media regulator also launched an investigation into whether Grok broke the law by allowing users to share sexualized images of children. Regulators in the two Southeast Asian nations said existing controls weren’t preventing the creation and spread of fake pornographic content, particularly involving women and children. 
Indonesia’s government blocked access to Grok on Saturday, followed by Malaysia on Sunday. Initial findings showed Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director-general of digital space supervision, said in a statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent. The Malaysian Communications and Multimedia Commission noted “repeated misuse” of the tool to generate obscene, sexually explicit and nonconsensual manipulated images, including content involving women and children. The regulator said notices were issued this month to X Corp. and xAI demanding stronger safeguards. “The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place. The U.K.'s media regulator said it launched an investigation into whether Grok violated its duty to protect people from illegal content. The regulator, Ofcom, said Grok-generated images of children being sexualized or people being undressed may amount to pornography or child sexual abuse material. http://www.politico.com/news/magazine/2026/01/21/elon-musk-donald-trump-social-media-laws-column-00738440 http://www.dw.com/en/eu-opens-probe-into-musks-grok-chatbot/a-75663255 http://www.france24.com/en/france/20260203-paris-prosecutor-s-cybercrime-unit-raids-x-s-french-office http://www.unicef.org/press-releases/deepfake-abuse-is-abuse http://www.unicef.org/reports/artificial-intelligence-and-child-sexual-abuse-and-exploitation * UN Bodies issue Joint Statement on Artificial Intelligence and the Rights of the Child: http://tinyurl.com/yp9nndha States should strengthen AI governance frameworks to uphold and protect children’s rights. 
Global organisations are urged to integrate children’s rights across all AI-related policies and strategies. Governments and companies must ensure AI systems are transparent, accountable and designed to protect children. States must prevent and address violence and exploitation of children enabled or amplified by AI. Stronger, child-centred data protection measures are needed to safeguard privacy within AI systems. AI-driven decisions should prioritise the best interests and holistic development of every child. Inclusive, bias-free AI is essential to ensure all children benefit. Children’s views and experiences should meaningfully inform AI policymaking and system design. AI development should support environmental sustainability while minimising long-term ecological harm to future generations. * UN Agencies include United Nations Committee on the Rights of the Child (CRC); United Nations Children's Fund (UNICEF); United Nations Educational, Scientific and Cultural Organization (UNESCO); Office of the United Nations High Commissioner for Human Rights; Special Representative of the United Nations Secretary-General for Children and Armed Conflict; Special Representative of the United Nations Secretary-General on Violence against Children; United Nations Special Rapporteur on the sale, sexual exploitation and sexual abuse of children; United Nations Interregional Crime and Justice Research Institute (UNICRI). http://www.unicef.org/innocenti/reports/policy-guidance-ai-children Mar. 2026 New Mexico jury says Meta harms children's mental health and safety, violating state law. Social Media Giants face legal action. (NPR, agencies) A New Mexico jury determined that Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its social media platforms. 
New Mexico jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety and violated parts of the state's Unfair Practices Act. The jury agreed with allegations that Meta made false or misleading statements and engaged in "unconscionable" trade practices that unfairly took advantage of the vulnerabilities and inexperience of children. New Mexico's case was among the first to reach trial in a wave of litigation involving social media platforms and their impacts on children. "Meta's house of cards is beginning to fall," said Sacha Haworth, executive director of watchdog group The Tech Oversight Project. "For years, it's been glaringly obvious that Meta has failed to stop sexual predators from turning online interactions into real world harm." Tech companies have been protected from liability for content posted on their social media platforms under Section 230, a 30-year-old provision of the U.S. Communications Decency Act, as well as a First Amendment shield. New Mexico prosecutors say Meta should still be responsible for its role in pushing out that content through complex algorithms that proliferate material that is harmful to children. The state’s attorney general, Raul Torrez, sued Meta in 2023, accusing it of misleading consumers about the safety of its platforms. The company’s lax safety protocols allowed sexual predators to contact minors, the lawsuit added. The jury, in State District Court in Santa Fe, agreed, ordering Meta to pay $375 million in damages for violating state consumer protection laws. “The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” Mr. Torrez said in a statement. “Meta executives knew their products harmed children, disregarded warnings from their own employees and lied to the public about what they knew.” Mr. 
Torrez said he would ask the judge, Bryan Biedscheid, for additional financial penalties during a bench trial that is scheduled to start May 4. Mr. Torrez also plans to ask the court to force changes to Meta’s apps to make them safer for young users. http://www.npr.org/2026/03/24/g-s1-115019/new-mexico-meta-children-mental-health http://www.nytimes.com/2026/03/24/technology/meta-new-mexico-child-safety-violations.html http://www.theguardian.com/technology/2026/mar/24/meta-new-mexico-jury http://www.npr.org/2026/03/25/nx-s1-5746125/meta-youtube-social-media-trial-verdict http://www.nytimes.com/2026/03/25/technology/social-media-trial-verdict.html http://techoversight.org/2026/03/25/the-tech-oversight-project-heralds-verdict-in-social-media-addiction-trials-as-an-earthquake-for-big-tech/ http://techoversight.org/bigtechontrial/ http://www.citizen.org/news/historic-california-verdict-against-meta-and-google-marks-social-medias-big-tobacco-moment/ http://socialmediavictims.org/blog/meta-and-youtube-held-responsible-for-harm-to-vulnerable-users-in-first-of-its-kind-trial Dec. 2025 Accountability for harms arising from algorithmic systems. (Amnesty International) With the widespread use of Artificial Intelligence (AI) and automated decision-making systems (ADMs) that impact our everyday lives, it is crucial that rights defenders, activists and communities are equipped to shed light on the serious implications these systems have on our human rights, Amnesty International said ahead of the launch of its Algorithmic Accountability toolkit. The toolkit draws on Amnesty International’s investigations, campaigns, media and advocacy in Denmark, Sweden, Serbia, France, India, United Kingdom, Occupied Palestinian Territory (OPT), the United States and the Netherlands. 
It provides a ‘how to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives specifically in the public sector realms of welfare, policing, healthcare, and education. Regardless of the jurisdiction in which these technologies are deployed, a common outcome from their rollout is not “efficiency” or “improving” societies—as many government officials and corporations claim—but rather bias, exclusion and human rights abuses. “The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organizations (CSOs), journalists, impacted people or community organizations. It is designed to be adaptable and versatile to multiple settings and contexts. “Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies and meeting this current moment of supercharged investments in AI. Given how these systems can enable mass surveillance, undermine our right to social protection, restrict our freedom to peaceful protest and perpetuate exclusion, discrimination and bias across society,” said Damini Satija, Programme Director at Amnesty Tech. The toolkit introduces a multi-pronged approach based on the learnings of Amnesty International’s investigations in this area over the last three years, as well as learnings from collaborations with key partners. This approach not only provides tools and practical templates to research these opaque systems and their resulting human rights violations, but it also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability via campaigning, strategic communications, advocacy or strategic litigation. 
One of the many case studies the toolkit draws on is Amnesty International’s investigation into Denmark’s welfare system, exposing how the Danish welfare authority Udbetaling Danmark (UDK)’s AI-powered welfare system fuels mass surveillance and risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups through its use of AI tools to flag individuals for social benefits fraud investigations. The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organisations, and in that spirit, the toolkit is premised on deep collaboration between different disciplinary groups. The toolkit situates human rights law as a critically valuable component of algorithmic accountability work, especially given that this is a gap in the ethical and responsible AI fields and audit methods. Amnesty International’s method ultimately emphasises collaborative work, while harnessing the collective influence of a multi-method approach. Communities and their agency to drive accountability remain at the heart of the process. “This issue is even more urgent today, given rampant unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija. 
“Through this toolkit, we aim to democratize knowledge and enable civil society organizations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.” http://www.amnesty.org/en/latest/research/2025/12/algorithmic-accountability-toolkit/ http://www.amnesty.org/en/latest/news/2025/12/global-amnesty-international-launches-an-algorithmic-accountability-toolkit-to-enable-investigators-rights-defenders-and-activists-to-hold-powerfu/ http://www.coe.int/en/web/commissioner/-/regulation-is-crucial-for-responsible-ai http://thebulletin.org/the-ai-power-trip/ http://www.bmj.com/content/392/bmj.r2606 http://www.openglobalrights.org/will-human-rights-guide-technological-development/ http://www.business-humanrights.org/en/blog/why-regulation-is-essential-to-tame-techs-rush-for-ai/ http://carnegieendowment.org/europe/strategic-europe/2025/10/corporate-geopolitics-when-billionaires-rival-states http://www.nytimes.com/2025/10/21/technology/inside-amazons-plans-to-replace-workers-with-robots.html http://www.kcl.ac.uk/shall-we-play-a-game http://safe.ai/ai-risk http://ai-frontiers.org/articles/the-evidence-for-ai-consciousness-today http://www.theguardian.com/environment/2026/jan/29/gas-power-ai-climate http://globalwitness.org/en/campaigns/digital-threats/enabled-emissions-how-ai-helps-to-supercharge-oil-and-gas-production/ http://globalwitness.org/en/campaigns/digital-threats/ai-chatbots-share-climate-disinformation-to-susceptible-users/ http://globalwitness.org/en/campaigns/digital-threats http://www.ohchr.org/en/press-releases/2025/06/procurement-and-deployment-artificial-intelligence-must-be-aligned-human http://www.ohchr.org/sites/default/files/documents/issues/civicspace/resources/brief-data-privacy-ai-report-rev.pdf http://carnegieendowment.org/research/2025/10/how-china-views-ai-risks-and-what-to-do-about-them 
http://www.dw.com/en/middle-east-using-ai-to-stop-dissent-before-it-even-starts/a-76095344 http://futureoflife.org/ai-safety-index-winter-2025/ http://pwd.org.au/disability-representative-organisations-call-for-transparency-on-computer-generated-ndis-plans/ http://www.acoss.org.au/media_release/acoss-statement-on-the-robodebt-settlement/ http://theconversation.com/people-are-getting-their-news-from-ai-and-its-altering-their-views-269354 http://www.mdpi.com/2076-0760/14/6/391 http://icct.nl/publication/reading-between-lines-importance-human-moderators-online-implicit-extremist-content http://www.ipsnews.net/2025/09/unga80-lies-spread-faster-than-facts/ http://www.citizen.org/news/bipartisan-group-of-state-lawmakers-condemn-federal-ai-preemption-efforts/ http://www.hrw.org/news/2025/12/16/trump-administration-takes-aim-at-ai-accountability-laws http://www.citizen.org/news/trump-grants-his-greedy-big-tech-buddies-christmas-wish-with-dangerous-ai-preemption-eo/ http://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/has-technology-outpaced-human-rights-frameworks http://www.democracynow.org/2026/1/1/empire_of_ai_karen_hao_on http://www.nytimes.com/2025/12/08/technology/ai-slop-sora-social-media.html http://politicsofpoverty.oxfamamerica.org/the-rise-of-the-tech-oligarchy/ http://politicsofpoverty.oxfamamerica.org/rise-of-the-tech-oligarchy-part-ii/ http://www.amnesty.org/en/latest/news/2025/08/amnesty-launches-breaking-up-with-big-tech-briefing/ http://www.amnesty.org/en/documents/POL30/0226/2025/en/ http://link.springer.com/article/10.1007/s00146-025-02371-1 http://link.springer.com/article/10.1007/s00146-025-02623-0 