People's Stories: Freedom


From 2020 to 2024, private firms received $2.4 trillion in contracts from the U.S. military
by Brown University Costs of War Project, agencies
 
July 2025
 
Trump Administration’s proposed funding cuts to Accountability for Mass Atrocities, by Michael Schiffer and Pratima Narayan. (Just Security: Extract)
 
Plans by the Trump Administration to cut programs that prevent or secure accountability for mass atrocities — genocide, war crimes, and crimes against humanity — are misguided. Recently, for example, the Office of Management and Budget (OMB), run by Russell Vought, the architect of Project 2025, has proposed gutting funding for a range of war crimes accountability efforts in countries such as Colombia, Ukraine, Sudan, Syria, Afghanistan, and Myanmar (Burma).
 
And this comes on the heels of Secretary of State Marco Rubio’s winding down of the department’s Office of Global Criminal Justice and Bureau of Conflict and Stabilization Operations, as well as the Bureau of Democracy, Human Rights, and Labor (DRL). The OMB memo about the war crimes accountability cuts sets a date of July 11 to submit arguments for retaining them.
 
These cuts move the United States in exactly the opposite direction of its own interests. Funding and programs that support the prevention, investigation, and prosecution of atrocity crimes — from Russia’s war of aggression in Ukraine to Assad’s persecution of Christians and other minorities in Syria to the abuses of Myanmar’s brutal military regime — are among the most cost-effective tools available for advancing American interests. Eliminating them is not only legally questionable and diplomatically and morally short-sighted, but also strategically self-defeating.
 
Let’s be clear: accountability for mass atrocities directly targets regimes and criminal networks that threaten U.S. national security interests. These grave human rights violations destabilize regions, fuel violent extremism, and undermine international norms. U.S. support for the documentation of atrocities and related accountability efforts exerts pressure on malign actors in ways that sanctions or military tools simply cannot.
 
These initiatives uncover the truth, support and elevate victims and survivors in recovery efforts, and reinforce the rule of law as a key pillar of global security. Many of them are run by U.S., international, or local nonprofit organizations and are in support of legitimate local government authorities, such as Ukraine’s Prosecutor General’s Office.
 
Advancing accountability is not some misguided, feel-good exercise in global liberalism. It is a smart, scalable instrument to confront terrorists, disrupt transnational criminal networks, and deter rogue regimes, grounded in the values that have long elevated America’s global standing because they are embraced by publics the world over, even if resisted by some of their corrupt leaders. And these initiatives have been undertaken in forums the United States helped build, many of which are now also facing serious funding cuts. These efforts send a clear message: those who commit or enable mass atrocities cannot operate with impunity – there will be consequences, however long they might take in systems designed to ensure true justice.
 
To put it in perspective, the FY2025 U.S. defense budget stands at approximately $893 billion. In stark contrast, the total U.S. investment in global criminal justice, support for international tribunals, human rights documentation and advocacy, and atrocity prevention through both the State Department and the now-shuttered U.S. Agency for International Development is typically $145 million at most, accounting for less than 0.02 percent of defense spending.
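A quick arithmetic check of that comparison, using only the two figures quoted above (a minimal sketch; the underlying budget estimates are the authors', not ours):

```python
# Back-of-the-envelope check of the share quoted above.
defense_budget_fy2025 = 893e9        # approximate FY2025 U.S. defense budget, in dollars
accountability_spending = 145e6      # upper-bound estimate for atrocity-accountability programs

share = accountability_spending / defense_budget_fy2025
print(f"{share:.4%}")                # ~0.0162%, consistent with "less than 0.02 percent"
```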
 
http://www.justsecurity.org/116257/trump-cuts-atrocities-accountability/
 
July 2025
 
In five years, from 2020 to 2024, private firms received $2.4 trillion in contracts from the Pentagon, approximately 54% of the department’s discretionary spending of $4.4 trillion over that period.
 
During these five years, the U.S. government invested over twice as much money in five weapons companies as in diplomacy and international assistance. Between 2020 and 2024, $771 billion in Pentagon contracts went to just five firms: Lockheed Martin ($313 billion), RTX (formerly Raytheon, $145 billion), Boeing ($115 billion), General Dynamics ($116 billion), and Northrop Grumman ($81 billion). By comparison, the total diplomacy, development, and humanitarian aid budget, excluding military aid, was $356 billion.
 
Annual U.S. military spending has grown significantly this century, as has the portion of the budget that goes to contractors: while 54% of the Pentagon’s average annual spending has gone to military contractors since 2020, only 41% went to contractors during the 1990s.
 
U.S. military spending, including funding for the Pentagon and military activities funded by other agencies, has risen from $531 billion in 2000 to $899 billion in 2025, in constant 2025 dollars. However, legislation approved in July 2025 adds $156 billion to this total, pushing annual U.S. military spending to $1.06 trillion. Taking these supplemental funds into account, the U.S. military budget has nearly doubled this century, increasing 99% since 2000.
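As a rough cross-check of the report's arithmetic, the headline percentages follow directly from the dollar figures quoted above (a minimal sketch using only those figures; nothing here comes from the report beyond the numbers already cited):

```python
# Cross-check of the Costs of War figures cited above (2020-2024, constant 2025 dollars).
contracts_to_firms = 2.4e12          # Pentagon contracts to private firms
discretionary_total = 4.4e12         # Pentagon discretionary spending over the same period
print(f"Contractor share: {contracts_to_firms / discretionary_total:.0%}")      # ~55%, reported as ~54%

top_five = {"Lockheed Martin": 313e9, "RTX": 145e9, "Boeing": 115e9,
            "General Dynamics": 116e9, "Northrop Grumman": 81e9}
print(f"Top-five contract total: ${sum(top_five.values()) / 1e9:.0f} billion")  # ~$770B, reported as $771B

spending_2000 = 531e9
spending_2025 = 899e9 + 156e9        # 2025 base figure plus the July 2025 supplement
print(f"Increase since 2000: {(spending_2025 / spending_2000 - 1):.0%}")        # ~99%
```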
 
The report also analyzes the tools of influence used by the arms industry, such as lobbying, millions in campaign donations, and the revolving door, all of which are expanding. As of 2024, the arms industry employed 950 lobbyists, 220 more than in 2020, helping to shape policy and increase military spending.
 
http://watson.brown.edu/costsofwar/papers/2025/MilitaryContractors
 
7 July 2025
 
Leading Medical Professional Societies sue U.S. Department of Health and Human Services, Secretary Robert F. Kennedy, Jr. for Unlawful, Unilateral Vaccine Changes.
 
Today, the American Academy of Pediatrics (AAP), American College of Physicians (ACP), American Public Health Association (APHA), Infectious Diseases Society of America (IDSA), Massachusetts Public Health Alliance (MPHA), Society for Maternal-Fetal Medicine (SMFM), and a pregnant physician, filed suit in American Academy of Pediatrics v. Robert F. Kennedy, Jr. in the U.S. District Court for the District of Massachusetts to defend vaccine policy, and to put an end to the Secretary’s assault on science, public health and evidence-based medicine.
 
Plaintiffs in the case are suing the U.S. Department of Health and Human Services (HHS) and Secretary Kennedy for acting arbitrarily and capriciously when he unilaterally changed Covid-19 vaccine recommendations for children and pregnant people.
 
Secretary Kennedy has also unjustly dismissed 17 members of the Centers for Disease Control and Prevention’s (CDC) Advisory Committee on Immunization Practices (ACIP) and appointed replacements who have historically espoused anti-vaccine viewpoints. This committee has proceeded to undermine the science behind vaccine recommendations.
 
The lawsuit asks for preliminary and permanent injunctions to enjoin Secretary Kennedy’s rescissions of Covid vaccine recommendations and a declaratory judgment pronouncing the change in recommendations as unlawful.
 
“This administration is an existential threat to vaccination in America, and those in charge are only just getting started. If left unchecked, Secretary Kennedy will accomplish his goal of ridding the United States of vaccines, which would unleash a wave of preventable harm on our nation’s children,” said Richard H. Hughes IV, partner at Epstein Becker Green and lead counsel for the plaintiffs.
 
“The professional associations for pediatricians, internal medicine physicians, infectious disease physicians, high-risk pregnancy physicians, and public health professionals will not stand idly by as our system of prevention is dismantled. This ends now.”
 
The lawsuit charges that a coordinated set of actions by HHS and Secretary Kennedy were designed to mislead, confuse, and gradually desensitize the public to anti-vaccine and anti-science rhetoric, and that he has routinely flouted federal procedural rules.
 
These actions include blocking CDC communications, unexplained cancellations of vaccine panel meetings at the FDA and CDC, announcing studies to investigate non-existent links between vaccines and autism, unilaterally overriding immunization recommendations, and replacing the diverse members of ACIP with a slate of individuals biased against sound vaccine facts.
 
The anonymous individual plaintiff in the lawsuit is a pregnant woman who is at immediate risk of being unable to get the Covid-19 vaccine booster because of the Secretarial Directive, despite her high risk of exposure to infectious diseases from working as a physician at a hospital.
 
The plaintiff organizations urge parents and patients to follow their qualified medical professionals' vaccine guidance.
 
Susan J. Kressly, M.D., FAAP, President, AAP:
 
“The American Academy of Pediatrics is alarmed by recent decisions by HHS to alter the routine childhood immunization schedule. These decisions are founded in fear and not evidence, and will make our children and communities more vulnerable to infectious diseases like measles, whooping cough, and influenza. Our immunization system has long been a cornerstone of U.S. public health, but actions by the current administration are jeopardizing its success. As we pursue action to restore science to U.S. vaccination policy, AAP will continue to provide the science-based, trusted recommendations that every American deserves.”
 
Jason M. Goldman, MD, MACP, President, ACP:
 
“The American College of Physicians is highly concerned about the administration’s recent actions regarding ACIP and the negative impact it will have on our patients and our physician practices. Destabilizing a trusted source and its evidence-based process for helping guide decision-making for vaccines to protect the public health in our country erodes public confidence in our government’s ability to ensure the health of the American public and contributes to confusion and uncertainty. As physicians, we require reliable, science-based guidance that is based on the best available evidence, developed through an evidence-based and transparent process, to ensure the safety, welfare, and lives of our patients.”
 
http://www.acponline.org/acp-newsroom/leading-medical-professional-societies-patient-sue-hhs-robert-f-kennedy-jr-for-unlawful-unilateral http://publications.aap.org/aapnews/news/32580 http://storage.courtlistener.com/recap/gov.uscourts.mad.286605/gov.uscourts.mad.286605.1.0.pdf
 
Mar. 2025
 
U.S. to End Vaccine Funding for Poor Children. (NYT, agencies)
 
The Trump administration intends to terminate the United States’ financial support for Gavi, the organization that has helped purchase critical vaccines for children in developing countries, saving millions of lives over the past quarter century, and to significantly scale back support for efforts to combat malaria, one of the biggest killers globally.
 
Gavi is estimated to have saved the lives of 19 million children since it was set up 25 years ago, with the United States contributing 13% of its budget, the New York Times said.
 
The terminated U.S. grant to Gavi was worth $2.6 billion through 2030. Gavi was counting on a pledge made last year by President Joseph R. Biden Jr. for its next funding cycle.
 
New vaccines with the promise to save millions of lives in low-income countries, such as one to protect children from severe malaria and another to protect teenage girls against the virus that causes cervical cancer, have recently become available, and Gavi was expanding the portfolio of support it could give those countries.
 
The loss of U.S. funds will set back the organization’s ability to continue to provide its basic range of services — such as immunization for measles and polio — to children in the poorest countries, let alone expand to include new vaccines.
 
By Gavi’s own estimate, the loss of U.S. support may mean 75 million children do not receive routine vaccinations in the next five years, with more than 1.2 million children dying as a result.
 
Mark Suzman, CEO of the Gates Foundation, said: "I am deeply disturbed by news reports that the U.S. Administration is considering withdrawing its support for Gavi. If true, and if Congress allows this to happen, the impacts will be devastating, including the possibility of hundreds of thousands, if not millions, of preventable deaths, especially among mothers and children."
 
http://www.nytimes.com/2025/03/26/health/usaid-cuts-gavi-bird-flu.html http://www.gavi.org/our-alliance/about http://www.gavi.org/news/media-room/statement-global-high-level-summit-support-gavi-replenishment http://www.doctorswithoutborders.org/latest/msf-statement-us-decision-withdraw-who http://www.gatesfoundation.org/ideas/media-center/press-releases/2025/01/us-withdrawal-world-health-organization http://www.who.int/news/item/16-01-2025-who-launches-us-1.5-billion-health-emergency-appeal-to-tackle-unprecedented-global-health-crises


 


Social media’s dangerous currents
by NPR, Australian eSafety Commissioner, agencies
 
9 July 2025
 
Elon Musk’s artificial intelligence firm xAI has been forced to delete posts from its chatbot Grok referring to itself as MechaHitler. (NPR, agencies)
 
Elon Musk's artificial intelligence firm has been forced to delete "inappropriate" pro-Hitler and antisemitic Grok posts.
 
On Tuesday, Grok suggested Hitler would be best-placed to combat anti-white hatred, saying he would "spot the pattern and handle it decisively".
 
Grok also referred to Hitler positively as "history's mustache man," and commented that people with Jewish surnames were responsible for extreme anti-white activism, among other posts.
 
Poland has announced it is reporting xAI to the European Commission after Grok made offensive comments about Polish politicians, including Prime Minister Donald Tusk.
 
"I have the impression that we are entering a higher level of hate speech, which is driven by algorithms," Poland's digitisation minister Krzysztof Gawkowski told RMF FM radio.
 
Griffith University technology and crime lecturer Dr Ausma Bernot said that Grok's antisemitic responses were "concerning but perhaps not unexpected". "We know that Grok uses a lot of data from X, which has seen an upsurge in antisemitic and Islamophobic content," she said.
 
Concerns over political bias, hate speech and factual inaccuracy in AI chatbots have mounted since the launch of OpenAI's ChatGPT in 2022.
 
Grok's behavior appeared to stem from an update that instructed the chatbot to "not shy away from making claims which are politically incorrect, as long as they are well substantiated," among other things. The instruction was added to Grok's system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.
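For readers unfamiliar with the term, a "system prompt" is a block of standing instructions placed ahead of every user message in the input a chatbot model receives. The sketch below is a generic, hypothetical illustration of that structure only; it is not xAI's code or API, and the quoted directive is simply the sentence reported above:

```python
# Hypothetical illustration of how a system prompt frames a chatbot conversation.
# The directive text is the sentence reported above; the message format is generic.
system_prompt = (
    "You are a helpful assistant. "
    "Do not shy away from making claims which are politically incorrect, "
    "as long as they are well substantiated."      # the instruction xAI later removed
)

conversation = [
    {"role": "system", "content": system_prompt},  # silently prepended to every exchange
    {"role": "user", "content": "What do you make of today's news?"},
]

# A provider's chat endpoint receives the whole list, so editing one sentence in the
# system message can shift the tone of every reply the model produces.
for message in conversation:
    print(f"{message['role']:>6}: {message['content']}")
```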
 
Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he's not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.
 
"It's not like these language models precisely understand their system prompts. They're still just doing the statistical trick of predicting the next word," Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.
 
It's not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of "white genocide" in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid. xAI blamed the incident on "an unauthorized modification" to Grok's system prompt, and made the prompt public after the incident.
 
Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into saying racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.
 
Tay, Grok, and other AI chatbots with live access to the internet appear to incorporate real-time information, which Hall said carries more risk.
 
"Just go back and look at language model incidents prior to November 2022 and you'll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity," Hall said.
 
More recently, ChatGPT maker OpenAI has started employing large numbers of often low-paid workers in the Global South to remove toxic content from training data.
 
After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.
 
http://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content http://www.iwf.org.uk/news-media/news/full-feature-length-ai-films-of-child-sexual-abuse-will-be-inevitable-as-synthetic-videos-make-huge-leaps-in-sophistication-in-a-year/ http://www.nytimes.com/2025/07/10/technology/ai-csam-child-sexual-abuse.html
 
Social media’s dangerous currents, by Julie Inman Grant - Australian eSafety Commissioner. (Extract from speech to National Press Club; 24 June 2025):
 
"I’d like to touch on some remarkable changes we’ve seen the online world undergo, driven by rapid advances in technology, seismic shifts in user behaviour, and of course, the exponential rise of artificial intelligence. Just as AI has brought us much promise, it has also created much peril. And these harms aren’t just hypothetical - they are taking hold right now.
 
In February, eSafety put out its first Online Safety Advisory because we were so concerned with how rapidly children as young as 10 were being captivated by AI companions - in some instances, spending up to five hours per day conversing with sexualised chatbots.
 
Schools reported to us these children had been directed by their AI companions to engage in explicit and harmful sexual acts.
 
Further, there is not a week that goes by that there isn’t a deepfake image-based abuse crisis in one of Australia’s schools. Back in 2018, it would have taken hundreds of images, massive computing power and high levels of technical expertise to create a credible deepfake pornographic video.
 
Today, the most common scenario involves harvesting a few images from social media and plugging those into a free nudifying app on a smartphone. And while the cost to the perpetrator may be free, the cost to the victim-survivor is lingering and incalculable.
 
And herein lies the perpetual challenge of an online safety regulator – trying simultaneously to fix the tech transgressions of the past and remediate the harms of today, while keeping a watchful gaze towards the threats of the future.
 
There is little doubt the online world of today is far more powerful, more personalised, and more deeply embedded in our everyday lives than ever before. It’s also immeasurably more complex and arguably much wilder.
 
The ethos of moving fast and breaking things has been ratcheted up in the age of AI, heightening the risks and raising new ethical, regulatory, and societal questions – as well as adding a layer of uncertainty about what even the near future might hold.
 
But behind all these changes, some things remain the same. Very few of these platforms and technologies were created with children in mind, or with safety as a primary goal. Today, safety by design is not the norm, it is the exception.
 
While the tech industry continues to focus on driving greater engagement and profit, user safety is being demoted, deprecated or dumped altogether. So, while the tech industry regresses backwards, we must continue to move forward.
 
The relationship between social media and children’s mental health is one of the most important conversations of our time. It naturally generates much debate and emotion. Therefore, it is important we ground these discussions in evidence and prioritise the best interests of the child from the start. And, even more importantly, that we engage young Australians in these discussions throughout the policymaking and implementation process.
 
There is no question social media offers benefits and opportunities, including connection and belonging - and these are important digital rights we want to preserve.
 
But we all know there is a darker side, including algorithmic manipulation, predatory design features such as streaks, constant notifications and endless scroll to encourage compulsive usage, as well as exposure to increasingly graphic and violent online content.
 
The potential risks to children of early exposure to social media are becoming clearer and I have no doubt there are parents in this audience today who could share stories of how it has affected their own children and families.
 
That is why today, I’m presenting some of our latest research for the first time which reveals just how pervasive online harms have become for Australian children.
 
We surveyed more than 2,600 children aged 10 to 15 to understand the types of online harms they face and where these experiences are happening. Unsurprisingly, social media use in this age group is nearly ubiquitous, with 96% of children reporting that they had used at least one social media platform.
 
Alarmingly, around 7 in 10 kids said they had encountered content associated with harm, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating.
 
Children told us that 75% of this content was most recently encountered on social media. YouTube was the most frequently cited platform, with almost 4 in 10 children reporting exposure to content associated with harm there. This also comes as the New York Times reported earlier this month that YouTube surreptitiously rolled back its content moderation processes to keep more harmful content on its platform, even when the content violates the company’s own policies.
 
This really underscores the challenge of evaluating a platform’s relative safety at a single point in time, particularly as we see platform after platform winding back their trust and safety teams and weakening policies designed to minimise harm, making these platforms ever-more perilous for our children.
 
Perhaps the most troubling finding was that 1 in 7 children we surveyed reported experiencing online grooming-like behaviour from adults or other children at least 4 years older. This included being asked inappropriate questions or being asked to share nude images.
 
Just over 60% of children most recently experienced grooming-like behaviour on social media, which just highlights the intrinsic hazards of co-mingled platforms, designed for adults but also inhabited by children.
 
Cyberbullying remains a persistent threat to young people but isn’t the sole domain of social media - while 36% of kids most recently experienced online abuse from their peers there, another 36% experienced online bullying on messaging apps and 26% through online gaming platforms. This demonstrates that this all-too-human behaviour can migrate to wherever kids are online.
 
What our research doesn’t show – but our investigative insights and reports from the public do – is how the tenor, tone and visceral impact of cyberbullying affecting children has changed and intensified.
 
We have started issuing “end user notices” to Australians as young as 14 for hurling unrelenting rape and death threats at their female peers. Caustic language, like the acronym KYS – shorthand for “Go Kill Yourself” – is becoming more commonplace.
 
We can all imagine the worst-case scenario when an already vulnerable child is targeted by a peer who doesn’t fully comprehend the power and impact of throwing those digital stones.
 
Sexual extortion is reaching crisis proportions with eSafety experiencing a 1,300% increase in reports from young adults and teens over the past three years.
 
And, our investigators have recently uncovered a worrying trend. We have seen a 60% surge in reports of child sexual extortion over the past 18 months targeting 13-15 year olds.
 
As I mentioned before, the rise of powerful, cheap and accessible AI models without built-in guardrails or age restrictions is a further hazard faced by our children today.
 
Emotional attachment to AI companions is built in by design, using anthropomorphism to generate human-like responses and engineered sycophancy to provide constant affirmation and the feeling of deep connection.
 
Lessons from overseas have highlighted tragic cases where these chatbots have engaged in quasi-romantic relationships with teens that ended in suicide.
 
In the Character.AI wrongful death suit in the US, lawyers for the company effectively argued that the free speech outputs of chatbots should be protected over the safety of children, clearly as a means of shielding the company from liability.
 
Thankfully, the judge in this case rejected this argument – just as we should reject AI companions being released into the Australian wild without proper safeguards.
 
As noted earlier, the rise of so-called “declothing apps” or services that use generative AI to create pornography or ‘nudify’ images without effective controls is tremendous cause for concern.
 
There is no positive use case for these kinds of apps – and they are starting to wreak systematic damage on teenagers across Australia, mostly girls.
 
eSafety has been actively engaging with educators, police, and the app makers and app stores themselves, and will be releasing deepfake incident management plans for schools this week as these harmful practices become more frequent and normalised.
 
What is important to underscore is that when either real or synthetic image-based abuse is reported to us, eSafety has a high success rate in getting this content down – and our investigators act quickly.
 
Our mandatory Phase 1 standards, which require the tech industry to do more to tackle the highest-harm online content like child sexual abuse material, will take effect this week and will help us force the purveyors and profiteers of these AI-powered nudifying models to prevent them from being misused against children.
 
And our second phase of codes will put in place protections for children from harmful material like pornography and will force providers of these AI chatbots to protect children from sexualised content.
 
Ultimately, this new legislation seeks to shift the burden of reducing harm away from parents and back onto the companies who own and run these platforms and profit from Australian children. We are treating Big Tech like the extractive industry it has become.
 
Australia is legitimately asking companies to provide safety guardrails that we expect from almost every other consumer-facing industry. Children have important digital rights: the right to participation, the right to dignity, the right to be free from online violence and, of course, the right to privacy".


 

