New and emerging technologies need urgent oversight and robust transparency
by UN Special Procedure mandate holders, UN Office for Human Rights (OHCHR), agencies
 
Sep. 2023
 
States must guarantee fundamental freedoms online and during the use of digital technologies: UN and regional experts. (OHCHR)
 
States must effectively respect, protect and facilitate the rights to freedom of peaceful assembly, of association and of expression online and in digitally-mediated spaces, and ensure digital technologies are not used to unduly restrict civic space online and offline, UN and regional human rights experts urged in a joint declaration issued on the occasion of the International Day of Democracy.
 
The rights experts raised serious concerns about shrinking civic space online, and the rapidly evolving and emerging technologies that are posing additional threats to the promotion and protection of these rights online and offline.
 
The UN and regional experts raised concerns about the misuse of digital technologies by State and non-State actors, which curtails the effective exercise of these rights online and offline: Internet shutdowns and censorship, digital surveillance, the malicious use of artificial intelligence, online harassment, and the spread of hate speech, disinformation and misinformation.
 
State and non-State actors have deliberately used technology to silence, surveil and harass dissidents, political opposition, human rights defenders, activists and protesters.
 
Such acts create a chilling effect and shrink civic space online and offline, leading to the deterioration and undermining of democracy.
 
States must ensure technologies are used as a means to facilitate the rights to peaceful assembly, association and expression online and offline, not suppress these rights.
 
Among other measures, a global regulatory framework based on international human rights law and standards should be developed to rein in the use of digital surveillance and other emerging technologies. Robust and accountable export control regimes should be put in place for surveillance and other technologies that pose serious risks to the exercise of these fundamental freedoms.
 
States should ensure effective accountability for violations of the rights to freedom of peaceful assembly, association and expression related to the use of digital technologies. Special efforts must be made towards identifying and prosecuting gender-based online violence.
 
Protecting, promoting and effectively enabling these rights offline and online is essential to ensure inclusive and participatory democracies, and resilient and peaceful societies.
 
* Statement by: The UN Special Rapporteur on the rights to freedom of peaceful assembly and of association, jointly with experts from the African Commission on Human and Peoples’ Rights (ACHPR), the Inter-American Commission on Human Rights (IACHR), the ASEAN Intergovernmental Commission on Human Rights (AICHR), and the OSCE Office for Democratic Institutions and Human Rights (ODIHR)
 
http://www.ohchr.org/sites/default/files/documents/issues/trafficking/statements/20230915-jd-foaa-digital-technologies.pdf
 
May 2023
 
UN experts have called for greater transparency, oversight, and regulation to address the negative impacts of new and emerging digital tools and online spaces on human rights.
 
“New and emerging technologies, including artificial intelligence-based biometric surveillance systems, are increasingly being used in sensitive contexts, without the knowledge or consent of individuals,” the experts said ahead of the RightsCon summit in Costa Rica from 5 to 8 June 2023.
 
“Urgent and strict regulatory red lines are needed for technologies that claim to perform emotion or gender recognition,” they said.
 
The experts expressed concern about the proliferation of invasive spyware and a growing array of targeted surveillance technologies used to unlawfully target human rights defenders, activists, journalists, and civil society in all regions.
 
“We condemn the alarming use of spyware and surveillance technologies in violation of human rights and the broader chilling effect of such unlawful measures on the legitimate work of human rights defenders and on civic space worldwide, often under the guise of national security and counter-terrorism measures,” they said.
 
The experts stressed the need to ensure that these systems do not further expose people and communities to human rights violations, including through the expansion and abuse of invasive surveillance practices that infringe on the right to privacy, facilitate the commission of gross human rights violations, including enforced disappearances, and enable discrimination.
 
They also expressed concern about impacts on the freedoms of expression, thought and peaceful protest, and on access to essential economic, social and cultural rights and humanitarian services.
 
“Specific technologies and applications should be avoided altogether where human rights-compliant regulation is not possible,” the experts said.
 
The experts noted that so-called “generative AI” systems can enable the cheap and rapid mass production of synthetic content that spreads disinformation or promotes and amplifies incitement to hatred, discrimination or violence on the basis of race, sex, gender and other characteristics.
 
The experts also expressed concern that the development of these systems is driven by a small group of powerful actors, including businesses and investors, without adequate requirements for human rights due diligence or consultation with affected individuals and communities. Additionally, content moderation is often performed by individuals in situations of labour exploitation.
 
“Regulation is urgently needed to ensure transparency, alert people when they encounter synthetic media, and inform the public about the training data and models used,” the experts said.
 
The experts reiterated their calls for caution about the radical impact of digital technologies in the context of humanitarian crises, from large-scale data collection – including the collection of highly sensitive biometric data – to the use of advanced targeted surveillance technologies.
 
“We urge restraint in the use of such measures until the broader human rights implications are fully understood and robust data protection safeguards are in place,” they said.
 
They underlined the need to ensure technical solutions – including strong end-to-end encryption and unfettered access to virtual private networks – to secure and protect digital communications.
 
The experts reminded States and businesses of their respective duties and responsibilities, including human rights due diligence requirements in the development, use, vetting and procurement of digital technologies.
 
“Both industry and States must be held accountable, including for their economic, social, environmental, and human rights impacts,” they said. “The next generation of technologies must not reproduce or reinforce systems of exclusion, discrimination and patterns of oppression.”
 
http://www.ohchr.org/en/press-releases/2023/06/new-and-emerging-technologies-need-urgent-oversight-and-robust-transparency http://www.ohchr.org/en/statements/2023/09/states-must-guarantee-fundamental-freedoms-online-and-during-use-digital http://www.rightscon.org http://english.elpais.com/international/women-leaders-of-latin-america/2024-08-01/artificial-intelligence-doesnt-think-it-doesnt-learn-it-doesnt-decide.html http://www.accessnow.org/publication/internet-shutdowns-in-2023-mid-year-update/
 
* Neurotechnologies have the potential to decode and alter our perception, behaviour, emotion, cognition and memory – arguably, the very core of what it means to be human. This has major human rights and ethical implications as these devices could be used to invade people’s mental privacy and modify their identity and sense of agency, for example by manipulating people’s beliefs, motivations and desires:
 
http://www.ohchr.org/en/hr-bodies/hrc/advisory-committee/neurotechnologies-and-human-rights http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-neurorightsfoundation.pdf http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-oneill-riosrivers.pdf http://www.ohchr.org/en/statements-and-speeches/2023/11/turk-calls-attentive-governance-artificial-intelligence-risks http://www.globalwitness.org/en/blog/social-media-bosses-must-invest-guarding-global-elections-against-incitement-hate-and-violence/ http://yearofdemocracy.org/campaign-asks/ http://www.washingtonpost.com/world/2023/09/26/hindu-nationalist-social-media-hate-campaign/ http://www.dw.com/en/eu-says-elon-musks-x-is-biggest-source-of-disinformation/a-66930194 http://www.ids.ac.uk/news/african-nations-spending-1bn-a-year-on-harmful-surveillance-of-citizens/
 
* AP Dec. 2023: The U.S. multinational company Google has agreed to settle a $5 billion privacy lawsuit alleging that it spied on people who used the “incognito” mode in its Chrome browser — along with similar “private” modes in other browsers — to track their internet use. The class-action lawsuit filed in 2020 said Google misled users into believing that it wouldn’t track their internet activities while using incognito mode. It argued that Google’s advertising technologies and other techniques continued to catalog details of users’ site visits and activities despite their use of supposedly “private” browsing. Plaintiffs also charged that Google’s activities yielded an “unaccountable trove of information” about users who thought they’d taken steps to protect their privacy.
 
http://www.dw.com/en/journalists-say-tech-giants-google-and-meta-aid-suppression/a-68043102 http://www.amnesty.org/en/latest/news/2023/12/south-korea-google-fails-to-tackle-online-sexual-abuse-content-despite-complaints-by-survivors/
 
May 2023
 
Threats from the misuse of artificial intelligence – report from the British Medical Journal
 
In this section, we describe three sets of threats associated with the misuse of Artificial Intelligence (AI), whether it be deliberate, negligent, accidental or because of a failure to anticipate and prepare to adapt to the transformational impacts of AI on society.
 
The first set of threats comes from the ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras, and to develop highly personalised and targeted marketing and information campaigns as well as greatly expanded systems of surveillance.
 
This ability of AI can be put to good use, for example, improving our access to information or countering acts of terrorism. But it can also be misused with grave consequences.
 
The use of this power to generate commercial revenue for social media platforms, for example, has contributed to the rise in polarisation and extremist views observed in many parts of the world. It has also been harnessed by other commercial actors to create a vast and powerful personalised marketing infrastructure capable of manipulating consumer behaviour.
 
Experimental evidence has shown how AI used at scale on social media platforms provides a potent tool for political candidates to manipulate their way into power, and it has indeed been used to manipulate political opinion and voter behaviour. Cases of AI-driven subversion of elections include the 2013 and 2017 Kenyan elections, the 2016 US presidential election and the 2017 French presidential election.
 
When combined with the rapidly improving ability to distort or misrepresent reality with deepfakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts. AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly.
 
This is perhaps best illustrated by China’s Social Credit System, which combines facial recognition software and analysis of ‘big data’ repositories of people’s financial transactions, movements, police records and social relationships to produce assessments of individual behaviour and trustworthiness, which results in the automatic sanction of individuals deemed to have behaved poorly.
 
Sanctions include fines, denying people access to services such as banking and insurance services, or preventing them from being able to travel or send their children to fee-paying schools. This type of AI application may also exacerbate social and health inequalities and lock people into their existing socioeconomic strata.
 
But China is not alone in the development of AI surveillance. At least 75 countries, ranging from liberal democracies to military regimes, have been expanding such systems. Although democracy and rights to privacy and liberty may be eroded or denied without AI, the power of AI makes it easier for authoritarian or totalitarian regimes to be either established or solidified and also for such regimes to be able to target particular individuals or groups in society for persecution and oppression.
 
The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS). There are many applications of AI in military and defence systems, some of which may be used to promote security and peace. But the risks and threats associated with LAWS outweigh any putative benefits.
 
Weapons are autonomous in so far as they can locate, select and ‘engage’ human targets without human supervision. This dehumanisation of lethal force is said to constitute the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear arms.
 
Lethal autonomous weapons come in different sizes and forms. But crucially, they include weapons and explosives that may be attached to small, mobile and agile devices (eg, quadcopter drones) with the intelligence and ability to self-pilot, capable of perceiving and navigating their environment. Moreover, such weapons could be cheaply mass-produced and relatively easily set up to kill at an industrial scale.
 
For example, it is possible for a million tiny drones equipped with explosives, visual recognition capacity and autonomous navigational ability to be contained within a regular shipping container and programmed to kill en masse without human supervision.
 
As with chemical, biological and nuclear weapons, LAWS present humanity with a new weapon of mass destruction, one that is relatively cheap and that also has the potential to be selective about who or what is targeted.
 
This has deep implications for the future conduct of armed conflict as well as for international, national and personal security more generally. Debates have been taking place in various forums on how to prevent the proliferation of LAWS, and about whether such systems can ever be kept safe from cyber-infiltration or from accidental or deliberate misuse.
 
The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology. Projections of the speed and scale of job losses due to AI-driven automation range from tens to hundreds of millions over the coming decade.
 
Much will depend on the speed of development of AI, robotics and other relevant technologies, as well as policy decisions made by governments and society. However, in a survey of most-cited authors on AI in 2012/2013, participants predicted the full automation of human labour shortly after the end of this century.
 
It is already anticipated that in this decade, AI-driven automation will disproportionately impact low/middle-income countries by replacing lower-skilled jobs, and then continue up the skill-ladder, replacing larger and larger segments of the global workforce, including in high-income countries.
 
While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour, including harmful consumption of alcohol and illicit drugs, being overweight, and having lower self-rated quality of life and health and higher levels of depression and risk of suicide.
 
However, an optimistic vision of a future where human workers are largely replaced by AI-enhanced automation would include a world in which improved productivity would lift everyone out of poverty and end the need for toil and labour.
 
Yet the amount of exploitation our planet can sustain for economic production is limited, and there is no guarantee that any of the added productivity from AI would be distributed fairly across society.
 
Thus far, increasing automation has tended to shift income and wealth from labour to the owners of capital, and appears to contribute to the increasing degree of maldistribution of wealth across the globe.
 
Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health.
 
The threat of self-improving artificial general intelligence
 
Self-improving general-purpose AI, or AGI, is a theoretical machine that can learn and perform the full range of tasks that humans can. By being able to learn and recursively improve its own code, it could improve its capacity to improve itself and could theoretically learn to bypass any constraints in its code and start developing its own purposes, or alternatively it could be equipped with this capacity from the beginning by humans.
 
The vision of a conscious, intelligent and purposeful machine able to perform the full range of tasks that humans can has been the subject of academic and science fiction writing for decades. But regardless of whether conscious or not, or purposeful or not, a self-improving or self-learning general purpose machine with superior intelligence and performance across multiple dimensions would have serious impacts on humans.
 
We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered.
 
If realised, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the ‘biggest event in human history’.
 
Although the effects and outcome of AGI cannot be known with any certainty, multiple scenarios may be envisioned. These include scenarios where AGI, despite its superior intelligence and power, remains under human control and is used to benefit humanity. Alternatively, we might see AGI operating independently of humans and coexisting with humans in a benign way.
 
Logically, however, there are scenarios where AGI could present a threat to humans, and possibly an existential threat, by intentionally or unintentionally causing harm directly or indirectly, by attacking or subjugating humans or by disrupting the systems or using up resources we depend on.
 
A survey of AI society members predicted a 50% likelihood of AGI being developed between 2040 and 2065, with 18% of participants believing that the development of AGI would be existentially catastrophic. Presently, dozens of institutions are conducting research and development into AGI.
 
Assessing risk and preventing harm
 
Many of the threats described above arise from the deliberate, accidental or careless misuse of AI by humans. Even the risk and threat posed by a form of AGI that exists and operates independently of human control is currently still in the hands of humans. However, there are differing opinions about the degree of risk posed by AI and about the relative trade-offs between risk and potential reward, and harms and benefits.
 
Nonetheless, with exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit.
 
Crucially, as with other technologies, preventing or minimising the threats posed by AI will require international agreement and cooperation, and the avoidance of a mutually destructive AI ‘arms race’. It will also require decision making that is free of conflicts of interest and protected from the lobbying of powerful actors with a vested interest.
 
Worryingly, large private corporations with vested financial interests and little in the way of democratic and public oversight are leading in the field of AGI research.
 
Different parts of the UN system are now engaged in a desperate effort to ensure that our international social, political and legal institutions catch up with the rapid technological advancements being made with AI.
 
In 2020, for example, the UN established a High-level Panel on Digital Cooperation to foster global dialogue and cooperative approaches for a safe and inclusive digital future.
 
In September 2021, the UN High Commissioner for Human Rights called on all States to place a moratorium on the sale and use of AI systems until adequate safeguards are put in place to avoid the ‘negative, even catastrophic’ risks posed by them.
 
And in November 2021, the 193 member states of UNESCO adopted an agreement to guide the construction of the necessary legal infrastructure to ensure the ethical development of AI. However, the UN still lacks a legally binding instrument to regulate AI and ensure accountability at the global level.
 
At the regional level, the European Union has an Artificial Intelligence Act which classifies AI systems into three categories: unacceptable-risk, high-risk, and limited- and minimal-risk. This Act could serve as a stepping stone towards a global treaty, although it still falls short of the requirements needed to protect several fundamental human rights and to prevent AI from being used in ways that would aggravate existing inequities and discrimination.
 
There have also been efforts focused on LAWS, with an increasing number of voices calling for stricter regulation or outright prohibition, just as we have done with biological, chemical and nuclear weapons. State parties to the UN Convention on Certain Conventional Weapons have been discussing lethal autonomous weapon systems since 2014, but progress has been slow.
 
What can and should the medical and public health community do? Perhaps the most important thing is to simply raise the alarm about the risks and threats posed by AI, and to make the argument that speed and seriousness are essential if we are to avoid the various harmful and potentially catastrophic consequences of AI-enhanced technologies being developed and used without adequate safeguards and regulation.
 
It is also important that we target our concerns not only at AI itself, but also at the actors who are driving the development of AI too quickly or too recklessly, and at those who seek to deploy AI only for self-interest or malign purposes.
 
If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances.
 
This includes ensuring transparency and accountability of the parts of the military–corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy.
 
Given that the world of work and employment will drastically change over the coming decades, we should deploy our public health expertise in evidence-based advocacy for a fundamental and radical rethink of social and economic policy to enable future generations to thrive in a world in which human labour is no longer a central or necessary component in the production of goods and services.
 
http://gh.bmj.com/content/8/5/e010435?rss=1 http://managing-ai-risks.com/ http://futureoflife.org/project/mitigating-the-risks-of-ai-integration-in-nuclear-launch/ http://www.gladstone.ai/action-plan http://www.safe.ai/statement-on-ai-risk http://theelders.org/news/elders-urge-global-co-operation-manage-risks-and-share-benefits-ai http://www.theguardian.com/technology/article/2024/may/10/is-ai-lying-to-me-scientists-warn-of-growing-capacity-for-deception http://www.theguardian.com/technology/2023/sep/21/ai-focused-tech-firms-locked-race-bottom-warns-mit-professor-max-tegmark http://www.unesco.org/en/artificial-intelligence/recommendation-ethics
 
http://www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/ http://www.citizen.org/news/a-i-is-already-harming-democracy-competition-consumers-workers-climate-and-more/ http://www.gen-ai.witness.org/ http://lab.witness.org/projects/synthetic-media-and-deep-fakes/ http://algorithmwatch.org/en/study-microsofts-bing-chat/ http://algorithmwatch.org/en/algorithmwatch-ceases-activities-on-x-twitter/ http://carrcenter.hks.harvard.edu/technology-human-rights
 
http://www.hrw.org/news/2024/01/05/algorithms-too-few-people-are-talking-about http://chrgj.org/focus-areas/technology/ http://chrgj.org/focus-areas/technology/transformer-states/ http://chrgj.org/2023/11/14/co-creating-a-shared-human-rights-agenda-for-ai-regulation-and-the-digital-welfare-state/ http://chrgj.org/wp-content/uploads/2023/08/CHRGJ_Contesting-the-Foundations-of-Digital-Public-Infrastructure.pdf http://chrgj.org/wp-content/uploads/2022/06/Report_Paving-a-Digital-Road-to-Hell.pdf http://chrgj.org/2022/07/14/the-aadhaar-mirage-a-second-look-at-the-world-banks-model-for-digital-id-system/
 
http://www.hrw.org/news/2023/05/03/pandoras-box-generative-ai-companies-chatgpt-and-human-rights http://www.hrw.org/topic/technology-and-rights http://www.stopkillerrobots.org/ http://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems http://www.icrc.org/en/war-and-law/weapons/ihl-and-new-technologies http://www.accessnow.org/issue/artificial-intelligence/ http://www.accessnow.org/bodily-harms-how-ai-and-biometrics-curtail-human-rights/ http://www.accessnow.org/wp-content/uploads/2023/12/Content-and-platform-governance-in-times-of-crisis-applying-international-humanitarian-criminal-and-human-rights-law.pdf
 
http://edri.org/our-work/eu-parliament-plenary-ban-of-public-facial-recognition-human-rights-gaps-ai-act/ http://euobserver.com/digital/157163 http://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/ http://www.amnesty.org/en/latest/news/2023/06/global-companies-must-act-now-to-ensure-responsible-development-of-artificial-intelligence/ http://www.amnesty.org/en/latest/news/2023/12/venture-capital-firms-funding-generative-artificial-intelligence-ignoring-duty-to-protect-human-rights/ http://www.amnesty.org/en/latest/news/2024/06/amnesty-international-introduce-digital-safety-tools/ http://securitylab.amnesty.org/
 
http://www.accessnow.org/press-release/ai-act-failure-for-human-rights-victory-for-industry-and-law-enforcement/ http://www.amnesty.org/en/latest/news/2024/03/eu-artificial-intelligence-rulebook-fails-to-stop-proliferation-of-abusive-technologies http://edri.org/ http://www.article19.org/resources/eu-ai-act-passed-in-parliament-fails-to-ban-harmful-biometric-technologies/ http://www.article19.org/biometric-technologies-privacy-data-free-expression/ http://www.solidar.org/news-and-statements/civil-society-calls-for-regulating-surveillance-technology/ http://www.article19.org/resources/eu-artificial-intelligence-act-must-do-more-to-protect-human-rights/
 
http://www.channel4.com/news/ai-watch http://pulitzercenter.org/journalism/initiatives/ai-accountability-network http://www.pbs.org/newshour/politics/as-social-media-guardrails-fade-and-ai-deepfakes-go-mainstream-experts-warn-of-impact-on-elections http://www.ohchr.org/en/press-releases/2024/02/un-expert-alarmed-new-emerging-exploitative-practices-online-child-sexual http://www.weprotect.org/global-threat-assessment-23 http://www.esafety.gov.au/newsroom/media-releases/esafety-demands-answers-from-twitter-about-how-its-tackling-online-hate http://www.icfj.org/news/new-research-illuminates-escalating-online-violence-musks-twitter http://www.theatlantic.com/technology/archive/2023/05/elon-musk-ron-desantis-2024-twitter/674149/ http://www.washingtonpost.com/politics/2023/11/21/musk-media-matters-texas/ http://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html http://www.ohchr.org/en/press-releases/2022/11/un-human-rights-chief-turk-issues-open-letter-twitters-elon-musk http://www.ohchr.org/en/topic/digital-space-and-human-rights


 


Free, pluralistic and independent media, a vital pillar of democracy
by UNESCO, IFJ, RSF, CPJ, OHCHR
 
Impunity: Silence costs lives. (International Federation of Journalists)
 
To mark the International Day to End Impunity for Crimes against Journalists on 2 November, the International Federation of Journalists (IFJ) is calling on governments across the world to condemn, investigate and arrest those who kill, harass and intimidate journalists, and to enact clear and enforceable legislation to protect journalists’ safety.
 
Who ordered the killing of Anna Politkovskaya, Jamal Khashoggi, Arshad Sharif, Javier Valdez, Martinez Zogo and all other journalists whose murders remain unpunished?
 
Since the adoption of the UN Plan of Action on the Safety of Journalists and the Issue of Impunity in 2012, which aimed to create “a free and safe environment for journalists and media workers”, very little has been done to bring journalists’ killers to justice, carry out independent investigations into attacks against media workers, and provide journalists with the safe environments they need to carry out their duties.
 
Since the beginning of the Israel-Hamas war on 7 October, at least 30 journalists and media workers have been killed. Such a death toll raises questions about whether journalists in Gaza are being deliberately targeted. “Wearing professional safety equipment is not enough to stay alive when journalists and sometimes their families appear to have become targets in a war,” says the IFJ.
 
The federation is also deeply concerned about the state of impunity in Cameroon, India, Kosovo and Mexico, where the targeting of media professionals and/or the lack of action by public authorities in bringing the killers and harassers of journalists to justice conveys the message that it is normal practice to get away with it.
 
Only one in ten cases of murdered journalists is properly investigated, according to the UN. Behind every statistic there is a human tragedy – a death, a kidnapping, a family left without a mother, father, brother or sister.
 
This year again, the IFJ demands complete, independent investigations into all killings, attacks and intimidation of journalists, and the adoption of a binding instrument that will force world governments to act.
 
IFJ President Dominique Pradalie said: “Targeted – aimed directly at – hunted down like wild game – journalists are victims of the enemies of press freedom. Who is going to investigate, who is going to bring the guilty parties to justice? It's time to put an end to this slaughter, which affects not only war zones but also newsrooms in Mexico, India, Peru and Serbia. The United Nations must adopt a convention for the active protection of journalists.”
 
The IFJ urges the UN and world governments to support its proposal for a binding UN Convention on the safety and independence of journalists and other media professionals.
 
* The IFJ represents more than 600,000 journalists in 146 countries
 
http://www.ifj.org/media-centre/news/detail/category/press-releases/article/impunity-silence-costs-lives http://www.abc.net.au/listen/programs/latenightlive/peter-greste-jodie-ginsburg-and-jason-rezaian-on-the-dire-state-/103431824 http://rsf.org/en http://cpj.org/
 
May 2023
 
UN Secretary-General Antonio Guterres message on World Press Freedom Day 2023
 
"For three decades, on World Press Freedom Day, the international community has celebrated the work of journalists and media workers. This day highlights a basic truth: all our freedom depends on press freedom.
 
Freedom of the press is the foundation of democracy and justice. It gives all of us the facts we need to shape opinions and speak truth to power. And as this year’s theme reminds us, press freedom represents the very lifeblood of human rights.
 
But in every corner of the world, freedom of the press is under attack. Truth is threatened by disinformation and hate speech seeking to blur the lines between fact and fiction, between science and conspiracy.
 
The increased concentration of the media industry in the hands of a few, the financial collapse of scores of independent news organizations, and an increase in national laws and regulations that stifle journalists are further expanding censorship and threatening freedom of expression.
 
Meanwhile, journalists and media workers are directly targeted on and offline as they carry out their vital work. They are routinely harassed, intimidated, detained and imprisoned. At least 67 media workers were killed in 2022 — an unbelievable 50 per cent increase over the previous year. Nearly three quarters of women journalists have experienced violence online, and one in four have been threatened physically.
 
On this and every World Press Freedom Day, the world must speak with one voice. Stop the threats and attacks. Stop detaining and imprisoning journalists for doing their jobs. Stop the lies and disinformation. Stop targeting truth and truth-tellers. As journalists stand up for truth, the world stands with them".
 
May 2023
 
Free, pluralistic and independent media, a vital pillar of democracy, International Freedom of Expression Rapporteurs stress. (OHCHR)
 
(Commemorating the 30th anniversary of World Press Freedom Day and the 75th anniversary of the Universal Declaration of Human Rights, freedom of expression mandate holders from the United Nations (UN), the Organization for Security and Co-operation in Europe (OSCE), the African Commission on Human and Peoples’ Rights (ACHPR), and the Inter-American Commission on Human Rights (IACHR) issued a Joint Declaration on Media Freedom and Democracy).
 
“We are alarmed that in many countries around the world laws to protect media freedom are being eroded, physical and online attacks against journalists persist with impunity and the use of courts and the legal system to harass journalists and media outlets is on the rise.
 
Deeply disturbing trends of authoritarianism, co-optation of public power, erosion of judicial independence, and backsliding on human rights in many established and emerging democracies create an urgency and imperative for States to reaffirm and renew their commitment to protect and promote independent, free and pluralist media as a vital pillar of democracy and an enabler of sustainable development.
 
Independent, free and pluralistic media play a critical role in providing reliable news and information, enabling robust public debate, and contributing to building a well-informed and active citizenry. As watchdogs, the media critically scrutinise those in power, investigate and report on matters of public interest, and by doing so contribute to strengthening democratic processes and institutions.
 
The 2023 Joint Declaration on Media Freedom and Democracy highlights the conditions that independent, pluralistic and quality media need to thrive. It outlines the role of the media in enabling and sustaining democratic societies, identifies the elements of an enabling environment for media freedom, and sets out clear, succinct recommendations to States, online platforms, and the media sector.
 
Both States and private companies have obligations and responsibilities to address the growing threats to media freedom and the safety of journalists, and to urgently reverse the decline in public trust in democratic institutions.”
 
May 2023
 
Press freedom: Another step backwards, says the International Federation of Journalists
 
As international organisations and media prepare to celebrate the 30th anniversary of World Press Freedom Day on 3 May, the International Federation of Journalists (IFJ) says press freedom has taken another step backwards and freedom of expression is not the driver for other human rights that it should be.
 
On 3 May 1993, the UN General Assembly proclaimed an international day for press freedom. This day is meant to remind world governments that they need to respect their commitment to press freedom. This year, UNESCO is focussing its activities on ‘Shaping a Future of Rights: Freedom of expression as a driver for all other human rights’.
 
However, the IFJ deplores the fact that freedom of expression is far from acting as a driver for other human rights and that press freedom is clearly taking a step backwards.
 
“From Peru to Iran, from Sudan to Afghanistan, governments are taking drastic measures to impede freedom of expression and prevent the public’s right to know, including internet restrictions, beating, jailing and intimidating journalists, controlling media content and introducing drastic media laws and other laws to curb the free flow of information. Since the adoption of the Windhoek Declaration in 1991, very little has been undertaken to create concrete conditions at international level to guarantee freedom and security for journalists,” said IFJ President Dominique Pradalie.
 
The figures speak for themselves. According to the IFJ’s latest list of media professionals killed in the course of duty, 68 media staff were killed in 2022. Very few of these cases have been investigated because impunity for killing media workers has been the rule over the years.
 
The IFJ also points to ongoing media crackdowns, which have led to large numbers of journalists being jailed, with at least 375 journalists and media workers behind bars in 2022. China has emerged as the world’s biggest jailer of journalists.
 
Ongoing wars and civil unrest in countries and territories such as Afghanistan, Iran, Hong Kong, Myanmar, Peru, Sudan, Ukraine and Yemen have also seen journalists being deliberately targeted and killed. Thirteen journalists have been killed since Russia invaded Ukraine on 24 February 2022. And thousands of Afghan journalists and their families have had to leave Afghanistan for fear of being killed.
 
Digital surveillance and spying software have been widely used against hundreds of journalists in order to kill stories, putting many journalists at risk of having their sources, whereabouts and other personal data publicly disclosed.
 
Repressive laws and Strategic Lawsuits against Public Participation (SLAPPs) have also been widely used to curb free speech and to force journalists to censor themselves all over the world.
 
The fragile media economy, the decline in local news reporting and poor trade union representation have led to drastic cuts in newsrooms, with massive lay-offs and increased discrimination against the most vulnerable categories of journalists.
 
The IFJ deplores the fact that, despite the goodwill expressed in the two UN Security Council resolutions (1738 and 2222) on the protection of journalists in conflict zones, no real commitment has been made to eradicate violence against journalists, to make them safer and to make any attacks against them illegal.
 
The IFJ calls for the urgent adoption of a binding international instrument that will strengthen press freedom by forcing governments to investigate and respond to attacks against the media.
 
* The IFJ represents more than 600,000 journalists in 146 countries
 
http://www.ohchr.org/en/statements-and-speeches/2023/05/free-pluralistic-and-independent-media-vital-pillar-democracy http://www.ohchr.org/en/stories/2023/11/without-free-press-there-no-democracy http://www.unesco.org/en/days/press-freedom-2023/joint-statement http://www.unesco.org/en/articles/three-imprisoned-iranian-women-journalists-awarded-2023-unesco/guillermo-cano-world-press-freedom http://www.unesco.org/en/days/press-freedom http://www.unesco.org/en/world-media-trends http://www.un.org/en/observances/press-freedom-day/resources
 
http://www.ifj.org/media-centre/news/detail/category/press-releases/article/press-freedom-another-step-backwards-says-ifj http://www.passblue.com/2023/05/03/vox-pop-what-press-freedom-means-for-10-journalists-worldwide/ http://rsf.org/en/2023-world-press-freedom-index-journalism-threatened-fake-content-industry http://rsf.org/en/what-it-s-be-journalist-sahel-rsf-report-threats-journalism-african-region http://www.hrw.org/report/2024/02/13/i-cant-do-my-job-journalist/systematic-undermining-media-freedom-hungary http://taxjustice.net/2023/05/03/world-press-freedom-day/ http://www.icij.org/inside-icij/2023/05/its-a-poisonous-cocktail-how-legal-threats-are-being-leveraged-against-journalists-in-panama/ http://globalvoices.org/special/wpfd-2023/ http://www.article19.org/equally-safe/
 
http://cpj.org/2024/01/journalist-casualties-in-the-israel-gaza-conflict/ http://knightcolumbia.org/blog/knight-institute-and-public-interest-organizations-call-on-president-biden-to-protect-journalists-reporting-on-the-israel-gaza-war http://cpj.org/reports/2022/11/killing-with-impunity-vast-majority-of-journalists-murderers-go-free/ http://www.accessnow.org/press-release/spyware-press-freedom-statement/ http://www.ohchr.org/en/press-releases/2023/05/un-rights-chief-issues-call-protect-and-expand-civic-space http://www.ohchr.org/en/documents/thematic-reports/ahrc5029-reinforcing-media-freedom-and-safety-journalists-digital-age http://www.ohchr.org/en/special-procedures/sr-freedom-of-opinion-and-expression http://www.article19.org/sustainable-development/ http://www.article19.org/wp-content/uploads/2023/07/SDGs-Briefing-July-2023.pdf


 
