Svetoslav Ivanov on Press Freedom, AI, and Misinformation

On the occasion of World Press Freedom Day, one of the most experienced and renowned journalists in the bTV newsroom, Svetoslav Ivanov, gave an interview to the media intelligence company Ruepoint. He shared his thoughts on press freedom, artificial intelligence, and fighting misinformation.

As we commemorate World Press Freedom Day, could you share your perspective on the importance of this day and why it's relevant in today's media landscape?

We are living in the “Times of the Great Noise.” Social media platforms, particularly Facebook, and their algorithms have empowered individuals and bad actors to navigate and manipulate people’s hearts and minds. This issue is not new and has been widely acknowledged and discussed globally. However, its resolution remains elusive, as it is deeply rooted in the democratic principle of freedom of expression.

Paradoxically, it is the very essence of democracy that is being exploited by undemocratic entities and regimes to undermine democratic societies. These actors leverage the democratic principle of free speech to sow division. The assertion that “Democracy eats itself” holds some truth in this context.

Regrettably, social media platforms have done too little to address the negative impact of bad actors on democracy. The erosion of democracy poses a grave threat to freedom of expression and weakens the role of media outlets as “gatekeepers.”

The rise of AI has greatly impacted the media landscape. How do you see AI influencing press freedom?

While AI holds promise for breakthroughs in areas such as cancer research, its intersection with information and current affairs poses substantial dangers. Notably, the Cambridge Dictionary declared “hallucinate” its word of the year for 2023.

You know, hallucination is defined as a “false vision perceived as reality.” This concept gained new relevance with the mass adoption of generative artificial intelligence, which became widely accessible in 2023. OpenAI’s product, ChatGPT, garnered global attention and quickly became part of our daily lives.

This technology relies on large language models to simulate realistic human interactions. It can respond to diverse questions and generate written content, including stories, poetry, essays, and even assist with children’s homework.

Furthermore, the same principle extends to creating realistic video and audio imitations of real individuals. This blurs the distinction between truth and fiction in our minds; even when content is labeled “AI Generated,” it still influences our subconscious perceptions.

Therefore, a new wave of misinformation and fabricated news is on the horizon. This phenomenon, coupled with the echo chambers created by social media algorithms that reinforce our biases and drive us into perpetual conflict, poses a significant threat to our perception of reality and our collective knowledge.

Historical events may be simplified, institutions and individuals may be attacked, and biographies may be forged. This is because AI systems are trained on content generated by people with varying motivations, and current large language models readily incorporate false information into their responses when summarizing a topic. They fail to differentiate between truth and falsehood.

In this context, hallucination means radicalization. If we collectively abandon diverse sources of knowledge and turn to a single source for answers, what Yuval Noah Harari has called “The New Oracle,” the risk of a fundamental shift in our worldview becomes increasingly real.

Our present is characterized by distortion, oversimplification, and polarization. This has led to a breakdown in civil conversation and an inability to engage in productive dialogue. Artificial intelligence has the potential to exacerbate this problem by further amplifying these negative trends.

In response to this challenge, I am proud to be among the founders of an international initiative that aims to provide a new and comprehensive approach to addressing these issues openly.

How do you perceive the influence of AI, particularly in regard to combating the dissemination of misinformation and fake news?

In recent months, I have noticed a concerning trend on Facebook in Bulgaria, where genuine journalistic content is being suppressed. Respected media outlets and individuals are having their articles flagged as “violent” and their reach significantly reduced by the platform. This may be the result of new algorithms implemented by Meta that aim to distance the platform from news content globally.

While I understand the intention behind these changes, I am concerned that they may have unintended consequences. By altering the algorithm in this manner, there is a risk of eliminating authentic news content from the platform, while simultaneously amplifying the spread of misinformation and propaganda from troll farms.

Last month, the European Union requested information from Meta regarding their algorithms, deepfakes, the new ad-free subscription model, and the practice of shadow banning, which involves limiting the visibility of certain accounts or posts.

If genuine journalism were subject to shadow banning, it would raise serious questions about the integrity and reliability of information on the platform and could have serious implications for our understanding of reality. If the reality reported by journalism can be “shadow banned,” what happens to reality itself?

That’s a good question indeed. From the perspective of a journalist at a major media outlet, how do you navigate the complexities of the digital age to ensure the accuracy and reliability of your reporting in the face of misinformation?

It is a difficult task. Take our emotions as humans: by closing ourselves in bubbles around one divisive topic or another, it becomes much easier to judge those who do not agree with us. Universal hysteria, division, and refusal to accept other points of view are prevalent.

Why don’t we accept the other point of view or opinion? Because we are not sure that it is based on the reality that our own consciousness reproduces. We often think that it is the result of manipulation and foreign influence. When we think this way, however, we must never forget that the other person thinks the same way about us and our point of view.

The possibility of rapprochement and national dialogue is gradually disappearing. Another reason is that over the last decade, major media outlets have been constantly targeted by “hallucinating leaders” and populists as “enemies of the people.” There are many examples of harassment against journalists all over the world. Their reporting is a huge obstacle to every manipulation and propaganda tactic.

As we look ahead, what do you believe are the most pressing issues confronting press freedom, and how can PR and comms teams collaborate with journalists and media outlets to address these challenges?

It is very important for media outlets to finally establish legally binding agreements with social media and AI platforms. The content produced by journalists is an asset, and it is no longer acceptable for social media platforms to profit from this content without fair compensation.

For instance, if a media outlet’s documentary is hosted on YouTube, TikTok, or Facebook, the social media platform should provide appropriate remuneration.

In January, The New York Times took a significant step by filing a lawsuit against popular AI platforms for copyright infringement related to its written works. The lawsuit, filed in Federal District Court in Manhattan, alleges that millions of articles published by The Times were utilized to train automated chatbots that now compete with the news outlet as a credible source of information.

It is likely that other media outlets will follow suit, seeking legal recourse not only against AI platforms but also against platforms like TikTok, Meta, and others that exploit our work for their own financial gain.


You can have a look at the interview here.