
Browsing by Author "Zilinsky, Jan"

Showing 2 of 2 results
    Can AI-Attributed News Challenge Partisan News Selection? Evidence from a Conjoint Experiment
    (SAGE Publications, 2025) Zoizner, Alon; Matthes, Jörg; Corbu, Nicoleta; De Vreese, Claes; Esser, Frank; Koc-Michalska, Karolina; Schemer, Christian; Theocharis, Yannis; Zilinsky, Jan
    With artificial intelligence (AI) increasingly shaping newsroom practices, scholars debate how citizens perceive news attributed to algorithms versus human journalists. Yet, little is known about these preferences in today’s polarized media environment, where partisan news consumption has surged. The current study explores this issue by providing a comprehensive and systematic examination of how citizens evaluate AI-attributed news compared to human-based news from like-minded and cross-cutting partisan sources. Using a preregistered conjoint experiment in the United States (N = 2,011) that mimics a high-choice media environment, we find that citizens evaluate AI-attributed news as negatively as cross-cutting news sources, both in terms of attitudes (perceived trustworthiness) and behavior (willingness to read the news story), while strongly preferring like-minded sources. These patterns remain stable across polarizing and non-polarizing issues and persist regardless of citizens’ preexisting attitudes toward AI, political extremity, and media trust. Our findings thus challenge more optimistic views about AI’s potential to facilitate exposure to diverse viewpoints. Moreover, they suggest that increased automation of news production faces both public mistrust and substantial reader resistance, raising concerns about the future viability of AI in journalism.
    Justifying an Invasion: When Is Disinformation Successful?
    (Routledge, Taylor & Francis Inc., 2024) Zilinsky, Jan; Theocharis, Yannis; Pradel, Franziska; Corbu, Nicoleta
    Conventional wisdom suggests that social media, especially when used by authoritarian powers with nefarious aims, leaves citizens of democratic countries vulnerable to psychological influence campaigns. But such concerns overlook predispositions among recipients of false claims to reject (or to endorse) conspiratorial narratives. Analyzing responses from a survey fielded in 19 countries, we find that it is a preexisting conspiracy outlook at the individual level, more so than media diets, which consistently predicts rating Russia’s pretenses for the invasion as more accurate. In all countries, individuals who view the world in general with a conspiratorial mindset are more likely to believe war-related disinformation. Receiving news via YouTube, Facebook, or TikTok is also associated with greater belief in Russian narratives justifying the invasion in several countries, but these relationships are weaker compared to those observed for conspiracy thinking. Without downplaying a potential positive role for media interventions, the findings highlight the importance of a deeper understanding of the underlying causes of conspiratorial thinking.

