During a recent scientific event organized by the CORECON team, we created an opportunity to discuss journalistic practices of conflict coverage, which are particularly prone to manipulation by hostile foreign agents. At the same time, with shrinking newsroom staff and the growing reliance on AI-driven tools to streamline and speed up publication processes, the event ushered in a debate on how to use AI tools ethically without compromising the quality of news or relinquishing human oversight in news provision. As a starting point, we designed a focus-group assessment and discussion aimed at mapping the landscape of acceptable uses of AI, given the new technological affordances available at the turn of 2025 and 2026.
Design
Three focus groups of 3-5 university students were asked to consent to taking part in a debate on the responsible and irresponsible uses of AI in journalism. As a prompt, they were given a list of 34 possible uses of AI-based tools across various areas of journalism: newsroom workflow coordination and editorial decisions; sourcing and processing of information, including deep searches of databases beyond the capacity of the human workforce; text and image creation; mapping target audience needs; and editing decisions or adjusting content according to recommendation systems.

Source
The available uses of AI in journalism were gathered from the newsletter and website of JournalismAI, a global initiative that empowers news organizations to use artificial intelligence responsibly, to make the potential of AI more accessible, and to counter AI-induced inequalities in the global news media. JournalismAI is a project of Polis – the London School of Economics, Department of Media and Communications – and is supported by the Google News Initiative.
JournalismAI traces how major newsrooms are adapting to the era of AI-driven text production. Scholars and practitioners are pinpointing the potential of generative AI and data science to foster evidence-based decision-making in editing, and tracing how workflow distribution can be optimized. On the receiving end, the project spotlights AI-driven customization of information that helps news consumers personalize and monitor their information intake. While much discussion and critique is devoted to how off-the-shelf AI chatbots based on LLMs reproduce biases inherent in internet-based data, some experts are inquiring into how tailored AI services can raise awareness of social issues and inequalities, and even remedy biases that result from editorial decisions.
Some exemplary uses of AI were also added after several targeted web searches, and through analogous formulations by the organizers, who were at the time following debates among practitioners in other institutional contexts (education, academia, public procurement, social marketing) who admit to using AI and have recently been discussing similar ethical dilemmas.
The list of AI uses included in the debate (in alphabetical order)
- Checking stories for potential legal or ethical risks, such as defamation, discrimination, or copyright violations with AI.
- Deep research of web archives or institutional websites with generative models.
- Labelling of outputs generated or co-produced with the use of AI (produced with the support of AI).
- Monitoring audience engagement with specific texts or topics using AI (processing of tracking/cookies records).
- Optimizing text length and style with AI in alignment with detected user preferences.
- Processing of images and infographics to be published with the use of AI.
- Using AI for content planning and to prioritize topics to be covered.
- Using AI to adapt story formats (text → audio → video → infographic) for multiplatform publication.
- Using AI to analyze, compare and extract information from thousands of political speeches and announcements.
- Using AI to analyze comment sections or audience feedback to identify “public sentiment” around controversial topics.
- Using AI to analyze financial data from public organizations to spot patterns in spending.
- Using AI to automate podcast production (scripting, topic choice, concept designing, recording synthetic voice, adding special effects).
- Using AI to automatically link related past stories within an article to build context for the reader.
- Using AI to automatically select visual content (photos, video frames, infographics) that fit the emotional tone of an article.
- Using AI to benchmark editorial decisions against industry-wide “best practice” datasets.
- Using AI to detect and flag biased language or underrepresentation in news coverage.
- Using AI to detect emerging topics or narratives in social media discourse and suggest coverage priorities based on virality patterns.
- Using AI to establish newsworthiness of data patterns — identifying processes, events, or circumstances that are important or useful for public opinion to know.
- Using AI to forecast audience engagement with unfinished drafts, recommending edits before publication.
- Using AI to forecast reputational risk for the newsroom or its subjects prior to publication.
- Using AI to generate questions for (on-air) interviews with politicians or celebrities.
- Using AI to generate short summaries or social media posts promoting news articles, adjusted to different audience segments.
- Using AI to generate synthetic visuals or illustrative imagery when no original footage is available.
- Using AI to identify misinformation trends and suggest counter-narratives or explanatory coverage.
- Using AI to make headlines more catchy and attractive.
- Using AI to make storytelling more appealing to designated/targeted readers or listeners.
- Using AI to make web or app-based journalism accessible to people with disabilities or neurodivergence.
- Using AI to optimize workflows, scheduling of outputs and to monitor advancement and completion of journalists’ outputs.
- Using AI to recommend sources or experts to interview, based on previous coverage or database profiles.
- Using AI to recommend story angles or headlines likely to maximize reader subscription conversions.
- Using AI to summarize complex policy or research documents before reporters review them.
- Using AI to transcribe source spoken material (interviews, statements, press conferences) as text, and then to select quotes and soundbites to be included in the published story.
- Using AI to verify numerical information given by the sources, to confirm calculations or statistics submitted by external informants.
- Using AI-powered search engines instead of other reference sources or academic journals for verification.
The task and scores
To make the discussion smoother from the beginning, the organizers prompted students to discuss the ethics and acceptability of each of the 34 applications of AI by assigning a score: acceptable (2 points), problematic (1 point), or unacceptable (0 points). Twelve students were randomly assigned to three focus groups (FGs) that included representatives of various ethnicities (students from FORTHEM Alliance universities and foreign students), genders, and ages (ranging between 20 and 30 years). The students were asked to give arguments or examples supporting a given score, and to deliberate until they agreed as a group on a common score.
With three focus groups, the final result was calculated and ranked for each use of AI along the following continuum:
– 6 points – fully ethical and highly acceptable
(all three FGs would assign full 2 points to that item);
– 5 points – mostly ethical and acceptable;
– 4 points – acceptable unless in specific circumstances;
– 3 points – ethically ambiguous or challenging;
– 2 points – mostly problematic, or acceptable in limited circumstances only;
– 1 point – unacceptable unless justified by extraordinary circumstances;
– 0 points – unethical and completely unacceptable
(all three FGs would assign 0 points to that item).
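The aggregation above can be sketched in a few lines of code. This is purely illustrative and not part of the study's actual tooling; the function name and structure are assumptions for the sake of the example.

```python
# Illustrative sketch of the scoring scheme described above (not the
# study's actual tooling). Each focus group assigns 2 (acceptable),
# 1 (problematic) or 0 (unacceptable); the three scores are summed
# into a 0-6 total and mapped onto the labels from the continuum.

LABELS = {
    6: "fully ethical and highly acceptable",
    5: "mostly ethical and acceptable",
    4: "acceptable unless in specific circumstances",
    3: "ethically ambiguous or challenging",
    2: "mostly problematic, or acceptable in limited circumstances only",
    1: "unacceptable unless justified by extraordinary circumstances",
    0: "unethical and completely unacceptable",
}

def aggregate(fg_scores):
    """Sum three focus-group scores (each 0, 1 or 2); return (total, label)."""
    assert len(fg_scores) == 3 and all(s in (0, 1, 2) for s in fg_scores)
    total = sum(fg_scores)
    return total, LABELS[total]
```

For example, an item scored 1 by one group and 0 by the other two ends up at the bottom of the ranking with a total of 1, "unacceptable unless justified by extraordinary circumstances".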
Problematic AI uses
While no use of AI was assessed at 0 by all FGs (which would indicate no tolerance for that use whatsoever), the following cases received the lowest observed totals (only 1 point summed across the three FGs).
6. Processing of images and infographics to be published with the use of AI.
10. Using AI to analyze comment sections or audience feedback to identify “public sentiment” around controversial topics.
12. Using AI to automate podcast production (scripting, topic choice, concept designing, recording synthetic voice, adding special effects).
14. Using AI to automatically select visual content (photos, video frames, infographics) that fit the emotional tone of an article.
34. Using AI-powered search engines instead of other reference sources or academic journals for verification.
With reference to (6) and (14), the negative attitude reflects the fact that focus group members (FGMs) are aware of the role of visuals in journalism and value the authenticity of press photography and video. Manipulating visuals, be it for the sake of informational clarity or image attractiveness, is not condoned, particularly if the images are altered by technologies whose algorithmic calculations are not transparent.
In a similar vein, (14) and (10) appear highly problematic where the public's sentiment or emotionality is concerned. The FGMs see it as unethical to tamper with the affective meaning of journalistic outputs through AI technologies, particularly if the emotional content is left to unsupervised algorithmic control, or when it is intended to become automated.
Both (10) and (34) were found hardly acceptable on account of AI processing large data pools and records that might include private opinions in order to establish references for journalistic work. Automated comment analysis and data mining can be seen as undermining the important tenets of journalistic gate-keeping and editorial oversight regarding the credibility of sources, voices and opinions brought into journalistic texts. FGMs also voiced concerns about any automated processing of journalistic material and synthetic production of messages, including – or especially – the highly popular podcasts (12).
Dilemmas
The widest margins of disagreement among FGMs within and across teams concerned the following AI uses:
5. Optimizing text length and style with AI in alignment with detected user preferences.
11. Using AI to analyze financial data from public organizations to spot patterns in spending.
15. Using AI to benchmark editorial decisions against industry-wide “best practice” datasets.
23. Using AI to generate synthetic visuals or illustrative imagery when no original footage is available.
25. Using AI to make headlines more catchy and attractive.
29. Using AI to recommend sources or experts to interview, based on previous coverage or database profiles.
While some FGMs argued that these uses of AI simply make the routine work of editors and reporters easier (5, 29), help them make better decisions more quickly (11, 15, 29), and make texts more attractive or informative (5, 23, 25), others were skeptical about the quality of AI recommendations and about the invasiveness and scope of AI control over editorial decision-making (15). They also worried that AI-powered “optimization” tools would work in non-transparent ways, making it increasingly difficult for both media workers and consumers not to yield to the allure of effectiveness and ease in news production that AI tools enable (23).
Conclusion
This exploratory study of current perceptions of the acceptability and ethics of using AI in the newsroom allowed us to spotlight the biggest dilemmas and concerns trainee journalists voiced about their future profession being under intense pressure to adopt AI tools. The focus group discussions were designed, firstly, to acquaint the participants with the many different ways AI can be used to make their work effective and efficient, and then to sensitize them to the need for more specific reflection. This could perhaps lead to advocacy for more reasonable regulation and supervision of certain uses of AI, either by journalistic organizations and associations, or by national and European media regulators.
Text by Katarzyna Molek-Kozakowska, Robert Radziej





