Luqman Kamaldeen Oladayo reports,

Campus journalists from Usmanu Danfodiyo University and other tertiary institutions have been charged to go beyond simply understanding how AI works and to investigate how the systems are designed, deployed, and governed, while holding actors such as companies, governments, and institutions accountable for their societal impact.

Nurudeen Akewushola, an investigative journalist with the International Centre for Investigative Reporting (ICIR), gave the charge during a training session supported by the Pulitzer Centre under its Impact Initiative Project.

The session, held on Saturday, February 14, discussed both the opportunities and the risks associated with the growing use of AI in society and in journalism.

The training focused on equipping participants with skills to critically examine how AI tools are developed, deployed, and sometimes misused, particularly in ways that affect the public through misinformation, fraud, and lack of regulation.

Akewushola described AI accountability reporting as a form of investigative, public-interest journalism that examines how AI systems are designed, deployed, and governed, in order to hold people accountable for their impact on society.

The training highlighted the different stages of AI development, from data and computing to AI models and applications, while also identifying related issues, the actors involved, and the people affected.

To ensure a clear and adequate understanding of the concept, the facilitator illustrated AI accountability reporting with five case studies, one of which was his story titled “AI assisted ponzi schemes: How Meta, YouTube’s regulatory lapses enable scammers smile to banks,” published by the ICIR on October 9, 2025.

Participants React

Habeeb Olokoba, a journalist from UDUS, described the training as eye-opening, saying it gave him insights into AI accountability issues around him that he had long overlooked.

For Esther Nwackukwu, a journalist from the Federal University of Lokoja, the highlight of the session was how the facilitator traced the history of AI, corroborated it with case studies, and explained its effects and its evolution in the present era.

Another campus journalist from UDUS, Sebiotimo Abdullateef Ayomide, commended the facilitator for opening the minds of young journalists to AI accountability reporting.

“This irregular and very unusual training is a highly inspiring and insightful one. Thanks to the facilitator,” he said.

Speaking with Pen Press UDUS, the facilitator explained that the training was executed under the Impact Initiative Project, supported by the Pulitzer Centre, following his investigation into how Nigerians are falling for AI-driven scams, published by the ICIR.

“The training goes beyond reporting how AI works,” he said, adding that it exposed student journalists to the branch of investigative journalism that examines how artificial intelligence systems are designed and deployed, while holding the authorities and other actors accountable for their impact on society: abuse, misinformation, and the perpetration of scams.

Beyond How AI Works

Mr. Akewushola became interested in AI when he noticed how quickly it was being adopted in society, both for innovation and as an instrument of deception. He noted how scammers, political actors, and institutions use AI tools such as deepfakes, voice cloning, and automated content to manipulate people at scale.

“Then, I realised journalists need to understand these technologies in order to investigate misuse, verify digital content, and protect the public from any sort of abuse,” he explained.

How Journalists Can Navigate AI Tools Responsibly

He further explained how journalists can use AI to analyse large datasets, transcribe interviews, detect manipulated images and videos, track online networks, and automate background research.

“When used responsibly, AI saves time and strengthens accuracy,” he said.

However, he cautioned reporters to be careful when using AI, as risks can creep in: relying too heavily on AI outputs without verification, using biased or inaccurate tools, mishandling sensitive data, and spreading false information generated by AI systems.

“Journalists must always cross-check results, understand tool limitations, and maintain strong ethical standards,” he advised.

He concluded by urging the campus journalists interested in AI accountability reporting to start by building strong fundamentals in verification, critical thinking, and storytelling. He further suggested learning digital tools and open-source intelligence (OSINT) methods.

“Most importantly, be curious about how technology works, follow global AI and media trends, and see AI as a tool to strengthen journalism, not replace journalistic judgment,” he concluded.
