Following the recent Artificial Intelligence Journalism World Forum in Dubai, where I moderated a panel on artificial intelligence (AI), ethics and journalism, I’ve been thinking a lot more about the ethics involved for all content creators who use AI or are considering it.
Is AI even on your radar? It wasn’t really on mine. I worked in newsrooms until 2015, and at no point do I remember having a discussion about it. Things have changed now, though, with the number of articles created using artificial intelligence software on the rise.
Perhaps the first moment I realised that AI article generation is a very real thing was seeing a piece generated by AI in the respected UK newspaper The Guardian. The headline alone might sum up the feelings we all have (‘A robot wrote this entire article. Are you scared yet, human?’). The piece was compiled by GPT-3, a cutting-edge AI language model that uses machine learning to produce human-like text based on prompts fed to it by a human.
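For readers curious about the mechanics, the short sketch below shows roughly what ‘prompt in, human-like text out’ looks like in practice. It is a minimal illustration using the freely available GPT-2 model via the Hugging Face transformers library as a stand-in for GPT-3, which is only accessible through OpenAI’s commercial API; the prompt and settings here are assumptions for demonstration, not what The Guardian’s editors used.

```python
# Minimal, illustrative sketch of prompt-based text generation.
# GPT-2 via the Hugging Face "transformers" library stands in for GPT-3;
# the prompt and generation settings are purely illustrative assumptions.
from transformers import pipeline

# Load a small, openly available text-generation model.
generator = pipeline("text-generation", model="gpt2")

# A human supplies the prompt; the model continues it in human-like prose.
prompt = "Humans have nothing to fear from artificial intelligence, because"
outputs = generator(prompt, max_new_tokens=80, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

Editors working with such systems typically generate several continuations and then select and polish one by hand, which is broadly how prompt-driven pieces are assembled.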
And in the US, The Washington Post has been generating AI-fuelled articles since as far back as 2016, when it trialled its homegrown ‘Heliograf’ software to generate coverage of the Rio Olympics. Heliograf has since generated more than 300 articles.
Last year, The Post took it a step further, using Heliograf to produce AI-powered audio updates on the US presidential election. Heliograf supports other initiatives including high school sports coverage and comment moderation.
The point to note here is that Heliograf is being used for fact-based, data-driven coverage – for which it has won awards – rather than for long-form human-interest stories.
These sorts of articles raise few ethical questions, other than the age-old argument that AI will take our jobs, of course.
Closer to home, in the UAE, Al Arabiya has welcomed AI into its newsroom.
So far, the application of AI for news seems to be centred around reports with figures, such as sports results and elections, but with technology advancing all the time, AI-related content will surely only grow.
What AI can’t yet do is analyse the context of a political or social feature, for example – and surely it will never match the creativity of humans? And what about sensitivity? A robot can’t report on deaths with the care they demand.
The Artificial Intelligence Journalism for Research and Forecasting unit, a think tank established in the UAE in 2018, believes there should be a code of ethics for journalists – but let’s face it, there needs to be a code of ethics for all content creators.
How does AI recognise whether it has the rights to use an image or video? Or whether that image is in bad taste, hateful or inappropriately humorous?
This raises the question of who has the right to create a code of ethics – and, for that matter, what ethics are and what AI ethics are.
It’s easy for bias, conscious or unconscious, to slip into our writing. Add in morality, social conventions and local cultural mores, and you begin to understand the hurdles faced when generating content using AI.
Many publications have spent decades – some centuries – building public trust. There is a deep unease among humans when it comes to the rise of the robot, and I, for one, think brand cachet and trust will erode if we leave reporting to AI-based language bots.
But rather than feeling full of angst and unease, let’s try to use the technology as a tool – a useful adjunct to ease the burden of our busy working lives – rather than fear it and halt its inevitable growth before we’ve allowed it to bloom.
Journalism might be a relatively late adopter of AI, but tech moves fast, and education and training in the newsroom – and across the wider media landscape – will be vital for those of us working at the coalface to keep up with the rapid pace of change.
In my own experience, after I left journalism and headed up English content for a telecoms firm, I rolled out text for chatbots without thinking deeply about the consequences for jobs that might be lost. Chatbots just seemed like a natural progression.
I learnt to type on a typewriter and moved on to desktop computers; these are all tools to help us work better. Perhaps AI should be seen the same way – a great help to busy reporters in wrangling data-based information at speeds that could barely be imagined just a few years ago.
The best way to avoid ethical trip-ups is for us all to educate ourselves about the rise of AI, to think about how we can best use it to enhance our working lives rather than let it take our jobs and, most of all, to engage actively in the debate surrounding AI at events such as the recent Artificial Intelligence Journalism World Forum.