The news that Die Aktuelle magazine had published an AI-generated ‘interview’ with Michael Schumacher, the former Formula 1 driver, shocked the world. Schumacher, a seven-time F1 champion, suffered severe head injuries in a skiing accident in 2013 and has not been seen in public since. The magazine’s cover featured a smiling Schumacher with the headline “Michael Schumacher, the first interview” and a strapline that read “It sounded deceptively real.” However, it was later revealed that the supposed quotes from Schumacher had been produced by an AI programme called character.ai.
The article included statements attributed to Schumacher about his health and family. These included, “I can, with the help of my team, actually, stand by myself and even slowly walk a few steps,” and “My wife and my children were a blessing to me, and without them, I would not have managed it. Naturally, they are also unfortunate about how it has all happened. They support me and are standing firmly at my side.” The Schumacher family confirmed they planned to pursue legal action against the magazine.
The article’s publication has raised questions about the use of AI-generated content in journalism. While AI can automate certain aspects of the writing process, such as generating headlines or summarising data, using it to create entire articles raises concerns about accuracy and authenticity. In the case of the Schumacher article, the quotes attributed to the former driver were not his own words, and the article misled readers who believed they were reading an exclusive interview with him.
The legal action being taken by the Schumacher family is a significant development in the debate over AI-generated content in journalism. It shows that media outlets can face serious consequences for publishing misleading or false information, even when that information was generated by an AI programme. The family’s decision to take legal action sends a clear message that they will not tolerate the exploitation of Schumacher’s image or reputation.
The case also raises ethical questions. While AI has the potential to make the writing process faster and more efficient, the technology offers no guarantee that what it produces is accurate or authentic. As AI-generated content becomes more widespread, media outlets must be transparent about how their content was produced and label AI-generated material clearly as such. This will be essential to maintaining readers’ trust and avoiding the risk of legal action.
The Schumacher case also highlights the importance of respecting the privacy of public figures. Schumacher’s medical condition has been kept private by his family since he was brought home in 2014 after an induced coma. In a 2021 Netflix documentary, Schumacher’s wife Corinna spoke about their family life: “We’re trying to carry on as a family, the way Michael liked it and still does. And we are getting on with our lives. ‘Private is private’, as he always said. It’s very important to me that he can continue to enjoy his private life as much as possible. Michael always protected us, and now we are protecting Michael.”
Using AI-generated content in journalism also forces a reckoning with the balance between the public’s right to know and an individual’s right to privacy. While the public may be eager to hear from Schumacher, his family has made it clear that they do not want his private life exploited for the sake of a story. The case serves as a reminder that journalists and media outlets must weigh the public interest against the need to respect the privacy of individuals, particularly those who are vulnerable or have suffered trauma.