3 Questions To: A Media Perspective

  • Christian Simon
    31 July 2018

Dear All,

Mariana first asked me to participate in the "official" 3-Questions-To-Thread as a representative of the media sector. But since I work for Süddeutsche Zeitung and Media Lab Bayern (a startup incubator for media and journalism startups) and did not enter this forum in any official capacity or with a mandate from either of my employers, we thought it best to express my personal view in a standalone post to avoid any misunderstandings.

Take this as the viewpoint of someone who is not in a media management position, but rather works "on the ground" in journalism. I am in the unique position of both being affected by AI in journalism and working with the startups and entrepreneurs who are trying to create the tools journalists might use in the future.



So here are Mariana's questions:



1. AI and Journalism - More Opportunities or More Challenges?

  • The least satisfactory of all answers: both. The biggest challenge I see is: How can journalists report about "AI"? Not only is the term "AI" ill-defined while more precise alternatives (like "machine learning" or "neural networks") are complicated and difficult to grasp for the average reader - the phrase "AI" also creates mental images that are closer to HAL 9000 or Iron Man's Jarvis than to the "weak AIs" we have seen so far. Most readers (and, in some cases, most journalists) do not differentiate here, which often makes reporting on "AI" sensationalistic. As "AI" gains importance in many areas of life, the reporting on it has to become better, and I see a huge challenge in educating both readers and journalists here.
    For journalism as a profession, on the other hand, I do see a lot of opportunities. A wise person said: "AI will not replace humans in the workplace, but people with AI will replace people without AI." Already we see a lot of emerging tools for journalists that use "AI" - and I am not just talking about "Robo-Journalists" that use Natural Language Generation to write articles and ads. From "AI"-powered Content Management Systems like Forbes' Bertie to tools that use "AI" to create automated transcripts (e.g. Trint), I see a lot of opportunities that could make journalists' work easier, faster and more efficient.

2. Where Will AI Not Replace Humans in the Journalism Domain?

  • See above - I don't think many jobs in journalism will be replaced by "AI" in the near future. "AI" can be helpful in a lot of places and can make the work of many journalists easier, especially those working with large amounts of data. But it is still missing vital components to actually replace them, and will be for a long while. "AI" cannot handle qualitative data very well. "AI" cannot yet work with unstructured data. "AI" cannot find a story in a heap of quantitative data; it can only arrange the data in a way that makes it easier for a human to see it. "AI" cannot make a phone call that goes much beyond a restaurant reservation. And while the joke "There are only two kinds of people: 1. Those who can extrapolate from incomplete data." is easy for a human to understand, it is hard to explain to a computer.
    In the broader media scene there might be some jobs on the line in copywriting and translating, but all in all, and especially in journalism, I believe the claim "humans will be replaced by AI" to be mostly fearmongering - see the challenge for journalism in answer #1.

3. What are the Most Pressing Ethical Questions of the Media Use of AI?

  • As we have read a few times in this forum already, I think bias and accountability are the biggest ethical challenges for "AI", and that is especially true in journalism. Journalism has already lost enough trust with the general population, and the answer to that can only be more transparency. And if we expect that transparency of journalists, we should also expect transparency of the algorithms they use. If we ever get to the point where "AI" influences editorial decisions like agenda setting or presentation (headlines, teasing, etc.), we have to be able to explain those decisions to the readers - which is why algorithmic accountability is a huge topic. I strongly believe that no journalist with a good conscience should ever use an "AI" tool whose creators cannot tell exactly how it arrived at its results. So if "AI" is to be widely used, that question has to be addressed.
    Bias fits right into that line of discussion: It took a long while for human journalists to admit their own biases. Only slowly is journalism addressing that and starting to be concerned with diversity and with representing different perspectives. We cannot take a step back on that by using "AIs" that were trained on biased data - the least we can do is to be aware of that bias, for which we need to know the data that went into the creation of our tools.

I am looking forward to the "official" entry in the "3 Questions To"-Thread and to the discussion with you all.