AI-generated misinformation: 3 teachable skills to help address it

Published: October 3, 2023, 3:37pm BST
Author: Jaigris Hodson, Royal Roads University

Translation republished with the permission of the author, Professor Jaigris Hodson.


In my digital studies class, I asked students to pose a query to ChatGPT and discuss the results. To my surprise, some asked ChatGPT about my biography.

ChatGPT said I received my PhD from two different universities, and in two different subject areas, only one of which represented the focus of my doctoral work.

This made for an entertaining class, but also helped illuminate a major risk of generative AI tools — making us more likely to fall victim to persuasive misinformation.

To overcome this threat, educators need to teach the skills students will need to function in a world full of AI-generated misinformation.

Worsening the misinformation problem

[Image: Two youths looking at a smartphone. We should expect to see more attempts by conspiracy theorists and misinformation opportunists to employ AI to fool others for their own gain. (Shutterstock)]

Generative AI stands to make our existing problems separating evidence-based information from misinformation and disinformation even more difficult than they already are.

Text-based tools like ChatGPT can create convincing-sounding academic articles on a subject, complete with citations that can fool people without a background in the topic of the article. Video-, audio- and image-based AI can successfully spoof people’s faces, voices and even mannerisms, to create apparent evidence of behaviour or conversations that never took place at all.

As AI-created text and images or videos are combined to create bogus news stories, we should expect to see more attempts by conspiracy theorists and misinformation opportunists to employ these to fool others for their own gain.

While it was possible before generative AI was widely accessible for people to create fake videos, news stories or academic articles, the process took time and resources. Now, convincing disinformation can be created much more quickly, opening new opportunities to destabilize democracies around the world.

New critical thinking applications needed

To date, a focus of teaching critical media literacy both at the public school and post-secondary levels has been asking students to engage deeply with a text and get to know it well so they can summarize it, ask questions about it and critique it.

This approach will likely serve less well in an age where AI can so easily spoof the very cues we look to in order to assess quality.

While there are no easy answers to the problem of misinformation, I suggest that teaching these three key skills will better equip all of us to be more resilient in the face of these threats:

1. Lateral reading of texts

Rather than reading a single article, blog or website deeply upon first glance, we need to prepare students with a new set of filtering skills often called lateral reading.

In lateral reading, we ask students to search for cues before reading deeply. Questions to pose include: Who authored the article? How do you know? What are their credentials and are those credentials related to the topic being discussed? What claims are they making and are those claims well supported in academic literature?

Doing this task well implies the need to prepare students to consider different types of research.

[Image: A youth holding a smartphone. Lateral reading means searching for clues before reading deeply. (Shutterstock)]

2. Research literacy

In much popular imagination and everyday practice, the concept of research has shifted to refer to an internet search. However, this represents a misunderstanding of what the process of gathering evidence actually involves.

We need to teach students how to distinguish well-founded evidence-based claims from conspiracy theories and misinformation.

Students at all levels need to learn how to evaluate the quality of academic and non-academic sources. This means teaching students about research quality, journal quality and different kinds of expertise. For example, a doctor could speak on a popular podcast about vaccines, but if that doctor is not a vaccine specialist, or if the total body of evidence doesn’t support their claims, it doesn’t matter how convincing those claims are.

Thinking about research quality also means becoming familiar with things like sample sizes, methods and the scientific process of peer review and falsifiability.
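
To make the sample-size point concrete, here is a minimal illustrative sketch (the "60 per cent agree" survey figure and the simple-random-sample assumption are hypothetical, not from the article) of how a poll's margin of error shrinks as more people are asked:

    import math

    def margin_of_error(p, n, z=1.96):
        """Approximate 95% margin of error for a proportion p
        observed in a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    # The same "60% of respondents agree" headline means very
    # different things depending on how many people were asked.
    for n in (10, 100, 1000, 10000):
        print(f"n = {n:>5}: 60% +/- {100 * margin_of_error(0.6, n):.1f} points")

A "60 per cent agree" result from ten respondents is compatible with almost any underlying reality; the same figure from ten thousand respondents is not.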

3. Technological literacy

Many people don’t know that AI isn’t actually intelligent, but instead is made up of language- and image-processing algorithms that recognize patterns and then parrot them back to us in a random but statistically significant way.

Similarly, many people don’t realize that the content we see on social media is dictated by algorithms that prioritize engagement, in order to make money for advertisers.
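
A minimal sketch of that engagement-first logic (the posts and scoring weights below are hypothetical; real platform rankers are proprietary and far more complex):

    # Rank a feed purely by predicted engagement; accuracy plays no part.
    posts = [
        {"title": "Peer-reviewed vaccine study, summarized",
         "likes": 120, "shares": 15, "comments": 30},
        {"title": "Shocking cure THEY don't want you to see",
         "likes": 900, "shares": 400, "comments": 700},
    ]

    def engagement_score(post):
        # Hypothetical weights: shares and comments keep users on the
        # platform longer than likes do, earning more ad impressions.
        return post["likes"] + 3 * post["shares"] + 2 * post["comments"]

    for post in sorted(posts, key=engagement_score, reverse=True):
        print(engagement_score(post), post["title"])

Nothing in such a score rewards accuracy, which is why an engaging falsehood can outrank careful reporting.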

We rarely stop to think why we see the content we’re being shown through these technologies. We don’t think about who creates the technology and how biases of programmers play a role in what we see.

If we can all develop a stronger critical orientation to these technologies, following the money and asking who benefits when we’re served specific content, then we will become more resistant to the misinformation that is spread using these tools.

Through these three skills (lateral reading, research literacy and technological literacy), we will be more resistant to misinformation of all kinds, and less susceptible to the new threat of AI-based misinformation.

Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads University

This article is republished from The Conversation under a Creative Commons license. Read the original article.