Friday / December 6

Credibility in the Age of Artificial Intelligence

Over the past year, the rise in use of generative artificial intelligence platforms like ChatGPT has presented many opportunities. Educators, for instance, have discovered how this technology can save them time by doing things like creating additional practice problems in math, writing exemplar texts for classroom modeling, adapting the level of texts or assignments for the varying needs of students, and helping with administrative tasks like sorting data or writing weekly newsletters.

Despite seeing all the benefits, and even learning about the many sites designed specifically for teaching and learning, educators still have significant concerns and plenty of reluctance. One of the biggest areas of concern with the use of AI in education is credibility. Since these platforms can quickly generate content and respond to requests with seeming completeness and confidence, they can feel almost too good to be true.

In a world already saturated with content, from streaming services to smartphone apps and social media feeds, generative artificial intelligence has added another layer of complexity, making it even more important to address the credibility issue and learn the skills of being critical consumers. As content consumption continues to rise, we must learn to:

  • identify sources
  • verify accuracy
  • be on the lookout for bias and imbalance
  • stay in control of the content we encounter—and even more in control of the content we decide to use

This is why many are asking the following questions:

  • Where is the content coming from?
  • Can these sites be trusted?
  • Is there anything specific to look out for?

Let’s uncover the important things to keep in mind with each.

Where is the content coming from?

Generative AI creates content by scanning a wide range of public sources, seeking key information, noticing patterns in that information related to the request, and curating it into a human-like response (US Department of Education, 2023). Though it might seem like there is a single “Wizard of Oz” working behind the scenes, the content actually comes from hundreds of thousands of sources. Unlike a search engine, which provides a list of sources for users to explore, AI scans sources and synthesizes them to create a seemingly coherent and complete answer for the user.

Can these sites be trusted?

It is important to know that artificial intelligence cannot form its own original thoughts and does not (yet?) have the ability to think the way humans do. It relies on existing information related to a user’s request to generate content that feels human-like and almost always appears accurate. However, it is crucial to proceed with caution and not fully “trust” any one AI response. Carefully analyzing output for accuracy, cross-checking questionable information, and being aware of one’s own level of expertise on a topic are all important steps for AI users. A 2023 UNESCO publication exploring the integration of AI in higher education provided a model highlighting the importance of having a certain level of expertise when interacting with and trusting these tools (figure 1). Without some personal expertise on a topic, it is difficult to know what to trust when looking at a reply. So when it comes to this question, the answer might be, “yes, but …”

Figure 1

ChatGPT and Artificial Intelligence in Higher Education Quick Start Guide (2023, UNESCO)

Is there anything specific to look out for?

The more people interact with generative AI and use its output in daily functions, the more important it becomes to be aware of certain issues in responses, including hallucinations and bias. Hallucinations, fabricated or inaccurate information that appears to be true, result from gaps in the data an AI platform pulls from, incorrectly learned information, or a misunderstanding of a user’s request. Additionally, bias can be, and has been, present in AI responses, producing replies that contain skewed or prejudiced information. Since AI pulls from existing information, and we know information in the world can carry a certain level of bias, it is important to understand that this bias can work its way into the reply AI gives to a request or prompt (Mollick, 2024). When analyzing output, it is important for users to know when to adjust or discard a response.

As educators, it has been and will continue to be important to stay in control of the information delivered to students. Being aware of the limitations of AI and knowing if and when to trust its responses are crucial considerations as we integrate this technology into education. Figure 2 outlines these important considerations, questions to keep in mind, and prompts to use with chatbots for each.

Figure 2

Analyzing Credibility of AI Output
Things to Consider: Accuracy of Information

Questions to Ask:
  • Is the information true?
  • Do you have the expertise to know if this information is accurate?
  • Do you have any questions about the output?
  • Do you need to cross-check with another source?
  • Is the information supported with evidence?

Prompts to Use:
  • Where did you get this information?
  • What are your sources?
  • Give a list of citations.
  • Cross-reference sources for _______.

Things to Consider: Presence of Bias

Questions to Ask:
  • Are there signs of favoritism or prejudice?
  • Does this information feel one-sided?
  • Are any assumptions made?
  • Does the output feel judgmental?
  • What position does it feel this response is taking?

Prompts to Use:
  • What about _____?
  • What is your perspective on _______?
  • Give another perspective.
  • I am questioning the language used about ____; why did you use _____?
  • Revise ____ to include _____.

Things to Consider: Missing Information

Questions to Ask:
  • Does it feel like there are gaps in the information?
  • Do you feel there are any missing details?
  • Does the information feel incomplete?
  • Do you have any big questions about the content provided?

Prompts to Use:
  • Would you include anything else?
  • Is there anything else to know about _____?
  • What about ______?
  • How complete is this information?
  • What else should we know?
  • Share any additional information that should be considered.

It is important to remember that AI platforms did not go to school to become teachers; they do not hold credentials and have not engaged in developing learning units. They therefore require human users to contribute their own expertise.

Though AI can certainly speed things up, surface ideas you might not have thought of on your own, and show great possibility, it is important to acknowledge that it is far from perfect.

The beauty of AI is that you don’t have to simply consume content; you can also respond, question, and try again. Users have the chance to guide these tools to work for them rather than against them, and of course, in the end, we do not have to use AI at all. Though it can be a helpful, time-saving technology that offers real possibility, if you question the credibility of the content produced, you can always pivot and create that content without the use of AI. Remaining in control of how and when you use AI, and what you choose to do with the content it creates, is essential to making this a technology that can help education in many positive ways.

References:

Mollick, E. (2024). Co-Intelligence: Living and Working with AI. Penguin Random House.

UNESCO. (2023). ChatGPT and artificial intelligence in higher education: Quick start guide. UNESCO Publishing.

U.S. Department of Education, Office of Educational Technology. (2023). Artificial intelligence and the future of teaching and learning: Insights and recommendations. https://tech.ed.gov

Written by

Meghan Hargrave is an experienced educator, with over 15 years in the field. After being a teacher leader in the classroom, she moved into education coaching and consulting where she supports hundreds of K-12 schools and districts worldwide. Her work has always focused on important instructional shifts in education and practical ways the educators she supports can embrace these shifts effectively, which has included the integration of Artificial Intelligence tools in the classroom. She is ChatGPT, GoogleAI, and AI for Education certified in addition to working closely with thousands of educators on how to implement these tools in the classroom. She is an international presenter, has taught preservice teachers at Columbia University’s Teachers College, regularly contributes to popular educational publications, and is known for sharing innovative and effective classroom strategies via social media @letmeknowhowitgoes.

Nancy Frey is professor of educational leadership at San Diego State University and a leader at Health Sciences High and Middle College. Previously, Nancy was a teacher, academic coach, and central office resource coordinator in Florida. She is a credentialed special educator, reading specialist, and administrator in California. She is a member of the International Literacy Association’s Literacy Research Panel. She has published widely on literacy, quality instruction, and assessment, as well as books such as The Artificial Intelligence Playbook, How Scaffolding Works, How Teams Work, and The Vocabulary Playbook.

Douglas Fisher is professor and chair of educational leadership at San Diego State University and a leader at Health Sciences High and Middle College. Previously, Doug was an early intervention teacher and elementary school educator. He is a credentialed teacher and leader in California. In 2022, he was inducted into the Reading Hall of Fame by the Literacy Research Association. He has published widely on literacy, quality instruction, and assessment, as well as books such as Welcome to Teaching, PLC+, Teaching Students to Drive their Learning, and Student Assessment: Better Evidence, Better Decisions, Better Learning.
