Thursday / February 22

Six Questions to Ask Before You Use AI

[Image: a smartphone showing ChatGPT next to a pair of glasses]

Nearly every educator I’ve spoken with this spring, in any context, has mentioned their angst about artificial intelligence. I’ll be honest: I’m more than a little uncomfortable, too. We’re all awestruck, for certain, but many of us are treading more lightly than we ever have as we inch our way along this vast new shore. We’re wondering why we didn’t notice it before, or at the very least, why we barely anticipated its formation. We enter the water, and perhaps because we can’t sense its depth, many of us wade in only to our knees. Some of us swim a bit further. A few begin to survival-float in the abyss, turning wary eyes upon those colleagues who are already hanging ten and daring the ocean to rip-current them toward a future that must be better than everything we’ve dealt with in recent years.

Some people, maybe me, don’t want to move too fast, because it kinda feels like there are no take-backsies here. I’ll admit, I’m disappointed in myself. I’ve always been a bit of an early adopter, and while I’ll never raise my hand when asked who the tech evangelists are in any room, those who know me well will tell you that I’m more than comfortable at my keyboard. I’m a well-connected educator, and I know that if I can’t find my way with AI, so much will soon be lost.

Perhaps you feel the same way, too.

It’s not about navigating the tools, though, is it? It’s about the deeper implications of AI. It’s about the unintended consequences of swimming too far from shore. We’re unaware of what’s beneath the surface, how hungry it might be, and how powerless this enigmatic force might leave us. So we splash around, refuse to commit, and try to bide our time a bit. If we move too fast, we think, something–or someone–is going to get hurt.

The notion that we’re safe in the shallows is illusory, though.

The world is turning fast, the tides continue to shift, and we can only linger for so long.

This is what I told myself a few months back, as I tried to understand my hesitation. Reflecting left me with more questions than answers.

Maybe you’ll appreciate them, both the questions and the answers… and even the questions the answers generate.

These are the kinds of questions that help me make better choices. The answers shift, depending on my context, and this makes my use of AI far more intentional than it might have been otherwise. I’m realizing that intentionality matters more to me than dropping in on every dazzling wave these waters churn up. I know they can crush me and everyone else in my crew, too.

So here’s what I’m asking myself each time I choose to use AI:

  • Will this automate the sort of doing that actually improves my thinking?
    Recently, I taught the students in my Assessment Methods course how to use ChatGPT to create profiles for learners with complex disabilities. This pushed them to think far more critically about how they might design truly accessible lessons and units. The characters they created were richly diverse, and their instructional design approaches were far more nuanced than they might have been otherwise as well. This was quite wonderful.
  • Will this help me be of better service to someone?
    The experience I describe above taught me something important: My students will soon be first-year teachers. It will take years—or perhaps even decades—for them to become masterful in the classroom. AI can empower them to teach vulnerable learners better and faster. This can be a good thing.
  • Will this express or repress my authentic self?
    As a creative person, I’ve prompted ChatGPT to generate everything from stories to illustrations. I’ve asked it to propose solutions to professional and even a few personal dilemmas. It balanced my budget, and it also rapidly composed whole lesson plans and units of study for me. Sometimes, I’m satisfied by the results. They’re acceptable. They offer a good enough beginning. However, sometimes the results aren’t merely inaccurate; they’re not representative of who I am, how I speak, or what I value. For instance, if I asked ChatGPT to write a blog post about the complexities of embracing AI, it wouldn’t sound like this. It wouldn’t address the issues I find particularly compelling, either.
  • Will this elevate or diminish my learning?
    When my students used ChatGPT to create those learner profiles I mentioned earlier, it leveled up their thinking. They found themselves chasing better questions, refining their ideas, and noticing subtleties in the theories they explored. Prior to relying on AI in this way, these pre-service teachers relied on their limited background knowledge and experiences in the field to construct simple stock characters. ChatGPT helped them uncover their biases, revealed problematic stereotypes, and offered them portraits of learners who were far more human and whose needs and interests were distinctive and intricate.
  • Will this democratize an experience or deepen inequities?
    This is the question that gives me the greatest pause, as I’m aware of what AI is built upon: our socio-political and racial history. AI can deepen racial and economic inequities. AI also changes what it means to be creative, skilled, and responsive. Those who have access to these tools—and use them well—may quickly outperform those who don’t. This has the potential to tip, level, and even destroy many playing fields. Questions like this one remind me to consider the potential impact of the choices I make.
  • Will this bring me closer to others, or will it create distance?
    Several weeks ago, I spent a few hours on a Saturday morning writing thoughtful letters of recommendation for former students who are chasing big dreams. This is always challenging work, as you may well know. Just-right detailing requires a bit of research. I spend a lot of time combing through portfolios, email exchanges, and our university website ensuring that I have my facts straight and my stories in order. By the time I’ve finished drafting, I’m usually missing those students. Writing recommendations rakes up great memories that leave me feeling closer to the students I care about. Yesterday, I dropped a former student’s bio, grades, and feedback left on several projects into ChatGPT. I asked it to compose a letter of recommendation. It was good. I was impressed. And I was also left cold by the process. When my students used AI to generate learner profiles, the characters they created were far more human, and this drew them into their world. It brought them closer to me and to one another as well. It made for a warmer learning experience. This matters.

So do you.

The more I grapple with the tensions surrounding AI, the more I realize how personal each experience with it really is.

Chatting with ChatGPT forces me to understand myself, my purposes, and how I tend to show up inside of the decisions I make and the things that I create.

I have to wonder, in the end, if it isn’t simply its potential to think for me, but also what it illuminates about my relationship with myself, my work, and others, that leaves me a bit unsettled. Maybe there aren’t sharks in the water, but only ourselves. And maybe the questions we ask are more important than finding quick and certain answers. I’d love to explore some of them with you. Come find me on Twitter if you’d like to talk more. I’m @AngelaStockman there.

Written by

Angela Stockman is an instructional designer for Daemen University where she also teaches in the departments of education and the BA liberal studies hybrid program. A former middle school English teacher who spent over twelve years in the classroom, she also founded a sustained learning community for K-12 writers and teachers, and she has authored several books relevant to multiliteracy instruction and assessment. Angela facilitates professional learning experiences for literacy-minded educators globally. You may find her online at www.angelastockman.com.
