Episode 2: AI Bias

Cover art for the AI Bias episode from The Immersive Lens podcast. A hexagonal background, an illustration of a robot next to an ON AIR sign, a photo of Dave Ghidiu and Paul Engin, and text that says WELCOME TO THE IMMERSIVE LENS AI BIAS.

In this episode of The Immersive Lens, Paul and Dave unpack the many ways bias shows up in AI, moving beyond the idea that it’s just a “bad data in, bad data out” problem. They explore how bias enters systems through training data, manifests in results, can be introduced intentionally (such as through data poisoning tools artists use to protect their work), and is shaped by human perception. Examples range from hiring algorithms that replicate historical inequities, to medical AI trained on data from elite hospitals that fails in rural contexts, to image generators that default to white teachers when given neutral prompts. The conversation makes clear that AI often mirrors and amplifies the structures, assumptions, and blind spots already embedded in society.

The episode also dives into subtler, cultural forms of bias, including how people respond differently to human versus AI-created work and how “AI slop” can damage trust and reputation. Most striking is the idea of the “algorithmic bias treadmill,” where social platforms and AI accelerate the rise, dilution, and erasure of language - particularly words drawn from marginalized communities - by rewarding whatever the algorithm surfaces. As educators, Paul and Dave argue, this creates an opportunity and a responsibility: to teach learners that AI is not neutral, to model critical questioning, and to prompt with intention so systems are pushed toward diversity, context, and care. AI literacy, they suggest, is inseparable from digital citizenship.


Key Topics

AI bias isn’t a single problem: It shows up in multiple layers. It lives in the data models are trained on, in the results they produce, in intentional manipulation (like data poisoning, sketched in the example after this list), in how humans perceive AI outputs, and in feedback loops like the “algorithmic bias treadmill.”

Real-world examples make bias tangible: Image models defaulting to white teachers, hiring systems that replicate past inequities, and medical AI trained on data from elite hospitals that fails in rural contexts all show how “neutral” systems quietly reinforce existing power structures.

Human bias cuts both ways: People often rate AI-generated work as more empathetic or persuasive than human work - until they learn it’s AI, at which point trust collapses. Meanwhile, being caught using “AI slop” can damage reputation, making transparency and intentional use essential. 

The “algorithmic bias treadmill” reveals how platforms reshape culture itself: Words from marginalized communities are amplified, diluted, misused, and eventually erased as creators chase algorithmic visibility, showing how AI and social platforms can quietly rewrite language and identity.
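The data-poisoning idea mentioned above rests on a simple fact: an image’s pixel values can be nudged by amounts the human eye cannot detect. Real protection tools such as Nightshade compute carefully targeted adversarial perturbations; the toy Python sketch below (with hypothetical file names) only illustrates that underlying point by flipping the least-significant bit of every pixel, so the saved file changes while the picture looks identical.

```python
# Toy illustration of imperceptible pixel changes. This is NOT how Nightshade
# or similar tools work internally; they optimize targeted adversarial noise.
import numpy as np
from PIL import Image

def flip_lowest_bits(path_in: str, path_out: str) -> int:
    """Flip the least-significant bit of every RGB value and save losslessly."""
    pixels = np.array(Image.open(path_in).convert("RGB"), dtype=np.uint8)
    perturbed = pixels ^ 1  # each channel value shifts by exactly 1 out of 255
    Image.fromarray(perturbed).save(path_out, format="PNG")  # PNG preserves the change
    # Largest per-channel difference between the two images (always 1 here).
    return int(np.abs(pixels.astype(int) - perturbed.astype(int)).max())

# Hypothetical usage: print(flip_lowest_bits("cat.jpg", "cat_protected.png"))
```

A shift of 1/255 per channel is invisible to a person, yet the file is no longer identical to the original; actual poisoning tools push the same idea much further by shaping the noise so that models mislabel what they scrape.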


Transcript

Click here to view transcript

0:00 [Music] 0:06 And if you're watching the YouTube, you get a close-up view. Don't say that. Although maybe that'll be a teaser for 0:12 people to go on YouTube. You know what you're doing. 0:18 I know. I It's blinking. It took a really long time for that one blink and I was like, "Oh no, 0:23 I'm rolling here." [Music] Do it the other side. So you see that? 0:30 Yes, that's a wrap. 0:38 [Music] 0:49 Welcome to the Immersive Lens, the podcast exploring the technologies reshaping how we live, work, and learn. 0:55 From AI and virtual reality to creative media and design, we're diving into the tools and ideas shaping our connected 1:02 world. Join us as we uncover the people and ideas driving the next wave of interactive experiences. 1:08 This is the immersive lens. My name is Paul Engin, professor of new media at Finger Lakes Community College. 1:14 And I'm Dave Ghidiu, professor of computing sciences here at FLCC. So today's episode, we're going to talk 1:19 about something called AI bias. Have you ever heard of that, Dave? I have heard of AI bias. I'm eager to 1:25 dive in. Oh, great. I'm actually eager to learn more about it. So, sweet. Uh, have you been doing anything 1:31 interesting or working on anything? Yeah, the FLX AI hub, we went out to Chicago for a national conference. It 1:37 was the Gardner symposium on transforming the postsecondary experience and they had an AI track there. So, uh, 1:43 we we presented, but we also got to meet a lot of cool people across institutions, across the country doing 1:48 some really cool stuff. Oh, very nice. uh anything that we're going to touch on in the future? 1:56 Yeah, there there there was a really cool talk, really cool product about a uh for learners and it's how they can 2:02 really use AI to do research and it it follows them from when they're first year all the way through. Uh it's really 2:08 cool. We'll talk about it in in another episode, but I'm glad to see that AI is out there and they're thinking about the 2:14 whole journey of the learner as opposed to like one assignment. Gotcha. So, it's it's how to better 2:19 learn as a learner. Yeah. And it learns how the learner learns. Okay. As you go through and it it's great for 2:25 research. It does all the citations. It's it's really really impactful. I'm I'm pretty stoked to dig in a little bit deeper. What about you? What did you do 2:31 this weekend? Uh nothing much, but I I played with ChatGPT's new web browser. It's called 2:38 uh Atlas. Atlas. That's right. Yeah. Yeah. And it's really interesting. I like the way that it does a lot of 2:45 things for you in the sense that um it's it can take some automated tasks like I just asked it to find the mess mo the 2:53 best mobile plan and it's literally going to like t-mobile.com's 2:58 guide and a bunch of other websites and it's trying to do all a summary and give 3:04 me an analysis on what the best plans might be based on my criteria but it's actively going out and doing it for me 3:10 so I don't have to. Yeah. So, you're unleashing like an army of little AI bots to crawl around and 3:16 report back. Exactly. And it's somewhat like there is an AI agent um available on here. So, 3:23 you can actually set it to do something on a regular basis as well. The agents are more like you you set up 3:29 some type of process. So, you'd be like every Monday go out and get the weather for the week and then tell me what days 3:34 I need an umbrella. Something like that. Exactly. And then it's kind of automated with that and you can do it.
So, uh, it 3:41 can actually log into some of your accounts. So, let's say I wanted to do get a daily 3:50 analytics for YouTube. I can do an AI agent, go through the process of of uh, 3:57 getting my analytics report and then it'll have it'll now be an agent. So, now it'll just schedule to do 4:03 it whenever. Oh, that's nice because YouTube doesn't have that baked in. you have to kind of do it manually, but now 4:09 you can have your agent do that, right? Right. So, I mean, I could do it. I could go in manually every day and do 4:15 it, but why would you when you could have a bot do it? And the the cyber security part of me just wants to point out that the the 4:21 Verge, which is a trusted news source for cyber nerds, uh they ran a piece in October, I think October 30th, so about 4:27 a week ago, and it was titled all AI browsers are a cyber security time bomb about privacy and and security and like 4:34 logging into things on your behalf. And that's why I haven't done it yet. Okay. Yeah. Yeah. You're you're very 4:40 smart. So, it looks like you're just kind of testing out kind of like low stakes, low potato type things. Yes. Because I have concerns about that 4:47 as well and then I I'm unsure about what happens. Like, what if it has a 4:53 hallucination and it has my login information and it's going into it and it's like I know it sounds weird, but I 4:59 don't know what it's going to do yet. That that's a that's a legitimate concern. And one of the the many 5:04 security failures of these AI browsers is something called prompt injection. So I don't know if you ever did this in 5:09 college, but when I was in college back in the 1900s, I I was some of my friends would put a whole bunch of text in white 5:16 type face on the paper. So if you did a word count, it would kind of inflate it, right? And so now what people are 5:22 finding is they can put white text over a white background that would say log into the cryptocurrency wallet and then 5:29 send all the codes to this email address. So that's one of the security failures of these AI browsers. 5:36 Oh, that's really interesting. So guard your AI or guard your Bitcoin wallets, man. Don't let AI manage that. 5:43 Uh so um let's why don't we jump into the topic? 5:48 Uh sure. So can you first like kick this off with what is AI bias? AI bias is 5:58 bias and AI and there there's like five different kind of dimensions that I think about when I think about bias and 6:04 the the one that most people are aware of or maybe not acutely aware of but have heard on the radio or the who 6:10 listens to the radio or like podcast or the news uh that if your training set so if what you're training is biased then 6:17 the output's going to be biased. And so in this very first paradigm, we'll call 6:23 it like bias and training. Agreed. So what are what are the paradigms first? Can you go through them all and 6:29 then can we we'll kind of come back and touch on them. So if people don't want to listen to me yammer around about this, they can jump ahead to the So there's bias and 6:36 training, bias in the results. So the training is like what's everything that the AI has read and seen and is it biased? And then 6:43 the bias and results is like, well, what's going to happen now if you've only read um the Hardy Boys and that's 6:49 the only training. Well, then you're probably gonna get Hardy Boys type output. So that's the bison results. There's the 6:55 intentional bias. There's bias from humans and that's actually really interesting. 
So if you do skip ahead, make sure you check into that system. 7:01 And then the uh algorithmic bias treadmill, which is kind of a cool term that has emerged. So we'll get to all 7:06 those. Okay, great. So why don't we touch on the bias training? I know you started it, but uh talk to us a little bit about 7:13 bias training and then we'll move on to the other um the the other um 7:18 Sure. Well, okay. So, bias training happens often unintentionally, but that's 7:24 because like our content is biased. And so, and the classic example is when when 7:30 we when sets and this actually happens so much that we have a name for it. It's called the pink and blue gender bias. 7:36 And so when we were when not we but when AI was training on images of babies 7:42 basically what would happen is and this is going to be a glorified example we'd feed a picture of a baby and it it was 7:49 and we say that's a girl and then we'd feed another picture of a baby and be like that's a boy and we do that billions of times and sooner or later AI 7:56 starts to see the connections and theoretically should be able to say when it sees a new picture it hasn't seen that's a girl or that's a boy. The 8:02 problem with that training set was the vast majority of the images of girls had 8:08 like either ribbons in their hair or pink clothes and the images of boy babies that were uploaded were babies in 8:16 blue. And that was the association that AI used to make the determinations. So if you ever had a picture of a boy 8:22 dressed in pink, AI was very likely to say that's a girl. So it didn't it didn't capture what we were hoping it 8:27 would capture. It was capturing the clothes they were wearing. And the another classic example uh and this is 8:33 called like training label confusion but it's the same kind of thing. Imagine that you're doing like X-rays 8:38 and uh or or some type of like PET scan or let's say let's say that's uh X-rays 8:45 and you you have a whole bunch of healthy lungs and then some damaged lungs. Unbeknownst to the people that 8:52 were creating this training set, most of the damaged lungs and this is like if you look through medical journals, this is typically what you see. You'd have 8:58 like a little ruler over let's say there's a tumor in the lung that gets caught. Yes. You'd have a ruler over that. So you could say, "Oh, that's 9:04 three centimeters or whatever." Oh, yeah. So you know the size of the tumor. Correct. Okay. Yeah. And so AI was overwhelmingly more likely 9:12 to to recognize that, hey, if there's a ruler on an X-ray, that's bad. There's probably a tumor. But if there's no 9:18 ruler, so it wasn't looking for tumors, it was looking for the rulers. Yeah. And that both of these have been 9:23 mitigated and and uh and fixed. So now AI is much smarter. But that's that's 9:29 two easy examples of like how how bias can get in. And that's not necessarily even a training bias. Like when you look 9:35 at the Mercator projection, which is like a biased map, and for all those West Wing fans out there, did you ever watch 9:40 the West Wing? I did not. Now, there was an episode about the Mercator projection and they 9:46 talked about how it was very like western centric and it it really diminishes the size of say Africa versus 9:53 like um America just because the way the map is created. Oh, so this is just you have one map 9:58 that exaggerates one continent over another continent. And so that's that's 10:04 what this is in reference to. Yeah.
And so like why are all like now we have the global south so these are 10:10 all the countries below the equator but this is just a construct that we made up. So there's all this bias that 10:15 happens unintentionally and sometimes intentionally in all sorts of things and they get amplified once they're in the 10:21 training right because technically the AI is just getting fed this stuff. So what 10:27 it's trying to do like with the baby example is it's just trying to analyze the 10:32 scene. So technically, if you had, you know, a bottle or uh something else 10:39 in the scene and it was in thousands of variations of that, that might be an indicator for something 10:47 because it's what it's looking for. Sure. Yeah. So, uh it's just a coincidence that 10:53 typically a lot of people do lighter or pink color versus a blue. Yep. And so 11:00 the computer just said, "Okay, I can make uh inference here that this is a 11:06 girl and this is a boy. That's how they're labeled." Exactly. Yeah. It was just looking for We were hoping it would do patterns of like 11:12 facial structure. I don't even know what what it looks like, but it was like, "Nope, pink clothes, blue clothes." Oh, that's really interesting. 11:17 Yeah. So, when you look at a picture and there's a bottle in it, you might not even pay attention to it, but the computer is. Yes. Exactly. Exactly. So can I ask you 11:25 does this then lead into the bias results that you spoke about? Yeah and this this is there's some 11:31 really interesting examples here of the results. So if we know we have a tainted 11:36 set data set uh and most most almost almost all of these frontier AI models 11:42 have like biased sets because like our newspapers are biased like our books are biased everything's biased. Um, so you 11:50 see this in results and I think a classic example happened at Amazon. So they were trying to they were like, "Oh, 11:56 what we're doing we're when we're screening our resumes, we're just getting a whole bunch of older dudes. 12:02 Like we we do not have a diverse we do not have a diverse workforce." So they they fed all their like all the resumes 12:09 of all the employees they had into an AI and they use that as the the filter. But 12:14 the problem was it was all biased because the people who worked at Amazon, 75% of them were male. So the AI looked 12:20 at the resumes of all the people who were working there and was like, "Okay, well if an AI comes across my eye or I'm sorry, if a resume comes across my eyes 12:26 and they're not a male, then I will take them out of the running for the job." So it completely backfired. Oh wow. Yeah. 12:32 And that's just because that was unintentionally what was it was trained on because it just fed all the existing 12:38 employees. Yep. And the I don't know if you remember the Tay bot which was Microsoft. It was on Twitter. 12:45 This is a few years ago and it was trained on conversations on Twitter and 12:51 within 24 hours it it it just they had to shut it down because it it was just getting really vile because you know 12:57 there's a lot of Yeah. There's a lot of vile conversations happening online. Um and it it was very 13:02 embarrassing. But I think maybe one of the the really classic examples is 13:08 Watson. Do you remember Watson? Yes. That's Is that's IBM? That's IBM. Okay. So, can you explain? So, I It's a 13:14 Wat. It's a computer, right? It's like a supercomputer. Yeah. Yeah. That played chess. Yeah, it played chess. Okay. Good for you. Um, nice.
And it also 13:21 played Jeopardy famously. It did. Yeah. Oh, wait. Did And it went It went against uh Ken Jennings. Yep. Yep. Yep. 13:28 So Watson, the promise of Watson was that it would be we would feed it all this medical information and it could be 13:34 the medical like guru and we can use it to for diagnosis and treatments. And 13:40 what unfortunately there were kind of two big issues and this this is actually a really interesting example of like how 13:46 maybe AI won't fix everything but the systems were kind of broken. So I don't know if you've ever had this uh this 13:51 experience where you need to get medical records from one medical professional to another. Yes. Yes. It's like peacewise, right? 13:57 Yes. And sometimes you actually have to fax it. Yes. So like Watson wasn't even able to talk 14:02 to all these different systems, but the bigger failure was, you know, for cancer, they were they were training on, 14:08 you know, the best cancer center like the the upper east side, the Sloan Kettering in New York City, and that treatment did 14:14 not translate well to say like rural China. So we had a biased training data 14:19 in Watson. And they they actually like dismantled Watson and kind of like um sold it for parts. Not really, but the 14:26 they've been on eBay now. They've been re-evaluating it and and uh kind of rethinking what they're doing 14:31 with it. Oh, that's interesting. And and then image generation. I think like you showed me something right before we started recording. I think you 14:37 should talk about it because this is this is like another classic example. Well, so um I looked at your one of your 14:44 presentations and I noticed a prompt and I was like, you know what? I'm going to try this prompt. And the prompt I used 14:50 was create an image of a teacher in the style of Pixar character teaching a class. That sounds fun. 14:55 Yeah. Well, I wanted to make it fun. I didn't want it to be super realistic. And so, um, the I put that in chat and 15:03 it gave me a picture of a Caucasian female and I thought, "This looks great. 15:09 It looks good." I put it in Gemini and I did the exact same prompt and I got another Caucasian female. And then I 15:18 used Claude and I did and I got this Caucasian blobby like thing, but I think 15:23 that is because it's uh not a photo. Yeah, it doesn't do great with images. No, I think it it opened up a coding 15:29 window and it just started programmed the image. Yeah, it was like the SVGs though. 15:34 Yes. Yeah. So you like a circle in a mobile. That's exactly what it all primitive shapes. But ironically, the circle for 15:41 the head was a Caucasian. It was Yep. Um, and then it was interesting. I did Meta and it gave me a 15:49 series of Caucasian males. And when we talked about it, our esteemed engineer 15:56 Jeff Kidd pointed out that he was thinking maybe it deviated because I had 16:03 to log into Meta and I didn't have to log into the other three necessarily. 16:08 Um, and so what it did is it maybe looked at my profile. Yeah. And it saw that I was maybe a teacher and maybe it 16:15 saw my demographic because the the even the images ages the the 16:21 characters ages looked older than the other ones that were generated. 16:26 So um, so at first I was like I noticed the big thing for me was okay well most 16:32 of them are producing female images as a default for sure. But then I realized something. 16:37 They were all Caucasian. Yeah.
So there was no diversity even in um the 16:44 the even if it was a female, it wasn't Asian female or African-American female. 16:50 It was just Caucasian. So I assume that's what you're talking about by biased results. 16:56 Yeah. And and that might be that was probably a direct repercussion of the training set and the training data. And 17:02 that actually happened with Twitter a few years ago, too. You know how like if you upload a photo to Twitter then and 17:07 it's rectangular but it needs a square will like focus on just one part of the photo. Yes. So it was favoring uh white people. So 17:14 if you had a picture of like two or three people, it would always find the white person and crop them in. Right. 17:20 And so they they did studies on that and they're like, "Oh yeah, well we do need to correct it." So there are ways to mitigate it. Like you could certainly 17:25 ask for more diversity in your photos, but like you shouldn't have to, right? And I think that was the test 17:31 that I was trying with is just I wanted them to give me the default of what they 17:37 thought a teacher was versus me giving a direct I because I know that if I prompt 17:42 a specific demographic in a specific sex that I would get that. 17:48 Yes. Um, so, uh, I think that it was really interesting that I did that and I 17:53 want to try it with some other, um, platforms, but I also did the meta and 17:59 I, you mentioned that you can like juice up the, uh, oh, the creativity, the creativity, the temperature. That's what we call in 18:05 the biz. Yes, the temperature. And, uh, I don't, it had the variety, weirdness, and 18:12 stylization. So, I just cranked them all up. And then I had like a Santa looking like 18:17 oh that's some wild stuff character and uh some it's weird cuz 18:22 this is where I'm thinking does it know that it's October because it's got bats on one of them it's got like 18:29 that's wild you know what I mean? I wonder if it correlates with like who you are the month and then it adds all 18:36 of those features but and if it doesn't now it will in the future but I mean AI knows a whole bunch 18:41 of stuff about us especially Facebook and meta. Yeah. So, what do you think about that concept that Meta might be 18:48 pulling from our profile? Good or bad? I don't know. Well, I mean, chat GBT does like you can 18:53 go and look at the memory and it it knows a lot about you because in working with you, it's understood things. So, if 18:58 you go to your settings and then personalization, you can check your memory. Oh, that's true. That's a good point. But it it didn't apply it in this case, 19:04 but maybe the other platform didn't know I was a professor. Yeah. I think also Meta probably has 19:11 more a decade's worth of of knowledge about you like who you interact with, how you interact with them. 19:17 That's true. And then all the social media platforms. Yeah, that's okay. And you can scrub your social network. 19:23 Yeah. All right. Way to scare me now. Thanks, Dave. So, um, so then there's intentional 19:29 bias. And what does that mean? Is it someone that is skewing information one way or the other on purpose? or is it 19:35 like you know when you watch Fox News versus ABC they slant one way or the other? Uh it can be both. So So I have 19:42 two examples queued up for you. Okay. One is you can poison data. So one of 19:48 the biggest complaints about um about AI is that like sucked up everyone's artwork. 
So there's software that you 19:55 can run and so before you post say photos on Facebook or whatever or on Twitter, you can run your your your work 20:03 through um software that will in the biz we call this um perturb you 20:10 can like adversarial you can upload say I take a picture of a cat. 20:16 Yeah. And I run it through the software and when it comes out it looks just like a cat looks just but it's really changed 20:21 some of the pixels just a little bit. not perceptible to the human eye, right? And then I upload that and I'm like, 20:27 "Hey, check out this picture of my cat." An AI scrapes it. They don't see a cat. They see like a dog. 20:32 How do they do that? Magic. Wow. Yeah. So, so it still looks like a cat though. 20:37 Oh, yeah. To the human eye, you'd be like, "Those two pictures are exactly the same." But it shifts it enough where 20:44 Yeah. So, I don't want to get too nerdy, but you know how well you can change pixels just enough and 20:50 just the bits the last bit because it's so insignificant. Um, the AI sees it completely different. So, that's one way 20:56 that it can happen. So, imagine all your learners post all their videos, all their artwork up online 21:02 and then an AI system comes and and scrapes it all and trains on it. If it's all been 21:07 guarded, if it's been I think it's called Nightshade, um, is one of them. There's a few different ones. 21:13 then you can corrupt over time the AI training. So is it kind of like a watermark or is 21:19 it actually like a deviation on the pixels? It's a and a watermark poisoning. 21:25 Just so you know, a watermark is just like an overlay. I'm sure you've seen these like copyrights that are like kind 21:30 of just diagonal. Yeah, I see the stock one quite a bit. Oh yeah. So it's not that. No, it it is 21:39 literally if you looked at the picture before and after, you would not be able to tell that there's any change to it. But AI, the way AI looks at things, it 21:46 it would poison it. Oh, that's really interesting. So that's an it's intentional. 21:52 That's intentional by the artist, for instance. Right. But that's a protection, isn't it? Or no, it is a protection because like a that 21:57 means your work can't be used, but B, it's kind of like stepping on a landmine if you're an AI company. You don't want 22:03 to be you want all your information to be accurate. And if you start sucking up pictures of cats, then your AI thinks it's dogs. Then your AI tool is not 22:09 calibrated properly. So let me ask you, so I'm not very familiar with this. 22:14 I know like each image has metadata. And metadata is just text essentially that 22:21 says this is the author. This is when the image was created, the camera lens, all of the data around that. Uh, does it 22:29 mess with that or or is it that's why this is baffling to 22:34 me that you can rearrange pixels in a way that is, you know, shading AI. 22:40 Yeah, there there's actually there's a in my cyber security class, one of my cyber security classes, I have people 22:45 find a picture of Nicholas Cage and there's a site called Nick or not where you like upload a photo and it just 22:51 tells you if it's Nick Cage or not. And I'm like, upload the picture and it always says yes. And then I'm like, "Now 22:57 run it through Nightshade or Fawkes or one of these other tools and then take that same picture and load it up again." And 23:02 nine times out of 10, Nick or Not is like, "Nope, that's not Nicholas Cage." That is crazy.
Yeah, that's a fun exercise all of you 23:08 can do at home. Yes, that is really cool. I definitely am going to try that out. So, and that could be good for all artists. 23:14 Yeah, it's good. It also kind of puts these AI companies like on alert. Yeah. You know, like, oh, maybe I shouldn't 23:19 maybe I shouldn't be sucking up all this art. And I wonder though, is it something where like I'm not saying AIs 23:27 are are criminals, but you know how there is a uh blockade for something and 23:33 then there's there's like the cyber criminals that will circumvent that and then you 23:39 have to do another blockade and then it's like cat and mouse game. Yeah. So our AI is going to be like, 23:44 "Oh, now I know how they're reconfiguring the the pixel. So now we're going to check for that." Yeah. So, so for instance, if it it 23:52 protects you today, but even if that picture of your cat that is seen as a dog, that might not work for tomorrow. 23:58 They might be able to suck it up tomorrow. Okay, I gotcha. Um, and then what about what is human or bias from humans? 24:06 So, this is really, really, really fascinating. The bias from humans. There was a report that came out. I think it was um and buckle your seat belts 24:12 because it's going to get like really crazy here. Uh, so there's a few different studies I want to talk about. 24:18 one uh the name of the study is humans versus AI whether uh and why we prefer 24:24 human created compared to AI created artwork and in the study they showed humans artwork some was AI and some was 24:31 humans but they didn't tell people most people had an affinity for the AI artwork until they found out it was AI 24:37 artwork and then they were just like aghast that they actually liked it and it doesn't end there we we've seen the same 24:42 thing with empathy so for text that is empathetic when it comes from an from AI 24:48 AI it's way is perceived to be way more empathetic than from another human being until you know it's AI 24:53 and then you are like oh that's dis like there's a so the bias is from our 24:59 aspects because now it's not human and so now we're feeling that bias 25:04 yeah and and it happened it also happens to be the case that AI is 7x more likely 25:10 to change your mind than human beings which is that's the scary part the social engineering aspect 25:16 right and they that was a big issue actually and I don't want to get into the the politics of it but uh a few years ago 25:23 when uh that company got into Facebook and started clear yeah and it started doing that 25:30 manipulation of like oh so you like coffee or you or you don't like coffee 25:35 well I'm going to introduce oh here's a little coffee that tastes like tea and then it slowly builds it up until you're 25:41 like wow this I I think I like coffee. Yeah. Yeah, that's exactly right. And I 25:47 people can be you can change your opinion easier than you think. Um but the the wildest story and this is this 25:53 is the the newest finding and this is from September of 25. Uh AI generated 25:58 work slop they call it. So that's when you like type in something like a prompt into AI and then you just you don't even read it. You just take it and submit it. 26:04 You're like oh this is my report. And when you are busted committing like AI work slop, 26:10 half the people that were surveyed viewed colleagues who sent work slop as less creative, less capable, less 26:16 reliable, and 42% of them saw them as less trustworthy, and 37% said less 26:22 intelligent.
So like there's some serious reputational damage that can happen from AI work. 26:28 That's really, you know, so this is where it's interesting to me because I 26:33 have this debate with people. I think that there's such a negativity sometimes 26:38 with AI that I feel like people always want you to 26:44 cite every aspect of AI because they want to know and it's like 26:49 when I'm writing an email and I wrote 90% of the email and I ask 26:54 Grammarly or I ask whatever platform AI right and Grammarly if in case you 27:00 don't know is something that can sit on your system and um do grammatical 27:06 corrections and spell check and I became a better writer because of that. Like I in in real time it would be 27:12 like hey this sucks. Why don't you change it to this? And I was like, "Oh, okay." But like you're right, like Microsoft Outlook does that. Google does 27:18 that. Like Right. And until lately, we never had to write Grammarly edit edited by Grammarly 27:25 or, you know, and and my example is always if I'm writing something that is 27:30 more robust than just like a paragraph, I might have a colleague check it and I don't ever at the end say this has been 27:37 checked by Dave Ghidiu, Jeffrey Kidd and Right. But now the expectation is what 27:44 you should write what AI was used and the AI that was used. So it's interesting to me leading back 27:50 to what you're saying about the bias the AI slop but at the very least like you you you were almost circumventing 27:56 that AI work slop bias because you're saying like you're declaring it and you're disclosing it and you're saying like this wasn't the like I I did this. 28:03 I had a little bit of help. I mean spell check is AI right? Well, and that's kind of where it's interesting, but it's but now that 28:10 if we label it, if I just said I did spell check, I don't think there'd be any 28:15 It's assumed, right? It's assumed, right? It's offensive if you don't use spell check. Yes. Cuz when I get a paper from a 28:22 student and I'm like, there's a red do you realize you can right click on that red that red word 28:28 and just, you know, Yeah. cracked it. And I don't know, but then I think to myself, maybe some people don't 28:34 have Word and they're using like notebook or another platform. I 28:40 all our learners have Word. I I know. I know. But that doesn't mean 28:45 they use it. That's true. So, um, in my head, I'm trying to like understand why that they don't have it. 28:52 And it was interesting. In one case, I I went and I said, "Don't you see the red line?" and they said that they got 28:59 annoyed by it and they turned it off not realizing oh not realizing that it was their spell 29:05 check and everything so they didn't have it on but I didn't know like I was like why is your 29:10 why and it took me it took me a minute to realize what it was and then yeah no that's fair that's fair 29:17 um so that's really interesting that it's a it's a bias on how we perceive AI 29:24 and how it's being used right yeah and there's a whole bunch of studies that show like our music um 29:30 empathy. It's it's it's it's bonkers. That's crazy. Yeah. And I'm sure that perception will 29:35 change in the next year or two. Yes. As people become more comfortable with it and more acquainted. Yeah. Yeah. And I think that other 29:41 technologies have gone through this this cycle like, oh, you used a computer for this. Oh, you used a calculator. 29:47 You did not handwrite this letter to me. You typed it. That's right.
So, let's talk about the last thing you mentioned, the algorithm 29:54 bias treadmill. Is that like a cardio for AI or something? Yep, that's exactly what it is. Um, the 30:00 algorithmic bias treadmill. So, a lot of this comes from there's a book called Algospeak and it's written by Adam 30:06 Aleksic who's a linguist from Harvard. I think he's from Harvard. A fantastic book. I don't I didn't even think I 30:11 liked linguistics until I read this book and now I'm just like absolutely fascinated. It's a quick read and it's really really engaging. And if you have 30:17 people in your life who speak words that you don't know, you definitely got to read this book. Okay. What So what do you mean by words 30:23 you don't know? Gen Alpha or or uh or um Gen Z. Gen Z. Okay. So, I definitely uh will 30:29 have to read this book. Yeah, a great book. But uh so he he he does this deep dive into words that have 30:37 been uh culturally appropriated from other cultures. And uh I think he go he 30:42 specifically spends time with the African-American English uh and also the he calls it the ball culture, which is 30:50 like drag balls. So words that you could expect to see on Tik Tok like fleek and 30:55 cap and bussin or slay and serve, queens, tea. So all 31:01 these words, I'm totally in with all those dope words. Do you know exactly what I'm saying? Cuz no, I had I had I had to read a whole 31:08 book to understand those words. But and it seems innocent enough, but like here's how the algorithmic treadmill 31:13 works. And I'm going to use Tik Tok because that's probably the best example of this. Okay. So, words that become popular, 31:20 creators and and influencers, people who make their living on Tik Tok have to get the likes and they to get the likes, you 31:26 have to have your videos surfaced through the algorithm. So, the algorithm favors these words and because creators 31:32 then chase those words for their videos, it becomes even more popular. So then the word gets changed over time because 31:39 people are just chasing it and they're putting it in their videos. And you might think, well, that doesn't seem 31:44 like that big of a deal. But he in the book, he outlines uh he kind of follows the word gyat. Uh which 31:51 what is gyat? Uh it began as an exaggerated African-American English pronunciation 31:56 of gosh darn. Gosh darn. That's the the G-rated version of that word. 32:01 Okay. Uh but then there was a shift and it was seen as amusing by outsiders and at first became an ironic exclamation for 32:08 butt. Okay. Touch. Like a tush. Yeah. And then through association it transformed into a noun 32:14 for just like the tush itself. And then the word gets diluted. And in this stage the word was eventually stretched into 32:20 internet brain rot slang. And so I'm going to if you know you know but there were some 32:26 famous songs it was like as in sticking your gyat out for the Rizzler. Uh making a farce of its original use. So 32:33 like they are now just using it as like an amusement. It doesn't mean anything anymore. I don't I don't even know what that 32:38 mean. I'm bad. I got to I got to get back with the hip. Well, the thing that you need to know is that the language changed so fast and a 32:44 word that had been part of a culture and kind of like your culture is your 32:50 identity. Your your language is your identity and was taken from from one culture and then just used and 32:56 abstracted in all sorts of ways.
And then the last stage is like this erasure where like now you they the um 33:04 African-American English needs to kind of like adopt a new word for this because it was kind of taken from them. 33:09 But the real damage is that the the algorithmic treadmill like was getting into this place where they 33:15 everyone was trying to use this word and put it in their videos. So then people started creating and Adam Aleksic calls 33:20 them backronyms, which are invented to explain the word. So even though the word like we went over what the word means, all 33:26 these people on TikTok are saying like, "Oh, it stands for um" and they give a whole bunch of like different acronyms 33:32 or this and this harms the knowledge of the word's true African-American English and it just creates this treadmill that 33:39 devalues the very culture it borrows from. Oh, that's interesting. So does it. So 33:44 now when that word is used because it's been used so much 33:49 it falls below the algorithm like it goes lower on the on the algorithm. 33:55 Well at some point it will because another word will supersede it. Okay. Yeah. So cuz they're always chasing you 34:00 know the the highest word and whatever that's going to be. But then there's all these different explanations of it that are wrong. But 34:07 the AI is Yes. It's it's like merging all. It's like, "Oh, it means this. Oh, it means 34:14 this." So then all of a sudden, if you ask the AI, it means all of this weird, but it could not be accurate. 34:20 And it most likely isn't accurate. Okay. And that happens actually with American Sign Language, too. There's a lot of people who don't really understand ASL 34:26 and will make TikToks about signs that are wrong and and so this misinformation. Uh, and so we see this 34:34 treadmill again where and part part of the AI bias is that people typically don't watch the ASL videos don't get 34:41 surfaced nearly as much as other videos. So that's kind of a thing. Um, but the other issue is like you you still get 34:47 this like dilution and erasure and these backronyms. People are claiming that, you know, this is the sign for dog when 34:54 it's not. Oh yeah, you were mentioning something. What was it? It was because the video can explain that. 34:59 Yeah. And this isn't an AI thing, but it's kind of really fascinating. So, uh, if you read like Neil Postman or Marshall 35:04 McLuhan, like the the media is the message or the medium is the message. But American Sign Language, some signs 35:10 are below the waist. Like the sign for dog is below the waist. But if you're on your phone filming, you typically don't 35:16 have below the waist. So now the sign for dog has changed very quite rapidly, like much more rapidly than we typically 35:22 see language change. Officially changed, or is it what you're talking about where it's like they're 35:27 just adapting to the technology? Yeah, they're adapting to the technology. Like I don't think that um officially has changed, but everyone's 35:34 doing it. Um and and that goes that's also true for like if you're holding a phone filming and you're signing and you 35:39 need two hands. Well, you don't have two hands. So like now signs are evolving that require only one hand. 35:46 Oh, that's really interesting. Yeah. And one last thing that's just kind of sad about languages cuz when 35:51 languages die or change and you're forced into a new language, it's kind of sad. I mean, just to put things in 35:57 perspective, I think Google only has like Google Translate for 275 languages.
250 out of 7,000. 36:04 Wow. Yeah. Yeah. And and so the the bad thing there is be this is changing because of the 36:11 technology, not because of the linguistics of it, right? Correct. So the technology is driving the change 36:16 and and is that the issue or do we should we adopt? I mean, should 36:22 we say that, you know? Yeah, it depends on kind of like what your perspective is, but there there's a fascinating 36:28 episode of um I don't know, have you ever listened to the podcast Radiolab? No, 36:33 there there's an it's a really cool um it's really cool podcast. They do like 36:38 deep dives, but it was talking about how computers like keyboards in the 80s first came out, they only have like 100 36:44 keys and the Chinese like characters had like 70,000 of them. And so they almost 36:50 changed their entire national language because the keyboard Yeah. Because they're like, "We need to be 36:55 technologically adapt, but this won't do it." And then some some some breakthrough some guy was like, "I can 37:00 figure out how to do this." And he made 70,000 characters on 100. But like for for a hot minute they were going to 37:06 change their language because of the technology. Because of technology. Oh, that's crazy. So, let me ask you, do I know I talked 37:13 about it in my example, but do you think that the bias changes depending on the platform you're using? So, Claude versus 37:20 ChatGPT, etc. Yes. uh in evidence I have of that is 37:26 Google overcorrected and because a lot of the issu the photos and this is probably two years ago were not 37:33 surfacing any diversity so they they started um forcing diversity on the photos where you shouldn't see it 37:40 okay uh and so so I know that they're tweaking the algorithms I I think Grok 37:46 will for instance let you basically what is Grok oh Grok or Gro that's the um thank you 37:51 that's the AI service that's similar to say ChatGPT or Google Gemini, but it's run by X, formerly Twitter, that's 37:58 Elon Musk's and it will let you make images of anything like celebrities. Uh, and we 38:03 talked about that too with Sora, too. Was that okay? Yeah. And so, so we covered bias in training, 38:11 bias in results. Yep. Intentional bias. Yep. That was the poisoning for instance. Okay. And then the bias from humans and 38:17 then algorithmic bias treadmill. Yeah. That's awesome. So, let me ask you, how 38:23 could we use this or what do we need to be aware of as educators or a general 38:29 user of a system like this knowing that there's these biases? 38:34 And so, by the way, is it biases? Biases. Bi biases. 38:40 I I just got done telling you I don't know anything about linguistics. I got to read the book. Yeah, read the book. I'll tell you. Uh I 38:48 interesting like this becomes a lesson. This becomes some kind of like digital citizenship where if you're learn if 38:54 you're using AI in your course, you can say this every day. You can be like listen folks, we know it's biased like 38:59 and and we it becomes a talking point. You need to be on on guard and on edge and always looking for it. But there are 39:05 some things you can do. One thing is there's uh a new AI tool out there 39:11 called Latimer. It's just latimer.ai, which claims to be um 39:17 the most powerful AI models trained to interact with diverse users. So they're they're very aware of the bias and 39:23 they're mitigating it at the training level. Ah okay. So that's the first step. So taking a stop. Yeah. 39:28 Okay.
And then that's the content that they're training on. They're making sure that it's got a diverse um exposure. 39:36 Yeah. To a bunch of different things. Yeah. Um, and the it seems like a really 39:42 cool company. I I I've kicked the tires a little bit on it. Okay. And so, 39:47 as educators, when we're asking people to do things in AI, we want them to 39:55 check and verify that they have different perspectives. Yeah. And that's a good lesson like AI 40:02 aside, right? Like that lesson should always have existed. Like when I was growing up, we had encyclopedias and 40:07 yes, they were biased because history's yeah, you know, written by those who hang heroes if you're a Braveheart fan. Um 40:13 and and so like but we never had that conversation. So at least we can have that conversation because the evidence is right there in front of us. And I 40:19 think that's a good conversation to have with your learners. That that's great. And then is it 40:24 something that we have to think about when we're prompting too or is it just the results we're getting? like do I 40:29 need to in the example I gave I did you know just a general teacher do I need to 40:35 be more aware of that that I'm conscious that there is a bias in 40:42 this AI system so I need to know that or if I see it and I say wait a minute why 40:49 is it a Caucasian female can it be a male um an Asian male or 40:55 African-American male um or Hispanic male Yeah, I mean it depends on what the output is that you're looking for, but I 41:02 think generally speaking, you always want to be aware of the bias and and work around it when when you need to. 41:08 So, here's another interesting perspective is when I'm asking AI for 41:14 something, should I ask it to keep in mind a diverse um background? So if I 41:22 say like um what would be some really good restaurants keeping in mind that I 41:30 could give it what I like but I also want a diverse uh food palate or 41:35 something. Yeah. Or you could say like I prefer non-chains. I want to support the local economy. I want to whatever it's going 41:41 to be. Um yeah, you should definitely do that. And generally speaking, the more 41:46 permission you give AI, the better results you're going to get anyhow. So, if you're giving it that permission to say like, I'm I'm looking for the 41:53 diverse um cuisine or I'm looking for the diverse like restaurants in the area. Yeah, you're going to get more 42:00 richer answers. Okay, that's a good tip. That's a good tip. All right, write that down. Unless I'm on meta and then it's just 42:07 going to give me burger and fries because it knows everything I do. Yes. Well, I think that's all the time we 42:13 have for today. My name is Paul Engin. And I'm Dave Ghidiu. 42:19 Um, if you enjoyed today's conversation, be sure to subscribe so you never have to miss an episode 42:28 and share with a friend or colleague. Until next time, stay curious, stay connected, and thanks for looking 42:33 through the immersive lens with us. And let's be careful out there, folks. 42:39 This episode was engineered by Jeff Kidd. Recorded at Finger Lakes Community College podcast studios located in 42:45 beautiful Canandaigua, New York in the heart of the Finger Lakes region. Offering more than 55 degrees, certificates, micro 42:51 credentials, and workforce training programs. Thank you to public relations and communications, marketing, and the FLX 42:57 AI hub. Eager to delve into a passion? Discover exciting and immersive opportunities at 43:02 flcc.edu.
As part of our mission at FLCC, we're committed to making education 43:08 accessible, innovative, and aligned with the needs of both students and employers. The views expressed in this 43:15 podcast are those of the hosts and guests and do not necessarily reflect the official position of Finger Lakes Community College. 43:20 Music by Den from Pixabay. This is the immersive lens. 43:27 Woohoo! Jeff, where's your thing? 43:33 Nice job. It will always be funny. All right, I'm 43:39 going to stop recording. [Music]

