Episode 3: Low Barrier Tools

Cover art for the Low Barrier Tools episode of The Immersive Lens podcast: a hexagonal background, an illustration of a robot next to an ON AIR sign, a photo of Dave Ghidiu and Paul Engin, and text that says WELCOME TO THE IMMERSIVE LENS LOW BARRIER TOOLS.

In this episode of The Immersive Lens, hosts Paul Engin and Dave Ghidiu discuss emerging AI technologies, beginning with the casting of the AI actress Tilly Norwood in a History Channel documentary, where she will bridge historical eras despite lingering challenges with visual consistency. They also cover the release of Google’s Gemini 3, which arrived the day of recording, highlighting its stronger reasoning capabilities, demonstrated by a history professor using it to infer trends from 19th-century ledgers, as well as its ability to generate code for video games. Finally, the hosts mention Google's Weather Next 2, a new AI model that reportedly surpasses traditional meteorological systems in forecasting accuracy.

The episode’s central theme is "low barrier" media development tools that let users create professional content without high-end hardware or extensive training. The hosts explore platforms like Adobe Express and Canva, which offer web-based, collaborative environments for designing graphics and social media content on devices as modest as Chromebooks. Dave Ghidiu enthusiastically introduces Google Vids, a new tool that works like a slide deck but publishes as a video, featuring dynamic embedding that lets creators update content instantly on websites without re-uploading files.


Key Topics

The Democratization of Content Creation via "Low Barrier" Tools. The hosts position platforms like Adobe Express and Canva as game-changers because they remove the need for expensive hardware or specialized skills. These tools allow users on basic devices, such as Chromebooks, to collaborate and create professional-grade media, infographics, and videos without the steep learning curve of software like Adobe Photoshop or Premiere.

Google Vids as a Dynamic, Live-Updating Platform. Google Vids is not just a video editor, but a tool that blends the ease of slide presentations with video output. A critical feature is its ability to be embedded on a website and updated "live"; changes made in the editor are instantly reflected in the embedded video without the user needing to export and re-upload the file.
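
To make the "dynamic embedding" idea concrete, the sketch below shows how a consuming web page can treat such an embed: the page stores only a link to the published video and renders it in a player frame, so whatever the source tool currently publishes is what viewers see, with no re-export or re-upload step. This is a minimal illustration in TypeScript, and the publish URL, element ID, and player dimensions are hypothetical placeholders, not Google's actual Google Vids embed format.

```typescript
// Minimal sketch (hypothetical URL and IDs): embed a published video by reference
// instead of uploading a flattened MP4 copy. Because the page only stores a link,
// edits made in the source editor show up the next time the player loads.

const PUBLISHED_VIDEO_URL = "https://example.com/published/video-abc123"; // placeholder, not a real Google Vids URL

function embedLiveVideo(containerId: string, publishedUrl: string): void {
  const container = document.getElementById(containerId);
  if (container === null) {
    throw new Error(`No element with id "${containerId}" on this page.`);
  }

  const frame = document.createElement("iframe");
  frame.src = publishedUrl;         // reference the video; the page never hosts its own copy
  frame.width = "960";
  frame.height = "540";
  frame.allowFullscreen = true;
  frame.title = "Embedded course video";

  container.replaceChildren(frame); // swap the player in; freshness comes from the source, not this page
}

// Usage: assumes the page has a <div id="lesson-video"></div> somewhere in its markup.
embedLiveVideo("lesson-video", PUBLISHED_VIDEO_URL);
```

The contrast with exporting an MP4 is that an exported file is a frozen copy: changing it means re-exporting and re-uploading to every page that hosts it, whereas a referenced embed stays current everywhere it appears.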

A Leap in AI Reasoning and Real-World Modeling. Google's releases of Gemini 3 and Weather Next 2 mark a generational step forward. Unlike previous models that struggled with math, Gemini 3 demonstrated the ability to perform contextual reasoning and calculations on historical ledgers, while Weather Next 2 reportedly exceeds the current "gold standard" of traditional meteorological forecasting.

The Necessity of Human Intervention in AI Production. The example of the AI-generated actress Tilly Norwood shows that AI acts as a starting point that gets creators "in the ballpark" but still requires human effort. AI video generation often suffers from "decay" or inconsistency in character likeness, so traditional post-production work such as compositing and stitching is still needed to create a polished final product.



Transcript


Dave: Okay, I feel like I'm in Tron Ares. That's the color scheme for the... That's just the focus assist. That motion tracking. So we're out, dude. We're getting better and better at this thing. We only have four... Jeff, can you come back in? We only have five devices for this. Do you have any extra devices that we can use? Paul: I got some AirPods you can have if you want. Dave: Just delay. Low barrier tools. I don't even know what that means. No one knows what it means, but it's provocative. Paul: No it's not. Dave: It gets the people going. Paul: Welcome to the Immersive Lens, the podcast exploring the technologies reshaping how we live, work, and learn. From AI and virtual reality to creative media and design, we're diving into the tools and ideas shaping our connected world. Join us as we uncover the people and ideas driving the next wave of interactive experiences. This is the Immersive Lens. How you doing, Dave? Dave: I'm doing great. How you doing? Paul: Good, good. Did you hear what's going on with the AI actor Tilly Norwood? Dave: Just the headlines. Tell me more about it. Have you been following it? Paul: Yeah, so I read a little bit about it. I guess the... wait, and this is Tilly Norwood, the AI actress that you were telling us about a few weeks ago, right? Dave: Yes. Paul: Yeah, so completely fictitious character. I don't know how she was created, so I don't know what the AI learned on. But they are making her in a History Channel documentary, and they're going to use her as the host that would possibly like jump from modern to whatever the decade is that they're talking about. Dave: So just like transform right then and there into period costumes? Paul: Exactly. And then start whatever the engagement or whatever the scenario is for that time period. Dave: Interesting. This is just so hard for me to wrap my head around because in my mind I'm like, "Yeah, we've had CGI for 20 years, this is not a hard thing." But this is something completely different, right? Paul: Yeah, I mean, well, it's interesting because if we don't say she's AI, everyone would just assume that it is like post-production work where you would just be compositing. And there's still going to be that. I think what they're doing is they're taking like old videos and old stills and then they're going to use AI to help kind of like bring them to life, and then they're going to use her integrated as a host. But you know, you would do something similar: you shoot someone on a green screen, composite them into whatever you're doing. So it's interesting. I think it's definitely a different use for this. Dave: And I'm wondering if it's more work or less work than actually recording somebody. You know, I was just thinking like... I was in a commercial—I don't like to brag—once. But I was in a commercial. Paul: Watching? Yeah, you could see my hand. Dave: Anyhow, it was... hand model. I was a hand model. And it was just some small potatoes commercial in Rochester and they had catering and they had makeup and they had costume and it was for a 30-second spot. So when I think about movies and TV shows, I'm like, "Wow, there must be 30, 40, 50, 60, 70, 100 people on the crew." So this, if you don't have to worry about makeup, you don't need catering, you don't need... like, you save a lot of money I suppose, right? Paul: Yeah. And I think that, you know, when I look at this stuff... we were just talking about it, but Photoshop has it. 
There's different applications that you can take old photos and, you know, clean up the scratches, make them full color. And you have these tools now that can actually bring static images to life. So it's really interesting to see where it's making that leap. Dave: Yeah, I'm excited about it. I'll watch it. It'll make me watch the History Channel. Paul: Yeah, I'm curious to see how it's integrated and how she's integrated. Cuz we haven't really seen consistent character AI, you know what I mean? Like where we can see that character in multiple scenes and different roles and it actually looks like her. Dave: Yeah, there's always a decay or an erosion of their likeness from scene to scene. Paul: Yeah, the eyes are slightly different, the nose is slightly different, maybe there's a mole that's not there, you know? Like there's those really subtle things that sometimes the AI just misses or doesn't do. But this is interesting. Dave: It kind of reminds me of my metaphor for AI. So this will take out a lot of the work, but still, once that video is done, that's kind of when the human work begins. Now you have to doctor it up, now you need to fix it in post and stitch it together. So this is a good metaphor for saying like, hey, I get you in the ballpark, but maybe you got to do the real work on your own. Paul: Right. Cuz I don't think they're going to just do, you know, "make Tilly Norwood XYZ put them in this time period." I think they're going to have her solo and then they're going to do the same thing for a background and then you're going to use something to composite everything together. Dave: Yeah. So it's more work than just typing in a prompt. Paul: Yes. Yeah. But I think that like you said, I think it alleviates some of the, you know... maybe you don't need 15 actors to recreate a scene. Now you can use this. But cool. I'm excited. How about you? Anything going on or anything that you came across? Dave: This... yeah. So it's wild. It's only been five or six days since we talked last. And last Friday, the New York Times Hard Fork—I'd like to say friend of the pod but they don't know we exist—they ran a piece about this history professor who's really engaged with AI and he's testing the newest models. There's some websites you can go to test them. And he was testing a model that he kind of assumed was Gemini 3, which hasn't been released yet. Paul: Okay. Dave: And his workflow is he takes... in this case, I don't know, documents from the 1800s or 1700s from like a sugar distributor. And so it's just a ledger. And he usually scans it in, has AI kind of make sense of it. And it was able to do math and conversions that were contextual. That was not really based on what the document was; it was taking a leap of faith. And the words from this professor were, "Models shouldn't be able to do this. This is operating at a level above what we have been using." Paul: Wait, so can you explain? I don't quite understand it. So it's a ledger, but the AI system didn't know it was a ledger? Dave: So this ledger had weights in different units and it had different amounts of money and different distributions for who was going to what. And large language models are historically bad at math because it's actually not doing math, it's just looking in the past. So if you ask AI, "Hey, what's 2 plus 2?" it knows it's four not because it knows 2 plus 2 is four, but because there's a billion different documents that say 2 plus 2 is four.
But when you start doing more novel, unique math, you really lose fidelity in those calculations. So not only was the math right, but it was able to infer—it had a reasoning that we haven't seen before in AI. Paul: Oh, so it saw like... "Oh, this was $5, this was six pounds, there's eight units of this..." Dave: Yeah. And it said, "Wait a minute, this kind of looks like something that we can put together in a spreadsheet or something." And it started like... Paul: And look for trends? Insights? Trends? Dave: Yeah. So it was a step above where we've been with AI. Paul: Oh, that's really... that's amazing. Dave: Yeah. And so that was on Friday. And today's Tuesday, November 18th. And right before this podcast started, literally you and I were just sitting here, I got access to Gemini 3 which came out today. Paul: Oh wow. Dave: And so this history professor was thinking, "I think this might be Gemini 3," and lo and behold it is Gemini 3. And Gemini 3 is wild. You just saw me make a video game while we were getting the mics ready and everything. So Gemini 3 is out and it's really remarkable because when chatGPT came out, Google was kind of caught. They didn't anticipate it. And then they released Bard, which was their first iteration. I don't know if you remember, that was kind of garbage. Paul: Yeah. Dave: And Gemini when it first came out wasn't all that great. But right now I think Gemini 3 is really making Sam Altman and the rest of the people at OpenAI really a little concerned. Paul: Yeah, I'm really liking what it's producing. I haven't really got to play with Gemini specifically too much, but the amount I've been able to play with it, it's producing really good content. And what did you use? The game, I saw the game... was it just straight Gemini? Dave: Straight Gemini. I went to the Gemini app, actually I was at gemini.google.com, and I just said, "Make this game." And I described it—you saw it was probably like three or four sentences—and it was awesome. Paul: Oh, and it was all web-based? Dave: All web-based. Yeah. Paul: Now you said when we were doing that that it was learning off of your previous prompts. So do you have that turned on or something? With history? Dave: No, AI has gotten to a point where it kind of learns across prompts because... remember when you first came, chats were context specifics? But now it does a much better job of learning who you are. Paul: Okay, okay. And that's... and you would have to be logged in with an account right, so it can have background on you? Dave: Yep. And know your preferences. And the other thing that happened with Google yesterday was they dropped Weather Next 2, which is a weather modeling AI, and it exceeds the current gold standard right now. Paul: Can you give a little more background? I don't even know what Weather Next is. Dave: Weather Next is a weather prediction... so think of like meteorologists and weather people on the news. Meteorologists are using models, historical models, to predict the weather. And Weather Next 2 is the AI-powered version of that, and it is so far exceeding what is currently the gold standard for meteorology. Paul: Wow. So it takes all the radar data, all the historical data, and then looks at the patterns and then now can make more accurate predictions on the weather than meteorologists? Than any system? Dave: That's right. Oh, system, okay. All right. Paul: So Doppler radar one? Dave: Doppler radar one is so 1990s. That's old school, man. It's all about Weather Next. Paul: I was just gonna say... 
and now here with Weather Next 2. Very cool. And so what are we talking about today? Dave: Today we're going to talk about the low barrier media development tools. Paul: Low barrier media development tools. And I think this is a phrase that I think you really coined for me because I'm so used to Adobe products that you're like, "You know, we got to... these tools are lowering the barrier for people to execute on some things that they might want to do." So for instance, if somebody wants to create an infographic, like a visual, maybe they don't know Photoshop, maybe they don't know Illustrator to create all the icons needed, maybe they don't know how to use Excel correctly. Dave: It takes me like 10 years to make an infographic in like Illustrator or Photoshop. Paul: Right. So there's these tools now that exist that kind of help you so you don't necessarily need to know how to create an icon or develop an infographic. You could integrate it. Dave: Is Canva like the canonical one that most people would recognize? Paul: I think so. Canva is very popular and it's becoming more and more integrated into a lot of different platforms. And so Canva... Dave: I can make an infographic there in like five minutes. Paul: Five years in Illustrator, right? Well, and so I think that that's the... that's what we're talking about when we're talking about low barrier media development. And it was interesting because when you bring that up to me, I'm so ingrained into like Adobe and production and I'm just like, "But you can do all this..." You know? But you're like, "Yeah, but I don't have Adobe or I don't know how to do XYZ, so this is perfect for me." And I'm like, "Okay, that makes complete sense." Dave: Yeah, like, that's your core job responsibility is knowing all that stuff. My core job responsibility is to... like if I need to make a video, in the past I'd probably have to pay someone to do it or just live without a video. Or worst case scenario, like jam my face in front of a camera and it would just be a video of my face and that's kind of it. Paul: Yes. And I think that not only is the software coming around to make it more accessible, but I think it is also the technology. So like you know, most laptops, most devices have a camera now, right? So now we can use that as a, you know, a way to record. And I remember when screen recording came out and I could actually record my desktop or my PowerPoint or whatever, and I was like, "Oh that's kind of nice, right?" Like now it's my face or the PowerPoint. Like I could choose between the two of them. Dave: Yes. Paul: And we're going to talk about it in a minute, but you can even create avatars now to uh... Dave: And so if you don't want to be on camera—and there's some, you know, I have some students that are very hesitant to be on camera, sure—but they can pick an avatar that represents them. Like a more... not a real person but like a character, like a cartoony, and they could use that character as their... Dave: Yeah. So tell me about that. Tell me like what software are you talking about that's the low barrier media production? Paul: So for me, the two that I've been using, and the one that I use the most, is Adobe Express. Dave: Is that free? Paul: So there's a free version, like you can do some minor things, but then you'd have to pay for a subscription. And they have different tier models for that. But it's all web-based. Most of these tools are tools that you don't need to download. I think that's the other barrier that many people have. 
Um, I have many students for instance that they're like, "I tried to install Photoshop and I don't have... my computer can't support it." Right? So when we're looking at a low barrier media development tool, I think we're also looking at: is it web-based? Is it accessible from any system? I mean, you're not going to believe this: some people have Chromebooks. Dave: And you have to... Dave has a Chromebook. I'm on my Chromebook right now. But it's collaborative. If it's web-based then it's collaborative so you could share with me and we could work at the same time. Paul: Yep. It's collaborative and it's not relying on the system having such power that, you know, it needs to run it. You're not going to believe this, but some people have high-powered MacBook Pros. Dave: Yes. That would be... Paul has one right now. Paul: Yes. All right. Dave: So that's nice. You can do on any device. Paul: Yep. And so with Adobe Express, you can go on to any web browser, you type it in, and then they have different tool sets. So if you're trying to do social media posts, you can identify the social media that platform that you want to target. So if it's like Instagram or if you wanted to do a video and you want it like a Tik Tok or a Instagram reel... Dave: Or YouTube vertical? Paul: Yep, it'll do it vertical, it'll do it horizontal. The cool thing about these tools are also, let's say I produce a video that is on Tik Tok or let's say YouTube—so traditional landscape, you know, TV aspect ratio... Dave: Yeah. Paul: When I export it, it will say, "Do you also want a vertical version of this?" Dave: Oh. And you can click it and it will try to rework it so the text is positioned and aligned in that? So like your face doesn't get cut off? Paul: Yeah. And that's a little more difficult but it can control more of the text so it can like you know scale the text down and whatnot. Um, and so it saves you a lot of work. Dave: Oh yeah. If I wanted to do both versions... Paul: And you know it's really cool because I show the students how to use this for... they do a social media takeover where they can create these posts and um, it has to adhere to some guidelines. But they can do really simple animations. They can do avatars, so you can pick a character and they can... and a lot of these kids had really creative approaches to doing, you know, using the avatars. But then they also shot video on their phone and then just uploaded it directly to the template as well. Dave: So that's actually really cool if you're collaborative too because you have three or four people getting the video: the part they're responsible for in a collaborative video. Paul: Yes. And you can also create a branding. So in Adobe Express I can put the logos and everything and they can all have access to it so I don't have to like share it with them in a different... Dave: So can they do color palettes too and fonts? Oh, that's really, really nice. Yeah I see the wisdom there now. And so Canva is similar right? It's all web-based? Paul: You can... there's a paywall there as well. So if you want the advanced AI features where things are... you can prompt it to do certain things. Dave: I think there's some AI features that are free and then some more that are not. Paul: Yeah, I think like image generation maybe. Dave: Okay. Paul: Yep. So it's a similar tool. I mentioned to you last time it's now integrated with chatGPT so you can type Canva and it'll launch and give you options. So that's a cool integration. 
And I believe Adobe Express has a integration into it now too cuz they can do it with any web-based tool. Dave: Oh no, so as long as it's web-based then I think that it can interact with it. So chatGPT can interact with Express? Paul: Yep. So very... I guess tools that can do a lot. They can create infographics so you can, you know, we talked about that at the beginning. You can just tell it... it has AI capabilities so you can say "create an icon that is a coffee cup." They probably already have an icon that you could do a search for that, but you know if you want it more specific. Um, you could copy and paste the data from Excel into there and it can try to formulate a infographic that way. And Canva does the same thing. So you have the ability to use both, you know, in a similar way. But I know a lot of people love Canva and just completely swear by it for everything from presentations to the graphic development. Dave: Well I used to work at a different college and the graphic design person said PowerPoint made it really easy to make really bad presentations really quickly. And I kind of think Canva makes it really easy to make really good presentations really quickly. Paul: Yes. The design element and it's just fantastic. Dave: Yes. Paul: And it's interesting because it was to me like the history of this is like it started with PowerPoint, then Keynote came along which is Apple's version, and Keynote was like "we're going to dress this place up a little bit." But now I do feel like it's like you look at Canva and the templates that they have and the starting points and it just... it looks so... like I know when people are using Canva because it's so clean and like... I mean you can do different styles but um there's a little uh... you could see there's like a creativity like you're like "oh someone..." Dave: We're not starting from way back when. This is all new, new design, fresh and new. Paul: Exactly. Um, so I really think that it's tools that you should try to explore if you're out there and you're interested. It's all web-based so you don't need to worry about what computer system you have. I think that it's a great tool. And you were talking about another tool that you're really excited about that's in the same framework, in the same category. Dave: And don't sleep on this Paul. Don't sleep on Google Vids. Paul: Google Vids? Dave: Google Vids. Paul: So it's a video editor? Dave: It is more than that. Paul: What is it? Dave: So imagine power... well, I'm sorry, Google Slides. Paul: Okay. Dave: Google Slides is nice but it's a slideshow. So if you're presenting or if it's say you put it on your website or in your course, the user whoever's watching it has to hit spacebar or the next button to pop up the next slide, and then text will come up, right? So that's a presentation. Google Vids is the analog of that when you look at the software. It looks very similar, has a lot of shared DNA with Google Slides, but it's for video editing. And now the experience is you embed that on your website and it's a video. The user does not need to interact with it. And you say like in 3 minutes and 30 seconds have this text pop up and have it... and it has different effects like slide in, slide out. So you can make these really... I can really punch up with my video production skills. So I can now record myself on my phone or my computer because it has a button to record right from your website or your webcam. 
So I can embed video, I can bring videos in, I can record videos, I can record my desktop and it puts it on you know the canvas. And then I can make it smaller or bigger. I can have two or three videos on the same slide and I can say "wait for this video is 2 minutes long, after at the two-minute mark start playing this other video that you've been able to see the whole time." So you create the timing for it. Paul: So can I ask you... I'm video background right now... but is there a timeline? Dave: There is a timeline per slide. They call them scenes. Paul: They call them scenes? Yes. But so that's how you do your... Dave: So it's not like old school PowerPoint where it's like you know animation 2 minutes, correct? And then it's a sliding timeline. So if I have three objects—maybe some text pops up and then a title slides across the top and then I have some clip art that just explodes on—each one has a slider and I can say "Okay I want last six seconds and I want it to come in here." So you can slide and make it look nice. Paul: So the output of this is not like a flattened MP4? This is a certain file or an embed or something? Dave: So you can export as MP4 and upload that to YouTube, but what I really like doing—and this is the power that I haven't seen in anything else—is I can actually embed that. And then we can talk about embedding later but that's basically you can take this and put it on your website so when I change the Google Vids, the change is already there on the live... on your website. I don't need to go back in and re-upload it to the website. Paul: Ah, so if you have that video in three places you update it in the Google Vids and it pushes out everywhere? Dave: And to the user it's perceived as a straight video. Paul: So we use Brightspace here for our learning management system for our students, and so I could take this code, it's like an embed code, and I can paste it into one of our modules so the student clicks on it and it just plays like a video? Dave: Just plays like a video. And it has the closed captioning and it has the gear wheel where if you want to make it go a little bit faster... Paul: Oh so literally like a video. Oh yeah. And is there back and neck? Is it like a traditional presentation too or is it not? Is it video? Dave: It presents as a video even though it's not technically a video. So there's a, I call it a scrubber, so you can just scan it like you would in YouTube. Paul: Yes really? Oh yeah. So the user thinks it's a video and then you can go in the back end back into Google Vids and say "Oh I'm going to re-record this" and then save it? Or I don't know what you do... you don't save it it's just like any other Google doc it's auto save? Dave: So for instance, I have a video in a course that I'm teaching right now and it could be a website whatever. So I have it in my course and I was thinking about it last night and I was like "ah I really need to set the stage so the video just kind of jumps into whatever the content was." And I was so... this afternoon I just recorded myself for like 30 seconds saying "Okay now let's take a look at this topic" and I just put that right in the Google Vids like in the first slide. And I didn't have to go into my course and upload anything; it was just there. So if they were to open it up right now, a student, they'd see the update. Paul: Just... and their experience might be different yesterday or today? Dave: So I try not to make it very jarring but uh... but so there's a few things I like about this. 
One is I know I'm working in online learning specifically but this is true for any type of video production: there is often this "it's not perfect, I don't want to ship it." And when I say ship it I mean like put it in your course, right? Paul: Right. Dave: And this allows me... this gives me the space and the permission to ship it, put it in my course knowing I can go back the next day and maybe change the colors or add another scene. So it now gives me this progression where I can build on it. And there's another secret weapon to this whole thing I didn't tell you about. So in the sidebar it gives me access to stock footage so I can search. And I was doing this video and I wanted to talk about... I needed stock footage of a frying pan so I could have hit the stock footage button and find the frying pan and or cast iron whatever it was. But it also has Google Veo 3 embedded in it—that's the video generation, the AI text to video. So because this is all Google... it's all Google. And if you have the Gemini subscription then I just hit that Veo button and I said "I need footage of a frying pan" but I customized it. I was like "I want it to be older and I want it to have like eggs on one side and hash browns on the other." I may have found that in stock footage, I might not have, but I could definitely create it. So it has this whole video production thing right there: the whole pipeline. I can do the B-roll—that like extra footage. Uh, I can... it does have avatars. It's not quite like what you were describing but it has avatars you can choose from. Uh, and it has all these other like AI tools built in. So it's just... even without the AI tools, so if you don't have Gemini Pro, it's still so cool. It's so awesome. Paul: That is really, really cool. So you have the ability to also add B-roll which is that extra footage so if you're talking about the um... I don't know like airplanes, sure, and you know you don't have any footage of it you could technically say you know "give me a picture of a B1 bomber, a video..." Dave: A video of a... Paul: That's really cool. Dave: Yeah which is nice because for videos now it's not just my ugly mug. It's my ugly mug but then I can, while I'm talking, we'll have the SR71 or the V2 bomber or whatever and then it'll come back to my face. And then I can share my desktop. So it's really heightened my game of video production. It's done it very very easily and organically. Paul: So you can even have a screencast of your desktop too while you're doing it? Dave: Oh that's great. And one of the nice things for all the nerds out there is if you are recording your face and the desktop it's two different streams. So it's not just my desktop with my face on it. The desktop is a video and my face is a separate video. So it's two layers. Paul: Two layers. So you could turn it off if you wanted to? Dave: Yeah I could make my face bigger and then make it smaller all the while the desktop is in the background. It's just really really sweet. Paul: That's so cool. Dave: All right, don't sleep on it Paul. Don't sleep on it. Paul: I'll definitely be trying it out. Dave: Sweet. Paul: Um so is there anything particular as far as educators go that do you think that, you know... I just explained the Adobe Express and Canva... Dave: Yeah, for educators I think both Express and Canva... and Canva actually if you're in a K12 school Canva gives you the pro for free. Paul: Oh really? Dave: Yeah which is awesome. So if you're an educator don't sleep on that either.
But what I appreciate about all three of these products—the Express, Canva, and Google Vids—is that it is collaborative. So it's so much easier when you're working in this case with learners, but if you're at a business and you're doing some social media campaign it is still collaborative. You can have multiple people working on it. So I think those are the strengths. And again I'll never be at the proficiency you're at with Photoshop, Illustrator, Premiere, After Effects, all those Adobe products but I'm okay with that because I have this and this is good enough for what I do. Paul: Yeah and it seems like it does everything you need. And it looks like it's getting easier and the ability to do more is like, you know, right around the corner. Every week it's like "whoa they just released this, they just released that." Dave: That's bonkers. We're going to have to start recording every day. Paul: Right? So that's really cool. I think that um, that is something that's interesting because uh for the collaboration aspects of it, I know I have students use um Google Slides for that reason so they are all responsible for uh different you know different slides so they don't need to be together necessarily side by side. Dave: Yeah and they can do it asynchronously. And I don't... I'm sure you know about this but they added... so they did have slide templates in Google Slides but they added something called um, gosh what is it called... under the templates like on the sidebar, which is different than where they lived before, they have these really really high-end templates and when you click on one it gives you all the different slides so you can import them one at a time. But they're like Canva where the design is done for you. So if you use Google Slides templates in the past they've really revamped that. Paul: Oh nice. And I bet you it's you know in direct competition because they're seeing what Canva and Adobe Express are doing so they're like "oh we better step it up." Um but I definitely think that these tools are useful. I think that because there's such a low barrier that any student can use it. Um and you're not restricted by having a high performance computer which is I think a really big thing for many students because they don't have access to a high-end computer. Um so they have the ability to use it. And then I think that if you're an educator and you need to create videos like you said and maybe you don't have Premiere, you don't know Premiere... Dave: Yeah, I'll never know Premiere. Paul: You can use the Google Vids and you can create and screencast and you can do avatars. So you can do just about everything inside of just that one tool. Dave: Yeah and it's probably a little bit limited. My final product would be so much lower grade than your final product but it's really good for me and it's empowering. But I think really one of the most powerful things about it is I look at these YouTube videos I'm like "Oh I don't have that video production." But now I'm well on my way. I'm a lot better than I was two weeks ago. Paul: Right. And I think that's the important thing here is that there's a lot of tools out there and because there's such low barriers that you can achieve just about anything you can think about now. And I think AI helps with that. I think these web-based tools help with that. Um now there's always a subscription with most of these things. Um I think Google for the most part is free unless you want the AI pieces. Dave: Yeah right. 
Paul: So I think that that's a great resource. Um and I'm really interested in doing that video... uh Google Vids because uh that's I think that's so cool that you can update it on the fly. Dave: Oh it's a game changer. It's a game changer. Paul: Awesome. Well I hope you learned a little bit more about the um you know the low barrier media development for everyone out there. Dave: And I learned that I will never use Premiere, I will never be able to figure it out, it's too complicated for me. Paul: No, it's good. They're making it easier and easier too but they're also adding more and more features but... All right so that's all the time we have for today. I'm Paul Engin. Dave: And my name is Dave Ghidiu. This episode was engineered by Jeff Kidd and recorded at Finger Lakes Community College podcast studios located in beautiful Canandaigua, New York in the heart of the Finger Lakes region. Offering more than 55 degrees, certificates, micro credentials, and workforce training programs. Thank you to the public relations and communications marketing and the FLX AI hub. Eager to delve into a passion, discover exciting and immersive opportunities at www.flcc.edu. As part of our mission at FLCC we are committed to making education accessible, innovative and aligned with the needs of both students and employers. The views expressed in this podcast are those of the hosts and guests and do not necessarily reflect the official position of Finger Lakes Community College. Music by Den from Pixabay. This is the Immersive Lens. Until next time, stay curious, stay connected, and thanks for looking through the immersive lens with us. Let's be careful out there Jeff. Jeff: Okay that's a wrap. That's how you know it's a wrap.

