The FIR Podcast Network Everything Feed
Subscribe to receive every episode of every show on the FIR Podcast Network
FIR #466: Still Hallucinating After All These Years
Not only are AI chatbots still hallucinating; by some accounts, it's getting worse. Moreover, despite abundant coverage of the tendency of LLMs to make stuff up, people are still not fact-checking, leading to some embarrassing consequences. Even the legal team from Anthropic (the company behind the Claude frontier LLM) got caught.

Also in this episode:
Google has a new tool just for making AI videos with sound: what could possibly go wrong?
Lack of strategic leadership and failure to communicate about AI's ethical use are two findings from a new Global Alliance report
People still matter. Some overly exuberant CEOs are walking back their AI-first proclamations
Google AI Overviews lead to a dramatic reduction in click-throughs
Google is teaching American adults how to be adults. Should they be finding your content?
In his tech report, Dan York looks at some services shutting down and others starting up.

Links from this episode:
Google has a new tool just for making AI videos
Meet Flow: AI-powered filmmaking with Veo 3
Google's Veo 3 marks the end of AI video's 'silent era'
Google announces new video and image generation models Veo 3 and Imagen 4, alongside a new AI filmmaking tool Flow and expanded access to Lyria 2
Ethan Mollick (@emollick) on X
Veo 3 News Anchor Clips
Google has a new tool just for making AI videos
Chicago Sun-Times publishes made-up books and fake experts in AI debacle
How an AI-generated summer reading list got published in major newspapers
Chicago Sun-Times publishes made-up books and fake experts in AI debacle
Anthropic's lawyer was forced to apologize after Claude hallucinated a legal citation
Chicago Sun-Times Faces Backlash After Promoting Fake Books In AI-Generated Summer Reading List
Yes, Chicago Sun-Times published AI-generated 'summer reading list' with books that don't exist
Groundbreaking Report on AI in PR and Communication Management
Comms failing to provide leadership for AI
Perplexity Response to Query about Failure to Implement AI Strategically
Embracing the Unknown: How Leaders Engage with Generative AI in the Face of Uncertainty
Google is Teaching American Adults How to Be Adults
Google AI Overviews leads to dramatic reduction in clickthroughs for Mail Online
Shocking 56% CTR drop: AI Overviews gut MailOnline's search traffic
Google AI Overviews decrease CTRs by 34.5%, per new study
The Google Exodus: Why 46% of Gen Z Has Abandoned Traditional Search
Company Regrets Replacing All Those Pesky Human Workers With AI, Just Wants Its Humans Back
How Investors Feel About Corporate Actions and Causes

Links from Dan York's Tech Report:
Skype shuts down for good on Monday: NPR
Glitch is basically shutting down
Investing in what moves the internet forward
Bluesky: "We're testing a new feature! Starting this week, select accounts can add a livestream link to sites like YouTube or Twitch, and their Bluesky profile will show they're live now."
Bridgy Fed
FediForum
Take It Down Act 2025 (USA)
Mike Macgirvin

The next monthly, long-form episode of FIR will drop on Monday, June 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel's FIR content is selected at Shel's Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville's blog and Shel's blog.
Disclaimer: The opinions expressed in this podcast are Shel's and Neville's and do not reflect the views of their employers and/or clients.

Raw Transcript

Shel Holtz (00:01) Hi everybody and welcome to episode number 466 of For Immediate Release. I'm Shel Holtz in Concord, California.
@nevillehobson (00:10) and I'm Neville Hobson in the UK.
Shel Holtz (00:13) And this is our monthly long form episode for May 2025. We have six reports to share with you. Five of them are directly related to the topic du jour of generative artificial intelligence. And we will get to those shortly. But first, Neville, why don't you tell us what we talked about in our short form midweek episodes, since, you know, my memory's failing and I don't remember.
@nevillehobson (00:44) Yeah, some interesting topics. We've had a handful of short form episodes, 20 minutes more or less, since the last monthly, which we published on 28th of April. And I'll start with that one, because that takes us forward. That was an interesting one with a number of topics. The headline topic was cheaters never prosper, we said, unless you pay for what you create. And that was related to a university student who was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. And it had mixed views all around, with people thinking, hey, this is cool and it's not a big deal if people cheat, and others who abhor it as an abhorrent idea. I'm in that camp. I think it's a dreadful idea that most people think it's not a bad thing. It is. Cheating is not good. That's my view. There were a lot of other topics too in that as well. A handful of others that were really, really good. How communicators can use seven categories of AI agents, and a few others worth a listen. That was 90 minutes, that one. That's kind of hitting the target goal we had for the long form content. If it's too long, hit the pause button and come back to it. Might apply to this episode too. So that was 462 at the end of April. That was followed on May the 7th by 463, which talked about delivering value with generative AI's endless right answers. This was a really quite intriguing one, quoting Google's first chief decision scientist, who said that one of the biggest challenges of the gen AI age is leaders defining value for their organization. And one of the considerations, she says, is a mindset shift in which there are endless right answers. So you create something that's right; you repeat the prompt, for images, for example, and get a different one, and it's also right. And so she posed a question: which one is right? It's an interesting conundrum type thing. But that was a good one. We had 16 minutes on that one. And
Shel Holtz (03:01) We had a comment on that one, too, from Dominique B., who said, sounds like it's time for a truthiness meter.
@nevillehobson (03:02) We have a comment? Yeah, we do. Okay, what are those?
Shel Holtz (03:13) Stephen Colbert fans here in the US would understand truthiness. It's a cultural reference.
@nevillehobson (03:18) Okay. Got it. Good. Noted. Then 464. This was truly interesting to me because it's basically saying that, as we've talked about and others constantly talk about, you should disclose when you're using AI in some way that illustrates your honesty and transparency. Unfortunately, research shows that the opposite is true.
That if you disclose that you've used AI to create an output, you're likely to find that your audiences will lose trust in you as soon as they see that you've disclosed this. That's counterintuitive. You'd think disclosing and being transparent on this is good. It doesn't play out according to the research. It's an interesting one. I think I'd err on the side of disclosure more than anything else. Maybe it depends on how you disclose. But it turns out that people trust AI more than they trust the humans using AI. We spent 17 and a half minutes on that one, Shel. That was a good one. You got a comment too, I think, have we not?
Shel Holtz (04:31) From Gail Gardner, who says, that isn't surprising given how inaccurate what AI generates is. If a brand discloses that they're using AI to write content, they need to elaborate on what steps they take to ensure the editor fact checks and improves it, which I think is a good point.
@nevillehobson (04:48) Wouldn't disagree with that. Then 465 on May the 21st, the Trust News video podcast PR trifecta. That's one of your headlines, Shel. I didn't write that one. So it talks about unrelated trends, or seemingly unrelated trends, painting a clear picture for PR pros accustomed to achieving their goals through press release distribution and media pitching. The trends are that people trust each other less than ever; people define what news is based on its impact on them, becoming their own gatekeepers; and video podcasts have become so popular that media outlets are including them in their upfronts. So we looked at finding a common thread in our discussion among these trends and setting out how communicators can adjust their efforts to make sure the news is received and believed. That was a lengthier one than usual, 26 minutes that one came in at, but as always there's great stuff to consume. So that brings us in fact to now, this episode 466, the monthly. So we're kicking off the wrap up of May and heading into a new month in about a week or so.
Shel Holtz (05:59) We also had an FIR interview dropped this month.
@nevillehobson (06:03) We did. Thank you for the gentle nudge on mentioning that. That was our good friend Eric Schwartzman, who wrote an intriguing post, or article, I should say, in Fast Company about bot farms and how they're invading social media to hijack popular sentiment. Lengthy piece, got a lot of reaction on LinkedIn, likes and so forth in the thousands, some hundreds of comments. So we were lucky to get him for a chat. It's a precursor to a book he's writing based on that article that looks at bot farms, which now outnumber real accounts on social networks, according to Eric's research, and how profits drive PR ethics, and why Meta, TikTok, X and even LinkedIn are complicit in enabling synthetic engagement at scale, says Eric. So lots to unpack in that. That was a 42 minute conversation with Eric. His new book, called Invasion of the Bot Farms, is one he's currently preparing for. He'll explore the escalating threat, he says, through insider stories and case studies. That was a good conversation with Eric, Shel. It's an intriguing topic, and he really has done a lot of research on this.
Shel Holtz (07:16) And we do have a comment on that interview from Alex Brownstein, who's an executive vice president at a bioethics and emerging sciences organization, who says, ChatGPT and certain other mainstream AIs are purportedly designed to seek out and prioritize credible, authoritative information to inform their answers, which may provide some degree of counterbalance. And also since the last monthly episode, there has been an episode of Circle of Fellows. This is the monthly discussion featuring usually four IABC fellows. That's the International Association of Business Communicators. I moderate most of these. I moderated this one. And it was about making the transition from being a communication professional to being a college or university professor teaching communication. And we had four panelists who have all made this move. Most of them have made it full-time and permanent. They are teachers and not working in communications anymore. One is still doing both. And they were John Clemons, Cindy Schmieg, Mark Schumann and Jennifer Wah. It was a great episode. It's up on the FIR Podcast Network now. The next Circle of Fellows is gonna be an interesting one. It is going to be done live. This is the very first time this will happen, episode 117. So we've done 116 of these as live streams, and this one will be live streamed, but it'll be live streamed from Vancouver, site of the 2025 IABC World Conference, and Circle of Fellows is going to be one of the sessions. So we're gonna have a table up on the platform with the five of the 2025 class of IABC fellows and me moderating. And in the audience, all the other fellows who are at the conference will be out there among those who are attending the session, and we'll have the conversation. Brad Whitworth will have a microphone. He'll be wandering through the audience to take questions. It'll be fun. It'll be interesting. It will be live streamed as our Circle of Fellows episode for June. So watch the FIR Podcast Network or LinkedIn for announcements about when to watch that episode. Should be fun.
@nevillehobson (09:54) Okay, that does sound interesting. Shel, what date is it taking place? Do you know?
Shel Holtz (10:00) It's going to be Tuesday, June 10th at 10:30 a.m. Pacific time. It's the last session before lunch. So even though IABC has only given us 45 minutes for what's usually an hour long discussion, we're going to take our hour. People can, you know, if they're really hungry, their blood sugar is dropping, they can leave. But we'll be there for the full hour for this Circle of Fellows.
@nevillehobson (10:27) I was just thinking, the last time I was in Vancouver was in 2006, and that was for the IABC conference in 2006. That's nearly 20 years ago. Where's time gone, for goodness sake?
Shel Holtz (10:37) I don't know. I've been looking for it. So as I mentioned, we have six great reports for you, and we will be back with those right after this.
@nevillehobson (10:40) No, that was good. At Google I/O last week, that's Google's developer conference, amongst many other things, the company unveiled a product called Veo 3, that's V-E-O, Veo 3, its most advanced AI video generation model yet. It's already sparking equal parts wonder and concern. Veo 3 isn't just about photorealistic visuals. It marks the end of what TechRadar calls the silent era of AI video by combining realistic visuals with synchronized audio: dialogue, soundtracks and ambient noise, all generated from a simple text prompt.
In short, it makes videos that feel real with few, if any, of the telltale glitches we've come to associate with synthetic media. ZDNet and others, included in a collection of links on Techmeme, describe Veo 3 as a breakthrough in marrying video with audio, simulating physics, lip syncing with uncanny accuracy, and opening creative doors for filmmakers and content creators alike. But that's only one side of the story. The realism Veo 3 achieves also raises alarms. Axios reports that many viewers can't tell Veo 3 clips from those made by human actors. In fact, synthetic content is becoming so indistinguishable that the line between real and fake is beginning to dissolve. Alarm is a point I made in a post on Bluesky earlier last week when I shared a series of amazing videos created by Alejandra Caraballo at the Harvard Law Cyberlaw Clinic, portraying TV news readers reading out a breaking news story she created just from a simple text prompt. What comes immediately to mind, I said, is the disinformation uses of such a tool. What on earth will you be able to trust now? One of Alejandra's comments in the long thread was, this is going to be used to manipulate people on a massive scale. Others in that thread noted how easily such clips can be repeated and recontextualized with no visual watermark to distinguish them from real broadcast footage. I mean, one thing is for sure, Shel, if you've watched any of these, they're now peppered all over LinkedIn and Bluesky and most social networks. You truly are going to have your jaw dropping when you see some of these things. It's not hard to visualize just hearing an audio description, but they truly are quite extraordinary. This is a whole new level. There's also the question of cost and access. Veo 3 is priced at around $1,800 per hour for professional grade use, suggesting a divide between those who can afford powerful generative tools and those who can't. So we're not just talking about a creative leap. We're staring at an ethical and societal challenge too. Is Veo 3 one of the most consequential technologies Google has released in years, not just for creators, but for good and bad actors and society at large? How do you see it, Shel?
Shel Holtz (14:00) First of all, it's phenomenal technology. I've seen several of the videos that have been shared. I saw one where the prompt asked it to create a TV commercial for a ridiculous breakfast cereal product. It was Otter Crunch or something like that. And it had a kid eating Otter Crunch at the table and the mom holding the box and saying Otter Crunch is great, or whatever it was that she said. And you couldn't tell that this wasn't shot in a studio. It was that good. Alarm? I'm surprised that there is alarm, because we have known for years that this was coming. And I don't think it should be a surprise that it has arrived at this point, given the quality of the video services that we have seen from other providers. This is a game of leapfrog, so you know that one of the other video providers is going to take what Google has done and take it to the next level, maybe allowing you to make longer videos, or there will be some bells and whistles that they'll be able to add, and the prices will drop. This is a preliminary price. It's a brand new thing. We see this with OpenAI all the time, where the first time they release something, you have to be in that $200 a month tier of customer in order to use it.
But then within a couple of months, it's available at the $20 a month level or at the free level. So this is going to become widely available from multiple services. I think we need to look at the benefits this provides as well as the risk that it provides. This is going to make it easy for people who don't have big budgets to do the kind of video that gets the kind of attention that leads to sales, or whatever it is your communication objective was. For enhancing videos that you are producing with actual footage in order to create openers or bridges or just to extend the scene, it's going to be terrific. Even at $1,800 an hour, there are a lot of people who can't get high quality video for $1,800 an hour. So this is going to be a boon to a lot of creators. In terms of the risk, again, I think it's education, it's knowing what to look for, getting the word out to people about the kinds of scams that people are running with this so that they're on their guard. It's going to be the same scams that we've seen with less superior technology. It's going to be, you know, the grandmother con, right? Where you get the call and it sounds like it's your grandson's voice. I've been kidnapped. They're demanding this much money. Please send it. Sure sounds like him. So grandma sends the money. So this is the kind of education that has to get out there, because it's just gonna get more realistic and easier to con people with the cons that frankly have been working well enough to keep them going up until now.
@nevillehobson (17:38) Yeah, I think there is real cause for major alarm at a tool like this. You just set out many of the reasons why, but I think the risk mostly comes more from, or rather less from, examples like the grandmother call saying, you know, someone calling the grandmother, I've been kidnapped. I don't know anyone that's ever happened to, not saying it doesn't, but that doesn't seem to me to be like a major daily thing. It might be more prosaic, more fundamental than that. But some of the examples you can see, and the good one to mention is the one from Alejandra Caraballo, the videos she created, which were a collection of clips with the same prompt. They were all TV anchors, presenters on television, talking about breaking news that J.K. Rowling had drowned because a yacht sank after it was attacked by orcas in the Mediterranean off the coast of Turkey. What jumped out at me when I saw the first one was, my God, this was so real. It looked like it was a TV studio, all created from that simple prompt. But then came three more versions, all with different accented English, American English, US English, English as a second language for one of the presenters, that illustrate, from that one prompt, what you could do. And she said that the first video took literally a couple of seconds. And within less than 10 minutes, after tweaking a couple of things after a number of attempts, she had a collection of five videos. So imagine that. There are benefits, unquestionably. And indeed, some of the links we've got really go through some significant detail of the benefits of this to creators. But right on the back of that comes this big alarm bell ring. This is what the downside looks like. And I think your point about it's going to come down, competitors will emerge, undoubtedly, I totally agree with you. But that isn't yet. In the meantime, this thing's got serious first mover advantage, and the talk up that I'm seeing across the tech landscape mostly, it hasn't yet hit mainstream talk.
I’m not sure how you kind of explain it in a way that excites people unless you see the videos. But This is big alarm bell territory, in my opinion, and I think it’ll accelerate a number of things, one of which is more calls to regulate and control this if you can. you know, who knows what Trump’s going to do about this? Probably embrace it, I would imagine. I mean, you’ve seen what he’s doing already with the video and stuff that promotes him in his his emperor’s clothes and all this stuff. So this is, a major ⁓ milestone, I think, in the development of these technologies. it will be interesting to see who else comes out in a way that challenges Google. But if you read Google’s very technically focused description, this is not a casual development by six guys with a couple of computers. This is required, I would imagine, serious money and significant quantum computing power to get it to this stage in a way that enables anyone with a reasonably powered computer to use it and create something. ⁓ got that that aspect to consider should we be doing something like this that generates huge or rather uses huge amounts of electricity and energy and all the carbon emissions we got that side of the debate that’s beginning to come out a little bit. So it’s experimental time without doubt. And there are some terrific learnings we can get from this. mean, I’d love to give it a go myself, but not at 1800 bucks. So if I had someone to do it for that was I could charge them for that I’d be happy. ⁓ But I’m observing what others are doing and hearing what people are saying. And it’s picking up pace. Every time I look online, there’s something new about this. Someone else has done something and they’re sharing it. So great examples to see. So yes, let’s take a look at what the benefits are and let’s see what enterprises will make of this and what we can learn from it. But I’m keeping a close eye on what others are saying about the risks because ⁓ we haven’t, you talk about the education, all that stuff. No one seems to have paid any attention to any of that over the years. So why are going to pay attention to this now if we try and educate them? Shel Holtz (22:06) Well, that really depends on how you go about this. Who’s delivering the message? I mean, where I work, we communicate cybersecurity risk all the time. And we make the point this isn’t only a risk to our company. This is a risk to you and your family. You need to take these messages home and share them with your with your kids. And every time something new comes out, where there’s a new scam, where we are aware @nevillehobson (22:10) It does. ⁓ show. Shel Holtz (22:34) And we usually hear about this through our IT security folks, but where we are aware that in our industry, somebody was scammed effectively with something that was new. We get that out to everybody. We use multiple channels and we get from people who are grateful for us telling them this. So it’s not that people won’t listen. You just have to get them in a way that resonates with them. And you have to use multiple channels and you have to be repetitive with this stuff. You have to kind of drill it into their heads. see organizations spending money on PSAs on TV alerting people to these scams. They’re all imposter scams is what it comes down to. It’s pretending to be something that they aren’t. know, what troubles me about this I think is that we are talking a lot about erosion of trust. 
We talked about it on the last midweek episode, the fact that people trust each other less than they ever have. Only 34% of people say they trust other people, that other people are trustworthy. And we're trying to rebuild trust at the same time we're telling people, you can't trust what you see. You can't trust your own eyes anymore. So this is a challenging time.
@nevillehobson (23:54) Right.
Shel Holtz (24:00) without any question, when you have to deal with both of these things at the same time. We need to build trust at the same time we're telling people you can't trust anything.
@nevillehobson (24:02) It is. Well, that is the challenge. You're absolutely right, because people don't actually need organizations to tell them that. They can see it with their own eyes, but it's then reinforced by what they're hearing from governments. We've got an issue that I think is very germane to bring into this conversation, something in this country that is truly extraordinary. One of the biggest retailers here, Marks & Spencer, was the subject of a huge cyber attack a month ago, and it's still not solved. Their websites, you still can't do any buying online. You can't do click and collect, none of those things. Today, they announced you can now, again, log on to the website and browse. You can't buy anything. You can't pay electronically. You can only do it in the stores. And no one seems to know precisely what exactly it is. There's so much speculation, so much talk, of which most is uninformed, which is fueling the worry and alarm about this. And the consequences for Marks & Spencer are potentially severe from a reputational point of view and brand trust, all those things. They haven't solved this yet. That, people are saying, was likely caused by an insecure login by someone who is a supplier of Marks & Spencer. But this is not like a little store down the road. This is a massive enterprise that has global operations. And the estimates at the moment are that the cost to them is likely to be around 300 million pounds. It's serious money. They're losing a million pounds a day. It's serious. Oh, they won't disclose it. It's illegal to do that here in the UK, to pay the ransom, if you disclose it. Government advice from the cyber security folks is don't pay the ransom. The difficult thing to me is that you follow that advice and they're still not solving the problem.
Shel Holtz (25:45) And what was the ransom?
@nevillehobson (26:03) The point I'm making is that this is just another example of forged trust, if I could say it that way, that it was likely, until information arrives telling exactly what it was, that someone persuaded someone to do something, who they thought was someone else that they weren't, that enabled that person to get access. Right. So this is going to be like that for some of the examples we've seen. But I think it's likely as well to be...
Shel Holtz (26:23) Yeah, sure. It was phishing.
@nevillehobson (26:33) kind of normal that you would almost find impossible to even imagine that it was a fake. So what's going to happen when the JK Rowling example, like someone in a prominent position in society or whatever, is suddenly on a website somewhere that gets picked up and repeated everywhere before it's, well, wait a minute, what's the source of this? But it's too late by then. And that's likely what we're going to see.
Shel Holtz (26:58) We reported on a story like this many years ago. It was, if I remember correctly, a bank robbery in Texas.
It was a story that got picked up by multiple news outlets. It was completely fake. The first outlet that picked it up just assumed that it was accurate because of their source, and all the other newspapers picked it up because they assumed that the first newspaper that picked it up had checked their facts, but it was a false story. This is nothing new. It's just with this level of realistic video, it's going to be that much easier to convince people that this is real and either share it or act on it.
@nevillehobson (27:40) As it will. And it won't be waiting on the media to pick up and report on it. That's too slow. It'll be TikTokers, it'll be YouTube. It's anyone with a website that has some kind of audience that's connected, and it'll be amplified big time like that. So it'll be out of control probably within seconds of the first video appearing. That's not to say, oh dear, you know, this is... so what do we do? We've got to accept that that is the landscape now. And I honestly and truly can't imagine how an example like a JK Rowling death at sea and all that stuff is on multiple TV screens, supposedly TV studios, that you don't think when you're watching, hang on, is this legit, this TV show? It might occur to you, but the other nine people out there watching along with you aren't gonna ask themselves that; they're gonna share it. And suddenly it's out there. And before you know it... I don't know. If it's, say, the CEO of a big company, and that's happened at a time of some kind of merger or takeover going on, and then that person suddenly dropped dead, that's the kind of thing I'm thinking about. So I can see the real need to have some kind of, I can't even call it, Shel, regulation, I'm not sure, I don't know, by government or someone, alongside. You can't just leave this to individual companies like yours who are doing a good job. Well, there are 50 others out there who aren't doing this at all. So you can't let it sit like that. Because the scale of this is breathtaking, frankly, what's going to happen. And I think Alejandra Caraballo and others I've seen are saying the same thing, that, you know, this is going to be a tool used to manipulate people on a massive scale. We're not talking about business employees necessarily; the public at large, this is going to manipulate people. And we're already seeing that at small scale, based on the tech we have now. This tech takes it up notches, in my view. You know, 1800 bucks, people are going to do this; to them, it's like, you know, petty cash almost. Or someone's going to come out with something, again, that isn't going to be that, and it's on a dark web somewhere, and you know. So I mean, I'm now getting into areas that I have no idea what I'm going to be talking about. So I will stop that now. I don't know how that's going to work. This requires attention, in my opinion, to protect people and organizations from the bad actors, that euphemistic phrase, who are intent on causing disruption and chaos. And this is potentially what this will achieve alongside all that good stuff.
Shel Holtz (30:19) It'll be interesting to hear what Google plans to do to prevent people from using it for those purposes. I have access to…
@nevillehobson (30:26) They have a bit of an FAQ, which talks a little bit about that. Hey, this is like draft still, I would say.
Shel Holtz (30:33) I have access to Veo 2 on my $20 a month Gemini account, so I'll just wait the six weeks until Veo 3 is available there.
@nevillehobson (30:44) Well, things may have moved on to who knows what in six weeks, I would say. But nevertheless, this is an intriguing development technologically, and what it lets people do in a good sense is the exciting part. The worrying part is what the bad guys are going to be doing.
Shel Holtz (31:03) to say. So I need to make a time code note.
@nevillehobson (31:04) Yeah.
Shel Holtz (31:18) The fact that generative AI chatbots hallucinate isn't a revelation, at least it shouldn't be at this point, and yet AI hallucinations are causing real, consequential damage to organizations and individuals alike, including a lot of people who should know better. And contrary to logic and common sense, it's actually getting worse. Just this past week, we've seen two high-profile cases that illustrate the problem. First, the Chicago Sun-Times published what they called a summer reading list for 2025 that recommended 15 books. Ten of them didn't exist. They were entirely fabricated by AI, complete with compelling descriptions of Isabel Allende's non-existent climate fiction novel Tidewater Dreams and Andy Weir's imaginary thriller The Last Algorithm. The newspaper's response? Well, they blamed a freelancer from King Features, which is a company that syndicates content to newspapers across the country. It's owned by Hearst. That freelancer used AI to generate the list without fact checking it. And the Sun-Times published it believing King Features content was accurate. And other publications shared it because the Chicago Sun-Times had done it. Then there's the even more embarrassing case of Anthropic. That's the company behind the Claude AI chatbot, one of the really big international large language models, frontier models. Their own lawyers had to apologize to a federal judge after Claude hallucinated a legal citation in a court filing. The AI generated a fake title and fake authors for what should have been a real academic paper. Their manual citation checks missed it entirely. Think about that for a moment. A company that makes AI couldn't catch its own tool's mistakes, even with human review. Now, here's what's particularly concerning for those of us in communications. This isn't getting better with newer AI models. According to research from Vectara, even the most accurate AI models still hallucinate at least 0.7% of the time, with some models producing false information in nearly one of every three responses. MIT research from January found that when AI models hallucinate, they actually use more confident language than when they're producing accurate information. They're 34% more likely to use phrases like definitely, certainly, and without doubt when they're completely wrong. So what does this mean for PR and communications professionals? Three critical things. First, we need to fundamentally rethink our relationship with AI tools. The Chicago Sun-Times incident happened just two months after the paper laid off 20% of its staff. Organizations under financial pressure are increasingly turning to AI to fill gaps, but without proper oversight, they're creating massive reputation risks. When your summer reading list becomes a national embarrassment because you trusted AI without verification, you've got a crisis communication problem on your hands.
Shel Holtz (34:28) Second, the trust issue goes deeper than individual mistakes.
As we mentioned in a recent midweek episode, research shows that audiences lose trust as soon as they see AI disclosure labels, but finding out you used AI without disclosing it is even worse for trust. This creates what researchers call the transparency dilemma: damned if you disclose, damned if you don't. For communicators who rely on credibility and trust, this is a fundamental challenge we haven't come to terms with. Third, we're seeing AI hallucinations spread into high-stakes environments where the consequences are severe. Beyond the legal filing errors we've seen multiple times now, from Anthropic to the Israeli prosecutors who cited non-existent laws, we're seeing healthcare AI that hallucinates medical information 2.3% of the time, and legal AI tools that produce incorrect information in at least some percentage of cases that could affect real legal outcomes. The bottom line for communication professionals is that AI can be a powerful tool, but it is not a replacement for human judgment and verification. I know we say this over and over and over again, and yet look at the number of companies that use it that way. The industry has invested $12.8 billion specifically to solve hallucination problems in the last three years, yet we're still seeing high profile failures from major organizations who should know better. My recommendation: if you're using AI in your communications work, and let's be honest, most of us are, insist on rigorous verification processes. Don't just spot check; verify every factual claim, every citation, every piece of information that could damage your organization's credibility if it's wrong. And remember, the more confident AI sounds, the more suspicious you should be. The Chicago Sun-Times called their incident a learning moment for all of journalism. I'd argue it's a learning moment for all of us in communications. We can't afford to let AI hallucinations become someone else's crisis communications case study.
@nevillehobson (36:37) Until the next one. Right. I mean, listening to what you say, you're absolutely right. Yet the humans are the problem. Arguably, and I've heard this, they're not; it's that the technology is not up to scratch. Fine. Right. In that case, you know that. So therefore, you've got to pay very close attention and do all the things that you outlined before that people are not doing. So this one is extraordinary.
Shel Holtz (36:39) And it becomes a case study. The humans are the solution.
@nevillehobson (37:05) Snopes has a good analysis of it, talking about this. King Features, I mean, their communication about it: they said the company has a strict policy with our staff, cartoonists, columnists, and freelance writers against the use of AI to create content. And they said it will be ending its relationship with the guy who did this. Okay, throw him under the bus, basically. So you don't have guidance in place properly, even though you say you have a strict policy; that's not the same thing, is it? So I think this was inevitable, and we're going to see it again, sure we will, and the consequences will be dire. I was reading a story this morning here in the UK of a lawyer who was an intern. That's not her title, but she was a junior person. She entered into evidence some research she'd done without checking, and it was all fake, done by the AI. And the case turns out, and again, this is precisely the concern, not the tech: it's not her fault. She didn't have proper supervision. She was pressured by people who didn't help because she didn't know enough.
And so she didn’t know how to do something. And she was under tight parameters to complete this thing. So she did it. No one checked her work at all. So she apologized and all that stuff. And yes, the judge, from what I read, isn’t isn’t penalizing her. It’s her boss. He should be penalizing. You’re going to see that repeated, I’m sure already exists in case up and down businesses, organizations everywhere, where that is not an unusual setup structure, lack of , lack of training, ⁓ lack of encouragement, indeed, the whole bring it out, let’s get the policy set up guidance and not just publish it on the internet. We bring it to people’s attention. We embrace them. We encourage them. We bring them on board to conversations constantly, brown bag lunches, all the informal ways of doing this too. And I’m certain that happens a lot. But this example and others we could bring up and mention show that it’s not in those particular organizations. So the time will come, I don’t believe it’s happened yet, ⁓ where the most monumentally catastrophic clanger will be dropped sooner or later in an organization, whether it’s a government. whether it’s a private company, whether it’s a medical system or whatever, that could have life or death consequences for people. Don’t believe that’s happened yet that we know of anyway, but the time is coming where it’s going to, I’d say. Shel Holtz (39:36) it will, it undoubtedly will. And you’ll see medical decisions get made based on a hallucination that somebody didn’t check. What strikes me though is that we talk about AI as an adjunct, right? It is an enhancement to what humans do. It allows you to offload a lot of the drudgery so that you can focus your time on more. human-centric and more strategic endeavors, which is great, but you still have to make sure that the drudge work is done right. I mean, that work is being done for a reason. It may be drudgery to produce it, but it must have some value or the organization wouldn’t want it anymore. So it’s important to check those. And in organizations that are cutting head count, @nevillehobson (40:06) Ahem. Shel Holtz (40:29) You know, what a lot of employees are doing is using AI in order to be able to get all their work done. That drudge work, having the AI do that and spend 15 minutes on it instead of three hours. It’s not like those three hours are available to them to fact check. They’ve got other things that they need to do. Organizations that are cutting staff need to be cognizant of the fact that they may be cutting the ability to fact check the output of the AI. which could do something egregious enough to cost them a whole lot more than they saved by cutting that staff. And by the way, I saw research very recently, I almost added it as a report in today’s episode that found that investors are not thrilled with all the layoffs that they’re seeing in favor of AI. They think it’s a bad idea. So if you’re looking for a way to… get your leaders to temper their inclinations to trim their staff. You may want to point to the fact that they may lose investors over decisions like that, but we need the people to fact check these things. And by the way, I have found an interesting way to fact check and it is not an exclusive approach to this. But let me give you just this quick example. On our intranet every week, I share a construction term of the week that not every employee may know. And I have the description of that term written by one of the large language models. I don’t know what these things mean. 
I’m not a construction engineer. So I get it written, and then the first thing I do is I copy it, and then I go to another one of the large language models and paste it in, and I say, review this for accuracy and give me a list of what you would change to make it more accurate. And most of the time it says, this is a really accurate write-up that you’ve got of this term. I would recommend to enhance the accuracy that you add these things. So I’ll say, ahead and do that, write it up and make those things. Then I’ll go to a third large language model and ask the same question. I’ll still go do a Google search and find something that describes all of this to make sure I’ve got it right. But I find playing the large language models against each other as accuracy checks works pretty well. @nevillehobson (42:56) Yeah, I do a similar thing to not for everything. mean, like everyone who’s got the time to do all that all the time, but depends, I think, on what you’re doing. But ⁓ it is something that we need to we need to pay attention to. And in fact, this is quite a good segue to our next piece, our next story, where artificial intelligence plays a big role. this one ⁓ talks about ⁓ outlander really a new report from the Global Alliance of Public Relation and Communication Management that is, it offers a timely and global perspective on how our profession is adapting and in many cases struggling to keep pace as artificial intelligence continues its rapid integration into our daily work. As AI tools become embedded in the workflows of communication professionals around the world, a new survey from the Global Alliance offers a revealing snapshot of where our profession currently stands and where it may be falling short. The report titled Reimagining Tomorrow, AI and PR and Communication Management draws on insights from nearly 500 PR and communication professionals. The findings paint a picture of a profession that’s enthusiastically embracing AI tools, particularly for content creation, but falling short when it comes to strategic leadership, ethical governance, and stakeholder communication. While adoption is high, 91 % of respondents say they’re using AI. The report highlights a striking absence of strategic leadership. Only 8.2 % of PR and communication teams are leading in AI governance or strategy, according to the report. Yet professionals rank governance and ethics as their top AI priorities at 33 % and 27 % respectively. Despite this, PR teams are mostly engaged in tactical tasks. such as content creation and tool . This gap between strategic intent and practical involvement is critical. If PR professionals don’t position themselves as stewards of responsible AI use, other functions like IT or legal will define the narrative. This has implications not only for reputation management, but for organizational relevance in the comms function. Now, in a post on his blog last week, our friend Stuart Bruce describes the findings as alarming, arguing that communicators are failing to lead on the very issues that matter most, ethics, transparency, stakeholder trust, and reputation. His critique is clear. If PR doesn’t step up to define the response of the use of AI, we risk becoming sidelined in decisions that affect not just our teams, but the wider organization and society. The Global Alliances report also shows that while AI is mostly being used for content creation, Very few are leveraging its potential for audience insights, crisis response, or strategic decision making. 
Many PR pros still don’t fully understand what AI can actually do, Stuart, either tactically or strategically. Worse, some are operating under common myths, such as avoiding any use of AI with private data, regardless of whether they’re using secure enterprise tools or not. So where does this leave us? Well, it looks to me like somewhere between a promise and a missed opportunity. How would you say it, Joe? Shel Holtz (46:21) it is a missed opportunity so far as far as I am concerned. And I have seen research that basically breaks through the communications boundary into the larger world of business that says, yes, there’s great stuff going on in organizations in of the adoption of AI, but there is not really strategic leadership happening in most organizations. Employees are using it. There are a growing number of policies, although most organizations still don’t have policies. Most organizations still don’t have ethics guidelines, although a growing number do. There are companies like mine that have AI committees, but the leadership needs to come from the very top down. And that’s what this research found isn’t happening. I was just scrolling through my bookmarks trying to find it. I’ll definitely turn that up before the… show notes get published, if it’s not happening at the leadership levels of organizations, it’s not happening at the leadership levels of communication, I certainly can see that in the real world as I talk to people. It’s being used at a very tactical level, but nobody is really looking at the whole overall operation of communication in the organization, the role that it plays and how it goes about doing that. through that lens of AI and how we need to adapt and change and how we need to prepare ourselves to continue to adapt and change as things like VO3 are released on the market and suddenly you’re facing a potential new reputational threat. @nevillehobson (48:07) Lots to unpack there. It’s worth reading the report. It’s well worth the time. Shel Holtz (48:12) Hey, Dan, thank you for that great report. Yeah, I had to wipe a tear away as well over the ing of Skype. You’re right. It was amazing as the only tool that allowed you to do what it could do. And as we have mentioned here more than once in the past, it is the only reason that we were able to start this podcast in the first place without Skype. You were in Amsterdam at the time. And for you and I to be able to talk together and record both sides of our conversation, Skype was the reason that we could do that. The only other option would have been what at the time was an expensive long distance phone call with really terrible audio. Who knew the double ender back in those days? We could have done it. You realize we could have both recorded our own ends. It would have taken forever to send those files. @nevillehobson (49:02) Yeah. Shel Holtz (49:09) back then because the speeds were. @nevillehobson (49:11) It would have been quicker burning them to a CD and sending it by courier, I would say. Shel Holtz (49:15) Yeah, no kidding. So bless Skype for enabling not just us, but pretty much any podcasters who were doing interviews or co-host arrangements. Skype made it possible, but Skype also enabled a lot of global business. There were a lot of meetings that didn’t have to happen in person. I mean, you look at Zoom today, Zoom is standing on the shoulders of Skype. @nevillehobson (49:39) Yeah, it actually did enable a lot. You’re absolutely right. 
I can to you this, of course, back in those days when both of us I think we were both of us were independent consultants. So, you know, pitching for business securing s and following up and all that was key. We had what what Skype called Skype out numbers that were regular phone numbers that people could use like a landline and that we get forwarded through to Skype by wife’s family in Costa Rica, she used Skype to make calls all the time that replaced sending faxes, which is how they used to communicate because that was cheaper than international phone calls at that time. ⁓ lots happened in that time. But in reality, it’s only 20 years ago. It sounds a lot. But all this has happened in a 20 year period. And Skype ⁓ was the catalyst for much of this. They laid the foundation for teams that we see now, Zoom, Google Meet, all those services that we can use. So what happened to WebEx and the like? It seems to have largely vanished, what I can see. So we’re used to all this stuff now. But it was great starter for us. And Dan mentions. Shel Holtz (50:55) Yeah, I had a Skype out. My Skype out number I got, it was my business number and I got a 415 area code because that’s San Francisco and nobody knew the 510 area code in the East Bay outside of the Bay Area. So it provided just that little extra bit of cache. Oh, a San Francisco number. I mean, there was just so much good that came out of Skype. They kept coming up with great features and great tools even after Microsoft bought it. @nevillehobson (51:17) Yeah. They did. Yeah. And the price, the pricing structure was good. At that time I had, I had business in on the East coast in the U S and I had a New York number. So, uh, yeah, it was, was super, but, so good to, to have a reminisce there with Dan. That was great. Um, I was intrigued by your element about Bridgie Fed, which, uh, I’ve been trying to use that since it emerged. Shel Holtz (51:25) So. That’s great. @nevillehobson (51:53) with Blue Sky, but also with Ghost, which has enabled a lot of this connectivity with other servers in the Fediverse. And so I’ve kind of got it all set up. But no matter what I do, it just does not connect. And I haven’t figured out why not yet. So you’ve prompted me to get this sorted out, because it’s important. I’ve got my social web address, and it was enabled by Ghost, that works on Mastodon. and it enables Blue Sky to connect with Mastodon 2. It’s really quite cool, but Bridgifed’s key to much of that functionality. maybe it’s just me. I haven’t figured it out yet. There could be. So this is definitely not yet in the mainstream readiness arena quite yet, but this is the direction of travel without any doubt. And I think it’s great that we eliminate these, you know, activity pub versus AT protocol. It just works. No one gives a damn about whether you’re on a different protocol or not. That’s where we’re aiming for. And that’s what is actually we’re moving towards quite quickly. Not for me, though, until I get this work. Shel Holtz (53:04) One protocol will win over another at one point or another. It always does. @nevillehobson (53:07) It’s like, yeah, Betamax and VHS, you know, look at that. Shel Holtz (53:12) Yep. And that’s the power of marketing because Betamax was the higher quality format. Well, let’s explore a fascinating and entirely predictable phenomenon that’s emerging in the corporate world. Companies that enthusiastically laid off workers to replace them with AI are now quietly hiring humans back. @nevillehobson (53:16) Yes, right, right. 
Shel Holtz (53:35) This item ticks a lot of boxes, man. Organizational communication, brand trust, crisis management. Let’s start with the poster child for this phenomenon. Klarna, the buy now pay later company. CEO Sebastian Simitowski became something of an AI evangelist, loudly declaring that his company had essentially stopped hiring a year ago, shrinking from 4,500 to 3,500 employees through what he called natural attrition. He bragged that AI could already do all the jobs that humans do and even created an AI deep fake of himself to report quarterly earnings, supposedly proving that even CEOs can be replaced. How’d that work out for him? Just last week, Semitkowski announced that Klarna is now hiring human customer service agents again. Why? Because as he put it, from a brand perspective, a company perspective, I just think it’s so critical. that you are clear to your customer that there will always be a human if you want. The very CEO who said AI could replace everyone is now itting that human connection is essential for brand trust. It isn’t an isolated case. We’re seeing this pattern repeat across industries, and it should serve as a wake-up call for communications professionals about the risk of overly aggressive AI adoption without considering the human element. Take Duolingo, which had been facing an absolute firestorm of social media after CEO Louis Vuitton announced that the company was going AI first. The backlash was so severe that Duolingo deleted all of its TikTok and Instagram posts, wiping out years of carefully crafted content from s with millions of followers. The company’s own social media team then posted a cryptic video. They were all wearing those anonymous style masks saying Duolingo was never funny. We were. And what a stunning example of how your employees can become your biggest communication crisis when AI policies directly threaten their livelihoods. All this is particularly troubling from a communication perspective. These companies didn’t just lose employees, they lost institutional knowledge, creativity, and human insight that made their brands distinctive in the first place. A former Duolingo contractor told one journalist that the AI-generated content is very boring. while Duolingo was always known for being fun and quirky. When you replace the humans who created your brand voice with AI, you risk losing the very thing that made your brand memorable. But here’s the broader pattern we need to understand. According to new research, just one in four AI investments actually deliver the ROI they promise. Meanwhile, companies are spending an average of $14,200 per employee per year just to catch and correct AI mistakes. Knowledge workers are spending over four hours a week ing AI output. These aren’t the efficiency gains that were promised. Now, I firmly believe those are still coming, those gains, and in a lot of cases, they’re actually here now. Some organizations are realizing them as we speak, but we’re not out of the woods yet. From a crisis communication standpoint, the AI layoff rehire cycle creates multiple reputation risks. There’s the immediate backlash when you announce AI replacements. We saw this with Klarna and Duolingo and others. Employees and customers both react negatively to the idea that human workers are disposable. Then there’s the credibility hit when you quietly reverse course and start hiring people again. It signals that your AI strategy wasn’t as well thought out as you claimed. 
And that sort of trickles over into how much people trust your judgment and other things that you’re making decisions about. For those of us working in communication, this trend highlights some critical lessons. Stakeholder communication about AI needs to be honest about limitations, not just potential and benefits. Companies that over promise on AI capability set themselves up for embarrassing reversals. Klarna CEO went from saying AI could do all human jobs to itting that customer advice, customer service quality suffered without human oversight. Second, employee communications around AI adoption require extreme care. When you announce AI first policies, you’re essentially telling your workforce they’re expendable. The Duolingo social media team’s rebellion shows what happens when you lose internal buy-in. Your employees become your critics, not your champions. And brand voice and customer experience are fundamentally human elements that can’t be easily automated. Companies struggling most are those that tried to replace creative and customer facing roles with AI. Meanwhile, companies succeeding with AI are using it to augment human capabilities, not replace them entirely. The irony here is pretty rich. At a time when trust in institutions is at historic lows, companies are discovering that human connection and authenticity matter more than ever. You can’t automate your way to trust. So. What should communication professionals take away from this ⁓ AI layoff rehire cycle? Be deeply skeptical of any AI strategy that eliminates human oversight in customer facing roles. Push back on claims that AI can fully replace creative or strategic communications work. And that when AI initiatives go wrong, it becomes a communications problem that requires very human skills to solve. The companies getting all this right are the ones that view it as a tool to enhance human capabilities, not replace them. The ones getting it wrong are learning an expensive lesson about the irreplaceable value of human judgment, creativity, and connection. @nevillehobson (59:32) Yeah, it got me thinking about ⁓ the ⁓ human bit that doesn’t get this, which typically a leader is an organization, but actually not necessarily at the highest level. I’m thinking in particular of companies, I’ve had a need to go through this process recently, who replace people at the end of a phone line in customer . with a chat bot typically as the first line of defense. And I use that phrase deliberately. It defends them from having to talk to a customer where they have a chat bot where it guides you through carefully controlled scripted scenarios that it does have a little bit of leeway in its intelligence to respond on the fly to a question that’s not in the script, as it were, but only marginally. And so you still have to go through a system that is poor at best and downright dangerous at worst in of trust with customers. your point, I agree totally, kind of fosters a climate of mistrust entirely when you can’t get to human and all you get is a chat bot and sometimes a chat bot that can actually engage in conversation. There are some good ones around. But my experience recently with an insurance company to an accident, car accident I had in December, a guy drove into my car, repaired, and I’m chasing the other party to reclaim my excess. And boy, that’s an education in how not to implement something that engages with people. So, but I don’t see any sign of that changing anytime soon. 
So one thing I take from this show, everything you said, indeed what we discussed in this whole episode so far in this context, is that it’s a people issue, not a tech issue, completely, in terms of how these tools are deployed in organizations. The CEO at Klarna, and I was reading about the CEO of Zoom who deployed an avatar to open his speech at an event recently. I just wonder, what were they thinking to do all these things? Now, you mentioned investors. So it comes back to people. I think the idea of replacing all these expensive humans with AIs is surely as tempting as you can imagine to some organizations. We’ve talked recently, maybe it was late last year, about how part of the future is this deployment. Indeed, recently we talked about how you’re going to have AIs on your team, a mix of, kind of a hybrid in the new sense of the word, of people and an AI as part of a team. And how is that going to work? And are the AIs going to take over? So you’ve got to have a strategy. Go back to the Global Alliance report, where the lack of a strategy or an approach, a strategic approach if you will, is one of the biggest failings in what’s going on in organizations, not by communicators necessarily, but by the organization as a whole. So it is a time, and we’ve said this a lot, you know, when communicators can really step up to the plate and take on the role of educating their organization: this is how we need to be doing this. Often it is the case they want to do that and they would like to do that and they propose all the reasons why they should, but they’re shut out by others in the organization. So how do you get around that? You can’t, basically. So this is people we’re talking about in the
Circle of Fellows #116: Molding Young Communicators — Teaching as a Communication Career Path
One of many career paths in the field of professional communication leads to colleges and universities: It is not uncommon for communication practitioners to move from the conference room to the classroom, where they help mold the next generation of communicators. All of the panelists participating in episode 116 of “Circle of Fellows” have chosen that path and will discuss the various dimensions of teaching — including making the transition from the business world to the hallowed halls of academia. The session was recorded on Thursday, May 22, 2025, with John Clemons, Cindy Schmieg, Mark Schumann, and Jennifer Wah. Shel Holtz moderated. About the panelists: John G. Clemons, ABC, APR, IABC Fellow, an independent communications consultant based in North Carolina, has held senior executive and consultant roles over the course of his career in corporate and organizational communications. He has special expertise in providing strategic counsel and support for top executives and corporate offices of Fortune 500 companies. John has served as chair of IABC and holds accreditations from both IABC and PRSA. John has worked with Walmart, Raytheon, and Marriott (among others). John has been an adjunct instructor for six years at the University of North Carolina Charlotte and Loyola University in New Orleans. Cindy Schmieg is an award-winning strategic communicator. Her 30+ years of corporate, agency, and consulting experience focuses on making the communications function strategic within an organization. Cindy now teaches online in the Communications Master’s Degree program at Southern New Hampshire University. She has served in many IABC leadership roles and is today a member of the IABC Audit/Risk Committee and Pacific Plains Region Silver Quill Award Committee, as well as assisting on the IABC Minnesota Annual Convergence Summit. Mark Schumann, PCC, ABC, IABC Fellow, is a certified executive coach who teaches in the NYU Master’s program in executive coaching and organizational consulting. He is the co-author of Brand from the Inside and Brand for Talent. Mark has served as VP Culture for Sabre, Director of Graduate Communication Studies at the Zicklin School of Business at Baruch College in New York City, and as a managing principal and global communication practice leader at Towers Perrin. He was IABC’s chair in 2009-2010 and won 17 Gold Quill awards. Jennifer Wah, MC, ABC, has worked with clients to deliver ideas, plans, words and results since she founded her storytelling and communications firm, Forwords Communication Inc., in 1997. With more than two dozen awards for strategic communications, writing and consulting, Jennifer is recognized as a storyteller and strategist. She has worked in industries from healthcare and academia to financial services and the resource sector, and is passionate about the strategic use of storytelling to support business outcomes. Although she has delivered workshops and training throughout her career, Jennifer formally added teaching to her experience in 2013, first with Royal Roads University and more recently as an adjunct professor of business communications with the UBC Sauder School of Business, where she now works part-time to imprint crucial communication skills on the next generation of business leaders. When she is not working, Jennifer spends her time cooking, walking her dog Orion, or talking food, hockey, or music with her husband and two young adult children in North Vancouver, Canada. 
The post Circle of Fellows #116: Molding Young Communicators — Teaching as a Communication Career Path appeared first on FIR Podcast Network.
FIR #465: The Trust-News-Video Podcast PR Trifecta
Seemingly unrelated trends paint a clear picture for PR practitioners accustomed to achieving their goals through press release distribution and media pitching. The trends: People trust each other less than ever; people define what news is based on its impact on them, becoming their own gatekeepers; and video podcasts have become so popular that media outlets are including them in their upfronts. In this short midweek FIR episode, Neville and Shel find the common thread among these trends and outline how communicators can adjust their efforts to make sure their news is received and believed. Links from this episode: What Is News? (Pew Research Center) Americans’ Trust in One Another (Pew Research Center) Video podcasts are the next big pitch at media Upfronts News Consumption in the UK: 2024 The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript @nevillehobson (00:01) Hi everyone and welcome to For Immediate Release. This is episode 465. I’m Neville Hobson in the UK. Shel Holtz (00:09) And I’m Shel Holtz in the U.S. And if you work in communication, it’s time to tweak your media playbook. If you still treat a press release and a reporter pitch as the center of the universe, it’s time to reconsider things. We’ll talk about why and how right after this. Let’s start with the human glue that holds any message together, that being trust. Pew released a survey on May 8th that tells us only 34% of Americans now believe that most people can be trusted. In the Watergate era, that number was 46%. In the mid-50s, it was closer to 70%. This is a crater, not a dip. Low social trust bleeds into institutional trust. So your brand news starts with a skepticism handicap. Now, layer on this Pew’s other study, also released in May, that asked what is news, and the picture starts to come into sharper focus. Americans still want information that’s factual and important, but they apply those labels through a personal filter. Does it touch my wallet, my neighborhood, my values? If yes, it’s news. If not, it’s just clutter. That’s why an election gets an automatic news stamp and a blockbuster earnings release doesn’t. The gatekeeping power has moved from editors to individuals, and each individual is now effectively their own assignment editor. Enter into this mix the video podcast boom. CNBC’s upfronts coverage reads like a love letter to long-form host-driven shows. New Heights with the Kelce brothers, Alex Cooper’s Call Her Daddy, LeBron and Steve Nash breaking down hoops for Amazon. These aren’t side hustles, they’re front-row inventory next to NFL rights. The numbers explain why. New Heights pulls 2.6 million YouTube subscribers. Joe Rogan’s sit down with Donald Trump chalked up 58 million views, multiples of top 10 broadcast hits, but on demand, clipped and reshared endlessly. So here’s the tripod we’re standing on. Low interpersonal trust. 
a personalized definition of news, and an audience migration to host-driven video-forward channels. Shake those three together and the argument that we’ll blast a release, cross our fingers, and call it a day feels about as modern as a fax machine. People trust people, and even though they trust people less than they used to, they do trust peers, subject matter experts, or charismatic hosts who are already populating their feeds. That means the old CEO quote and boilerplate formula is just table stakes at best. Yes, there are still good reasons to send out a press release, but not in a vacuum. We have to surface frontline engineers, project superintendents, patient advocates, whoever the listener already sees as one of them. We also have to pitch the host and not the masthead. Video podcast bookers don’t really care about breaking the news. It’s more about a conversation that keeps their community engaged through next week’s episode and the show’s next month. Study the arc of the show. Offer stories that fit that arc and bring props. If you can’t show it, demo it, or screen share it live, it probably isn’t a great video podcast pitch. You need to build a video-first asset bundle. Think 16:9 aspect ratio and 9:16 b-roll. Cut-down clips ready for shorts, lower-third-ready stats graphics, even physical product the host can hold up. Give them the raw materials to create snackable moments. We need to consider fast-forward transparency, because low trust means listeners will Google literally while you’re talking. Make it painless. Publish the data set, the methodology, the impact dashboard the moment the podcast drops. When a host can say links in the show notes, check it yourself, you borrow their credibility instead of testing it. You need to measure the clip life, not just the hit. An episode premiere is only inning one. Track how quotes migrate to TikTok, how a product demo GIF surfaces on Reddit, and how snippets thread their way into earned media that you didn’t pitch. That’s the long tail. That’s the ROI. And all of this, of course, spills inward. Employees are audience segments too, and they’re consuming in the same places. Internal comms should think podcast-style video for CEO AMAs, peer-to-peer explainers, even training modules. Why write a thousand-word intranet post when a five-minute host-driven conversation between the project manager and a site safety lead will get watched and maybe shared to LinkedIn by the very people that you need to reach? So if I had to condense this new rule set into one line, it’s this. Facts open the door. Trusted humans carry them across the threshold, and visuals bolt it shut. Your news still has to meet the classic criteria, timely, significant, novel, but today it also has to pass an audience sniff test, delivered by someone they feel they know in a format that lets them see more than they hear. So real quick, build a credible messenger bench inside and outside the organization, package every story visually, court video podcast hosts, ship transparency aids, and track the afterlife of clips, because momentum equals mindshare. Get this mix right and you’ll do more than place stories. You’ll earn a toehold in the very channels that are shaping public perception, channels the upfront buyers just anointed as prime time. @nevillehobson (05:57) There’s a lot of stuff there, Shel, that you shared, I must admit. And Pew Research, which you’ve cited a bit, really is way out front with quality data that informs their reporting. 
I don’t think there’s anything quite like Pew anywhere else in the world with the breadth and depth of data they use to come up with a reporting conclusion. So it’s hard to find comparisons. But listening to how you were setting all this out, it made me think that, you know, what’s actually changed over the years, other than the obvious declines here and there and different numbers, is that the defining of what is media, what is news, has changed, I think, and that may well have influenced some of these metrics that you’re quoting. Looking at Pew, for instance, consistent views exist on what news isn’t rather than what it is. That’s interesting, I think. Hard news stories about politics and war continue to be what people most clearly think of as news. And I suspect that’s the same here too. But it’s difficult to see some of this through any other lens than… the radical changes we’re experiencing and we’ve been going through over the past five years or so, after two decades of, you know, golden years, you might say, when we didn’t have the worries we have today. It’s easy to blame Trump for all this, by the way, and of course, it’s not really fair to do that. Not that I’m worried about being fair to him, but generally for understanding what’s happening and the changes that are happening. There’s a wider shift happening in society, of which I would suspect Mr. Trump is one of the catalysts behind the changes that are going on. So I find it most interesting seeing the US picture as a benchmark, if you will, for what’s happening elsewhere, purely through the lens of the sheer volume of quality data that informs opinions. We don’t have that anywhere else. I was looking at a report here in the UK to get some perspective from this side of the Atlantic on this broad topic. The regulator here, Ofcom, has produced some really useful data. There’s a big report that came out late last year that is to do with the picture in the UK, the broad picture on news consumption across generations. There’s an interesting metric set, nothing like the depth of what you’ve got. For instance, comparing podcasts and the new media, if you will, that isn’t really in this report to a significant depth like that. But there are some parallels without any doubt, the decline of traditional media, the decline of trust, for instance. But the generational gaps are not the same, it seems to me, although probably looking in depth at them may well be quite similar. But one thing that seems to be clear is that how people trust who they do trust is not that different here as it is in the US. It’s not quite so granular; it’s a smaller country compared with the US. So it’s hard to look at it through the same eyes as you would in the US with the vast geography you’ve got over there. But you know, I find it quite interesting, you know, here’s a kind of a leaping out statement, we kind of think, yeah, we know that traditional platforms are declining in popularity. Yeah, that’s a finding that’s universal, I would say in Western countries, certainly. But, you know, just looking through this to see what comparisons I can draw from the US picture, there are some interesting topics. I haven’t connected them with the Pew data, but I bet you they’re similar. For instance, how teens consume news. Teens show a preference for lighter news topics in favor of social media platforms for news consumption rather than traditional media. No surprise with that at all, I don’t think. 
It is interesting seeing this decline in traditional news platforms, for among the top 10 news sources in the UK are now social media platforms. That’s again, that’s interesting. 70% of respondents to the Ofcom survey rate TV news as accurate, while only 44% rate social media similarly. So inaccurate is the word for social media, accurate for TV, in terms of news reporting. Public service broadcasters, the BBC is one, continue to be seen as vital for delivering trusted news, this report says, despite a decline in viewership. This survey I’m referencing indicates that audiences prioritize accurate news from public service broadcasters. And that sense of trust is big here because, you know, we don’t have cities with their own TV networks in those cities or that state. It’s not the same geographically, that’s one reason. But the other is that the state here defined television back in the 50s. In fact, prior to that, the 1930s, even before the Second World War, unlike the US, where the soap opera emerged in those days and had sponsors for content, which didn’t exist here. It was public service broadcasting, then commercial channels arose. It’s well trusted, even though that trust is often dented by events. For instance, all over the news the last few days is a guy called Gary Lineker, who’s very well known in the UK. He was a professional footballer. He’s been the voice of BBC Sport for two decades, but he’s very outspoken. He’s got himself into trouble previously over using social media for political comment, and he dropped a big clanger recently about Israel and Gaza. So basically he’s quit, he’s resigned, and he’s not getting the big golden handshake he would have got if he had been a good boy rather than a naughty boy. I’m sure he’s going to pop up somewhere. But he’s dented his own reputation in terms of trust, and it’s rubbed off on the BBC a bit. So they’re going to have to weather that for a while, I would imagine. But, you know, the markets have a lot of parallels in many ways, I think. Social media platforms are increasingly used for news consumption, Facebook being the most popular source. I think that reflects the picture in the US too, doesn’t it, Shel? Probably? Yeah, yeah. So perception of news accuracy and trustworthiness is a relevant metric in the context of this conversation. Search engines and news aggregators are perceived as more trustworthy and accurate compared to social media platforms. Facebook in particular scores lower on attributes like quality and trustworthiness. Yet, as the other metric shows, Shel Holtz (11:44) I believe so, yeah. @nevillehobson (12:06) Facebook’s the most popular source for news consumption. A bit of a paradox there, it seems. And looking at a couple of other things: social media and talking with family are the most common ways teens access news. So that’s a bit different to the statistic you quoted on where teens place their trust. So social media is 55%, family 60%. TikTok’s the most used individual news source among teens. That’s 30%. So it’s kind of interesting. You know, there are lots of further metrics there that aren’t really relevant to this conversation. Think of Americans’ trust in one another. Would that be the same here? I’ve not found directly comparable metrics that I could throw at you. So it’s hard to see the difference. But I wouldn’t be surprised if the general sentiment in that is not dissimilar here. Yet there are definitely differences. 
Racial issues, the discrimination factors aren’t quite the same as in the US, but they exist here. They don’t tend to be skin color, if you see what I mean; it’s more origin, like from the Indian subcontinent and the Middle East, rather than, you know, black Americans who originated generations back coming from Africa. Those are not quite the same. Yet I would say the outcomes, in terms of analyzing behaviors, looking at the statistics, aren’t that dissimilar. It just shows, I think, how similar and how different we all are wherever we are in the world. And the difference, one big one in America, is you’ve got the metrics that help you understand it all, that don’t exist to such a scale elsewhere. So I’m not sure; this has gone off on a slight tangent to what you were talking about earlier, Shel, but I think it is useful to contrast or compare, really, the data from one side of the Atlantic to another, not directly comparable given the geographies and simple sheer volume. But behaviors aren’t that different, it seems to me. Shel Holtz (13:58) No, I suspect not. When you look at the growth in distrust of each other here, the political divide must have a lot to do with that. People on the left just not trusting people on the right, people on the right not trusting people on the left. But I think it probably goes deeper than that. But what it leads to is it leads to people finding sources of news that are relevant to them, that affect them, that is conveyed by people that they do trust. So if you trust Joe Rogan, you’re going to watch Joe Rogan’s show. And that’s where you’re going to get a lot of your news, since he does tend to have newsmaker type guests on. This, I think, is why we have to pay attention to these video podcasts as a possible outlet for the message that we’re trying to convey, because there are people who are gravitating to these. You see the numbers. The numbers are bigger than the numbers that are being drawn to your top 10 TV shows. And this is just this confluence of the definition of news, who we trust, where we get our news and how we define news, and the growth of video podcasting, which we saw played out in the last presidential election, because Trump and his spokespeople were hitting the bro circuit of video podcasts and the Harris campaign was by and large ignoring them and playing to traditional media. I haven’t seen any analysis that has definitively said that led to victory, but it certainly didn’t hurt. And it was a wise strategy given the data that we’re seeing now. So I think the people who are talking about this and thinking about it are absolutely right. You have to start thinking about where your audience is, who they trust, what do they think news is, and how can we craft our news so that it conforms to that and gets delivered through a trusted third party that they’re actually paying attention to and find credible. It’s a big shift. But the other thing that comes out of accommodating this shift of adopting these new practices in getting the story out for our organization or our clients is that it does accommodate AI search. The interview on that video podcast is going to end up in a training model somewhere. So this aids that effort as well, as fewer and fewer people click any links that they find in a Google search, just settling for the AI overview at the top of the page, which now Google is starting to emphasize anyway, which is a whole different topic of conversation that we have addressed before and no doubt will again. 
If you want people to hear your news, at least you get the side benefit of appearing more in AI search results. @nevillehobson (16:49) Yeah. Yeah, things are changing so fast, it seems to me, that with some of these detailed analytical reports on behaviors and trends and so forth in media, you get the sense that they’re trying hard to remain relevant in the analysis when the demographics and the markets are shifting so radically. For instance, there was a report just the other day here in the UK that was focused on the Daily Mail, one of the tabloids here that is definitely right leaning in a big way. They were talking at a conference about the dramatic fall in click-throughs since the advent of Google’s AI overviews. And they said it was alarming and it was shocking. And the volumes, I don’t have the report in front of me, but the numbers were quite significant, the drops. So what are they doing about it? And this, to me, I found most interesting, which I think, instead of complaining as some media are about these changes, you know, we’ve got to do something, this is wrong and blah, blah, is to do what the Mail is doing, which is starting up a newsletter that you subscribe to. So that I think is definitely a trend to keep eyes on. I mean, the niche newsletter is designed to relate directly to your own interests. So if you want to, get all your news from the Mail sent to you directly, rather than suffer from, if you’re searching for something or whatever, your own behavior: you’ve searched for something, it pulls up the results from the Daily Mail, you read it there, and it’s enough to satisfy the reason why you were searching, so no click-through. So I get the logic of what they’re doing, and I think others will follow them unquestionably. And I look at my own behavior in a very small way. This is just me; I don’t know if it’s a trend or mirrors anyone else or not. I subscribe now to nearly 20 newsletters. And of course, I don’t get a chance to read half of them, to be honest, Shel, but I read the ones that interest me early in the day when I’m not at my desktop machine or even my laptop, probably on my phone or tablet, which I wouldn’t otherwise do. And it tends to be glancing, almost snacking on the content. And I see that as a different way to what I used to do with media consumption, which would have been sitting at a desktop computer, looking at the screen, reading stuff for half an hour. Don’t do it like that anymore at all. So, and some of the newsletters are from new media, if I can describe them like that, not the old media. And they’re well written, they’re entertaining, they’re storytelling, but not just a bunch of dry, factual information; they entertain as well. And so, you know, to me, one of the measures of whether I like them is if I permit the images to come through automatically rather than be blocked by my email program. Others I leave blocked. So you get a sense of how they’re approaching this, whether they’re designing for a desktop computer, or you’ve got tons of broken image links all over the screen, and you’re not going to read that. So that’s part of the shift. And I think maybe that’s generational. I don’t know. I’ve not looked into it. Do younger audiences have similar behaviors with newsletters? Well, according to the Mail, the demographic they’re interested in is definitely leaning young, not old, even though my understanding of the Mail, and this may be based just on people I know, is that they’re old who read the Daily Mail, and right wing. So, you know, things are changing. 
Pew is probably best placed to provide data on the US picture. I wish they would look internationally, but of course, I guess the raw data is not there. But this is part of the shifting landscape, generational shifts as well. You know, we’ve got Gen Alpha on our heels at the moment. What is their news consumption like? I was looking at an ad the other day for a digital camera, which I bought, actually, so that when we’re out looking at things and my wife and I are visiting places, instead of fumbling with my phone, I’ve got this little digital camera hanging on a strap; I can just pick it up and take a picture. That’s why I did this. But I found one that was like 30 pounds, 64 megapixels, what they call a vlogging camera because it’s 4K video as well, aimed at teens. It’s very affordable, and indeed they’re pitching it as a gift to your youngster who’s 10 to get them started. It’s very simple, very safe, very straightforward. It’s not connected to anything, although there are versions with built-in wi-fi. So you look at how these things are shifting into one of the tools that are available to kids of these ages now. So we are at a time of significant change. We know this. This is another manifestation of it, it seems to me. Shel Holtz (21:09) Yeah, by the way, you mentioned newsletters and I think there’s probably a new approach to newsletters that people in communications might want to consider. And that is, after an interview, after a news release, after an event, to send a newsletter out that provides all of the backup information, because this is, again, about making it easy to have people see you as transparent. Make it very easy for people to confirm the information that you have shared. You also want to make it available online. But if you have people who are paying attention to what’s coming out of your organization and you deliver some remarks or make an announcement, get the backup material out there. Use whatever means are available to you. There are lots of new approaches in @nevillehobson (21:58) Yeah. Shel Holtz (21:59) this profession for people to consider in order to succeed. But again, you know, the press release with the CEO quote, you still need it for a variety of reasons. I mean, here in the US, compliance, you know, with SEC rules, but it won’t cut it in getting the word out to the people you’re trying to reach. @nevillehobson (22:10) Yeah. No, it wasn’t. And I suspect that’s a similar reason here in the UK. I’ve not looked into that, but listed companies have to communicate certain things. I’m just thinking, funny you mentioned that, because today I got a press release from an agency that I read. I thought, my God, this is dreadful, truly, particularly when they use the old-fashioned language that we used to use back in the 70s and 80s, I think: it was so and so commented. He commented. Shel Holtz (22:32) Most of them. @nevillehobson (22:42) People don’t talk like that naturally, he said, or… Oh, you bet. You bet. Well, now we’re getting into the topic of structuring press releases, because to me, it’s like they name the company and then four paragraphs of what the position is of the company, how well they’ve been doing, the history and all that stuff, and then you get to the news. So no, that’s not the way to do it. Shel Holtz (22:45) The one I love is, we’re excited to announce. Are you? Really? Are you bouncing around in your seat excited? @nevillehobson (23:07) The newsletter in the way you suggested it makes a lot of sense to me, I must admit. 
It’s quite a layer to add into the workflow of producing this, hence there are tools that will help you, AI-driven, many of them. So that’s definitely worth considering. But the newsletter generally, following the example of the Daily Mail in terms of the media, I could see this growing a lot. I get, for instance, alerts at the start of the day from news organizations: here’s today’s headlines. I used to enjoy them, but they’re all the same now. They’re reporting on the same news, just different presentation. So I’ve got to be selective in what I look at. And the ones I’ve not looked at for a couple of weeks, I’ll unsubscribe. But they are useful. And does it make me click through? No, it doesn’t actually, very, very rarely. The new media ones do, though. Some of my favorite newsletters from the tech area and in politics are well-written. They are entertaining, more so than these. These are… kind of a shinier approach from the old media, whereas the new media tell real stories in their news and they make it something you look forward to reading, and you then engage more with those. So what impact will that have on reporting by Pew, for instance, next year? I wonder. Companies, rather, I should say, are popping up all over the place offering newsletter services. So, for instance, Beehiiv, I see a lot, you know. Shel Holtz (24:23) Yeah, well. @nevillehobson (24:25) Substack I hear less about now than alternatives to that, although Substack is still a pretty big player. Ghost is a great one. I know a number of organizations have shifted to Ghost as a platform. I use my blog, which also has a newsletter function I use too. And that’s actually, more than I’ve ever done before, growing in subscribers. I’m quite pleased with that; it’s not my prime purpose, but people clearly like that. So for us, we might consider some of that, Shel. That’s a whole different topic here. So yeah. Shel Holtz (24:50) Well, the other thing to consider is that if people don’t trust Entity X, they’re not going to trust Entity X’s newsletter just because they’re cranking one out. You have to build that trust through other means or get into somebody else’s newsletter. But I’m sure this is a conversation that will continue as these changes continue. But for now, that’ll be a 30 for For Immediate Release. @nevillehobson (24:58) Right.   The post FIR #465: The Trust-News-Video Podcast PR Trifecta appeared first on FIR Podcast Network.
CWC 110: Embracing change as an agency owner (featuring Tim Kilroy)
In this episode, Chip speaks with agency advisor Tim Kilroy about the challenges and strategies for running a small agency. Tim shares his extensive experience in digital marketing and agency coaching, highlighting the importance of flexibility and adaptability in leadership. They discuss the notion of many agency owners being ‘accidental’ and the necessity of creative problem-solving and rigorous operational procedures in today’s tough economic and technological landscapes. The conversation emphasizes fostering a supportive and clear environment for agency teams, allowing for autonomy and decentralized decision-making to drive success. [read the transcript] The post CWC 110: Embracing change as an agency owner (featuring Tim Kilroy) appeared first on FIR Podcast Network.
Eric Schwartzman on Bot Farms and Digital Deception
In this FIR Interview, Neville and Shel talk with author, investigative journalist, and New York SEO, Eric Schwartzman, about his Fast Company article, “Bot farms invade social media to hijack popular sentiment.” A consultant who specialises in SEO for financial services companies, Eric explains how coordinated networks of smartphones and AI-generated content are distorting public perception, manipulating virality, and reshaping what we trust online. Eric, a long-time friend of FIR and a former entertainment public relations correspondent for FIR, discusses how bot farms now outnumber real users on social networks, how profits drive PR ethics, and why Meta, TikTok, X, and even LinkedIn are complicit in enabling synthetic engagement at scale. Eric also previews his forthcoming book, Invasion of the Bot Farms, which explores this escalating threat through insider stories and case studies. Discussion Highlights What bot farms actually are: Thousands of smartphones, each controlled to simulate authentic behaviour, operating at industrial scale to manipulate what trends. How bot activity manipulates algorithms: Early engagement patterns (likes, shares, comments, follows, and profile expands) are carefully coordinated to make content appear organically viral. State actors vs. commercial players: Governments use bot farms to divide and destabilise societies, while businesses use them for influence and promotion. The blurred line between PR and manipulation: Case studies like the Blake Lively incident show how synthetic engagement is being used as a reputational weapon. Why social platforms allow it: Fake engagement boosts ad revenue, so many platforms knowingly look the other way. The future of trust and truth: Eric argues that virality can be bought, engagement is no longer an indicator of credibility, and even AI models are being trained on misinformation. A glimpse at Eric’s new book: Invasion of the Bot Farms will expose the people and systems behind this digital arms race, told through real-world case studies and first-hand research. About Our Conversation Partner Eric Schwartzman is a digital PR and content marketing strategist, author, and award-winning podcaster specialising in organic media, SEO, and content marketing. With deep experience in both agency and client-side roles, he helps organisations boost visibility, web traffic, and conversions through strategic digital campaigns. As a freelance journalist, Eric has written for Fast Company, TechCrunch, VentureBeat, AdWeek, and others, and is the author of two best-selling books on SEO. His work bridges technical expertise and clear communication, making him a trusted voice in the evolving digital landscape. Follow Eric Schwartzman on LinkedIn Mentioned in this Interview: Eric’s Fast Company article published in April 2025: Bot farms invade social media to hijack popular sentiment. Book in progress: Invasion of the Bot Farms (publishing date TBA). FIR archive episodes featuring Eric’s engagement with FIR, including his early podcast contributions. The post Eric Schwartzman on Bot Farms and Digital Deception appeared first on FIR Podcast Network.
ALP 271: Can agency team members be more strategic?
In this episode, Chip and Gini discuss whether or not employees can be encouraged to be “more strategic”. They explore the definition of being strategic, frequently misunderstood expectations, and the challenges of fostering strategic thinking among team members. Gini shares her personal experiences and frustrations from her early career, emphasizing the importance of proper coaching and mentoring. Chip and Gini conclude that agency owners should define their expectations clearly, consider the individual capabilities of their employees, and re-evaluate their own workload to potentially take on more strategic responsibilities themselves. [read the transcript] The post ALP 271: Can agency team members be more strategic? appeared first on FIR Podcast Network.
CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson)
In this episode, Chip talks with Melissa Vela-Williamson of MVW Communications about her unique journey in public relations and the importance of content creation. Melissa shares her background, highlighting her non-traditional path into PR and her passion for using public relations for social good. They discuss her focus on helping nonprofits and education clients, her role as a content creator, and her work as a columnist for the Public Relations Society of America. Melissa also delves into the impact of the COVID-19 pandemic on her business and the strategic approaches she took to maintain client relationships and grow her firm. They explore the significance of writing books and producing various types of content, emphasizing the value of building relationships and demonstrating thought leadership in the communications industry. [read the transcript] The post CWC 109: Thought leadership for agency growth (featuring Melissa Vela-Williamson) appeared first on FIR Podcast Network.
FIR #464: Research Finds Disclosing Use of AI Erodes Trust
Debate continues about when to disclose that you have used AI to create an output. Do you disclose any use at all? Do you confine disclosure to uses of AI that could lead people to feel deceived? Wherever you land on this question, it may not matter when it comes to building trust with your audience. According to a new study, audiences lose trust as soon as they see an AI disclosure. This doesn’t mean you should not disclose, however, since finding out that you used AI and didn’t disclose is even worse. That leaves little wiggle room for communicators taking advantage of AI and seeking to be as transparent as possible. In this short midweek FIR episode, Neville and Shel examine the research along with recommendations about how to be transparent while remaining trusted. Links from this episode: The transparency dilemma: How AI disclosure erodes trust The ‘Insights 2024: Attitudes toward AI’ Report Reveals Researchers and Clinicians Believe in AI’s Potential but Demand Transparency in Order to Trust Tools (press release) Insights 2024: Attitudes toward AI Being honest about using AI at work makes people trust you less, research finds Should Businesses Disclose Their AI Usage? Insights 2024: Attitudes toward AI Report – Researchers and Clinicians Believe in AI’s Potential but Need Transparency New research: When disclosing use of AI, be specific Demystifying Generative AI Disclosures The Janus Face of Artificial Intelligence: Deployment Versus Disclosure Effects on Employee Performance The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Shel Holtz (00:05) Hi everybody and welcome to episode number 464 of For Immediate Release. I’m Shel Holtz. @nevillehobson (00:13) and I’m Neville Hobson. Let’s talk about something that might surprise you in this episode. It turns out that being honest about using AI at work, you know, doing the right thing by being transparent, might actually make people trust you less. That’s the headline finding from a new academic study published in April by Elsevier titled, The Transparency Dilemma: How AI Disclosure Erodes Trust. It’s a heavyweight piece of research: 13 experiments, over 5,000 participants, from students and hiring managers to legal analysts and investors. And the results are consistent across all groups, across all scenarios. People trust others less when they’re told that AI played a role in getting the work done. We’ll get into this right after this. So imagine this: you’re a job applicant who says you used AI to polish a CV, or a manager who mentions AI helped write performance reviews, or a professor who says grades were assessed using AI. In each case, just admitting you used AI is enough to make people view you as less trustworthy. Now this isn’t about AI doing the work alone. In fact, the study found that people trusted a fully autonomous AI more than they trusted a human who disclosed they had help from an AI. That’s the paradox. 
So why does this happen? Well, the researchers say it comes down to legitimacy. We still operate with deep-seated norms that say proper work should come from human judgment, effort and expertise. So when someone reveals they used AI, it triggers a reaction, a kind of social red flag. Even if AI helped only a little, even if the work is just as good. Changing how the disclosure is worded doesn’t help much. Whether you say, AI assisted me lightly, or I proofread the AI output, or I’m just being transparent, trust still drops. There’s one twist. If someone hides their AI use, and it’s later discovered by a third party, the trust hit is even worse. So you’re damned if you do, but potentially more damned if you don’t. Now here’s where it gets interesting. Just nine months earlier, in July 2024, Elsevier published a different report, Insights 2024: Attitudes toward AI, based on a global survey of nearly 3,000 researchers and clinicians. That survey found most professionals are enthusiastic about AI’s potential, but they demand transparency to trust the tools. So on the one hand, we want transparency from AI systems. On the other hand, we penalize people who are transparent about using AI. It’s not a contradiction. It’s about who we’re trusting. In the 2024 study, trust is directed at the AI tool. In the 2025 study, trust is directed at the human disclosure. And that’s a key distinction. It shows just how complex and fragile trust is in the age of AI. So where does this leave us? It leaves us in a space where the social norms around AI use still lag behind the technology itself. And that has implications for how we communicate, lead teams and build credibility. As generative AI becomes ever more part of everyday workflows, we’ll need to navigate this carefully. Being open about AI use is the right thing to do, but we also need to prepare for how people will respond to that honesty. It’s not a tech issue, it’s a trust issue. And as communicators, we’re right at the heart of it. So how do you see it, Shel? Shel Holtz (03:53) I see it as a conundrum that we’re going to have to figure out in a hurry because I have seen other research that reinforces this, that we truly are damned if we do and damned if we don’t because disclosing, and this is according to research that was conducted by EPIC, the Electronic Privacy Information Center, it was published late last November. They basically said that if you… @nevillehobson (03:56) Yep. Shel Holtz (04:18) disclose that you’re using AI, you are essentially putting the audience on notice that the information could be wrong. It could be because of AI hallucination. It could be inaccurate data that was in the training set. It could be due to the creator or the distributor of the content intentionally trying to mislead the audience. Basically it tells the audience: AI was used, so it could be wrong. This could be… false information. There was a study that was conducted, actually I don’t know who actually did the study, but it was published in the Strategic Management Journal. This was related specifically to the issue that you mentioned with writing performance reviews or automating performance evaluations or recommending performance improvements for somebody who’s not doing that well on the job. So on the one hand, you know, powerful AI data analytics increase the quality of support, which may enhance employee productivity, according to this research. They call that the deployment effect. 
But on the other hand, employees may develop a negative perception of AI once it’s disclosed to them, harming productivity. And that’s referred to as the disclosure effect. And there was one other bit of research that I found. And this was from Trusting News. This was research conducted with a grant that says what audiences really need in order for a disclosure to be of any use to them is specificity. They respond better to detailed disclosures about how AI is being used as opposed to generic disclaimers, which are viewed less favorably and produce less trust. Word choice matters less: audiences wanted to know specifically what AI was used to do, with the words that the disclosers used to present that information mattering less. And finally, Epic, that’s the Electronic Privacy Information Center, had some recommendations. They said that both direct and indirect disclosures, direct being a disclosure that says, hey, before you read or listen or watch this or view it, you should know that we used AI on it, and an indirect disclosure is where it’s somehow baked into the content itself. But they said, regardless of whether it’s direct or indirect, to ensure persistence and to meaningfully notify viewers that the content is synthetic, disclosures cannot be the only tool used to address the harms that stem from generative AI. And they recommended specificity, just as you did see from the other research that I cited. It says disclosure should be specific about what the components of the content are, which components are actually synthetic. Direct disclosures must be clear and conspicuous such that a reasonable person would not mistake a piece of content as being authentic. Robustness: disclosures must be technically shielded from attempts to remove or otherwise tamper with them. Persistence: disclosures must stay attached to a piece of content even when reshared. There’s an interesting one. And format neutral: the disclosure must stay attached to the content even if it is transformed, such as from a JPEG to a .PNG or a .TXT to a .doc file. @nevillehobson (07:34) Thank Shel Holtz (07:40) So all kinds of people out there researching this and thinking about it, but in the meantime, it’s a trust issue that I don’t think a lot of people are giving a lot of thought to. @nevillehobson (07:50) No, I think you’re probably right. And I think there doesn’t seem to be any very easy solution to this. The article that I first saw that discussed this in detail, in The Conversation, talked a bit about this, which in some detail, but briefly, they talk about what still is not known. And they start with saying that it’s not clear at all whether this penalty of mistrust will fade over time. They say as AI becomes more widespread and potentially more reliable, disclosing its use may eventually seem less suspect. They also mentioned that there is absolutely no consensus on how organizations should handle AI disclosure from the research that they carried out. One option they talk about is making transparency voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy. And they say their research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors. And finally, they mentioned a third approach is cultural, building a workplace where AI use is seen as normal, accepted and legitimate. 
And they say that we think this kind of environment could soften the trust penalty and support both transparency and credibility. In my view, certainly, I would continue disclosing my AI use in the way I have been, which is not blowing trumpets about it or making a huge deal out of it. Just saying as it’s appropriate. I have an AI use statement on my website; it’s been there now for a year and a bit. And I’ve not yet had anyone ask me, so what are you telling us about your AI use? It’s very open. The one thing I have found that I think helps in this situation where you might get negative comment on AI use is if you’ve written something, for instance, that you published that AI has helped you in the construction of that document, primarily through researching the topic. So it could be summarizing a lengthy article or report. I did that not long ago on a 50 page PDF and it produced the summary in like four paragraphs, a little too concise. So that comes down to the prompt. What do you ask it to do? But I found that if you share clearly the citations, i.e. the links to sources that often are referenced, or rather they’re not referenced, let’s say, or you add a reference because you think it’s relevant, that suggests you have taken extra steps to verify that content, and that therefore means you have not just shared something an AI has created. And I think that’s probably helpful. That said, I think the report, though, the basis of it is quite clear. There is no solution to this currently at hand. And I think the worst thing anyone can do, and that’s to The Conversation’s first point, leaving it as a voluntary disclosure option, is probably not a good idea, because some people aren’t going to do it. Others won’t be clear on how to do it. And so they won’t do it. And then if they get found out, the penalty is severe, not only for what you’ve done, but for your own reputation, and that’s not good. So you’re kind of between the devil and the deep blue sea here, but bottom line, you should still disclose, but you need to do it the right way. And there ought to be some guidance in organizations in particular on how to disclose, what to disclose, when to disclose. I’ve not seen a lot of discussion about that though. Shel Holtz (11:10) Well, one of the things that came out of the Epic research is that disclosures are inconsistently applied. And I think that’s one of the issues with leaving it to individuals or to individual organizations to decide how am I going to disclose the use of AI, and how am I going to disclose the use of AI on each individual application, that you’re going to end up with a real hodgepodge of disclosures out there. And that’s not going to… @nevillehobson (11:15) Mm-hmm. Right. Shel Holtz (11:36) aid trust, that’s going to have the opposite effect on trust. Epic is actually calling for regulation around disclosure, which is not unsurprising from an organization like Epic. But I want to read you one part of a paragraph from this rather lengthy report that gets into where I think some of the issues exist with disclosure. It says, first and foremost, disclosures do not affect bias or correct inaccurate information. @nevillehobson (11:49) Hmm. Shel Holtz (12:03) Merely stating that a piece of content was created using generative AI or manipulated in some way with AI does not counteract the racist, sexist, or otherwise harmful outputs. The disclosure does not necessarily indicate to the viewer that a piece of content may be biased or infringing on copyright, either. 
Unless stated in the disclosure, the individual would have to be previously aware that these biases, errors, or IP infringements exist. @nevillehobson (12:18) . Shel Holtz (12:30) and then must meaningfully engage with and investigate the information gleaned from a piece of content to assess veracity. However, the average viewer scrolling on social media will not investigate every picture or news article they see. For that reason, other measures need to be taken to properly reduce the spread of misinformation. And that’s where they get into this notion that this needs to be regulated. There needs to be a way to assure people who are seeing content that it is accurate and to disclose where AI was specifically employed in producing that content. @nevillehobson (13:08) Yeah, I understand that. Although that doesn’t address the issue that kind of underpins our discussion today, which is that disclosing you’ve used AI is going to get you a negative hit. But the fact is that you did use the AI. So that doesn’t address that. I’m not sure that anything can address that. If you disclose it, you’ll get the reactions that The Conversation’s research shows up, or the Elsevier research, I should say. If you don’t disclose it when you should, and you get found out, it will be even worse. So you could follow any regulatory pathway you want and do all the guidance you want. You’re still gonna get this until, as The Conversation reports, and as Elsevier’s research does, it dies away, and no one has any idea when that might be. So this is a minefield without doubt. Shel Holtz (13:36) Right. Yeah, but I think what they’re getting at is that if the disclosure being applied was consistent and specific, so that when you looked at a disclosure, it was the same nature of a disclosure that you were getting from some other content producer, some other organization, you would begin to develop some sense of reliability or consistency that, okay, this is one of these. I know now what I’m going to be looking at here and can… consume it through that lens. So I think it would be helpful, you know, not that I’m always a big fan of excess regulation, but this is a minefield. And I think even if it’s voluntary compliance to a consistent set of standards, although we know how that’s played out when it’s been proposed in other places online over the last 20, 25 years. But I think consistency and specificity are what’s required here. And I don’t know how we get to that without regulation. @nevillehobson (14:50) No, well, I can say that I’m not a fan of regulation of this type until it’s been proven that anything else that’s been attempted doesn’t work at all. And we still don’t see enough of the guidance within organizations on this particular topic. That’s what we need now. Regulation, hey, listen, it’s gonna take years to get regulation in place. So in the meantime, this all may have disappeared, doubtful, frankly, but I’d go the route of, we need something, and this is where professional bodies could come in to help, I think, in proposing this kind of thing. Others who do it share what they’re doing. So we need something like that, in my view, where there may well be lots of this in place, but I don’t see people talking too much about it. I do see people talking much about the worry about getting accused of whatever it is that people accuse you of, of using AI. That’s not pleasant at all. And you need to have thick skin and also be pretty confident. 
I mean, I’d like to say in my case, I am pretty confident that if I say I’ve done this with AI, I can weather any accusations, even if they are well meant, and some are not. And they’re based not on informed opinion, really; it’s uninformed, I suppose you could argue. Anyway, it is a minefield and there’s no easy solution on the horizon. But in the meantime, disclose, do not hide it. Shel Holtz (16:10) Yeah, absolutely. Disclose, be specific. And I wonder if somebody out there would be interested in starting an organization sort of like Lawrence Lessig did with Creative Commons. So all you had to do now was go fill out a little form and then get an icon, and people will go, that’s disclosure C. @nevillehobson (16:27) There’s an idea. There is an idea. Shel Holtz (16:28) That’s it. That’s it. We need a Creative Commons-like solution to the disclosure issue. And that’ll be a 30 for this episode of For Immediate Release. The post FIR #464: Research Finds Disclosing Use of AI Erodes Trust appeared first on FIR Podcast Network.
ALP 270: Limiting scope creep from the start
In this episode, Chip and Gini delve into the topic of scope creep in agencies. They discuss the bell curve of profitability and the importance of setting clear expectations from the first client conversation. They highlight strategies like dividing projects into 90-day scopes to regularly reassess goals and deliverables. The duo emphasizes the significance of internal communication, developing a culture of transparency, and ensuring team members understand project scope and costs. They also stress the need to build flexibility and a cushion into initial pricing to manage minor scope changes and avoid financial strain. Finally, they agree that mastering financial understanding and holding regular one-on-one meetings make for smoother agency operations. [read the transcript] The post ALP 270: Limiting scope creep from the start appeared first on FIR Podcast Network.
FIR #463: Delivering Value with Generative AI’s “Endless Right Answers”
Google’s first Chief Decision Scientist, Cassie Kozyrkov, wrote recently that “The biggest challenge of the generative AI age is leaders defining value for their organization.” Among leadership considerations, she says, is a mindset shift, one in which there are “endless right answers”.  (“When I ask an AI assistant to generate an image for me, I get a fairly solid result. When I repeat the same prompt, I get a different perfectly adequate image. Both are right answers… but which one is right-er?”) Kozyrkov’s overarching conclusion is that confirming the business value of your genAI decisions will keep you on track. In this episode, Neville and Shel review Kozyrkov’s position, then look at several communication teams that have evolved their departmental use of AI based on the principles she promotes. Links from this episode: Endless Right Answers: Expnlaining the Generative AI Value Gap How Lockheed Martin Comms is working smarter with GenAI How AI Can Be a Game Changer for Marketing AI in 2025: 4 PR industry leaders discuss company policies, training, use cases and more The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Hello everyone and welcome to four immediate release episode number 4 63. I’m Neville Hobson. And I’m Shell Holtz reports on how communication departments are moving from AI experiments to serious strategy driven deployment of Gen AI are proliferating. Although I’m still mostly hearing communicators talk about tactical uses of these tools. The fact is you need to start with strategy or don’t start at all. That’s the conclusion of Cassie. Kako, Google’s former chief decision scientist who warns leaders that Gen AI only pays off when you define why you’re using it and how you’ll measure value. She calls Gen AI automation for problems that have endless right answers. Now that. Warrants a little explanation. Traditional ai, she says, is for automating tasks where there’s one right answer using patterns and data. It’s gen AI that automates tasks where there are endless right [00:01:00] answers and each answer is right in its own way. This means old ROI, yardsticks won’t work. Leaders have to craft new metrics that link every Gen AI project to. Not just a cool demo. This framing is useful because it separates flashy outputs from real, genuine impact. With that in mind, we’re gonna look at a few comms teams that are building gen AI programs around a clear, measurable strategy right after this. Well, let’s start with Lockheed Martin’s Communications organizations, which set a top down mandate. Every team member is required to learn enough gen AI to be a strategic partner to the business. They hit a hundred percent training compliance early this year. They published an internal. 
AI Communications Playbook filled with do and don’t guidance Prompt templates, a shared prompt library, and monthly newsletters that surface new [00:02:00] wins. There are a few reasons that this is a worthy case study. First, the team generated savings. You can count, for example, a recent video storyboard project ran 30% under budget and cut 180 staff hours. The team has fostered a culture of experimentation. Uh, there’s a monthly AI art contest that they. Host inviting communicators to practice prompting in a low risk environment, helping them learn prompt craft before they touch billable projects. And the human in the loop discipline is built into the team’s processes. Gen AI delivers the first draft or first visual. Humans still own the final story. The takeaway, Lockheed shows that enterprise rollouts scale when you train first, codify governance. Next, then celebrate quick wins. Qualcomm corporate comms manager, Kristen Cochran Styles said Gen A is now in our DNA. Qualcomm’s comms team is leaning on edge based gen AI, running models on phones, [00:03:00] PCs, and even smart glasses to lighten workflows while respecting privacy and energy constraints. Uh, they have a device centric narrative. They don’t just talk about on debate on. Its comms group uses the same edge pipeline that it promotes publicly. They have faster iterations occurring in their processes, drafting reactive statements, tailoring, outreach to niche reporters and summarizing dense technical research all happen at the edge, shaving hours off typical cycles, and there’s alignment of their reputation because they’re eating their own dog food from their own silicon powered AI stack. Qualcomm’s comms team reinforces the brand promise every time it ships content. Let’s. Take a look next at VCA, uh, chain of veterinary clinics. One of them was the one that I take my dog to. Joseph Campbell’s, a comms leader at VCA and he’s echoed the strategy first mantra. He noted that 75% of comms pros now use gen [00:04:00] ai, but more than half of their employers still lack firm policies. A gap he finds alarming. Campbell’s rule of thumb. AI can brainstorm and polish, but final messaging must. Obtain human creativity strategy and relationship building. VCAs approach involves sandboxing with teams practicing in non-public pilots before committing anything to external channels. Crafting guardrails is treated as urgent change management work, not paperwork. So they’re developing their policies in a very deliberate way, and they have an ethics checklist. Outputs go through fact checking and hallucination screen steps just like any other high stakes content. Now these individual stories of teams employing gen gen AI strategically sit against an industry backdrop that’s moving fast with tripling of adoption. Three out of four PR pros now use gen ai. That’s nearly three times the level from March of last year. Uh, and [00:05:00] efficiency gains are clear. 93% say AI speeds their work. 78% says it improves their quality, but speed. By itself isn’t value. Cassie Coser Cove’s Endless right Answers framework reminds us Comms leaders still have to specify which right answers matter to the business. So let’s wrap this up with six quick takeaways for your team from these case studies. First, tie every Gen AI experiment to a business result. Whether it’s fast or first drafts, budget savings, or higher engagement, write the metric before you. Invest in universal literacy. Lockheed’s a hundred percent training. 
Target created a shared language, a shared context, and without that, AI initiatives are gonna stall, codify, and update guardrails. VCAs governance, sprint shows policies can be an after, can’t be an afterthought. They’re the trust layer that lets teams scale gen AI responsibly. [00:06:00] Prototype publicly when it reinforces brand stories. Qualcomm’s on device PR work doubles as product proof and keep humans critical in every example. Communicators use AI for liftoff, then rely on human judgment. For nuance, ethics and style communicators have next desktop publishing social. Gen AI is bigger than these. It won’t just make us faster. It will change how we define good work. That’s why the strategic questions upfront, what does value look like and how will we prove it matter more than which model or plugin you pick. Good insights in all of that. Uh, shell, I guess the first thought in my mind, it makes me wonder how do those who argue against using AI and, uh, what, what’s prompted that thought as an article? I was reading, uh, just this morning about, uh, an organization where the leadership don’t prohibit it. No one uses AI [00:07:00] on the belief that, uh, it doesn’t deliver value, and it minimizes the human excellence that they bring to their client’s work. I wonder what, uh, they would say to things like this, because there are examples everywhere you look and you’ve just recounted a load of the advantages of using artificial intelligence in business. I was reading one of the other articles that you shared, which you didn’t talk about on the examples that Mons, uh, which is really quite interesting, itemizes, how they, how AI plays a large role in their marketing, uh, for instance, to create digital advertising content. Product display pages, uh, towards high level creative assets including social media content and video ads. They talk about though the 40 ai augmented campaigns that they have implemented, which they say have led to measurable improvements in brand awareness, market share, and revenue. And that compliments all the examples you were saying. They also say, rather than replacing humans, AI assist the, in refining their ideas and generating content. The key role of humans is to ensure brand distinctiveness and [00:08:00] originality. That simple. Those two simple phrases really resonated with me because AI assists the humans, and the key job of the humans is to ensure brand distinctiveness and originality. And that to me is, makes complete sense. So, uh, AI delivers significant value and they talk about the, uh, the metrics they have. Uh, here’s a one, uh, they say when start delivering two. And if you can do that 1% better, that adds up to significant volume gains and significant growth in of net revenue. Then, then it’s just the beginning and AI is delivering that according to, so these, these add to the, to the, uh, collection. Of, uh, what I call validation points for the benefits of using a particular tool, particularly when you focus on the human element in it. So they’re all great examples. Uh, and I think you, you mentioned at the start that too much of the, uh, activity we hear about is focused on tactics, [00:09:00] and this is full of it. It links it all to strategic aspects. Uh, it’s not just the, uh, the improvement in this and the 250 trillion impressions, although that’s pretty extraordinary. It seems to me these are real learning insights that you can get from all this kind of stuff. And, you know, I love reading all this stuff, so it’s good to see it. 
I have to say. I, you know, in communication we talk about strategic planning as a core competency in the profession and IABC conferences and in textbooks, the strategic planning process is outlined repeatedly. I mean, there are, are are different models and different approaches, but it’s always based on what is it that you’re trying to accomplish. At the end of the day, you’re not trying to accomplish writing a good headline. Right. You’re trying to accomplish, uh, having somebody read the article because it had a good headline and walk away ready to buy your product or ready to vote for your candidate, or [00:10:00] whatever it it may be. And it seems like. Even though we have embraced this as a profession in general, we have by and large forgotten it when it comes to Gen ai just because we get so excited by the immediately evident capabilities, the ability to gimme five headlines in different styles. So I can. Pick one or, or adapt one to, uh, to, to, to what I wanted to say, create this image. I mean, there’s nothing wrong with that. These are all great uses of the tool, but ultimately we have to look at where it delivers value that aligns with the goals that we’re trying to achieve on behalf of the organization. And you talk about those organizations that say there is no value. I, I would suggest either they’re not looking, they have a, a bias against it at the leadership level. Or they have people at lower levels who haven’t figured out how to demonstrate that value, and therefore leaders are convinced that there isn’t any. But if you look at the examples we’ve shared here today, it, [00:11:00] it’s clear that you can align what you’re doing with Gen ai. To your organization’s business goals and your strategic plan and your business plan and the like, there’s, there’s, there’s no question that you, you can, uh, the question is why aren’t more people doing it? I completely agree with the decision scientists from Google’s belief that if you’re not being strategic about it, why are you doing it at all? Yeah. I mean, I think to me the, the key thing to keep ing, and this could well be the kind of circling point you come around to, to repeat together again, as Mondelez says, while AI has been a game changer for them, it takes human ingenuity to get the most out of a technology that is available to everyone. And that, uh, is a point you mentioned from one of the examples that you gave that, um, how AI. Augments as opposed to replace or instead of that people talk about. Sure. But this needs emphasizing, I think, in a much, much bigger way. So Mondelez says, uh, again, a real simple point, but it’s, it’s good to say it. They [00:12:00] think AI is gonna help you do everything from creation of the brief all the way to actual actually trafficking the effort and putting it out into market. It’ll help you. So, um, that bears repeating, it’s not gonna do any of, all of that or any of that. It’s gonna help you do all of that. Hence, you know, AI augmenting intelligence. And I saw another different use of that phrase the other day, which has escaped my memories. Obviously wasn’t very memorable, but it was another example of it’s the human, that’s the key thing. Uh, not the technology, the technology tool that enables these things. So people’s eyes roll my view, leadership. No. 
And I think if leadership is going to pay attention to this in a way that is meaningful to the organization, there has to be an effort to bring managers into the loop to, so that managers can help their employees feel good about this. Understand, and we’ve talked about the role of the manager here before. Yep. But this, this is a critical one, is the emotional [00:13:00] side of managing. When you have a team of people who are confused and distressed and, and maybe worried about their futures with ai to be able to assuage those concerns and pull people together into a team that works with these things so that they do deliver that value, that’s going to increase the value of that team and of those individuals. So there’s a lot of work to be done here, and it’s heartening to see organizations like VCA and Qualcomm and Mondelez doing it. Well and doing it right and, and the more these case studies we can see, the easier it’s gonna be for other organizations to basically adapt those concepts. Yeah, I agree. And on the case of, on the part of Mondelez, the article was published in a publication called Knowledge at Wharton from, uh, the Wharton School University of Pennsylvania. I was quite at the end of April. Uh, I was actually quite amused to see the final text at the end saying that this article was partially generated by AI and edited with additional writing by knowledge at Wharton [00:14:00] staff. Curious about what the additional writing is. Uh, but that there, I would argue that’s a simple but good example that’s fully disclosed of the role AI played in them. Being able to tell that particular story. I don’t think that diminishes anything. If anything, it’s additional to it, hence the additional. Uh, in the, in the, I was gonna ask, did you, did you find the article less readable because it was partly written by ai? Well, now I know that. How could I tell? That’s the thing. They disclosed it and, uh, it’s good for them. I don’t think they needed to do that. Again, it depends on how they felt. They don’t say what percentage of the additional was AI generated, but I would imagine, again, a good example. To me, it seems that you’ve got something that you wrote and you running it by your AI assistant to check for. The flow tone, all those things you kind of do. With Grammarly a bit, I think at the very least, if you’re using Word, you can use the grammar checker and all those tools in there. Not very good. Nothing nearly as [00:15:00] good as an AI tool to do these things. So that’s already with us and has been for quite a while. It’s getting better, but the human element is absolutely critical. So it would be interesting to know what that additional writing was said, but it’s a good example. It is. And that’ll be a 30 for this episode of four immediate release. The post FIR #463: Delivering Value with Generative AI’s “Endless Right Answers” appeared first on FIR Podcast Network.
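One of the takeaways in this episode, write the metric before you invest, is easy to operationalize. Below is a minimal Python sketch of an experiment log that forces a business metric, a baseline, and a target to be declared before a gen AI pilot starts. The field names and the example numbers are illustrative assumptions, not figures from Lockheed Martin, Qualcomm, VCA, or Mondelez.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class GenAIPilot:
    """Register a gen AI experiment only after its business metric is written down."""
    name: str
    business_goal: str          # the outcome the organization actually cares about
    metric: str                 # how success will be measured
    baseline: float             # value of the metric before the pilot
    target: float               # value that would justify scaling the pilot
    higher_is_better: bool      # direction of improvement for this metric
    started: date
    result: Optional[float] = None   # filled in after the pilot runs

    def verdict(self) -> str:
        """Summarize whether the pilot delivered measurable business value."""
        if self.result is None:
            return f"{self.name}: still running"
        improved = (self.result >= self.target) if self.higher_is_better else (self.result <= self.target)
        status = "target met" if improved else "target missed"
        return f"{self.name}: {self.metric} moved {self.baseline} -> {self.result} ({status})"

if __name__ == "__main__":
    # Illustrative numbers only; they are not from the episode.
    pilot = GenAIPilot(
        name="AI-assisted storyboarding",
        business_goal="cut production time for internal video",
        metric="staff hours per storyboard project",
        baseline=600.0,
        target=450.0,
        higher_is_better=False,
        started=date(2025, 1, 15),
        result=420.0,
    )
    print(pilot.verdict())
```

The design choice worth copying is that the metric and target are constructor arguments: the experiment cannot be logged at all until someone has answered the "what does value look like and how will we prove it" question up front.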
ALP 269: Pricing psychology for agency clients
In this episode, Chip and Gini discuss the psychology of pricing within agencies. They cover topics such as the importance of being confident in your pricing, avoiding negotiating against oneself, and the benefits of pricing. Gini highlights her experiences with male and female negotiators, emphasizing how women often undervalue themselves. The duo debates the effectiveness of the ‘three pricing options’ strategy and its pitfalls. They also offer practical advice for owners to ensure their pricing sends the right message to clients and reflects the true value of their services. [read the transcript] The post ALP 269: Pricing psychology for agency clients appeared first on FIR Podcast Network.
ALP 268: Identifying and managing agency owner burnout
In this episode, Chip and Gini discuss the prevalent issue of burnout among agency owners. They explore the different types of burnout, including cyclical and long-term burnout, and offer strategies to identify, cope with, and prevent it. Key recommendations include taking regular breaks, understanding personal energy drains and boosts, and adjusting work habits accordingly. They emphasize the importance of self-care, realistic time management, and the necessity to avoid making major decisions while burned out. Chip and Gini also share personal experiences and practical tips to help agency owners manage their workload more effectively. [read the transcript] The post ALP 268: Identifying and managing agency owner burnout appeared first on FIR Podcast Network.
FIR #462: Cheaters Never Prosper (Unless They’re Paid $5 Million for Their Tool)
A Columbia University student was expelled for developing an AI-driven tool to help applicants to software coding jobs cheat on the tests employers require them to take. You can call such a tool deplorable or agree with the student that it’s a legit resource. It’s hard to argue with the $5 million in seed funding the student and his partner have raised. Also in this long-form monthly episode for April 2025: How communicators can use each of the seven categories of AI agents that are on their way. LinkedIn and Bluesky have updated their verification programs in ways that will matter to communicators. Onboarding new talent is an everyday business activity that is in serious need of improvement. A new report finds significant gaps between generations in the PR industry when it comes to the major factors impacting communication. Anthropic — the company behind the Claude LLs — warns that fully AI employees are only a year away. In his Tech Report, Dan York explains how Bluesky experienced an outage even though they’re supposed to operate under a distributed model. Links from this episode A Deep Dive Into the Different Types of AI Agents and When to Use Them Ethan Mollick’s LinkedIn post on ChatGPT o3’s agentic capabilities LinkedIn post on rumored OpenAI-Shopify integration I got kicked out of Columbia for building Interview Coder, AI to cheat on coding interviews Cluely Columbia student suspended over interview cheating tool raises $5.3M to ‘cheat on everything’ From the singularity community on Reddit: “Invisible AI to Cheat On Everything” (this is a real product) I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything LinkedIn will let your verified identity show up on other platforms Bluesky’s Blue Check Is Finally Here Burning questions (and some answers) about Bluesky’s new verification system Bluesky Adds Blue Check System With a Twist A New Form of Verification on Bluesky – Bluesky Bluesky’s newly unveiled verification system is a unique and interesting approach How To Onboard Digital Marketing Talent According To Agency Leaders Center for Public Relations’ Global Communication Report uncovers key industry shifts and generational divides Exclusive: Anthropic warns fully AI employees are a year away AI: Anthropic’s CEO Says All Code Will Be AI-Generated in a Year Hacker News on Anthropic Announcement AI as Normal Technology Links from Dan York’s Tech Report Wait, how did a decentralized service like Bluesky go down? Manton Reece – Bluesky downtime New Features for the Threads Web Experience Facebook cracks down on spammy content by cutting reach and monetization WordPress 6.8 “Cecil” The next monthly, long-form episode of FIR will drop on Monday, May 26. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript Neville Hobson: Greetings everyone, and welcome to for immediate release episode 462, our monthly long form edition for April, 2025. Neville Hobson in. 
Shel Holtz: I’m Shell Holtz in Concord, California in the us. We’re thrilled to be back to tackle six topics that we think communicators and others in business will find interesting and useful. Before we jump into those topics, though, as usual, in our monthly episode, we’d like to recap the shorter episodes that we’ve recorded since the last monthly, and we’re. Neville over. I think we’re, Neville Hobson (2): yeah, I think we are. Shell, uh, episode 4 56. That was our March monthly recorded on the 24th of, or rather, published on the 24th of March. Um, a lot of topics in that one, they addressed variety of issues. Uh, for instance, uh, publishing platform ghost enabling the social web by employees quitting [00:01:00] over poor communication in companies, the UK newspaper launching AI curated news. And there were three or four other topics in there too. Plus Dan York’s tech report as usual. So that’s a mighty episode. And. Shel Holtz: We did on the topic of whether artificial intelligence will put the expertise of practice by communicators at risk. Julie MayT wrote, it’s not about what we do anymore, but how we think, connect and interpret. Human value isn’t disappearing. It’s shifting, isn’t it? The real opportunity isnt doubling down on creativity, context and emotional intelligence by communicating with kindness and empathy. Looking forward to tuning in. And Paul Harper responded to that comment saying, my concern is that AI, for many applications completely misses emotional intelligence, cold words, which are taken from the web, which does not discriminate between good and bad sources, truth or fake. And Julie responded to that saying, good point, Paul. When it comes to important [00:02:00] stuff where it really matters whether AI is giving us something real or fake, I usually ask for the source and double check it myself. Chachi PT also has a deep research function that can help dig a bit further. Neville Hobson (2): Okay, so our next 1, 4 57 that was published on the 28th of March. And this I found a, a really interesting discussion, very timely one, talking about communicating the impacts of Mr. Trump’s tariffs. And we talked about that at some length. Our concluding statement in that episode was communicated should counsel leaders on how to address the impacts of those tariffs. And I believe we have a comment on that show Shel Holtz: from Rick Murray, uh, saying So true business models for creative industries are being turned upside down, revenue and margin streams that once fueled agencies of all types don’t need to exist now and won’t exist in three years. Neville Hobson (2): Well said Rick. Well said 58, which we recorded or published on the 3rd of April. This was, I thought, a [00:03:00] really interesting one, and we’re gonna reference it again in this episode. This was about preparing managers to manage human AI hybrid. Teams, um, a lot of talk about that and that how, uh, uh, uh, that we are ready or not for this, it’s on the horizon. It’s coming where we will have this in workplaces, and we talked about that at some length in that episode. Uh, looking at what it means for managers and how far businesses from, uh, how far it is from enabling their managers to succeed in the new work reality. We also added a, a kind of a, a mirror or a parallel element to this, that it’s also helping employees understand what this means to them in the workplace if they got AI colleagues. So, um, I don’t think we had any comments to that one. 
She, but it’s got a lot of views, so people thought about that, just didn’t, didn’t have any comments at this point, but great topic. Uh, I think Shel Holtz: left, left them speechless if we did. Neville Hobson (2): Yeah, exactly. So, uh, maybe we’ll get some after this episode in nine that we publish on the 9th of April that [00:04:00] looked at how AI is transforming content from ive to interactive. We discussed the evolving landscape of podcast consumption, particularly in light of Satya Nadal, the CEO of Microsoft, his innovative approach to engaging with audio content through ai. So not listening to the podcast, he has his, uh, chat bot of, uh, his favorite chat bot, not chat, GBT of course, it’s co-pilot that, uh, talks to the transcript and ge he engages that way. Interesting. Uh, I’ve seen comments elsewhere about this, that, that say, why on earth do you wanna do this? But you can listen. Well, everyone’s got different desires and wishes in this kind of thing. Uh, but it seems to me a feasible thing to do it the, for the reasons he describes why he’s doing it. And I believe it attracted a number of comments. Did it not show. Shel Holtz: We did, starting with Jeff Deonna, who wrote, to be honest, I find this approach deeply disrespectful to podcast hosts and their guests. It literally silences their human voices in favor of a fake conversation with a solace [00:05:00] algorithm. Now, I responded to that. I thought that Cliff notes would be a reasonable analogy. People rather than reading Silas Marner, uh, read the Cliff notes where some solace Summarizers outlines the story and tells you who the key characters are so that you can a test and it silences the voice of the author, author. And yet we didn’t hear that kind of objection to Cliff Notes. We’ve heard other objections. Of course, you should read the whole damn book. Right? But I think people have been summarizing for years. Executives give reports to their s and say, write me a one page summary of this. And now we’re just using. AI to do the same thing. I don’t know if you had any additional thoughts on Jeff’s comment. Sure. Neville Hobson (2): I left a comment to his, uh, comment. I just reply to his comment as well, saying that, uh, I didn’t say these words, but effectively it was a polite way of saying I disagree. Sorry, you’re not right with this for the reasons you’ve, you’ve outlined. I don’t have the comment open on my [00:06:00] screen now, so I can’t the exact words I used, but I thought I couldn’t let him get away with, with that, without a response. Shel Holtz: Well, we had another comment from Kevin Anselmo, who used to do the Higher Education podcast on the FIR Podcast Network. He said, I asked chat GPT to summarize your podcast transcript. After receiving the below chat, GPT provided practical advice on actioning the takeaways in my own projects. Interesting exercise, and I will not read everything he pasted in from chat GT’s analysis of the transcript of our podcast. But I’ll, I’ll tell you what the five key takeaway labels are. Transcripts are becoming essential. A ai AI makes podcasts interactive. Most people still prefer ive listening. AI is going multimodal. And then there’s a notable quote from the podcast, so that was, uh, turnabout. I mean, we’re talking about what would happen if people didn’t listen to the authentic voices. Well, you know, Kevin didn’t have to listen to us. I’m fine with that. 
If he [00:07:00] walks away with actionable items based on hearing or reading a summary of our transcript, one more way to get to it. I agree. And Mark Hillary wrote, why would you need a transcript for chat GPT though? Just feed it the audio and it could work out what is being said. Anyway, I. Neville Hobson (2): Yeah, I replied to him as well. We had quite an interchange. I can’t if it was on LinkedIn or on on Blue Sky, I can’t which, which service now. Um, but um, he was gonna go and experiment himself with something else. Uh, ’cause what he described, and someone else was left to comment about this as well. Actually, I think that was on Blue Sky too, that, um, talked about, uh, you know, why would you wanna do this a bit bit like GE actually, not like Jeff. It wasn’t just alleging disrespect, it was saying, why would you wanna do this? Um, when I, you know, it was actually Mark who said he’d ed an MP three. And, uh, it had done the job. It actually hadn’t, uh, chat. GPT got the MP three, created the transcript from it, and then it did what it [00:08:00] needed to do. So the transcript is essential to. Shel Holtz: Whether you created Issa. Nevertheless, Neville Hobson (2): these, these, yeah, these, these great comments are, are fab to have these I must have been extends the conversation. Okay. So then four 60, which we published on April the 14th. This one talked about layoffs in the United States primarily, and the return of toxic workplaces and the big boss unquote era. Uh, the tide is turning. We started off and assessed that I mentioned. We’re seeing not, not the same and not layoffs per se, but people quitting here in the UK for different reasons. But this turmoil in this and toxicity in the workplace is part of the reasoning. So we explore the reasons behind the layoffs in the US are the impact of CEO Tough talk and how communicators can help maintain a strong non-toxic workplace. So that was good. We have comments too, don’t we? Shel Holtz: We do.[00:09:00] Starting with Natasha Gonzalez who says something that stood out for me was a point that Neville made about employees in the UK who are reg from jobs due to toxic workplace culture, rather than being laid off as in the us. I imagine this isn’t unique to the uk. And then Julie MayT, who was the first comment she’s going to bookend our comments, wrote that organizations in the US are starting to see we cracks in psychological safety and trust disappearing. Then all those folks who keep everything ticking along will start to quietly disengage. It’s up to us, calms people to be brave enough and skilled to say on a wee minute, that message isn’t landing the way you think it is. While the big wigs are busy shouting, spinning, and flexing, it’s us who need to rock up with the calm, clear human communications, no drama, ram, just stuff that makes sense and actually help folks to figure out what the hell is [00:10:00] going on and what to do next. Neville Hobson (2): Good comment Mr. Bit. And that takes us to the last one before this episode, episode 4 61. We published on the, on the 24th of April that looked at trends in YouTube video two reports in particular that really had interesting insights on virtual influences and AI generated videos. And the bit that caught my attention mostly was, uh, news that every video ed to YouTube. So you take your video, you it, um, uh, can be dubbed into every spoken language on the planet, uh, with the, with the speaker’s lips reanimated to sync with the words they are speaking. 
I mean, this is either terrifically exciting or utter nightmare that, uh, that is approaching fast. So, um, we talked about that and uh, we haven’t had any comments to that one yet, but this is a topic I see I’m seeing quite a bit being discussed online in various places. So this is just a start of this, I think. [00:11:00] So that takes us to the end of the recap show, Shel Holtz: so I didn’t see it. Okay. Lemme talk about that. Neville Hobson (2): And last but certainly not least, I want to mention a new interview that, uh, that we posted on the 23rd of April. This was with Zoa artists in Australia who we interviewed on an article she wrote in the populous blog on bridging AI and human connection in internal communication. It was a really, really good discussion we had with, uh, it’s definitely worth your time listening to this one. You will learn quite a lot from what or Zoa has to say on this topic. What did you think of it? She, it was good, wasn’t it? Shel Holtz: It was fascinating and I read that, that post in the popular blog and also was engaged in a conversation with Zuora at the Team Flow Institute where we’re both research fellows and she raised it and it led to a conversation with all the fellows [00:12:00] and this notion of what would a board of directors do if AI was in the room with them right now? What would they use it for? How would they take advantage of it to some fascinating discussion. So worth a listen. Also up now is episode number 115 of Circle of Fellows, the monthly livestream discussion that people who watch live are able to participate in in real time. This was about communicating amidst the rise of misinformation and disinformation. Brad Whitworth moderated this installment of Circle of Fellows with ists, Alice Brink, Julie Holloway, and George McGrath. Sue Human was supposed to participate, but woke up feeling ill, but did send in some written contributions that, uh, were read into the discussion. So a good one. I’ve, I’ve listened to it. You should too. It’s a very timely topic. And just to let you know about the next Circle, circle of Fellows, episode one [00:13:00] 16 is scheduled for noon eastern time on Thursday, May 22nd. The topic is moving to teaching. This is something a lot of communicators do is become adjunct professors or full professors, or even tenured professors. And we’ll be having a conversation with four IABC fellows who have done just that, Cindy smi, John Clemens, mark Schumann, and Jennifer W. And in fact, I’m speaking at Jennifer W’s class via Zoom pretty soon, so that’ll be a fun one too. You can mark that one on your calendars May 22nd noon eastern time, and that’ll take us to the start of the coverage of our topics for this month, but only after we turn things over to an r for a moment.[00:14:00] As we have been discussing for some time, AI agents are coming and to a degree they’re already here. Ethan Molik, the Horton professor, and ai, I guess you’d call him an AI influencer. He posted this observation to LinkedIn a few days ago. He wrote, I don’t think people realize how much, even a mildly agentic AI system like chat PT oh three can do on its own. For example, this prompt works in oh three zero shot. Come up with 20 clever ideas from marketing slogans for a new mail order. Cheese shop. Develop criteria and select the best one. Then build a financial and marketing plan for the shop, revising as needed, and analyzing competition. 
Then generate an appropriate logo using the image generator and build a website for the shop as a mockup. Making sure to carry five to 10 cheeses to fit the marketing plan. With that single prompt in less than two [00:15:00] minutes, the AI not only provided a list of slogans, but ranked and selected an option, did web research, developed a logo, built marketing and financial plans, and launched a demo website for me to react to the fact that my instructions were vague and that common sense was required to make decisions about how to address them was not a barrier. And that’s an open AI reasoning model, not an actual agent. Built to be an agent to take on autonomous tasks in sequence multiple tasks in pursuit of a goal with agents imminent. HubSpot shared a list of seven types of agents in a post on its blog, and I thought it would be instructive given what Professor Mooch wrote to, to go over these seven categories or classes of agents and where they intersect with what we do as communicators. Now I, I’ll give you the caveat that. Somebody else may develop a different list. Somebody else may slice and dice the [00:16:00] types of agents differently, but this is the first time I’ve seen this categorization, so I thought it was worth going through. They start with simple reflex agents that operate based on direct condition action rules without any memory of anything that you may have interacted with it about before. So in PR, we could use this for automated media monitoring alerts set up agents that trigger. Instant alerts based on keywords that, uh, appear in news articles or on social media that lets you respond quickly. Uh, you could have some basic chat bot responses, you right, simple chat bots on internal or external platforms that will answer frequently asked questions with pre-programmed answers about things like, I don’t know, office hours, basic company information, dates of events. And then you could filter inbound communication, automatically flag or filter incoming emails or messages based on keywords that indicate urgency or specific topics and route [00:17:00] them to the appropriate team member to respond to it. The second type of agent is a model-based reflex agent. These maintain an internal model of the environment to make decisions considering past states as well as what you’re asking it to do right now. So you could use a contextual chat bot to develop these chat bots for websites or, or internal PO portals that can maintain conversational context. It can previous interactions, and then provide more relevant information or when the employee or the customer comes back for, for a follow-up or for additional information. Do sentiment monitoring with that, that historical context. Agents that track media or social media sentiment over time can identify trends and, and give you historical context to current conversations. So you know, something’s being discussed around the organization. It can say, well, you know, two weeks ago this conversation happened then that weighs on what’s going on in these [00:18:00] conversations today. And then there’s automated information retrieval, uh, agents that can access and synthesize information from internal databases or external sources based on what you ask it. Uh, providing more comprehensive answers than you get from the simple reflex agents. Goal-based agents make decisions to achieve a specific goal, planning a sequence of actions to reach that objective. 
This is what most of us think about when we’re thinking of agents, automated press release, distribute distribution, social media, campaign management, internal communication, workflow automation. This is all possible here. I think I, I referenced on an earlier episode that I used an agent, a test agent that I think was Anthropic had set up, and I had it go out to my company’s website, identify our areas of subject matter expertise, and the markets we’re in. Then go out and find 10. Good podcasts with large audiences where we [00:19:00] could pitch our subject matter experts as guests and it would be an appropriate pitch. And I sat back and watched while it did all of these things. So this is what we’ve got coming. Fourth are utility based agents that choose actions that maximize their utility or a defined performance measure considering various possible outcomes. Uh, we can use these to optimize communication channel usage, right? Analyze how audiences engage across different communication channels and recommend the most effective platforms for specific messages or, uh, desired reach or desired impact. I can use this for crisis communication, simulation and planning. Personalized communication delivery. Fifth is learning agents that improve their performance over time by learning from their experiences. You can use this to refine your message targeting, to improve, uh, the, the natural language understanding of chatbots that are engaging with customers or employees or whoever. And to predict [00:20:00] communication effectiveness. They can analyze a number of factors like message, content, timing, audience demographics. To predict the potential reach and impact of your communications, letting you make adjustments. Sixth are hierarchical agents that break down complex goals into smaller, more manageable sub goals. Here you’ll have higher level agents overseeing the work of lower level agents, so you’ll have a human manager managing an AI agent who manages AI agents. These for large scale communication projects, multi-channel campaigns, and and streamlining the approval process or use cases. And finally, there are multi-system agents. These are multiple agents interacting with each other to achieve a common goal or individual goals. Integrated communication, planning and execution. Managing online reputation with agents, monitoring different online platforms, analyzing sentiment, coordinating responses or engagement based on a unified strategy, and then [00:21:00] cross departmental communication coordination. So we need to understand the distinct capabilities of these different types of agents, and if we do, we’ll be able to leverage them to automate, to gain deeper insights, to do better personalization and better achieve our objectives. And I think, I think this is also a, a, a good point to mention. I have not had a chance to, to read it because you said you saw it and commented on it today. It’s still early here where I am. But Zora Artis, our interview guest posted something that kind of fits in here too, right? Neville Hobson (2): Yeah, she shared a post from LinkedIn, which I found quite intriguing. Uh, written by, uh, Jade Beard Stevens, who’s the Director of Digital and Social Innovation at YMU in London. Brief post, but it says it all, I gotta read it out. It’s quite, quite short. Uh, she says I wasn’t shocked, but still had to share. This rumor has it that open AI is quietly working on a native Shopify checkout. Inside chat. 
GPT apparently leaked code shows Shopify checkout, [00:22:00] URL Buy Now product offer ratings. No redirects, no search, just chat compare and buy in one flow. If this happens, Google, TikTok, even product pages as we know them are all about to change. This isn’t just another e-commerce update. This is the merger of search and checkout. This is AI becoming the new storefront. Brands will need to optimize for AI’s first visibility, not just SEO. This could be bigger than TikTok shop, and it’s already happening. Now, is this a agent ai? I don’t know. Shell, it’s, it’s, it’s kind of fits somewhere in, in this overall picture of, uh, tools, emerging methods emerging. Uh, look at the seven things you, you read out. Uh, there’s some real interesting stuff in there to, to deep dive into, but what Jade mentions is definitely something to pay attention to, even if you’re not in retail or in e-commerce or any of that. There’s a huge, not huge kind of developing conversation on Reddit about this, which has some more, in more detail on what’s happening. I did a quick search on [00:23:00] this. This is generally this topic to see, you know, anything else talking. I did find something, which isn’t this, this is gonna replace this other thing that I found, I think, which is a Shopify AI chatbot via chat, GPT as the title of the app goes, uh, put out by, um, uh, not, not Shopify beg, pardon? Shockly. A company called Shockly that, uh, builds, uh, tools to, for, for vendors on Shopify to, to sell their stuff. This isn’t it, but this has been around since September of 2024, and it is actually quite interesting. It’s an app you install. I see it’s got, uh, just under 30, uh, ratings, all five out of five stars from vendors. Um, it is all to do with, uh, enabling your whole, uh. Storefront using a, a tool from chat chat, GPT. What, um, Jade’s article talks about is this sort of [00:24:00] thing happening natively within Shopify. So that’s a slightly different proposition, but something like this is coming, so you’ve already got third party apps doing this. Now you’re gonna have a native app doing this. And if it is, um, well, I don’t wanna get hung up on the word digic here, but if, if this is, uh, uh, enables you to, to complete the whole buying process, from interest to purchase, to g up and paying for it all within chat GPT, that will, uh, a appeal to quite a few people. I think if it’s offered something better, faster, or less stressful, less hassle, easier than doing it otherwise in, in, uh, in Shopify, it’ll attract attention. So add this one to the list of things to pay attention to as well. Shel Holtz: Yeah, and whether that’s part of an agent or not, I think depends. It could absolutely be, uh, I could see how that would work in an agent tech environment. I’m thinking of giving the, the agent the [00:25:00] assignment of buying me a new mirrorless camera, as long as I provide it with the criteria, my price limit of the features that it needs to have, how soon it can be delivered, which brands I don’t want you to consider, uh, but go out and do comparisons of the different models, uh, from different manufacturers that meet my criteria. Then do price comparison to find the best price. Once you have found the best price, buy it and have it delivered so that I don’t have to do anything else. That’s an agent. So again, you know, if there’s price at the end, what can communicators do with that? 
I don’t know how much the PR folks can do with that, but the marketing side of the house can probably do a ton with that. Neville Hobson (2): Yeah. So one more to pay attention to. I was looking through the HubSpot article you referenced, and I, it’s a couple things in there that I, that struck me, uh, their views. Uh, one where they talk about under the, uh, autonomous AI agents paragraph, it’s always a good idea to keep a human involved in any AI operation. Absolutely [00:26:00] agree with that. Um, a lot of very useful, uh, information in HubSpot’s piece. Uh, some good explainers of what some of this stuff means. And then, um, uh, the answer to the question about preparing for an agent, ai future experimenting. I think the concluding sentence is probably the kind of, okay. Summarize the whole thing into this. The future is agent. Will you be ready now? That’s what we asked in 4 58 when we talked about this topic, and I wonder if we’ll be asking it again after this one. We’ll see. Shel Holtz: Undoubtedly we’ll be asking this for some time because even after the agents. Have fully arrived and are available. Uh, I think there’s going to be a lot of people in our profession and across industry who are not ready Neville Hobson (2): opportunity for. Shel Holtz: And we’ll talk about that more when we cover another story later. Neville Hobson (2): We will. Yeah. So let’s take a look at something quite interesting that popped up in the last few days. [00:27:00] Imagine an AI tool that promises to help you cheat on everything from job interviews to academic exams. That’s exactly what clearly offers. Created by two former Columbia University students, Chung and Roy Lee and Neil Han Mugham clearly acts as an invisible AI assistant that overlays realtime onto any application a is running. It gained attention and controversy after Roy Lee was suspended from Columbia for using an early version during a job interview. Despite this, clearly has just raised $5.3 million in funding from investors promoting its vision of true AI maximalism, where AI can assist in any life situation without detection. The tool is designed to be undetectable, providing realtime suggestions during interviews, exams, writing assignments, and more, much like an augmented reality layer. But for conversation and tasks, ers argue it could level the playing field for those who struggle with traditional [00:28:00] assessments, but critics warn it crosses a serious ethical line, potentially devaluing qualifications and undermining trust in recruitment and academic credentials. Realtime interview assistants raises questions, not just about competence, but about honesty and disclosure. Rarely happens. Interestingly, the Verge tested it. Their real world testing found that clearly is still very rough around the edges. Technical issues, latency and clunky interactions make it more proof of concept than polished products, at least for now. And did I mention they just got over $5 million in investor funding? The founders defend the provocative framing. They describe cheating as a metaphor for how powerful AI assistance will soon feel. Much like the early controversies over calculators or spellcheck, as they say, not quite the same thing. I don’t think Shel, but so are we looking at the next Grammarly or are we opening the door to a darker future where nobody can be sure what’s real anymore? So question for you then Shell is what does this tell us about the [00:29:00] blurring lines between assistance and deception in an AI driven world? 
Shel Holtz: Well, I think there’s a couple of ways to look at this. I did hear Lee interviewed on Hard Fork. Uh, it was a great interview and he made a couple of points. First of all, he said that having been through these types of interviews, this is, uh, the kind of interviewing you do for a coding job. That the tests that they give you have absolutely no relevance to the kind of work that you’re doing. You’re gonna do this once for the interview, and then you’re never gonna do it again. So he doesn’t think that helping people. Figure out how to do that particular exercise is, is all that much of a cheat. But he also said that everybody programs with the help of AI these days and he says it just doesn’t make sense to have any kind of interview format that assumes you don’t have the use of AI to help you code. I absolutely see that point, but on the other hand, I think this is [00:30:00] just one instance of the kind of thing that AI is going to enable. And there will be times that it can be very problematic, much more problematic than in this case if somebody can cheat on, say their legal exam or their medical exam, then you’ve got a problem. Somebody who’s not prepared to go out there and and operate on you past the boards because they had help from a program that was written to help them cheat and . So it’s the type of thing that society needs to be thinking about and isn’t yet. Neville Hobson (2): So if I get this right from what you said, Roy Lee thinks it’s okay to cheat in coding ’cause it’s a stupid question to ask and you’re only ever gonna do it once. So therefore it’s okay to cheat. Meaning you actually pretend you do know how to do this even though you don’t. I mean, that is bullshit, frankly, truly. Don’t you think? Shel Holtz: Well, his his point is that, yeah, you, you don’t know [00:31:00] how to do it, but you don’t have to because you’re never going to on the job. Neville Hobson (2): So don’t, don’t, don’t, don’t even take the exam and don’t apply for that job. That’s what I would say. Shel Holtz: I guess then you don’t get any jobs, right? Well, cheating is Neville Hobson (2): cheating Shel Holtz: His point is that you’re, well, yeah, it’s cheating. Yeah. But he says his point is that the cheating in this instance isn’t going to affect your ability to do the job. Whereas in other instances, well, I’m still cheating. I’m not defending it. Understand. I’m just telling you what he said. Neville Hobson (2): Yeah, sure. Yeah. But it’s still cheating. I, I would say, I mean, it is, to me, this is the same as saying, or someone’s a little bit pregnant or, you know, I’m, I’m, I’m, you know, that kind of stupid kind of defensive argument. This is an indefensible situation in my view that Shel Holtz: of course, it used to be considered. Neville Hobson (2): Yeah, but no, no, you can’t. You can’t do it by degrees. She, I don’t believe, honestly, I don’t. You are cheating or you are not. And in this case, again, from how you describe what Roy Lee said, effectively it’s saying, well this is a dumb question to ask and [00:32:00] I’m never gonna do this again, so I’ll get this thing to do it for me basically. And that they won’t know this. That’s the other thing. They do not know this. They think, are you’s a smart guy? This fell, let’s give him the job. What a ridiculous outcome. And the other ones you mentioned in degrees, you know, taking legal exams or, or you know, ing to be a surgeon. Yeah, they’re serious too, but they’re all the same. They’re cheating. 
But I then kind of flip a bit by saying that this is society as we are. I’m afraid this is humans doing this. This will be out there. And this makes it even more difficult to know what’s true and what’s not, and who you can trust and who you can’t. So, you know, welcome to the new world there. Shel Holtz: I think the adaptation that has to happen has to happen on the part of the people conducting the interviews, not the people taking them. And the reason for that is, I mean, if you think about it, it used to be considered cheating to, to bring a calculator into, well, they mentioned that’s Neville Hobson (2): the argument he gives. Ridiculous. Shel Holtz: Yeah. Well, I mean, everybody’s allowed to use a [00:33:00] calculator now because the people that was 60, Neville Hobson (2): 60 years ago. Yeah. So maybe in 50 years this would be normal. Yeah. Shel Holtz: Who conduct the tests came to realize that the people who do the work are able to use calculators. So they should have been part of the test all along. So I think that’s a legitimate argument, not a, not a legitimate argument for cheating, but for updating the testing so that people don’t feel like they need to. Neville Hobson (2): So in the meantime, that’s not the landscape. So they need to develop it. So maybe the simplest way to do this is send your AI agent in to take the exam for you. Has that, Shel Holtz: well, there are people doing that for job interviews. Yeah, of course. They, they’re probably pretty close to that. Yep. We’ve seen some interesting developments recently with two platforms taking different approaches to verification, and I think some of this may be a little backlash to X, where now you can just buy the blue check mark and it doesn’t actually anything other than that you pony up the money for it. But LinkedIn and Blue Sky [00:34:00] have taken steps with their verification programs. Let’s start with LinkedIn, which is allowing verified identities to extend beyond its own platform. This change means your verified LinkedIn identity can now be visible on other platforms designed to enhance trust and transparency across the internet. The system leverages open standards and cryptographic methods to ensure authenticity and security. What makes this particularly interesting is how it integrates with Adobe’s technology. Adobe’s content credential system is one of the tools ing this cross-platform verification. So when you your identity on LinkedIn, that verification status can essentially travel with you to other websites and services that these standards, including Adobe’s Behance. Now, this is a site that helps creators and people who need to hire creators connect. Now, this is a fundamental shift in how verification works rather [00:35:00] than a siloed verification system on each platform. LinkedIn’s embracing an interoperable approach that lets your verified status function as a digital port of sorts. Now, while it’s too bad, this isn’t tied directly to the fedi verse protocols, the significance for communications professionals can’t be overstated. As content creation becomes increasingly distributed across platforms, having a verified identity that travels with you simplifies your ability to establish authenticity in multiple spaces. For organizations managing multiple spokespersons or content creators, this can streamline verification processes considerably. Meanwhile, blue Sky has taken a different but equally innovative approach to verification by introducing a new Blue Check system just last week. 
They're implementing what they call a user-friendly, easily recognizable blue check mark that will appear next to verified accounts. [00:36:00] The platform will proactively verify authentic and notable accounts, while also allowing trusted verifiers, select independent organizations, to verify accounts directly. Now, what's really interesting about Bluesky's approach is how it distributes verification authority. Under this system, organizations like the New York Times can now issue blue checks to their journalists directly within the app, and Bluesky's moderation team will review each verification to ensure the account is what it says it is. This creates a more decentralized verification ecosystem, rather than putting all verification power in the hands of the platform itself. Bluesky's verification system has transparency built in. Users can tap on someone's verified status to see which trusted verifier granted the verification. This adds a layer of context that helps users understand not just that the account is verified, but who [00:37:00] vouched for it. Before this update, Bluesky had been relying on a domain-based verification system letting users set a website they control as their username. For example, NPR posts under the handle @npr.org, and US senators verify theirs with their senate.gov domains. This method is going to continue alongside the new blue check mark system, which gives users multiple ways to establish authenticity. Now, the evolution of these verification systems comes at a critical time, with scammers and impersonators on the rise. A recent analysis found that 44% of the top one hundred most-followed accounts on Bluesky had at least one doppelganger attempting to impersonate them. For those of us working in organizational communication, these developments signal a series of important trends. First, verification is important, and it's becoming distributed and contextual rather than resting with a single authority declaring who's authentic. We're moving toward [00:38:00] ecosystems where multiple trusted entities can vouch for identity. Second, cross-platform verification is emerging as a solution to digital fragmentation. LinkedIn's approach in particular shows how verified identity could function seamlessly across digital spaces rather than being siloed within individual platforms. Third, transparency about who is doing the verifying is becoming important. Bluesky's approach of showing which organization verified an account recognizes that the source of verification matters almost as much as the verification itself. For organizations, these trends suggest that we really ought to be thinking more holistically about verification strategies. Rather than just getting verified on each individual platform, we're really going to need to start thinking about establishing verified digital identities that can travel with our content and our spokespersons across the net. Neville Hobson: Very interesting development. I [00:39:00] hadn't familiarized myself much with the LinkedIn one, but that's equally very interesting. Bluesky, though, to me is definitely moving ahead in a very interesting area, unlike X. I think you mentioned, Shel, that some people are seeing this as a slap in the face to Musk; that's probably way down the priority list, but yes, I bet they are. What I found most interesting is the way they've gone about this in terms of the levels of verification: you've got your little blue check mark looking slightly different depending on the verification system.
And by the way, I think it's a smart move to follow the blue check, although technically it's not a blue check, it's a white check on a blue background, but whatever; people call it a blue check mark because it's familiar thanks to Twitter as it was, and the man who trashed it completely, because there the only verification means you've paid Musk so many dollars a month and therefore you're verified. That's Twitter's, or X's, definition of what verification means. No value to it, in my view, Shel, frankly. But this, though, [00:40:00] I think is far more interesting, particularly the transparency about who has verified you. I've used my own domain, a domain I acquired back in 2023 for this purpose, as my handle: nevillehobson.xyz. Why .xyz, you might ask? Because at the time the metaverse was a big deal, NFTs were hot, and everyone who was anyone had a domain ending in .xyz. So hey, that's a bandwagon I'll jump onto, which I did. So I'm now using it, have been for a while, and it's only used for that purpose currently. Another thing to mention with Bluesky: you can't request verification at the moment; you are invited. That is, you might suddenly get a note from Bluesky saying they have verified you, or one of these other organizations might; if you use a domain with your employer, they can verify you. And there is something equally interesting on this. I'm not quite sure whether it's just a sample feature that will stay around or not, but you can actually verify yourself. I've [00:41:00] seen some people doing that. I haven't done it, because I can't see the point; the point of verification to me is trust in someone else having verified you, not you doing it yourself. So maybe that will disappear, or it'll have some other function, I don't know. But the transparency, according to the screenshots in Bluesky's announcement posts about this, is great. Very clear: so-and-so is verified; it says this account has a blue check because it's been verified by trusted sources. Then it lists who those sources are and the date they performed the verification. That adds a lot to the trustworthiness you perceive, rather than someone simply saying, yep, you're verified, you get a blue check. If you're an organization account, you'll have a different style of check. And these will all become quite familiar; they're not complicated at all. So you are right in what you said earlier: verification isn't just a casual thing anymore. You need a strategy about who in your organization, if you are a large organization in particular, gets [00:42:00] verified, for what purpose, by whom, and we'll see that emerging as this picks up. But this is a great start. They do say, and this is going back to the domain, that you can self-verify with a domain. That's the only approach that makes sense right now, because to do it you've got to make changes at your registrar in the DNS settings and a few other things, and also engage with Bluesky to do it. They say during this initial phase they're not accepting direct applications, as I mentioned, but they do say that as the feature stabilizes, so I guess as all the excitement dies down and people see how it's all working, they'll launch a request form for notable and authentic accounts interested in becoming verified or becoming trusted verifiers.
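[Editor's note: for readers curious what the DNS side of that domain-handle verification looks like, here is a minimal sketch. It assumes the common setup in which a TXT record named _atproto.<your-domain> publishes the account's DID; the dnspython package and the example domain are illustrative assumptions, not details taken from the episode.]

# Minimal sketch: check whether a domain publishes the _atproto TXT record
# that Bluesky-style domain-handle verification relies on.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def atproto_did_for_domain(domain: str) -> str | None:
    """Return the DID advertised for a domain handle, or None if none is found."""
    try:
        answers = dns.resolver.resolve(f"_atproto.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for record in answers:
        value = b"".join(record.strings).decode()
        if value.startswith("did="):
            return value.removeprefix("did=")
    return None

if __name__ == "__main__":
    # Example domain, used purely as an illustration.
    print(atproto_did_for_domain("nevillehobson.xyz"))

[Bluesky also supports serving the same DID from an HTTPS well-known path on the domain, which is why the setup can involve your web host as well as your registrar.]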
Neville Hobson: So during the course of 2025 we'll see this develop, and maybe it will become the kind of benchmark standard for verification on social networks like this. So it's interesting. Shel Holtz: We need a standard, and I'd like to see that [00:43:00] standard integrated with the fediverse standards, because these all ought to be interoperable. We really ought to be able to share a post in one place where we are verified and have that post show up wherever people have chosen to follow us from, and have that verification show up with us. And people should be able to click on that verification and see who vouched for us. They should be able to see that the spokesperson for my company was verified by me or by the CEO, and it all works together. Neville Hobson: I think that will emerge. Thinking about this cross-posting idea, it's been in place in a couple of places, but it's very, very flaky. I'm talking about things like, for instance, and it's been around for a while, at least a year if not longer, a plugin on WordPress that lets you publish your post and will then share it across the fediverse via a connection with Mastodon. You've then got Threads doing the same thing, [00:44:00] but these require tweaks to your platform. Probably the one that shows you, if I can use this phrase again, the direction of travel is Ghost, the platform I joined at the beginning of this year, which has recently enabled the ability to share your posts with Bluesky. Now, Ghost has invested a lot of time, effort, and probably a bit of money too, I think, into its social web offering, which is in beta. That's all to do with the ActivityPub protocol; because Bluesky has a different protocol, the AT Protocol, the connection works from Ghost to Bluesky via a bridge. That's a little technical, and it's probably just an interim approach whilst this plays out further. So someone like Ghost is making big inroads into enabling this kind of thing. And I would say we're going to see a lot of activity [00:45:00] during 2025 from Mastodon in particular, as well as people like Ghost and others, to connect up these disparate elements of the fediverse so that it becomes more cohesive. But it's going to take time. Shel Holtz: Yeah, the fediverse is nascent, but it's also, I think, inevitable. We've been talking for quite some time now about what the successor to Twitter is now that X has become what it has become. And I'm not sure there is a single successor. I think there are a number of places that people are attracted to. It could be Ghost, as much for its newsletter functionality as for its blogging functionality. It could be Threads, it could be Bluesky, whatever. But as long as, wherever I am, I can follow who I want to follow and have that appear in the network I have chosen, I'm good. So I think this is where things are headed, inevitably, since I think the days of somebody being able [00:46:00] to come along and say, I'm the new 800-pound gorilla of social networking, everybody's coming here, are over. Neville Hobson: Yeah, it's been apparent that that's likely to be the case for a while. I believe very much that the time is gone for monolithic, centralized social networks like Facebook, for instance. No, this is the time for niche networks. People can set things up themselves. It doesn't matter
whether you've got 50 people on there or 50,000 people on there. And indeed, the recent outage on Bluesky is an interesting indicator of the fragility of all of this, and Dan's going to talk about that a bit later in his report. But this is an interesting time. It's almost like things are maturing, it seems to me. And I think you're right when you say that people aren't so much attracted by the idea of a centralized place where, hey, we've all got to go here; after the experience on X, you've got more people saying, I want to get out of here, where do I go? So we're still at that phase. And you've got something interesting with Trump's, no, Musk's Grok, [00:47:00] with chatbots being developed for it and all this stuff. So that's something interesting in that area too. So it's a time for communicators to pay closer attention to what is happening here and the implications of it, just as you and I are doing. And if you don't want to do that, that's fine. Just listen to FIR, because we'll help you understand it. Okay. That's a really good report, Dan. Thank you. Good topics. You talked about Bluesky; I mentioned just before your report that the outage was unfortunate, but is it not an indicator of precisely that fragility? I mentioned previously the different definitions of decentralization that you raised. I think that's possibly a communication issue, because people seem to be latching onto "hey, it's decentralized" when actually it's more like "it's going to be decentralized, because that's the aspiration we're working towards," which is the case with Bluesky. That was very good on Threads' move to .com and the web improvements. I must admit I was a bit yawny about [00:48:00] that. You know, .net, .com, do I care as a user? Well, maybe I should, because I then read somewhere else that the move to .com enables Meta to do things it can't do with a .net domain. And I'm sure you'll know more about that than me, Dan, at the Internet Society. Again, interesting developments with what's happening with all of this. So thanks for the report, Dan, this is a really good one. And let's shift gears slightly. I don't think the story I'm going to talk about has got AI in it. Shel Holtz: Oh, my God. Neville Hobson: Gotta have one... Shel Holtz: ...or we're gonna be fined. Neville Hobson: So, let's shift focus, as I mentioned, to something that's critical for every business but often overlooked: how we bring new people into our organizations and set them up for success. It's called onboarding, right? The topic of onboarding is particularly timely right now, especially in digital marketing, where the pressure to deliver results is higher than ever. With digital marketing at the heart of [00:49:00] business communication strategies, every new hire represents not just an addition to a team but a critical investment in how a company presents itself, engages customers, and drives growth. Effective onboarding therefore isn't just about helping someone settle in; it's about ensuring they contribute meaningfully, quickly, and sustainably to an organization's broader success. A recent feature in Search Engine Journal caught my attention as it explored how digital marketing agencies are rethinking the onboarding experience. But whatever your business, and whether you're on the agency or client side, hiring great talent is only half the battle; keeping them is where the real challenge begins.
The article highlights the critical role of structured onboarding in enhancing employee retention, productivity, and satisfaction within digital marketing agencies. One strong theme is the importance of starting onboarding before day one. Christie Hoyle, COO at Kaizen Search, explains: "Our process begins two weeks [00:50:00] before their official start date to ensure employees feel informed, prepared, and welcomed." This early engagement helps build confidence and sets expectations well before a new hire walks through the door. Zoe Blog, director of operations at the SEO agency Reboot, highlights the importance of immersion during the first weeks. She says: "Our process is designed to give new hires time to truly absorb how we work before they're expected to contribute." Human systems play a key role too. Phil Dukowski, client services director at SEO Sherpa, and Emma Welland, co-founder of House of Performers, both emphasize mentoring. As Emma puts it: "We assign everyone a mentor as well as a manager to make sure they have multiple people to check in with and speak to." Technology is also critical. Agencies like Vivant use platforms such as Asana to structure onboarding flows. Bethan Ranford, general manager and head of paid media at Vivant, says: "We use Asana across the [00:51:00] business and have a comprehensive onboarding flow, which all new starters are enrolled in." Meanwhile, Olivia Royce, operations director at the SEO agency Novos, explains how their structured 30, 60, 90-day onboarding plan breaks the early months into clear milestones aligning with probation periods. She says: "We have a clear onboarding process in our task management system, which outlines who is responsible for what during the onboarding process." Beyond tools and timelines, emotional connection matters most. Emma Welland says: "I fundamentally believe a good onboarding is judged by how you make someone feel. For us, making sure expectations are clear from day one is a big part of this." Shel Holtz: Yeah, onboarding, new hire orientation, call it what you will. It's vital. There is data suggesting that people tend to leave a job somewhere between one and three years into it, and you have to believe that if the onboarding had been effective, those numbers [00:52:00] would drop. And there is so much wrong in what I see so many companies doing with their onboarding. The typical thing is you have a new hire orientation the day you start, and then you're just thrown into the deep end. How much can you really retain on your first day? You're overwhelmed. On your first day, you're lucky if you remember what day payday is, how to record your time, what the work hours are, and what the deal is with the parking lot. So I like the 30, 60, 90-day approach. In fact, where I work, we are in the process of migrating to a new internal communications platform; we're consolidating several separate tools into one tool. One thing it lets you do is target individuals to a different homepage to start with. And one of the things we're going to do in phase two is have a homepage for people who are there from their first day to their 30th day, another [00:53:00] homepage for people who are there from their 30th day to their 60th, and a third one for people who are there from their 60th to their 90th, just surfacing those milestones and the kind of information they need while still providing them the navigation to the same resources everybody else needs.
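[Editor's note: to make the 30/60/90-day targeting idea concrete, here is a minimal sketch of the kind of rule an intranet platform might apply when choosing a new starter's homepage. The function and segment names are hypothetical examples, not features of any product mentioned in the episode.]

# Minimal sketch: pick a homepage variant based on how long someone has been with the company.
from datetime import date

def onboarding_homepage(hire_date: date, today: date | None = None) -> str:
    """Return a hypothetical homepage segment for a 30/60/90-day onboarding plan."""
    days_in = ((today or date.today()) - hire_date).days
    if days_in < 30:
        return "onboarding-days-1-30"
    if days_in < 60:
        return "onboarding-days-31-60"
    if days_in < 90:
        return "onboarding-days-61-90"
    return "default-homepage"

if __name__ == "__main__":
    # 45 days into the job lands in the second segment.
    print(onboarding_homepage(date(2025, 5, 1), today=date(2025, 6, 15)))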
Shel Holtz: But yeah, I've heard so many different great approaches to this. I think it was Coca-Cola that had essentially a report card with a list, and it said, in your first week you need to go talk to these three people about these three things. And when you did, the people you needed to talk to signed off, and you had to have everything signed off at the end of a 90-day period, meaning you'd met all of these people, gotten to know them, they'd gotten to know you, you'd learned from them, and you'd built that connection and started the relationship. That speaks to the emotional connection the report you referenced addressed. Companies need to invest the time, energy, and [00:54:00] money in onboarding if they don't want to lose these people after they've been around for a year or two. That's what it comes down to, because replacing somebody is, I guarantee you, going to cost a whole lot more than what it costs to do an effective new hire orientation. Neville Hobson: And this is talked about a lot, isn't it, Shel, such as in the examples I mentioned from those individuals at those digital marketing agencies. But as you pointed out, so many companies don't do anything beyond, hey, welcome, here's your desk, here's your login for your email, off you go. There are some great approaches here, and so if someone asks, why do we need to talk about this? Well, I think we just explained why. This is key. People keep saying people are the most essential resource in our company; you just have to read the general newspapers to get a feel for the kind of dilemma across the board, literally. This is not just to do with digital marketing agencies, as I mentioned at the beginning. This applies to almost any organization where you [00:55:00] want to retain people. Obviously the package they get, remuneration and benefits, all that is part of it, of course. But so is how you treat them and make them feel valued. I'm reminded of my only recent, relevant experience of this, which was when I went to work for IBM a decade ago now. I started at the beginning of 2016, but for two months prior to that I had a lot of contact with HR and others in IBM to familiarize myself with how IBM worked at the time. And boy, that was difficult to figure out, but they were very much on the ball with this back then, a decade ago. And you mentioned Coca-Cola; I'm sure this is not alien to many companies, but it probably is alien to lots of companies as well. So I hope this helps people if they're looking to assess and set their procedures and processes, as there are some good tips here from the folks I mentioned. Shel Holtz: Yeah, there are so many good ideas you can research on how to do a [00:56:00] good onboarding program. You referenced the idea of a mentor being assigned to every new hire. I like that. In companies that are large enough that there's a cohort of new hires, maybe 10 or 20 per month, have them go through all of these things as a cohort so they get to know each other and become a resource to one another. You know, it can be embarrassing to reach out to somebody who's been with the company for 18 years and ask something really basic that you think sounds stupid, but to reach out to somebody who started within three or four days of the time you did and ask, have you figured this out yet? That's just fine. And I know that
when I worked for the pharma I used to work for, after you'd been there a year, that cohort got together in a meeting with the CEO and the president, who talked about, you know, things we want you to know about now that you've been here a year, in terms of culture and direction, but we also want to answer your questions and hear [00:57:00] your concerns. And I've got to tell you, that goes a long way toward building that relationship and building that trust in the leadership of the organization. I think it's a really good idea. There is an opportunity for communicators to inject themselves into what is usually seen as an HR process, because this is all about knowledge transfer and information sharing. Neville Hobson: Good stuff. Shel Holtz: Don't abdicate the responsibility communicators have to participate in this process. Well, I've been digging into a new global communication report from the University of Southern California's Annenberg Center for Public Relations. You'll like the title of this one, Neville. It's called Mind the Gap. The gap referenced is the one that exists between generations, even though the logo is the one that's used for the Tube in London. It's not like we haven't had a ton of research about generational differences, but this one had some revelations. Let me start with the big picture. The PR industry is [00:58:00] experiencing what the report calls unprecedented upheaval, driven by four major forces: artificial intelligence (surprise), hybrid work, the changing media landscape, and political polarization. Those are all topics we address pretty routinely here on FIR. The report examines these forces through a generational lens, looking at how perspectives differ across Gen Z, Millennials, Gen X, and Boomers. The researchers surveyed over a thousand public relations professionals this past January, and despite all th
FIR #461: YouTube Trends Toward Virtual Influencers and AI-Generated Videos
Videos from virtual influencers are on the rise, according to a report from YouTube. And AI will play a significant role in the service’s offerings, with every video ed to the platform potentially dubbed into every spoken language, with the speaker’s lips reanimated to sync with the words they are speaking. Meanwhile, the growing flood of AI-generated content presents YouTube with a challenge: protecting copyright while maintaining a steady stream of new content. In this short midweek FIR episode, Neville and Shel examine the trends and discuss their implications. Links from this episode: YouTube Culture & Trends – Data and Cultural Analysis for You YouTube Looks to Creators (and Their Data) to Win in the AI Era YouTube Publishes New Insights Into the Rise of Virtual Influencers The next monthly, long-form episode of FIR will drop on Monday, February 24. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz: [00:00:00] Hi everybody, and welcome to episode number 461 of four immediate release. I’m Shell Holtz. Neville Hobson: And I’m Neville Hobson. This month marks 20 years since the first video was ed to YouTube, a 19 second clip that launched a global platform now at the Center of Digital Media as the platform. Reflects on its past. It’s also looking sharply ahead. And what lies on the horizon is a bold AI powered future highlighted in two reports published in the past week. According to YouTube’s leadership, we’re five years away from a world where every video ed to the platform could be automatically dubbed into every spoken language. More than that, the dubbed voice will sound like the original speaker with AI generated lip movements tailored to match the target language. It’s a vision of seamless global accessibility where creators can invest once and reach audiences everywhere. [00:01:00] This isn’t speculative. YouTube is already piloting dubbing tech with hundreds of thousands of creators and experimenting with voice cloning and lip reanimation. But with that ambition comes a fair amount of controversy. Underpinning these features is Google’s Gemini AI model trained on an ocean of YouTube videos, YouTube. Many from creators who weren’t aware their content was being used this way. Some have pushed back arguing that a license granted under YouTube’s of service doesn’t equate informed consent for AI training. At the same time, YouTube’s 2025 trends report highlights the rise of virtual influencers, synthetic personas, who are building large audiences and changing what authentic content looks like. For a growing number of viewers, it doesn’t seem to matter whether the face on screen is real generated or somewhere in between. What emerges is a picture of a platform trying to empower creators with powerful tools while, while quietly shifting the [00:02:00] ground beneath their feet, culturally, ethically, and. 
On one hand, a report by Bloomberg paints a picture of YouTube as a tech powerhouse using AI to expand creative reach, drive viewership, and reshape media, but not without controversy over how training data is sourced, especially from creators unaware that content fuels these advancements. On the other hand, social media, today’s take focuses more on the cultural shift. AI generated influencers, fan created content and multi-format storytelling are changing the rules of what audiences find compelling and raising questions about the very definition of authentic content. Both views converge on the same point, AI is here to stay, and whether you are excited or concerned, it’s reshaping the creator economy from top to bottom. So is this YouTube fulfilling its mission to de democratize creativity through technology? Or is it becoming a platform where the line between creator and content becomes so blurred [00:03:00] that the original human touch gets lost? We should unpack this. There’s quite a bit here to talk about. Isn’t. Shel Holtz: There is, and it seems to me a relatively natural evolution for YouTube. Uh, as long as creators are able to what they want, I think you will find plenty of authentic content. There’s going to be no shortage of people who want to talk into a camera and share that. Uh, people who. Themes, uh, that they think people would be interested in? Uh, I, I love hearkening back to a story I read about a, a physics grad student, uh, who started a YouTube series, uh, called Physics for Girls. Uh, and it was aimed at the K through 12. Cohort of of students and trying to get them interested in the STEM sciences and it became very popular and she was [00:04:00] making, I think I read a million dollars a year in. Advertising revenue. I don’t think that’ll stop. I think people will be able to continue to do that. What you see is in a platform where there’s no limits, there’s no constraints. How many gigabytes of of video data can be ed? They just. Keep expanding their data center capacity, uh, that there’s room for all of this other stuff, including the AI generated content. And as long as it’s entertaining or informative, if it serves a purpose, people will watch it. And that’s the thing, if it’s crap, people aren’t gonna watch it. It’s not gonna get recommended, uh, it won’t find its way into the algorithm. And. Spending time creating it if it doesn’t produce the kind of results that they’re looking for. But we’ve already seen that influencers. Work, uh, on both sides of the equation, you [00:05:00] can tailor them to be exactly what you know your audience is looking for. So it’s great for the consumer. Uh, and in of the brand or the r, uh. You don’t have these loose canon celebrities that you’re, uh, using or, or somebody who’s just a professional influencer who goes off the rails. You’re in complete control. So, uh, you know, it’s not my favorite concept, but I don’t see any way to slow it down. And I think the people behind them are gonna continue to, uh, find ways to make them. Resonate with, with the people that they’re, uh, aiming them at. And in of the training of AI models on all of this, you know, right now you have a, an istration in Washington DC that is agreeable to the approach that the, uh, the AI companies, uh, open ai [00:06:00] and like. Want the government to take, which is to, uh, just put an end to this whole intellectual property thing and say, AI can train on anything it wants to. 
Uh, so I, I think that’s probably coming, uh, God knows Elon Musk is, is training grok on all of the content that is shared on X. And if you have an there that’s, that’s your. Implicit permission to let him do that. It’s one of the reasons that he went ahead and bought X in the first place was knowing that he had access to that treasure trove of data. So I don’t see it. I don’t see that slowing down either, and I don’t see the fact that people are unhappy, that their content is being used for training, being an impediment to having that content used as training. It’s gonna continue to happen. Neville Hobson: That’s part of what worries me a lot about this. I must it, if I took, if taking the Bloomberg report, um, which [00:07:00] is, uh, this, this idea of auto dubbing videos into every spoken language. We’ve talked about this before, not what YouTube’s doing, but the notion of. The example, you often give the CEO of a company giving an all employee address and he’s an American or a native English speaker. Uh, and yet there’s a version in 23 other languages like Urdu or Hindi or, or Spanish even. You know, you then talk about Mongolian, perhaps if they have offices in Learn Battle or something. Um. That, uh, shows him fluent in talking in all of those language, which is, I’ve always believed and I still do. That’s misleading. Uh, unless you are very transparent, which is fact adds to your burden of, of engage with employees. If you’ve gotta explain every time he’s not fluent, and this is not really him speaking Hindi. It’s, uh, an AI has done it or however you might frame it. So that’s not gonna stop though easier. Uh, your point I agree with as well [00:08:00] that most people won’t really care about, about this Probably. Um, I mean, I’m a, I count myself as a creator, uh, uh, in of the very tiny bits of content I put up on my YouTube channel, um, which, uh, isn’t a lot, uh, it’s not a regular cadence, uh, is now and again. Uh, and if I found versions in, uh, you know, in, uh, uh, in native, uh, in, in a native language on Bolivia, for instance, would I care? Well, only in the sense of is, is it reflecting exactly what I said in English and have to, you have to assume that it’s gonna be doing that, but that’s not to me the point really, they’ve gone ahead and done it without permission. There will be people who don’t want this happen to content. Ts and Cs saying they can do this. If you don’t like it, you’re gonna have to stop using YouTube. And that’s the reality of life, I think. But there are a couple of things though. Uh, I, I think, you know, Google wants creators to use its ai IE Gemini, uh, to, uh, create, edit, market and [00:09:00] analyze the content that they create and, and, uh, uh, that’s, you may not want to use Gemini. Um. You’ve got, uh, uh, the training element that Google is assuming they’re okay to use your content to do things like that. Uh, it aligns with their of service, they say, but trust isn’t in that equation as far as critics are concerned. The voice cloning and lip animation, the technology is amazing, I have to say. Uh, and according to Bloomberg, YouTube’s already testing multilingual dubbing in eight languages with plans to expand that. Well, yeah, there’s cloning and lips. To mimic native language speech are in pilot phrases. So all this is coming without doubt. So I think it is interesting. There’s some downsides on all of that. According to Bloomberg, dubbing will reduce, uh, Ms when moving from English to other languages. 
You’ve got that to take into too. But expanding reach to new language audiences may ultimately increase total revenue. If it’s a monetization thing you’re looking at. Um, so [00:10:00] YouTube says they think quality content, to your point, will still rise above the growing flood of AI generated deepfake material. I guess that’s part of what we call AI slop these days, right? So there’s that, which of course leads you straight into the other bits about virtual influences, uh, and, uh. Just a casual look. And I was doing this, uh, this morning, my time before we were recording this, uh, uh, coming across examples of what people are writing about and publishing, uh, with photos and videos of people that you, you get it in the highest resolution you want. I swear. You cannot tell if it’s real or not, if it’s a real human being or an AI generated video. Will that matter? At the end of the day, I, I think it probably comes down to do you feel hood wicked when you find out it’s an ai when you thought it was a person? And there’s a few surveys out recently, and, and this is kind of tangential connection to this topic, but of people who are [00:11:00] building relationships with ais, they’re, they’re getting intimate with them. And, and I don’t, I don’t mean the, obviously meaning what we might think intimate means, but developing emotional bonds. With an AI generated persona. And so, uh, there’s great, uh, risk, I think there of, uh, misuse of this technology. So, you know. Going down the rabbit hole or, or even the, the, the, the, the idea of it’s all a conspiracy and they’re out to steal our data and confuse us. No, it’s not that. But there’s great risk, I think, of opacity, not forget about transparency. This is, this is completely the opposite of that. Uh, and it’s, it’s got, uh, issues in my view that, uh, uh, we ought to try and be clearer than just give. The likes of Google and others, uh, literally can’t blanc to do what the hell they want without, uh, without any, uh, uh, any regulation, which, uh, unfortunately that seems to be aligned with, uh, Mr. Trump and his gang in Washington as to what, they [00:12:00] don’t care about any of this stuff at all. In which case, um, uh, tech companies, if you listen to some of the strong critics are rubbing their hands with glee at what they’re gonna be able to do now without any oversight. And therein is the issue. But I’m not saying that’s something we should therefore, you know, get out our pitchforks and shovels of March on Washington. But it’s a concern, right? I mean, this is a major development. Um, the virtual influencers I think is, uh, is is exciting idea. I. Um, but the risks of of misuse are huge in my view. So I just having the yes but moment here basically. And I normally not, I don’t normally do this shell, I’m normally embracing all this stuff straight away, but there’s big alarm bells ringing in my mind about some of the stuff that’s happening. Shel Holtz: Well, I think a lot of it is going to be contingent upon what we become accustomed to. Uh, yeah. As, as you become accustomed to things, they just become normalized and you don’t give them a second thought. There was a TV commercial. [00:13:00] I’m gonna have to. See if I can find it. They must have it on YouTube. Uh, even though this had to be 20, maybe 25 years ago, I believe it was an IBM commercial. It was a great commercial by the way. This is why I it so many years later. Yeah. Uh, it was either black and white or, or sort of toned. 
Uh, it was in a dusty old diner out in the middle of nowhere, and there’s a waitress behind the counter, uh, and there’s nobody there. One guy wanders in and sits down. And he, I don’t what he asks for, but they don’t have it. They don’t have this, they don’t have that. And then he sees the tv. He says, uh, what do you have on tv? And she says, every movie and television show every ever made in any language you want to hear it in, uh, and talking about the future of technology, right? If you get to a point where anything you wanna see. Is available in your language [00:14:00] then, does it continue to be an ethical question when you see your CEO who doesn’t speak your language speaking to you in your language? Or is this just something that we all accept that the technology does for everything now and it doesn’t matter whether he speaks your language or not, he can because of the technology. Now, I’m not saying that. Promoting as an approach to take today from an ethics standpoint, I think you do need to let people know, uh, we think it’s gonna be a lot easier and more meaningful for you to hear, uh, the CEO speak in your native language. Mm-hmm. But he doesn’t speak it. This was AI assisting with this, but in five years when everything. Is handled that way, it will it even matter. I, you know, I, I suspect that it won’t, I suspect it won’t matter whether somebody speaks that language when you know that any media you consume can be consumed in your native language thanks to the technology that we all [00:15:00] take for granted at that point. Neville Hobson: Hmm. Uh, that’s a sound assessment. Uh, and you may well be right and I, I suspect that much of what you said will likely come to . I just think that there’s. Concerns we ought to be paying more attention to than we seem to be. It seems to be. So for instance, uh, one big thing to me is, is um, I guess it’s kind of related to the ethical debate, but what does real mean anymore? I. In this, what does authenticity mean? Now, it doesn’t mean what it meant yesterday. If you’ve got virtual influencers, uh, creating videos, you don’t know that that’s not, that’s not a real person. Things like that. Shel Holtz: That’s, that’s keep in mind that I was, I was sold, uh, sugar Frosted Flakes by Tony the Tiger, uh, who was not a real person, uh, or even a real tiger. But they, Neville Hobson: they weren’t pretending it was, or, or making you assume that it probably was. That’s the only different, but this is. Thing. Shel Holtz: This is, uh, uh, the, the modern equivalent. Uh, and well, Tony the Neville Hobson: tiger. [00:16:00] Shel Holtz: Yeah. And yeah, the, the virtual influencers I’ve seen so far, uh, are obvious. Uh, I have not seen one that they have worked really, really hard to convince you that this is anything but a virtual influencer. And on Instagram, at least most of them, I see the disclosure, uh, that, that they are, uh, I just don’t think people care. Uh, no. If, if, if they’re getting good information, if they’re being entertained, you know, are you not entertained? If you are, you’ll continue to watch. And, uh, if somebody says, you know, that’s ai, your answer’s gonna be okay. So Neville Hobson: I get that, but I think we have a responsibility to, uh, to point out certain things, whether people care or not. That’s part of our Oh, that question. Our responsibility is communicate. Yeah. So yes. So, so hence my point about, uh, what does real mean? What, how do we defining real now? 
Uh, and I think the, um, the, the, the kind of, uh. Bigger worry. Waiting in the wings is [00:17:00] the fakery that we see everywhere. It’s getting even easier to, uh, to do this kind of thing. Um, deep fakes, whatever they’re now called. Um, that’s been off the radar for a bit now, but suddenly you’ll see something and to, for what I mean, I read, I haven’t seen anything myself, but I did read this morning that already there’s videos around of Pope Francis who died, uh, on Monday, uh, that he is not. Actually, uh, according to these videos, he’s out there speaking and, and doing all these events and so forth, um, that will confuse some people. And th this is the, this is, I think the gr the, the grave risk, uh, of not the technology, um, because. It’s what people will do with it. And that’s not, I’m not suggesting for a second that because of that, therefore we shouldn’t do X and y and so forth. Not at all. But we need to, uh, address these concerns and indeed the, uh, the unspoken concerns, uh, before they become a problem, uh, or at least make people aware [00:18:00] and that that is a lot. Not to do with the awareness that we’re already seeing from governments everywhere. Like here in the uk for instance, I see government ads across every social network now and again about, uh, checking the very, checking the authenticity of things and people, uh, and products that people are pitching and so forth. Uh, and that will ramp up, no doubt, in which case opportunity for communicators then for that kind of education. So, um, it, it, it perhaps will come down to, uh, to that the, uh, the, uh, the ethical debate on training, on consent, uh, on people’s rights, intellectual property, whatever. Governments in Washington, DC I mean, that. Uh, the situation with Trump and his, uh, um, his psycho fence, as I call them, really, uh, is only, um, uh, is well, it’s more than a blip. It’s, it is made a huge change around the world that no one could have predicted. Whatever you think about Trump, you gotta give it, give it to him in one sense that he [00:19:00] has forced huge change on almost every country around the world. So, uh, I see here things that people are discussing now, we’re gonna. Would never have dreamt that these politicians would be suggesting that if Trump was not on the scene. So that is a big impact in all of this, and it’s hard to predict what effect that’s gonna have on something like this. But, um, I think the, uh, the concerns of people about training, for example, using their content without permission, uh, human beings, again, this is a, a related thing to what other conversations are worried about being replaced by the. That’s not, not, not a separate or a suddenly new thing, but it just reinforces in my view, certainly that we need to address all of these things. We need to show that we have people’s backs in their concerns about this, and we’re gonna help kind of understand it if we can. That’s our job as communicates, it seems to be. Shel Holtz: Yes. In addition to creating some of this content. Neville Hobson: Oh, indeed. [00:20:00] Shel Holtz: That’ll be a 30 for this episode of four immediate release. The post FIR #461: YouTube Trends Toward Virtual Influencers and AI-Generated Videos appeared first on FIR Podcast Network.
Zora Artis on Bridging AI and Human Connection in Internal Communication
Zora Artis is a leading voice in strategic internal communication with a passion for how IC can lead the integration of AI into the workplace in ways that reinforce, rather than replace, human connection. In this FIR Interview, Neville Hobson and Shel Holtz speak with Zora about how artificial intelligence is reshaping internal communication, prompting a strategic transformation in the profession. The conversation builds on Zora’s article on the Poppulo blog in March 2025, “Bridging AI and Human Connection: What’s Possible for Internal Communication,” and draws on her experience facilitating a global roundtable debate of senior communicators in The Hague. Zora challenges the narrative that AI erodes empathy or replaces people. Instead, she explores how AI, when used intentionally and ethically, can support personalisation, amplify employee voice, and help communicators focus more on strategic value and less on repetitive tasks. The discussion also examines examples of AI in action: from internal GPTs (large language models that use deep learning to generate human-like text and content) trained on leadership content, to custom AI advisors embedded in daily workflows. But with opportunity comes risk, and Zora highlights the need for governance, inclusivity, and ethical clarity in how AI is used within organisations. Discussion Highlights: Why communicators are central to bridging the trust gap between leaders and employees on AI adoption. What it means to treat AI as a collaborator, not just a tool. How AI can enhance messaging effectiveness and employee understanding. The ethical risks of bias, overreach, and unrealistic expectations. What internal communicators should do now to stay relevant: shift mindset, experiment, and lead. About Our Conversation Partner Zora Artis, GAICD, IABC Fellow, SCMP, is a strategist, advisor, and coach specialising in alignment, communication, and leadership. She is the CEO of Artis Advisory, co-founder of The Alignment People, and a partner in Mirror Mirror Alignment. She helps leaders and teams cut through complexity to build clarity, cohesion, and high performance. Zora is a passionate advocate for responsible human–AI collaboration and the evolving role of communication professionals in the way we work and the value we create and deliver. Follow Zora Artis on LinkedIn Links from This Interview Poppulo blog: Bridging AI and Human Connection: What’s Possible for Internal Communication? Artis Advisory The post Zora Artis on Bridging AI and Human Connection in Internal Communication appeared first on FIR Podcast Network.
Circle of Fellows #115: Communicating Amidst the Rise of Misinformation and Disinformation
Misinformation and disinformation aren’t just problems for the news media—they’re also becoming critical concerns for corporate and organizational communicators. Whether it’s a viral post spreading false claims about your company, a deepfake video targeting a leader, a cloned voice trying to trick employees into transferring funds, or AI-generated content muddying the information landscape, today’s communicators must be equipped to navigate a world where truth competes with convincing fiction. In this live-streamed conversation, Fellows of the International Association of Business Communicators (IABC) explored how generative AI is accelerating the spread of false and misleading content—and what communication professionals can do to identify, counter, and prepare for it. About the Alice Brink is an internationally recognized communications consultant. Her firm, A Brink & Co., works with businesses and non-profits to clarify their messages and communicate them in ways that change people’s minds. Her clients have included Shell Oil Company, Sysco Foods, and Noble Energy. Before launching A Brink & Co. in Houston in 2004, Alice honed her craft in corporate settings (including The Coca-Cola Company, Conoco, and First Interstate Bank) and in one of Texas’ largest public relations firms, where she led the agency’s energy and financial practices.  Alice has been active in IABC for over 30 years, including as chapter president, district director, and Gold Quill chair. Sue Heuman, ABC, MC, IABC Fellow, based in Edmonton Canada, is an award-winning, accredited authority on organizational communications with more than 40 years of experience. Since co-founding Focus Communications in 2002, Sue has worked with clients to define, understand and achieve their communications objectives. Sue is a much sought-after executive advisor, focused on leading communication audits and strategies for clients in all three sectors. Much of her practice involves a strategic review of the communications function within an organization, analyzing channels and audiences. She creates strategic communication plans and provides expertise to enable their execution. Sue has been a member of the International Association of Business Communicators (IABC) since 1984, which enables her to both stay current with, and contribute to, communications practices. In 2016, Sue received the prestigious Rae Hamlin Award from IABC in recognition of her work to promote Global Standards for communication. She was also named 2016 IABC Edmonton Chapter Communicator of the Year. In 2018, IABC named Sue a Master Communicator, the Association’s highest honor in Canada. Sue earned the IABC Fellow designation in 2022. Juli Holloway is an Indigenous communications practitioner specializing in professional communication in Indigenous contexts. Throughout her career, Juli has been fortunate to work with First Nations and Indigenous organizations in British Columbia and across Canada to transformative change for First Nation communities and people through strategic communications and community engagement. Juli is the communications advisor at the Tulo Centre of Indigenous Economics. She leads communications, designs, and delivers a communications curriculum in university-accredited programs designed to advance Indigenous economic reconciliation. She is also an associate faculty member at Royal Roads University, where she teaches in the MA in Professional Communications program. 
In 2022, she earned the Outstanding Associate Faculty Award for Teaching Excellence in the MA Programs in 2022 for her innovative pedagogical methods. Juli is Haida and Kwakwaka’wakw and has been a guest on the traditional lands of the Secwépemc for 17 years. She belongs to the Skidegate Gidins, an eagle clan from the village of Skidegate on Haida Gwaii, and the Taylor (nee Nelson) family originating from Kingcome Inlet, home of one of the four tribes of the Musgamagw Dzawada̱ʼenux̱w. George McGrath is founder and managing principal of McGrath Business Communications, which helps clients build winning corporate reputations, promote their products and services, and advance their views on key issues. George brings more than 25 years in PR and public affairs to his firm. Over the course of his career, he has held senior management positions at leading strategic communications and integrated marketing agencies including Hill and Knowlton, Carl Byoir & Associates, and Brouillard Communications. The post Circle of Fellows #115: Communicating Amidst the Rise of Misinformation and Disinformation appeared first on FIR Podcast Network.
CWC 108: Helping PR agency clients navigate a challenging communications climate (featuring Rachel Sales)
In this episode, Chip talks with Rachel Sales from the PR agency Enunciate about managing client communications in a chaotic world. Rachel discusses the importance of empathy and strategy in addressing news and its impact on clients. They explore whether brand leaders should comment on current affairs, emphasizing the need for authenticity and aligning with business goals. Rachel shares a process for determining when and how clients should respond to news events. The conversation also covers the evolving media landscape, the shift towards contributed content, LinkedIn strategies, and the importance of humanizing client messaging in turbulent times. [read the transcript] The post CWC 108: Helping PR agency clients navigate a challenging communications climate (featuring Rachel Sales) appeared first on FIR Podcast Network.
FIR #460: The Return of Toxic Workplaces and the “Big Boss” Era
The tide is turning. For several years, workers have enjoyed a seller’s market. Unemployment has been low, and companies have competed for the best employees. Now, for a variety of reasons, we are experiencing a surge in layoffs, exacerbated by sizable staff reductions in U.S. federal agencies. With so many newly-unemployed workers on the street, employers now have the advantage as we shift to a buyer’s market. Emboldened by the flood of potential recruits on the market, and anxious to be on the good side of the current U.S. presidential istration, some CEOs are trading in their ive servant-style leadership for old-school tough boss talk. And while they may be able to justify this behavior in the short term, the impact on the culture — and what that will do to the employer brand — could deter the best potential recruits from taking that job, which will be filled by a mediocre performer desperate for employment. In this short midweek episode, Neville and Shel explore the reasons behind the layoffs, the impact of CEO tough talk, and how communicators can help maintain a strong, non-toxic workplace. Links from this episode: Layoff announcements surge to the most since the pandemic as Musk’s DOGE slices federal labor force CEOs deliver tough talk as workers face a softening labor market What’s Causing Corporate Layoffs? Are Workplaces Getting More Toxic? Some Employees Think So ‘Toxic workplace culture’ main reason behind staff resignations, new research says Gen Z are ‘conscious unbossing’—avoiding stressful middle management roles The next monthly, long-form episode of FIR will drop on Monday, February 24. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Raw Transcript evillehobson (00:02) Hi everyone and welcome to Four Immediate Release. This is episode 460. I’m Neville Hobson. Shel Holtz (00:08) And I’m Shel Holtz. As we know, business goes through phases. Right now, we’re entering into a reality that’s reshaping organizational communication. It’s a new phase in corporate America that’s going to demand a lot of communication, including counseling leaders with messages they may not want to hear. Let’s start with what’s happening. Layoff announcements have surged to levels we haven’t seen since the pandemic. According to CNBC earlier this month, we’re seeing cuts across a broad range of industries and companies suggesting something sort of systemic is at play. And it’s not happening in isolation. It’s coinciding with a shift in tone that CEOs are using when they talk to employees. What we’re seeing is a shift from a seller’s market to a buyer’s market. You may it hasn’t been all that long since executive messaging was all about bringing your whole self to work and prioritizing mental health and building workplaces centered on empathy and inclusion. Those days are fading pretty quickly. Axios recently dubbed it the return of the Big Boss era. 
CEOs are dropping that therapeutic language of 2020 and embracing what one commentator called masculine energy. It’s less, we’re all in this together and more, step it up or step out. This isn’t just about bravado. There’s calculation here. This new tone serves multiple audiences. not just employees, but investors, board , and political stakeholders. This is, , Trump 2.0, where radical transparency and performative toughness are in vogue. Some CEOs see value in being leaked when they talk like this. It’s internal messaging with an external target. But let’s dig a little deeper on these layoffs. An analysis from the Wharton School points out that what we’re seeing isn’t just a response to economic turbulence or pandemic era over hiring, companies are bracing for long-term changes, slower growth, productivity gains they anticipate they’ll get from AI, and increasing shareholder pressure for efficiency. Some organizations are trimming fat, others are actually cutting muscle and hoping automation or restructured workflows can pick up the slack. Neville, before I list some of the things communicators should be thinking about, What are your thoughts on these trends? I read recently the companies in the UK are laying people off and UK firms are planning more, though I don’t know if CEOs have shifted their tone in their conversations with employees. @nevillehobson (02:42) No, I haven’t seen that. I haven’t seen anyone talking about that. Let’s put it that way. And I did a little look up myself to see what was happening here in the context of this conversation. Yeah, I see reporting on that too, not for the same reasons in the US, though, because Trump 2.0 seems to be the big gorilla behind all of this activity happening in workplaces in America. That’s not the case here, although I would say that that will likely pick up a bit as events shift and change globally. But the main reason I see being mentioned, a couple of pretty good articles recently point this out to toxic workplace culture is the big driver behind employees quitting, not being laid off. There’s a big difference. They’re reg. So Not quite the same landscape shell, but it seems that the result is similar, i.e. people are going to be without jobs. Shel Holtz (03:34) And I think that if you see these CEOs being more paternalistic, maybe a little more autocratic in their behavior toward employees, you’ll probably see those workplaces shift toward the toxic direction because leaders set the tone for the people who report to them and the people who report to them. So you’re very likely to see that. You’re likely to see. middle managers revert from the coaching and mentoring style that’s been in vogue for good reason for the last several years toward that just be the boss and tell you what to do and yell at you when you don’t do it right or fast enough. So the toxic workplace could be on its way back to the U.S. @nevillehobson (04:13) wonder. Yeah, I wonder though, the reality of that, given what you mentioned is on the decline since 2020 was was how you framed it, the kind of more kinder concern for employees well being and all that kind of stuff. And it’s gone out the window in America, I see this being talked about quite a bit. Not yet here in the UK. And I put I insert the word yet because I think it’s likely to be the case, I wonder, it’s people we’re talking about, right? So you’re looking at, and what you said is absolutely correct. It starts at the top and filters down. 
So behaviors are influenced by the leadership behaviors, et cetera. And so the manager for a team in an office, that’s not the head office, it’s a branch office somewhere, but the manager there, or even the big department. is not, I would say, correct if I got it wrong, but it seems to me they’re not going to suddenly, there’s a sudden shift in behavior. This is a gradual thing. Right. So it’s going to be very difficult for some. You could almost equate it to things I’ve seen in the news recently, for instance, in relation to Trump in laying off federal workers. So for instance, people who Shel Holtz (05:10) No, they see the modeling of behavior from the top, so it’s gradual, of course. @nevillehobson (05:27) have communicated and that news is leaked, that they don’t agree with a certain policy of the Trump istration, that person’s been fired. So are we likely to see that in the private sector as opposed to the public sector? Well, probably. But that being said, whichever way you look at it, we are seeing a shift, are we not, in things that I tend to put it in my own mind, looking back. over what’s happened just over the last few months, that we do have a change that is huge. And I often say to people when we have these conversations, they are increasingly happening even with friends. The first two decades of this century are the golden years when money was cheap, freedoms were huge, you could get on a plane and travel anywhere. And again, in the European context, that’s a big deal. Get on a plane at London and go for the weekend in Prague. all you got to do is buy your etiquette and accommodation. That’s it. Didn’t need visas. need. Well, you have port checks if you come to the UK because even within the EU, the UK insisted on that. They really don’t like foreigners, the Conservative Party in this country, seems to me. So they were in charge during nearly all of this time. But I think that those days are gone and we need to be getting used to this. And it is a time of great uncertainty. in the workplace. So we hear about uncertainty in stock markets and people thinking, how do I price my goods? Do I have to increase it or what? Here we’re seeing something else that is likely to be of concern to people. It’s a very uncomfortable time for everyone. And indeed, I don’t envy the kind of middle manager in an organization who’s responsible for a team of 20, let’s say, and he’s being told to do certain things that are against his beliefs. What’s he or even she going to do about that? There is the dilemma. Shel Holtz (07:09) Yeah, and I think if you’re a frontline employee in an organization, that uncertainty is coming at you from a number of different directions. It’s not just what’s the flip-flop policy coming out of Washington today that’s going to move the markets one way or another. It’s also, is my boss or the CEO of this organization going to take a whole different approach that’s going to change the culture into one that I don’t want to work in anymore? And I think that’s a real consideration here for communicators as they think about how they’re going to navigate through all of this. What’s going to happen is you’re going to see as, mean, imagine being a meta employee right now, because say what you will about Mark Zuckerberg. He did champion liberal causes and he created a workplace culture of, I mean, it was a culture of trust. 
He and Sheryl Sandberg, used to tell deep secrets in all employee meetings and they never leaked because employees knew that if that information leaked, they’d stop having this information shared with them. Now he’s embraced this, know, masculine culture. He’s obviously pandering to Trump. And what is that tone going to do to the way he perceives employees, the way he treats employees? how it affects the culture. The best people in any organization are the ones who can say, I’m done and find a job somewhere else. The mediocre people in the organization are the ones who can’t find a job anywhere else. They’re the ones you end up stuck with. So I think the short-term benefits these CEOs may see from becoming the tough boss again. could evaporate when people aren’t going to continue working for you who you need and you can’t recruit because your employer brand has gone down the toilet. @nevillehobson (08:59) Yeah, I think it’s going to get even more difficult becoming a months shell because some of the big tech companies and it is the tech companies. I can’t think of any other way to express it, but did a deal with Trump, they sold their souls to the devil in expectation that they would get what some kind of preferential treatment or be not interfered with and all that. I saw a news story today in one of the tech journals that made the comment that Zuckerberg had done the deal with the devil and what has he gotten in return? Absolutely nothing. So what does that mean? So this uncertainty continues in this area, but I think trust is evaporating fast from big companies. And equally, I did read something a week or so back. It’s actually very relevant to what you said about Zuckerberg and the culture he developed with Sheryl Sandberg. He got rid of Sheryl Sandberg. I think he threw her under the bus, if you . Right. And say the DEI stuff was all her. He didn’t agree with it, blah, blah, all that kind of talk. So would you trust him? I certainly wouldn’t if an employee. So you’re going to get people right. So this is likely to be the case in many more organizations. And that’s a very alarming picture, I think. Shel Holtz (09:47) Oh, and he talks trash about her now, yeah. No, not even a little. So let’s take a look at what this means for organizational communication professionals, because I hate to throw a cliche out there, but it presents both a challenge and an opportunity. We are the voice of the organization. We keep the narrative and we facilitate the internal dialogue if we’re doing our jobs well. So what role should we be playing in navigating these particular waters? Well, first we need to be advocates for transparency and clarity. This is nothing new, but when layoffs happen, the way they’re communicated is paramount. is, we’ve been talking about this for probably 50 years in of communicating layoffs in such a way that it leaves the workforce that remains productive and optimistic about the future. Employees deserve more than legal precision when they’re told what’s happening. They need context. They don’t need vague, you know, we’re realigning with strategic priorities language, but actual explanations. Why are we doing this now? What’s next? What does this mean for us who are still here? know, ambiguity breeds anxiety and erodes what? It erodes trust. Second, we have to be sense makers. The survivors of layoffs are likely feeling a mix of emotions, fear, uncertainty, guilt. We need to actively listen to their concerns and provide channels for them to voice these feelings. 
This isn’t a doom and gloom message. It’s a reality check. Workplace toxicity, as we mentioned earlier, is creeping back into the workplace. A recent Investopedia report showed a significant number of employees feeling that work environments have become more hostile, less safe to speak. That is, no psychological safety or reduced psychological safety. And they become more cutthroat. If we’re not creating channels to surface those feelings, we’re not doing our jobs. Third, communicators have to coach leadership on consistency. As an article in Axios points out, employees are remarkably attuned to tone. If we’ve gone from we you to produce or perish, then leaders need to own that change. They can’t pretend the vibe hasn’t shifted. When the tone of leadership changes overnight and this tone from leadership has changed overnight. People start wondering, who is this person? Is this the same guy I signed up to work for? That disconnect can unravel trust faster than a layoff could. Fourth, we have a crucial role in shaping the internal narrative. In the wake of layoffs and the shifting leadership tone, it’s easy for negativity and toxicity to creep in. We need to find ways to preserve culture when leadership is, intentionally or not, undermining it. That might mean emphasizing peer stories, championing middle managers who are under enormous pressure, or quietly maintaining the threads of the pre-layoff values the company once celebrated. And let’s not forget about Gen Z. They’re watching all this unfold and they’re drawing conclusions. A Fortune article from late last year pointed out that many young professionals are actively avoiding middle management roles. They don’t want to become managers. It’s not because they’re lazy, it’s because they see those positions as stressful, thankless, and misaligned with their values. In an environment of layoffs and perceived shifts in leadership attitudes, retaining talent, especially younger generations, is going to require a more nuanced approach. This is the time to reaffirm that communication is not just a soft skill, it’s a strategic imperative. @nevillehobson (13:38) Thanks Shel Holtz (13:41) the way we frame hard decisions, the way we guide executive tone and the way we preserve human connection in the face of corporate calculus, these are the levers that we can still pull. So as communicators, we have full plates. We are essential in navigating organizational change, fostering trust and shaping the employee experience during this shifting period of significant upheaval. The way we approach these challenges will have a lasting impact on our organization’s reputations inside and out, but let’s not shy away from having those difficult conversations. Neville, thoughts? @nevillehobson (14:17) And difficult they will be, expect. I think what you said makes a lot of sense. The only thing I would say is I would worry if the, let’s say the head chief communicator in an organization, whatever the job title might be, the person in charge of communicating internally and externally, but let’s see in the context of a conversation internally, if that person is like, say, Caroline Levitt, Trump’s press secretary, who is so, it’s like a mini me of Trump in behavior. Black is white, white is black. the most untrustworthy individual I’ve ever seen with a blonde hair and a nice demeanor, but I wouldn’t trust that person at all. You would be very worried if your CCO was like that. Also, if you’ve got a CEO like Jamie Dimon, we talked about him. 
in episode 451 when the speech, the rant peppered with bleeped out expletives on a, what do call it, a meeting with employees was leaked, basically saying, my way or the highway, he didn’t give a damn. And if you got CEO like that, yeah, you’re in trouble. Shel Holtz (15:16) That’s what they meant about wanting to be leaked and heard by the president, right? @nevillehobson (15:22) Exactly. So it’s an alarming time, I think, again, without making it sound like, you know, the kind of the end of the world. These are things that are very concerning. This sudden shift in focus, attitude and behavior. So I’ve yet to see anyone talking about, you know, layoffs that they believe are absolutely because they’re literally a complete about turn in behavior. by leaders and organization leading to employees being let go. So layoffs, redundancies, as we call them here in the UK, because the distinction here, of course, is you are not fired, your job has been eliminated, therefore there’s no role for you anymore. So your job has been made redundant, you’re gone. So we’re likely, I wouldn’t be surprised to see this sort of uncertainty continuing and worsening. communicators. Yeah, all those things you mentioned, Shell. I think when I was looking at what’s happening here in the UK, I mentioned that I did encounter one report that was in HR director magazine that was just a couple of weeks, a couple of months ago, talking about toxic workplace culture are the main reason behind staff quitting, I resignation employees reg and going somewhere else. So there’s a kind of the flip side of all of this is the employee taking the initiative and saying, I’m gone from here. That’s on the increase. So if you tie that with then layoffs and redundancies, the workplace is not looking very secure at all for anyone. Shel Holtz (16:43) And I think you also have to factor into this that our employees are also consumers and they’re wrapped up in a lot of the data that we’re seeing about consumers. Consumer confidence is at a very steep low right now. So this is weighing on them in the workplace too. And that’s factoring in with the uncertainty about the economy and what the impact of that is going to be on their organization. their employment. So lots of reasons for communicators to be listening, to be sharing what they’re hearing with leadership and counseling leadership on how to communicate through all of this in a way that maintains a level of productivity, but also maintains that employer brand so that people want to work for you. @nevillehobson (17:27) Yeah, I think again, referring to the UK publication HR director and what they were talking about toxic workplaces, their focus in their assessment, then their kind of, here’s the list of what you need to do is aimed at HR, specifically at human resources, who they say is the the kind of start point of this. I don’t disagree with that. That I would argue is where they are with setting policy and setting behaviors and communicators communicate that and maybe help shape the messaging around that. So in addition to that list of things you mentioned, I would add you need to be hand in glove with your HR side because they will know things you don’t know and you need to know and they may not be proactive. They may be all scared sitting in a corner of cells even. So there’s that, that’s key to it. And the other thing I would argue is that all you’ve known so far on how you address these sorts of issues probably needs rethinking now in light of what’s happening. 
Because all the stuff I read and that I know of and you will be the same, I’m sure, is to find and fine tuned over years in times that are not like this at all, where trust is gone completely. And again, just reflecting back on Edelman’s trust barometer. we’ve seen kind of warning signs in some of the research from that recently, that this, you know, things like, for instance, how leaders can shape workplace culture, the company’s ethos and leadership training, all those things are great. But for circumstances like this, there’s something else needed more than that, I would say. They talk about a thriving workplace is where toxicity is neither tolerated nor ignored. Difficult. if the leadership is involved in that, eg, Jamie Diamond, JP Morgan, instance, who is fostering a climate of toxicity, then then that’s real tricky. So this is where going back to the role of the communicator. That’s where some very creative thinking and behaviors need to be put in place, which may not be something you’re used to. So communication leadership is the key to that, I would say. So interesting time. I mean, you mentioned The cliche, know, threats and opportunities, this is opportunities, I see it, but it’s an interesting time we’re in and it’s going to be especially interesting in the coming months. Shel Holtz (19:37) Yeah, I just want to go back to your remark about working hand in glove with human resources. And I think there’s a flip side to that, too. They may be developing policies that they think are great, but they have not considered how the employees are going to react to that if it’s message to the way they’re planning on messaging it or maybe not react well to it at all. Maybe they know that employees won’t react well to it. and need communications help in explaining why. mean, there are things that happen in the workplace that employees don’t like and there’s things you can’t do about that. Your job isn’t to make employees happy, it’s to help them understand why this is happening, where they fit in a solution, where we’re going from here. communicators can’t just be sitting on their hands, know, cranking out articles about the company picnic. We need to be in the thick of this strategy. @nevillehobson (20:21) Ha ha ha ha. Shel Holtz (20:24) in order to see our organizations through these times. And that’ll be a 30. Indeed, that’ll be a 30 for this episode of For Immediate Release. @nevillehobson (20:27) Yeah. Interesting times indeed. Yeah. The post FIR #460: The Return of Toxic Workplaces and the “Big Boss” Era appeared first on FIR Podcast Network.
ALP 267: Agency owners review 2024 performance, assess outlook
ALP 267: Agency owners review 2024 performance, assess outlook
In this episode, Chip and Gini discuss the latest quarterly SAGA owner survey, which provides a mixed bag of results for agencies. They explore key findings, including the cautious optimism displayed by respondents, concerns about economic conditions, and the impact of government policies. Despite the varied performance of agencies, many are still managing to move forward. The discussion also delves into the benefits of project work, the size of client bases, and the lack of mergers and acquisitions activity. Chip and Gini encourage agency owners to stay informed about macroeconomic trends but also to focus on positive strategies to navigate uncertainties. [read the transcript] The post ALP 267: Agency owners review 2024 performance, assess outlook appeared first on FIR Podcast Network.
FIR #459: AI Transforms Content from ive to Interactive
FIR #459: AI Transforms Content from ive to Interactive
In this episode, Shel Holtz and Neville Hobson discuss the evolving landscape of podcast consumption, particularly in light of Satya Nadella’s innovative approach to engaging with audio content through AI. They explore the significance of transcripts, the potential for AI to facilitate interactive experiences, and the challenges that come with adopting these new technologies. The conversation highlights the future of podcasts as a medium that can be both ive and interactive, reshaping how audiences engage with audio content. Neville and Shel also examine how these same generative AI tools can make other content interactive and the ease with which s will be able to take advantage of it as LLMs become multi-modal. Links from this episode: The surprising way Microsoft CEO Satya Nadella uses AI to consume podcasts on his commute Podcast Transcription: How & Why You Must Transcribe Podcasts The next monthly, long-form episode of FIR will drop on Monday, February 24. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, Shel or Neville directly, request them in our Facebook group, or email [email protected]. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz (00:03.168) Hi everybody, and welcome to episode number 459 of For Immediate Release. I’m Shel Holtz. @nevillehobson (00:10.742) And I’m Neville Hobson. One of the more thought-provoking stories I came across recently was a short piece on Geekwire about Microsoft CEO Satya Nadella and how he listens to podcasts, or rather how he doesn’t. According to Nadella, the best way for him to consume podcasts these days isn’t by listening to them in the traditional sense. So what does he do? We’ll discuss that in just a minute. @nevillehobson (00:38.72) Instead, during his commute, he interacts with the transcript of a podcast using his personal AI co-pilot. He speaks to it, asks questions, interrupts when needed, essentially turning what would normally be a ive listening experience into an active conversational one. Nadella describes it as a full-duplex conversation, a two-way interaction, which until recently would have seemed futuristic. This kind of back-and-forth modality, he says, is more convenient and powerful than traditional listening. His comment was, there’s no going back. This shines a spotlight on a crucial but sometimes overlooked asset for podcasters: the transcript. Providing a transcript isn’t just about accessibility, although that’s a critical benefit. It’s also about discoverability, repurposing, and now enabling new ways of engaging with content. Transcripts enable a number of things: improving SEO so your podcast gets found in search; making your content more accessible to people who are deaf or hard of hearing; enabling content repurposing into blog posts, social media or newsletters; and, as Nadella demonstrates, allowing AI tools to analyze, summarize or even hold conversations about your content. 
In a world where attention is scarce, transcripts are no longer just a nice to have, they’re becoming essential infrastructure for how audiences, including high level business leaders, interact with audio content. So here’s the primary discussion point for our conversation today. Could Satya Nadella’s model be the future for busy professionals where AI acts as a bridge between long form audio and actionable insights? And if so, what does that mean for podcasts and communicators in how we produce a packaged content? What do you think, Cheryl? Shel Holtz (02:23.167) I think there will be a fair number of people who will take advantage of this and the activities that you’ll be able to engage in like it that will emerge as these tools continue to evolve. I think there will be a majority of people who will continue to listen to podcasts because it is the lowest friction way to listen to a podcast. There’s no work. just go to the podcast app, pick the one you want to listen to. Latest episode play. And a lot of people don’t want to work in their car. They’re driving, they’re relaxing. Asking questions is not on their bingo card for their for their drive to or from work, for example, or or during a road trip. So I think people will continue to take the easy road for the most part. But for people who are trying to learn a lot or glean information. related to current events or emerging science or whatever the theme of the podcast they listen to is. Yeah, absolutely. I think this is going to be a popular approach and there are more than one way to do it. For example, you could put that transcript into Google’s Notebook LM. I don’t think you can talk to it yet, but you can. can you? OK, great. Well, there you go. @nevillehobson (03:40.267) Yeah, yes, you can. Yes, you can. That’s just just just been introduced. But it’s different because there you’re talking to the two AI hosts and you then having a conversation you can’t script. Shel Holtz (03:49.226) Right. Yeah, exactly. But if the transcript has been loaded into the notebook, then the conversation, obviously, you can listen to the podcast with the two hosts talking about it. But at that point, you might as well just listen to a podcast, one or the other listen to the original. But the whole idea of Notebook LM, even before they introduced that podcast feature, was your ability to query the notebook based on everything that’s in it. You could have 50 episodes of a podcast or you could load the transcripts of the most recent episodes of 20 different podcasts on the same theme and just start querying it. And it’ll give you answers to your questions. This was what makes it so powerful a tool. think there’s also, by the way, I’m using a tool now. This is not for podcasts, but along the same lines, it’s called Drip Drippp, three P’s. what I do here and it’s a paid service is I take all of the AI email newsletters that I subscribe to, and I subscribe to probably 12 or 13 of them, and it gets all of them and sends me one email summary of what’s in all of them. So I’m now reading my one daily drip rather than all 13. Email. newsletters, which is particularly useful because there are days I don’t have time to read any of those newsletters, but I find the time to read that one drip. this being able to repurpose, which is kind of what we’re talking about here, content to make it more consumable for you in the circumstances that you’re in. This is, I think, one of the powerful uses that we’re seeing a lot of people start to adopt AI for. 
@nevillehobson (05:39.17) Yeah, I mean, in this specific case of what the Sati Nadella did with co-pilot, it’s not so much reading it out. It’s the way in which it interacts. You interact with it. And literally it’s random. It has no foresight of what you’re going to ask, but it finds the content you’re looking for. So you could ask it, for instance, tell me a bit more about the topic that Shail and I talked about on Shel Holtz (05:49.931) Mm-hmm. @nevillehobson (06:08.333) XYZ topic or at about the 18 minute mark. Can you quickly summarize the key points of what we discussed? And it will do that. And that is wow to me. The first one’s easy. You could tell it as I did in an experiment I did, which you’ve got a clip we’re going to have. You can be able to listen to that in a minute is to summarize the podcast. And what I did was the transcript of our previous episode for 98 for 58. Sorry. And that was the one we talked about on AI being a part of your team, an AI chatbot and all that. So I ed the whole thing and asked it a couple of things and you’ll hear that in the transcript in a minute. But it got me thinking that the interview with Satya Nadella is worth a read. It’s quite concise, but you can project out your own thinking as to what this might mean and what Satya Nadella talks about his experience with it and particularly the multimodal element. Your point about driving got it entirely, although if you’re commuting on a train or a bus, no big deal. You can just do it like that. But I think the idea of being able to tell the chat bot, give me a summary of the episode 458 and it’s got, it’s ed and it tells you that that might prompt further things. Tell me, you you and I talked about such as a topic. Give me the key takeaways that we discussed. So it’s literally as fluid as that. It’s not more structured than that even. And the experiment I did, which may or may not be an indicator of wow or not, worked extremely well. There are some big downsides with this. And this I wondered about what Satya Nadella is doing about this, because the work in preparing the transcripts and ing it to the chat bot, each time for each episode is severely not. good from ease of use and all that. There’s big barriers that you’ve got to be really keen to do this. I did it. One thing I found, Nadella used Copilot. I knew few people using Copilot actively, but I know tons of people using ChatGPT. So I asked ChatGPT, can you do this? To which it replied, seriously, confidently, yes, of course I can. @nevillehobson (08:28.755) And it then actually asked me, what do want to do? the transcript and tell me how we can have a conversation. I said, well, that’s exactly what I want to do. So I followed the advice on ChatGPT’s page about this, which is to use the mobile app on my Android phone. But when I tried to the transcript, ChatGPT told me on the app, sorry, the app doesn’t file s for this use. You can files, but not for this use, using the audio. engagement function. But I did discover that the web app, the PWA on Windows 11 does that. And so I engaged with the chatbot via the PWA, the personal web app on Windows 11 on my desktop computer, not the mobile. But it was an interesting experiment in my engagement with chat GPT that way, very conversational. And I was actually very impressed indeed with with how it performed. So let’s include the clips so people can listen to it, Shell, because it was well done. It’s lightly edited to edit out some of the gaps in it to make for a better listening experience. 
And I’ve amplified the chatbots audio a bit because I was doing this on my microphone, the desktop, and the sound was coming out of the desktop speakers, which had to amplify a bit. But I think you’ll get the idea here. When I asked it to summarize, then I asked it specific questions about one of the segments we discussed. So let’s take a listen. @nevillehobson (10:01.899) So what do you think of that, Cheryl? I mean, that’s a simple instance. I honestly couldn’t imagine me going through all the faff of ing transcripts for each individual episode and then listen to it on my car and asking it questions. I just simply couldn’t imagine that. But as you mentioned earlier on in our conversation, things like this are only going to get easier, I would imagine, don’t you think? Shel Holtz (10:24.711) absolutely. For a number of reasons. One of the big ones is that what’s coming is the large language models, the big frontier models are all going to be multimodal before too long, which means that you’ll be able to the audio podcast or even the video version of the podcast from YouTube or just point it to it. And it’ll be able to do exactly the same thing without having to go through the rigmarole. of the transcript. this is just going to get easier and easier as time goes on. The other thing that you mentioned having to create the transcript, a lot of podcasters are including a transcript in their show notes for SEO purposes, and we are among those. I don’t edit it. I don’t go in and fix all of the errors. I don’t have time. It’s raw. So that at least somebody searching for some keywords might find us. But all you would have to do with @nevillehobson (11:14.391) For this you’d have to. Shel Holtz (11:20.743) any podcaster who’s doing that is copy and paste that transcript. You don’t have to go through any hoops to create one. @nevillehobson (11:27.241) no, no. When I said you have to do something with the transcript, I didn’t mean that at all. You don’t write it out. But I’ve noticed this in the past on other experiments I’ve done. If you a raw verbatim transcript with lots of Ams and As in and some things that didn’t quite catch and so it does it wrong, that will seriously impact the quality of what you get back. So for something like this, I would definitely edit lightly the output. And of course, on a 20 minute episode such as we had, that would be an easy thing to do. but that doesn’t scale. this isn’t, in my view, this is not a prime time tool for everyone to think, wow, I can use this. This is if you’re keen, if you’ve got the patience and the time to go through the prep. So you prep 10 episodes, let’s say, or even your transcripts. Let’s say you speak, you’ve got the ideal audio environment in which you’re recording, in which case the recording software will pick up everything about 99 % correctly, like Riverside that we do, for instance. And that’s got a method to smooth some of the stuff. Or you’ve got Descript. I mean, there are tools you can do this. The point, though, is there are still separate tools you’ve got to use to get this into the state before you can it and share it with the chatbot. That may well be a barrier too much for people. The idea is fabulous, I think. I love this idea, which is why I was so keen on trying it out myself. And I think it is going to be part of the landscape pretty soon, just without all the barriers, hopefully. Shel Holtz (12:52.009) Yeah, and the easier it gets, the more people will do it. 
But to your initial question, I don’t think podcasters have anything to worry about. think people, by and large, are just going to continue listening to podcasts. After all, a lot of people listen to podcasts because they like the hosts. They like the segments. They like the vibe. And you won’t get that out of summaries and the ability to query. But on the other hand, I listened to an episode of a podcast that was over five hours long. @nevillehobson (12:59.925) No, no. Shel Holtz (13:20.595) It was interviews with three people. think they were from Anthropic, but it was a five hour podcast. Boy, would I have loved a summary and the ability to query that because a lot of it was very over my head. @nevillehobson (13:29.335) Yeah. I mean, I think. Sure. I think this is in that area, of course, this is just another method of engaging with the content. It isn’t intended to do this instead of listening to the podcast. Not at all. If you want to do that, like you said, get a summary, whether it’s 20 minutes or five hours, that could be handy. It might work good for if you’re the kind of person who likes trying out new things. There is a new podcast. don’t know. Let’s get it. Let’s the transcript and get a summary. mean, that’s maybe not, you know. daily activity. But all of this will be part of the landscape to make things a lot easier. And I could even imagine that one of the chat bots, you’re not going to go through all this stuff of ing files and editing, you’re going to tell the chat bot to do that. Look at the file, get the transcript, see if it’s okay, share it with your colleague at ChatGPT. We’ll take it from there. But it’s a neat idea. I’d keen to know anyone listening, one of our listeners has done this with Copilot or are you thinking about it? Who’s using Copilot? ChatGPT were hurdles. I wonder how that would be with Copilot. I did ask ChatGPT to compare this feature with Copilots, which it did. And it came back and Copilot excelled in areas such as complete integration with all your typical software apps you’ve got on your PC. Whereas ChatGPT, this is not integrated in the same way. That’s today. I suspect that’ll change. Shel Holtz (15:02.582) So what we’re talking about here is taking a ive medium and extending its utility to make it an interactive medium, which is amazing, frankly. And it’ll be interesting to see what other kinds of media can be made interactive with the aid of artificial intelligence. And that’ll be a 30 for this episode of for immediate release.     The post FIR #459: AI Transforms Content from ive to Interactive appeared first on FIR Podcast Network.
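For anyone who wants to try the workflow Neville describes above (feeding a podcast transcript to a chatbot and querying it conversationally) without the mobile-app hurdles he ran into, here is a minimal, hypothetical sketch of the same pattern in Python. It is not the hosts’ actual setup: it assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable, and the file name, model name and prompts are illustrative placeholders.

```python
# Hypothetical sketch: querying a podcast transcript with an LLM, along the
# lines of the experiment described above. Assumes the OpenAI Python SDK
# (openai>=1.0) and an OPENAI_API_KEY environment variable; the file name,
# model and prompts are placeholders, not the hosts' actual setup.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A raw transcript exported from the recording tool (e.g. Riverside or Descript).
transcript = Path("fir-458-transcript.txt").read_text(encoding="utf-8")


def ask(question: str) -> str:
    """Answer a question using only the loaded transcript as context."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {
                "role": "system",
                "content": "Answer questions using only the podcast transcript "
                "provided by the . If the answer is not there, say so.",
            },
            {"role": "", "content": "Transcript:\n" + transcript},
            {"role": "", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarize this episode in five bullet points."))
    print(ask("What were the key takeaways from the AI-chatbot segment?"))
```

The same pattern extends to the NotebookLM-style use Shel mentions: load several transcripts as context (or into a retrieval store) and query across episodes. And as the frontier models become multimodal, the transcript step could in principle be replaced by pointing the model at the audio itself.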