AI Ethics in Marketing: Why Strategy and Responsibility Must Go Hand in Hand
written by John Jantsch

Episode Summary
In this episode of the Duct Tape Marketing Podcast, host John Jantsch welcomes Paul Chaney, a veteran digital marketer and publisher of the AI Marketing Ethics Digest. As artificial intelligence becomes central to marketing, Paul makes the case for why ethics and strategy must lead the conversation—not just the latest tools.
The discussion explores how unchecked AI use can damage brand trust, create internal chaos, and result in missed opportunities. From AI techno-stress to the need for governance and transparency, this episode offers a timely blueprint for adopting AI responsibly in modern marketing.
About Paul Chaney
Paul Chaney is a B2B writer, content strategist, and the founder of the AI Marketing Ethics Digest on Substack. With a long-standing career in digital marketing, Paul brings a sharp perspective on how businesses can balance the excitement of new AI tools with responsible, customer-focused ethics. His consulting and writing work is rooted in helping brands build trust and clarity in the age of automation.
What You’ll Learn in This Episode
- Why ethical frameworks are critical in AI-powered marketing
- The risks of “shadow AI” and how to govern internal use
- How AI techno-stress is affecting employees and teams
- Why strategy should always come before adopting new tech
- How a “boxed” AI system could reduce chaos and stress in organizations
Key Moments from the Episode
- 00:40 – Why Paul launched the AI Marketing Ethics Digest
- 02:56 – Responsible AI from the customer’s perspective, not just compliance
- 04:06 – Transparency, bias, and brand reputation in AI output
- 05:33 – Strategy before technology: avoiding “bad work faster”
- 06:59 – What “shadow AI” is and how it can harm organizations
- 08:30 – The need for usage policies and monitoring internal AI use
- 10:54 – The Generative AI Business Adoption Hierarchy explained
- 12:51 – Embedding AI into business culture with governance and clarity
- 15:56 – What is AI techno-stress and how is it impacting workforces?
- 18:24 – Lack of training is a hidden ethical risk for employee well-being
- 19:55 – A real-world agency navigating generational divides in AI adoption
- 21:06 – Why many business owners may give up on AI—and what that means for consultants
- 22:15 – Where to follow Paul and subscribe to his work
Explore Responsible AI in Marketing
Interested in learning how to use AI ethically and strategically in your marketing practice? Start by subscribing to Paul’s newsletter and check out his content strategy services.
John Jantsch (00:00.793)
Hello and welcome to another episode of the Duct Tape Marketing Podcast. This is John Jantsch. My guest today is Paul Chaney. He’s a seasoned digital marketer, B2B writer and editor. He is the publisher of the AI Marketing Ethics Digest on Substack and a longtime voice on responsible tech adoption. With a background in content strategy and digital ethics, Paul explores the psychology and structural effects of AI on the modern workplace.
Paul Chaney (00:21.581)
Yep.
John Jantsch (00:30.469)
So Paul, welcome to the show.
Paul Chaney (00:32.514)
Thank you. It’s a pleasure to be here, John.
John Jantsch (00:34.853)
So these days, you know, I talk about AI pretty much in every episode, it feels like. But I’ve not had anybody on this topic. So I thought it was really an interesting one. I’m curious, is there anything that sparked your interest? I mean, you’ve been around as long as I have practically. Anything that sparked your interest in AI, AI ethics specifically?
Paul Chaney (00:40.78)
Right?
Paul Chaney (00:52.992)
Well, yeah, we’ve both been around a long time. Well, you know, I think if you go back to, say, the early blogging days, you know, which I know both you and I remember, as it began to be incorporated into the business environment, you know, there was this maybe sort of ad hoc approach. And finally we got around to, you know, promoting the idea of having blogging policies, that kind of thing. It was just guardrails for safety, making sure people didn’t say stuff that
they shouldn’t say and all of that kind of thing. Well, you know, we’ve jumped through social media hoops and things like that now. And it was the same thing with that. Well, now we’re on something altogether different, a whole new layer of this kind of what could end up being a real mess if organizations don’t manage it effectively and cohesively. But in terms of my own interest, I will tell you this, I looked at what was being talked about as it began, you know, as AI.
began to become really a central part of conversations. I saw marketers talking about the tools, the shiny new tools. I saw AI ethics people talking about ethics. What I didn’t see was an integration of the two, a synthesis of the two. I didn’t see it, and I felt like, well, here’s a hole that needs to be filled or a gap that needs to be filled.
John Jantsch (02:00.793)
Yeah.
John Jantsch (02:04.143)
Thanks
Paul Chaney (02:19.756)
I don’t see anybody else doing it. So, what the heck, I’ll do it. And because I’m not really a video kind of guy, I decided to do a newsletter. And that’s when, in August of 2023, the AI Marketing Ethics Digest was born. And I will be honest with you, completely candid: it was not a topic that I was even all that interested in, certainly not passionate about at the time, but I just felt like,
Yeah, we need to be talking about responsible AI. And because my field is marketing, I said, well, let’s just sequester it to that for now.
John Jantsch (02:56.505)
Well, a lot of organizations, I’m sure, are wrestling with this or will be wrestling with this from a compliance, HR, legal standpoint. But I also think a lot of times we look at things with that lens, you know, what’s the impact on the company, or why would they do this? Because they’re forced to do this, right? But I think sometimes we forget about the actual consumer that is out there, you know, the buyer, or, you know, in B2B situations,
you know, the business that’s out there. How do you feel like, how are we going to be responsible to that thought, you know, and not just a compliance thought?
Paul Chaney (03:36.962)
Well, yeah, compliance, you know, you think about legal and all of that and then, you know, regulatory policies and stuff. But I think first and foremost, as with anything, you have to think about your customer or the consumer. What’s fair to them? What would be honorable? What would be moral? What would be ethical where they’re concerned? And I think you have to approach this whole use of AI from that standpoint. Now, does that mean
John Jantsch (03:39.738)
you
Paul Chaney (04:06.326)
that you disclose every use of AI that you’re involved with and everything. I don’t necessarily think so. I do think at the very heart of this though lies the ethic of transparency. That’s implicit across the board. And the big talk that you see quite a lot is about bias. These LLMs are trained on certain information and they’re going to spit that out, right? As they’ve been trained on it.
John Jantsch (04:27.599)
Mm-hmm.
Paul Chaney (04:35.366)
And I think those kinds of things can affect your relationship with a customer. And it can also do damage to your own reputation as a brand or a business if you’re not careful, if you’re not making sure you’re auditing, you know, what’s being put out there, you’re monitoring it, you know, that kind of thing. So I think first and foremost, our responsibility is to the consumer and making sure we’re treating them right where this is concerned.
John Jantsch (05:04.74)
Well, you know, I’ve been saying for years, probably 30 years I started saying, you know, strategy before tactics. And, you know, I’ve kind of been changing that, tongue in cheek, to strategy before technology, you know, because everybody’s just jumping into these tools and saying, look, this can make me do this faster, or this can make me do this more efficiently, you know, only to come to find out that they’re doing like really bad work faster or at scale. And so that’s, you know, to me,
Paul Chaney (05:16.096)
Mm-hmm.
Paul Chaney (05:28.8)
Right, right. Yeah.
John Jantsch (05:33.103)
You know, I always analyze every new thing that’s come along. I mean, the web came along and social media and all these mobile devices and, you know, now AI, and just look at it fundamentally. What are we here to do as marketers? You know, let’s develop a strategy that does that. And can the tools allow us to do that? You know, as opposed to, I mean, nothing drives me crazier than seeing people show these whiz bang things. Like somebody invited me to be on a podcast and they were going to have an AI bot
Paul Chaney (05:40.504)
Right?
John Jantsch (06:01.677)
interview me for the podcast. I’m just, it’s like, okay, it can do that, but should it? Yeah. And I think that’s what people do. They get really enamored with like, look at this cool thing. So people are, you know, now out there building AI agents, you know, for everything without any thought about like, what’s the strategic impact of this for my business. And I think ethics runs really side by side with strategy, quite frankly.
Paul Chaney (06:03.236)
Oh my gosh. Do you really want to do that? Just because you can doesn’t mean you should.
Paul Chaney (06:15.426)
Mm-hmm.
Paul Chaney (06:29.166)
Yeah, and I will have to say, I mean, I’ve been doing this newsletter now for a year and 10 months and I just crossed the 700 subscriber line, you know, and you think, well, gee, a year and 10 months, you should be further along than that. But that’s kind of telling me that maybe this is not top of mind with marketers right now, the ethical side of things. I mean, how many newsletters and blogs and, you know, YouTube videos are out there about all of these tools and stuff. That’s what folks are focused on, but,
John Jantsch (06:56.669)
yeah.
Paul Chaney (06:59.054)
To your point, if you don’t take this from somewhat of a top-down or strategic mentality, well, one of the things you’re gonna end up with is a lot of shadow AI. You’re gonna end up with people using this, not disclosing they’re using it, or not giving full disclosure. You may have this department using this, this department using this, and there’s no governance whatsoever. You run into privacy risk, you run into safety risk, I think, with all of this too. We think about cyber
John Jantsch (07:08.696)
Mm-hmm.
Paul Chaney (07:28.824)
tech and all of that stuff. So I think there’s a lot of ways that this can go wrong if you’re not keeping ethical guardrails in place. And it’s not like it’s rocket science. I mean, a lot of it’s just common sense, you know? And so establishing things like an AI ethics council or committee to oversee a lot of this, and then maybe some kind of usage policy: we can do this, we can’t do this. Not unlike
John Jantsch (07:29.209)
Mm-hmm. Yeah.
Paul Chaney (07:58.914)
we did with blogging and social media.
John Jantsch (08:01.573)
Yeah, I’m starting to see some parallels with a lot of organizations we work with when people started, um, you know, posting on social media with their personal phones. Uh, right. So, I mean, it’s kind of the same thing. It’s like half the people in the organization have a personal ChatGPT account, you know, but how is it being used? How’s it being monitored? How’s it being, um, retained, right? Because, you know, when they leave the organization, all of that, whatever work they did on behalf of the organization, leaves with it. So,
Paul Chaney (08:24.727)
Right.
John Jantsch (08:30.041)
I think it’s really just a set of kind of common sense policies in some cases, isn’t it? Or processes.
Paul Chaney (08:34.358)
It really is. Yeah, in a couple of weeks, I’m gonna be publishing an article on the newsletter. I call it the Tower of Babel problem in corporate AI. Everybody’s probably familiar with the Old Testament Tower of Babel, where they were building the tower and then everybody started speaking a different language and everything stopped, right? Well, I think that can happen in the modern workplace too. And it’s just, like I mentioned, shadow AI everywhere,
maybe duplicate spending, inconsistency in the voice. Marketing says it one way, some other department says it another way, you know, all of these kinds of things. And what’s that gonna do? That’s gonna impede productivity. And I think, you know, that too has ethical sort of parameters around it. But if you start it with a strategic mindset, understanding that, you know, you’re not gonna…
John Jantsch (09:07.887)
Yeah, yeah.
Paul Chaney (09:29.846)
roll it out in a full-fledged sort of way across the enterprise. You know, you’re gonna start small, you’re gonna start with pilot projects, maybe in a really high-impact use case, make sure that’s working like it should, then you gradually expand it, but it’s all done under some kind of governance framework where there’s approval, there’s risk checks, there’s all of that kind of checks and balances, if you will, right? It’s not…
John Jantsch (09:49.541)
Mm-hmm.
Yeah. Well, is anybody producing, you know, that kind of policy? I mean, you know, you’ve got people out there that will come in and audit your security and people that will audit your HR practices. Is anybody doing that now as a service? Yeah.
Paul Chaney (10:01.73)
No.
Paul Chaney (10:06.542)
I’m sure there are, and I wish I had in place some real bona fide kind of examples or case studies. That’s one thing I am working on, by the way, but we’re still very early in this, even though it’s been out since what, 2022, you know? I mean, AI has been out much longer, but the practical use of it in our case is different. So I don’t have anything on hand, but just, you know, I’m not
John Jantsch (10:16.611)
Yeah. Sure. Sure. Yeah. It’s the wild West. Yeah. Yeah.
Paul Chaney (10:34.016)
here to plug my newsletter, but I would say just keep reading the newsletter and you’re going to see case studies because I’m keeping my ear to the ground on all that stuff.
John Jantsch (10:37.358)
Yeah.
Yeah. One of the things I think you posted on LinkedIn that I’m sure came from the newsletter as well, or maybe that’s where I read it, but the idea of this generative AI pyramid. You want to unpack that concept?
Paul Chaney (10:54.614)
Yeah, a funny story about that. I interviewed Charlene Li recently for the newsletter, and she had used this Maslow’s hierarchy concept and applied it to generative AI, and her focus is leadership. And I got to thinking, you know, let me see if I could do something like that when it comes to the ethical side of things. So I did. I went to ChatGPT or maybe Claude.
John Jantsch (10:59.641)
Mm-hmm.
John Jantsch (11:06.693)
Sure.
Paul Chaney (11:22.382)
It’s ChatGPT, Claude, and Perplexity. That’s my trifecta. And I said, okay, can we create something that is, you know, using that model, but has this ethical orientation to it? And sure enough, it did. I call it the Generative AI Business Adoption Hierarchy. I’m not one for flamboyant names, John. I’m just a, you know, plain Jane guy, but basically it kind of follows Maslow’s in a sense. It starts with
John Jantsch (11:26.03)
Yes,
John Jantsch (11:43.215)
Right.
Paul Chaney (11:52.716)
the lower level awareness and access. Does my team know what these tools can do? Are they allowed to use them? Have they been trained on them? And then the next level, if you want to think of it in a hierarchical fashion is security and trust. Do we have guidelines in place to make sure we’re using this safely and ethically? And I have a friend who’s very well invested in AI technology who says, Paul, you got that backwards. You really need to start with those
clear guidelines, make sure that governance is in place. So, you know, you could perhaps switch those two around, but once you’ve got that, then that’s really where the magic starts. That’s where you can begin to see productivity take place because you’re using it in a way where there is the governance, where there is approval, where there’s tried and true, tested sort of things through these various pilots, that kind of thing. Marketing, here’s your…
John Jantsch (12:22.382)
Mm-hmm.
Paul Chaney (12:51.106)
the prompt stack, if you wanna use that term, HR here’s yours, finance here’s yours, et cetera. And as you get to that level, then you’re gonna begin to leap over into the level where it really does move from tactics to strategy. And it becomes a part of the business model itself. And I don’t know if it’s sort of an epiphanal moment or if it’s just a gradual change of…
the sense of culture where AI begins to be baked into the organization. And that’s kind of like that Maslow self-actualization. That’s what that is. Now, how long does that take? I don’t know. I mean, it depends on the organization and how flexible they are and how open to transformation they are.
John Jantsch (13:25.465)
Mm-hmm.
John Jantsch (13:36.229)
Well, I have a theory. I think somebody is going to create the AI system in a box. So they’re going to walk into an organization like they would with ERP software or something and say, here it is. It’s built and branded just for you. It’s got your policies in it. You know, it’s $800 a month or something like that. You don’t have to figure it out now. Because what I’m seeing a lot of is
Paul Chaney (13:38.274)
Yes, sir.
Paul Chaney (13:59.758)
Mm-hmm.
John Jantsch (14:04.473)
You know, everybody’s going, this new tool, that new tool, I should try that, you know, and I think businesses are already exhausted, you know, over trying to figure it out. So there are obviously AI agencies and consultants, you know, that are cropping up that are actually kind of creating the agents and things for people. But I think there’s going to be a turnkey solution, or fairly turnkey solution, that kind of handles all of that stuff that comes out. I don’t know, maybe it’ll be a startup. Maybe it’ll be the big three.
Paul Chaney (14:07.309)
Right?
Paul Chaney (14:32.206)
Yeah. Hey, when do you want to get, let’s get started on ourselves. You know, get ahead of the game. Well, you know, are you okay?
John Jantsch (14:34.405)
Well, that’s weird. We’re actually headed that direction with a tool for marketing, where we’re going to walk in and license an actual fully built, fully branded system for an organization and then teach your people how to use that one system as opposed to everybody just kind of doing their own thing and building their own custom GPTs. I think that’s the only, that’s one of the ways to maybe not only make it accurate, but put some guardrails on it.
Paul Chaney (14:55.246)
Hmm.
Paul Chaney (14:59.758)
Well, yeah, I
I’m done, or well, it’s in place, I’ll put it that way. I haven’t had any takers as of yet for that sort of thing on the ethics side of things. I’ve really, again, kind of focused on marketing, but I think what I’ve created could branch out into other departments and be used in other departments. And so I think you’re absolutely right. There will be a packaged solution.
John Jantsch (15:12.773)
Sure.
John Jantsch (15:27.589)
Yeah. And I think, you know, again, the big companies are just going to hire McKinsey and say, you know, do this for me. Um, you know, but it’s that $10 million electrical contractor that’s like, well, I know I need AI, but where do I even start? You know? And so I think there’s a ripe market for, you know, somebody to kind of hit that kind of mid-market firm. So, there’s another term that I grabbed from some of your writing, uh,
Paul Chaney (15:33.495)
Right.
Paul Chaney (15:43.021)
Mm-hmm.
Paul Chaney (15:49.372)
yeah, I agree.
John Jantsch (15:56.631)
AI techno-stress. You know, we were just talking about it. I see a lot of stress from business owners right now. How does that come into play? I mean, are people, you know, when somebody is in full-blown techno-stress, are they just shutting down, or what, you know, what’s going on with that? And how are you applying that to the ethics component?
Paul Chaney (16:03.67)
Right? Right?
Paul Chaney (16:17.73)
Well, as the newsletter has grown and evolved, and it has, one of the things that I began to think about and give some thought to was this idea of the stress element that is involved in learning to use these tools, and maybe being forced to use these tools even when, if,
say, you’ve been a long-time employee and you’ve just done it a certain way for years and now you’ve got to do it a different way. Well, that’s going to be a stressor. But look at techno-stress: the term was first coined in 1984, I believe, long before AI came on the scene. You forget a password and you’re having to go redo a password. That’s techno-stress. The internet goes down or whatever the case may be. But AI is just like a whole other layer of this.
John Jantsch (16:56.697)
Hmm. Yeah. Sure.
John Jantsch (17:05.305)
Yes.
Ha ha.
Paul Chaney (17:12.398)
And I think it’s inducing a lot of stress at a lot of levels. You mentioned the business owner, the $10 million electrical company. Surely that business owner has, you know, some confusion-related stress where that’s concerned. But maybe you’re a frontline worker or a knowledge worker and you’re sitting in front of your computer every day, and now you’ve been introduced to this new set of tools that you’ve got to learn how to use and get a grasp of. And I can’t tell you how many times I’ve seen people, maybe on LinkedIn, commenting other places,
saying, well, you know, I tried ChatGPT, but it didn’t really do anything for me, you know. And I’m thinking, well, you’re lacking the fundamental training that needs to be involved. And so that certainly is a part of the solution. But I think, to me, and I’ve really gone down that rabbit trail now with the AI techno-stress concept, I’m actually going to be publishing a book here later this year focusing on that. And,
John Jantsch (17:51.535)
Sure. Yeah. Yeah.
Paul Chaney (18:11.522)
You know, I just feel like that’s an element that ties to the ethical use of all this. There’s ethics involved when it could cause some kind of emotional or mental or psychological harm to your employees. That’s ethical. So that fits, you know, it’s still all under the same umbrella.
John Jantsch (18:24.803)
Mm. Mm. Yeah.
Well, it’s interesting because, you know, a lot of employees get surveyed, and probably above, like, I want more money, is, I want to know what it takes to do my job, I want to have the tools to do my job. And I hadn’t really thought about that, but a lot of organizations are just saying, hey, figure this stuff out. And that, you know, lack of training is actually a real detriment to the organization.
Paul Chaney (18:51.937)
Right.
Paul Chaney (18:58.924)
Right. And I think, you know, to your point, the boxed solution will help decrease that stress level. But let me give you a quick example. I was in a meeting yesterday with an agency. My day job is writing and editing for B2B, and I work with an agency out on the West Coast, and they have been around 30 years. Their founder, she’s probably in her seventies, and they’ve done great work, but they’ve done it their way.
And now they’re being confronted with having to do it a different way. So much so that the founder has decided she wants to go ahead and retire. She’s going to turn it over to her second in command, so to speak. And that lady and I were talking recently about this whole AI thing, and you could see the curiosity in her mind, but you could also see, you know, some bit of conflict because they have very strong, detailed
John Jantsch (19:35.759)
Mm-hmm.
John Jantsch (19:51.013)
Yes.
Paul Chaney (19:55.064)
policies and ways of operating now, you know, the way they want their writers to work, for example. And now that’s all being challenged. And the conversation in the meeting yesterday was with all the writers. I’m, again, one of them. And it was AI this and AI that and AI the other. And, you know, so I think again, it goes back to the organization: how flexible are you willing to be? How aggressive are you willing to be?
Just as in every case, there’s going to be a bell curve. There’s going to be the innovators, early adopters. There’s going to be the early majority. There’s also going to be the laggards who are just going to say, no, we’re not doing it. And those are the folks that I’m very concerned are going to lose in the long run.
John Jantsch (20:26.073)
Mm-hmm.
John Jantsch (20:40.163)
Well, no question. I mean, there’s absolutely no question. This is not a time to bury your head in the sand. But, you know, it’s interesting. I hadn’t really thought about that, but I bet you there will be a wave of people, business owners, who are just like, forget it, I just don’t have the bandwidth or the energy to figure this new thing out, because it so fundamentally shifts how we go to work, you know, what an org chart actually is going to look like. I mean, so it’s not like, I’ve got to figure out Twitter,
Paul Chaney (20:44.682)
No, it is not.
Paul Chaney (20:56.129)
Right.
John Jantsch (21:06.101)
it is really, I’ve got to, like, replumb my entire, you know, business here. And I think that probably will actually wipe, not wipe people out in the sense of, you know, being overtaken, but literally just saying, you know, enough, I give.
Paul Chaney (21:10.926)
Mm-hmm.
Paul Chaney (21:21.122)
Well, and you’re so smart in taking a leadership role in this because, you know, you’ve been around a long time, you’ve got a lot of influence, and people listen to you. And, you know, I don’t know if your business has changed over the years, but you know, you focus on small business, or at least you used to, I don’t know if it’s larger now. But you know, I think about small businesses. I come from small business. I am a small business person and my heart still lies there, and
John Jantsch (21:26.467)
Okay.
John Jantsch (21:39.481)
Yep.
Paul Chaney (21:49.14)
I can imagine a lot of these just have a deer-in-the-headlights look when it comes to this stuff, and they have no idea about where to go, what steps to take. And so what does that mean for somebody in a consultant kind of role? Hopefully somebody could come in and say, all right, we can figure this out. Don’t worry about it. Here’s a plan. And like you say, kind of a boxed solution.
John Jantsch (21:53.027)
Yeah. Yeah.
John Jantsch (22:12.505)
Yeah. Yeah. Well, you asked if my business has changed. I kiddingly say we’re in perpetual beta. So Paul, I appreciate you taking a few moments to stop by the Duct Tape Marketing Podcast. Where would you invite people to connect with you, but probably also, if this topic interests you, to be able to subscribe to your newsletter?
Paul Chaney (22:19.054)
Yeah, really, I understand.
Paul Chaney (22:34.946)
Yes, sir. It’s aimarketingethics.com; that will redirect you to the Substack. My day-to-day business is prescriptivewriting.com. I don’t write prescriptions, but anyway, it’s B2B writing. Excuse me. And you can always find me on LinkedIn. So that’s where I hang my hat a lot, that and Substack, these days. I’m not doing much with any of the other social networks anymore.
John Jantsch (23:00.953)
Yeah. Yeah. Awesome. Well, again, I appreciate you stopping by. Hopefully we’ll run into you one of these days out there on the road.
Paul Chaney (23:07.031)
I would hope so, John, and thank you again for this opportunity. I appreciate it.
John Jantsch (23:10.103)
You bet.