Navigating AI's Next Frontier with Mike Courtney
David Brandt: This episode
of Problem Solved is brought
to you by the Poirier Group.
Boutique consultants unlocking
organizational potential with
custom results-driven solutions.
Learn more at ThePoiriergroup.com.
Announcer: Here's the problem.
AI is evolving faster than most
organizations can keep up, and the
risks of falling behind are real.
So how do we solve this?
How do leaders and engineers harness AI's
potential without losing the human touch?
How can we embrace AI as a powerful
collaborator rather than fearing it?
Mike Courtney: It can
do good things at scale.
It can do bad things at scale.
Announcer: Mike Courtney, futurist and CEO of Aperio Insights, joins
IISE's David Brandt to unpack how
AI is unfolding at scale and how
industrial and systems engineers can
prepare to lead through this shift.
This is Problem Solved: The IISE Podcast.
David Brandt: Mike, thanks for
joining us on Problem Solved.
Mike Courtney: Nice to be here. Thanks for having me, David.
David Brandt: Before we get into our main topic: you describe yourself as an ethnographer and a futurist.
Mm-hmm.
Can you give some insight on what
those roles are and how they inform
the work that you do for a living?
Mike Courtney: Sure.
So, you know, I typically say that I'm a researcher, futurist, ethnographer, in some order.
But basically the research
and ethnography is about
understanding what people do today.
When you do research, sometimes you can
rely on them telling you what they do.
When you do ethnography,
you see what they do.
You know, I may say that I go to the gym
three times a week, but then somebody
who's doing ethnography and watches might
go, Hmm, it's not quite three, is it?
But then the foresight is simply
looking beyond what we do today into
the things that we might do tomorrow,
the possibilities, so to speak.
David Brandt: Alright, you study consumer behavior and what motivates change. When you look at how AI is transforming things, what's the biggest behavioral shift you're observing in businesses and organizations?
Are there sectors undergoing more
monumental shifts than others?
Mike Courtney: You know, there are sectors that are experiencing AI differently, and/or experiencing larger chunks of it before other sectors.
Some of that's because they want to; they're leaning in saying, gimme, gimme, gimme, I want it and I want it now. Let's do it.
And others, because they're
like, yeah, we'll get there.
Keep the door locked right now, and we'll eventually figure out when we wanna let it enter.
So different organizations, different
sectors, always experience things
differently, whether it was the internet,
whether it was, you know, electricity.
I mean, some sectors are like, we're good for now. Other sectors are like, no, I want it now. So it's normal to have a somewhat uneven rollout. That's not unusual, I think.
David Brandt: Well, speaking of rollout, when we first discussed having an interview on the podcast, you said that the introduction of AI can allow us to solve problems at an unprecedented rate.
Yeah.
But also create problems
at an unprecedented rate.
Can you unpack that paradox?
What does it mean for industrial
and systems engineers or anyone
in other professions, in terms of how they should approach AI integration and implementation?
Mike Courtney: Yeah.
And so, you know, if you've got time for a quick story: when I was in high school, I had a car, an old car, and I waxed it one day.
'cause I was like, I wanna make
it nice, I wanna clean it up.
And the paint back then wasn't
as good as today's paint.
And there were all these little chips.
And if you remember using the white paste wax, it would get in all those little chips. I was like, okay, I spent all this energy waxing the car. How am I gonna get these little wax chips out of the paint chips? I said, I know, I'll spend good hard-earned money to go through the car wash.
Took it to the car wash, ran it through, still had all these little white chips in there, and I was like, hmm, I still don't want to take it home and do it myself.
Maybe they'll run it through again.
So I took it back around.
I told the guy, Hey, I'm not happy at all.
Look at all the wax still in there. And this guy was really, really ambitious, let's say. He's like, I'll take care of it, I'll take care of it, give me your keys. Very willing to please, much like AI can be: I'll do it, I'll do it, I'll do it.
So he took the keys, took the
car out back, brought it back
about 15 minutes later, and I'm
like, oh my God, what did you do?
Because there are all these
swirl marks, deep swirl marks.
I'm like, this is not good at all.
I wanna talk to the manager.
Manager came out, looked at the car, looked at the guy like, what did you do? He's like, I tried to get all the little wax chips out. I was like, what did you do to the car? He's like, I just used a little steel wool and kerosene to try to rub away all the wax stuff.
Okay.
And the owner of the car wash said something I'll never forget, and I think it applies to AI. He said, Mike, it's okay if you have ambitious people work for you; that's great. It's okay if you have people that make mistakes, or in some cases are just downright stupid. That's okay. But if you have somebody who's not that bright and ambitious, they will put you outta business. And it's something I think we have to be aware of today with AI.
Let's say an AI robot could wash all the windows in the Empire State Building, you know, in a couple hours and charge almost nothing for it. You'd be like, that's fantastic. But how many times would you be able to suffer the fact that, oh, the settings were off a little bit and it broke every window in the Empire State Building? Sorry. So.
It can do good things at scale.
It can do bad things at scale.
So we have to prepare for that.
We have to know, well, let's test it a little bit. Let's just have it do a couple of windows. Okay, now do the rest like that.
As opposed to saying,
yeah, do everything again.
Because something may have changed.
The settings could have
gotten corrupted or changed.
And next thing you know, you went from,
well, we were saving all this money.
We washed all the windows in
the building like every week.
And you know, it was really fast and efficient, but then that one day could undo, for years, all of the good that it had done. And so there's sort of the double-edged sword of AI.
David Brandt: You know, the thing I think about a lot is how, when it comes to using AI, when it comes to experimenting with prompts and things of that nature, precision is the word that kind of comes to mind.
Mm-hmm.
Is the AI giving me too much?
Is the AI giving me not enough?
Does it really understand
what I'm asking for?
And that's gotten better over time.
But I still see, whether it be in the form of hallucinations or just simply me writing a bad prompt, that precision is still part of the issue in terms of how well AI is gonna work for us.
Do you see a similar insight there?
Mike Courtney: Yeah. I mean, it's similar in that it's sort of like how we communicate human to human, you know?
Until we get to the point where we
just can sort of complete each other's
sentences, and I know what you're thinking
and dreaming even before you tell me, then we've gotta be good at communicating.
And sometimes we sort of say,
okay, well if we both grew up in
the same area, we have the same
language and talk in the same way.
I might not need to say as much and
you'll understand exactly what I mean.
But in other cases, you know, we've got people from all over the country, all over the world, working in the same department or same organization.
Sometimes you have to be a little bit more specific and clear and really lay it out, just to make sure everybody's got it, and then even so, you know, have 'em parrot it back to you.
Okay, tell me what you think you heard me say, just to make sure we're on the same page.
AI is no different, because AI doesn't have the things that we grew up with. The things we grew up with over years become just, well, common sense.
Yeah.
You know, if I said, hey David, make the temperature in the room cooler, you and I know that the way we normally do that might have been open a window back in your grandparents' time, or today it might be, go to a little thing on the wall, make an adjustment to a lower number.
AI just knows, okay, goal
is lower temperature.
Hmm.
So if it removes the roof, boom, done. It's like, wait, but you took the roof off? Right. You said lower the temperature; you didn't tell me what not to do.
So, you know, to that degree, we have to be aware that AI doesn't necessarily know common sense. It's gonna solve the problem. If you said you wanted it cooler, it made it cooler, but it may not do it in a way that you wanted it done.
Right.
David Brandt: Fair enough.
And that's also a good point.
I always think of what an English teacher once told me: specificity is not a vice.
And that's a situation where
that very much applies.
When you look at the evolution of tech in the 20th century, beginning with the industrial revolution, the timeframes of the development and adoption of automobiles, the airplane, defense weaponry, computers, all had long arcs.
Yeah.
As someone who studies behavioral patterns, what's driving the compressed timeline with generative AI in these few short years, and should we be concerned about that?
Mike Courtney: So what's driving it, simply, is that the technology itself has the ability to, you know, learn and be trained and improve hourly, daily. You know, the people that make those models and work on these systems that we're all using and experimenting with, they've built it to be able to do things overnight. Say, hey, rerun it and train it with this and modify that, and by morning they have another version.
But if we think about things like, you know, aviation, automobiles, tractors. Even tractors; think about when tractors first came to be a thing.
You know, a true gas or diesel tractor. And the first guy in the area to have one, let's call him Jeb. Everybody probably looked at Jeb and said, Jeb's a little bit of a jerk. He's showing off. He's got this thing; it's like oxen and horses aren't good enough for him. I'm never gonna have one of those infernal things he's got.
And then 20, 30 years later, you talk to those same people, they'd be like, okay, Jeb's still a little bit of a jerk, but I did buy a tractor. We've got four now.
It's like, what?
What happened?
Well, you know, I decided,
okay, it made sense.
And you know, so we learned, but we
learned over quite a bit of time.
Today it's, you know, David uses some new AI, and by afternoon everybody else is like, used what? And then by tomorrow morning they're using it, and by the end of the week they're like, wait, I can't remember the time before we had this; I feel like we've been using it for weeks now.
So again, AI moves at a faster pace, and that's the thing that's good about it, and something to be cautious about.
David Brandt: Well, speaking of moving at a faster pace, according to a McKinsey report published in March, the use of AI among surveyed organizations jumped almost 25% from January 2023, which was a couple months after ChatGPT came on the scene, to mid-2024. The use of generative AI among organizations jumped almost 40% in the same timeframe, but the skill level for workers is still really unclear.
Yeah.
What happens when the recognition of AI capability far outpaces workers' readiness?
Mike Courtney: You know, I think every organization's a little different, right? So if we're talking early '23 to '24, hey, January '23, maybe everybody made a New Year's resolution to say we should learn AI and use the technology. Or maybe they just got scared that other people seemed to be talking about it and seemed to be doing things that maybe they thought they weren't, so we better play catch-up.
But at the end of the day, you know, ChatGPT, when it first came out for the masses, caught a lot of people off guard. They weren't sure what to make of it or what to do with it, but they eventually realized it's not going away. And yeah, it's not perfect, but it is useful if we know how to use it and start learning.
You know, I think we're gonna start to see more and more organizations not only, quote, use it or jump into it, but learn what they're not doing well.
A lot of it initially was just activity.
Mm-hmm.
And if you say, okay, what did you achieve with it? Because there's a difference. If I said, how many are using it? Yeah, we're using it. How many of you have achieved something with it? What do you mean? We've just played around with it so far and sort of kicked the tires a little bit.
But I think more and more, you know, organizations are gonna say, okay, how do I measure the ROI of this? And measurement's gonna be a really interesting challenge.
We don't know how to measure exponential technologies. If I were to say, hey, I'm twice as fast as you, you know, in a car, on foot, you'd be like, okay, I know how fast I am, times two. Got it. But if it said a hundred times? A thousand? A hundred thousand? A million? A billion? Hey, this new model is a trillion times faster than the one three years ago. We have no idea how to measure it. Right?
David Brandt: I don't wanna get too far off tangent or too far ahead, but that makes me think of the Iron Man movies; that makes me think of Jarvis.
Yeah.
And the scenes where Robert Downey Jr.'s just sitting there spawning off ideas and having Jarvis create 3D imagery and, you know, all these designs and everything.
And I sort of feel that my own personal usage of ChatGPT and Claude is a little bit like that. I like using it as a sounding board, as a brainstorming tool.
Have you found it to be
like that for yourself?
Yeah.
Do you find yourself maybe catching your own personal limitations on it, where you're like, you know what, this is as far as I want to go with the AI, I can take it from here?
Mike Courtney: Yeah, yeah.
And again, it really is to be viewed, I think, as more of a collaborator, a partner. But as with most people we collaborate with, it can't do everything we can do. You know, you tend not to have two people whose skill sets are matched so perfectly that they're exactly the same, and they go and partner on the same task.
You know, most of us in work situations work in a cross-functional team, where, hey, we all can do some of these things the same, but we all bring something else to the party.
And I think that's certainly true with AI, and the way you describe using it as a sounding board is a perfectly useful way to use it.
One of the other things I think is gonna come about in terms of speed: we're used to working with humans, and we'll give 'em a list of things to achieve. What if after every request, it's done? Oh, and find a list of this. Done. And then also calculate and figure out the ROI. Done.
It's gonna make management different, 'cause managers are used to saying, hey, I'm gonna give you a bunch of tasks that I think are gonna take you a couple of days or a week, which gives me a week to not have to worry about you. Then you'll come back in a week and you'll gimme an update. This is great: I get to spend a week while you do the work, waiting for you to come back so I can give you more tasks.
Mm-hmm.
Now the manager's gonna be like,
every task I give, there's the answer.
Now what?
So it's gonna be interesting to
see how quickly we can manage.
David Brandt: Back on the topic of how this is affecting industry: The Diplomat, reporting on Asian manufacturing, says that China is producing smartphones at one phone per second in a fully automated dark factory.
While Western manufacturers are still figuring out the basics of AI integration, how do you see the competitive dynamic reshaping between West and East? Not just with industrial technology, but with how human systems work around that.
Mike Courtney: And I do know of, you know, United States-based manufacturers that are running dark factories, so it's not just an Asian thing. And again, it's factories that do make that investment, 'cause it certainly costs a lot of money to set up a dark factory and automate things to that extent. I'm not surprised that Asia's interested. But
I'll be surprised if they transition
over as quickly as I think the US
will in terms of automating things.
Why?
Because they have a lot
more inexpensive labor.
Sure.
They can just throw labor at things.
We don't, so we have to use our
labor to do the things and manage the
process with human intelligence and
not just try to automate all of it.
Almost anything that's ever happened throughout humanity, any innovation, has created pros and cons.
And sometimes there are unintended
consequences that we don't realize.
I think one of the stories I told at the last conference that we were both at was the story of the woman who, two weeks before her wedding, got in a car wreck, a really small one, really minor, but enough to deploy the airbags.
And while airbags are a good thing, right? They're a good innovation. And she was also wearing sunglasses. The sunglasses protected from UV rays, but they didn't protect against shattering.
Mm-hmm.
And so two good things, airbags and sunglasses that protect from UV rays, turned into being blind in one eye for life. Two good things had an unintended consequence.
So we have to be aware of those things, and that's where the futurist in me really comes out, to help understand: all these good things are great, all these positive things are wonderful. Where's the hidden danger that we have to at least be aware of? Not to be a naysayer and say that I don't like good things or positive innovations, but let's just know that things that we don't want can also occur from those.
Absolutely.
David Brandt: So how do we balance the speed of AI problem-solving against new categories of failures? A little bit like that situation: for every good innovation you bring in, there's a negative consequence of an innovation that had good intentions but still had a flaw. What concerns you most about organizations racing to implement AI without fully understanding the potential for those kinds of failures?
Mike Courtney: Yeah, and here I would almost say, if we go back to the calculator days, when a calculator first became a thing, you say, okay, well, I'm gonna use a calculator, but I'm still good at doing things by hand, and I'm gonna double-check some of this just to make sure that calculator's doing the right thing.
We did the same thing with Excel.
You know, I'm not really sure the computer's gonna add it up right; let's double-check the math here just to make sure.
And that worked when you could double-check things. But say AI can do something and we believe it did it correctly; it looks right, but I haven't validated all of it. If it did something in seconds or even minutes that would take me hours, days, or weeks to validate, I might not validate it. Might not check. And then we're really in danger.
'cause we don't know, did
it really do it right?
Was it complete?
Did it make mistakes?
Did it mess something else up?
It did this part, but then it
kicked this other data over there.
Oh.
So I think what we're gonna quickly learn is that it's steel against steel, you know, diamond against diamond: if we're gonna use AI to do a task that's important and we wanna make sure it's done right, we as humans alone aren't gonna be the ones to validate it.
I need to have another AI, sort of the AI audit team, come in and say, hey, did this do that thing right? Did it do anything that might have looked a little weird, or did it do anything that I never did? You know, or is it about to do something that I've never done?
In fact, take the home scenario of, hey, cool the house, you know, it's too hot in here, make it cooler.
And maybe the double-check AI would say, well, the one thing Dave's never done is remove the roof. And so if the new AI is planning on doing that, we might say, hold it, he's never done that before. Maybe we ask him if he wants us to take the roof off to cool the house down. But what he has done before is change the temperature on that little thing over there on the wall. Okay, why don't we use the things he's done and just automate it for him?
We're gonna eventually have to
develop tools to monitor the tools.
Gotcha.
David Brandt: The watchdogs
over the watchdogs, I suppose.
Yeah, we'll be right back.
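Mike's "double-check AI" can be sketched as a guard that compares an automated agent's proposed action against the actions a human has actually been observed taking for the same goal, and escalates anything novel back to a person. This is a minimal, hypothetical sketch; the goal and action names are invented for illustration and aren't from any tool discussed in the episode.

```python
# A minimal sketch of the "double-check AI" idea: before an automated
# agent executes an action, a guard compares the proposed action against
# actions a human has actually been observed taking for that goal.
# The goal and action names below are hypothetical.

OBSERVED_HUMAN_ACTIONS = {
    "lower_temperature": {"adjust_thermostat", "open_window"},
}

def guard(goal: str, proposed_action: str) -> str:
    """Allow actions the human has done before; escalate novel ones."""
    familiar = OBSERVED_HUMAN_ACTIONS.get(goal, set())
    if proposed_action in familiar:
        return "allow"       # Dave has done this before; safe to automate
    return "ask_human"       # e.g. "remove_roof" was never observed

if __name__ == "__main__":
    for action in ("adjust_thermostat", "remove_roof"):
        print(action, "->", guard("lower_temperature", action))
```

The point of the sketch is the asymmetry: familiar actions run unattended, while anything outside the observed repertoire, like removing the roof, gets routed to a human before it executes at scale.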
This episode of Problem Solved is
sponsored by the Poirier Group, a
values-based boutique consulting
firm that delivers far-reaching, long-term improvements to support your growth, from strategy development and execution to supply chain, cost reduction, and customer experience.
Their team works side by side with
yours to create agile, customized
solutions that drive results.
With expertise across industries like
healthcare, manufacturing, private
equity, retail, and government, The
Poirier Group helps organizations
align their teams, accelerate momentum,
and achieve sustainable change.
Learn more at ThePoiriergroup.com
and unlock your organization's
full potential today.
David Brandt: Welcome back to Problem Solved.
Let's shift to jobs a little bit.
The World Economic Forum's Future of Jobs report came out earlier this year.
It predicts that 92 million jobs
will be displaced by 2030, but 170
million new jobs will be created.
That's just five years from now.
And I hate to remind myself of that, particularly 'cause I have a birthday coming up. But the report also claims that 39% of existing skills, skill sets rather, will become outdated in that same time period.
To see so much change potentially
occur in the next five years is
pretty daunting and a little scary.
How should ISEs or professionals
in other fields be thinking about
transformation and adaptation, not
just technically, but behaviorally?
Mike Courtney: So, it's a great question.
The first thing I think we have to
realize is that this is a much faster era
of change than we've ever experienced.
You can look back throughout time
and we've gone through tons of change
and figured it out along the way,
and tons of change that meant, oh, we
don't need that kind of job anymore.
But then other jobs came to be,
and even automobiles, right?
You know, we went from horse and buggy, and some of those jobs transferred. You know, the people that used to maybe care for the horses, now they're caring for the cars and maybe learning a little mechanical stuff, but they had more time to make those transitions than we will.
The other thing to know is that
it's sort of like entering a tunnel.
And you know that you're gonna lose jobs
in that tunnel and there's gonna be a
light at the end of the tunnel with new
jobs, but you can't see the light at the
end of the tunnel until you're in it.
Mm-hmm.
And humans being humans, we tend to grasp onto new things that seem useful, profitable, at a rate that maybe is a little bit too quick.
Even think back to, like, shipping, when somebody says, hey David, we could take a whole lot of cargo, put it in a really big wooden thing and send it off, and trade and make all sorts of money.
That sounds great.
Do we know if it works?
Well, we built a couple.
Let's send 'em out.
And then, you know, months or a year later: hey, how'd it work? Well, a bunch of 'em came back, and we lost a bunch of 'em. What do you mean, lost? You don't know where they are? No, they sunk; the cargo, the people, the money, all of it's gone. Oh wow, that's horrible. We lost how much? Yeah. Keep going.
So AI is gonna be the same.
People are gonna make mistakes,
it's gonna cost money.
And initially, some of those jobs, people say, oh, we don't need that anymore. They're later gonna realize, oh wait, why did that stop working? We fired that whole team. Oh, hold on, can we get some of 'em back? Can the AI do it? No, let's hire some of those folks back and let's course-correct.
So initially people were putting perhaps too much faith in AI, and then realized later, well, it did do a good job of the things we have to do, but it didn't do all the other things. Sure. Humans often do things that aren't necessarily documented but are still useful.
You know, when you check out at a restaurant, it's not just the task of making sure somebody paid their bill. It's thanking them, asking 'em how their food was, asking 'em how the kid is, you know, is Jimmy gonna play ball again this year?
It's having those connections that
cause that person to come back.
David Brandt: Well, and to that note, in an office environment or any employment situation, having interconnectivity, interpersonal relationships with the people you work with, I mean, that still carries value. What you just described, I don't think, is necessarily gonna be easily replaced by AI. But I'm wondering how much of that will sort of have weight against the advancement of AI.
Are there any kinds of behavioral aspects that you've seen in terms of data documentation, or any notions that you have in general, on just what it would take to devalue those interpersonal skills, those interpersonal relationships?
Mike Courtney: So I think what we're gonna find is that, you know, AI will be used for certain things that include the stuff that we didn't wanna do anyway.
Right.
For years and years and years, regardless of what job you had and what industry and what company, there were parts of your job where you were like, oh please, we gotta do that again? And tomorrow? There were parts that we secretly wished we'd never have to do again, ever. And now the horrible thing is, a lot of us are getting our wish. But we don't really know what's gonna replace it.
So it's like, yeah, I know I said I didn't wanna do those things anymore, but now I'm starting to worry: well, if it does all those things and I'm only left with these things, does that mean I have a job, or can we work something out?
There's some risk there, but we've seen in the research space, for example, that nobody ever liked going through, you know, a hundred pages of notes, of quotes and transcripts.
So the fact that AI can go and say, hey, I summarized it, it took me a minute and a half, great. And here's the basic summary, and we can look over its shoulder and go, hmm, it got it right, but it missed this and missed that. But it made our job easier.
And now we can spend more of our time discussing, thinking through, doing the critical thinking part of the role, which is more fun.
And that discovery is still ours.
We can still do it even if we didn't
have to do all the grunt work ourselves.
David Brandt: It sounds like it's really a matter of trade-offs. What trade-offs are we willing to make? What trade-offs do we wanna avoid?
Mike Courtney: Yeah.
I would add one modification: we don't always get to pick all the trade-offs. True. Because if your competitors are doing something that gives them an advantage in speed, cost, or quality using AI, it's gonna be pretty difficult to say, nah, I don't wanna make that trade-off.
Right.
David Brandt: Is there a preference
for collaboration over automation?
We've tapped into this a little bit
already, but is the idea of an AI
co-pilot more or less likely than
AI replacing humans in their job?
What's your advice for industrial
and systems engineers who are tasked
with leading AI implementation
in those organizations?
Mike Courtney: Yeah, so I mean, there's a phrase that you've probably heard more lately, human in the loop, and I think that's what we have to seek to establish: processes where AI helps do some of the mundane things that it's better suited for, but with a human in the loop to make sure that it didn't make a mistake and then replicate the mistake at scale, in a way that we're like, ooh, ouch, tough to recover from.
Again, the example of washing all the windows in the Empire State Building.
Mm-hmm.
It's great when it works.
Hey, look, wow, it works, you know? Now we're just gonna set it and forget it. And then a couple weeks later: what do you mean they broke all of 'em? No, well, stop it, go ahead and hit the stop button before it breaks any more. What do you mean it already broke all of 'em? Damn.
You know, and you don't need many of those big mistakes where the human wasn't in the loop to realize, hey, going forward, let's make sure there's a human in the loop, and that it is, you know, more of a collaboration arrangement.
There are things that aren't even making a mistake so much as not understanding nuance that we would understand.
Mm-hmm.
It doesn't understand sensitivities that, you know, might upset a customer, another employee, or someone else; again, the AI just doesn't know those things. And while you can teach it to be more empathetic and polite and compassionate, it doesn't necessarily have the same impact as a human expressing those concerns. Hey, I'm really sorry the delivery hasn't gotten to you yet, or, we messed up on the order, really sorry, we're gonna fix that: coming from a human, that tends to mean more than an AI saying, we are sorry, we didn't ship the right thing. We will send the right one. We are apologizing.
It's like, we know. It's like, give me a human, 'cause I want a human to really have sweated on your end of the equation, and I want that throat to choke, so to speak.
Right.
David Brandt: So given what we've discussed about the next five years and the potential change to come, what should ISEs and other professionals be preparing for that we're not talking about yet in the AI space?
What problems are we at risk
of creating that we don't
currently see on the horizon?
Sure.
Mike Courtney: So, a couple, because I've got a box full. One is that we've always tended to live in a world where we could truly look at the options we have, the solutions that are available to us. You know, invite the vendors in to do the dog and pony show and show off what they have, and then make an informed choice.
That's gonna become harder, because every time you're like, okay, we took a look at what's available and now we're gonna make a choice, it's, what do you mean something new is out? Okay, well, let's add that; we're just about ready to vote. Give that to me. Okay, now we're ready. What do you mean five more just showed up? Okay.
I mean, at what point do you cut it off and say, okay, I'm gonna analyze for a certain period of time, then I'm gonna make a decision, knowing full well that maybe in a month, or six, or a year or two, I'm gonna look at what I've chosen going, it sounded good at the time, but boy, they've far surpassed it with some other thing.
So there's gonna be just a deluge of options and choices.
I tell people these days: imagine the things that you're frustrated with at work or as part of a process. As soon as you identify something you don't like or wish were better, instantly believe there is a solution, that AI's already solved it, you just haven't found it yet. And then you go to the equivalent of, you know, Home Depot for AI tools, and know that you'll never be able to walk the entire place, because they just keep adding more shelves, more aisles, and more stuff.
Sure.
You just have to say, okay, I'm gonna give myself this much time to find a solution to the problem that I have, and know that either it does exist and I just didn't know it, or the things they do already have on the shelf can probably be cobbled together to do what I want.
Does that make any sense?
David Brandt: No, no, absolutely.
It just sounds like decision fatigue.
Yeah.
And it's easier for our brains to
just hand it over to the AI and just
have it make that decision for us.
Mike Courtney: Yeah.
No, and in some cases it will be; there are some decisions that I really don't need to make.
Right.
You know, if I'm trying to say, hey, pick the cheapest electricity rate for my usage at my home, just do it. Yeah. I mean, it's gonna be able to run everything. Oh wait, no, these get you, because there's a delivery charge; they don't put that in the marketing. And it's gonna be able to do all the math and go, yep, this is the one.
Great.
I don't need to oversee it.
Right.
Absolutely.
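The rate-shopping math Mike describes is straightforward to sketch: total cost is usage times the advertised rate plus any delivery charge, and the lowest advertised rate isn't always the lowest total. A minimal illustration with made-up plan numbers; the plan names, rates, and fees are all hypothetical.

```python
# Hypothetical illustration of the electricity-rate example: the plan
# with the lowest advertised rate isn't always cheapest once the
# delivery charge (often left out of the marketing) is included.

plans = [
    {"name": "Plan A", "rate_per_kwh": 0.09, "monthly_delivery_fee": 40.0},
    {"name": "Plan B", "rate_per_kwh": 0.12, "monthly_delivery_fee": 5.0},
]

def monthly_cost(plan: dict, usage_kwh: float) -> float:
    """Total monthly cost: energy charge plus the fixed delivery fee."""
    return plan["rate_per_kwh"] * usage_kwh + plan["monthly_delivery_fee"]

usage = 900  # kWh per month, made up for the example

for p in plans:
    print(f"{p['name']}: ${monthly_cost(p, usage):.2f}")

best = min(plans, key=lambda p: monthly_cost(p, usage))
print("Cheapest overall:", best["name"])  # Plan B wins despite the higher rate
```

At 900 kWh, Plan A costs $121.00 and Plan B costs $113.00, so the higher-rate plan is actually cheaper, which is exactly the kind of "do all the math" check Mike is happy to hand off.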
David Brandt: I think every trade-off has a different weight to it, you know, a different value to it. And I think maybe that's the practice we have to get into as human beings: really making the decision about what is it about this particular task or this particular repetition that I either want to keep doing myself, or what is it that makes me want to give it over to the AI.
And I think those are harder questions people need to start getting accustomed to asking themselves as this technology evolves.
Mike Courtney: To build on that, though, I would say that we will pretty soon have AIs of a sort that we authorize to observe, like, our work.
Mm-hmm.
And that AI's only purpose will be to say, what's he doing?
How's he doing it?
Oh, interesting.
And keep monitoring all
the things we can't.
David Brandt: Let's end here
with a couple quick ones.
If you were talking to an engineer
tomorrow who's being asked to lead an
AI initiative in their organization or company, what's the one piece
of behavioral insight you'd want
them to understand before they start?
What is it that they should
be primarily looking out for?
Mike Courtney: So, in terms of behavioral: behavioral makes me think about how others will, you know, view what's happening, and, you know, should they be afraid, or will they be afraid?
And I think the number one thing we have to wrap our brains around is that, because of the speed of change, because of the power that we've already seen AI has, even if I say, well, don't worry, it's not human, right? It can pretend to be human, it can be empathetic and do all the things it can be taught to do to mimic a human. It's not.
And so therefore, look at all the things that you can do as a human that it can't do. And maybe it eventually will start to sneak over into my, you know, side of the room, so to speak, and start taking tasks away that we didn't think it could do.
So I think we just have to reassure
people that there's always gonna
be room for humans in the loop.
And there will be new jobs
that we can't even imagine now.
'cause we're only entering the AI tunnel now, in AI Mountain.
There's gonna be a light at the
end of that tunnel, and there's
gonna be all sorts of new jobs that we never imagined going in.
Mm-hmm.
And it's a little unsettling
that we don't have the new job
before we give up the old job.
You sort of have to have a little bit of,
you know, faith that it's gonna happen.
But it's the same kind of faith that we
now have when it comes to like traveling.
If I said, hey Dave, we're gonna go to, pick a big city, Chicago or San Fran or, you know, Detroit or something, we wouldn't have to say, well, I'm going someplace I've never been, we'll have to pack enough food and everything to survive the trip, and fuel and medicine just in case. No, we're not settlers from, like, you know, the early 1800s. It's not the...
David Brandt: Oregon Trail, I get it.
Mike Courtney: No, it's not. So we got to the point where, at least in the US, going to major cities, we assume, well, you know, if I forgot extra socks, I can buy 'em; if we forgot X, Y, Z, they'll probably have that there.
We don't yet understand that AI is gonna be the same thing: we don't have to spend 23 hours a day trying to learn about AI and never, ever catch up, 'cause you won't. You do have to spend enough time to be conversant and leverage it at least as well as the next guy, right?
So one of the things I say is: we're going to go on this journey. It doesn't matter whether you wanted to or not, and just know that you're an important part of the journey.
'cause technology is what technology is.
It's meant to improve humans.
It's made by humans to do the things
that other humans think we want done.
And increasingly it's gonna do the things that we tell it to do, because it's gonna be a general-purpose AI that can do anything we ask it to do, you know, within reason.
So it's there to help us, but we have to learn how to use it.
And the more we sit back and go, nope, I'm afraid, I'm not gonna do it, you know, that's the dangerous situation. If you really think you can resist it and not lean into it, that's gonna be challenging. That's gonna be a hard place to be, because everybody else is gonna get the benefit of it. Right?
But if you don't have another AI or systems in place to watch over it, you might wake up one day and it will, at scale, at amazing speed, for not much cost, have totally destroyed everything you have.
David Brandt: So to that end, finally, our last question. Given the speed and scale that we've discussed, are you optimistic? Are you pessimistic? Where do you fall on the prediction scale of AI?
Yes.
Mike Courtney: I'm at both ends of the scale. I see all the promise and potential, and I see what it's gonna be able to do for people that really need help.
Mm-hmm.
I mean, imagine AI being able to, you know, assist human doctors in diagnosing and treating things that otherwise would be tough to treat or tough to diagnose, but also to provide a level of care that maybe some people couldn't afford, or who live in a place where, you know, they never would've had access to it.
So I think it's gonna really help. And to the extent of, like when I said having it choose the best power, you know, maybe we don't select power once a year. Maybe I get to select it once an hour, and AI can say, oh, boom, boom, boom, I can arbitrage. And hey, we're gonna hold off on the dishwasher 'cause prices are whatever, but I'll start it tonight. Don't worry, dishes will be done by morning.
So that's great.
But again, we have to prepare for a world that's gonna be faster than we ever thought and do great things, but has the potential to really mess things up at scale in a big way.
David Brandt: Yeah.
Mike Courtney: And we've seen, not AI doing it, but we've seen, hey, what happens when, you know, everybody's IDs and passwords for something like Netflix are all on this one server? Oops, they got a hold of all of it. Really? Oh. You know, I mean, AI's gonna be able to make those mistakes, like, not even breathing hard, you know?
So we just have to become more resilient and more prepared, to make sure that there's humans along the way to say, okay, let's double-check. Let's run a small batch. Did that go okay? Now let's run a bigger batch. We're gonna have to do it in stages.
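The run-a-small-batch-then-a-bigger-batch idea is essentially a staged rollout with a human approval gate between stages. Here is a minimal sketch, with hypothetical batch sizes and a stand-in task; nothing here comes from a specific system discussed in the episode.

```python
# A minimal sketch of the staged, human-in-the-loop rollout Mike
# describes: run a small batch, have a human confirm the results,
# then scale up. The batch sizes and task function are hypothetical.

def wash_windows(batch: list[int]) -> list[int]:
    """Stand-in for the automated task; returns the windows it handled."""
    return batch

def human_approves(results: list[int]) -> bool:
    """Stand-in for the human gate; a real one would pause for review."""
    answer = input(f"Processed {len(results)} windows. Continue? [y/N] ")
    return answer.strip().lower() == "y"

windows = list(range(1, 101))                        # 100 windows to process
stages = [windows[:2], windows[2:10], windows[10:]]  # 2, then 8, then the rest

for stage in stages:
    results = wash_windows(stage)
    if not human_approves(results):
        print("Stopping before the mistake replicates at scale.")
        break
else:
    print("All stages approved and completed.")
```

The design choice is that the blast radius of any one mistake is capped by the current stage size: the automation only earns the bigger batch after a human has looked at the smaller one.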
David Brandt: Well, hang on, everybody, I guess, is all we can say.
Mike Courtney, founder of Aperio
Insights, ethnographer and futurist.
Mike, we greatly appreciate your insights
and certainly appreciate your time.
Mike Courtney: Enjoyed it.
Thank you so much.
Announcer: A huge thank you to
Mike Courtney of Aperio Insights
for sharing his wisdom on how we
can navigate AI's rapid evolution
with purpose and foresight.
A special thanks to our episode
sponsor, The Poirier Group.
Boutique consultants unlocking organizational potential through results-driven solutions.
Learn more at ThePoiriergroup.com.
For more conversations like
this, follow Problem Solved
wherever you get your podcasts.
And connect with us on LinkedIn,
Instagram, and at iise.org.
This has been Problem
Solved: The IISE Podcast.
Every great solution is
a story worth telling.
