Large Language Models: How Far We've Come with Dr. Joe Wilck

Download MP3

FlexSim Autodesk: For more than
two decades, industrial and systems

engineers worldwide have trusted
FlexSim to help them analyze and improve

the important systems they work on.

Now that FlexSim is part of Autodesk,
our commitment remains stronger than ever

to provide a powerful and easy-to-use
simulation modeling and analysis solution.

But we're also building something better.

A digital thread across your
organization where connected data

unlocks new workflows, enables
collaboration and allows you to

leverage data from across your
organization to simulate and optimize.

Autodesk is here to help systems
design and improvement be more

impactful at your organization.

Elizabeth Grimes: Here's the problem.

Large language models are
evolving so fast that even the

experts have trouble keeping up.

In less than a thousand days, generative
AI has reshaped how we search, code,

learn, and work, all while introducing
new questions and responsibilities.

So how do engineers, educators,
and everyday users make sense

of this exponential pace?

How do we prepare students when
the tools they're learning with

can also outsmart the exams?

In this episode of Problem Solved,
IISE's David Brandt talks with Dr.

Joe Wilck of Bucknell University,
who returns to share what's changed

and what hasn't in the first thousand
days of large language models.

Together they unpack the realities
behind the hype, the challenges facing

educators, and the opportunities ahead for
engineers navigating LLMs' next frontier.

This is Problem Solved.

David Brandt: Joe Wilck.

Welcome back to the podcast.

Joe Wilck: Thank you.

Glad to be here.

David Brandt: We spoke, what was
that, fall of '23, I want to guess,

on, in what turned out to be an
award-winning podcast interview.

I just like to point that out.

Oh,

that's great.

But, we talked then about
the state of generative AI.

And that was about a year after
the public release of ChatGPT.

Our initial interview, we
opened up with a suggestion that

many people had taken to using
ChatGPT within that first year.

I gave a presentation recently at
the annual conference in Atlanta.

It revealed to me that I may have been
very wrong about that estimate, as so

many people in the room seemed to be...

I don't wanna say clueless,
but certainly inexperienced.

So what's your assessment today
about the widespread knowledge about

ChatGPT, Claude, other generative
AI tools, and is this tech still

new to the public in general?

Joe Wilck: So I think there's actually
a lot of misconceptions going on here.

I remember when ChatGPT first
came out, it was publicized that

it was like the fastest growing
technology ever in the history of

mankind, even faster than the iPhone.

And I, I think two things
can be true at the same time.

I think yes, it could have been
widely used and adopted, kind of

at a first pass: like, people tried it out,
and then some people might have tried it

a couple of times and said, okay, I know
what it is, and then just left it alone.

In terms of what's going on right now,

I think where we're seeing it is
we're seeing it being embedded

in a lot of other products.

Mm-hmm.

so for example, if you use a
search engine, very likely you're

gonna see some results that
were generated by generative AI.

But if you're just kind of a naive
user or just a casual user of a search

engine, you may not realize that
something was generated by generative

AI amongst all the other links that
it provides you in the search engine.

so I think some of that is happening
kind of behind the scenes and

people just are not aware of it.

Sure.

I also think that, for people
like me, you know, I have an account

with ChatGPT through OpenAI.

I have to log in to use it.

So like I have to go through that step.

So like I'm very knowledgeable,
like, yes, I'm now entering into

this realm, entering into this
software program, but for a lot of

people, they may kind of have that mark
of delineation and say, unless I've

logged in, I'm not actually like an
active user, I'm just a passive user.

Mm-hmm.

I think sometimes people might
not realize what's going on in the

world around them, whether it's
the browsers or the chat bots.

Sure.

David Brandt: Well, and I
think there's also still

a confusion about, we use the
term AI kind of flippantly.

And I think there's certainly
a wide degree of difference between

something like Siri or Alexa, where
there's a bit of pre-programming going on.

They have a set of rules.

There's a set of data that
they work off of, whereas generative

AI very much is a creation tool
based on a much wider set,

and in terms of how it
gives output.

It's so radically different
from just asking Siri for the

weather forecast, you know?

Joe Wilck: Right.

And I like to tell people, I'll use
two terms that are very

overused in industrial engineering.

But some AI is just very discrete.

Mm-hmm.

Okay.

And what I mean by that is you
ask it a question and it's gonna

give you the same answer every
time, unless there's some kind

of exception.

so for example, if you use GPS navigation
on your phone, it's very likely gonna

give you the same directions from point
A to point B every time, unless there

was like a car accident or something.

That's a more discrete example of
AI, but that's not generative AI.

That's just, you know, a computerized
version of what we used to use maps for.

Right.

Okay.

A more stochastic or, you know,
probabilistic one is the generative AI,

and in some cases it's very casual.

so you mentioned, Siri
or Alexa or something.

you know, if you say Good morning
Alexa, it'll respond back to you.

It won't always say the same thing, but
it'll be in the same family of responses.

Mm-hmm.

And that's the probabilistic nature of it.

It's still in the same realm of an
adequate response, similar to how a

human would, you know, converse with you.
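Joe's distinction can be pictured with a few lines of code. This is an illustrative toy, not any real product's logic; the route table and greeting list are invented:

```python
import random

# Discrete/deterministic AI, like the GPS navigation example:
# the same query returns the same answer every time.
ROUTES = {("A", "B"): "Take Main St, then Route 15 North"}

def route(start, end):
    return ROUTES[(start, end)]

# Probabilistic/generative AI, like the "good morning" example:
# the reply is sampled, so it varies run to run, but it stays
# within the same family of adequate responses.
GREETINGS = [
    "Good morning! Hope you slept well.",
    "Morning! It's going to be a great day.",
    "Good morning to you too!",
]

def greet():
    return random.choice(GREETINGS)

print(route("A", "B"))  # identical on every call
print(greet())          # varies call to call, but always on topic
```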

David Brandt: We're talking
on August 6th right now.

That's when we've recorded this, and
we're about 20 days shy of the 1000th

day since ChatGPT's public release.

The first years of large language
models, ChatGPT, Claude, Gemini,

highlight extraordinary change.

What are some of the biggest changes
you've realized in how we use large

language models or the applications
from others that you've witnessed?

Joe Wilck: So, two years ago when
we talked, I, I kind of predicted

that we would see two things happen.

One thing that we would see happen
would be bigger, better models.

You know, just more data that
they use to build the models.

so think kind of like a Jeopardy,
you know, could-win-at-any-category

type of model.

Mm-hmm.

And those models have been built, all
the key players have built their version

of that bigger, better, best model.

Now the other thing that I
predicted, though, was that

some organizations, particularly if they
had the capital, would build subject

matter experts, and so they would be very
knowledgeable in a very niche field, and

then they could use that to sell
to customers or to use it internally.

Or what have you.

And we've seen both of
those things happen.

Where I did not foresee what would
happen over the last thousand days

is having the different modalities.

Mm-hmm.

so you have these multimodal models and
so you have like image generation or

video generation or even audio generation,
and then having those coordinate with

the large language model that's more
text generation, and so what I didn't

foresee is how all of those would start
working together and progressing forward.

Right now where I've seen the
biggest strides is where you're

finding these very particular models
or methods in these niche fields.

and they're actually making those
industries either better, safer, more

efficient, et cetera, whatever
their objective is, right?

to give you an example, in
healthcare there's a website

called OpenEvidence.com.

And what they have done is they
have gone to all of the scholastic

journals in medicine and they've gotten
rights to search all these journals,

and they've basically created
a subject matter expert

on different health ailments.

Okay?

And so if you can imagine if you're a
nurse or a doctor and you have a patient,

you know, you now believe that they have
a certain ailment, you can go to open

evidence and you can type in the situation
and it'll give you papers and articles

and actually give you direct links.

But the other big benefit is you
can also use a persona with it.

So you could say, okay,
my patient is a teenager.

Please give me some pointers for how
to communicate this to a teenager.

Please give some pointers for how
to communicate to their parents.

Okay?

And so, the medical professionals can
use this tool not only to get facts

to provide to the patients, but
also to kind of curb the

conversation in a way that will help
them communicate to the patient.

David Brandt: That's
incredibly fascinating.

Elizabeth Grimes: Problem
solved will be right back.

David Brandt: Are you looking to sharpen
your skills, boost productivity, or

take your career to the next level?

The IISE Training Center offers
world-class professional development

built for busy engineers,
managers, and problem solvers.

Whether you're diving into analytics,
earning a Lean Six Sigma certification,

or exploring supply chain logistics,
our expert-led courses deliver practical
tools with real results. Learn online or

in person with options tailored to every
career stage. Plus, our training works.

Past participants have applied their
knowledge to save their companies over

$250 million in real-world projects,
and 90% of Lean Six Sigma candidates

pass their exam on the first try.

Visit iise.org/training to explore
the full course catalog and start

transforming your career today.

Elizabeth Grimes: Welcome
back to Problem Solved.

David Brandt: You guys though, you and
your fellow researchers, two years ago,

you were running experiments yourselves
on the few LLMs that existed at the time.

You had each one take, I think three
tests and a final exam to compare to

the results of your human students.

and the questions I think were exercises
and not multiple choice, if I recall.

Have you since rerun that experiment,
and if so, where are you seeing

improvements or continued failure
by the large language models?

Joe Wilck: Yeah, so most of the
exercises were freeform, but

there was some multiple choice.

Okay.

We have rerun the experiment.

The large language models
have gotten better.

So as an educator, that
is scary, because you're

gonna have to continuously make
your tests harder and different.

So honestly, my suggestions
to folks are very similar to

what they were two years ago.

Mm-hmm.

If you're giving a test where students
have the capability of using a

computer, you know, connected to the
internet or their phone or whatever,

you need to avoid multiple choice.

and you need to kind of think
through what are the steps

that you're gonna be looking for.

Now, on the reverse side of that,
though: it still doesn't always

get the answer correct.

and where it fails is, so far we have
large language models, and as you said

earlier, they're very good at
trying to predict patterns. Well, numbers,

particularly quantitative questions:
it's not necessarily a pattern,

it's just a calculation.

It's a fact.

Yeah.

Okay.

So you take this number and add this
number or multiply this number, you

should get the same answer every time.

And so where we do still see
problems is when you have these

multi-step, quantitative or
mathematical type questions.

So you still need to work with that.

One of the things that
I would mention too is this:

What we have seen differently in the
last couple of years with respect

to these types of questions though,
is students as well as faculty,

we've gotten better at prompting.

Mm-hmm.

Okay.

And so students, if they really are
using these, and they're actually trying

to use 'em to get the answer, they can
perhaps ask better prompts, and

then it'll lead to a better result.

or they can do what I would call a
chain-of-thought exercise, where basically

you're asking the large language model to
show its work, and then if you go in and

you try to verify the calculation, you can
see whether it got it right or wrong.

But even if it got it wrong, you at
least understand the concept.

You're like, oh, it's trying to do this.

It just didn't do whatever
that was correctly.

Correct.

And so the students have gotten
better at prompting and oversight.
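The "show its work" oversight Joe describes can even be done mechanically. A hypothetical sketch: the model reply below is invented text, not output from any real API, and the checker only handles simple "a op b = c" lines:

```python
import re

# Hypothetical reply from a chain-of-thought style prompt
# ("please show each calculation step").
model_reply = """
Step 1: 12 * 7 = 84
Step 2: 84 + 19 = 103
"""

def verify_steps(reply):
    """Re-check every 'a op b = c' calculation the model claims."""
    checks = []
    for a, op, b, c in re.findall(r"(\d+) ([*+-]) (\d+) = (\d+)", reply):
        a, b, c = int(a), int(b), int(c)
        actual = {"*": a * b, "+": a + b, "-": a - b}[op]
        checks.append((f"{a} {op} {b}", actual == c))
    return checks

for step, ok in verify_steps(model_reply):
    print(step, "OK" if ok else "WRONG")
```

Even when a step fails the check, the listed steps still reveal what the model was attempting, which is exactly the point Joe makes.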

David Brandt: You know what's funny
is, getting it to explain its

reasoning is where I've found, I guess,
a better sense of discovering

the flaws or the hallucinations,
when it really does spell out its work.

Mm-hmm.

And that is where I do have
built-in skepticism about

the ability of this tool.

But having said that, had I not
specifically asked for its reasoning,

I feel I just would've been naive.

I would, I wouldn't have considered
that possibility, but when I

see it explained out, yeah, I'm
like, oh no, that's not accurate.

And trying to verify where its sourcing
is from, you know, I just find that to be,

still, I guess, a big
reveal in all of this.

Joe Wilck: I agree.

I think one of the things that
students, as well as faculty and

others that use these tools,
need to remember is that, number

one, they're conversational agents.

Mm-hmm.

So they were designed
to have conversation.

And typically, at least, maybe I'm
amongst friends here, but typically

you're not gonna have like conversations
about mathematical formulas and

calculations, you know, directly.

David Brandt: It doesn't make for
the best dinner conversation.

I can say that.

Right, right.

Joe Wilck: But the other thing to
keep in mind is these models were

designed with an immense amount of
data, and they were designed for, the

term is, few-shot learning.

Mm-hmm.

Okay.

And so if you dive into some of the
research papers about few-shot learning,

what they're really getting at is, with
only one or two examples, can it pick

up a pattern and then understand the
pattern and articulate an answer,

just off of one or two examples.

Mm-hmm.

And that's what we're seeing.

but in some cases, particularly
early on with LLMs, now that they've

built bigger and better ones,
you're gonna see less of this.

Some of it was zero shot learning.

Like they had no example in their
corpus, but they were just going based

on what they could pull from other
things that were in their dataset.

Sure.

So another way to really use a tool
like a large language model is you give

it an example in your prompt and then
tell it, okay, here's what's changed;

can you rerun it under
these new conditions?
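That "here's an example, now here's what's changed" pattern is the few-shot idea in practice. A sketch of the prompt text only; the worked example and wording are invented, and any LLM client could send the resulting string:

```python
# Few-shot prompting sketch: one worked example, then a change.
EXAMPLE = (
    "Example: a queue with 2 servers, arrivals at 10/hr, each server "
    "handling 6/hr -> utilization = 10 / (2 * 6), about 0.83."
)

def few_shot_prompt(example, change):
    # The example anchors the pattern; the change asks for a rerun
    # under new conditions, with the work shown for verification.
    return (
        f"{example}\n"
        f"Okay, here's what's changed: {change}\n"
        "Can you rerun the calculation under these new conditions, "
        "showing each step?"
    )

prompt = few_shot_prompt(EXAMPLE, "we now have 3 servers")
print(prompt)
```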

David Brandt: To that end, you had
used Python, I believe, early on.

Has that still been the case for you, or do
you largely operate with English prompts?

That sounds strange to say, but basically,
are you still using more math and coding

or are you just like maybe a more of a
general user might be using these tools?

Just giving it an English command
and waiting for, you know,

hopefully the right response.

Joe Wilck: I use it generally, but
I also use it for code as well.

And what I said two years ago, to
kind of be predictive and provocative,

I said that English was gonna be
the new programming language.

Sure.

because people were gonna basically just
articulate what they wanted in English

and then the LLM would generate code.

Now, a couple of things
made that possible.

First, in their dataset, or in their
corpus, they have a lot of Python code.

David Brandt: Mm-hmm.

Joe Wilck: And so they've, they've put
in Python textbooks and code, et cetera.

I believe Microsoft also, under certain
conditions, was able to get

access to a lot of GitHub repositories.

So they were able to then flood
their corpus with that code.

but with that all being said,

I think to actually get the
biggest benefit, you need to understand

some nuances of programming languages.

It doesn't necessarily have to be Python.

Mm-hmm.

But you need to understand like
say the difference between a

conditional statement and a loop.

Right.

And you know, what's the difference
between a for loop and a while loop?

And under what circumstances would you
prefer to use one versus the other?
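For readers following along, the for/while distinction Joe wants students to bring to the prompt looks like this; the examples are invented for illustration:

```python
def total_cost(prices):
    # for loop: the number of iterations is known up front
    # (one pass per item in the list).
    total = 0.0
    for p in prices:
        total += p
    return total

def doublings_to_reach(start, target):
    # while loop: repeat until a condition flips, without
    # knowing beforehand how many iterations that will take.
    count, value = 0, start
    while value < target:
        value *= 2
        count += 1
    return count

print(total_cost([2.50, 4.00, 1.25]))  # 7.75
print(doublings_to_reach(1, 100))      # 7, since 2**7 = 128 >= 100
```

Knowing which of the two to ask for is exactly the kind of vocabulary that makes the LLM's syntax help useful.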

So I think it's still important
from a scholastic standpoint

that students understand

kind of these terms and what they
mean, and, you know, a few examples

just to kind of get 'em started.

but with that being said, where
the large language model could

be very useful is in the syntax.

So you can say something like, I need
a conditional statement to do X and

Y and maybe Z, and then it can provide
the code that does that for you.
the code that does that for you.

But you, as the user,
needed to understand that

the key thing to say was, I
need a conditional statement.
Right?

So I think the best way to use
tools like this for coding is to not

think of your code as something
that needs to be done all at once.

But think about it in terms of,
you know, different chunks, if you

will, or different parts of it.

and then, you know, using it
in an iterative fashion to

build your chunks of code

that hopefully you can,
you know, piece together.

the one side comment to that is also
this concept of vibe coding, right?

and that concept kind of came about
earlier this year, I believe in

like January or February of 2025.

David Brandt: Sounds right.

Joe Wilck: And that is more for
software developers, because vibe

coding at its core is you are changing
the code that's on your screen.
But it impacts code in other
areas of your overall program.

And the LLM is smart enough to realize
if you make a change here, we need

to automatically make changes here.

Right.

And to be honest, I'm
typically teaching students that

usually only need to worry
about what's on their screen.

They don't need to worry about
how it pieces together just yet.

And so I think for me in
teaching, I'm mostly just

focused on what's on my screen.

Right?

The vibe coding is more
for software development.

David Brandt: Gotcha.

Is there a large language model
or generative AI tool that has

since become your default instead
of the traditional Google search?

Because for the last 20, 25
years, we've been Googling things.

Joe Wilck: So, my
Gmail account, I had to be invited

to it, you know; that was back
when it was invitation-only.

Yeah.

And so I have a Gmail account, and
so I'm a big Google user and

Chrome user, and so for various
search things, I'm still in Chrome.

David Brandt: Mm-hmm.

Joe Wilck: And what I've learned, and,
and I, I mentioned this earlier, is when

I search for something, immediately,
there's that generative AI where

it's trying to answer my question

David Brandt: right.

Joe Wilck: And in some cases I'll
just scroll right past that because

I'm, I'm actually looking for a link.

David Brandt: Mm-hmm.

Joe Wilck: So for me, for basic stuff,
I'm still using a search engine.

and it wouldn't just be Google;
I'm sure Bing, if you're a Microsoft

user, does the same thing.

So it, it's not just Google.

I'm sure other search engines
have a generative AI kind

of built in now as well.

In terms of the large language models,
the two that I primarily use: I have my

own personal account still with OpenAI,
so ChatGPT, so I still have that.

And then my university,
Bucknell University,

we are a Google school, and so we have
Google NotebookLM, and I'll be honest,

I don't use it as much as I use ChatGPT.

Mm-hmm.

but Google NotebookLM, for
me, has been very useful.

For one key thing, when I'm trying to
put together some stuff where I have

multiple files: with our account, again
through my university, we have a developer

account and I can upload, I believe it's
up to 50 documents into the notebook,

and then I can basically restrict

it to searching just in those 50
documents, and it can reference me

to the page number or, you know,
it can piece things together.

And that's very beneficial because,
instead of searching the entire

internet or the entire repository
of something, I can kind of restrict

it to a certain set of documents,
but then still have it search.

So it's basically, it would be
very good in terms of, like, a literature

review, or, you know, if you're trying to
reference across multiple students' work.

So that's been very beneficial.
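The restricted-corpus search Joe values can be pictured with a toy: search only an uploaded document set and report the source and page for each hit. The documents and the matching logic here are invented stand-ins, far simpler than what NotebookLM actually does:

```python
# Toy corpus: filename -> {page number: page text}. Invented data.
docs = {
    "lit_review.pdf": {1: "queueing theory background", 2: "simulation methods"},
    "student_a.pdf": {1: "a simulation study of a clinic"},
}

def search(corpus, term):
    """Return (document, page) pairs mentioning the term:
    nothing outside the uploaded set is ever consulted."""
    hits = []
    for name, pages in corpus.items():
        for page, text in sorted(pages.items()):
            if term in text:
                hits.append((name, page))
    return hits

print(search(docs, "simulation"))  # [('lit_review.pdf', 2), ('student_a.pdf', 1)]
```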

So those are the two large language
models that I use most frequently:

OpenAI's ChatGPT and then
Google NotebookLM.

David Brandt: I have to be
honest, I had to contain myself

when you mentioned NotebookLM.

I love NotebookLM.

Honestly, I was surprised.

I've, I've come across people
over the last year and a half.

I mentioned NotebookLM to 'em,
and they go, oh, what's that?

I'm just like, oh, everybody,
please, let's catch up.

Because NotebookLM is an amazing,
amazing tool for that very reason.

So, I'm also a big Claude user.

Mm-hmm.

Claude is very good for
communications work: writing,

editing, proofreading especially.

and I also use Gamma for presentations.

That's my personal AI
toolkit, as it were, today.

You've talked a little
bit about students and how you've

worked with them on generative AI.

At the annual conference, I'd given that
presentation, and a professor had asked me

directly about how to prevent the students
from using this tool to, you know,

cheat, let's just flat-out call it that,

but basically to use it beyond
the parameters of what they

may be assigned to use it for.

And I was honest in saying at the
time I didn't have a solution for

'em, but I felt that the onus is
ultimately on the university, or

academia as a collective, to provide
guidance on the ethical and appropriate

applications for generative AI.

You've previously said that it's
critical to teach students these tools,

though you stress there's
a difference between

what they may do for an English class
assignment versus what they might do for

a class relative to industrial engineering.

Have your views about students
and generative AI evolved in the

past couple of years, or have they
largely stayed in that same realm?

Joe Wilck: So, to use an analogy,
I like to think of, you know, red

light, yellow light, green light.

And if you think about, in, you know,
your baseline industrial engineering

courses, I would recommend kind of
having a yellow light mentality.

So, you know, you give students
boundaries; you give them, you know, this

assignment, you can use generative AI

however you want; this assignment,
you cannot; or you have

to cite it appropriately, or
something of that regard.

I think back to when I was in
college; there were certain classes

that were mathematical, but
there would be strict rules.
there would be strict rules.

They would say, okay, on this test
we're not gonna allow you to use a

calculator, or for this assignment we're
not gonna allow you to use a computer.

But for other assignments, you were
allowed to use some of those tools.

And so I think with generative AI,
when you're teaching it,

you can have similar restrictions.

I personally have trouble
thinking of a course that should

be completely green light.

Everything's open all the time
without any type of citation.

Like, I don't think
we're there yet.

I also don't think for at least mainstream
industrial engineering courses, we should

be in the red light either where you
completely restrict it all the time.

I think we should be kinda
operating in that yellow light

mentality. Now, for other courses

outside of industrial engineering,
they may be in the pure green light

or the pure red light mentality, but
I'm speaking specifically to your
I'm, I'm speaking specifically to your

industrial engineering courses here, and
I think this is also true even for big

universities that have, you know, hundreds
of students in a class or section.

And they might be thinking, well,
how do I give a paper-based test?

Or how do I have such-and-such
activity without the students

being able to use a computer?

there are ways to do it.

We were able to teach students, with
hundreds in the class, before calculators

and, you know, computers and such.

So there are ways to do it.

You know, maybe you cut off the internet,
for example, or something like that.

But there are ways to kind
of handle that situation.

So I'm in the same realm, but I, I do
have some comments that would be useful

to a faculty member, for example.

So just my stump speech again: you know,
in an engineering discipline, I believe

our students very likely are gonna go into
data-driven, engineering-driven careers.

So they're gonna need to know how
to work with gen AI to make their

jobs more efficient and effective.

However, also just as important
is the critical thinking.

And so a lot of courses at the
university level teach critical

thinking, but not necessarily from an
engineering or data-driven context.

So think like an English
composition class.

Mm-hmm.

So what my colleagues in an
English composition class could do

is, I guess, make it so
they don't use a computer, and

that's gonna be really tough
because class sizes are really big,

and, you know, how do you grade
handwritten papers, right?

But one of the things that I've
heard of some of my colleagues doing

requires the students to use
particular software such as Google

Docs instead of Microsoft Word.

they are requiring the
students to use a Google Doc.

And they put in a keystroke counter.

And so if you had to write
a thousand words, mm-hmm,

including editing
and deleting, and, you know,

you're looking at several
thousand keystrokes.

Sure.

Absolutely.

Okay.

And so what professors are doing in
these fields is they're using tools to

measure the effort or measure the output.

Again, it's still possible.

Someone used ChatGPT to write
the paper and then they typed

it looking at another screen.

Sure.

But you can now at least
see, did they copy and paste it

from another source and it took
10 seconds, versus, say, three hours.
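The keystroke heuristic Joe describes could be sketched like this; the threshold is an assumption for illustration, and Google Docs doesn't expose keystroke counts natively, so real classroom setups rely on add-ons or revision-history tools:

```python
# Assumed heuristic: a genuinely typed-and-edited paper takes at
# least a few keystrokes per final word; a wholesale paste does not.
MIN_KEYSTROKES_PER_WORD = 4  # invented threshold for illustration

def looks_pasted(word_count, keystrokes):
    """Flag a submission whose keystroke count is implausibly low
    for its word count (e.g., a single copy-and-paste)."""
    return keystrokes < word_count * MIN_KEYSTROKES_PER_WORD

print(looks_pasted(1000, 6200))  # typed and revised normally -> False
print(looks_pasted(1000, 40))    # pasted, then lightly tweaked -> True
```

As Joe notes, this measures effort rather than authorship: it catches the ten-second paste, not someone retyping from a second screen.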

Right,

David Brandt: right.

Absolutely.

Okay.

You've also mentioned that you
had talked to a number of teachers

yourself from other sectors,
not just industrial engineering,

and they all said that they were
greatly concerned about copyright law,

intellectual property, so on and so forth.

I think about that more and more,
particularly when it comes to the,

the level of evolution I've seen
in image and video generators.

Not to say that
copyright is irrelevant.

It very, very much is relevant.

but where I think everyone is
having a little too much fun is

in image and video generators.

In this academic world, how much
emphasis is being put on sourcing

and verification amid this high-speed
scaling up of generative AI?

Joe Wilck: So I, I want to answer
a couple of questions that are

embedded in your question and then
get to your original question.

David Brandt: I give you
a lot, I understand.

Joe Wilck: First, I personally am also
worried about copyright law, and I know

the current executive administration has
kind of downplayed it in the United States.

however, all of these tools that
we're talking about are globally used.

Mm-hmm.

We'll see what happens
in Europe, because Europe has

always been ahead of the United
States in terms of data privacy.

Sure.

And also cybersecurity,
you know, regulation.

Yep.

David Brandt: GDPR, for
social media, for example.

Joe Wilck: Yep.

So I think what's gonna
happen is these companies,

and I'm not gonna name names,
but I believe they're gonna

have to pay a settlement, mm-hmm,

to various copyright holders, because they
used their material to build their models.

For images and video,

I am very concerned about deepfakes.

We are likely gonna have to
see ways for these AI tools to

have, like, a digital watermark.

Mm-hmm.

And so there might be a way to kind
of see a watermark to tell if something

is, you know, a deepfake or not.

David Brandt: Mm-hmm.

Joe Wilck: Now, getting back
to another question you have,

though, about the academics.

Hmm.

In the last two years, all of the
key journals and publication areas

have put into place certain rules
about the use of generative AI.

David Brandt: Mm-hmm.

Joe Wilck: And in almost every
journal that I've submitted to since,

you know, the early part of 2023.

There's a little commentary box
where you have to kind of explain:

did you use generative AI?

If so, how did you use it?

Did you cite it?

You know, all that sort of stuff.

Mm-hmm.

And so I think from an academic
standpoint, we're kind of

still on that honor system,

David Brandt: right?

Joe Wilck: There are plagiarism
checkers, there are gen AI checkers.

I'm not sure how good they are.

But for the most part, we do still
operate on this honor system, and we're

kind of waiting for the AI checkers and
the plagiarism checkers to, improve.

David Brandt: Well, and to what
you described about, you know, how

these companies use labor overseas.

Mm-hmm.

To me that registers, there's still
a mass need for human oversight.

The AI cannot necessarily check itself,
you know, or at least hold

itself to the level of accountability
that our legal system requires.

I just don't see that
we're near that point yet.

Joe Wilck: I, I agree.

I also think that

it's possible these companies are
kind of foreseeing the litigation

that could occur in the future.

David Brandt: Yeah.

Joe Wilck: And they will have a
better explanation for a jury: that yes,

we did have human employees checking
the images to train the model.

David Brandt: Yeah.

Joe Wilck: And, you know, I
personally kind of think,

I'm gonna use the term cyborg,

I personally think we're gonna end up
in like a cyborg situation where we'll

have an AI tool check the image first.

Mm-hmm.

And then if the AI tool
says that it's okay, then a human will

check it for one final verification.

Sure.

And so these companies can say,
yes, we had humans check these,

and to the best of our abilities,
you know, we tried to

eliminate as many as we could.

Right.

David Brandt: Well, it sounds
like the human-in-the-loop

debate will continue; it rages on.

Yeah.

So how is generative AI's software
being incorporated into industry?

Are there specific industries
that you can cite that are

benefiting more than most?

And can you give any specific examples?

Joe Wilck: So I mentioned
OpenEvidence for healthcare.

I think that's a pretty interesting tool.

I also have seen them in, like, chatbots,
or, you know, when you call and

you're in an automated kind of situation,
you're seeing them being used more there.

Where I think they're the most effective
is when you call a line and you're put

on hold: they'll actually ask you, would
you like to go to an automated system?

And if, if you choose to go to the
automated system, you don't have to wait.

What it's doing is it's reducing
the number of people that actually

need to speak to a human,

because some
questions really are basic.

It's like, what are
your hours of operation?

Sure.

Those types of things can be
answered by automation and AI.

All right.

And so what that does is it reduces
the wait time, and then the people

who are in the queue or the line

that really do need to speak to a human,

they still have to wait, but
hopefully their wait time is shorter.

And so I think that's where we're
seeing it being used now by companies.
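The wait-time effect Joe describes can be put into rough numbers with a textbook queueing model. This is a back-of-envelope sketch assuming a single-server M/M/1 queue and made-up rates, not data from any real call center: deflecting the basic questions lowers the arrival rate into the human queue, which shrinks the average wait more than proportionally.

```python
def mm1_wait_minutes(arrivals_per_hour, served_per_hour):
    """Average time a caller waits in queue (Wq) for an M/M/1 queue,
    converted to minutes. Requires arrival rate < service rate."""
    assert arrivals_per_hour < served_per_hour, "queue would grow without bound"
    rho = arrivals_per_hour / served_per_hour            # server utilization
    return 60 * rho / (served_per_hour - arrivals_per_hour)

# Hypothetical call center: 50 calls/hour arrive; the agent pool,
# modeled as one fast server, can handle 60 calls/hour.
before = mm1_wait_minutes(50, 60)         # every caller waits for a human
after = mm1_wait_minutes(50 * 0.6, 60)    # the bot deflects 40% of calls
```

With these made-up numbers, the average wait drops from about five minutes to about one: cutting arrivals by 40 percent cuts the wait by 80 percent, because congestion grows nonlinearly as the servers approach full utilization.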

What I'm concerned about is where we're also starting to see it being used in customer-facing situations where there is no human in the loop, or the humans that are in the loop do not have the authority to help.

David Brandt: Mm-hmm.

Joe Wilck: And I know you're near Atlanta, so in your backyard you've got the car rental agencies at the airport that are using a tool that takes images of the vehicle as you drive it back after renting it, and they're finding a little speck of dust and claiming it's a dent or something.

Right.

And then the person can't make a complaint right there at the station, because you have to call the corporate office.

Yeah.

And it's been shown that it's possible, if the car was wet, for the image to look distorted because of a droplet of water, for example.

Mm-hmm.

And so that's what concerns me: the human who is physically there does not have the authority to override the system.

David Brandt: Yeah.

Gotcha.

Well, to that end, I'd like to jump over to the speed and scale of all of this.

Okay.

From a macro view, I've continued to find it a challenge to anticipate what's coming next.

In fact, I'm really just not even
qualified to say what's coming next.

I'm now browsing, you know, daily newsletters, blogs, and news sites just to try to keep up with insights and announcements from all these different companies.

How do you keep up with the fast-paced
nature of this technology's evolution?

Joe Wilck: I don't. It's impossible.

David Brandt: Great. I think we can end it all here then.

Joe Wilck: No, but in all seriousness, I think I have a curiosity. And so, you know, when I have the time, I'm reading up on what's going on. When I'm at conferences, I'm going to sessions, I'm learning what other people are doing.

Mm-hmm.

The papers that people were
writing in 2023 are finally

starting to hit the journals.

'Cause it takes a couple of years, sure, for the revision cycles to come through.

So I'm starting to see some of that.

But from an industry standpoint, I heard this example from a person out of industry, an administrator, and they said, you know, there are organizations out there that you can trust, that have been your vendor for technical support and that sort of thing. And they're starting to incorporate AI, generative AI in some cases. And you just have to have the trust to grow with your vendor.

Mm-hmm.

Okay.

So as they add on new things, you communicate the things you're interested in, and they go out and develop them or curate them or however they need to.

And I think that's what you have to do. You have to find a trusted source, and then when you have the time, you go back to the trusted source and try to catch up.

David Brandt: And I would argue, too, try to break free of the algorithms. In other words, don't rely solely on social media to feed you, because it's only going to react to the things that you're looking at and responding to, hitting the heart button or the like button, whichever one. Like you say, keeping up with this really is next to impossible.

And, like I just mentioned, this month alone there's gonna be so many new models released, so much news coming out of AI, and that's not slowing down at all.

So I think it'd be best, also, based on what your relationship to generative AI is, if it's something you're using as a tool for your work, to find the right set of knowledge that's gonna help you enhance your ability to use it better to accomplish what you wanna accomplish.

But definitely don't give in to the algorithm if you can help it. And I know that's tricky, because we've now been doing that for 20 years, all the same.

Speaking of which, job outlooks are murky right now. The different economic data that's coming in shows obviously some disruption there.

As of late, college students on their way into the real world, the ones who just graduated, are facing all kinds of hindrances.

Getting an entry-level job has been reported to be much more difficult, because a lot of companies are shifting to leaning on generative AI or other AI technology to essentially replace what an entry-level person would've done. Now, AI is not the sole reason, but it's certainly a dark cloud in the forecast.

Are our jobs in more danger now than they were two years ago? Or is there still a need for what we've been talking about, the human in the loop? Is automation going to convince executives that the workforce can be slashed, which we've already seen among some companies?

Joe Wilck: So, I have a macro view of this. I would argue that for thousands of years, humans have found a way to be employed, in the local economy thousands of years ago and now in the global economy.

Sure.

And in those thousands of years, we've
seen technological advances along the way.

I do think our jobs will change.

I do think some jobs will be eliminated,
but other jobs will then be created.

I'm starting to sound like a politician, but I think it's hard not to talk about it politically.

David Brandt: Well, fair enough about it right now.

Joe Wilck: I think the key thing, though, realizing that perhaps the majority of our audience is engineering-leaning, since we're representing the Institute of Industrial and Systems Engineers: we have technical ability, we have the ability to continue learning, and we have to evolve and adapt.

But also, and I've told this to students my entire career: if you pigeonhole yourself to a certain region or geography, that makes it harder.

Mm-hmm.

Okay.

So if you tell me that you want to get a job in city X, Y, or Z, that may be tougher than saying, oh, I want to get a job on, say, the East Coast or the West Coast. If you broaden your horizons a little bit geographically, then you might open up more doors.

The one other caveat, and again, I'm not trying to get political here: every four years when we have an election and then bring in a new administration, or perhaps bring in a new administration, I should say, there's a little bit of a slowdown in jobs, particularly entry-level jobs in technical fields, because you're waiting.

The companies and organizations are kind of waiting to see what's gonna happen.

Sure.

And so I would just let folks know: what we're seeing right now may also be compounded with some of the changes that we've seen politically.

Mm-hmm.

But I'll be honest, that
happens all the time.

I mean, if you think back five years ago, COVID; if you think back to 2008, 2009, the recession; if you think back to 2001, 9/11; there's always gonna be something.

But then 12 months later, 18 months later, at the macro level, we've got it figured out and we're moving forward, you know, as an industry, as an organization, or as a country.

Mm-hmm.

But that being said, the people who will most benefit are the ones who have the ability, or at least the curiosity, to move to a new city, the ability to relocate, or the ability to pivot and learn something new or go into a different industry.

And so that would be my recommendation: if you're a student, educate yourself as broadly as you can, and if you're a professional, be willing to, you know, make a pivot.

David Brandt: I think all that's great advice.

Certainly when it comes to talking to students who are just out of school and trying to give them some sense of optimism, if the job outlook is murky right now, to say: give it some time.

You're right.

I mean, a change of presidential
administration brings about new policies.

I'm gonna close, I have a couple
of questions left, but I'm

gonna combine them into one.

We've already tapped a little bit into
our preferred AI platforms, beyond

your research and academic work.

And I'll say this for myself as well.

I do use generative AI as a sounding board, as a brainstormer. I need to think about grand ideas in my personal life; I need to think about the long term.

And it helps sometimes to have somebody there to talk to. I'm single, so I don't have anybody at home to, you know, gripe or complain to about, oh, I don't know what to do next. It's nice now to have a sounding board on my computer.

Are you using LLMs in a similar fashion in any way in your personal life? Are you using them to eliminate a routine or improve yourself in any way?

Joe Wilck: The straightforward answer here is I am using it more for code than I would've anticipated two years ago.

And to be quite frank, it's because
I can articulate what I want to

do in a sentence or two faster
than I can actually type the code.

David Brandt: Right.

Joe Wilck: And because of that, I can articulate what I want, hit enter, and it generates the code. And again, I have enough knowledge of code to know whether what it showed me is gonna work or not.

David Brandt: Mm-hmm.

Joe Wilck: And see, I think
that's kind of the big benefit.

When you're a subject matter expert
in whatever subject you're in, you can

use a large language model to generate
whatever it is you need to generate.

And then you can say,
oh, good idea, bad idea.

And see, the benefit of the large language model is you don't have to ask it for just one idea.

Mm-hmm.

You know, if, if I worked in
marketing, I could say generate

10 slogans or something.

And then it would rattle off
10 of them and I would be like,

well, these three are bad.

This one's okay.

This one's really good.

And you know, then now I'm down to two.

David Brandt: Yeah.

Joe Wilck: But it's able to generate them very efficiently, very effectively. So I've used it in areas such as that.
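The generate-then-filter workflow Joe describes is easy to sketch: ask the model for many candidates cheaply, then apply expert judgment to keep the few good ones. Everything here is hypothetical; `draft_slogans` stands in for an LLM call, and the lambda stands in for the human expert's verdict.

```python
def draft_slogans(n):
    """Stand-in for an LLM call that returns n candidate slogans."""
    return [f"candidate slogan {i}" for i in range(1, n + 1)]

def shortlist(candidates, judged_good):
    """The expert's pass: keep only the candidates judged acceptable."""
    return [c for c in candidates if judged_good(c)]

candidates = draft_slogans(10)
# Pretend the expert liked only two of the ten.
keep = shortlist(candidates, lambda s: s.endswith(("4", "9")))
```

The division of labor is the point: generation is cheap and fast for the model, while judging good from bad stays with the subject matter expert.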

Occasionally I'll use it for paragraph forming, you know; maybe I'm writing a recommendation letter.

Mm-hmm.

And I've written thousands of them.

The student has given me their resume
and it's like, well, you know, how

do I wanna piece this together?

How can I say what I've said
a thousand times before?

Yeah.

But in a different way, you know?

So I almost use it like a thesaurus.

Sure.

So overall, I think if you're a subject matter expert, you can actually get some efficiencies, because you know what's right and what's wrong, and so immediately when it gives you a response you can say, oh, that's good. And in some cases it can actually write it faster than you can type it.

Right?

Not faster than you can think it.

Right.

But faster than you can type it.

Sure.

David Brandt: Well, Joe Wilck from Bucknell University, I will be curious about where we stand a thousand days from now. It has been an incredible first thousand days, and there's no telling what comes next.

Joe Wilck: Well, thank you.

It's a pleasure to be with you.

David Brandt: Thanks for joining us again.

Joe Wilck: Thanks.

Elizabeth Grimes: A huge thank you to
Dr. Joe Wilck for joining us again and

offering clear, grounded insight on
how large language models are evolving.

And thank you to our sponsor, FlexSim, now part of Autodesk, for supporting this episode and empowering engineers worldwide with the simulation tools that drive smarter decisions and better systems.

If you enjoyed today's conversation, don't forget to follow Problem Solved on your favorite podcast platform, and share this episode with a colleague.

Thanks for listening, and remember, every
great solution is a story worth telling.
