Episode Transcript
[00:00:05] Speaker A: Hello and welcome to the Axiom Insights Learning and Development podcast. I'm Scott Rutherford. In this podcast series, we focus on driving organizational performance through learning. And in this episode we're talking about artificial intelligence. More specifically, developing skills throughout the organization to support and optimize performance using commercially available AI tools. And for this, I'm happy to be joined by John Kidd. John is a technical training expert and technologist, and he also has several decades of experience in internal systems and product development. John's company, Kidd Corp, is a provider of training and consulting for technical skills and technology enablement. So, John, it's great to talk to you. Thanks for being here.
[00:00:47] Speaker B: Oh, you bet. Great. Thank you for having me. It's good being here, man.
[00:00:50] Speaker A: So I wanted to start, and I don't want to retread all of the technical discussion about how large language model, or LLM, AI works. For those of you listening, we've done a whole other episode on that. If you want to go back into our archives, there's an episode called Artificial Intelligence for Learning and Development with David Wynn and Judy Pennington from University of Minnesota, and we get into the weeds there.
So what I was hoping, John, we could do is focus on the current state of LLM AI, just to understand: okay, where are we now? We obviously have legacy tools. I'm going to call OpenAI's ChatGPT "legacy," which I know is a strange term, right?
And then we have newer entrants, you know, newer versions of Grok. We have DeepSeek out of China.
So can you help walk us through how you see what the offerings are, and what differences we need to be aware of? How do you start to understand this? Where do you begin with this sort of menu of options?
[00:01:53] Speaker B: Yeah, that's a good question. And there are some precursors to that.
I think it starts with answering a couple of questions: what are you trying to accomplish?
Who's the target group that you're trying to enable with it?
I'll try to break that apart, and you can direct me, Scott, if I'm going down a path that doesn't make sense.
[00:02:20] Speaker A: But no, lead on, lead on.
[00:02:24] Speaker B: So if we're working with technologists, I mean the people that are really writing code: not everybody is going to be involved in building a large language model, and I think that needs to be put out there. The majority of us are going to be consumers of these large language model tools.
But if we are dealing with a group of technologists, having a basic understanding of the algorithms that are used to build those large language models is a good thing.
But ultimately the question has to be answered: what are we trying to accomplish with it? The majority of us are going to be using tools like Gemini, ChatGPT, Claude, even Grok for that matter, as, and somebody out there is not going to like this definition, a replacement for what we would maybe usually do a Google search for, because it's a little bit faster, a little bit more interactive. So that's kind of the baseline. The difference between these tools today is wrapped a lot around the data that they're consuming into the model, and whether there's a cutoff date that they're working with. Like ChatGPT: what was it, how long ago was its data last updated? It's like four or five year old data.
Obviously that's beginning to change, because now we're dealing with Gemini, which is virtually real time. And if you look at Grok on X, Grok is pretty much real time. I think that's one of the main differentiators in the market today.
Basically, it's how quickly new data is being consumed into these models that we're interacting with, which, depending upon what you're trying to accomplish, is going to have an impact, right? If I want my gen AI tool to help me analyze the latest Texas school choice bill that they're trying to put through, not to get political, which I won't, then I'm going to go to something like Grok, or I'm going to go with Gemini, because they have the latest and greatest updates. ChatGPT is kind of playing in that area as well. When it comes to technical subjects, to be honest with you, assistance with coding or something like that,
I would say that it's somewhat of a level playing field.
But for the most part, those are the main differences between these tools today.
[00:05:46] Speaker A: There's the currency, I guess that's a better way to phrase it, of the data that the tools are ingesting.
There's also something that I think we've been grappling with in looking at AI tools, at least so far, which is managing the hallucination factor.
What's your experience been in terms of the tools? They're more and more up to date, which is good; they're taking in more and more current information.
But we're still seeing results come out of queries that get circulated around the Internet as things to point and chuckle at, where a tool has simply made up references or made up articles. There was a story in the news I was reading a week or so ago about a pleading that went before a judge citing cases, and I think eight of the nine cited were just completely made up. So that's still happening with these tools, where somehow they're creating an alternate reality. For an organization, or for that matter just a user, doesn't that create, I don't want to say distrust, but it's got to make you pause and say: okay, how much can I really rely on this thing?
[00:07:04] Speaker B: Well, and you know, that's a great point.
I wouldn't use these tools to file my taxes. How about that?
I mean, you're right: you start asking some very intricate, specific questions regarding something like the tax code. Probably six months ago, as an experiment, I did exactly that. I wanted to find out the value of the response that I was getting.
[00:07:35] Speaker A: Right?
[00:07:36] Speaker B: Because remember what an LLM is doing. We use the term artificial intelligence, right? But it's not a reasoning tool. It's not a logical, human-reasoning type of mechanism.
It's basically predicting the next words, the next content, based upon the build of the model itself. And so you're exactly right. There is a hallucination effect, and for anything I get out of any of these tools that I'm going to be depending on, I'm going to double and triple check the facts, because it's not always 100% accurate. And this also plays into one of the big areas where I think there's a lot of value, pick your tool, which is the development area, for software engineers like myself. You know what, I just need a quick example, a snippet of Python code or a snippet of Java code or Go, whatever you're working with.
It does a really, really good job providing that.
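To make that "predicting the next words" idea concrete, here is a minimal, purely illustrative sketch: a toy hand-made probability table stands in for a trained model, which in reality learns its probabilities from vast training data and conditions on the whole preceding context. The point it shows is that the sampler picks whatever is statistically plausible, not whatever is verified, which is exactly how fluent-sounding but fabricated citations can come out.

    import random

    # Toy stand-in for a language model: for each context word,
    # the plausible next words and their probabilities.
    next_word_probs = {
        "the": {"court": 0.4, "statute": 0.35, "precedent": 0.25},
        "court": {"ruled": 0.6, "held": 0.4},
        "ruled": {"that": 0.9, "against": 0.1},
    }

    def generate(start, max_words=5):
        words = [start]
        for _ in range(max_words):
            options = next_word_probs.get(words[-1])
            if not options:
                break
            # Sample by probability: plausible, not fact-checked.
            choice = random.choices(list(options), weights=list(options.values()))[0]
            words.append(choice)
        return " ".join(words)

    print(generate("the"))  # e.g. "the court ruled that ..."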
[00:09:03] Speaker A: Now, is that also because, oh, sorry, I don't mean to interrupt, but is it also because, when you're developing code through an LLM, and I'm, I will say, marginally technical, technical enough to be dangerous, the way I would look at it is: if you get a code snippet from ChatGPT, you can find out pretty quickly whether or not it's going to work, and then iterate if it doesn't.
[00:09:28] Speaker B: That's exactly right. And so the point is that I'm always going to take whatever it gives me and try to understand it to the depth it really needs to be understood, rather than trusting what has been given to me and just integrating it into my code base, if that makes sense.
I'm going to make sure that I understand what it's done, that I understand the pieces of it, because it's going to be up to me to make sure that it's maintained and that it works exactly like I expect it to. For most software engineers, that makes total sense; it's like, yeah, of course I do that. But you'd be surprised at the content that's actually generated by the LLMs in different technologies. I was working with Terraform with a group this week, and some of the stuff that I saw come out was not even ballpark correct. Or, probably more important, and this is the nuance that I think is the current concern: it functions, but is it the best way to actually implement what we're trying to implement? You know what I mean?
The fact that it works correctly, that's great. But is it maintainable? Is it built the way that we really want to build it? I think that back end of it is going to be important as well.
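A minimal sketch of the "understand it before you integrate it" habit described here: wrap whatever snippet the tool handed back in a few cheap sanity checks before it touches the code base. The helper below is a hypothetical stand-in for an LLM-suggested function.

    # Hypothetical LLM-suggested helper: remove duplicates, keep first occurrence.
    def dedupe_preserve_order(items):
        seen = set()
        return [x for x in items if not (x in seen or seen.add(x))]

    # Quick checks: cheap to write, and they force you to read and
    # understand what the snippet actually does before you depend on it.
    assert dedupe_preserve_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_preserve_order([]) == []
    assert dedupe_preserve_order(["a", "a"]) == ["a"]
    print("snippet behaves as expected")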
[00:11:01] Speaker A: So I wanted to ask your opinion on the opportunity with AI. We've been talking about the opportunity to train staff within an organization, whether that's technical or non-technical staff. But I wanted to back up a second and recognize that a lot of the folks listening to or watching this episode are in learning and development, and they're also trying to figure out: how do I use AI in developing my learning content for my organization? So it's a slightly different lens on that.
Do you have experience or advice for the L and D professional who's trying to compress their content creation timeline, or to accelerate delivery or lower costs using AI? Are we at the point where that makes sense, or are there quality risks that you see in embracing AI too quickly?
[00:12:04] Speaker B: You went right to it. It's the quality risk. But again, to me, it goes back to exactly what we just got through talking about, whether I'm writing code, generating content, building outlines, or building a learning path. The tools are really good for people like me that don't do real well with a blank sheet of paper, if that makes sense. Give me a starting point, and as long as I understand what I'm trying to accomplish, man, these tools are fantastic. For instance, if I was trying to build a learning path to take a group from point A to point Z in a technology, I'll have it build the path for me, and then I'll go in and look at it and say, no, that's not correct, I wouldn't do that. But it gives me something to edit rather than building something from scratch. The same thing is true with course outlines.
You know: build me an outline for learning, I don't know, pick your technology.
Well, the outline that it builds oftentimes is an outline that's going to take months to get through.
So obviously I'm going to go back in and say, no, that doesn't make sense; yes, that's a good order, and then do the editing on top of it. I've been circling the field here, so I'll land the plane: you and I still need to be the experts. I cannot take what's given to me as gospel, so to speak. It's going to be a tool for getting me over that initial creative hump, if you will, for moving me down the path to the goal that I'm trying to accomplish.
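As a concrete illustration of that "give me a starting point" workflow, here is a minimal sketch using the OpenAI Python client; the model name and prompt wording are illustrative assumptions, and any chat-capable tool would serve the same purpose.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = (
        "Draft a course outline for a three-day introduction to Terraform "
        "for system administrators. List modules with rough timings, and "
        "keep it to what is realistically teachable in three days."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )

    # A first draft to edit down, not a finished learning path.
    print(response.choices[0].message.content)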
[00:14:22] Speaker A: Right. Well, the oldest advice in the world is don't reinvent the wheel, and what you're describing sounds a lot to me like exactly that: don't reinvent the wheel. Use the tool to give you the foundation, which is going to be maybe not great, but good. And then adjust and use your human expertise on top of that to make it right.
[00:14:22] Speaker B: Yep, exactly. And in general, across the board, I think that is a really core piece of advice in using these tools. Right.
Something that came to mind as I was thinking about us doing this today, Scott: I've been involved in quite a few different cloud adoptions, from Azure to AWS to GKE.
And the statement that I consistently make about this is that it's never about the technology. It's always about the culture; it's always about the human interaction, if you will.
And I don't think we're talking about that enough, quite frankly, when it comes to these tools, because the tools will do whatever I ask them to do, for the most part. The real crux of the matter is inquisitiveness.
And that's what we need. We need inquisitive people to be able to get from these tools what they can actually provide for us: the good question, you know what I mean, and going down that path. Because the tool is not going to lay out the question for you; it's up to you to ask the questions. And that sounds like such a simple thing to say, but it's a really important piece of this whole thing, because you're not going to get from it what you don't ask of it.
[00:16:12] Speaker A: Right. And the flip side of that, too, is what happens if you're not trained to ask the right question, or don't know how to manipulate the levers, to be a little bit physical about it.
One of the promises of AI is accelerating innovation, accelerating time to market, reducing costs, and those are enterprise-wide benefits. If your staff are grasping at the levers without purpose, a lot of those savings go away.
[00:16:46] Speaker B: Yep, they do go away. And I think the path you're going down there, Scott, is the idea, or not the idea but the skill, of learning how to interact with the tool that you're using: the prompt engineering type of thing. Because the way you pose the question is going to have an impact on effectiveness, and learning that is not an overly complex skill. It's more of an orientation
on how to interact with your tool of choice, whether it's Gemini, ChatGPT, Claude, Grok, whatever it is that you're using. Learning how to prompt it to get what you need from it is an important skill.
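A minimal sketch of what that orientation buys you: the same request posed two ways. Both prompts are illustrative, not canonical; the structured one simply states role, task, constraints, and output format, which is most of what day-to-day prompt engineering amounts to.

    vague_prompt = "Tell me about onboarding."

    structured_prompt = """You are helping an L&D team at a mid-size company.
    Task: draft a 30-60-90 day onboarding checklist for a junior data analyst.
    Constraints: one page, grouped by week, each item starting with a verb.
    Output format: a plain-text checklist, with no introduction or conclusion."""

    # The structured version leaves the model far less room to guess.
    for name, prompt in [("vague", vague_prompt), ("structured", structured_prompt)]:
        print(f"--- {name} ---\n{prompt}\n")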
[00:17:43] Speaker A: Right. And it sounds like that might be one way to look at it. There's a little bit of anxiety, I think, about training for AI right now, which is: okay, how do I get ahead of this thing and train in a way that's going to be meaningful and relevant six months or a year from now as these tools continue to evolve? We're chasing a ball that's rolling away from us at some speed, in many cases.
But it sounds like what you're describing is: let's focus on the basics of understanding how to manipulate the tool, with prompt engineering perhaps as the core of that, because that's going to be a durable skill even as the technology evolves.
[00:18:23] Speaker B: Exactly.
And not to be an advertisement here, but as a big fan of instructor-led training, this is one of the areas where I think it excels, because a lot of the courses I teach actually include interaction with either ChatGPT or Gemini. As we're going through learning brand new concepts, I'll pull over ChatGPT and say, well, let's ask that question and see what the AI gives us. You know what I mean? So the point I'm trying to make is that I think we're at the point where we need to begin incorporating the actual usage of these tools into the context of the training that we're offering to employees, students, wherever they're coming from. Because it's not going to go away, and as you said, it's only going to accelerate. So we should begin breaking that ice, showing the impact of how it can really assist.
I think that's huge and I think that's an important piece of that whole learning and development arena.
[00:19:53] Speaker A: As an instructor, how do you approach advising, or maybe prescribing, AI learning for the various audiences in an organization? We've talked a bit about technical audiences, and I think perhaps the path there is clearest: if you're solving a coding problem, you can use the tool as a resource to interrogate, to help you generate code. That makes sense to me very clearly. But how do you work with, say, a mid-level manager or the C-suite, and say: how should you be using or thinking about AI in a way that's going to help your role?
[00:20:34] Speaker B: Well, for me, I think there's some basic orientation: okay, this is what it is, this is how you get to it, and these are the basics of how you interact with it. But then I think it's very much an interactive type of activity, because what I like to do, depending upon the group, is ask: what's your job role? What is it that you do day in and day out? Okay, that's great. What questions do you normally have through the course of your day?
What information do you want at your fingertips, to be able to access easily? And then I actually take them through different scenarios that they may encounter during their days, based upon what they've told me, and we actually do it. Because seeing it, and interacting with it based upon yesterday's problem that you were trying to solve, and seeing how you could have done it with ChatGPT or Gemini or the tool that your company is using, I think that's important. That's how you make it really concrete, and you kind of remove the illusion of AI. Because I think it has kind of a,
I don't know, movie effect to it: we're going to be taken over by the AI bots, that kind of thing. You know what I mean? But.
[00:22:16] Speaker A: Well, yeah, yes, go ahead.
[00:22:18] Speaker B: You want to break down that barrier, you know what I mean? To say: look, see, we just got an answer to the question that you had.
And obviously you need to know whether the data you've been presented with is factual and accurate, based upon what you already know. But this is how you can get that information very, very quickly.
[00:22:43] Speaker A: So I did want to build on what you were just saying in terms of, maybe, the technological skepticism.
When we were prepping for this, I used the phrase that AI is kind of the dog that caught the car: now what do I do with it, now that I've caught it?
It reminds me of where we were in business in, frankly, the mid-90s with the Internet and the web. There were a number of businesses I remember at the time who looked at the web and said, it'll never catch on, this is a distraction, why are we putting so much money and effort into this? It's never going to amount to anything.
We've since proven that wrong. But it was a technology that many businesses were trying to learn while
building the plane as they flew it, and then trying to adapt to an unknown future. That seems to be where we are today with AI, where there's a potential for businesses to really embrace the transformation of their business model, to embrace AI rather than just implementing it on a tactical level.
[00:23:53] Speaker B: Oh, no, I totally agree. And I think your analogy, the comparison to the adoption of maybe not the web as much, but the cloud, is right. Cloud adoption onto one of the big three providers is a direct parallel, in my most humble opinion, to the whole AI thing.
With the cloud, I'll never forget a conversation I had with, I guess you would call them an operator, an admin of their ecosystem, their compute world. I'll leave the company name out of it.
We were discussing the impact of moving into the cloud on the group that he managed, and he said, well, frankly, all of my folks that do operations are really concerned about their future. As we discussed it, the idea that the cloud was going to take over their jobs was very prominent. I think we're experiencing the exact same thing right now with AI, with gen AI as well, all across the board.
The answer to that, though, was not that it's going to take your job. You just need to learn what the next evolution of your role is, and how to adopt that capability into what you do.
And a lot of those folks found out that the cloud didn't take their job. AWS didn't take their job. What they had to learn was the infrastructure of the cloud provider, and then adapt to it. Well, the same thing is true with these AI tools that we have. It's not that they're going to take over your job; I think we're quite a ways away from that.
But how can I use it to make what I do more efficient? How can I get to the finish line quicker because I'm using these tools? How is it going to help my project team, and in what ways would it be beneficial?
I think that's where the focus needs to be, because it's not all of a sudden going to kick out your whole legal department, the folks that manage the insurance at your company, your developers, across the board. It's not going to replace them. So being able to adopt the benefits of it is huge, and being able to see that is a very big part of the training, because I think companies are out there scratching their heads, going: what in the world do we do with this? It's out there. They're very concerned about not getting company information into these tools, so much so that employee access to them has been totally locked out. I think we need to find a way to begin to open up the doors a little bit, and many companies have accomplished this in many different ways, so that people can begin integrating this into their daily workflow, if that makes sense.
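One common pattern for opening those doors, sketched minimally below: scrub obvious identifiers out of a prompt before it ever leaves the building. The patterns and names here are illustrative assumptions; real deployments rely on proper data-loss-prevention tooling rather than a few regexes.

    import re

    # Illustrative redaction rules; a real policy would be far more thorough.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
        (re.compile(r"\bAcmeCorp\b", re.IGNORECASE), "[COMPANY]"),  # hypothetical employer
    ]

    def scrub(text: str) -> str:
        """Replace sensitive-looking substrings before sending a prompt out."""
        for pattern, replacement in REDACTIONS:
            text = pattern.sub(replacement, text)
        return text

    prompt = "Summarize this note from jane.doe@acmecorp.com about the AcmeCorp renewal."
    print(scrub(prompt))  # identifiers are masked before the text leaves the company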
[00:27:42] Speaker A: I think it does. And I'd bring it back to an example within learning and development. Let's say you have a task: you're developing a module, and you're going to spend five hours putting together assets and organizing a learning flow. Well, if you could, as the L and D professional, put AI into that process and say, I'm not going to do the graphics manually, I'm not going to do the voiceover with a human, then perhaps you don't end up spending fewer than five hours, but you might spend those five hours differently. And my hope for AI would be that the quality of the product increases, because you're enabling your experts, the people you're paying to do the job, to use their skills more effectively. The human element can be supported, I think, by the AI enablement.
[00:28:33] Speaker B: Yeah, absolutely. And going back to another conversation with that same manager about his operations team: the other thing they were really concerned about at that time was the blossoming of DevOps. Oh my gosh, this whole thing is going to automate my job away. No, it's not. It's actually going to make what you do more efficient, and it's going to allow you to focus your attention on those other problems that you've been wanting to look at but haven't had the time for. Now that we can automate a lot of that stuff, hey, I can go solve those problems. The same thing is true here. I think it really, truly is.
And a lot of the classes I teach, especially the technical classes, have really been able to demonstrate that. Oh, wow, I didn't know it could give me that information.
Somebody asks a question: what would that look like if I put that together in a Python subroutine? Well, let's take a look at that real quick. I'll pull over ChatGPT and have it generate something for me. It's not close to production quality, but it gives you an idea of what it would look like. And we just accomplished that in a matter of, what, three or four minutes, rather than spending the time going back through the whole thing and working out how to put it together. And, as you well know, Scott, an example is worth a thousand words, just like a picture, because an example shows you how something functions.
That's a great teaching tool, and I think that's a really big benefit today of what these tools can actually do for us.
What I find funny at this point in history, and I think your analogy with the cloud is right on the money, is that there are many things we do in our job roles, and I'm not just speaking technically, it's across the board, even in business roles, that would be huge to offload if we could. Think: I just wish I had a tool that could summarize the highlights of all these documents, because I used to spend hours and maybe days on that type of activity. Hey, guess what? We do have those tools. So it's learning where it's going to be helpful, so that we can adopt it and then move forward. I think we're going to become more efficient as a result of it, to be honest with you.
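A minimal sketch of that "summarize the highlights of all these documents" use case, again with the OpenAI Python client; the model name and file contents are illustrative assumptions, and a real workflow would also need actual document parsing.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    documents = {
        "q3_review.txt": "...",    # stand-ins for real document text
        "vendor_memo.txt": "...",
    }

    for name, text in documents.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any chat-capable model works
            messages=[{
                "role": "user",
                "content": f"Summarize the three most important points in:\n\n{text}",
            }],
        )
        print(f"== {name} ==\n{response.choices[0].message.content}\n")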
[00:31:33] Speaker A: John Kidd, I appreciate your time. Thanks for coming on the podcast and great to talk to you.
[00:31:37] Speaker B: Oh you too. Thanks for having me.
[00:31:40] Speaker A: This has been the Axiom Insights Learning and Development Podcast. This podcast is a production of Axiom Learning Solutions. Axiom is a learning and development services firm with a network of learning professionals in the US and worldwide, supporting L and D teams with learning staff augmentation and project support for instructional design, content management, content creation and more, including training delivery and facilitation, both in person and virtually. To learn more about how Axiom can help you and your team achieve your learning goals, visit axiomlearningsolutions.com. And thanks again for listening to the Axiom Insights podcast.