Ep 56 - Workflows vs Agentic AI – What's the Real Gap?
Ep 56 - Workflows vs Agentic AI – What's the Real Gap? Podcast and Video Transcript
[Disclaimer: This transcription was written by AI using a tool called Descript, and has not been edited for content.]
Dave Dougherty: [00:00:00] Hello and welcome to the latest episode of Enterprise Minds. Everybody's present. This is awesome. So this will be a good conversation. The broad summary of what we're gonna talk about today is old technology and facades of newness. What does that mean? Stick around. You'll find out.
Quick pitch before we jump into the conversation. We are doing a newsletter called Pathways. Please go sign up for that so you can get additional information from us or deeper dives into topics or interesting ideas that we find along the way. That'll be one of the primary ways for you to interact with us and get ideas to us, for us to discuss or look into or whatever you wanna say. Family [00:01:00] recipes, whatever. So that pitch being done, Alex, you had the idea. Start us off.
AI Agents and Current Technology
Alex Pokorny: Sure. Yeah. Just as a common topic going around the AI circles right now, I'm talking about agentic AI, or AI agents. Mm-hmm. And also, OpenAI just had a release that basically is able to incorporate OpenAI's ChatGPT into other products, like Google Calendar, Gmail, other things like that.
A few of us have also been playing around with some other kind of connection-based tools. Zapier is a common one that everyone probably has heard of, but n8n and Make (make.com) are others. Those kinds of tools try to connect together different pieces of software and different platforms, and then usually there's some kind of trigger.
When this thing happens, do that. So if someone fills out a form on your website, it goes into a Google Doc or a Google Sheet, and then there's a reply to that from Eloqua. HubSpot has this, Salesforce has this. Eloqua itself actually just has it prebuilt in. It's [00:02:00] not new tech, right? So there's this line between what we have today, which is chatbots,
and basically what we have kind of coming, that's old tech gone new, which is if-this-then-that, or workflows, or apps, or software, or automated triggers, whatever you wanna call it, right? Which is basically just different connection points that are automated. And then there's this next giant leap on the other side of the Grand Canyon that suddenly is agentic AI and AGI.
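The if-this-then-that pattern Alex is describing can be sketched in a few lines of Python. This is a hypothetical illustration, not any real Zapier, Make.com, or n8n API; every function and field name here is made up.

```python
# Sketch of a trigger-action workflow: one trigger event runs a fixed
# sequence of actions. Real tools wire these to live APIs; here the
# "sheet" and "outbox" are plain lists standing in for those services.

SHEET = []   # stand-in for a Google Sheet
OUTBOX = []  # stand-in for an email platform's send queue

def append_to_sheet(row):
    # Stand-in for a Google Sheets API call
    SHEET.append(row)

def send_reply_email(address):
    # Stand-in for a prebuilt email action in Eloqua/HubSpot/Salesforce
    OUTBOX.append(f"Thanks for reaching out, {address}!")

def on_form_submit(form):
    """Trigger: someone fills out a form on the website."""
    append_to_sheet([form["name"], form["email"]])  # action 1
    send_reply_email(form["email"])                 # action 2

on_form_submit({"name": "Ada", "email": "ada@example.com"})
print(OUTBOX[0])  # Thanks for reaching out, ada@example.com!
```

The point is that everything is prewired: the trigger and both actions are fixed in advance, which is exactly why this is old tech rather than an agent.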
And I'm struggling with how we get from where we are today, what we're talking about and what kinds of things are possible with the current tech, to where on earth, somehow, the discussion about agentic AI and what it's able to do sits, which is this, like, you know, amazing home robot sort of thing that apparently is able to do everything very simply, right?
That's on the other side of the Grand Canyon, and I can't rationalize that story. I can't wrap my head [00:03:00] around it that well. So I want to talk to you guys about the few little experiences that I've had recently that have been kind of leading up to this, and then trying to square those with the statements out there right now about AI and what it can currently do, because they're in, like, two different worlds.
And I don't know what's in between. I don't know what the bridge is in between these two pieces. So I really wanna talk to you guys as well: where have you seen this going? Do you see it continuing all the way through, or do you think there's a gap somewhere? We're gonna get into a little of that today.
We'll get through a little bit of that. Mm-hmm. But we're also getting into some use cases, the realistic, real-life use cases of AI and where it stands today.
Dave Dougherty: Right.
Defining Agentic AI
Dave Dougherty: I think before we go too far in, we need to agree on some definitions, at least for me, as we go through this conversation, because it is so murky.
There are a [00:04:00] lot of people using "agentic" in very liberal ways. And yeah, so for the sake of discussion and debate and hyperbole: at least in my mind, what you were talking about, Alex, the if-this-then-that and all that, that to me is a workflow. An automated workflow that has been around for 15, 20 years, through the marketing automation platforms, through some of the Zapier connections, like even the really old ones where you could, you know,
upload your YouTube video into Facebook automatically, so that you wouldn't have to, right? Right. Like, that's been a thing for a really long time. It's been very manual to make sure that it works correctly, and that it doesn't fail, and that, you know, as the platforms change, the thing [00:05:00] still works.
Has there been some cost savings with that, for certain people who are minded like that? Yeah, absolutely. Absolutely. Is that where most people are playing? Is that what most people believe agentic AI is? I don't believe so. I think the perception of agentic AI is that it has that reasoning capability, and it takes actions on your behalf.
Like, it knows enough of your data and enough of your processes to know that, okay, if this particular thing in the CRM is triggered, then send this, you know, drip campaign of emails, send this SMS text, do this, that, and the other, and alert the sales guy. Like, okay, great. That is awesome, 'cause now you're talking six or seven different communications.
You don't have to do those emails as a human anymore. You don't have, you know, like, that's all, that's all wonderful. [00:06:00]
Real-Life Use Cases and Challenges
Dave Dougherty: That presupposes a lot of knowledge on the front end that, you know, we've talked about on other episodes. Where are you comfortable giving these systems, these platforms, that much data, right?
And, yeah. Is that at least clear? Would you guys change those kinds of definitions, or is that how you see them, before we move on?
Ruthi Corcoran: I think we're good, but I'm just gonna recap a little bit, and also I'm gonna add to this. So we've got sort of the generative piece, which is: based on natural language, we're creating text, right? Correct. Visuals. There's that piece. You have automated workflows, which, to your point, Dave, are sort of sequential steps that are repeated in a specific order, over and over and over again. Mm-hmm. Again, oftentimes with if-thens, so they can become more complex. And then [00:07:00] we have this agent piece, which is almost an independent actor that's able to take in information,
pursue a specific goal, but make decisions along the way, regardless of the specified workflow. Yes. And then learn based on feedback loops. So those are the three that we wanna play around with.
Alex Pokorny: Mm-hmm. That's it. I mean, I think your third definition's really good. I'm taking notes.
Ruthi Corcoran: I totally ChatGPT'd it.
Alex Pokorny: Well, the example in my mind is, a lot of people talk about agents as, like, oh, you can buy sports tickets and a plane ticket. I'm like, that's cool, except no, because there's decisions to be made. So if I had a high-school-educated individual as an intern, and I said, hey, I need to book a ticket from my city to New York City on this day,
They could go do that. And they were like, oh, I got you a flight. And it, [00:08:00] the flight leaves at 8:50 AM which means you need to get to the airport a couple hours earlier. And the cost is gonna be this, and your return flight will get you back around like 5:00 PM or something like that. And it'd be like, oh, those are great times.
That's great. A bot who would be more price conscious would say, Hey, I got you on a 4:00 AM flight and you're gonna come back at 11:00 PM And I'd be like, no. I don't want that. That sounds terrible. Or like I got you a rental. You're gonna have to take an Uber for two hours outside of the city, but there it is and it's super cheap.
No, I want one at the airport. I'm willing to pay more in that situation, but I'm not willing to pay a luxury amount. I'm willing to pay a somewhere-in-between amount. And a high schooler would have the reasoning to say, ooh, booking a 4:00 AM flight or 5:00 AM flight seems a little weird. I should probably at least ask first. Or, you know, maybe that's not right.
Maybe [00:09:00] I should look for this. Or, Hey, where is that super cheap car actually located? And they Google map it and they're like, oh, oh, it's super far. And with rush hour traffic and all the rest, you're never gonna get there. Like there, there's all these other little added on pieces. The car rental thing was maybe a personal life example there.
Dave Dougherty: Well, having just experienced the car rental situation with a friend of mine: I was looking at the prices, and he, you know, he tends to be on the cheaper side of things, and I said, screw it, the premium car is only $20 more. I will pay the $20, because $5 a day for a nicer car is worth it to me when what we're talking about is a lot of driving to and from places.
Sure. Right. Because then I don't show up on my vacation and get stuck in a, you know, Volkswagen Golf with all of our luggage, with all of our, you [00:10:00] know, and we're just crunched. Like, mm-hmm. Nope. Nope. Not happening. That's not how I travel.
Alex Pokorny: Yeah. Which there's a lot, there's a lot of context there and mm-hmm.
There's a lot of nuance there as well, which I think, to Ruthi's point about her definitions, really gets into the agent that people are talking about, versus the agent that we've seen a lot of today, which is more of those automated workflows. Which would be: go to this website, find this thing, select it, go through this purchase process, use the saved credit card, purchase it. That's a workflow.
There's, I think, a good line there.
Ruthi Corcoran: Some thoughts come to mind as we talk through this. And I'm also thinking about, Alex, your original question of sort of, what can it currently do, and what do we think it can do, or where is it trending that it maybe isn't quite yet, but we see little bits and pieces of it. That's kind of how I see it.
So first I wanna just put a note that if you put a high schooler or intern on this, they might [00:11:00] not have the full thought process. Oh, I know. Yeah. We might be giving too much credit, but I think it's a really good analogy. But a workflow, to me, is more like an intern: you have your documentation, and you go execute said documentation.
Dave Dougherty: I'm glad you brought that up, because you are publicly questioning the intelligence of high schoolers. So thank you for bringing that one up, Ruthi.
Ruthi Corcoran: No, I'm setting the bar low. Go ahead and jump.
But then I think about the more agentic piece, where we're talking more about an assistant or a trained coworker, right? Mm-hmm. It's, you know, that extra layer. And what I notice is a couple of things. First, we talked about this a little bit, Alex, this convergence idea: you've got the natural language, so now you can talk to your computer, [00:12:00] which I think we brought up too as an idea. Like, now you can speak to the computers.
That's pretty cool. That alone, mm-hmm, changes the game a bit. You can speak to the computer in order to execute on the automated workflow, so that addition now all of a sudden makes the automated workflow a bit more powerful. Or you might be able to create more automated workflows than you could before,
simply because you, meaning me, a non-technical user, can now create automated workflows I couldn't before, because I didn't have the technical chops. So that's kind of cool, and it's important to note that that alone is an upgrade over what we had before. But maybe it's not the dream yet. And so then I think about this agentic thing.
And what I notice is, setting even work cases aside: in my own personal use cases, when I'm using ChatGPT to find the latest recipe, or figure out what I can make out of my kitchen, or decide on paint colors, it's already doing [00:13:00] that extra layer of asking me contextual questions.
It doesn't presume to know all the time what the answer is before giving me the answer, and that, I think, is important. It's learning enough, or it looks like it's learning enough, to be able to ask the follow-up questions. Related to the rental car thing, related to the airport: what is a convenient time to travel?
That wouldn't shock me as a follow-up question that the current level of technology could do. Mm-hmm. So that's where I go. I don't think we're that far away from what we talked about as this agentic idea. And I suspect that it's already in practice in places where it can take in information.
It can say, okay, I think I know what I need to do. I can execute on it. I can ask for clarification if I need to before moving forward, and I can make a decision. Like, I have seen a couple working [00:14:00] examples of that. So that's my 2 cents.
Dave Dougherty: Yeah. So I was in a... okay, go ahead, Alex. No, no, taking a different direction? Go ahead. Okay. So I was in a conversation with somebody, you know, coffee talks. Like, I keep Fridays open for those kinds of things. Right.
Stages of AI Adoption
Dave Dougherty: And we were talking about, sort of, what are the stages of AI, what are you thinking of for, like, individuals? And I know this has come up in certain conversations around job interviews, to show, like, hey, you are AI-first. Let's classify the potential new hire on AI knowledge or awareness or whatever else. Mm-hmm. And for me, I think, you know, Alex, if you bring up the if-then stuff and the workflows: I have been surprised by the sheer number of people that I've met over the last 15 years of being in business,
the amount of [00:15:00] people who don't know that that's a possibility, even though it's been around, even though it's been in a simpler form. So on the one hand, it's kind of like AI when we first started talking about it three years ago, where it was, hey, this is going to enable a whole number of people to get access to things that they currently don't have access to.
Mm-hmm. Right? Like, if you go to ChatGPT and you say, hey, give me a personal workout plan, because you can't afford a personal trainer: that's a fantastic step towards better health, if you actually follow through and do it. Right. Because, you know, you shouldn't be punished for not having enough money for personal training.
Right. So that part's cool. But then you have this middle level and the expert tier. And I think what we're talking about is that kind of expert tier, where in small and medium-sized businesses you might have more opportunity to play with these fully automated things. Or knowing that, [00:16:00] okay, if I look at all the different data sources that I have, there's a possibility that I can connect them via APIs, or, you know, however it's decided that it's okay to do it.
And then from there I can do this, that, and the other, right? Mm-hmm. I don't know of a huge number of people who actually think that way.
Upskilling people to that point is good. I'm glad we're having those conversations. But, as I kind of outlined it during the coffee talk, for me it was like: if you are at zero, the first thing that you should do is just pick an AI platform and get used to it, right? Do the free version. Once you're comfortable with the free version, get the paid version, you know, get comfortable with the paid features.
Step two: deep research. Do deep research, figure out how to do that, how to leverage multiple tools simultaneously, you know, pitting certain things against [00:17:00] each other. That's kind of stage two. Stage three is that multiple-sources stage, whether it's your documents or, you know, your Google Looker Studio with your YouTube and your ads and whatever else already in a dashboard, to then make decisions off of that dashboard data, right?
Those are all different use cases, but also varying levels of skill set with the AI, and also varying levels of awareness of how to use the tools and what the possibilities are with the data stuff. So, just something that came to mind.
Alex Pokorny: Yeah, and I think it's really important, to both your points about accessibility.
Mm-hmm. Like, there's definitely stuff that I am doing today that I could not do two years ago, which is enabled, mm-hmm, largely, mm-hmm, by an LLM of some sort. It's absolutely true that that's allowing me to kind of get to that next stage, and [00:18:00] allowing me to be that kind of technical person. Or, in some of the cases of what I've been doing, things that would've, with a normal work schedule and meeting schedule, taken me probably a month are now taking me a couple of days.
Right? Like, that's a huge upgrade. And troubleshooting, sounding-boarding: somebody to, you know, chat against on this very, very deep technical topic, and get advice on troubleshooting a technical thing.
Dave Dougherty: Mm-hmm.
Alex Pokorny: That's tough, and especially, like, at my convenience. That's a huge difference.
Like, yeah, maybe I would've been able to pull some time with a particular developer and say, hey, can I grab lunch with you, because I'm stuck on this. On the other hand, I just pull up a new browser tab, and I'm asking questions, I'm getting answers within a minute. That's fantastic accessibility
Ethical Decisions and AI Limitations
Alex Pokorny: that we're getting out of it. The agentic kind of gap, or gulf, is also in part due to a recent experience I shared with you guys before the show, about [00:19:00] asking ChatGPT how it came up with some ethical decisions. I'd had a rather long, like, text-based adventure thing that I had created with it, going back and forth, and there were a bunch of ethical decisions, and I started questioning it.
I was like, you know, I made a bunch of decisions during this process where it's like, you can do this, this, or this. And I was like, which one do you choose? And it's like, you do this or this, here's the situation, here's the next thing. Well, the village... Right, exactly. Like a little narrative-based campaign.
Right. Kind of almost Dungeons and Dragons-esque, but not really. There's no probability, kind of, or odds being used. No dice rolls. I was like, what are the odds that I would've chosen any of the four options that it had consistently given me? And it started listing out different percentages.
I was like, interesting, interesting. Which ones would you have chosen? It started listing out its likely ones, and some of 'em matched and some of 'em didn't. I was like, interesting, how do you make [00:20:00] your ethical decisions? Mm-hmm. And it went through, which we've talked about in previous episodes, and you've probably heard from other places, how these systems actually work: it's based off of what's called vectorization and embedding of text.
But really, if you think about it: if I'm gonna write a book, and it's a, you know, fiction story, I'm gonna start it out, "It was a dark and stormy..." What do you guys think the next word is? Night. Night. How do you know that?
Ruthi Corcoran: Common knowledge?
Alex Pokorny: Because of all the other books that start the exact same way.
And that is basically, in short, how an LLM works. It looks at all the information that it has. Mm-hmm. Says, what's the most likely next phrase, and basically picks that phrase. And it builds it out, and it keeps on going. And different topics that are closer together are put together. So more likely we're gonna talk about a dark and stormy night, versus a dark and stormy, you know, decade, century, room, whatever.
And all those other words are basically [00:21:00] farther away than the word night is, so night gets pushed in. The word day is very far away, because it doesn't make any sense at all in that context. Right. So it's building out all these things, and it basically gets down to the point that it is looking at words, and the relationships between them, based upon all the information it has. But it has no understanding of what the words mean.
Right. It knows connection points. It's math, probability-based. Exactly. Connection points where these words are more likely to be next to these words, and these words are more likely to be far away. So the far-away ones we don't use, the ones that are close we do use, and it creates that. So it's, you know, giving you that generic kind of answer, best practices and all the rest, because that's what it has.
It's not coming up with a creative answer, a creative solution, because that would be one document among the thousands that it has, while the other thousands are doing the best practices. Right. Mm-hmm. So that's the thing: there is no comprehension there.
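The "dark and stormy night" intuition can be made concrete with a toy next-word counter. Real LLMs predict over learned vector embeddings rather than raw counts, so this only illustrates the "most likely next phrase" idea; the mini corpus is invented.

```python
# Count which word follows "stormy" across a tiny corpus, then pick the
# most frequent continuation, the crude version of next-token prediction.
from collections import Counter

corpus = [
    "it was a dark and stormy night",
    "a dark and stormy night began",
    "the dark and stormy night ended",
    "one dark and stormy day",  # the rarer continuation
]

follows = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):  # consecutive word pairs
        if a == "stormy":
            follows[b] += 1

# "night" follows "stormy" 3 times, "day" only once
print(follows.most_common(1)[0][0])  # night
```

Everything Alex says about "closer" and "farther" words is, in a real model, distance in an embedding space rather than these literal counts, but the selection principle is the same: the likeliest continuation wins.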
Iterative Development and Future Challenges
Alex Pokorny: And that's the gap that I'm seeing. We can come up with [00:22:00] some... I'm also building a GPT for a landing-page, you know, prototype kind of tool.
And it's an extremely iterative approach. Every time it kicks one out, something's obviously wrong with it. So I have to go back and edit that GPT and say, no, we've got another rule that we always use: you always gotta have a privacy policy at the end of our form, you know, we always have to have these fields, and all the rest. Keep going back and forth, back and forth, back and forth.
It's iterative, which is software development. Mm-hmm. A very accessible software development, but it's software development. Mm-hmm. It's not intelligent enough to look at it and say, here's what they're trying to go for, I'm gonna put up a creative solution, here it is. And that's the gulf that I don't understand how we're gonna get past.
I don't know how we're gonna get past that gap towards this future state, when it's no longer documentation-based, it's now action-based, and there may not be a template that it can use. [00:23:00] That's the gap that I'm seeing. That's the one where there has to be a different technology, or something else that gets built in.
But I'm lost, man. I'm lost on how on earth we're gonna get from here to there. Like the Sam Altman quotes about AGI, and AI scientists next year, and stuff like that. I don't know. I guess I just don't have the technical background, the information, or something, but I don't know how we're gonna get there.
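The iterative rule-adding Alex describes for his landing-page GPT amounts to growing a list of standing constraints that gets prepended to every future request. A minimal sketch, with a hypothetical prompt format and the rules taken from his own examples:

```python
# Each bad output becomes a new standing rule; the system prompt is
# rebuilt from the full rule list before every generation request.
rules = []

def add_rule(rule):
    """Record a constraint learned from one bad output."""
    rules.append(rule)

def build_system_prompt():
    lines = ["You generate landing-page prototypes."]
    lines += [f"- Always: {r}" for r in rules]
    return "\n".join(lines)

# Two iterations of "no, we've got another rule"
add_rule("include a privacy policy link at the end of the form")
add_rule("include the required form fields")
print(build_system_prompt())
```

This is why it feels like software development: the human supplies every new constraint; the model never infers a missing one on its own.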
Ruthi Corcoran: Right. And what I'm thinking about as you're, as you're describing this, you are actively hitting the borders of this problem right now. Yeah.
Dave Dougherty: Yeah.
Ruthi Corcoran: You're hitting it. And myself, perhaps other users, haven't hit the borders of this in the same way, because when I'm using the tool, I go, oh, I'm trying to find this thing.
Tapping into General Knowledge
Ruthi Corcoran: The thing is already out there. Like, it exists. And so when I'm prompting and I'm sending out questions, or trying to play with things, not all the [00:24:00] time, of course, but a lot of the time I'm getting back really valuable stuff, because I'm tapping into the general knowledge that exists out there, which makes what you described function.
And so I go, oh, this is fantastic. This is helping, and this is working. Mm-hmm. Mm-hmm. Because the thing exists, and I'm just playing off of that. So it perhaps is an illusion in many cases, and I think that's what you're pointing out. It's a very helpful tool, right? But it in and of itself is not yet creating, or pursuing goals, as far as we can tell.
Alex Pokorny: Yeah.
Creative Connections in Interior Decorating
Alex Pokorny: Creative connections. You can see it also, for instance, with the interior decorating example of colors. Mm-hmm. The questions that it might be asking: you can kind of think of it as a kind of word map of all the words that are related to interior decorating, and the different directions you could go with it.
Dave Dougherty: Mm-hmm. Right.
Alex Pokorny: And it's trying to ask you questions because, wait, are we going in [00:25:00] this direction? Which takes me like further to the left. Are we going further down? Are we going further up and we're asking a couple questions to start narrowing. It in. And now we're gonna go look at, you know, this word cloud over here.
Oh, you don't like that word cloud. We're gonna shift to this little word cloud below it. You can kinda understand like where these questions are coming from. It's starting to push you towards the different kind of like nexus points of saying, oh, you're not a big fan of. Classic, you know, whatever style you're kind of looking for this coastal thing.
Oh, actually, you just like some of the colors of it, but you don't like the boat motif. So we're gonna get away from that. A little bit less nautical, but we're gonna keep some of the colors, the bright colors. And, you know, you can kind of see where it's starting.
Ruthi Corcoran: That up-down analogy is so helpful for me, because you run into these situations where you're just like, it just missed the boat entirely. Like, what even is this question? And yeah, the way that you've laid out, broad brush, here's how this works, here's the sort of word pairings that it's looking for and using to inform, [00:26:00]
Alex Pokorny: right? Mm-hmm.
Ruthi Corcoran: That's where it gets stuck
Alex Pokorny: or, and the, the, yes. Oh, go ahead.
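The word-map narrowing Alex describes can be pictured as points in a vector space, where each answered question moves the target and changes which style cluster is nearest. The styles and 2-D coordinates here are invented purely for illustration:

```python
# Styles as points in a made-up 2-D space: x is "coastal color palette",
# y is "boat motif". Each answer moves the target point, so a different
# style cluster becomes the nearest neighbor.
import math

styles = {
    "nautical":   (0.9, 0.9),  # coastal colors plus boat motif
    "coastal":    (0.8, 0.3),  # coastal colors, little boat motif
    "classic":    (0.1, 0.2),
    "minimalist": (0.2, 0.8),
}

def nearest(target):
    """Return the style whose point is closest to the target."""
    return min(styles, key=lambda s: math.dist(styles[s], target))

print(nearest((0.9, 0.9)))  # nautical
# Answer: "keep the colors, drop the boats" moves the target down
print(nearest((0.9, 0.2)))  # coastal
```

Each contextual question the model asks is, roughly, an attempt to locate the target point before committing to a cluster.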
Challenges with AI Conversations
Alex Pokorny: Oh, the other interesting thing about it is it tokenizes your entire conversation, and all of its responses so far.
So its conversation is constantly being guided, and you can sometimes see that with really, really long chats or conversations with any kind of AI tool right now. It gets stuck, and it's sometimes really hard for it to say, like, no, forget this, we're not going in that direction, we've already talked about that.
We're going in this direction. And then it brings it up again, and it's like, my gosh, how did we not get this moved forward yet? And that's the thing: it's taking this entire conversation into account, and then basically getting stuck in this particular part of, like, the word cloud map.
And it takes a long time to get it to say, no, break those connections, we wanna move in this direction. And sometimes it gets really frustrating with some of these AI tools. But the more you can see, like, oh, you [00:27:00] took a wrong turn and you're refusing to let go of it, I understand, like...
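The getting-stuck behavior Alex describes follows from how chat APIs work: they are stateless, so the whole message history is resent on every turn, and earlier wrong turns keep conditioning the reply. A sketch with a faked model call:

```python
# The entire history list is passed to the model on every turn; nothing
# is ever truly forgotten. The model call is a stand-in, not a real API.

history = []

def fake_model(messages):
    # A real LLM conditions on every message in `messages`, including
    # the ones you wish it would forget.
    return f"(reply conditioned on {len(messages)} prior messages)"

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = fake_model(history)
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Let's design a coastal living room.")
chat("Actually, forget the boat motif.")
print(len(history))  # 4: the "forgotten" motif is still in context
```

Saying "forget it" just adds another message on top; the old direction is still in the context the model reads, which is why long chats drift back to it.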
Dave Dougherty: Well, that's also... you talk about creative connections. Creativity in and of itself, or one definition of creativity that I've seen, is making unique connections between two unlike but similar things. Yeah. So if you have a model that is purely based off of probability, the creative solution would be the least probable solution.
Yeah, yeah. You know, so it takes somebody to go, you know what, I do want mauve and orange for this living room. I wouldn't personally do that, but more power to you and your, you know, red plastic glasses.
And, you know, it takes all types. And that's the wonderful thing about creativity, right? Because my version of creativity is in a certain lane, and it's different than yours, and it's different than [00:28:00] Ruthi's, but it's all, you know, collaborative and awesome. Yeah. Where the problem is, or my interpretation of where some of the problems are, right, is that it is promised as a general-purpose,
you know, technology, where we are trying to apply it generally, and sometimes it's working, right? Like, to your point, Ruthi, I'm in a hundred percent agreement with you. When I ask it questions where I'm like, just give me the best practices in this field that I'm not an expert in, and I just wanna be, like, kind of okay with my thought process in, you know,
a conversation with my boss about paying me more, right? Like, how do I approach that? Those kinds of things it is wonderful for, because then it's kind of crowdsourcing general knowledge and, you know, expert knowledge, and giving me [00:29:00] a good thing to think about and work through.
If I am trying to solve for how to make a less deadly chemotherapy thing, I don't need it to answer the HR problem, right? I need it to know about protein structures and radiation and whatever else goes into chemo. I don't know why I chose that, 'cause I'm totally not skilled and versed in any of that.
But, you know, you want that vertical thing. And, what was it, a year ago, a year and a half ago? It seemed like a lot of the news in the industry, and a lot of the conversations the three of us were having, was: okay, you have the general text, but then there are gonna be these vertical AIs. Like, you have your finance AI, and your, you know, banking AI, and your medicine and manufacturing AI, [00:30:00] and great, you know?
Yes. And that, right. But it takes specialists to know where those guardrails are and where those limits are, right? The fact that, you know, you have thought a lot about ethics and morality in your life allows you to question these things in a better way than a lot of Americans can.
Ruthi Corcoran: I'm trying to recall a podcast or something I was listening to. And one of the comments that the host had made was that, for the various, you know, academic-field experts he had spoken to, and I'm thinking about your chemo example here, one of the values they saw in these tools wasn't so much that it was an expert in their field, but
One of the values they saw in these tools wasn't so much that it was an expert in their field, but. They were able to use it to ask [00:31:00] better questions of themselves, to be able to solve,
Dave Dougherty: right.
Ruthi Corcoran: And so I think this is a really cool exploration of sort of what the limits are that we're seeing currently, or where it's not as good, and what it is good at.
AI as a Strategic Advisor
Ruthi Corcoran: And I've noticed this as well in interacting. My preference is for ChatGPT; I use Claude occasionally, and Google, you know, when it's embedded. You guys, I think, use Gemini more than I do. But one of the biggest value adds I see is having it improve some of the ways that I'm thinking,
and pushing back: have you thought of this, have you thought of this? And I know that we've touched on that one as well. Maybe it is tapping into the best-practices thing. Maybe it's because humans are good at asking questions, and so it's sort of picked up on that.
I'm not entirely sure, but that's a different type of value that I've seen. It's not the generative, it's not an automated workflow, it's not the sort [00:32:00] of pursuing goals and making decisions. It's the dialogue portion, which is a surprise. I didn't expect, when ChatGPT was first announced, that that would be one of the main value adds for me.
Dave Dougherty: Yeah, I have to say, I'm in total agreement with you on that, and on being able to use it as kind of a strategic advisor, or just that idea sounding board. It's saved you guys from getting so many late night texts. You know, like you don't even know how much these things have saved you. So, you know, but it's like, cool.
I don't have to annoy my friends anymore with these one-off ideas or, you know, here's a weird thought. Because I can either choose to explore them on my own or just, you know, leave them be.
Balancing Productivity and Creativity
Dave Dougherty: I'll throw this out there and you guys can choose whether or not we talk about it today, but I know for me, one of the [00:33:00] things that's sort of counterintuitive to all of this too is, I mean, we talk about using these things for productivity and, you know, we wanna be able to do this outcome based thing.
Like, Alex, your point of building the landing page thing, but then having to troubleshoot it so much. Because at that point, is it faster to just have built it yourself in the platforms that exist today? You know? Yep. So there's that portion of it. But because I have access to the sounding board all the time, or the deep research things all the time, I can go chase after whatever wackadoo idea I have, and then I actually have to put up guardrails for myself to prevent intellectual burnout.
I'm able to chase whatever ideas I want, or come up with whatever creative things I want, and so I have to actively go, you know what? Shut up. Go for a walk and listen to some music. Get away from all of this because you're about to spiral. [00:34:00]
Alex Pokorny: I feel that so hard, man. There are too many times where I'm like, okay, here's a situation I'm struggling with at work. I'm trying to pull this together, this kind of a thing and this kind of thing. And it gives me, okay, here it is. Here's the code you need to implement, and here are the next 15 steps that you need to do. And oh, you wanna do a book launch or something? Here are the 15 things that you need to do.
And underneath those, there are like five tasks each and all the rest. I'm like, great. Yeah. I need to go to bed now,
because it's like, that's... I would love it if it would just then do it. Mm-hmm. But still, at the end of the day, I've now been given a ton of direction that I didn't have to slowly figure out for myself step by step, going through the world's natural workflow of, you know, mm-hmm, decision trees: you come across this problem and that problem, you learn this, and you keep going through it. Like, you know, that's life, right? And instead it [00:35:00] gave me this gigantic rubric, which will take me, I don't know, 15 hours to complete.

Ruthi Corcoran: But the cool part is, okay, this is not answering Dave's question, but addressing Alex's situation: you learned that faster.
Alex Pokorny: Super fast. Super fast.

Ruthi Corcoran: I mean, you might not have gone down that route anyway. Yeah. Maybe you're exploring different roads because it's faster. Sure. So that's taking up additional time, but the key piece is, now you've learned, okay, this is what it would take for me to get there.
Do I want to take that, or do I wanna just say, no, that's good?
Dave Dougherty: Yeah.

Ruthi Corcoran: Or, some of the similar use cases that I've had: I'll start down a project where I kind of think I know what I need, but I'm not entirely sure. And it's like, all right, what would I need if I wanted to pursue this? Or, what exactly does this thing do? Is it valuable to me? Is it not? We'll break it down, and then I go, [00:36:00] oh, okay, I don't need that thing, I'm gonna move on to the next. Or, yes, I do, but I actually need this subset that I didn't even realize I needed. So there is that value add too, of being able to process ideas, and what it would take to execute on those ideas, faster.
And then what's frustrating is you're like, okay, can we just do it now? I don't actually wanna go through the steps. Let's just have it done.
Dave Dougherty: Mm-hmm. That'll work. Interesting.

Ruthi Corcoran: Not yet, by the way. Okay.
Dave Dougherty: Well, you have enough going on in your life that we don't have to talk about.

Ruthi Corcoran: I don't have enough time to spend that much time with these things. So I just haven't got the capacity to reach the burnout point. I'm just like, oh good, moving on.
Dave Dougherty: Yeah, it's interesting. So I was talking to a coworker the other day, and, you know, granted, this is one of those totally anecdotal things, so just take it as a good story. Again, just pitching an interesting idea to think about for later. [00:37:00]
So she was talking to a doctor friend, and they were saying that one of the nasty side effects of all the e-bikes and e-scooters and whatever else is that kids are getting more ankle injuries, because they haven't developed their calf and small stabilizer muscles enough. So you're seeing these Achilles sprains in younger and younger people.
So a total second order effect that you couldn't or wouldn't consider, right? That is now a thing. I wonder, with how quickly people can get to an answer, but then, to your point, Alex, being like, ugh, work, how quickly does that start impacting us at work? You know what I mean? Yeah.
Yeah.
Ruthi Corcoran: Hopefully quickly. And here's my argument for why: because [00:38:00] it cuts out a lot. It allows for the "I've got an idea, let's run with it," but you can learn faster,
Dave Dougherty: right?
Ruthi Corcoran: if this turns out to be not a good idea, or it turns out to be a much bigger lift than I anticipated,
Alex Pokorny: right? Yeah, I agree, to the same point. I've talked about some of the projects that I've been working on, but the amount of effort that it would take to resolve them completely, I can learn about very quickly, ahead of time, and start to understand, like, oh, this is gonna take a lot. I had this idea, this concept, and it's like, you know, if you have a random small business idea or something like that, cool, right? You've got an idea, but the tough part is never the idea and the concept, it's always the workload. It's a question of how much effort is this going to take? Right. So to be able to do those explorations, those sounding boards, advice, troubleshooting, you [00:39:00] name it. Absolutely amazing. I mean, no doubt on that one. Mm-hmm.
Enhancing AI Creativity
Alex Pokorny: I wanted to get back to a side topic we hit, because we talked about creativity a little bit, and there are actually methods and ways to make ChatGPT and the other ones become more creative. So I wanted to mention that quickly. There are kind of two main methods.
One is continually asking it, so that your conversation starts to go in that direction. So: give me a more creative example, give me a different example, give me something more unique. And it'll start to actually broaden itself out. There's a whole top-k thing and all the rest, but it's basically about which words are close to the cluster that it's at.
Think of that like a word map kind of thing, right? Temperature, which is the technical phrase, basically controls how tight the circle is around which words it's going to include. And with any of them, you can ask what temperature it's currently using and which one you should use to make a more creative solution.
Right? And you can manually switch it [00:40:00] for that conversation to a different number. So if you go to like 1.1, you're gonna get some really creative stuff. It's basically out of one, so it's 0.7, 0.6, 0.8, that kind of thing. Mm-hmm. And it usually sits around like 0.9 or something like that.
But once you get just a couple points above, now you've got crazy stuff. So let me give you two examples of how this works out.
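The temperature mechanic Alex is describing can be sketched in a few lines of Python. A language model scores every candidate next word, and dividing those scores (logits) by a temperature before the softmax is what widens or narrows the "circle" of words it will pick from. This is a minimal illustration with made-up numbers for hypothetical candidate words, not how any particular chat product exposes the setting:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw next-token scores into probabilities, scaled by temperature."""
    # Low temperature sharpens the distribution (safe, predictable picks);
    # high temperature flattens it (more surprising, "creative" picks).
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate words after "have a picnic at the..."
logits = [4.0, 2.0, 1.0, 0.5]  # park, beach, museum, sun

cool = softmax_with_temperature(logits, temperature=0.5)
warm = softmax_with_temperature(logits, temperature=1.5)

# At low temperature "park" dominates; at high temperature odd options
# like "sun" get a real chance of being sampled.
print(f"park at T=0.5: {cool[0]:.2f}, at T=1.5: {warm[0]:.2f}")
# → park at T=0.5: 0.98, at T=1.5: 0.67
```

This is why a small nudge above 1.0 can tip a story from "picnic at the park" into glass Jedi witches: the probability mass spills rapidly into the long tail of unlikely words.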
Fun with Creative AI Scenarios
Alex Pokorny: 'cause have I ever told you about my Darth Vader ducks?
Dave Dougherty: Publicly, you probably shouldn't, because that mouse has teeth, and floors of lawyers.
Alex Pokorny: Well, blame Chuck.
Ruthi Corcoran: This falls into parody territory. I think you're good.
Alex Pokorny: It definitely is. So, two situations that kind of show both methods. One was, again, during this kind of text adventure thing. It was a Star Wars based one. So I did a bunch of, you know, random, like, Jedi beating the Sith, whatever kind of thing.
It was like, great, that's fun. Give me your more unique story. And it kind of gave me a little [00:41:00] bit more, and I was like, no, no, no, no. Gimme a more unique story. And it kind of gives me another scenario, another narrative. And eventually we got to this point where it said there was a disturbance in the Force at the zoo.
Because there's a bunch of ducks that are parading around with little glow sticks as lightsabers, and they even have little black cloaks, and they're currently worshiping a duck that possibly has Jedi powers, and you have to go investigate it. And I was like, that is hilarious. Yes, all about it. So eventually this became my troupe of little characters. I can't remember what I called them, it was like the Dark Fuzziness or something like that. And I would send them on missions, and it would be hilarious, because you get these super critical ethical decisions, and I was like, no, no, I'll throw the ducks at it, and then you get a bunch of fuzzy ducks waddling around with their little glow stick lightsabers.
It's adorable. But what that was, was an extremely long tokenized conversation. It was basically taking all of those elements, saying, okay, where are you in this world, right? You're asking for more and more [00:42:00] unique, so it's doing little shifts, because you've basically narrowed. Think of like a funnel: I'm not at the top of the funnel, I'm darn near the bottom of the funnel, and it's just making little shifts around in the bottom of that funnel. A very, very small circumference that it's working with. Mm-hmm. Pause on that one. New scenario. I said, okay, you know, that was fun.
You know, weeks have passed. I was like, hey, let's do another Star Wars role play, gimme something unique. And I started messing with the temperature on it, just to try to get it to go more unique, because it was really struggling to gimme something like that. And I was like, man, that was so fun.
Like, why can't we just pick that back up again? So I switched the temperature just a little bit above, to make it a little bit more creative, and I got Jedi witches who are made of glass, and you land on a planet where the trees change the time based upon the words you use. And I was like, whoa, whoa, whoa, whoa.
We have gone too far here. Like, rein it back. And it couldn't, it just simply couldn't go from all the way out at, [00:43:00] as you were saying, wackadoo, back to something much, much more normal in between.

Dave Dougherty: And you just have to be very careful with your enunciation on glass witches.

Alex Pokorny: Which are made of glass.

Dave Dougherty: Glass.

Alex Pokorny: How about that?
Oh my gosh, it was so bizarre. But that's a piece of it as well. And Ruthi, you were saying before about how it's learning, how it's kind of focusing in: if you have a conversation that allows it to focus in more, and then allow it to be a little bit more creative and push it in different directions, you can usually get a little bit more of a controlled result.
But if you start out from all the way back at the front, I mean, it goes between, you know, what to do on a Saturday, and it says go have a picnic at a park. And I was like, more creative. Okay, have a picnic on the sun. And I was like, okay, somewhere in between? The sun seems to be a bit hot. Exactly.
Dave Dougherty: like, [00:44:00]
Ruthi Corcoran: It's like the temperature control on faucets, you know? And this is a problem, right? It's either too cold or too hot, and there's the narrowest little band in which you actually want it to be. And it's exceedingly frustrating. We haven't even solved the faucet problem. Yeah. And then you get
Dave Dougherty: the one friend who has nerve damage and doesn't feel temperature in their hands, and they use it before you do and forget to shut it off. And then, you know, you walk around with a bandage for a week.

Ruthi Corcoran: It sounds like a personal problem.
Dave Dougherty: Well, not really, but you know,
Ruthi Corcoran: Oh my goodness, this is such an example. And also, you know, I would love to see some of the prompts you used for these role plays in Pathways, because I am like, ready to go. I'm like, sign me up. This sounds really fun.
Dave Dougherty: Yeah, no, it's been fun. [00:45:00] Okay, well, that was an interesting conversation. I had no idea where we would end up. I did not expect Darth Duck, but I'm happy we got there. So yeah, shout out for Pathways, check that out. And thank you for making it this far into the episode. Like, subscribe, share, it helps us out, and share it with anybody else who will enjoy these kinds of conversations, or just an exploration of what's even possible with these things, right?
Yeah, let us know. We appreciate it. We'll see you next time, on the next episode.