Episode 22: ChatGPT with Jason Gulya

In this episode, I speak with Jason Gulya, who has been on the front lines of ChatGPT in higher education. For the show notes and transcript, visit my website at http://www.calebcurfman.com

Transcript:

1
00:00:01.070 –> 00:00:29.879
Caleb Curfman: All right. Well, welcome, everybody. I’m excited to be here with somebody who started popping up on my Twitter, and then started popping up on LinkedIn, and is kind of, you know, taking this world of ChatGPT by storm. And that is Jason Gulya, who I get to speak with today. And so, as we’re getting started, would you be willing to share just a little bit about, you know, your experience, what you’re doing, and kind of how you

2
00:00:29.990 –> 00:00:33.679
Caleb Curfman: came into this work around ChatGPT.

3
00:00:34.940 –> 00:00:43.250
Jason Gulya: My experience has been sort of strange. I’ve had this very zigzagging history which somehow got me to AI,

4
00:00:43.360 –> 00:00:55.800
Jason Gulya: and I’ll walk you through it a little bit. I do think that there are weird ways in which connections start to pop up. So my PhD is actually in seventeenth- and eighteenth-century British literature,

5
00:00:56.280 –> 00:01:10.100
Jason Gulya: so very far afield. Nothing to do with AI. I didn’t study technology. I didn’t really have, especially early on, any connection to ChatGPT, which I started to get interested in. So

6
00:01:10.270 –> 00:01:21.510
Jason Gulya: I got my PhD in seventeenth- and eighteenth-century British literature, and I actually studied allegory. That was my focus. I went to school for that. I got my PhD in it, and

7
00:01:21.510 –> 00:01:49.880
Jason Gulya: then ChatGPT came out. So, I teach literature, and I teach writing. I teach the humanities. And when I got really interested in ChatGPT, when it came out at the end of November, it was only months later that I started to kind of connect these 2 interests. Because one of the things that drew me to the eighteenth century, or the Enlightenment, or whatever you want to call it, was that it was basically mass upheaval. There were these huge seismic shifts, when people

8
00:01:49.880 –> 00:02:14.879
Jason Gulya: had to learn to think about the world in a certain way. So, for a lot of the literature that I study, during that time period they didn’t have germ theory. A lot of them actually didn’t have the models for understanding the universe that we have today, and a lot of those came out over the course of the period I studied. And so now I kind of see the similarity between what was happening there and why I’m so interested in ChatGPT

9
00:02:14.880 –> 00:02:22.630
Jason Gulya: and AI in general, as a way to rethink what we do, and also rethink how we think. I think there is that meta aspect

10
00:02:22.630 –> 00:02:28.379
Jason Gulya: that, I hope, is starting to happen: becoming more aware of how we actually understand the world. And then

11
00:02:28.550 –> 00:02:54.660
Jason Gulya: so ChatGPT came out in November. I had started to get interested in AI-assisted writing probably a couple of months before; I had started to create a unit around it. So I teach composition, and I also teach advanced composition, and in that course I do a lot with technology, and I have for years. I did an entire unit on robots, and that started to get me into AI-assisted writing. And then in November,

12
00:02:55.210 –> 00:03:04.999
Jason Gulya: I saw this program, and I’m going to tell you my initial reaction, which I am not proud of. I

13
00:03:05.390 –> 00:03:18.239
Jason Gulya: came across ChatGPT. And I played with it as soon as I got access to it, and my wife was across from me on the other side of the table, and I said to her, I just found the most awful thing

14
00:03:18.280 –> 00:03:34.860
Jason Gulya: that every student, if they want to get an A in their college courses, can use. They can just cheat their way through the system. And this is the most awful plagiarism machine I’ve ever seen. That was my immediate response. And then

15
00:03:34.880 –> 00:03:50.590
Jason Gulya: I eventually got past that knee-jerk reaction and started really playing with it and really trying to understand it. And that’s what got me into it. And I’ve used social media, Twitter a little bit and mostly LinkedIn, to kind of

16
00:03:50.640 –> 00:04:13.709
Jason Gulya: not just get my ideas out there, but really force myself to work through them. I use them as a kind of accountability machine, trying to get myself to, you know, create some ideas. And now I teach it all the time. It’s worked into all of my courses in some way or another. And now I do so much talking with faculty members, and I also started to dabble with

17
00:04:14.190 –> 00:04:32.809
Jason Gulya: activities that teach me a little bit about the spectrum. So today I taught AI to students ranging from 14 years old to 17 years old, and having that sort of shift, moving from college into that, even if just for a little while, has really helped, too, because I think that there are these

18
00:04:32.900 –> 00:04:53.599
Jason Gulya: huge differences in terms of how we approach this kind of technology. But that’s my kind of long-winded backstory, getting me from the Age of Enlightenment to ChatGPT to AI. And I do think that they’re weirdly connected, even though the connection was totally subconscious at first.

19
00:04:53.600 –> 00:05:04.559
Caleb Curfman: I definitely see that connection. And I am definitely going to use that when we start talking about the Enlightenment and how these things change. Because, yeah, you’re right. It was a

20
00:05:04.560 –> 00:05:21.419
Caleb Curfman: scary but exciting time in world history. And again, I think we could say the same now. Many people start with that fear, and then they’re starting to realize, what can we actually do? And so I appreciate that you walked through that process. I don’t think...

21
00:05:21.420 –> 00:05:39.879
Caleb Curfman: Well, I know you are not alone with that original knee-jerk reaction. I think that’s how we all felt. And then it becomes: what do we do now? And so, for faculty that are getting ready, you know, when we’re recording this, we’re just a few weeks away from the fall semester starting, for many of us.

22
00:05:40.410 –> 00:05:50.189
Caleb Curfman: What are some things that we should be looking for, or ways we could be thinking about embracing this, as we move into the fall semester?

23
00:05:51.620 –> 00:06:19.339
Jason Gulya: I would say the first step, regardless of where you’re coming from and what your own personal beliefs and investments are, is, if you haven’t already, make sure you’re playing with it. I think that we’re sometimes all victims of just wanting to jump in and have a policy, and I totally understand liking to have a firm policy from the very beginning, but I don’t think we can do that until we really start playing with it. Kind of

24
00:06:19.340 –> 00:06:27.949
Jason Gulya: work with it enough to understand the nuts and bolts, at least a little bit, right? You don’t have to know everything. You don’t have to know what’s happening on the back end, but

25
00:06:28.220 –> 00:06:37.399
Jason Gulya: really trying to be creative with how we can work AI into our classrooms. And I think, at least for me and for a lot of us.

26
00:06:37.410 –> 00:06:39.939
Jason Gulya: That’s the way forward. I

27
00:06:40.010 –> 00:07:04.939
Jason Gulya: am, at the heart of it, a very practical person. I do, you know, have these huge moral ideas, these huge generalizations that I live my life by, principles I live my life by. But in the end, a lot of it just comes down to, you know, brass tacks: how do we move forward in a world where our students will be using it? And now AI is getting worked into all these different programs. It’s not just ChatGPT,

28
00:07:04.940 –> 00:07:25.469
Jason Gulya: but it’s popping up in Google Bard, and that’s getting worked into their whole system. It’s already in my Gmail; when I go there, I have that “help me write this.” It’s in my Google Docs. Students are going to start seeing it, and they’re gonna start clicking on it. And Microsoft is going to work it into their entire suite, too. So, just very practically speaking, I think that we need to experiment with it.

29
00:07:25.470 –> 00:07:26.710
Jason Gulya: Think about

30
00:07:26.900 –> 00:07:36.500
Jason Gulya: how we can encourage our students to really live in that world, because for me, a lot of it comes down to an ethical obligation.

31
00:07:36.610 –> 00:08:03.970
Jason Gulya: I think, as faculty members, we are ethically obligated to prepare our students for that world. And that kind of helps me whenever I feel like I hit a rut and I’m just not sure how to move forward; remembering that helps me out, kind of pushes me to try to be innovative. And I would say, just play with it, find use cases, and collect them. And I’m, I’m trying to... Sorry, go ahead. Oh,

32
00:08:04.470 –> 00:08:05.280
yeah.

33
00:08:06.350 –> 00:08:10.539
Caleb Curfman: So, one thing I wanted to pause on there, because

34
00:08:10.700 –> 00:08:30.949
Caleb Curfman: you bring up the important point that this is the world that students are going into. And so, early on, we know of cases where entire college systems were saying, no, we can’t have this, we can’t use this. But I like how you frame it as this kind of ethical responsibility, that

35
00:08:30.950 –> 00:08:42.399
Caleb Curfman: it’s not good to keep people away from this. Because when they go out into, and you know I hate the term, but the real world of, you know, whatever their job ends up being, wherever they end up going,

36
00:08:42.409 –> 00:08:49.250
Caleb Curfman: they’re going to be interacting with this all the time, right? And so we want to make sure

37
00:08:49.370 –> 00:08:59.470
Caleb Curfman: we’re not becoming this idea of: no, you can’t, let’s restrict everything. Let’s find ways to use it. And you mentioned the idea of use cases

38
00:08:59.610 –> 00:09:13.349
Caleb Curfman: and understanding how to work with this. And so that’s kind of where I want to go next. So now, hopefully, we start by playing with it, using it a little bit. But how can we engage our students with it

39
00:09:13.570 –> 00:09:18.299
Caleb Curfman: in ways that maybe we couldn’t have without it, or maybe not as easily without it?

40
00:09:19.490 –> 00:09:28.200
Jason Gulya: One of my favorite things to do now with AI, especially because ChatGPT allows us to share links to our chats,

41
00:09:28.220 –> 00:09:44.839
Jason Gulya: is, I actually have my students have dialogues with ChatGPT and share the entire conversation with me. And early on, what I would do for my composition courses, since they are just kind of learning what an argument is,

42
00:09:44.860 –> 00:09:58.980
Jason Gulya: is, early on in the semester, I ask them to go into ChatGPT, and I give them a prompt, and it’s about a page long, right? Most students don’t read it; they just know to plug it in. And

43
00:09:59.010 –> 00:10:23.789
Jason Gulya: what the prompt does is, it asks ChatGPT to act as a counter-arguer, and I give it very specific things that it should be doing. And basically what I’m doing is, I am asking ChatGPT to be a really, really, really smart creator of a counter-argument that really pokes holes in their ideas. And I ask students to run that prompt,

44
00:10:23.790 –> 00:10:30.350
Jason Gulya: and again some of them look at it, some of them don’t. But then they have a conversation with

45
00:10:31.000 –> 00:10:46.840
Jason Gulya: ChatGPT. And I learn so much about how students think through that. I learn more by reading that dialogue than I do reading an entire paper, because it really lets me see how

46
00:10:46.840 –> 00:11:06.350
Jason Gulya: one of my students reacts when you have a really, really smart interlocutor that has a ton of information that it’s working with, and is very good at finding holes in your argument and kind of poking at it, as long as you prompt it in a certain way. And that’s something that I do to help my students gain

47
00:11:06.360 –> 00:11:08.820
Jason Gulya: an understanding of argumentation.

48
00:11:09.070 –> 00:11:17.179
Jason Gulya: and also just what it means to have a dialogue with a machine like this. And sometimes what I notice is.

49
00:11:17.410 –> 00:11:41.990
Jason Gulya: I assume that a lot of my students, because of their age and because of the technology they grew up with, come in with a degree of AI literacy already. And it’s just not true. There is a huge range in how students interact with ChatGPT. Some of them are totally game, and you can see that they’re reading the outputs, and they’re actually interacting with it and poking holes in ChatGPT’s own logic, which is what I want them to do, ideally,

50
00:11:41.990 –> 00:11:58.320
Jason Gulya: and some of them give up. They just throw in the towel very, very quickly. Or some of them misunderstand the exercise, so they don’t actually have a conversation with ChatGPT; they just give it another question. So they’re effectively just

51
00:11:58.320 –> 00:12:22.719
Jason Gulya: shifting gears to something else, and I try to coach students through what it would mean to use ChatGPT and actually have a conversation like that. So that’s one activity that I do that has proven very, very helpful, especially as a way to just encourage students to start thinking about how to work with AI, and what AI is, and how it actually functions.

52
00:12:22.870 –> 00:12:28.049
Caleb Curfman: Yeah, I like that, because it highlights some of

53
00:12:28.270 –> 00:12:43.150
Caleb Curfman: the good uses that aren’t necessarily product creation. And by product, I mean assignment, right? When we look at things, one thing that I’ve done to kind of change my focus as an instructor is to really

54
00:12:43.160 –> 00:13:12.780
Caleb Curfman: focus on the process more so than the final project, or whatever it happens to be, right? And so this is a way to help strengthen an assignment. I really like that. Right? In so many of our disciplines, we are looking at questions, at critical thinking. It’s like we all have a Socrates in our pocket who can go through these with us, right? And so I really like that form of questioning and interacting in that way, because

55
00:13:13.480 –> 00:13:26.809
Caleb Curfman: it’s one thing to ask it a question and get an answer. It’s a whole other thing to engage with a chatbot as a chat, you know, and actually get that going. And so, you mentioned

56
00:13:27.060 –> 00:13:30.670
Caleb Curfman: that you create the prompt for that one?

57
00:13:31.000 –> 00:13:44.010
Caleb Curfman: what kinds of things should we be looking at as instructors, if we want to do something similar? What types of things should we do to create a prompt? Because I know

58
00:13:44.230 –> 00:13:47.089
Caleb Curfman: AI is only as good as the prompt we give it.

59
00:13:49.500 –> 00:14:07.579
Jason Gulya: Yeah. And I would say, just create something. So if you have an idea, I would say, start with the purpose, whatever you want students to get out of it. So take the learner’s perspective. Really try to think about what you want them to come away with. And the other distinction, which you already mentioned, is, you know,

60
00:14:08.080 –> 00:14:32.289
Jason Gulya: what are you going to focus on? Are you going to focus on the product or the process? So for that one, it is straight up: I want the process, right? I want to be able to see it. Another one might focus on the product. Right? You might say you’re going to use ChatGPT to write an email, and the only thing that you’re going to evaluate is the end product. However students get there, it doesn’t matter, as long as they’re working with it, and then that’s what you’re looking at. That’d be another way to kind of

61
00:14:32.630 –> 00:14:55.439
Jason Gulya: do it. And I would go through and start with that purpose and work your way backwards. I’m a big believer, when we create prompts, in backward design from instructional design. You know, a lot of teachers use it, even if they’re not in that field. But try and figure out what can actually get you there, and build the prompt backwards from there. Right? So,

62
00:14:55.770 –> 00:14:57.759
Jason Gulya: And I know we have all these different

63
00:14:58.140 –> 00:15:04.910
Jason Gulya: prompt guides and paradigms that we use. So the one that I personally use is

64
00:15:04.910 –> 00:15:34.039
Jason Gulya: CIDI, so context, instructions, details, and then input. So I start with the purpose, and I work my way back, right? What input can be put in there to end with whatever that purpose is, whatever that goal is? Right? Then the details, then the instructions, and then finally, what role. And that kind of helps me out. And then really just testing it out, running it on your own, seeing how it feels, seeing if it actually gets there. Because

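The CIDI structure Jason describes (context, instructions, details, input) can be illustrated with a small sketch. This is a hypothetical example, not Jason’s actual classroom prompt: the function name, section wording, and sample text below are all invented for illustration.

```python
# Hypothetical sketch of assembling a CIDI-style prompt
# (Context, Instructions, Details, Input), working back from a purpose.
# The wording here is invented; it is not the actual classroom prompt.

def build_cidi_prompt(context: str, instructions: str, details: str, user_input: str) -> str:
    """Join the four CIDI sections into a single prompt string."""
    return "\n\n".join([
        f"Context: {context}",
        f"Instructions: {instructions}",
        f"Details: {details}",
        f"Input: {user_input}",
    ])

# Purpose: have the model act as a smart counter-arguer for a student thesis.
prompt = build_cidi_prompt(
    context="You are a skilled debater helping a first-year composition student.",
    instructions="Read the student's thesis and reply with the strongest counter-argument you can.",
    details="Keep the counter-argument under 150 words and end with one probing question.",
    user_input="Thesis: Colleges should require a course on AI literacy.",
)
print(prompt)
```

Running the prompt, tweaking a section, and running it again is the iteration loop described in the conversation; the behavior drift mentioned below means the same prompt may need re-testing over time.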
65
00:15:34.640 –> 00:15:41.410
Jason Gulya: you can feel really, really good about what you think a prompt is going to give you, and then you run it, and it completely

66
00:15:41.680 –> 00:16:06.399
Jason Gulya: flops, and it just doesn’t work very well. And we also have, now that ChatGPT has been around for a little while, we’re starting to see a behavior shift, or behavior drift, sorry, so that a prompt that might have given you a certain output, like, 5 months ago might give you a different output now. So we want to make sure that we’re continually going in there and seeing what we get out of it. And then there are a lot of platforms,

67
00:16:06.400 –> 00:16:13.540
Jason Gulya: or a lot of paradigms that we can use to focus on how to craft these prompts better. And again, I think it’s just iterating

68
00:16:13.540 –> 00:16:16.910
Jason Gulya: and seeing what you get as you run it again and again.

69
00:16:17.280 –> 00:16:18.930
Caleb Curfman: Yeah. So

70
00:16:19.190 –> 00:16:39.490
Caleb Curfman: I really like this, because this is giving a very practical idea of what we can do as an activity, introducing it, bringing it out there. But, you know, we can’t have a true, strong conversation about ChatGPT without realizing that there are some negatives

71
00:16:39.570 –> 00:16:44.980
Caleb Curfman: to this as well. And one of the things that I’m curious about is

72
00:16:45.170 –> 00:16:59.769
Caleb Curfman: when you’re looking at your class and creating your class, a lot of places are trying to develop a good syllabus statement, or some sort of way of determining:

73
00:16:59.770 –> 00:17:18.669
Caleb Curfman: how is ChatGPT, or other AI, going to be used in a class? How are we going to do that? Would you be willing to share maybe some considerations we should have as faculty when we’re creating something like that, so that we aren’t restrictive, but we also

74
00:17:19.060 –> 00:17:21.259
Caleb Curfman: are not leaving the Wild West open.

75
00:17:22.710 –> 00:17:25.149
Jason Gulya: Yeah, and actually

76
00:17:25.440 –> 00:17:48.510
Jason Gulya: I coach a lot of faculty as they do this, because it’s the big question: what kind of policy do we need to have? And institutions are very different. My institution right now is just handing it to professors and saying, you get to design and stick to your own AI policy. And other colleges and universities are taking a very different line, where they’ll have more college-wide

77
00:17:48.510 –> 00:18:04.190
Jason Gulya: policies. And I think that there are pros and cons to either. And I talk to a lot of faculty who are just very overwhelmed, because when you’re told, come up with an AI policy, it’s very hard to figure out where to begin. I would just start, you know, very, very simple:

78
00:18:04.310 –> 00:18:06.000
Jason Gulya: Can students use it?

79
00:18:06.960 –> 00:18:22.630
Jason Gulya: I’d start right there: can students use it? And then you build from your answers to that question. Can they use it? How can they use it? And why? Right? So if you get to the point where, say, you’re a faculty member who decides that

80
00:18:22.690 –> 00:18:46.069
Jason Gulya: students cannot use any ChatGPT for any of their writing, okay. Say why. Be very, very explicit. If you think that students aren’t learning when they use ChatGPT, whether I agree with you or not, right, give them that answer. I think that if you don’t have that why, it’s not going to work very well, because then I think it comes off as just a

81
00:18:46.240 –> 00:19:04.319
Jason Gulya: kind of arbitrary blanket rule: no one can use it because I said so, right? We want to really, really avoid that as much as we possibly can. And I think that the how questions and the why questions are how to get there. And so, you know, really starting with, you know, how can students use it? And then

82
00:19:05.110 –> 00:19:32.100
Jason Gulya: another question, as we kind of just continue to unravel this, is: how will students know the proper use of it? Right? Because it’s not just going to be on a syllabus, and that’s going to be it. They’re going to need a system, at least I think, that reminds them of responsible uses and ethical uses. So one of the things that I do on my course syllabus is, I tell my students that I’m going to have a scale for them.

83
00:19:32.650 –> 00:19:53.860
Jason Gulya: So, a scale from one to 5. One is no AI, right? It’s totally, totally your thing. And a 5 is AI-reliant, right? The output is created by AI. And all along the way, and I put this in the syllabus, I will let them know where I want them to be. If they’re doing rewriting,

84
00:19:53.860 –> 00:20:13.939
Jason Gulya: it probably should be a one, right? It’s not really that useful to have it be a 4 or 5. But there might be some assignments that actually are a 5, right? If you want to ask students to create something with ChatGPT, and you’re just going to look at the output, you’re not going to look at the process, you’re not going to do anything like that, then it might be a 5.

85
00:20:13.950 –> 00:20:20.449
Jason Gulya: Other than that, most things will be in the middle, right? And they’re all guidelines; we’re all different with these. So,

86
00:20:20.770 –> 00:20:41.949
Jason Gulya: if I am asking students to do a rough draft, I don’t think rough drafts should be totally handed off to the AIs, because that’s kind of where we tinker with our ideas and figure out what we actually believe. So for me, a rough draft is probably more like a 2, maybe a 3, right? For using it to supplement our ideas. And I tell my students that

87
00:20:41.950 –> 00:20:50.889
Jason Gulya: I want them along the way to ask me if they ever have any questions about where they should be on the scale, and I try to really, really stick to it, because

88
00:20:50.920 –> 00:21:05.109
Jason Gulya: that becomes a conversation in and of itself about how we can use this technology. And, and this is sort of off to the side of this question, but related to it, I also try to give my students a chance to pick their own number,

89
00:21:05.540 –> 00:21:27.630
Jason Gulya: right? So, if you wrote this, where was it on the scale, and why? Right? If you felt you could do it well with, you know, a 3 on the scale, or a 4 on the scale, or even a 5 on the scale, why did you think you were able to do that? So it allows them to kind of be a little bit more meta in terms of thinking about their own process. And I would say, with the syllabus,

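The one-to-five AI-use scale described in this part of the conversation can be sketched as a simple lookup table. Levels 1 and 5, and the rewriting, rough-draft, and email examples, are paraphrased from the episode; the wording of the middle levels and the per-assignment mapping are invented interpolations for illustration, not Jason’s actual syllabus.

```python
# Sketch of the 1-to-5 AI-use scale from the conversation.
# Levels 1 and 5 are paraphrased from the episode; levels 2-4 are
# invented interpolations, and the assignment mapping is hypothetical.

AI_SCALE = {
    1: "No AI: the work is totally the student's own.",
    2: "AI to supplement ideas: brainstorming or light feedback only.",
    3: "AI-assisted: the student drafts, AI helps revise.",
    4: "AI-heavy: AI drafts substantial portions, the student edits.",
    5: "AI-reliant: the output is created by AI; only the product is evaluated.",
}

# Hypothetical per-assignment expectations an instructor might publish.
assignment_levels = {
    "rewriting exercise": 1,       # "probably should be a one"
    "rough draft": 2,              # "more like a 2, maybe a 3"
    "email written with ChatGPT": 5,
}

for assignment, level in assignment_levels.items():
    print(f"{assignment}: level {level} -> {AI_SCALE[level]}")
```

A table like this also supports the reflection exercise Jason mentions: students can pick and justify their own number for each assignment.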
90
00:21:27.630 –> 00:21:41.000
Jason Gulya: again, avoid anything that has the feel of “because I said so,” or “because this is how we’ve always done things.” And I would also recommend telling students that you’re learning, too,

91
00:21:41.140 –> 00:21:51.540
Jason Gulya: right? Emphasizing that this technology hasn’t been around for that long, especially in the whole scheme of things, and that we are all trying to figure out

92
00:21:51.790 –> 00:22:05.390
Jason Gulya: big questions: how to use this technology ethically, how humans even fit into a world where AI is built into more and more things. And I think being very, very clear with them

93
00:22:05.600 –> 00:22:17.789
Jason Gulya: that those are the kinds of questions that you’re thinking about, and that you’re experimenting and really trying to do good by them. I think that opens up a dialogue even more. And I think that’s exactly what we want with the syllabus.

94
00:22:17.880 –> 00:22:40.170
Caleb Curfman: Yeah. And that’s exactly right, the idea of the dialogue. Let’s have a conversation, let’s continue that conversation. Because it’s one of those things: you know, the more transparent we can be as instructors, the better it’s going to be for everybody. And I had an experience where I was using it this spring with students, where we just,

95
00:22:40.180 –> 00:23:04.599
Caleb Curfman: the assignment, I’ve talked about it before, so I’m not going to rehash the whole thing, but essentially I asked AI to create the perfect government. It was in a comparative governments class, and then we had kind of a Socratic dialogue with it. And, you know, there are so many ways we can do that. And so, even for those faculty, or colleges, or campuses that may not

96
00:23:04.660 –> 00:23:33.019
Caleb Curfman: want to encourage or allow or require students to use it themselves, I would definitely encourage the use in the classroom. Find ways to get it involved, and use that. And so that’s why I kind of like where you’re going there: there are different levels of how much to use it, right? And how do we use it? Do we use it for the product, or do we use it kind of on the side? And I think, specifically from your discipline,

97
00:23:33.020 –> 00:23:45.120
Caleb Curfman: it’s very interesting. I would not have necessarily, like you talked about, this whole thing coming together, I would not necessarily have pegged somebody that teaches writing as somebody that’s going to be such a strong proponent of

98
00:23:45.120 –> 00:23:56.740
Caleb Curfman: using AI. But now it makes more sense, as you’ve kind of explained your journey with it. And so, for those of us who are on this journey,

99
00:23:57.160 –> 00:24:02.519
Caleb Curfman: you have a great tip at the beginning: just try it, get used to it.

100
00:24:03.510 –> 00:24:22.660
Caleb Curfman: What kind of advice would you have? So, you know, we’re almost there right now. Some people are maybe just coming back, and, you know, now we’re learning about how much further it’s come. Because we started out in November, and if you haven’t really looked at it even since May, there are a lot of changes.

101
00:24:22.890 –> 00:24:41.399
Caleb Curfman: And so, what is probably the most important thing? And I know this could be an hour long, but what is the most important thing that you could think of, where ChatGPT is right now, today, on August seventh of 2023, just to be clear? If you’re listening to this in a year, it’s probably not as important. But

102
00:24:42.150 –> 00:25:01.429
Caleb Curfman: what is the most important thing that we need to know about this at this moment, that may be different from what has been in the last year? What’s improving the most? What do we need to be mindful of? And maybe that’s not the correct question, but I don’t know, maybe there’s some sort of response for that.

103
00:25:02.930 –> 00:25:10.009
Jason Gulya: Yeah, there’s a lot. The technology has really changed. But

104
00:25:10.490 –> 00:25:13.679
Jason Gulya: a lot of the black boxes are still there.

105
00:25:14.040 –> 00:25:25.660
Jason Gulya: and one of my tips for faculty members, especially if they haven’t really, you know, maybe they played with it, or they started to play with it and kind of abandoned it,

106
00:25:25.870 –> 00:25:33.799
Jason Gulya: but I think for most of us there will be some people who are avidly against it, and some people are avidly for it. But

107
00:25:34.200 –> 00:25:48.030
Jason Gulya: in the end the most important thing for me and for students is that we’re moving forward with both of our eyes wide open that we are looking at both the positives and the negatives of this technology

108
00:25:48.230 –> 00:25:59.170
Jason Gulya: as the technology has advanced, and it has advanced a lot over the last 6 months, the black boxes are still there. There are still these giant

109
00:25:59.560 –> 00:26:19.280
Jason Gulya: boxes where we just don’t know what’s going on on the other side. We don’t have access to the algorithms. We still have to sort of guess what ChatGPT actually does with information when we put it in there. We actually don’t know. We have some guesses, and we know a little about the training data, but not a lot. And

110
00:26:19.710 –> 00:26:36.070
Jason Gulya: when they released GPT-4, they actually doubled down on the black box. What they basically said was, we’re gonna make it harder to figure out what’s actually happening behind the curtain, or in the black box, whatever metaphor you want. And

111
00:26:36.070 –> 00:26:59.539
Jason Gulya: our students, I think, really need to recognize that, and faculty members need to recognize that: this technology is a tool. It is an extension of humans. It is an extension of our collective biases and hatreds. And it is very much a part of our history. And it is not

112
00:26:59.540 –> 00:27:16.109
Jason Gulya: something that was, I mean, it’s artificial, but it’s modeled off of us and the Internet that we created. And I think that’s so essential. And I think that’s the big thing I want students to get, and faculty to get: that

113
00:27:17.030 –> 00:27:22.600
Jason Gulya: if we have a chance to shape this technology. It’s now.

114
00:27:22.860 –> 00:27:36.959
Jason Gulya: I think if we wait 5 years, or even 2 to 3 years, I think it’s going to be too late. If we don’t dig deep and start to shape things how we want to shape them, they’re gonna do their own thing.

115
00:27:36.960 –> 00:27:58.250
Jason Gulya: And the questions here, with the use of this technology, are huge. We’re talking about job displacement, or replacement, or whatever your angle is. We’re talking about jobs. We’re talking about lives. And we’re talking about, depending on our age and the age that we teach, students who are just going into this world,

116
00:27:58.620 –> 00:28:05.260
Jason Gulya: and if you’re in college they might be 18 to 22 years old, and they’re now being told that

117
00:28:05.300 –> 00:28:16.009
Jason Gulya: that job that they have trained for and paid for, you know, paid for the education for, might be gone in half a year, right? And really

118
00:28:16.590 –> 00:28:27.289
Jason Gulya: trying to get to a place where we are taking all that into account, and also empathizing with that situation. That’s a big deal, to be at that moment in your life,

119
00:28:27.300 –> 00:28:53.379
Jason Gulya: you know, 18 to 22 years old, and have that hanging over you. Our students need empathy. They’re trying to figure things out, too. They’re not quite sure what this world is that they’re going into. And how could they be? So I think I gave you answers to about 3 or 4 questions there; those are the ones that I was combining in my mind.
Caleb Curfman: Yeah, no, and that’s really good. That helps answer a lot, you know, specifically looking at the way that,

120
00:28:53.390 –> 00:29:09.259
Caleb Curfman: when you look at this as a whole, not just ChatGPT in higher education, but in the world, how it’s changing things, how it is coming from us, in the end. Because

121
00:29:09.430 –> 00:29:11.349
Caleb Curfman: the one thing it can’t do

122
00:29:12.160 –> 00:29:28.349
Caleb Curfman: is kind of create its own ideas, right? It’s drawing off of things that have been created. And that’s where I’ve had some great conversations with people about creative thinking and creativity, and how, you know, we need to embrace some of these things

123
00:29:28.350 –> 00:29:48.940
Caleb Curfman: and how that works. And so I do think it’s important for us to remember that this is a product of ourselves and and it in that way we need to understand how we’re going to be working with it, shaping it as we ask questions. so I think that’s that’s really interesting. And of course, looking at the way that we can share empathy.

124
00:29:48.940 –> 00:29:54.349
Caleb Curfman: and you know the kind of the another issue that I’ve seen is

125
00:29:54.400 –> 00:30:23.969
Caleb Curfman: it creates some equity problems in certain cases. When one, you have to have the Internet to support it. and that’s always been a thing. And and you know, that’s been a thing with with other college aspects. But now, more than ever, you know, it’s a great tool if you have access to it. How do we? How do we leverage that I know. There’s been a lot of talk about the different voices, or even ideas that come through, and some of the the prompts. And it it says right now there’s a disclaimer, you know that.

126
00:30:23.970 –> 00:30:36.480
Caleb Curfman: you know. There’ll be some in the actor information. Some things are going to be in different ways. Some bias is going to be put in as well. Right? And and so I think it’s a great time to help students

127
00:30:36.880 –> 00:30:53.199
Caleb Curfman: go through questions, ask more questions figure out, you know, how do we determine a bias? How do we understand what is good information? I think all of that’s kind of coming together at this very unique time. And so thank you for that. That has been very helpful.

128
00:30:53.200 –> 00:31:08.169
Caleb Curfman: And you know, one of the things we try to do on this podcast is try to make it very clear. You know, what are the issues? How can we deal with it? What are some tips? And you’ve checked all those boxes. So that’s been fantastic. So thank you so much. But you know, we could.

129
00:31:08.180 –> 00:31:16.829
Caleb Curfman: being somebody that studied literature and history, we could talk all day long but what I do want to do is make sure

130
00:31:16.980 –> 00:31:38.710
Caleb Curfman: people know how to reach you, because I have personally got some help from you, and I’m like, Oh, my gosh! What’s going on? When I was in that first phase of understanding chat? Gpt. That kind of you know that that feeling of what’s happening, nothing’s going to work anymore to now feeling very positive about it. So how can people reach you? What is the best way for that.

131
00:31:38.850 –> 00:32:01.700
Jason Gulya: The best ways, Linkedin. That’s where I do a lot of my connecting a lot of my content. I put on there, and so always for you or your listeners feel free to DM. Me, send me a message. I really try to help people as as much as I can, and for free, I mean, I have my own consulting thing on the side, too. So there is that formal aspect to it. But in general I

132
00:32:01.920 –> 00:32:26.780
Jason Gulya: I’m very invested in making sure as much as I can that we get this right? I I think we have to get it right? So yeah, the easiest way is probably reach out to me and through Linkedin and I try to be on top of that, and so people can reach out to me that way, and I’d be happy to help fantastic. Well, thank you so much. There’s so many aspects to this. And when we are at this moment, when this is being published here.

133
00:32:26.780 –> 00:32:42.689
Caleb Curfman: What a great time to have these conversations! I know. we’re gonna have a piece on this at our workshop days or in service at at at my college, and I’m sure many, many, many others, as we try to better understand. But I am.

134
00:32:42.690 –> 00:33:01.659
Caleb Curfman: I’m hopeful because it feels like the conversation has really started to shift to. It’s not going away. What can we do? And so thank you for that. And I will be sure to to put your Linkedin on the show notes. Make sure everything is there just a great conversation. So thank you so much for joining me.

135
00:33:01.890 –> 00:33:04.360
Jason Gulya: You’re very welcome, Caleb. Thank you so much for inviting me.
