Most of the time, we underestimate the role of context in learning to code. We have machines that can now act like mentors, thanks to advancements in large language models (LLMs). Bob Ippolito suggests this shift is less about the nature of teaching and more about its reach. Imagine a world where every student, regardless of location or socioeconomic status, has 24/7 access to guidance. This doesn't mean we replace human mentors; instead, we amplify their impact. The idea is simple: machines handle the basics, allowing the human experts to focus on the tough stuff. Through this balance, education could become more affordable and accessible.
Yet, there's a catch. Machines say things with confidence, right or wrong. Bob makes an interesting point: these models often give solutions with no indication of their accuracy. You might get an SVG that looks like a Picasso bike when what you wanted was a perfectly normal one. A human in the loop can prevent these mishaps. So, while the machine can help scale tutoring, it doesn't eliminate the need for discernment. This is where knowing when to question the answer becomes key to benefiting from AI.
Bob implies that LLMs are like search engines on steroids. They're not human; they're another layer between us and the information we seek. Use them for what they do best—offering a broad view, new keywords, and directions you might not consider. The model's understanding isn't as deep as a human's, but its breadth is unmatched. It's not the go-to for the final answer, but it'll help you ask better questions. And when you need to delve deeper, resources like enki.com can round out your learning path with structured guidance.
Curiosity is the backbone of coding. Sometimes, you're just stuck. An AI might jolt your thinking like a slap to the face, forcing you to reconsider where you've gone astray. This is part of what Bob touches on—the model’s ability to unstick us. But remember, you're still the driver. The AI might provide a map, but it won't steer the car. Tuning in to this episode offers an experiment in thinking about the future of education: using technology not to take over, but to collaborate and extend human capability.
Nemanja Stojanovic: And now we have this new kind of alien mentor that we can work with, right, with this LLM technology. And I'm wondering, how do you think this fits into learning to code, especially early on in the process?
Bob Ippolito:
I think it's really going to change quite a lot of things, not so much at a fundamental level, but at a scale level. Like you said, these are available to everybody, at least right now; they're kind of giving it away. They're, at this point, probably at least as good as most TAs, and the office hours are 24/7 and the door's always open. They're never too busy for you, unless that particular LLM service is down for that hour or whatever, but that's pretty rare.
I think it will change the scale, and it'll change the economics, right? A lot of approaches right now are very expensive because you need to hire all those mentors. But if you could have the LLM provide the first tier of mentor support, and then have human expertise above that, it's really high leverage, because a few really good mentors can support way more learners than they could directly.
Nemanja Stojanovic:
What do you think are the main trade-offs? Like, what's the best thing education-wise, and what are some of the downsides of using them?
Bob Ippolito:
The biggest downside in my experience is that they always give you an answer without any real indication of their confidence in that answer. LLMs will very confidently give you just wrong information. You ask, "Hey, could you give me the SVG to draw a bike?" and it'll draw something that has variables named wheel and crankset, but the coordinates will be all over the place and it won't look anything like a bike, because it just doesn't understand it at that level. A human would draw circles for the wheels and so on, but they would tell you, "It would take me a couple of hours to figure out what all the coordinates should be, and the equations to draw the spokes and the triangle and the handlebars." The LLM is like, "Well, I have a budget of this many computations, and I'm just going to give you whatever fits in that budget," even though it doesn't look like a bike at all. I think that's the worst part about LLMs: it won't tell you that it's wrong. And if you're starting out, you won't know that it's wrong, because you don't know what you're doing, right? So the only thing you can do is say, "Hey, LLM, you're wrong. How do we fix this?" Sometimes that kind of prompt works and sometimes it doesn't. Those are the cases where you really need a human involved, or you just need to be very resilient and find other ways of getting to that answer, because the AI is not going to lead you there.
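To make the bike example concrete, here's a rough illustration (not from the episode) of the human approach Bob describes: explicit circles for the wheels and a few deliberate lines for the frame, with every coordinate chosen on purpose rather than guessed. All the names and coordinates below are made up for the sketch.

```python
# Illustrative sketch: assembling a minimal SVG "bike" by hand,
# with explicit shapes and deliberately chosen coordinates.

def bike_svg() -> str:
    """Return a minimal SVG drawing: two wheels and a rough frame."""
    wheel_radius = 30
    rear_hub = (60, 120)    # (x, y) center of the rear wheel
    front_hub = (200, 120)  # center of the front wheel
    crank = (130, 120)      # bottom bracket, between the wheels
    seat = (100, 60)
    handlebar = (190, 60)

    def line(a, b):
        return (f'<line x1="{a[0]}" y1="{a[1]}" x2="{b[0]}" y2="{b[1]}" '
                'stroke="black" stroke-width="3"/>')

    def circle(center, r):
        return (f'<circle cx="{center[0]}" cy="{center[1]}" r="{r}" '
                'fill="none" stroke="black" stroke-width="3"/>')

    parts = [
        circle(rear_hub, wheel_radius),
        circle(front_hub, wheel_radius),
        line(rear_hub, crank),       # chainstay
        line(rear_hub, seat),        # seat stay
        line(seat, crank),           # seat tube (closes the rear triangle)
        line(seat, handlebar),       # top tube
        line(handlebar, crank),      # down tube
        line(handlebar, front_hub),  # fork
    ]
    body = "\n  ".join(parts)
    return (f'<svg xmlns="http://www.w3.org/2000/svg" width="260" height="170">'
            f'\n  {body}\n</svg>')

print(bike_svg())
```

Even this toy version took some thought about where the hubs, seat, and handlebars sit relative to each other, which is exactly the geometric reasoning an LLM tends to skip when it emits plausible-looking coordinates.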
Nemanja Stojanovic:
One thing I've noticed, sort of a funny, realistic consequence of this, is that people now tend to open GitHub issues for problems that were hallucinated by AIs and don't even exist in those software repos, which I thought was interesting.
Bob Ippolito:
Yeah, I've seen reports about APIs that don't even exist.
Nemanja Stojanovic:
How do we treat them? Personally, this is my opinion, but you tell me what you think: it's not a human equivalent. It's almost like a more advanced Google in some way, where you can't blindly trust it. So how should we approach them, and how should we prompt them and talk to them, such that we're better set up to make them the most useful?
Bob Ippolito:
I think that using them as kind of a search is the right approach. I treat AI code the same way I treat Stack Overflow code: I just assume it's tainted and try to understand it before I do anything with it. Because, at best, it's usually not the current way to do things. These things get stale, as I'm sure you all know; the technology changes every couple of months, especially AI technology. If you ask an AI about AI technology, it's only going to tell you about the stuff it was trained on last year or whatever, right? So it can only get you so far. But something great about AI is that it has a very broad knowledge base. It knows the terminology that people use to talk about these problems. So if you give it a general idea of what you're trying to do, it might start spitting out words that you didn't know before, but that are the right words to search for that problem. It can help you narrow your research, whereas it would be very difficult to figure out what those words are otherwise. A lot of programming has terms that are very hard to search for, like very common English words and symbols; the AI knows those things and can help you refine your search. But I think you're very right that it's certainly no replacement for a human. It is, in many cases, a much better search engine for solving problems.
Nemanja Stojanovic:
One other benefit is that it sort of helps you when you're stuck. There are a lot of moments when you're just kind of stuck and you're like, what now? I tried all I could think of; what do I do now? That happens a lot, especially early on, but I think it happens forever in programming. There are just times when you're like, why is this not working? You know it should. So I think they help a little bit. Even if it's a wrong answer, even if their advice is not exactly accurate, the feedback from an AI helps you get unstuck a little bit. Do you think so?
Bob Ippolito:
Yeah, I think that's right. But, you know, it really also depends on you. You're still required to do quite a lot in those situations. You have to be able to judge whether the AI is correct or not. You have to refine those prompts to get there. It doesn't do your job for you; it provides templates and directions that you might not have thought of, at least not at that speed.