Leveraging AI to conquer unfamiliar programming languages
Here we are.
Another AI post.
I bet you’re sick of them now, eh?
But I promise you, I’m interested in how AI can actually improve our workflows, processes, and lives. So in this blog post, I’m going to share the ways I’ve been using AI, and more specifically utilizing a large language model (LLM) as a coding buddy when diving into unfamiliar programming languages and environments.
This blog post will assume that you have some programming experience and that you’re confident in at least one programming language or environment. If you’re new to coding and want to use AI to help, that’s great! But there are many aspects that may be missing when learning directly from an LLM, which this post will not dive into.
For more on this topic, check out How will Generative AI replace SDK engineers?
Should we fast-track the learning process?
First, let’s start with the why.
I’m going to guess you’re reading this blog post because you want to get up to speed with a new language faster. That’s the promise of AI, right? More work, faster?
I’d argue that it all depends on your goals.
In the modern world of programming, it’s becoming less viable to be specialized in one language or area of programming. You’re expected to know three, four, or even five languages. Or maybe you’re a “fullstack” web developer (don’t get me started on that). But I’d like to argue that you should be competent in programming and not necessarily a master in many programming languages.
That said, if you’re looking for the challenge of learning a new language for the sake of the challenge and the insights it can bring you, you may want to take a different AI approach.
Slow down. Ponder. Question. And iterate.
Sadly, many of us don’t have that luxury, and we have to get things done — and fast. So let’s look at some ways an LLM can fast-track our understanding of a new language.
General rules for talking with an LLM
Before we jump into specific use cases, let’s talk about how to interact with an LLM on a general level.
It sounds very silly, but… pretend it’s a human with little-to-no memory, context, or awareness of the question you’re about to ask.
To achieve the best response from an LLM, you want to be explicit and verbose and talk like a human, so that the LLM can do its job of “trying to continue in a statistically sensible way.”
This is done by first priming the LLM for the task it’s about to perform. Define roles, set rules, and give examples to further steer the LLM in the right direction.
This is why you may have seen many “out-of-the-box” prompts that start like the following.
“You are an assistant tasked with …..”
The further you take this concept, the better your results will be, but there’s a tradeoff between time and quality here. Do you really want to spend 20 minutes priming your LLM to answer a single question?
This is why many people utilizing LLMs daily have a library of prompts that get them 80 percent of the way toward a well-primed LLM for a specific task. Or you’ll see “here are the top 10 prompts for X.” They have their place, for sure, but don’t fool yourself into thinking that this is where priming an LLM stops.
Lucky for us, there are some tools that have already taken us that 80 percent toward an optimal programming buddy. That’s the reason I’m using JetBrains AI to shortcut some of that priming process. Of course, that doesn’t mean you have to use JetBrains AI. You can easily use the same techniques with Copilot, or even ChatGPT, but remember you may have to put more effort into priming the LLM and provide a wider scope of context if that isn’t provided directly from your integrated development environment (IDE).
Learn by examples
This next section will walk you through some examples that will help you start to understand how and when to use LLMs to speed up the learning process.
Syntactic sugar
Trying to understand what a piece of code is attempting to achieve can be hard, especially when the target language has lots of syntactic sugar to reduce character usage. Think back to a time you were staring at some code you wrote a year before and had zero idea what the thought pattern was behind it. We’ve all been there. Now put yourself in a new programming environment, where hundreds of developers have touched the code. It can be a puzzling disaster.
Understanding new syntax or a solution to a problem is the best way of learning, and that’s where an LLM can come into play.
Because LLMs have been trained on millions of code examples and explanations of code, if you provide the correct context, it can take a good stab at explaining what’s going on.
Prompt
Explain the following code:

```
const description = schema.description ? ` // ${schema.description}` : "";
```
Response
The TypeScript code you’ve asked about checks if there’s a description in the schema object and assigns it to the description constant in a particular form, or assigns an empty string if it’s not present. Here’s a breakdown:

* `schema.description` is a value from the schema object, which seems to be of JSON Schema format.
* The `?` operator checks if `schema.description` is truthy, i.e., if it’s not `null`, `undefined`, `false`, `0`, `NaN`, or an empty string `""`.
* If `schema.description` is truthy, it inserts the description into a template string, `// ${schema.description}`. The `${...}` notation inside the backticks indicates an expression to be evaluated and converted into a string. This means it prefixes the description with a space and double forward slashes.
* If `schema.description` is falsy, it assigns an empty string `""` to the description constant.

...
The response continues, but you get the gist. This is a great way to learn new syntax or to learn a new pattern that may not be common in your preferred language.
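To see the pattern the LLM just explained in context, here’s a minimal, self-contained sketch. The `Schema` interface and `describe` function are my own stand-ins for illustration, not the original project’s types:

```typescript
// Hypothetical schema type; only the optional description matters here.
interface Schema {
  description?: string;
}

function describe(schema: Schema): string {
  // Same shape as the snippet above: a truthy description becomes a
  // trailing " // ..." comment, otherwise an empty string is used.
  const description = schema.description ? ` // ${schema.description}` : "";
  return `value${description}`;
}
```

Calling `describe({ description: "the user's name" })` yields `value // the user's name`, while `describe({})` yields just `value`.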
Following logic in code
Sometimes you need to be more specific about what you don’t understand. For example, say you’re following the code all the way up to a certain call, but you get lost in the weeds when calls and returns are split across lines.
Now I encourage you not to jump to Stack Overflow or Google. I know this is the natural reflex many of us have now, but we need to learn new tricks! On top of that, formulating a good search query can be tricky here, as pasting code verbatim doesn’t work with traditional search engines.
Prompt
I understand that the following code splits the properties into a key-value pair, but I don’t understand the type that’s returned from the call to `map`. Could you tell me what that would be?

```
const properties = Object.entries(schema.properties)
  .map(([key, value]) => {
    const isOptional = schema.required?.includes(key) ? "" : " (optional)";
    return `${" ".repeat(indent)}"${key}": ${this._schemaToInstruction(
      value,
      indent + 2
    )}${isOptional}`;
  })
  .join("\n");
```
Response
The type returned by the `map` function call is an array of strings. This is because the function that you’ve provided to `map` constructs a string for each key-value pair from `schema.properties` and returns it. The portion of the code starting with `${" ".repeat(indent)}"` is a template literal used to create a string. The `map` function’s return value in this case can be described as `Array<string>`, or “an Array of strings” in TypeScript. After the `map` call, `join("\n")` is used to concatenate all the strings in the array into a single string, separating them by newline characters (`\n`). So, the `properties` variable itself would be of type string.
As you can see from my prompt, I’ve gone one step further than querying the syntactic sugar in the previous section; in particular, I’ve given more context to say what I do understand and where my understanding fails. This is exactly how you’d ask the question to a human. Naturally, this produces a more precise and accurate answer, and it helps you learn to read the code.
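To see for yourself why `map` yields `Array<string>` here, you can run a simplified, self-contained version of the snippet. The `properties` and `required` data below are hypothetical stand-ins for the original schema, and the indentation logic is dropped:

```typescript
// Hypothetical schema data standing in for schema.properties / schema.required.
const properties: Record<string, string> = { name: "string", age: "number" };
const required = ["name"];

// The callback returns a string per entry, so map produces string[].
const lines: string[] = Object.entries(properties).map(([key, value]) => {
  const isOptional = required.includes(key) ? "" : " (optional)";
  return `"${key}": ${value}${isOptional}`;
});

// join collapses the string[] into a single newline-separated string.
const joined: string = lines.join("\n");
```

Here `joined` ends up as `"name": string` and `"age": number (optional)` on two lines, matching the LLM’s explanation: an array of strings, then one string.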
Remember, if you don’t understand the answer fully, you can follow up with another question. Dig deeper with specifics about the call. The more you guide the LLM, the closer you’ll get to a precise answer.
This technique is key for learning unfamiliar patterns. It’s something you’d ordinarily do when learning a new language anyway, but this time, instead of researching in a book or on the web, you can just ask the LLM.
Refactoring X
Sometimes you know there’s something wrong, but you can’t put your finger on it. Maybe there’s a rule of the language you’ve heard in the past, or a warning the compiler is showing you, but you can’t solve it.
The answer is simple. Just ask!
Take the following code for example:
```
function sleep(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function aNumber(multiplier: number): Promise<number> {
  await sleep(1000);
  return 2 * multiplier;
}

let numbers: number[] = [];
for (let i = 0; i < 10; i++) {
  numbers.push(await aNumber(i));
}
```
I know the previous code is inefficient due to the `await` in the `for` loop, but I don’t know how to refactor the code to solve it.
I could search on Google, find a solution, and adapt that for my needs. Or, I could just ask the LLM.
Prompt
Remove the use of `await` in the `for` loop.
You can guess what the answer would be; it’s not too important here. But what is important is that you provide context (the code), be specific (“`await` in the `for` loop”), and give it an action (“remove”). The more complicated the context, the more specific and verbose you’ll have to be. It could possibly take a few back-and-forths to get right.
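For completeness, here’s one refactor an LLM might plausibly suggest — a sketch, not the only valid answer. The idea is to start all the promises first, then await them together with `Promise.all`, so the one-second delays run concurrently instead of sequentially:

```typescript
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function aNumber(multiplier: number): Promise<number> {
  await sleep(1000);
  return 2 * multiplier;
}

async function main(): Promise<number[]> {
  // Kick off all ten calls without awaiting — each returns a pending promise.
  const pending = Array.from({ length: 10 }, (_, i) => aNumber(i));
  // Await them all at once; the sleeps now overlap instead of queuing up.
  return Promise.all(pending);
}
```

`main()` resolves to `[0, 2, 4, ..., 18]` in roughly one second rather than ten, since no `await` sits inside the loop.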
In the learning environment, you can even do this with code that you believe you have right! Ask if there are any refactoring opportunities or potential errors in the code. That way, you can catch gaps in your knowledge when you’re unaware of them.
Generating code from scratch 😲
Here’s the more advanced example where we start generating code. I’d encourage you to use this technique to learn, rather than have AI “do the job for you.” We’re still at the stage where you, the human, hold a wider context of the source, project, and goals, meaning you’re able to better reason with solutions to understand if they solve a problem optimally.
Here’s a quick example.
Prompt
Please create an API to return a number based on the current Unix timestamp and the length of a username. The username will be passed in as a parameter and should be stripped of any whitespace prior to generating the number. The username must be valid and not be zero in length.
As you can see, I’ve not stopped at describing the shape of the function; I’ve gone further to describe the behavior of the internals. This is important because, without that additional guidance, the LLM is free to omit it, or even worse, hallucinate about certain aspects of the function.
It’s at this point where you’d question whether investing the time in prompting all the necessary information is worth the effort. Wouldn’t it be quicker to just write code yourself? Well, the answer depends on how well versed in the language you are. If you’re a complete novice, this is a great way to build some examples to learn from, query and question about aspects of the response, and learn common patterns.
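As an illustration, here’s one shape the generated code might take. This is a hedged sketch: the function name, the whitespace handling, and the timestamp-plus-length formula are all my own assumptions, since the prompt deliberately leaves them open:

```typescript
// Hypothetical implementation of the prompted behavior.
function numberForUsername(username: string): number {
  // Strip all whitespace before validating, as the prompt requires.
  const stripped = username.replace(/\s+/g, "");
  if (stripped.length === 0) {
    throw new Error("username must be valid and non-empty");
  }
  // Combine the current Unix timestamp (in seconds) with the username length.
  return Math.floor(Date.now() / 1000) + stripped.length;
}
```

Even with a toy example like this, you can follow up by asking the LLM why it chose `replace(/\s+/g, "")` over `trim()`, or how it would validate the username differently — that questioning is where the learning happens.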
For more on this topic, check out Programming in the AI era
Conclusion
In this blog post, I’ve introduced just a few common prompt examples to help you learn and play around with a new programming language. I’d encourage you to take this further and adapt the examples to your use case and context. Remember to be verbose and include additional context where required, and keep refining until you get to the answer you’re looking for.
I’m not proposing that LLMs take over the whole learning process, but they prove to be a good tool to help speed up knowledge acquisition when compared with searching on Google or reading a book on the topic.
When Nick started tinkering with guitar effects pedals, he didn’t realize it’d take him all the way to a career in software. He has worked on products that communicate with space, blast Metallica to packed stadiums, and enable millions to use documents through Nutrient, but in his personal life, he enjoys the simplicity of running in the mountains.