Using Generative AI for Programming
Since Sundar Pichai’s talk at Google I/O, I have been wondering what generative AI will mean for the field of programming.
Earlier this year, I signed up for ChatGPT for Programmers on Udemy when the price dropped below $20, as Udemy prices often do. The course used OpenAI to build Python programs. The real value of the class was in learning the limitations of ChatGPT and how to work around them. It helped me realize how powerful context is in a conversation with ChatGPT. A task like drawing a picture, which could not be done in one context, could be done in the context of drawing on a graph with Python's various plotting libraries.
For example, here is a new conversation with GPT-3.5:
Me: Can you generate a png image of a thermometer?
GPT-3.5: I’m sorry, but as a text-based AI model, I’m unable to generate or directly provide visual content such as PNG images. However, I can certainly help describe or provide information about a thermometer if that would be helpful to you.
In the context of a discussion where I was asking GPT-3.5 to plot temperatures and dates using Python graphics libraries:
Me: Can you generate a png image of a thermometer?
GPT-3.5: Certainly! Here’s an example code that generates a PNG image of a thermometer using Matplotlib:
[Matplotlib code listing omitted]
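I did not save GPT-3.5's actual output, but it was along these lines: a minimal sketch that draws a thermometer out of Matplotlib shapes and saves it as a PNG (my reconstruction, not the model's exact code).

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display required
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle, Circle

fig, ax = plt.subplots(figsize=(2, 6))

# Outline of the thermometer: a tube with a bulb at the bottom
ax.add_patch(Rectangle((0.4, 0.2), 0.2, 0.7, fill=False, linewidth=2))
ax.add_patch(Circle((0.5, 0.15), 0.1, fill=False, linewidth=2))

# "Mercury": fill the bulb and part of the tube in red
ax.add_patch(Circle((0.5, 0.15), 0.07, color="red"))
ax.add_patch(Rectangle((0.45, 0.2), 0.1, 0.45, color="red"))

ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axis("off")
fig.savefig("thermometer.png", dpi=100)
```

The trick is that inside a plotting conversation, "generate a PNG of a thermometer" becomes a code-generation task the model can actually do.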
Identical prompts produced very different outputs because of context!
I also learned that asking ChatGPT to make a program more elegant can simplify verbose syntax into something more succinct.
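To illustrate the kind of simplification I mean (my own example, not one from the course): given a verbose loop, asking for a more elegant version typically yields a comprehension that does the same thing.

```python
# Verbose version: explicit loop with an accumulator
def squares_of_evens_verbose(numbers):
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

# The kind of rewrite "make this more elegant" tends to produce
def squares_of_evens(numbers):
    return [n * n for n in numbers if n % 2 == 0]

print(squares_of_evens_verbose([1, 2, 3, 4]))  # [4, 16]
print(squares_of_evens([1, 2, 3, 4]))          # [4, 16]
```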
After finishing that class, I completed the ChatGPT Prompt Engineering for Developers course, which was free at DeepLearning.AI. This class used the ChatGPT APIs to teach the fundamentals of prompt engineering. It explained that prompt engineering is an iterative process and that you have to be quite specific when you ask ChatGPT to summarize things. For example, you have to tell ChatGPT approximately how many words you want in the summary and which aspects of the text it should focus on. If you are not specific about what you want, it will give you summaries that are too short, too long, or focused on the wrong thing. Even when you specify a word count, ChatGPT is not exact; it may give you a few more.

As the tutorial continued, the output from ChatGPT in the video and in the Jupyter notebook where I followed along diverged. For example, when ChatGPT corrected sentences for grammatical errors, it was inconsistent about whether "It's" or "Its" belonged in a sentence. Regenerating responses over and over led to inconsistent results. When we learned about expanding text, the instructors were careful to point out that this could be used for spamming. I imagined a recipe page that kept producing filler text so that you could never reach the recipe.
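The specificity lesson boils down to putting the constraints in the prompt itself. Here is a small sketch of that idea — the helper name and wording are my own, not from the course, which called the OpenAI API directly:

```python
# Hypothetical helper: build a summarization prompt that states
# the word budget and the aspect to focus on explicitly.
def build_summary_prompt(text, max_words=30, focus="shipping and delivery"):
    return (
        f"Summarize the review below in at most {max_words} words, "
        f"focusing on {focus}.\n\n"
        f"Review: ```{text}```"
    )

review = "The lamp arrived two days early and was easy to assemble."
print(build_summary_prompt(review))
```

Without the word budget and focus, the model picks both for you, and usually not the way you wanted.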
Beyond ChatGPT, the Studio Bot AI assistant was introduced in the Hedgehog version of Android Studio, and I watched Matt McKenna try to use it on Twitch to learn Jetpack Compose. Studio Bot was not up to the task because the Jetpack Compose API has changed a lot in recent years: it generated plenty of code that looked like Jetpack Compose but never quite worked.
From what I have seen, generative AI is good at tasks that are highly structured and repetitive. It can also name variables better than I can, but it can just as easily be wrong about a lot of things for a long time.