Using “AI” in My Own Work, Personal Projects, and Learning: Responsibly and Not.

ChatGPT is still relatively new, but everyone who might read this will already know how much of an effect it has had on nearly every industry. It's impossible to ignore right now. And, in fact, ignoring it might actually be a bad decision in the long run.

Of course, I'm saying this after having ignored it for the better part of a year. Once Microsoft released Bing Chat, I did play around with it for a bit, though I only treated it as a novelty. It felt similar to when Apple began integrating Siri into the iPhone. I would throw requests at it, get a laugh, raise an eyebrow, and pretty quickly make an assessment of its full usefulness.

I came to realize that my initial response was, in some part, driven by fear — suddenly, I felt more replaceable than ever. At the same time, I recognized that ChatGPT wasn't in a position to replace most of us just yet, even though IBM's hopes are high. So, the months went on and I basically stopped using it.

Almost a year later, two projects I was working on called for leveraging object storage in a unique way. The available tools I came across didn't meet my needs, so I decided to write my own. Since I'd never used the AWS SDK for Go or the OCI SDK for Go, this seemed like the perfect opportunity to keep pushing forward with Go and get familiar with these monster SDKs.

Since the projects weren't for my own personal growth, I had a deadline. I'd need to jam a lot of knowledge into my head, quickly. The obvious first step was to hit Google and look for examples of how to upload to S3 with the SDKs. There were, surprisingly, not many thorough examples of how to do this. I had 500-word Medium.com articles without context and two-hour YouTube videos with ultra-specific contexts. And I had the AWS documentation which, at first, looks cryptic. As a whole, these resources were helpful, but I needed a base to help get up and running. A finished product would need to authenticate to each service, walk directories, pull lists of files, name them to create a pseudo-directory structure, deal with potential overwrites, actually upload the files, handle hash validation, and more. The simple problem suddenly looked much bigger.
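To give a sense of the base I was looking for, here's a minimal sketch of that core flow using the AWS SDK for Go v2 — not my finished tool. The bucket name and local directory are placeholders, and it skips the overwrite checks and hash validation the real thing needed:

```go
package main

import (
	"context"
	"fmt"
	"io/fs"
	"log"
	"os"
	"path/filepath"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Authenticate via the default credential chain
	// (env vars, shared config files, instance role, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	root := "./data"           // placeholder: local directory to mirror
	bucket := "example-bucket" // placeholder: destination bucket

	// Walk the directory tree; each file's path relative to root
	// becomes its object key, creating a pseudo-directory structure.
	err = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, path)
		if err != nil {
			return err
		}
		key := filepath.ToSlash(rel) // S3 keys use forward slashes

		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		// Upload the file; a real tool would first check for an
		// existing key and verify checksums after the upload.
		_, err = client.PutObject(ctx, &s3.PutObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
			Body:   f,
		})
		if err != nil {
			return fmt.Errorf("uploading %s: %w", key, err)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```

Walking the tree with filepath.WalkDir and converting relative paths with filepath.ToSlash is what produces the pseudo-directory structure mentioned above; everything beyond that — overwrites, hashing, the OCI side — is where the edge cases started piling up.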

I made some progress as the hours and days went by, though inevitably I had to deal with more and more edge cases. Finally, I remembered that GitHub Copilot had a free trial available — and now seemed like the best time to take advantage of it.

With Copilot, I was off to the races almost immediately. Copilot was suggesting code with astounding accuracy. In the moment, my thought process was, “Well, this is okay because I already ‘knew’ what I was going to type.” But as the program grew from a small script to a program with multiple modules and thousands of lines, I found myself using Copilot more liberally. It started to feel like cheating in a way. Copilot was feeding me these line completions; at most, I was glancing at them, and sometimes just assuming they would work.

As my scope increased and my project's complexity grew, the code completions became a distraction. I was trying to work around edge cases and error-handling patterns that Copilot wasn't reliably picking up. Oftentimes, the suggestions were way off base. Here and there, I'd accidentally accept a completion that would throw me off and confuse me. Still, there were times when Copilot would give me a real winner that saved me some precious time.

Consuming AI vs. Using AI Responsibly.

This article isn't about anything more than my own response to the rapid proliferation of ChatGPT and “AI”. I think the hype machine has been working overtime on this one. In fact, using the term AI is, at best, inaccurate. The truth is that GitHub's Copilot isn't thinking about solving the problems I'm trying to solve. It has zero sense of urgency or a need to responsibly complete a task. Copilot's LLM is poring through all of the training data it has and suggesting that I solve my own problems using a smattering of data provided by others.

Over time, as all of our service providers hand our data away, these services will have more and more context and, potentially, the answers provided by the LLMs will get better and better. The other side of the coin is that the data being fed to the LLM could, over time, decline in quality as more and more of the incoming data is data generated by the LLM itself, or by other LLMs — with all of the thoughtlessness and flaws intact. That's extremely cynical, though. Who knows.

As far as my own work is concerned, I've turned off the auto-complete options and switched over to JetBrains' “AI” Assistant. I use it in a purely conversational way: asking it questions, reading the answers carefully, testing the replies, and then looking for confirmation on my own. Its usefulness has ballooned as I've developed a responsible pattern of use.

I know that responsible might be a big word to throw around, especially when it comes to “AI”, but in my own world, responsible simply means that I'm not doing myself a disservice by accepting any code as a given, worry-free, factual solution. As in almost every other aspect of life, the incoming data should, with a reasonable amount of effort, be vetted and independently verified before I put it together and spew it out into the world. /s


See something wrong? Email [email protected].