A few thoughts on AI
6 min · 1055 words
Pardon me for a moment while I dump the contents of my brain onto this piece of digital paper. I don’t quite remember where I was when ChatGPT started taking off, but when I started hearing about it, it seemed like a bit more than a party trick. It could do some neat stuff, but it was still fallible. Large language models (LLMs), or, as we’ve come to call them colloquially, “AI”, have been with us for a few years now. They certainly seem like they’re here to stay, for better or worse. As time has gone on, though, I’ve noticed some things about them that give me pause. Quick disclaimer: I’m speaking from my own experience, so take all of this with a grain of salt. I’m not a turbo user of this tech; I’d say I’m barely a user at all. Now, let’s talk about it.
The Good
I’ll give LLMs their props where I see them doing well. They’re more than decent at summarizing things. If you need the short, short version of something, they do a pretty good job of giving you some bullet points. Do you lose a lot of context that you otherwise would have had if you’d read the entire book/blog post/article/etc.? Of course! But if you need a quick overview of something for whatever reason, they’ll give it to you.
They’re also really good at generating diagrams. I’m speaking from a software developer’s perspective, and I’ll give you an example. Last year at work, my co-workers and I were tasked with doing some maintenance on software that was over 20 years old. We’re talking ancient tech here. The folks who wrote it did their best with the tools they were given, and I don’t fault them for it. I’m honestly impressed by the work they did given the constraints. Having said that, it’s one of the worst, if not the worst, mazes of code I’ve ever witnessed. Side effects upon side effects. Nothing is testable. Nothing is predictable.

One of the things I was asked to do was tweak a single value on one of the web pages. The problem was that the last time I had to do something like that in the same project, it took me over an hour to trace where the value came from, follow its journey through several different code files, and find its eventual landing place. It was excruciating. This time, though, I tried out GitHub Copilot to see if it could show me the flow of data from start to finish as a diagram. A visual representation would at least help me better understand the journey. It took something that would have taken me an hour or so and did it in a minute or two. I was floored. Thank you, Copilot! I was able to quickly figure out what I needed to do next, and my high-effort bug fix went down to a medium-effort one.
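To give you a flavor of the maze, here’s a made-up sketch of the kind of indirection I mean. The names are all hypothetical, it’s TypeScript rather than the ancient tech in question, and it’s nothing from the actual codebase:

```typescript
// Hypothetical sketch, not the real project: one value threaded through
// several "files" with quiet side effects along the way.

const globalState: { lastTotal?: number } = {};

// stand-in for config buried several files away
function loadRegionMultiplier(): number {
  return 1.07;
}

// adjustments.ts: mutates shared state as a side effect
function applyAdjustments(total: number): number {
  globalState.lastTotal = total;
  return total * loadRegionMultiplier();
}

// formatting.ts
function formatTotal(total: number): string {
  return `$${total.toFixed(2)}`;
}

// pageHandler.ts: the value was stuffed into the session somewhere else entirely
function renderPage(session: Record<string, unknown>): string {
  const raw = session["ORDER_TOTAL"];
  return formatTotal(applyAdjustments(Number(raw)));
}

console.log(renderPage({ ORDER_TOTAL: 100 }));
```

Now imagine that spread across dozens of files and twenty-plus years of changes, and you can see why having a tool sketch the data flow as a diagram saved me so much time.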
The Bad
One thing I’ve noticed while using a chatbot such as Copilot is that if you’re not an expert on something, and you don’t do your due diligence, you’re not going to know when the thing is just flat-out wrong. If I see the “you’re absolutely right!” response one more time, I might blow a gasket. I think the frustration comes from two places: the marketing of AI, and the way we interact with it. The marketing would have us believe that these things contain the entire chronicle of human knowledge. There’s nothing they don’t know! That, and the fact that they come across as sentient. Heck, OpenAI even gives them voices to really bring them to life. When we “speak” to these chatbots, it can be easy to let our guard down and trust them. You’re the superintelligence! Who am I to question such an authority? But we know that isn’t the case at all. They frequently make mistakes, and the only reason we don’t always catch them in the moment is that we’re usually looking to learn something we don’t already know.
The Ugly
Let’s not mince words here. This stuff is an ethics nightmare. We all know it’s been trained on stolen data, stolen IP, stolen work, etc. OpenAI is currently being sued over it. Any time you use it to generate an essay, a thank-you card, a poem, an image, a song, etc., you’re aiding in that theft. I’m hoping there’s a way to give these authors and artists their proper due.
There’s also the fact that the data centers required for all of this are eating up enormous amounts of water. The UN recently warned that we’re headed towards global water bankruptcy. There’s no coming back from that. This sort of thing has been the trend for decades: short-term profits will always trump long-term benefits. I don’t know what to do about it, but it makes me sick to my stomach every time I think about it.
Conclusion
I saw somewhere recently that using AI makes you the client, not the artist, and I love that take. Being creative and making things are important. You could argue that’s what makes us human. You can’t be creative and imaginative with AI. AI can’t create anything new; it can only regurgitate what it “knows”. With all of the AI-generated slop out there, newer models have less and less fresh material to feed on, so we’re basically inbreeding them at this point. It may sound corny, but we need to make things with love and care, even software. If you outsource all of that to AI, you’re no longer invested in the outcome. Take some time to be creative this week. Make something fun. Make something dumb. Be silly. Let’s not forget what makes us human.
Thanks for reading. Take care.
- CJ
This post was in no way, shape, or form written with or assisted by AI. That goes for everything on my website. You have my guarantee that everything you see here is written by me, and possibly proofread by my wife. And by written, I mean the code, too. Not just the blog posts. Alright, I’m done.