AI for Skeptics: A Universal Function of Only Certain Things

A phrase we use a lot in our society is “drink the Kool-Aid”, which means to be irrationally obsessed with a questionable idea, technology, or company. It has its origins in 1960s psychedelia, but given that it is now closely associated with Jim Jones’s mass suicide in Guyana, perhaps we should find something else. In the sense that we use it, though, it has been flowing freely lately with regard to AI and the hype surrounding it. This series has tried to look behind that hype, first by examining the motives behind that symbolic drinking of Kool-Aid, and then by showing a simple example in which the technology does something useful that is difficult to do otherwise. In that last piece we touched on perhaps the thing Hackaday readers should find most interesting: the possibility of an LLM as a universal API for useful tasks.
It’s Not What an LLM Can Do, It’s What You Do With It
When we program, we use functions all the time. In most programming languages they are built into the language or can be defined by the user, and they encapsulate a piece of code that does something, so it can be called repeatedly. Life without them on an 8-bit microcomputer was miserable, with many GOTO statements required to achieve the same thing. It is no accident that when I looked at an LLM as a sentiment analysis tool in the previous article, I used a function, GetSentimentAnalysis(subject, text), to explain what I wanted to do. The processing capability of the LLM was a good fit for my workload, so I used it as the engine behind that function, taking a piece of text and a title, and returning an integer representing the sentiment. The word “do” sums up the point of this article: maybe the hype has got it wrong by being all about what the LLM can do, when instead it should be about what you can do with it. People who think they’ve struck gold because they can churn out a ton of content or have their emails written for them haven’t.
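To make that idea concrete, here is a minimal sketch of what a GetSentimentAnalysis-style wrapper might look like. Everything here is an assumption for illustration: `call_llm` is a hypothetical stand-in for a real model API call (replaced by a trivial keyword counter so the plumbing can run offline), and the prompt wording and -5..5 scale are inventions, not the article's actual code.

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call. This trivial
    # keyword counter exists only so the sketch runs offline.
    body = prompt.lower()
    score = body.count("good") - body.count("bad")
    return str(max(-5, min(5, score)))

def get_sentiment_analysis(subject: str, text: str) -> int:
    """Hide the LLM behind an ordinary function: build a prompt,
    call the model, and coerce the reply into an integer."""
    prompt = (
        f"Rate the sentiment of the following text about '{subject}' "
        f"as an integer from -5 (hostile) to 5 (glowing). "
        f"Reply with the number only.\n\n{text}"
    )
    reply = call_llm(prompt)
    match = re.search(r"-?\d+", reply)  # models often add extra chatter
    if match is None:
        raise ValueError(f"unparseable model reply: {reply!r}")
    # Clamp to the expected range so a misbehaving model can't
    # push nonsense into the rest of the program.
    return max(-5, min(5, int(match.group())))
```

The point of the sketch is the shape, not the stub: the caller sees a plain function taking a title and a piece of text and returning an integer, with the model, the prompt, and the reply-parsing all hidden inside.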
So we have an LLM, even a small one running on our own computer, and looking at it in that light it is immediately clear that it can serve as a function to do almost any processing job, if you wrap the right information and an API call inside a function definition. Of course that is dangerous, because, if I may, I would like to coin a new phrase: function slop.
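The "universal function" idea above can be sketched in a few lines: a job description plus an API call becomes a callable. As before, `call_llm` is a hypothetical placeholder (here it simply echoes the prompt so the wiring can be exercised without a model), and the task wording is made up for the example.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; it echoes the
    # prompt back so the prompt-building can be checked offline.
    return prompt

def llm_function(task: str):
    """Mint a 'function' from a one-line job description: the task
    text plus an API call are the entire function body."""
    def run(payload: str) -> str:
        return call_llm(f"{task}\n\nInput:\n{payload}")
    return run

# A hypothetical summarizer, created without writing any real logic:
summarize = llm_function("Summarize the following text in one sentence.")
```

That it takes so little code to conjure up an arbitrary "function" is exactly why the next paragraph's warning matters.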
For example, I can call an LLM to add two numbers and it will do the job, but doing so would be completely pointless given the existence of the + operator. If you are going to use an LLM for a processing job, it is important that it is a job where doing so makes sense; otherwise your function is function slop. A quick web search tells me that function slop isn’t a thing yet, so I’d like to take this moment to apologize for what I may have introduced to the world.
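For the avoidance of doubt, here is what that function slop might look like. This is a deliberately bad example, with `call_llm` again a hypothetical stub (one that computes the sum itself so the sketch runs offline); a real call would add a network round trip, token cost, latency, and a chance of a wrong answer, all to replicate `+`.

```python
import re

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in so this runs offline; a real model call
    # would cost tokens and a round trip for the same answer.
    a, b = map(int, re.findall(r"-?\d+", prompt)[:2])
    return str(a + b)

def add_via_llm(a: int, b: int) -> int:
    """Function slop: the model does, expensively and probabilistically,
    what `+` already does instantly, deterministically, and for free."""
    return int(call_llm(f"What is {a} + {b}? Reply with the number only."))
```

The test for slop is simple: if a built-in operator or a few lines of ordinary code give the same answer, the LLM has no business being in the loop.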
Function slop aside, however, using an LLM for processing work where it makes sense should not be overlooked as a useful tool. These things are very good at summarizing and categorizing information in something close to the way a human would, a task that is often difficult in conventional code, so if the task at hand matches those skills then it makes sense to use them.
This has been a three-part series, though as with Star Wars or The Hitchhiker’s Guide To The Galaxy, that may not always remain the case. I hope that in our explanation we have successfully looked beyond the hype and found something useful in all of this. It’s strange though: as its writer you might think I’d be full of new ideas, but beyond analyzing the sentiment of the previous articles I still find myself with little I feel the need to apply an LLM to. Which is probably the point; it’s one thing to know a little about them, but just because they’re there doesn’t mean you should be using them.



