This is a subject I've been thinking about for a long time. It took me a while to get my impressions in order, but I think that now I have something germane to express on the subject.

The subject being, of course, the latest technological craze, LLMs.

This is a random stream of thoughts. I will express different points of concern about the technology, but they will not be in any order whatsoever.

Will we save as much time on PLC programming as we think?

Let me be direct here. In 2026, if you are not algorithmically generating at least 50% of your PLC code, you've been behind for years. I have a bunch of scripts that allow me to generate and configure subsystems without ever actually touching the PLC code. I fire them up depending on what's needed. The output they produce is deterministic. I have written the template, the scripts and the tests for them so I know intimately how everything works. This allows me to have a high level of confidence in the finished product, and to be able to pinpoint the source of a problem if there ever is one. This truly saves me time and effort.
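To make the contrast concrete, here is a minimal sketch of what deterministic, template-based generation can look like. The Structured Text layout, tag names and the `generate_valve_block` helper are invented for illustration, not taken from my actual scripts:

```python
from string import Template

# Illustrative Structured Text template for a valve subsystem.
# The block layout, tag names and interlock logic are hypothetical examples.
VALVE_TEMPLATE = Template("""\
// Auto-generated code: do not edit by hand
FUNCTION_BLOCK FB_Valve_${name}
VAR_INPUT
    cmd_open  : BOOL;
    interlock : BOOL;
END_VAR
VAR_OUTPUT
    out_open  : BOOL;
END_VAR
out_open := cmd_open AND NOT interlock;
END_FUNCTION_BLOCK
""")

def generate_valve_block(name: str) -> str:
    """Deterministic: the same input always yields byte-identical output."""
    return VALVE_TEMPLATE.substitute(name=name)

print(generate_valve_block("V101"))
```

Because the output is a pure function of the template and its parameters, a misbehaving subsystem can be traced straight back to either the template or the configuration that fed it.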

LLMs produce probabilistic output. For a given prompt, the output will rarely be the same twice. If there isn't enough training data, the LLM will produce something rather than nothing at all (hallucinations). If I were to generate code via LLM, I would have to review it, every time, because putting any trust in its output is folly. That holds unless I use a model explicitly trained on my own code base, and even then, chances are that it will produce code that differs considerably from my usual style, leading to a mental clash and the additional overhead of switching between my mental model, acquired through years of practice, and another model that is alien to me. Chances are that if a bug surfaces, I will have to spend much more time "re-understanding" the code, because since it didn't come from me, I haven't internalized it to the same degree.

If the effort to review it is higher than the effort of writing it in the first place, I'm not really getting anything out of it.

All in all, executing PLC programming in itself should not take a major time slice out of a controls professional's time. This is not where the bottleneck lies. It usually lies in understanding the problem and crafting the actual solution. PLC code is just a translation of something that should already be understood.

The decimation of education

We face a huge problem in the years ahead, and this concerns most fields where worker output is more intellectual than physical. LLM adoption has been so sudden and widespread that educational institutions have not had time to react to the introduction of this technology. Anything take-home, be it exams, assignments, labs, or papers, is suddenly of no value whatsoever. The only things that hold the slightest value are the good old written, proctored exam and the oral exam.

We have whole cohorts of college students coming in who have surfed on the LLM wave to attain diplomas while having learned absolutely nothing, both in terms of academic knowledge and in terms of mental tooling. They will come into the workplace with less mental self-defense and more learned helplessness: they will be simple human interfaces to LLMs, techpriests who, when faced with a problem, will feverishly attempt to find just the right magic incantation to make the stochastic parrot output more or less what is required of them.

I'm not saying that they will all be like that, but there will be some. My feeling is that there will be more than we think. I often lurk on forums for education professionals, and I don't think we're really grasping the extent to which our society has failed to respond. LLMs have professors and teachers despairing over their inability to properly instruct their charges. Combined with administrative pressure to make students "pass", I think we'll soon see just how disastrous the state of education has become.

The ironies of automation

Did you know that planes almost fly themselves? Autopilot software is so advanced that most of a modern pilot's job is monitoring the system. Likewise, in manufacturing, we've made great strides in outputting finished product of consistent quality with little human intervention.

Lisanne Bainbridge published the article "The Ironies of Automation" in 1983, where she argued that the more automated a process becomes, the more training an operator needs in order to intervene manually when it fails. The reason is that the operator no longer regularly practices the skills needed to take over from the machine, their time instead swept up in complex monitoring and validation tasks.

It is the same dilemma with pilots. Flying may never have been so safe and accessible, but there is a real worry that as pilots manually fly less and less, they may be less prepared to take over should the aircraft's systems fail. In addition to being intimately familiar with those systems, they must also maintain a high degree of flying skill while having fewer opportunities to practice it.

Transposed to LLM code generation, this would mean that the person piloting the LLM needs greater training to spot the edge cases and bugs that the generated code may contain than the person who simply writes the code themselves. If we accept this and combine it with the previous point, we're in for a catastrophe at some point: a flood of people who lack the required, greater skills to monitor LLM output, and workplaces that insist they use these tools to save time and money.

Nuts and bolts

Usually, the best engineers are those who have extensive field experience, rather than those who have spent their careers cooped up in offices and meetings. Perhaps they've been a maintenance technician in the past, for example. Their designs are usually more in harmony with the requirements of the real world; they might be deemed easy to work on for maintenance personnel because their past experience has coloured their present skills. They have the mental models of both a maintenance technician and an engineer. They may even keep the former mental model up to date by undertaking extensive fieldwork in addition to design tasks.

Software people love abstractions that take them away from the nuts and bolts so they can concentrate on "higher value tasks". First programming languages, then frameworks, and now LLMs as the ultimate abstraction: converting human language into code. But abstractions always leak. There comes a time where an understanding of the nuts and bolts becomes necessary to resolve a problem, so what happens when the pool of people who actually understand the nuts and bolts becomes that much smaller?

Someone who has little nuts and bolts experience will be hard-pressed to architect software that is elegant from the point of view of somebody who does have this nuts and bolts experience.

Brandolini's law

Brandolini's law is an adage that goes like this:

It takes an order of magnitude more effort to disprove bullshit than to produce it.

This is the base assumption underlying such concepts as the "firehose of falsehood", where the mental space is filled with lies to such an extent that the effort required to dispel them is overwhelming.

With LLMs, it is so easy to flood a space with auto-generated bullshit, in volumes never before seen, that there is a real danger our modern public spaces may suddenly find themselves even more intellectually dangerous than they already are. I don't think most citizens, myself included, are ready to face the metric ton of lies, in all formats, that can easily be served to them, because those lies can be created and disseminated so much more cheaply than the effort it takes us to reason about whatever claim, picture, audio or video is sent our way.

A question of value

Our brains are wired to reciprocate. For example, on a dating website, a person is much more likely to write a response to a message containing clues that the sender has read and understood their profile than to a generic message. If the opening invitation to an interaction is low effort, then we take it as a proxy that the whole interaction has low potential value. We can extrapolate this to a whole range of human interactions.

Have you ever noticed how mentally painful it is to be forced to put a high amount of effort into a task where other contributors have put in very little? For example, reviewing logic that is evidently flawed not because its author is a beginner, but because they clearly did not put any effort in; or having to save a teamwork assignment from the pits of failure because your teammates put in the minimum effort required, or none at all.

Humans value that in which they have put effort. Studies have shown that cognitive engagement is markedly diminished when working with LLMs. People have more trouble retaining information, for example not being able to explain how a feature works or not being able to cite from a paper they have written with the help of an LLM.

LLM output is fundamentally low effort and thus low value. This is why a lot of people deride AI content as "AI slop" and refuse to engage with it. This is why reading material made from the same "probabilistic template" is so annoying. Doubly so when it also happens to contain obvious untruths. And, I believe, it's why turning work into babysitting the output of an LLM will end up, in the long run, not being very popular.

The real price

This is my final concern. No one is paying the real price of this technology. Neither financially, nor socially.

OpenAI will be sneaking ads inside output soon and has also mused about lobbying the US government for a "financial backstop" (read: a taxpayer bail-out). Anthropic has settled for pirating thousands of books for a very paltry sum, considering that we used to slap $80k-per-song fines on regular people using Kazaa or Napster to pirate music. Grok is on X openly generating CSAM and deepfakes. The AI financial landscape looks more and more like a bubble, with a few companies passing money between themselves driving the vast majority of US economic growth. We still have regurgitation lawsuits pending with a decent chance of success. All LLMs are contributing to widespread destabilization of our societies through their capacity to feed firehoses of falsehood at a volume never seen before. The Internet is shittier than ever, consisting of endless low-effort LLM-generated content and bots, ruining what used to be the ultimate third space (and we don't have many of those left anymore).

We can blame it all on gullible idiots with nonexistent critical thinking, or on bad actors using what is just another tool. We can put all the liability on the users, rather than on the companies who have chosen to "move fast and break things" at scale. I think that would be a mistake. We should rather ask ourselves whether the risks are worth the benefits, and whether the companies who have introduced these tools have done their due diligence. My opinion is that they are not, and that they did not. Rather, they are once again capitalizing on the lack of agility in our processes (for example, the meandering process of getting regulations adopted) to generate profit, leaving the cleanup to society at large.