
AI is My Worst Employee

August 29, 2024 by Paul Byrne


I jumped on the AI bandwagon in early 2021. I had been experimenting with various models for a while and dove in headfirst around the time GPT-3 came into service. I started creating products with the OpenAI APIs, used AI to assist my research, and wrote a series of blog articles documenting my adventures.

I’m not a researcher or an insider, and there are very few researchers or insiders whose statements I trust. That being said, for those who are remotely interested, here is my current assessment of the AI landscape as a practitioner, product developer, and observer.

I would love to know yours.

Will AI Replace Artists?

Generative AI produces pretty mediocre stuff right now. Proponents point to some stunning visuals, which are nice, but they lack meaning and branding. Most art is commercial art, and most artists who make a living have knowledge of branding, UI/UX, layout, and more.

There are several things generative AI is extremely poor at that humans with moderate levels of expertise can do well:

  • Consistency: Ask it to make two things in the same style, or to apply a particular style, and you might get a good result, but chances are you will not.
  • Iteration: It is difficult to tweak one part of a generated piece without losing the attributes you’d like to keep.
  • Understanding Anatomy: It lacks basic understanding of human (and animal) anatomy. Where Bigfoot got his third leg or why there is a sudden epidemic of polydactyly (extra fingers) has yet to be explained.

AI’s goal is completion: coming up with images that are plausible matches to the request. It does that well but fails on the metrics that matter to working artists.

AI-generated image of a gorilla jumping over hurdles
(image courtesy of DALL-E, an AI image generator)

Will AI Robots Replace Factory Workers?

AI-generated image of a robot and human working in a warehouse
Despite all of Boston Dynamics’ dancing robot videos and Musk’s claims about factory robots “making great progress,” there seem to be a lot of limitations when AI moves from the information space to the physical space.

Think of Tesla’s Full Self-Driving (FSD) dream. While they have made some amazing progress in assisting human drivers (full disclosure: I am a long-time Tesla driver), there are still many cases where FSD has to give up and ask the human to take the wheel.

From their pronouncements, the Tesla team seems to be gathering real-world cases from Tesla drivers and making sure the AI can deal with them. However, I wonder whether they can collect enough examples of edge cases (e.g., a cat drops from a tree onto the windshield) to train the neural network.

Will AI Replace Software Developers?

I think Lane Wagner’s article on AI taking over programming is a great starting point, especially where he discusses compounding, linear, and diminishing returns. It is entirely possible, for reasons I discuss in my article on the Last Mile Problem, that AI remains a helpful but not life-changing innovation, which is where it is now.

Screenshot of a tweet stating that no skilled programmers think AI will replace them

AI is OK at producing code in limited situations. It cannot architect a large, innovative project. In fact, it struggles to produce anything with any degree of complexity.

I have seen only moderate progress from GPT-3 to GPT-4 in this regard. I’ll believe GPT-5 when I see it.

Why is this? Designing software is akin to visual design: there is no single right answer about how to do it. Good software design relies not only on technical knowledge but also on the ability to recognize opportunities to simplify, to see patterns and conceptualize them into meaning that can be communicated to large teams, and to apply those concepts to new circumstances.

AI-generated image of a person writing HTML on a computer

Currently, GPT-4 still struggles to write code that works. I cannot tell you how many times software developers have told me they turned off GitHub Copilot, Microsoft’s coding assistant, because it did more harm than good. It’s basically an unreliable text completion bot.

At best, it is a second set of eyes on code. It points out when I misspell a variable or module name. It sometimes writes reasonably good tests, but that assumes you write your code first. I, and many developers, often start with the tests.

However, the refactored code it produces often fails the tests or changes variable and function names in a way that affects downstream modules. It lacks the ability to create code in the context of an existing code base.
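
To make that failure mode concrete, here is a minimal sketch in Python (apply_discount and the surrounding setup are hypothetical, invented for illustration, not taken from a real project). A test written first pins down both the behavior and the public name, so a refactor that silently renames the function breaks the test and every downstream caller.

```python
# Toy, test-first example. apply_discount is a made-up name.

def apply_discount(price: float, rate: float) -> float:
    """Return the price reduced by the given fractional rate."""
    return price * (1 - rate)

def test_apply_discount():
    # Written before the implementation: the test pins down the
    # behavior AND the public name of the function.
    assert apply_discount(100.0, 0.2) == 80.0

# A typical AI "refactor" keeps the math but renames the function,
# e.g. to compute_discounted_price(). The test above, and every
# downstream module that imports apply_discount, then fails.

if __name__ == "__main__":
    test_apply_discount()
    print("test passes")
```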

The platforms are all claiming that they are having the AI ingest code documentation in order to produce better code. I doubt this will resolve the issue. Can you imagine teaching someone to drive by giving them more books on driving? It may help, but probably only on the margins.

Whether it is driving a car, completing a sentence, or writing code, the model relies on the training it receives from observations. When a new circumstance falls outside the training data set, it rarely comes up with a good solution. Yet the vast majority of value-add and innovation occurs precisely when new situations arise.
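
A toy analogy in code (a sketch of extrapolation failure in general, not a claim about how LLMs work internally): a model fit on a narrow range of data can look accurate there and still be wildly wrong the moment the input falls outside what it was trained on.

```python
# Toy illustration of out-of-distribution failure using plain
# least-squares polynomial fitting (an analogy, not an LLM).
import numpy as np

rng = np.random.default_rng(0)

# Training data: x confined to [0, 1], true relationship y = sin(2*pi*x).
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * x_train)

# A degree-5 polynomial fits the training range quite well.
coeffs = np.polyfit(x_train, y_train, deg=5)

in_range = np.polyval(coeffs, 0.5)      # inside the training range
out_of_range = np.polyval(coeffs, 3.0)  # far outside [0, 1]

print(f"x=0.5: predicted {in_range:+.3f}, truth {np.sin(np.pi):+.3f}")
print(f"x=3.0: predicted {out_of_range:+.3f}, truth {np.sin(6 * np.pi):+.3f}")
# The second prediction blows up: the polynomial has no basis for
# behavior outside the region its training data covered.
```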

Developers who use OpenAI to write code can usually fare just as well with Google and StackOverflow.

Yann LeCun and others argue that current Large Language Models (LLMs) like ChatGPT, Grok, Gemini, and Anthropic’s Claude are an off-ramp on the way to true artificial intelligence. There are ways to monetize their impressive current capabilities, but they will not achieve actual intelligence. None of them are capable of reasoning on their own. Thus, they remain façades and facsimiles of human intelligence.

For example, a human can, on average, learn to drive an automobile safely with about 20 hours of practice. These models can learn nothing on their own. They have to be trained with massive amounts of data, and data scientists and software engineers write algorithms to improve the training results.

Will AI Take YOUR Job?

Automation has been both taking jobs and creating entire new economies for centuries. This will continue. AI will not take your job, but a person who knows how to use it may. If you want to keep your job and continue to be employable, you should always learn how to use the new tools. This has been the story of humanity for the past couple of centuries. It is not changing yet.
