In today’s column, I address an increasingly debated and hotly contested question: whether software developers who help train generative AI and large language models (LLMs) act treacherously when they do so to improve AI-based code generation capabilities. The concept is relatively simple. One could argue that AI’s move toward generating programs and code will end up putting all software developers out of work.
This is where the charge of being a traitor comes into play.
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes.com column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here).
If you take a look at the numerous job postings at many AI makers, you will notice a category of positions that, at first glance, seems entirely innocuous. The job title sounds like that of a code labeler, or some listings may describe the role as a code tutor. The reality is that the people who take these jobs will help train the AI to generate code.
This goes almost unnoticed, and few people realize that these are jobs aimed at gradually and inevitably eliminating the need for human programmers. The more AI is able to generate code, the fewer human software engineers are needed. If AI can do everything a human programmer can do, bam, there’s no need to hire human coders.
Boom, drop the mic.
Are the developers who take those jobs doing something savvy or something shameful?
In the hallways of those who work full-time in software development, heated arguments arise about this situation. On the one hand, programmers who take this specific kind of job are simply relying on their hard-earned coding skills and doing a day’s work for a day’s pay. Good for them. They are putting their acquired talents to use and earning a living.
The counterpoint is that they are leveraging their programming abilities to ruin the marketplace for software developers, namely, human ones. Those developers could instead be using their skills to build software for almost any other purpose in a wide range of realms, such as innovative software for medicine, finance, robotics, and so on. But, no, they have decided to work on something that will undercut the future of software developers everywhere.
Selfish. Disloyal. Galling. These are people who could simply use their skills for the good of the profession, yet they seem bent on undermining the future of working programmers and, someday, of software developers in training.
As the popular definition goes, a traitor is someone who betrays a friend, a country, or a principle; in short, a dirty rat.
Do you think those developers are traitors?
Whoa, those developers retort, get off your high horse and use some common sense.
First of all, if you need to point fingers, go ahead and blame the AI makers for devising AI that can generate code. They are the prime movers. One way or another, they will press ahead. Taking a job as a so-called AI code tutor or labeler is only a tiny piece of the puzzle. Whether or not software developers take those positions, the writing is already on the wall for all software developers.
Deal with that reality.
Second, it can be argued that AI code generation won’t make programmers obsolete at all. The idea is that AI will help software developers write and verify code. That could actually stimulate more programming jobs for humans. Each software developer could be far more productive than they are now. That would be a boon for hiring.
Third, and again a boost for employment, the bottom line is that AI code generation will allow those with less brilliant programming skills to generate code. In this sense, it will break up the fiefdom of elite, hardcore software developers and democratize software creation. People from all walks of life will be able to write code.
Fourth, consider the software diversity that we don’t have today because top-notch programmers are scarce. If everyone were paired with a capable AI code generator, the next thing you know, we would have a profusion of programs. There would be programs like we’ve never seen before. The cost of creating software would drop significantly, and the availability of applications would rise accordingly.
All in all, by doing the job of an AI code tutor or labeler, those doing so are making the world a better place and ought to be heralded for their sense of duty, even heroism.
As you think about the pros and cons of this question, it may be helpful to have an idea of what these kinds of jobs entail.
Suppose a program has a line of code that says this:
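(The line is sketched here in Python for illustration; the original could be in any language, and the assignment to “n” is shown only so the snippet stands on its own.)

```python
n = 0  # assume "n" was assigned somewhere earlier in the program
r = 2 if n == 0 else 3
```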
That’s it; we’ll use just one line of code to keep things simple.
Imagine that we are using a specialized version of generative AI to analyze the code. There isn’t much to go on in this instance. About all the AI can determine is that when the value of the variable known as “n” is zero, the variable known as “r” is set to the value 2; otherwise, when “n” is not zero, “r” is set to the value 3.
Note that there is no explanation for why all this examination of the variable “n” is happening. The code itself is merely a mechanical instruction. There is no overall context. Just do this or that and move on.
The problem is that there isn’t anything useful to be learned from that snippet of code. You can’t readily say that it’s a great piece of code and that similar code in the future ought to look like it. We have no basis for discerning whether this code is a gem or a nothing burger.
Okay, so a programmer hired as a code tutor or labeler enters the picture. They are presented with the line of code. Aha, the programmer realizes, I recognize this line of code. The scenario is that the variable “n” indicates whether a valve needs to be opened to setting 2 or setting 3. The variable “r” holds the valve configuration.
If the value of “n” is zero, which is the default condition, the valve is to be set to 2, meaning partially open. If something has changed the value of “n” to non-zero, the valve is set to 3, meaning full flow.
All in all, the context is that this code snippet reads the desired setting (“n”) and then configures the valve accordingly (“r”).
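With that context attached, the same one-liner might be annotated along these lines (the comments stand in for the hypothetical labels the code tutor supplies; the variable setup is mine, added so the sketch runs on its own):

```python
# Context supplied by the code labeler:
#   n holds the requested valve setting (0 means use the default).
#   r holds the valve configuration: 2 = partially open, 3 = full flow.
n = 0                    # default: no override requested
r = 2 if n == 0 else 3   # read the request ("n"), configure the valve ("r")
```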
The programmer has figured out what the line of code is meant to achieve. The next step is to convey that insight to the generative AI.
There are two mainstay ways of letting the AI know what’s happening: (1) labeling or tagging the code with annotations, or (2) interactively explaining the code to the AI.
In some situations, code labeling is the path taken, while in other contexts an interactive explanation of the code takes place. It all depends on the complexity of the code, the skills of the code tutor or labeler, and how the generative AI has been set up to try to grasp the machinations of coding and programming.
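As a rough illustration of the labeling path, a record for the valve snippet might capture fields like these (the field names and structure here are my own invention for the sketch, not any particular vendor’s schema):

```python
# A hypothetical code-labeling record for the valve snippet.
label = {
    "snippet": "r = 2 if n == 0 else 3",
    "purpose": "Configure a valve based on the requested setting",
    "inputs": {"n": "requested setting; 0 means use the default"},
    "outputs": {"r": "valve configuration; 2 = partially open, 3 = full flow"},
    "quality": "acceptable",
}

print(label["purpose"])
```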
All day long, that’s essentially what the job consists of. The responsibilities extend beyond raw code to doing the same for various programming utilities, tools, operational settings, APIs (application programming interfaces), and so on.
Keep in mind that the job can be quite complicated and requires significant coding experience. Normally, you wouldn’t hire a novice coder to do this kind of work. The reason is that they’re less likely to know the industry’s coding tricks and more likely to miss what’s going on in the code. I’m not saying don’t hire beginners, but if you do, they often work under the tutelage of a more experienced code tutor or labeler. This helps avoid mistakes and the like.
The labeling can be harder than you might think.
You might be handed a body of code that you’ve never seen before. Documentation for the code may no longer exist. From the code alone, you need to make all sorts of inferences about what it does. In a sense, you’re doing reverse engineering.
In addition, more advanced AI-based code generation is best aided by the code tutor or labeler going beyond just the presented code itself. For example, imagine that the valve variable “r” is, later in the code, given the value 4. The code tutor or labeler might have determined that the permissible values for the valve are only 2 or 3. Thus, the place in the code where the valve is set to 4 is a problem: the code contains a bug.
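Continuing the valve example, that kind of check can be expressed concretely (the function name and the validity check are my own sketch of what the labeler’s annotation amounts to):

```python
# Per the labeler's annotation, only 2 and 3 are permissible valve settings.
VALID_VALVE_SETTINGS = {2, 3}

def flag_valve_bug(r):
    """Return True if a valve assignment uses an impermissible setting."""
    return r not in VALID_VALVE_SETTINGS

# Later in the code, suppose the valve is set to 4 -- the labeler flags it:
assert flag_valve_bug(4)      # bug: 4 is not a valid setting
assert not flag_valve_bug(3)  # fine: 3 (full flow) is permissible
```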
By telling the AI about the bug, the AI might eventually, at scale, identify patterns of how to detect bugs in code. Notice that this goes well beyond merely examining a particular line of code. The AI is, in effect, being data-trained on how to interpret code and spot portions that might be buggy.
You probably know that one of the reasons popular generative AI doesn’t spew profanity and seems to hew to civil discourse is a strategy known as RLHF, or reinforcement learning from human feedback, which is commonly used today. This is how ChatGPT took the world by storm. AI maker OpenAI chose to use RLHF extensively in its nascent generative AI before releasing it publicly (see my detailed explanation at the link here).
The RLHF strategy is simple.
Here’s how it works. The AI presents something to a human, who is asked to rate it with a thumbs up or a thumbs down. In a conventional setting, suppose the sentence “Please pay attention, thank you” is presented and the human evaluator is to indicate whether it is an appropriate sentence. They’d probably give a thumbs up. If the sentence had said “Hey, idiot, open your damn eyes,” the odds are that a thumbs down would be given, since it sounds insulting.
The same approach can be used when reviewing code. Snippets of code can be presented to a code tutor or code labeler, who is asked to rate the code. A thumbs up or thumbs down alone probably won’t be expressive enough, so they might be asked to enter various details and get into the nitty-gritty.
If you have a bunch of code tutors or code labelers, and you keep presenting them with code and they give their assessments accordingly, the AI can gradually discern patterns of what constitutes good code versus bad code, and can then generate code that is perhaps as good as what humans produce.
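A toy sketch of how such thumbs-up/down ratings might be gathered into preference data (the record layout and the pairing step are illustrative assumptions on my part, not any actual RLHF pipeline):

```python
# Toy sketch: human ratings on code snippets, collected as preference data.
ratings = [
    {"snippet": "r = 2 if n == 0 else 3", "thumbs_up": True,
     "notes": "clear valve setup; permissible values documented"},
    {"snippet": "r = 4", "thumbs_up": False,
     "notes": "impermissible valve setting; likely a bug"},
]

# Preference pairs (preferred, rejected) are the raw material that a
# reward model can learn from during RLHF-style training.
preferred = [r["snippet"] for r in ratings if r["thumbs_up"]]
rejected = [r["snippet"] for r in ratings if not r["thumbs_up"]]
pairs = [(p, q) for p in preferred for q in rejected]
```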
Dare I say it, the AI might even produce better code than humans.
Why?
Humans are human. They make mistakes. They overlook things. They get tired. They can only write code at a certain pace. They design their code based on what they happen to have in mind. A given programmer may not have as extensive coding experience as another. And so on.
Envision generative AI as not getting tired, being less likely to make mistakes, writing code as fast as computing resources allow, and leveraging the identified patterns of how humans, including the best of the best, write code. This could be a lot less expensive in the long run, and could boost consistency, quicken the pace of coding, and bolster quality.
A retort is that AI doesn’t have intuition, doesn’t have the creativity that humans do, and will otherwise be a mindless spewer of code. Heaven help us. For my analysis of such notions, see the link here.
You probably recall the famous scene in the movie Top Gun where the fighter pilot and the high-flying navigator joke that they’ll be headed to truck driving school if they’re no longer allowed to fly.
This raises two provocative or even sneaky questions that I hear among experienced software engineers: (1) Should software developers bail out and switch to another line of work, such as truck driving? (2) Is AI about to become so good at programming that human developers are no longer needed?
Well, first of all, truck driving is also facing the writing on the wall. The advent of autonomous vehicles such as self-driving cars and self-driving trucks is predicted to whittle down the need for human drivers. No sense in jumping from the frying pan into the fire.
Second, we’re still a long way from AI being so adept at programming that we’d want software generated entirely autonomously. That can be done here or there, but by and large we still need human software developers. They’ll be less likely to code from scratch, instead working step by step, say, hand in hand with generative AI.
Doing so at this time can be exasperating for human software developers. The AI is still clunky at times and needs to get better at coding. Those code tutors and code labelers are doing their part. This could democratize coding, vastly expand the applications we might all enjoy, lower the costs of making use of applications, and change the world accordingly.
Will this eliminate all those software developer jobs or aid them? Will this spur the need for more human programmers? Will AI only be proficient as a bottom-feeder in coding?
Then again, haven’t we heard time and time again that programmers will soon be replaced and out of work? That clamor arose when the so-called 4GLs, or fourth-generation languages, appeared, and even earlier, when coding languages such as RPG appeared. Perhaps generative AI is just another in a long line of claims that the sky is falling.
Time will tell.
Traitors or builders of the future: this is an enigma worth discussing, but do so civilly, please, and with decorum, thank you.