Is “vibe coding” with AI gnarly or reckless? Maybe some of both.

SnoopCatt

Ars Centurion
1,112
Subscriptor
Nah, the AI is writing source code just like a human, compiled with the same compiler for the same ISA and hardware. It will be deterministic, and if it isn't, a human could make the same non-deterministic error. Various compiler flags and developer tools that spot things like uninitialized variables or undefined behavior are the standard way to clean up the rest. At worst you can choose something like Swift or Rust, which is designed to catch a lot of that stuff automatically at compile time. There are too many crufty coders out there for bad code not to be a heavily explored problem with long experience dealing with it. One more monkey trying to write Shakespeare isn't going to upset the apple cart.
I probably didn't make my point very clear: the "abstraction" layer between a traditional application and the computer hardware may be complex, but it is consistent and fundamentally explicable.
Whereas the abstraction layer between a prompt to an AI and some lines of code is both inconsistent and inexplicable.
 
Upvote
24 (24 / 0)

Atterus

Ars Tribunus Militum
2,117
I can see this going the same way AI itself has. Hordes of "experts" who simply know how to use an input prompt or toolbox but have minimal actual knowledge. Reinventing the wheel to be square.

Seems great at first, but when things start falling apart, they either delude themselves into believing they are experts or become entirely dependent on the whims of another "expert".

I can say firsthand that these tools are great for very low-level stuff. But you are asking for trouble saying "upgrade my Fortran 77 Ur-code plz" and cheering when the machine merely goes "ding" at the end. Zero clue about what the proper output should be. There are already plenty of tales of people finding out months later that their AI code was garbage.

I tried a few things like that... was not impressed by AI coding... but I was when it pointed me toward the actual solution it seemed incapable of finding itself. So, for some learning or "bouncing ideas", this is a useful... toolbox.
 
Upvote
6 (7 / -1)

iollmann

Ars Scholae Palatinae
904
Funny that you got downvoted for that simple half-fact, half-joke. Must have hit a nerve, huh?
It is certainly true in software optimization. Even most experienced software engineers have little concept of what makes for fast code. They can do high-level algorithmic optimization, but low-level? You still see professors trying to publish better matmul algorithms that reduce the number of multiplications, for a world almost entirely dominated by FMA/FMAC CPUs and GPUs, on which a multiplication is largely free with every add once one gets past heat and chip area. Properly optimized matmul code is throughput-dominated, not latency-dominated.

Here's one: the AMD Zen 5 architecture has a full 512-bit-wide vector ALU pipeline capable of operating on 64 8-bit integers concurrently in a single instruction, and more where it is superscalar. You can get one with 16 cores, and it also has SMT if you want to count that. So you could write single-threaded scalar code to process "one byte at a time" (this is by far the common case), or you could use threads and AVX-512 to compute 16x64 = 1,024 bytes in the same time. It is literally ~1,000x faster on paper, but "oh, microoptimization doesn't really help and compilers are good enough."

To hell they are! If the workload is embarrassingly parallel, you are leaving multiple orders of magnitude of performance on the table with vanilla single-threaded C code, and that, odds are, is the only kind of code an engineer writes.
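
You can feel the gap even from Python. Here's a minimal sketch of mine (not a real benchmark, and most of the gap below is interpreter overhead rather than vector width alone, but the shape of the result is the point):

Code:
import time
import numpy as np

N = 1_000_000
rng = np.random.default_rng(0)
data = rng.integers(0, 256, N, dtype=np.uint8)  # a megabyte of bytes
key = 0x5A

# One byte at a time, the way vanilla scalar code is usually written.
t0 = time.perf_counter()
out_scalar = bytearray(N)
for i, b in enumerate(data):
    out_scalar[i] = b ^ key
t_scalar = time.perf_counter() - t0

# Whole-array op: one compiled C loop, which gets SIMD vectorization.
t0 = time.perf_counter()
out_vector = data ^ key
t_vector = time.perf_counter() - t0

assert bytes(out_scalar) == out_vector.tobytes()
print(f"scalar: {t_scalar:.3f}s  vector: {t_vector:.5f}s  ~{t_scalar / t_vector:.0f}x")

Run that and then tell me again the one-byte-at-a-time habit is free. With real intrinsics and 16 threads, the multiplier only goes up.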
 
Upvote
11 (12 / -1)

JaneDoe

Ars Scholae Palatinae
1,362
Subscriptor
Getting a small project from scratch to 80 or 90 percent "okay" is easy. Most of it you could copy-paste from Stack Overflow and friends. I can see that a llama could do this most of the time. But getting the rest right, or changing an existing non-trivial project, is a different beast.
I see a risk of a shortage of senior software engineers later, as companies replace the tasks juniors grow on with "prompting".
 
Upvote
9 (11 / -2)

cmbasnett

Smack-Fu Master, in training
72
Most people do not understand, even at a high level, what programming a computer entails. I'm finding that younger people I encounter online are not interested in learning programming fundamentals; they just accept whatever Copilot spits out, then throw up their hands and have to ask an adult to fix their remedial programming problems for them. Pretty sad stuff.
 
Upvote
8 (9 / -1)

iollmann

Ars Scholae Palatinae
904
I tried a few things like that... was not impressed by AI coding... but I was when it pointed me toward the actual solution it seemed incapable of finding itself. So, for some learning or "bouncing ideas", this is a useful... toolbox.
I watched an AI coding demo on YouTube by someone working on a little microcontroller for some lights. The AI tool had a verbose option where it would explain what it was doing as it did it. I came away quite impressed, since it self-reported thinking about some non-obvious problems, at least non-obvious to me; I don't work on microcontrollers.

While the tool didn't generate correct code, and the prompt-based process of getting it to fix it didn't work even after the presenter diagnosed the obscure problem, I did come away with the feeling that there might be something here as a tool to at least help lead your early thinking on how to organize the code and what problems you might hit. I think in that role it would definitely be a leg up for a junior engineer, or any engineer working in an unfamiliar area.
 
Upvote
-1 (4 / -5)

copiedright

Ars Centurion
245
Subscriptor++
Writing software is the easy bit. It's maintaining it that's hard. My rule of thumb is that 90% of the lifecycle cost of software is maintenance.

Maintenance is where AI sucks most, because maintenance is when new problems emerge: problems that haven't been learned by a language model yet, and soon, problems created by AI itself.
 
Upvote
21 (21 / 0)

janhec

Ars Scholae Palatinae
783
Subscriptor
Most developers haven't known how computers work for the last 20 years or so anyway
I think this is a valid point, touching on the unavoidable specialization that covers every aspect of manufacturing, of creating useful things from materials. The plate steel of a car is easily described but hard to make well, etc. Fully comprehending a computer means understanding everything from transistors through flip-flops, logic units, ALUs, memory, and storage, to name just a few. Then there's the soft side, ranging from microcode to assembly (which in a way is already quite abstract relative to the hardware), through your perhaps-not-favorite low-level language, to eventually something fully task-oriented, say R or Python or JavaScript, whatever people work most with, where most of the (productive) work gets done.
It would even be of questionable intelligence to want to understand it all (curiosity is another thing), because you might end up spending far less time on your actual job.
AI is special because it looks so much less structured (in use, at least) than assembling car parts or writing a program in a (somewhat) low-level language (think C++ or C#). But the writing is on the wall, since quality issues and hard-to-satisfy demands for code maintainability have already been with us for quite some time. Higher-level languages reduce this problem but increase dependencies and diminish parts of our understanding.
Just think what that does to software costs when every part has to be done by an expensive human.
So, instead of wishing AI-generated software away, focusing on using AI in a more structured way appears to be the way to go, like with every new technology ever. We are just playing around to get started, while conscientious people point out how it might be done better. Of course, AI might eventually fail to prove up to these tasks, reminding me of the (cheaper?) cars of older times that worked when leaving the dealership but were a hassle afterwards.
AI, though, already seems way too big to fail and vanish in a very expensive bubble-puff.
 
Upvote
-1 (1 / -2)

Erbium68

Wise, Aged Ars Veteran
800
Subscriptor
Bear with me a moment:
During WW2, American pilots were faced with a very complex set of engine controls, for boost, richness and a couple of other parameters based on altitude, temperature and desired speed.
Meanwhile, German pilots lucky enough to have a BMW engine in front of them had a device called the Kommandogerät, a hydraulic-mechanical computer that automated all those tasks into more or less a single lever. Despite inferior engines, German pilots were on average more successful than Allied ones. At that point, task automation really started. But if the Kommandogerät or its inputs and outputs happened to get hit by something, the plane was as dead as if the engine had just swallowed a cannon shell. The pilot couldn't take over. Today, engine management systems are such that a minor error in design can destroy an engine just as quickly as a failed oil pump.

I've been through several "revolutions". First, beginning on early 8-bit controllers, from machine code to macroassembler. Then C. Then various "high-level languages", retiring during the J2EE/SQL era. Each time there have been gains in the amount of working program that could be produced, but with tradeoffs in control and efficiency. I once had to take the Coral code of a military application and convert it to assembler, removing all the redundant junk that was causing it to exceed the available runtime. It was a crap compiler. But at least I could compare the output assembler directly to the program's functional description; the complexity was within the capacity of a couple of human beings to deal with.

I would say that as the capability of the design tools increased, more and more time has had to be spent on analysis and debug to ensure things are working as planned.

With AI working to a vague specification, how do you verify and optimise? We are already told that AI is going to cause a US electrical generation crisis so bad that emissions targets have to be forgotten and more oil and gas generation will be needed.

At some point I think a law of diminishing returns will set in, with the unintended consequences of AI outweighing the benefits.
 
Upvote
2 (6 / -4)
I wasn't one of the downvotes, but one reason might be that they aren't confident that the code you're using works more than 95% of the time too.

If you don't understand the code then you're making the first programming class student mistake of thinking because it runs and it worked for one or two test cases then it must be correct. Painful experience has taught us all otherwise.
To be frank, you never have 100% certainty that your code contains no errors, no matter how well you test it. To paraphrase: "there is no code without errors, only insufficiently tested code." But this is obvious to people who write code themselves, and it gets accounted for. If you have no idea about the product code, and no idea about the tests being run, this falls apart.
 
Upvote
-6 (2 / -8)

Centine

Ars Scholae Palatinae
1,016
Subscriptor
I’m not too fussed about this. It’s good that tech is approachable from entry level, for the sake of an education funnel, or for democracy at a larger scale.

It also doesn't mean software as a whole will go to hell. There are many consumers and prosumers who handle a lot of their own DIY, and that doesn't automatically translate into completely dumping engineering standards for "real" projects, bridges, skyscrapers, etc.

Besides, sometimes I’ve reviewed, or read analysis of, some truly horrible spaghetti that’s used in real world businesses. I’m not convinced that this would be any worse 😅
 
Upvote
-6 (1 / -7)

orangedan

Seniorius Lurkius
4
Subscriptor
As a CS professor, I can tell you a surprising number of students "vibe code" without LLMs too - they try some things that they see on Stack Overflow, or from the notes, play with it until it outputs the right thing, then they shrug and hand it in. Helping students debug in office hours, I'm always surprised at how they just try random things instead of approaching problems systematically and with intent. I'm not sure how much this is shifting that status quo, outside of making it even easier to vibe successfully.
 
Upvote
16 (16 / 0)

Tanterei

Wise, Aged Ars Veteran
173
Subscriptor
Security researcher Alfredo Ortega already discovered a case where an AI, given the same inputs, was able to generate the same code (bug-for-bug). https://xcancel.com/ortegaalfredo/status/1897056244474810490 If you're using AI to generate your products, you don't have any "secret sauce". In fact, all your bugs might be discoverable by people who don't have access to your source code.
That's a nice hallucination he got there.
I tried LLAMA2 for code generation (it was Code Llama, as I recall) once in 2023, asking for Python code that would resample a list of tuples, each consisting of a weight and a value. I wanted to see if I could get something better out than what I had created myself.

It yielded: a homework assignment with an empty function containing a "TODO: Implement this function", taken 1:1 from Twitter. At least as far as I could determine, since there was no attribution in the answer.

Granted that was "ancient times" on the LLM time-scale, but as far as I know the problem of attribution still persists.
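
For the curious, the task itself is only a few lines. Here's a minimal sketch of my own (the function name and interface are just my framing of the task, not anything the model produced):

Code:
import random

def resample(pairs, k=None):
    """Draw k values (with replacement) from (weight, value) pairs,
    each value picked with probability proportional to its weight."""
    if k is None:
        k = len(pairs)
    weights = [w for w, _ in pairs]
    values = [v for _, v in pairs]
    return random.choices(values, weights=weights, k=k)

# e.g. resample([(0.7, "a"), (0.2, "b"), (0.1, "c")], k=5)

Which is exactly why getting back an empty TODO stub was so underwhelming.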
 
Upvote
7 (7 / 0)

lost

Ars Scholae Palatinae
1,414
Subscriptor++
I do not think the future of AI coding is so clear.

On the one hand, most "risks" mentioned in the article are not risks. Even in enterprise software companies, the AI-coded part will not be worse than the human-coded part due to "not being understood by the original programmer", because most often the person who actually gets to maintain or fix an issue is not the person who originally coded it anyway, so they must try to understand someone else's code in any case. And AI code tends to be written more clearly than human code, with lots of comments, mostly using better practices/patterns than the average human programmer. Also, any confabulation that the AI makes (a frequent one is using nonexistent functions) is immediately revealed by compiling.

That will make "AI coding" very viable in the near future: regardless of whether it is "AI-assisted" or full "AI vibe" coding, both will be comparable to pure human-written code. Personally, I have been using "AI-assisted" coding for quite some time now.

BUT... it is questionable whether AI coding, and particularly "AI vibe" coding, is a future that can be sustained just by using LLMs. Current LLMs are trained on human-generated text about coding, problems, and practices: human-written content/books/..., human comments on sites like StackExchange/Quora/Reddit/..., etc. And when/IF "AI vibe" coding becomes predominant, there will be a lack of humans posting about new problems/issues/technologies... so LLMs will have no source from which to learn about new stuff.

Once we reach "AGI", or whatever they call general AI (one that really can think like a human), then "AGI coding" will really be the future. But I do not see "LLM coding" as a clear future; at some point it will dry up as its knowledge becomes obsolete.
 
Upvote
-12 (3 / -15)

Kavinsky

Smack-Fu Master, in training
17
...

Other than that, I don't know. ChatGPT is fine for hobbyist programming in any form and it does have some uses in a professional environment. Just as long as it's not blindly used in the professional environment. A professional has to treat the code output like any other code they did not write but are accountable for. They must review it for accuracy and quality.
Except it's not fine for hobbyist programming, because, by design, it will lock people into never progressing past the sort-of-right output of the AI tool.

It leads to people never learning, which in turn would mean things like the Linux kernel will never get invented in the future.
 
Upvote
20 (22 / -2)

Tanterei

Wise, Aged Ars Veteran
173
Subscriptor
I’m not too fussed about this. It’s good that tech is approachable from entry level, for the sake of an education funnel, or for democracy at a larger scale.

It also doesn't mean software as a whole will go to hell. There are many consumers and prosumers who handle a lot of their own DIY, and that doesn't automatically translate into completely dumping engineering standards for "real" projects, bridges, skyscrapers, etc.

Besides, sometimes I’ve reviewed, or read analysis of, some truly horrible spaghetti that’s used in real world businesses. I’m not convinced that this would be any worse 😅
Oh, the "democracy" approach: pray tell, what is that supposed to mean?

You are ignoring a fundamental tenet of a large part of the business world that is behind all the "real" projects: reduce expenditures, especially on personnel. The latter translates directly into a drive to reduce the time spent on a task.
My observation w.r.t. this reduction is that, prior to the rise of LLM-generated code, the first thing to be culled was tests for correctness (after all: it works for the nominal case we thought of, so it must work for all inputs we permit, right?). Now the pressure will be to reduce the time spent actually writing and understanding the code.

StackOverflow and other sites will be flooded with requests to fix code which the poster cannot explain, once they hit a wall with the LLM.
 
Upvote
10 (11 / -1)

McTurkey

Ars Tribunus Militum
1,817
Subscriptor
The reason LLMs for coding are interesting is that code is an abstraction for math, and math can ultimately be tested and validated. Even so, whenever I've tried using an LLM to produce code that does something I don't already know how to do, it doesn't work. If I'm going to have to go through the process of feeding error messages back in and debugging alongside the LLM, I'd rather learn how to do it myself. Why? Because then I have the knowledge and skills for the future, and can bypass the translation layers.

I'm all for programming abstractions that allow us to more easily produce highly complex programs, but this methodology is not rules-based, and cannot be reduced to a consistently reproducible and solvable mathematical formula. This is programming by throwing rubber ducks at the monitor and hoping the computer does something useful with them.

Maybe someday it leads to the holodeck experience, where you can create a fully interactive experience just by describing it out loud. I have a great deal of respect for what Karpathy has done over the years, and I recognize that his broader vision of the AI future is plausible. I just don't think the current generation of LLMs is anything more than a novelty at this point. To me, this feels like the wrong set of tools to explore the early framing of that paradigm, and I worry that we're going to end up baking a lot of really bad assumptions into our future designs as a result of using these tools. But then again, those are just the vibes I get from all this.
 
Upvote
3 (5 / -2)

AWilco

Wise, Aged Ars Veteran
142
Subscriptor++
I'd be interested to know how this would work with Test-Driven Development: you write a test, then ask an AI to write the software to pass that test, feeding its own error messages back into itself in a loop until it passes.

This would basically be structured vibe coding, but at the end what you do have is a set of tests that pass. This might be what you need to validate the code against requirements and get it approved.

I'm not an expert in TDD. To me it seems you can never write tests to cover every possible error; you have to make some assumptions about how something is logically done. So if you write a spec that says "function should return 2x the input" and test that f(2) = 4, that sounds good, but if you don't trust that the person implemented it as a mathematical operation rather than a switch statement, it doesn't work. Maybe you can write fuzzy tests that run over a long period of time. I wonder if you just end up coding all the logic anyway to ensure full coverage. But I'm interested in the views of people with a lot of experience of TDD.
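
To make the worry concrete, here's a hypothetical pytest-style sketch (the names are mine) of an implementation that passes the single example test but fails a fuzzy/property-style one; libraries like hypothesis do this sort of thing far more rigorously:

Code:
import random

def double(x):
    # Pathological implementation: a lookup table instead of 2 * x.
    # It satisfies the one example the spec writer thought of.
    return {2: 4}.get(x, 0)

def test_example():
    assert double(2) == 4          # passes, but proves almost nothing

def test_property():
    # Check the actual spec ("return 2x the input") on many random inputs.
    for _ in range(1000):
        x = random.randint(-10**6, 10**6)
        assert double(x) == 2 * x  # fails almost immediately here

A property test like the second one is cheap to write when the spec is a formula, but once the spec is "do what the customer meant", you're back to encoding the logic somewhere.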
 
Upvote
6 (6 / 0)

denemo

Ars Scholae Palatinae
1,062
Subscriptor++
if you're going to put your name to it you need to be confident that you understand how and why it works—ideally to the point that you can explain it to somebody else.

I would argue that if you can't explain it to someone else then you fundamentally don't understand how and why the code works.
 
Upvote
14 (14 / 0)

Uncivil Servant

Ars Scholae Palatinae
4,028
Subscriptor
As a programmer, I love this notion that one might be able to craft and test law using a legal programming language of some sort. This is because, from a professional point of view, the effort and rigor used to craft legislation is laughably poor, and frankly shocking. It could use some rigor and, well, unit tests.

Please do start with tax law, and a few general failure cases such as: anything that triggers a tax cliff is a bug.

Republicans make great hay over the law of unintended consequences, but for software engineers, dealing with those surprises is all in a day's work. It is not actually necessary to ship problematic legislation. "We'll fix it after passing the bill, in maybe 10 years, if the right lobbyists pay." It's scandalous what passes for rigor in law.

Oh yes, that's basically the entire field of public policy analysis. Had I been wiser as a young man, I probably would have gone into tax policy instead of healthcare policy...in the USA.
 
Upvote
4 (4 / 0)

triplebonk

Seniorius Lurkius
17
Subscriptor
The companies that hire cargo-cult AI devs to replace real devs will be devoured by the wise who skip this trend.
My older brother spent a good chunk of his life writing in assembly language.

I started a career in data analysis with SQL. When I explained what SQL was to my brother it was as if AI had learned from him and generated most of these comments. I never once wrote in assembly language, yet was paid well for my code.

Today’s AI is imperfect but it’s improving rapidly. It may be years away from writing the best code, but it can already write crappy code faster than a human. Time for a new paradigm.
 
Upvote
-11 (1 / -12)

had to change your name

Wise, Aged Ars Veteran
153
Subscriptor
As a CS professor, I can tell you a surprising number of students "vibe code" without LLMs too - they try some things that they see on Stack Overflow, or from the notes, play with it until it outputs the right thing, then they shrug and hand it in. Helping students debug in office hours, I'm always surprised at how they just try random things instead of approaching problems systematically and with intent. I'm not sure how much this is shifting that status quo, outside of making it even easier to vibe successfully.
As a former CS student (2008 era) with "fond" memories of late nights in the lab, quietly sobbing into my keyboard... it came down to a lack of comfort with the language syntax and programming in general. The learning curve was incredibly steep, and a ton of concepts were rapidly thrown at me. And there wasn't a lot of higher-level information about how the concepts fit together. So I would hit a problem and start randomly trying things from different classes, because I didn't know any better.
 
Upvote
7 (7 / 0)
In what way is this morally different from using AI to write a submission to a judge?
No serious developer would ever testify and promise the code is bug-free :p

And unless approved by the higher-ups, any dev who just accepts AI code will bear the full burden of any gross errors it causes in the end, maybe not the first time. While AI can generate code, unless the instructions are detailed enough, or the problem common enough, for the AI to capture all the requirements, the code will be incomplete.

For prototypes and creative things like simple games that might be completely acceptable, but for a checkout in an online shop, if the code randomly sets the wrong price, or forgets or adds items in the cart, that will very quickly be a major problem. The dev responsible would at minimum have to get it fixed, and if it then becomes obvious that they do not understand the code, they might get fired, and possibly even be accused of fraud and face a claim for damages from the company.
 
Upvote
6 (7 / -1)

J.King

Ars Praefectus
4,138
Subscriptor
My older brother spent a good chunk of his life writing in assembly language.

I started a career in data analysis with SQL. When I explained what SQL was to my brother it was as if AI had learned from him and generated most of these comments. I never once wrote in assembly language, yet was paid well for my code.

Today’s AI is imperfect but it’s improving rapidly. It may be years away from writing the best code, but it can already write crappy code faster than a human. Time for a new paradigm.
I'm not sure that's comparing apples to apples. SQL is just an abstraction (albeit a very high-level one). It's a formal language with rules, designed for one specific purpose, which is not readily applicable to other problem domains. LLMs (at least as they exist today) are not designed to write code, and will probably never do so reliably. And this at a time when, increasingly, crappy code just isn't cutting it anymore. I'm not at all convinced the paradigm is shifting, not if you need code good enough that you've needed trained programmers in the past.
 
Upvote
12 (12 / 0)

Natrous

Seniorius Lurkius
36
I don't know about "vibing", but any comment decrying users of Copilot as 'script kiddies' comes from someone who hasn't used it.

Sure, you can be a script kiddie, but you can also make quite advanced projects much faster with it.

You don't have to pick "no thought, just vibe" or "whole thing in notepad" fer christ's sake. There's SOOOO much in between.

Are we back to the argument that autocomplete isn't for professionals? Gimme a break. And for people who know a language but, say, haven't used it in a decade: holy shit, does Copilot get you back up to usable speed quickly.

No, after 25 years I don't remember the syntax details of every language I've ever used, but I can evaluate whether the code I get back has proper error checking, covers my use cases, and works. Would I do this with a language I don't know at all? Nope.
 
Upvote
-6 (4 / -10)

AndrewZ

Ars Legatus Legionis
11,607
Good lord. At best, AI coding is a parlor trick that looks clever but absolutely falls apart with any complexity. Save yourself a little time by having it create some boring templates? Sure, why not step onto that slippery slope to save a few hours. But add any complexity, and you haven't bothered to understand your code, and it won't work as soon as any assumptions change.

And who's going to use this stuff? Exactly the people who maybe aren't the best coders, and who maybe should put in the extra time to understand and test their code. But now they have a tool that saves them time and saves them from having to think through the solution. They don't see the trap here: people who don't understand code are asking AI, which almost never writes correct code, to write code.

I think we can expect to see public code failures in the coming months.

What makes this worse is that the upcoming generation is not learning fundamentals, including coding. We are hearing stories about kids sleepwalking through CS classes. Those people will definitely use AI coding. And it definitely won't give the desired results.

The world is going to hell at a very fast pace.

AI coding is an abomination.
 
Upvote
11 (13 / -2)