Accepting AI-written code without understanding how it works is growing in popularity.
Yeah but if Facebook programs their next algorithm feeding me ads that way or EA codes their next FIFA installment that way, I’m sure I couldn’t care less.

Raise your hand if you want to get on a plane with software designed by vibe coding.
Raise your hand if you want your vitals monitored during surgery by software designed by vibe coding.
etc...
We're very rapidly headed for a future where a lot of low quality AI generated code is running in a frightening number of places.
You can vibe with it (not what I do)...
...or you can use one of the new "AI IDEs" to help understand/study existing undocumented code better in an automatic whole-project context.
Look, I'm not a coder, I'm a network engineer. But we have the same types floating around. They figure out how to make an EVPL, and they save the little script and punch it in. 95% of the time it's fine, it works. But the 5% of the time they need to step out of the script a little and maybe pop two tags, or swap tags, or whatever... it doesn't work and they have no idea why. Everything will look absolutely fine in the router, because it doesn't check that you have the same VLAN on both ends. And because they don't understand the actual commands they've entered, how they affect the traffic, and how the router interprets the traffic, they can't spot it, they can't fix it, and frankly it'd be faster for me to just do it right the first time than to get roped into fixing it and soothing the customer after they've spent an hour on the phone with the vibe network engineer getting nowhere.

But if you don't understand the underlying logic of the system itself, I don't see how you can possibly hope to have your intended effect on it, whether you're editing source code or legal code.
"This provides a natural boundary for vibe coding's reliability—the code runs or it doesn't."
Hoo boy. Well, no.
The code runs, or it doesn't. Or it runs, until it doesn't. Or it runs, but it creates the same directory 200 times instead of 200 directories. Or it runs, but it creates the directories but doesn't unzip the files. Or it runs, and it creates the directories, and it unzips the files, but not into the directories (good luck cleaning up the mess). Or it runs, and it creates the directories, and it unzips the files into them, and then it deletes a random file somewhere else. Or it runs, creates the directories, unzips the files, then orders 500 rolls of toilet paper from Amazon...
And it took many of us down a notch when the lab assignment we turned in, that we thought worked perfectly, didn't in some of the edge cases the TAs' software tested for. And that was long before LLM AI existed.

I wasn't one of the downvotes, but one reason might be that they aren't confident that the code you're using works more than 95% of the time either.
If you don't understand the code, then you're making the classic first-programming-class mistake: thinking that because it runs, and it worked for one or two test cases, it must be correct. Painful experience has taught us all otherwise.
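A classic illustration of that lesson, as a minimal sketch (the midpoint example is mine, not the poster's): a function that passes the one or two obvious tests yet still harbors an edge-case bug.

```cpp
// Passes the casual "it runs, it works" check, yet is still wrong.
#include <cassert>
#include <climits>

int midpoint(int lo, int hi) {
    return (lo + hi) / 2;  // fine for small inputs...
}

int main() {
    assert(midpoint(0, 10) == 5);  // the one or two test cases that "prove" it
    assert(midpoint(2, 8) == 5);
    // ...but lo + hi overflows near INT_MAX, which is undefined behavior:
    //     midpoint(INT_MAX - 1, INT_MAX);
    // The classic fix is lo + (hi - lo) / 2.
    return 0;
}
```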
Only if it's AIRFORCE ONE.

The search for silver bullets continues unabated ...
Completely missed is the fact that all too often significant elements exist in the edge cases. Without understanding the rest, you're screwed when trying to resolve these.
I mean if you're okay with that, why not have AI teach your kids everything they need to know? The best part is you don't even have to check their work. Just surrender to the flow, man.
Agile methodologies were supposed to solve many problems, but didn't, because they were used incorrectly and because project sponsors read "functional software faster", stopped reading, and that somehow came to mean "cheaper" in their heads. Ha, ha, HA!
AI is just another tool; why is everyone in such a hurry to turn their brains off?
As I stated in another thread, software development isn't a shortcut to avoid engineering, it IS engineering.
Would you fly in a plane that was designed by an AI without verification by actual aircraft engineers?
What do you mean? Putin understands the American government very well.

Putting people in charge of running critical systems who don't understand a fucking thing about them seems to be the trend these days, yeah
If we can have vibe-based programming, we can have vibe-based legislation too. Just select random citizens (a method known as Lottocracy) to write laws, except they can ask ChatGPT to write the laws for them. Then give the resulting laws to professional lawyers for review (the analog of the compiler); if something is wrong, send their feedback to ChatGPT, in the hope it will fix the problematic places. /s
Because lobbyists have some kind of a plan in their mind; they optimize the system in the direction of powerful corporations. In contrast, ChatGPT has no fricking idea what it is doing and will write random meaningless laws.

How is tonight different from all other nights?
No, really, this is the system we already have if you just replace "ChatGPT" with "lobbyists".
That actually sounds like a decent alternative to what we got. No /s
This is what libraries like Intel Performance Primitives or Accelerate.framework are for. They are faster and much better tested.

For example, if I want to apply a 5x5 kernel to an array of data, I need to deal with the edge cases, where the indexes need to be checked to see if they're outside the bounds (and what to do if they are). I'd much rather "describe" the problem to the compiler and have it generate the required code.
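For what it's worth, a minimal sketch of the edge handling being described, clamping out-of-bounds indexes to the border (just one of several standard policies, and purely illustrative):

```cpp
// 5x5 box blur with clamp-to-edge border handling.
#include <algorithm>
#include <vector>

std::vector<float> blur5x5(const std::vector<float>& img, int w, int h) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx) {
                    // The edge case: clamp neighbors that fall outside the image.
                    int sx = std::clamp(x + dx, 0, w - 1);
                    int sy = std::clamp(y + dy, 0, h - 1);
                    sum += img[sy * w + sx];
                }
            out[y * w + x] = sum / 25.0f;
        }
    return out;
}
```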
This sort of thing can happen to humans too. I was for a while responsible for writing tests for some PowerPC SIMD code. For the 3.2 people out there who have done that, you will know that running off the end of the array is super easy to do by mistake, even for experts. We fixed this by exhaustive brute-force testing against buffers at various alignments against guard pages. It was exhaustingly effective at finding bugs, with emphasis on exhausting. It can be super depressing to be pulling bugs out of your code day after day for a week on a simple function.

Cursor, when given enough context, has gotten pretty smart. Most of the time it's very good, even clever, but then every so often it 'fixes' a failing test by adding test-environment-only logic.
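A rough sketch of the guard-page trick described above (POSIX-specific; the details are my assumption, not the poster's exact harness): place the buffer so it ends flush against an unmapped page, so any off-the-end read faults immediately.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cassert>
#include <cstddef>

// Returns `len` usable bytes whose end abuts a PROT_NONE guard page,
// so reading even one byte past the end crashes the test on the spot.
char* alloc_with_trailing_guard(std::size_t len) {
    const std::size_t page = static_cast<std::size_t>(sysconf(_SC_PAGESIZE));
    const std::size_t span = ((len + page - 1) / page + 1) * page;  // data + guard
    void* mem = mmap(nullptr, span, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(mem != MAP_FAILED);
    char* guard = static_cast<char*>(mem) + span - page;
    int rc = mprotect(guard, page, PROT_NONE);  // trailing guard page
    assert(rc == 0);
    (void)rc;
    return guard - len;  // the buffer's last byte touches the guard
}
```

Run the SIMD routine over buffers of every length and alignment, and a stray vector load past the end turns into an immediate segfault instead of a silent pass.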
The data I'm processing can have "missing" values, which require falling back to smaller kernels and then to a separate routine that works in the face of missing data. I don't imagine a stock library will handle that case (and why would it? that's not a standard convolution operation).
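One standard way to cope with that is sometimes called normalized convolution: skip the missing samples and renormalize by the weight actually used. A minimal sketch follows; treating NaN as the missing-value marker is my assumption, not necessarily this poster's format or their exact fallback scheme.

```cpp
#include <cmath>
#include <vector>

// 5x5 mean filter that skips missing (NaN) samples and renormalizes.
std::vector<float> blur5x5_masked(const std::vector<float>& img, int w, int h) {
    std::vector<float> out(img.size(), NAN);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int used = 0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                    float v = img[sy * w + sx];
                    if (std::isnan(v)) continue;  // missing sample: skip it
                    sum += v;
                    ++used;
                }
            if (used > 0) out[y * w + x] = sum / used;  // renormalize by count
        }
    return out;
}
```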
So there is another problem: if you don't preserve the prompts that generated the code, how maintainable is this code? Is it one-and-done for the AI, with humans maintaining it forevermore, or are we talking about full synthesis here?

Horrifying. And I am completely serious and in earnest.
"risky trade-offs in code quality, maintainability, and technical debt"
Agents absolutely can. One way to do it is, as I hinted earlier, to start with high-level docs and examples, then divide and conquer. Like a human does.

While I can see the appeal of the ease of AI coding, I feel it is hopelessly naive and a dereliction of duty to also throw away your unit tests. It is probably the case that the AI can write those too.
I guess this is why people try to create more foolproof languages, like by creating Rust. The ultimate goal is a language such that if the code compiles, then it must work, and if it is broken, then compilation should fail. Under such conditions you can hire less educated and experienced programmers. At the limit, literal monkeys randomly hitting keys, while the poor compiler allows only those random code changes from the monkeys that, by lucky coincidence, make sense.

Question:
I am now retired, after 45 years programming in 'C' and latterly C++ on Unix, then Linux, at the systems level.
Being retired, I have no interest in playing with AI tools, but my question to those who have is this.
Probably 40% of what I programmed within any specific program was code that didn't actively move you towards whatever the solution was, but was rather error checking (inputs, intermediate result sets, final outputs, resource allocation/opens/closes, sub-process execution checks, etc.)... so do these AI tools give you the full enchilada (i.e. with error checking), or just some bare-bones slab of code that will fail with the slightest/smallest edge case or error?
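To make that ratio concrete, a small hedged sketch (the names and the task are invented for illustration): the "useful" work is roughly one line, and everything else is the checking the poster is describing.

```cpp
#include <cstdio>

// Copy the first line of `src` to `dst`. One line of "real" work;
// the rest is error checking on open, read, write, and close.
bool copy_first_line(const char* src, const char* dst) {
    std::FILE* in = std::fopen(src, "r");
    if (!in) return false;                    // open failed
    char buf[256];
    if (!std::fgets(buf, sizeof buf, in)) {   // read failed or empty file
        std::fclose(in);
        return false;
    }
    std::FILE* out = std::fopen(dst, "w");
    if (!out) {                               // open failed
        std::fclose(in);
        return false;
    }
    bool ok = std::fputs(buf, out) >= 0;      // the actual work
    ok = (std::fclose(out) == 0) && ok;       // closing can fail too
    std::fclose(in);
    return ok;
}
```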
Bluck
I would like to think my husband and I will be employable forever thanks to that, but I don’t really believe it. AI will reach a point where even its C++ is better.

all the C++ guys are dead
There are sparse libraries out there for BLAS, for example. If your convolution is locally rectangularly dense, then these things should still work. vImage, for example, has options to turn off its own internal multithreading if you would like to set up your own for the dense segments. It can also render out of the middle of a larger image structure and doesn't require an opaque data container; it just needs a pointer, a row stride, and a rectangular size.
And let's not get into the hairball of multithreaded programs, where you need to protect access to common data structures efficiently.
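A minimal sketch of the kind of protection meant here (a contrived counter, not anyone's real code): the unsynchronized version compiles and "runs", but quietly loses updates under load.

```cpp
#include <mutex>
#include <thread>

std::mutex m;
long total = 0;  // shared data structure (here, just a counter)

void add_many() {
    for (int i = 0; i < 100000; ++i) {
        std::lock_guard<std::mutex> lock(m);  // delete this line: still compiles, silently wrong
        ++total;
    }
}

int main() {
    std::thread a(add_many), b(add_many);
    a.join();
    b.join();
    // With the lock, total == 200000 on every run; without it, some
    // smaller number that varies from run to run.
    return 0;
}
```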
Or ones who realize that, historically, C and C++ are dangerous for anybody or anything to write in.
Depends on what you write, and on the software development pipeline you design for your agents. The limit is your creativity.
As a programmer, I love this notion that one might be able to craft and test law using a legal programming language of some sort. This is because, from a professional point of view, the effort and rigor used to craft legislation is laughably poor, and frankly shocking. It could use some rigor and, well, unit tests.

Almost all of my formal education in programming comes from law classes.
Well, at least in CS, they will do what they always did, since they are already having to solve the find-the-work-of-the-Bard-among-a-million-monkeys problem: write a pile of unit tests and see who passes. They will not find it hard.

Naturally, I have played with AI in summarizing papers and helping explain concepts. It is not great. It gets at least 1 out of every 4 questions wrong in some way. But! If you were not really paying attention, its answers look right. And students in our forum are using it and getting things wrong. I cannot imagine how professors are going to deal with this.
The judge can exact vengeance. Doomed airline passengers, not so much.

In what way is this morally different from using AI to write a submission to a judge?
I think there is an important difference between using a traditional software application to program a computer versus using an AI tool to code. Traditional software is deterministic: you give it the same instructions and the computer will do the same thing each time. Even if you don't "know" the programming language, you could (with sufficient time and patience) read and comprehend the code.

Before too long, people made useful software applications that let non-coders utilize computers easily, no programming required. Even so, programmers didn't disappear; instead, they used applications to create better and more complex programs. Perhaps that will also happen with AI coding tools.
See also: Biology.

A world where literally nobody understands the systems we need and depend on will be unbounded in harm. This is a path to incredibly dark things.
Nah, the AI is writing source code just like a human, compiled using the same compiler on the same ISA and hardware. It will be deterministic, and if it isn't, a human could make the same non-deterministic error. Various compiler flags and developer tools that spot things like uninitialized variables or undefined behavior are standard tools to clean up the rest. At worst you can choose something like Swift or Rust, which is designed to catch a lot of that stuff automatically at compile time. There are too many crufty coders out there for bad code not to be a heavily explored sector with long experience dealing with it. One more monkey trying to write Shakespeare isn't going to upset the apple cart.
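For example (my toy snippet, not the poster's): both bugs below are the kind those standard tools catch, e.g. building with g++ or clang++ using -Wall -Wuninitialized at compile time and -fsanitize=undefined at run time.

```cpp
#include <climits>
#include <cstdio>

int main() {
    int x;            // never initialized
    if (x > 0)        // uninitialized read: -Wuninitialized (or MemorySanitizer) flags this
        std::puts("positive");

    int big = INT_MAX;
    int y = big + 1;  // signed overflow: UBSan reports this at run time
    std::printf("%d\n", y);
    return 0;
}
```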
But an AI is a figurative black box that no one can properly explain, and, more importantly, it doesn't always generate the same output for the same input. It seems inevitable that anything but the most straightforward code generated by an AI will have something inexplicable in it, something that makes you ask "Why is that there? What does that do?"