Accepting AI-written code without understanding how it works is growing in popularity.
> Nah, the AI is writing source code just like a human, and it's compiled using the same compiler, the same ISA, and the same hardware. It will be deterministic, and if it isn't, a human could make the same non-deterministic error. Various compiler flags and developer tools that spot things like uninitialized variables or undefined behavior are standard tools to clean up the rest. At worst you can choose something like Swift or Rust, which is designed to catch a lot of that stuff automatically at compile time. There are too many crufty coders out there for bad code not to be a heavily explored sector with long experience dealing with it. One more monkey trying to write Shakespeare isn't going to upset the apple cart.

I probably didn't make my point very clear - the "abstraction" layer between a traditional application and the computer hardware may be complex, but it is consistent and fundamentally explicable.
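To make the quoted point about compiler flags concrete, here's a minimal sketch (the function is invented for illustration): GCC's `-Wall`/`-Wuninitialized` warnings flag the uninitialized read below at compile time when optimizing, and Clang's MemorySanitizer (`-fsanitize=memory`) catches it at run time.

```c
#include <stdio.h>

/* Invented example of the kind of bug those tools catch:
 * `sum` is read before it is ever initialized, which is
 * undefined behavior. */
int sum_first_n(int n) {
    int sum;                  /* bug: never initialized to 0 */
    for (int i = 1; i <= n; i++)
        sum += i;             /* reads an indeterminate value */
    return sum;
}

int main(void) {
    printf("%d\n", sum_first_n(5));
    return 0;
}
```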
> Most developers haven't known how computers work for the last 20 years or so anyway

Funny that you got downvoted for that simple half-fact half-joke. Must have hit a nerve, huh?
> Funny that you got downvoted for that simple half-fact half-joke. Must have hit a nerve, huh?

It is certainly true in software optimization. Even most experienced software engineers have little concept of what makes for fast code. They can do high-level algorithmic optimization, but low level? You still see professors trying to publish matmul algorithms that reduce the number of multiplications, in a world almost entirely dominated by FMA/FMAC CPUs and GPUs, on which multiplications are largely free with every add once one gets past heat and chip area. Properly optimized matmul code is throughput-dominated, not latency-dominated.
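For a concrete picture of the FMA point, a minimal sketch (the naive loop order is mine, not a tuned kernel): the multiply and add of the inner loop fuse into a single `fmaf` operation, so eliminating multiplications alone saves next to nothing; performance hinges on how many fused operations per cycle the hardware can sustain.

```c
#include <math.h>

/* Inner loop of a naive matmul over row-major n*n matrices.
 * fmaf() computes a*b + c as one fused operation; on FMA
 * hardware the multiply is effectively free with the add, and
 * performance is bounded by how many of these the pipeline can
 * retire per cycle (throughput), not by the latency of any
 * single one. (Real kernels also block for cache; omitted.) */
void matmul(int n, const float *a, const float *b, float *c) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            float acc = 0.0f;
            for (int k = 0; k < n; k++)
                acc = fmaf(a[i * n + k], b[k * n + j], acc);
            c[i * n + j] = acc;
        }
}
```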
> I watched an AI coding demo on YouTube for someone working on a little microcontroller for some lights. The AI tool had a verbose option where it would explain what it was doing as it did it. I came away quite impressed, since it self-reported thinking about some non-obvious problems, at least to me - I don't work on microcontrollers.

I tried a few things like that... I was not impressed by AI coding... but I was when it pointed me toward the actual solution it seemed incapable of finding itself. So, for some learning or "bouncing ideas", this is a useful... toolbox.
> Most developers haven't known how computers work for the last 20 years or so anyway

I think this is a valid point, touching on the unavoidable specialization that covers all aspects of manufacturing - creating useful things from materials. The plate steel of a car is easily described but hard to make well, etc. Fully comprehending a computer means understanding everything from transistors through flip-flops, logic units, ALUs, memory, and storage, to name just a few. Then there's the soft side, ranging from microcode to assembly (which in a way is already quite abstract relative to the hardware), then your perhaps-not-favorite low-level language, and eventually something fully task-oriented - say R or Python or JavaScript, whatever people work with most, where most of the (productive) work gets done.
> I wasn't one of the downvotes, but one reason might be that they aren't confident that the code you're using works more than 95% of the time too.

To be frank, you never have 100% certainty that your code contains no errors, no matter how well you test it. To paraphrase: "there is no code without errors, only insufficiently tested code". But this is obvious to people who write code themselves, and it gets accounted for. If you have no idea about the product code, and no idea about the tests being run, this falls apart.
If you don't understand the code, then you're making the first-programming-class mistake of thinking that because it runs, and it worked for one or two test cases, it must be correct. Painful experience has taught us all otherwise.
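A toy illustration of that trap (the function is invented here, not from the thread): this passes a couple of test cases and is still wrong.

```c
#include <assert.h>
#include <stdbool.h>

/* Buggy on purpose: only checks divisibility by 2 and 3,
 * so any composite with smallest factor >= 5 slips through. */
bool is_prime(int n) {
    if (n < 2) return false;
    return n % 2 != 0 && n % 3 != 0;
}

int main(void) {
    assert(is_prime(7));       /* passes */
    assert(!is_prime(9));      /* passes: 9 = 3 * 3 */
    /* assert(!is_prime(25)); would abort: 25 = 5 * 5,
     * yet the function reports it as prime. */
    return 0;
}
```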
> Security researcher Alfredo Ortega already discovered a case where an AI, given the same inputs, was able to generate the same code (bug-for-bug). https://xcancel.com/ortegaalfredo/status/1897056244474810490 If you're using AI to generate your products, you don't have any "secret sauce". In fact, all your bugs might be discoverable by people who don't have access to your source code.

That's a nice hallucination he got there.
> Other than that, I don't know. ChatGPT is fine for hobbyist programming in any form, and it does have some uses in a professional environment. Just as long as it's not blindly used in the professional environment. A professional has to treat the code output like any other code they did not write but are accountable for. They must review it for accuracy and quality.

Except it's not fine for hobbyist programming, because, by design, it will lock people into never progressing past the sort-of-right output of the AI tool.
Oh the "democracy" approach - pray tell what is that supposed to mean?I’m not too fussed about this. It’s good that tech is approachable from entry level, for the sake of an education funnel, or for democracy at a larger scale.
It also doesn’t mean software as a whole will go to hell. There are many consumer and prosumers that handle threat own DIY, and that doesn’t automatically translate into completely dumping engineering standards for “real” projects, bridges, skyscrapers etc.
Besides, sometimes I’ve reviewed, or read analysis of, some truly horrible spaghetti that’s used in real world businesses. I’m not convinced that this would be any worse![]()
If you're going to put your name to it, you need to be confident that you understand how and why it works - ideally to the point that you can explain it to somebody else.
> It's actually called sortition.

Correct me if I'm wrong, but both words mean the same thing.
As a programmer, I love this notion that one might be able to craft and test law using a legal programming language of some sort. From a professional point of view, the effort and rigor used to craft legislation are laughably poor, frankly shocking. It could use some rigor and, well, unit tests.
Please do start with tax law, along with a few general failure cases - such as: anything that triggers a tax cliff is a bug (see the sketch below).
Republicans make great hay over the law of unintended consequences, but for software engineers, dealing with those surprises is all in a day's work. It is not actually necessary to ship problematic legislation on the theory of "we'll fix it after passing the bill, in maybe 10 years, if the right lobbyists pay." It's scandalous what passes for rigor in law.
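A minimal sketch of what such a "unit test for legislation" might look like, with an entirely invented tax-and-benefit schedule: a benefit that vanishes at a hard income threshold makes net income non-monotonic in gross income, and the assertion fires right at the cliff.

```c
#include <assert.h>

/* Hypothetical schedule, invented for illustration: a flat 20%
 * tax plus a $2,000 benefit that is withdrawn entirely once
 * gross income exceeds $30,000 - a classic "cliff". */
double net_income(double gross) {
    double benefit = (gross <= 30000.0) ? 2000.0 : 0.0;
    return gross * 0.80 + benefit;
}

/* "Unit test" for the legislation: earning one more dollar must
 * never leave you with less money overall, i.e. net income must
 * be non-decreasing in gross income. This assert aborts at
 * gross = $30,000, exposing the cliff as a bug. */
int main(void) {
    for (double gross = 0.0; gross < 60000.0; gross += 1.0)
        assert(net_income(gross + 1.0) >= net_income(gross));
    return 0;
}
```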
> The companies that hire cargo-cult AI devs to replace real devs will be devoured by those wise enough to skip this trend.

My older brother spent a good chunk of his life writing in assembly language.
> As a CS professor, I can tell you that a surprising number of students "vibe code" without LLMs too - they try some things that they see on Stack Overflow, or from the notes, play with it until it outputs the right thing, then they shrug and hand it in. Helping students debug in office hours, I'm always surprised how they just try random things instead of approaching problems systematically and with intent. I'm not sure how much this is shifting that status quo, outside of making it even easier to vibe successfully.

As a former CS student (2008 era) with "fond" memories of late nights in the lab, quietly sobbing into my keyboard... it came down to a lack of comfort with the language syntax and programming in general. The learning curve was incredibly steep, and a ton of concepts were thrown at me rapidly. And there wasn't a lot of higher-level information about how the concepts piece together. So I would hit a problem and start randomly trying things from different classes, because I didn't know any better.
> In what way is this morally different from using AI to write a submission to a judge?

No serious developer would ever testify and promise that the code is bug-free.
> My older brother spent a good chunk of his life writing in assembly language. I started a career in data analysis with SQL. When I explained what SQL was to my brother, it was as if AI had learned from him and generated most of these comments. I never once wrote in assembly language, yet was paid well for my code.
>
> Today's AI is imperfect, but it's improving rapidly. It may be years away from writing the best code, but it can already write crappy code faster than a human. Time for a new paradigm.

I'm not sure that's comparing apples to apples. SQL is just an abstraction (albeit a very high-level one). It's a formal language with rules, designed for one specific purpose which is not readily applicable to other problem domains. LLMs (at least as they exist today) are not designed to write code, and will probably never do so reliably. And this at a time when, increasingly, crappy code just isn't cutting it anymore. I'm not at all convinced the paradigm is shifting, not if you need code good enough that you've needed trained programmers in the past.