> I feel AI, like children, will require a lengthy period of legal guardianship during which all decisions are supervised. If a child can do it, for example work in an automobile assembly plant, that seems fine to do semiautonomously, but governance, investment decisions, law enforcement, pulling any triggers, jurisprudence, and other things with lasting and deeply troubling outcomes if mismanaged will require minders. Advisors, not decision makers.

You would drive a car assembled by children???
> You would drive a car assembled by children???

Smart children, sure. Not children like Elon Musk.
5 years away from AGI when we can't even define what AGI is?
Sure sure, we're also 5 years away from direct contact with a deity... now we just have to decide what we mean by 'direct contact' and 'deity'!
When you don't have to define the words first, they kinda mean nothing.
> Artificial general intelligence (AGI), AI that’s at least as capable as humans at most cognitive tasks...
> To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue.

This makes no sense to me. Basically, they just say "use an AI that's twice as big as the one you would have used otherwise". Why would that be any better? As far as I know, current systems are already broken into subsystems in a somewhat similar way, so what's new here, and why would this particular way of breaking it into two subsystems be magically super-robust?
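As far as I can tell, the proposal reduces to something like the loop below. This is just a toy sketch of "two copies check each other"; ask_model() is a hypothetical stand-in for whatever model API you'd actually call, not anything taken from DeepMind's paper.

```python
# Toy sketch of "two copies of an AI check each other's output".
# ask_model() is a hypothetical stand-in for a model API call,
# not anything from DeepMind's paper.

def ask_model(prompt: str) -> str:
    raise NotImplementedError("plug your model API in here")

def cross_checked_answer(question: str, max_rounds: int = 3) -> str:
    answer = ask_model(question)
    for _ in range(max_rounds):
        # the second copy audits the first copy's answer
        critique = ask_model(
            f"Question: {question}\nProposed answer: {answer}\n"
            "List any errors or unsafe content. Reply OK if there are none."
        )
        if critique.strip().upper() == "OK":
            return answer  # the auditor found nothing to object to
        # the first copy revises in light of the critique
        answer = ask_model(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return answer  # a real system would escalate to a human here
```

Which leaves my objection standing: nothing in that loop explains why the auditing copy would catch failures the answering copy can't.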
> They say in the paper this is a reasonable timeline, believe it or not.

You should add a “neener neener” to drive it home.
> AI is developed without any regard for morals, ethics, or cultural norms. Asimov's 3 Laws were the first and most fundamental programming of robots. AI today has no such ethical grounding. It scrapes mountains of conflicting data and condenses it to some sort of usefulness, sort of. Hence "You should just kill yourself."

This is not really true. A lot of alignment today is inspired by the three laws.
DeepMind is not an AI. Although this apparently gets you upvotes:

> So the topic is an AI has offered an 'opinion'
> DeepMind does NOT have ideas.

A group of some of the smartest researchers on the topic do not have ideas! But I do! I smart commenter! I get dirtied upvotes!
> Why would that be any better?

Why is it better when somebody else checks your work? Or even if you do after taking a break?
> I find it highly optimistic to take it for granted that we make it through the Narrow AI phase. Narrow AI will kill us first, by reflecting our own stupidity back at us.

Now that one I find likely. Whatever the generation of AI, it’s gonna be human stupid as the root cause.
> 5 years away from AGI when we can't even define what AGI is?
> Sure sure, we're also 5 years away from direct contact with a deity... now we just have to decide what we mean by 'direct contact' and 'deity'!
> When you don't have to define the words first, they kinda mean nothing.

I think the challenge is there is no single definition, as touched on in the article. How do we define human intelligence? Is it IQ? Is it adaptability? Is it a combination, or in some cases very domain-specific?
> You've stated better what I intended to post. AGI will, by definition, be better and faster than people or their guardrails.

That depends on the definition, doesn't it?
> Why is it better when somebody else checks your work? Or even if you do after taking a break?

AIs aren't humans, they are computer software. I was talking about the way these things actually work, not some kind of analogy based on an anthropomorphisation of them. Did you actually read my second sentence?
> I was talking about the way these things actually work, not some kind of analogy based on an anthropomorphisation of them.

So was I. The same rules mostly apply. Something doesn’t have to be literally human to share behaviors or be able to check work.
> Did you actually read my second sentence?

It depends on the system. Mostly there is just some dumb classifier for objectionable material, not fact checking. You can’t easily stream that because of causality (you can’t check a finished product without a finished product, although you kinda can in chunks). And people want streaming responses.
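The "in chunks" version looks roughly like the sketch below: buffer the stream, run each chunk past the classifier before the user sees it, and eat the latency. classify() here is a made-up stub, not any particular vendor's moderation API.

```python
# Toy sketch of checking a streamed response "in chunks": the user only
# sees a chunk after it has passed the classifier, at the cost of latency.
# classify() is a made-up stub, not any particular moderation API.
from typing import Iterable, Iterator

def classify(text: str) -> bool:
    """Return True if the chunk looks objectionable. Stub for a real model."""
    return "kill yourself" in text.lower()

def moderated_stream(tokens: Iterable[str], chunk_size: int = 20) -> Iterator[str]:
    buffer: list[str] = []
    for tok in tokens:
        buffer.append(tok)
        if len(buffer) >= chunk_size:
            chunk = "".join(buffer)
            if classify(chunk):
                yield "[response withheld]"
                return  # stop the stream rather than emit the bad chunk
            yield chunk
            buffer.clear()
    tail = "".join(buffer)
    if tail:
        yield "[response withheld]" if classify(tail) else tail
```

The trade-off is right there: bigger chunks give the classifier more context but add more lag before the first token shows up, which is exactly the tension with streaming responses.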
> Cause widespread distrust of people and institutions?

Fuck me, I hope so.
> Asimov's Three Laws of Robotics

Been a while (decades) since I read that stuff, but I think the three laws eventually failed, and there was a genocidal war with robots.
> For example, AGI could create false information that is so believable that we no longer know who or what to trust.

Just like politicians and journalists. People live in their bubbles and believe in the version of "true" served to them.
> 5 years away from AGI when we can't even define what AGI is?
> Sure sure, we're also 5 years away from direct contact with a deity... now we just have to decide what we mean by 'direct contact' and 'deity'!
> When you don't have to define the words first, they kinda mean nothing.

Logically, any concept of 'deity' would be a type 2 or higher civilization. 'Direct contact' would be them arsing themselves to reveal themselves to us for reasons other than ultimately becoming stuck in Earth's gravity well. Any such reason would boil down to resource harvesting, which would be offset by the cost of fighting Earth's gravity to extract those resources into and beyond orbit.
> We don't need AGI to destroy humanity; we seem to be on course to do it ourselves.

AGI is never going to happen. Humans can't create it before ending humanity.
> To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us.

I believe upping it to three systems and giving them some religiously-coded names might be the winning strategy here, actually.
> I believe upping it to three systems and giving them some religiously-coded names might be the winning strategy here, actually.

AI is helping out though, especially with collecting tariffs from penguins.

[attached image]
> It seems sort of self-indulgent to be worrying about what the hypothetical universal paperclips optimizer or skynet might do when we have a much more immediate problem in the form of what the bot-herders plan to use their, for the moment, quite obedient tools to do unto us.

We really, really need to bring back ostracism. Just a simple, straightforward vote where we say "You in particular need to get the fuck out of our country for 10 years."
Would an 'alignment' problem that causes the national defense expert system to mitigate potential threats through human extinction be bad? Yeah, presumably.
Would the 'alignment' problem that causes my health insurer to get paid more for building expert systems that deny me than expert systems that approve me be a problem? Neither hypothetical nor future tense nor in need of any breakthroughs.
You want 'alignment' problems? The tech bros have you covered:
[attached image]
> If we have AGI, we're going to need UBI

Unless the AI could find things for people to do...