GPT-5 might be farther off than we thought, but OpenAI wants to make sure it is safe.
See full article...
Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors.
> I am a little surprised that multi-modal models haven't broken through the limitations of LLMs; they feel closer to brain structures to me. Perhaps plumbing outputs from different models together (forgive the metaphor) is proving more difficult than it sounds.

Me, too, although my suspicion is just that we don't have any good grasp on what modalities we actually need, and are rather just using what we have, coupled with LLMs' poor ability to characterize problems correctly. If they don't "understand" that the tokens they're "looking at" at the moment are essentially a "math" problem--or how to correctly rephrase it in its mathematical form--all the access in the world to things like Wolfram Alpha doesn't help.
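That characterize-then-rephrase step can be made concrete with a toy dispatcher. This is purely illustrative (the `route` function and its heuristics are my own invention, not anything OpenAI or Wolfram ships): before external tool access helps at all, the system has to recognize that a prompt is arithmetic and restate it as an evaluable expression.

```python
import re

def route(prompt: str):
    """Decide whether a prompt is really a math problem.

    If it is, rephrase it into a bare expression and compute it
    exactly, instead of letting a language model guess digit tokens.
    """
    text = prompt.lower().strip().rstrip("?").strip()
    text = text.removeprefix("what is").strip()
    # Only digits, whitespace, and arithmetic operators: safe to evaluate.
    if re.fullmatch(r"[\d\s+\-*/().]+", text):
        return "math", eval(text, {"__builtins__": {}})
    # Anything else falls through to the language model.
    return "llm", None
```

Here `route("What is 2 + 2?")` returns `("math", 4)`, while `route("Summarize this thread")` returns `("llm", None)`. Even in this toy, the brittle part is the classification and rephrasing, which is exactly the step the comment argues LLMs handle poorly.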
> Frankly I'm shocked they aren't continuously training new models.

I'd take a guess they had to wait for new hardware to be procured, installed and validated.
OpenAI says the committee's first task will be to evaluate and further develop those processes and safeguards over the next 90 days. At the end of this period, the committee will share its recommendations with the full board, and OpenAI will publicly share an update on adopted recommendations.
> I'd take a guess they had to wait for new hardware to be procured, installed and validated.

Ah, that explains a lot of things. The discontinuation of the Zilog Z-80s, the expedition to asteroid Bennu, the delay in the new models, the endlessly-20-years-away fusion power, the sudden new interest in lunar manufacturing... In thirty years we're going to wake up to a new moon in the sky: the AI Overwatch, powered by clean energy, every neuron a Z80 created by moon-based facilities from space-borne materials!
> which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think

Count me among them, because it seems very obvious to me that a language model is not AGI, categorically and obviously, and that LLMs cannot be bootstrapped up to AGI and will not naturally lead to AGI or anything even credibly mistakable for same.
> These OpenAI posts always read like PR puff pieces and not unbiased reporting. How is the above statement true outside of what OpenAI states?

Yeah, these things really have a "thoughts and prayers" vibe to them.
> I'd take a guess they had to wait for new hardware to be procured, installed and validated.

I think they do train new models constantly. The 4-turbo and 4o were both new smaller models that were meant to keep the capabilities of a much larger original GPT-4.
> I think they do train new models constantly. The 4-turbo and 4o were both new smaller models that were meant to keep the capabilities of a much larger original GPT-4.

OpenAI is starting to take their naming cues from Capcom here.
I actually suspect that 4o is not that much bigger than GPT-3.5-turbo based on the fact that they are making it free to all users.
> Count me among them, because it seems very obvious to me that a language model is not AGI, categorically and obviously, and that LLMs cannot be bootstrapped up to AGI and will not naturally lead to AGI or anything even credibly mistakable for same.

I am personally still processing their incredibly revealing decision to say "what if ChatGPT…but fuckable?"
This is why LLMs piss me off. They're a novelty with some limited usefulness and a number of deep foundational flaws, but there's just enough meat there for it to feel genuinely revolutionary, thus making it feel more solid than bullshit like crypto and bitcoin and therefore perfect fodder for stock-pumping grift.
> Businesses doing linear algebra for money don't need regulation and generally shouldn't be regulated any more than any other business.

telling the UN my nuclear arms business doesn't need regulation because I'm just 'doing physics for money'
> telling the UN my nuclear arms business doesn't need regulation because I'm just 'doing physics for money'

"One billion dollars pronto, or I nuke Toronto"?