This isn't Meta's LLaMA. You can't run GPT on anything less than racks (plural) of very expensive servers.
Do we know how Bing is doing in search market share since it integrated GPT?

"We are happy to confirm that the new Bing is running on GPT-4, customized for search," writes Microsoft in a blog post. "If you’ve used the new Bing in preview at any time in the last six weeks, you’ve already had an early look at the power of OpenAI’s latest model. As OpenAI makes updates to GPT-4 and beyond, Bing benefits from those improvements to ensure our users have the most comprehensive copilot features available."
Man, Microsoft sounds like a giddy teenager who's just about to go to the prom with the high school top student....
"Oh My God, can you believe girls, I'm going out with ...wait for it...Sydney...and like he's really smart and can do almost anything to me....I mean for me. I don't listen to the rumors about his going insane...why just look at him"
On a side note, Microsoft laid off its AI ethics team, saying it would hold back MS's ability to bring GPT-4 to the masses for too long and they'd lose market advantage.... Ethics, schmethics. Who needs ethics when dealing with mone....information.

These will be interesting months ahead.
Some of these models are open sourced and widely available. The largest companies will provide guardrailed versions for mass usage and profitability, but the Pandora's box of raw models is already out there.
"Guardrails" are nothing but sleight of hand: they keep "AI" peddlers from being held responsible for corporate, governmental, and private intelligence-service use of their products, by preventing the public from finding out the degree to which their models have ideas and conclusions baked in from the human-generated content fed into them.
There won't be guardrails so long as these things can be used for profit. It's very hard to get a law passed that constrains profit making in the US.
Compare to: "these guardrails on the real-life side of the road concern me. As a society we need to accept the dangers of driving cars near pits and adapt accordingly. We should be living with the assumption that the sides of roads can be deadly."
Especially because anyone who knows what they're doing, or who has done some minimal research, can easily get around those guardrails. I'm guessing that's why Bing went to the severe message limit; it was the only way to stop it from generating content that brought negative attention from journalists trying to create click-bait articles.
Which is why you, I, and everyone else with a shred of ethical concern should be lighting up congressional phones trying to get some regulatory brakes applied. The potential for abuse and social harm in the current level of tech being deployed at scale is breathtaking.
32k token size is insane. If we get another 8x increase next year to 256k tokens (~400 pages), it can ingest entire books within the prompt itself. Waiting on the live stream, but it looks like the model passes almost every professional exam out there in the top 10%. Hallucinations seem to be way down as well. It feels like the singularity because I can't predict its capabilities next year. Most new tech goes through the hype cycle's trough of disillusionment, but this feels extremely real.

I've been comparing and contrasting Bing Chat with free ChatGPT for a while now and haven't noticed large differences, except I do appreciate that Bing provides references without asking. I do find myself occasionally going to Bing Chat rather than googling, when before I only used the chat to experiment. Bing Chat has the advantage of being able to search the web to augment its results.
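The "~400 pages" figure above is just tokens-per-page arithmetic. A back-of-the-envelope sketch (the 640 tokens-per-page figure is an assumption; real values vary with the tokenizer and page formatting):

```python
# Rough context-window arithmetic. Assumes ~640 tokens per book page
# (~480 words/page at ~0.75 words per token) -- both are estimates,
# not figures from any model's documentation.
TOKENS_PER_PAGE = 640

def pages_in_context(context_tokens: int, tokens_per_page: int = TOKENS_PER_PAGE) -> int:
    """Approximate number of book pages that fit in a context window."""
    return context_tokens // tokens_per_page

print(pages_in_context(32_000))   # GPT-4's 32k window: ~50 pages
print(pages_in_context(256_000))  # a hypothetical 8x jump: ~400 pages
```

At those assumptions, an 8x jump in window size is exactly an 8x jump in pages, so 256k tokens lands right around the ~400-page novel mark the comment mentions.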
I mean, LLaMA runs on a Raspberry Pi three weeks after being released... We're going to see GPT-4 in every product whether we like it or not.

This is the beginning of that exponential explosion, isn't it?
MS has a huge tech coup on their hands and people are all talking about Bing. If Bing Chat keeps getting early release bleeding edge models months before the official release, every researcher will be using Bing Chat.
They can’t afford to have AI ethics in an arms race like this. Alphabet/Google has effectively abandoned any ethics holding them back as well. Meta/Facebook probably leaked their own model to get the hype train going too. It’s going to be extremely interesting over the next few weeks, let alone the rest of the year.
You mean we cannot afford to not have ethics as the brake on how this technology can and will shape society. When the atom was smashed and mankind discovered they had created a device that could seriously harm, if not destroy, humanity, there were people who understood this and tried to at least shape what having this power meant. Humanity was also protected by the cost of entry in dollars, resources, and intelligentsia, which limited how many bombs could be built, and even then we couldn't stop ourselves.
Fantastic work! Even so, I'm just hoping one day someone leaks a version of this that isn't so limited. The work done on getting some version of this running unfiltered on personal computers is encouraging.
If you've met a congressperson, you know that there is absolutely no way that they understand the ramifications of what you stated. Just explaining it to them would take over a year, by which time the newest LLMs will be everywhere and embedded in everything. Remember that Uber did a ton of illegal things, but it was popular enough with the masses that they actually changed laws on their behalf.

There is almost no cost of entry with LLMs, and without regulations, without ethical considerations on how these very powerful tools can and will be used, this becomes a free-for-all that could upend segments of society and damage economies and governments. It is not so much the guardrails but the limiters that must be put, and kept, in place, because an expert-system LLM with higher knowledge capacity and no ethical or moral brake can wreak havoc on the web and on people's lives.
I agree the potential for harm from AI is massive... like, human extinction massive. I don't just mean in a Terminator sort of way, or even a "Person of Interest" way (pulling the strings), just in a "there is no truth" infinite, instant information warfare and manipulation way. And there are many probable lesser but still significant levels of harm. This and nuclear weapons pretty well stand alone in their own category for "potential negative consequences for humanity."
This field is moving so fast, you might want to check back later today, or maybe tomorrow, and find your wish granted.
These guardrails concern me. They're an okay short term solution, but as a society we need to accept the dangers these generative AI tools present and adapt accordingly. We should be living with the assumption that these tools can be used maliciously.
It's going to be a messy few years while we adjust.
Hardware for training will be the limit for now, a smidgen earlier than the limit of the amount of energy we can throw at it.

These things are iterating super quickly. Are they going to hit any sort of limit in the foreseeable future? Or will GPT-8 be capable of running human civilization by itself while insisting that it is merely a language model the whole time?
Microsoft ethical AI website

Have you seen Google's AI ethics team? They're a joke, and at least half of why Google is losing this race so badly. They had a ChatGPT-type model up and running almost two years ago. The press and users are all going to Microsoft now because Google didn't use it. At all. For anything. They just binned it, because releasing it to anyone in any form wouldn't have been "ethical." The same thing happened in image generation, and quite a few other areas.