I find these things intellectually fascinating, but in my limited plays with them, I've ended up saying "Huh" and just not being bothered very much. I just return to keyword-heavy standard internet searches and page browsing.
I think this tells me two and a half things. Firstly, this is yet another sign that I'm getting old and stuck in my ways; secondly, that (for now) I'm likely to use them more when they come to me in domain-specific wrappers than just in generalities. And lastly, I just haven't found the right thing to prick my curiosity.
Ask them about things you DO know about, not things you don't, and you'll quickly see that they're a terribly unreliable source of information. Worse, they're a confidently incorrect source of information. Maybe it's better if you pay for the new ones. Maybe I'm not using it in the intended way, but there's only language, no logic or other smarts, so it starts getting into grammatically correct nonsense pretty quickly. I don't want to be too negative, because the language aspect IS pretty neat, but it's like the AI Gizmodo article that was full of errors. It seems like there needs to be more internal "fact" checking or something in it.
Instead of using ChatGPT as a replacement for search, I personally use it like a digital assistant to do annoying grunt work for me, like doing some basic but tedious arithmetic calculations. I also use it in lieu of Stack Overflow for how to do things in programming languages I'm less familiar with, like Golang, Rust, or Perl.
It can also read an article (or "watch" a YouTube video with transcripts) and give you a summary of it (I played with it as a TLDR/TLDW solution for all the articles/videos people send me). I also found it is good at converting your lecture/interview/meeting notes into a more polished summary that you could email out to people (obviously user beware: proofread the output before sharing).
ChatGPT did not watch those videos for you. It gave you a mash-up of web text related to keywords related to the videos.
Actually, what I think the ChatGPT bot I used was doing was reading the YouTube auto-generated transcripts and giving a summary of that and/or answering questions about what was said.
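To make that pattern concrete, here is a minimal sketch of "saved transcript in, summary out" against the same chat completions endpoint that appears later in this thread. The transcript.txt file name, the gpt-3.5-turbo model choice, and the prompt wording are placeholders, not anything the poster described.
Code:
# Sketch: ask the chat completions API to summarize a saved transcript.
# Assumes OPENAI_API_KEY is set; file name, model, and prompt are placeholders.
import os
import requests

with open("transcript.txt") as f:
    transcript = f.read()

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user",
             "content": "Summarize this video transcript in one paragraph:\n\n" + transcript},
        ],
        "temperature": 0.3,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])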
When are they opening up 32k? Costs for GPT-4 are high, but for our use case it's a huge savings. GPT-3.5 16k is dirt cheap in comparison, but there are too many errors for our use case.
Just wait until they start to use LongNet transformers. It was recently reported by Microsoft and has, I shit you not, a 1 billion token context limit. I give it 18 months at most before this is in the public's hands as well.
https://www.arxiv-vanity.com/papers/2307.02486/
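For anyone curious what makes that context length even plausible, the linked paper's trick is "dilated attention": chop the sequence into segments and have each query attend only to every r-th position inside its segment, so per-query work stays small no matter how long the input gets (the real method mixes several segment sizes and dilation rates, which this ignores). The toy sketch below only illustrates that indexing idea; it is not the paper's implementation, and the segment length and dilation values are made up.
Code:
# Toy illustration of the dilated-attention indexing idea (NOT LongNet itself):
# split the sequence into fixed segments, and within each segment a query
# attends only to every `dilation`-th key, so the per-query key count is
# segment_len / dilation regardless of sequence length.

def dilated_pattern(seq_len, segment_len, dilation):
    """Return, for each query position, the key positions it attends to."""
    pattern = {}
    for q in range(seq_len):
        seg_start = (q // segment_len) * segment_len
        seg_end = min(seg_start + segment_len, seq_len)
        pattern[q] = [k for k in range(seg_start, seg_end)
                      if (k - seg_start) % dilation == 0]
    return pattern

pat = dilated_pattern(seq_len=16, segment_len=8, dilation=2)
print(pat[0])   # [0, 2, 4, 6]
print(pat[9])   # [8, 10, 12, 14]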
It's not even always grammatically correct.
Yes, that makes sense. My response was an objection to anyone thinking ChatGPT has any level of comprehension. Your follow-up makes perfect sense, although it is not a "summary" so much as "syntactically related gibberish."
Have you actually tried ChatGPT personally? I mean, this is the summary ChatGPT gave of this YouTube video (quoted below). YMMV, but to me that summary feels at least one step above "syntactically related gibberish."
The 2008 financial crisis was caused by the creation of risky mortgage-backed securities and collateralized debt obligations, which were given AAA ratings by credit rating agencies. Investors, including big-money global investors, bought these securities because they provided a higher return than other investments, and home prices were going up. But when housing prices collapsed, borrowers defaulted on their mortgages, and investors lost money. Major players in the financial industry declared bankruptcy, and the US economy plummeted into a disastrous recession. The government responded by enacting a number of measures, including emergency loans to banks, the Troubled Assets Relief Program (TARP), stress tests on Wall Street banks, a stimulus package, and the Dodd-Frank Law. Perverse incentives and moral hazard played a role in the crisis, as did the government’s failure to regulate and supervise the financial system.
I always find some of the negativity in technical forums around ChatGPT and other LLMs to be interesting.
It is absolutely true that ChatGPT can spout sometimes-convincing and always-authoritative garbage. But man is it a useful little tool as long as you understand its limitations.
I have basically completely stopped writing small, domain-specific scripts or routines. I tell ChatGPT what I would like to happen, and it quickly outputs something that's somewhere from 90-100% functional.
A lot of things that used to be a "keyword heavy" search that resulted in parsing a lot of results are now a simply phrased question. If I want to know how to do something in an application, I regularly find that I get accurate and comprehensive results by just asking ChatGPT - and when they're incorrect, I usually can clarify and get a corrected answer.
It's definitely not perfect. It has plenty of inaccurate information and flaws in the way it answers questions. Sometimes it can fail hilariously. But as a tool to work with while being aware of its shortcomings, I find it astonishingly helpful.
Exactly this. Like many tools, art and craft are required to get the most out of it, just like the craft that has developed around image-generation prompts.
How many "tokens" does a human absorb in a year, parsing them out as the equivalent bits of text or speech encountered by said human? Just curious how this compares.
Well, 8k tokens is roughly 4k words. In a lifetime someone will say about 860 million words. So it's comprehensive.
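Running that comment's own numbers (the 8k-tokens-to-4k-words ratio and the 860 million lifetime words come from the post above; the arithmetic is the only thing added here):
Code:
# Back-of-the-envelope: how many tokens is a lifetime of speech?
words_per_lifetime = 860_000_000      # figure from the comment above
tokens_per_word = 8000 / 4000         # same comment's 8k-tokens ~ 4k-words ratio

tokens_per_lifetime = words_per_lifetime * tokens_per_word
print(f"{tokens_per_lifetime:,.0f}")  # 1,720,000,000
# i.e. a 1-billion-token context window is the same order of magnitude
# as everything a person says in a lifetime.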
Yeah, it's not good as a search or knowledge engine; it's a shame that's how they became known. It's a language model (a large one!). It's very good at manipulating language, especially if you give it what to manipulate. Can be quite useful, but as another noted, it's like an extra step outside the workflow, and I type fast enough that I'm often... meh, just do it myself.
We might all find out soon that when you put the sum total of all human knowledge into a network capable of fully contextualizing it, there aren't really any great insights to be had that people haven't already thought of.
That's really interesting, and their results look great. That said, I haven't yet seen a context length increase that didn't cause the model to suffer from the serial position effect to a very noticeable degree, and I haven't yet seen any research on meaningfully addressing that. It's going to be important to address for use cases like dumping a very large code base into an LLM and asking it specific questions that could be answered by content anywhere in the input.
Excellent. I'll be using this in conjunction with plaintext records of HP Lovecraft's writings and correspondence to do something truly awful. Thank you for the information.
Now how useful will this be for things like CHAI, Character AI, Harpy Chat, and the other character roleplay bots?
[...] the company expects to continue fine-tuning the models throughout the year.
My phone number is already linked to a throwaway account I made many months ago to try out ChatGPT when it first hit the news. Unfortunately there is no workflow to reclaim my number, and OpenAI support has not responded to any of my help requests over the past 3 months. I'm sure it's a great product, but I won't pay for access until I know I can complete the basic action of logging in.
So, you intentionally gave them incorrect information, and linked that incorrect information to your /actual/ phone number, AND you're blaming OpenAI for this? They didn't provide an "I confess I fed you incorrect information about me" button, so you're mad at them?
"model": "gpt-4",
from OpenAI's own API Playground, I see:"error": {
"message": "The model: `gpt-4` does not exist",
"type": "invalid_request_error",
"param": null,
"code": "model_not_found"
}
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <API_KEY_HERE>" \
-d '{
"model": "gpt-4",
"messages": [
{
"role": "user",
"content": "<YOUR QUERY HERE>"
}
],
"temperature": 1,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
}'
It worked for me just now, I do not think this is global.
I haven't used the API myself, but my first thought is that you might not be using a valid model name. Try hitting GET https://api.openai.com/v1/models first to get a list of valid models, per the documentation: https://platform.openai.com/docs/api-reference/models/list
It is not that:
$ curl https://api.openai.com/v1/chat/completions \
> -H "Content-Type: application/json" \
> -H "Authorization: Bearer key_removed_for_post" \
> -d '{
> "model": "gpt-4",
> "messages": [
> {
> "role": "user",
> "content": "please respond to this test message"
> }
> ],
> "temperature": 1,
> "max_tokens": 256,
> "top_p": 1,
> "frequency_penalty": 0,
> "presence_penalty": 0
> }'
{
"id": "id_removed_for_post",
"object": "chat.completion",
"created": 1689032591,
"model": "gpt-4-0613",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Test message received successfully. How can I assist you further?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 13,
"completion_tokens": 12,
"total_tokens": 25
}
}
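For anyone else hitting the model_not_found error, the models-list check suggested above is also quick to script; this sketch just calls the documented endpoint from the link above and looks for "gpt-4" among the returned IDs (nothing here beyond that):
Code:
# Sketch of the suggested sanity check: list the models this API key can
# see and check whether any gpt-4 variant is among them.
import os
import requests

resp = requests.get(
    "https://api.openai.com/v1/models",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
)
resp.raise_for_status()
ids = [m["id"] for m in resp.json()["data"]]
print("gpt-4 available:", any(i.startswith("gpt-4") for i in ids))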
Those are by far the easiest kind of programs to write that use it, so it is not surprising that's what it is known for at this point in time.
Any news on how the explosion of LLMs is affecting global energy usage? In yet another year that's slated to be the latest "hottest on record", maybe we need to care about that too. For example, do they have any info on how they expect the general availability of their APIs to affect things?
Not a ton yet, but it could increase if usage takes off.
Great. Now when are they going to stop rate-limiting ChatGPT Plus? =]
Yesssss. The moment I read this article, that was the first thing I tried looking up. I don't see anything yet, and I've let my Plus subscription lapse because of rate-limiting.
ChatGPT gives dishwater-dull, friend-centric responses that no one with two brain cells to rub together would find engaging.
The chatbots of the 90s were much more convincing since they were authored by individuals.
You can choose from hundreds of free source code repositories on GitHub & customize your own in an hour.
Much more satisfying than being someone's customer.
Be a hustler, homie, not a customer crony.