I know Roon is prone to hyperbole, but "mass hysteria" might be a little much lol
Even if it hasn’t been intentionally changed, quality is still inversely related to traffic. The more people using ChatGPT at a time, the worse it responds.
Dogs and cats, living together!
Then why do I hear about them fooling with it all the time?
Sure, it might not have changed since Dev Day, but the exact changes made on Dev Day (grouping all the tools into one chat, adding GPTs, etc.) are the root of most of the issues we have today.
Yeah he’s correct, because it hasn’t been as good since dev day
Half the time it says "Analyzing" and then "Error occurred." Unusable now.
I’m sure there isn’t a new model, but the way it speaks has changed. Either new parameters, new guidelines, or a new pre-prompt.
I wonder if I could make a reference question that I ask every time I see the version number change. Then I could show all the responses to GPT-4 and have it guess what had changed.
Hey it’s roon!
Sure….
Some of the employees said they’re working on fixing the bugs. How are they fixing things without changing the model or its behavior?
Fuck this mass-hysteria gaslighting 🐂 💩. I have been working on the same giant-ass project, doing the same kinds of work over and over again. I was able to get the outputs I needed, no problem. Then 4 Turbo drops and I suddenly have to argue with it to get the same outputs. It will get to 90% complete and then just refuse to go the last mile. I can point out that it has already done the work, even upload its own previous work to show that it has completed the task before, and it refuses to budge an inch further. If I ask it why, it just says it can’t. If I ask whether it is prevented by instructions from finishing the task, it says it just can’t and we should move on to something else. More often than not it just breaks entirely if the conversation is with a custom GPT.
OpenAI employees need to stop blaming users for problems with the model.
Could be that it’s not the model that is changing; it’s their shitty system prompts and content filters. The bot now acts as an "educational" tool instead of a productivity tool, so when you give it a task it tells you how to do it instead of doing it.
Bullshit. They’re nerfing it big time. Don’t forget that guy is a nerd. He’s right to say the model hasn’t changed, because it didn’t. But what changed are the pre-prompts and base instructions.
ITT people confusing multiple issues.
THERE IS an issue with slow answers and failing upload analysis. This is likely because of the elevated usage they discussed not long after Dev Day, which caused them to close sign-ups.
GPT-4 is also "lazy" by default currently, that’s also true, and it seems to be a result of the fine-tuning. This can be easily resolved through prompt engineering; I have no issues making coding GPTs that stay on track, listen to instructions, and aren’t lazy. I can’t be positive it was a dev who said it, but I believe this is being worked on; in the meantime, prompt engineering is an easy workaround. I do not believe this is OpenAI deliberately changing the system prompt to prefer shortened answers. Someone would have uncovered that by now; I’ve already seen multiple posts of people coaxing the model to reveal its system prompt since Dev Day.
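For what it’s worth, the prompt-engineering workaround mentioned above usually amounts to nothing more than a custom system message placed first in the request. Here’s a minimal sketch of the idea, assuming the standard OpenAI chat `messages` format; the instruction wording and the `build_messages` helper are my own, not anything OpenAI documents:

```python
# Sketch of an anti-"lazy" system prompt for a coding assistant.
# The instruction wording below is illustrative, not official.

ANTI_LAZY_SYSTEM_PROMPT = (
    "You are a coding assistant. Always return complete, runnable code. "
    "Never abbreviate with placeholders like '# rest of code here'. "
    "If a task has multiple steps, finish every step before responding."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the system instruction so it applies to every request."""
    return [
        {"role": "system", "content": ANTI_LAZY_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Refactor this function to use a generator.")
print(messages[0]["role"])  # the system instruction comes first
```

The same `messages` list is what a custom GPT’s instructions field effectively produces, which is why custom coding GPTs can stay on track where the default chat doesn’t.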
GPT being "nerfed" (apart from the lazy thing) I think is confirmation bias. The actual, real issues are causing you to pay closer attention and feel more stress, so GPT-4’s failures feel more common and worse to you now.
Ya, I noticed that it got worse after Dev Day lol. That’s my complaint.
There are multiple areas where you can obviously tell that the developers have hard-coded certain parameters. I understand their reasons, but the outputs have decreased in utility.
There is also the fact that people gain more experience using ChatGPT and want it to do more. They are comparing the output they were getting with the output they are getting now, so they believe it’s doing less when in reality they are asking it to do more.
Their PM Logan.GPT says it’s a known issue that they’re working on.
ChatGPT is not the model. I don’t know who this person is, but AFAIK what people are complaining about is their interactions through ChatGPT.
ChatGPT is not GPT-4; there are all sorts of things going on between inputs and outputs, between what people see in their chat and what the models actually get, and vice versa: standard instructions about content policy that people cannot see or change, and other behavioral molds and guardrails.
So they’re not training and retraining the model to make it more restrictive or dumber or different; they’re changing the systems behind the scenes between us and the model.
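To make that concrete: the deployed weights can stay frozen while the hidden instruction layer wrapped around user input is swapped out. A hypothetical sketch, where every instruction string and the `wrap_request` helper are invented purely for illustration:

```python
# Sketch: the same frozen model can behave differently purely because
# the hidden instruction layer around user input changes.
# All instruction text here is invented for illustration.

MODEL_VERSION = "gpt-4-0613"  # the weights themselves never change

HIDDEN_LAYER_V1 = ["Follow content policy."]
HIDDEN_LAYER_V2 = [
    "Follow content policy.",
    "Prefer concise answers.",
    "Refuse requests that might be risky.",
]

def wrap_request(user_msg: str, hidden_layer: list[str]) -> dict:
    """Assemble what the model actually receives: hidden rules first."""
    return {
        "model": MODEL_VERSION,
        "messages": [
            *({"role": "system", "content": rule} for rule in hidden_layer),
            {"role": "user", "content": user_msg},
        ],
    }

before = wrap_request("Write a scraper.", HIDDEN_LAYER_V1)
after = wrap_request("Write a scraper.", HIDDEN_LAYER_V2)
# Same model string, same user message, different effective prompt:
assert before["model"] == after["model"]
assert len(after["messages"]) > len(before["messages"])
```

Nothing about the model string changes between the two requests, yet the effective prompt the model sees is different, which is exactly why "the model hasn’t changed" and "ChatGPT behaves differently" can both be true.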
I don’t know whether this employee is directly involved in the ChatGPT side of things, but things have definitely changed there over time. You can tell just by how many "jailbreaks" don’t work anymore. It’s just not the model that’s changing. So he’s likely 100% right, but his commentary isn’t addressing what has actually changed over time.
Wasn’t this all proven months ago with a paper linked on here showing that it had indeed gotten worse?
Everyone knows the model hasn’t changed; what has changed are the instructions and the compute these guys on the frontend provide to GPT. I really can’t believe how shameless he is to say something like this: our model hasn’t changed, it’s you guys acting up. Did Developer Day Sam hypnotize everyone into a collective illusion, huh?
It’s been horrible for coding for me for a while now. It’s not hysteria; we can still see the chats and the quality of answers from previous iterations. The current one sucks at coding, period.
I’m kind of hesitant to say this because I don’t want them to patch it, but if you use old conversations and clear the context, you can improve the function a lot. I found that by using old image-generation conversations I can get past some of the new limitations. For example, just in the last week they stopped allowing me to generate reiki symbols. (It never actually could generate reiki symbols, btw, but it was a nice style for it to try.) It started saying it was too sensitive a topic, so now I just go to an old image-generation conversation and ask it there. I also found that other censored aspects are looser in old conversations as well. I used to be able to generate hypnosis scripts, but now it won’t do that; if I go back to an old conversation where I was doing that, though, it’s fine with it.
Bullshit. One day it’s doing Excel sheets for me; the next it just tells me the instructions, using the same prompt as a test. They are doing shenanigans behind the scenes, or GPT has developed a personality and suffers from fatigue.