14 Replies to “I asked ChatGPT to help me to make a Chess game in Python”
Writing a chess game is indeed difficult, and I would be really surprised if ChatGPT was able to do it. Still doesn’t look bad though, maybe it can still fix some more issues with additional prompts and you can fix the placement of the pieces yourself lol
I’m curious about the process you went through to get this done. Did you iterate through a conversation? Did you do any of the design work or did you leave all the design up to the AI?
ChatGPT isn’t writing chess for you. There is a fundamental point that a large number of people misunderstand about how LLMs work:
It isn’t ‘thinking’. It doesn’t know what chess is, even if you explain it in extreme detail.
It builds sentences based on probability, word by word, from what it has learned is a likely answer; think training snippets like these (a toy code sketch follows them):
“chess is a game”
“chess boards are 8 X 8”
“the queen is strongest”
“There are more pawns than other pieces”
<a million other bits of training data about **conversations** about chess>
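Here is a toy sketch of that idea in Python. A real LLM predicts subword tokens with a neural network rather than a lookup table, and every word and probability below is invented for illustration, but the word-by-word mechanism is the point:

```python
import random

# A toy "language model": for each word, a distribution over likely next
# words, as if learned from co-occurrence counts in text. All entries and
# probabilities here are made up for illustration.
next_word_probs = {
    "chess":  {"is": 0.5, "boards": 0.3, "pieces": 0.2},
    "is":     {"a": 0.7, "the": 0.3},
    "a":      {"game": 0.6, "board": 0.4},
    "boards": {"are": 1.0},
    "are":    {"8x8": 1.0},
}

def generate(word: str, length: int = 4) -> str:
    """Emit whatever usually comes next, one word at a time."""
    out = [word]
    for _ in range(length):
        dist = next_word_probs.get(out[-1])
        if dist is None:
            break  # no data on what follows; a real model never runs dry
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs)[0])
    return " ".join(out)

print(generate("chess"))  # e.g. "chess is a game": fluent, no understanding
```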
‘It’ wants to give you the best and most rewarded output for whatever string of words is currently being fed in: what usually comes next.
It can fool you, because we infer intelligence (and it is convincing).
For a test: ask it if it knows Wordle, then ask to play a game where it chooses a word and you have to guess it. Chances are it will either drift into telling you that a letter is correct or wrong, or that you have a few letters right. Sometimes asking for the answer will get you told that you can’t know it yet. After a while, ask for the answer and watch it name a new word that doesn’t follow the game’s logic (and that probably contradicts the letter feedback you already got). It didn’t cheat you; it gave you the most probable continuation of the string of text you fed it:
“Is ‘E’ in the word?” -> “YES”
(It doesn’t have memory. There is no word. But most people asking this question were told ‘YES’ in the data it trained on.)
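For contrast, here is roughly what following the game’s logic actually requires: a fixed secret word and deterministic letter accounting. A minimal sketch (the example words are arbitrary):

```python
def wordle_feedback(secret: str, guess: str) -> str:
    """Wordle-style feedback: G = right spot, Y = wrong spot, _ = absent.

    A real game needs a secret held in memory; the chatbot has no such state.
    """
    feedback = ["_"] * len(guess)
    remaining = list(secret)
    # First pass: exact matches.
    for i, (s, g) in enumerate(zip(secret, guess)):
        if s == g:
            feedback[i] = "G"
            remaining.remove(g)
    # Second pass: right letter, wrong position.
    for i, g in enumerate(guess):
        if feedback[i] == "_" and g in remaining:
            feedback[i] = "Y"
            remaining.remove(g)
    return "".join(feedback)

print(wordle_feedback("crane", "caret"))  # -> "GYYY_"
```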
It absolutely doesn’t **think** and absolutely doesn’t **remember** anything. When you talk to it, each **and every** message you send in the conversation gets replayed to it, in order. It gets *stupider* over time because all your messages, strung together, eventually exhaust the token limit.
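A minimal sketch of that replay mechanism. The token limit, the word-count stand-in for a tokenizer, and the `model_complete` function are all illustrative placeholders, not any real API:

```python
MAX_TOKENS = 4096  # stand-in context window; real limits are model-specific
history = []

def model_complete(prompt: str) -> str:
    # Hypothetical stand-in for the real model call; echoes for demonstration.
    return f"(reply conditioned on {len(prompt.split())} words of context)"

def send(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Every turn, the ENTIRE conversation so far is stitched back together
    # and fed to the model as one string of tokens...
    prompt = "\n".join(m["content"] for m in history)
    reply = model_complete(prompt)
    history.append({"role": "assistant", "content": reply})
    # ...and once the history outgrows the context window, the oldest
    # messages must be dropped, so the model literally forgets them.
    while sum(len(m["content"].split()) for m in history) > MAX_TOKENS:
        history.pop(0)
    return reply

send("Let's play chess. I open e4.")
send("Your move.")  # the first exchange is re-sent along with this message
```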
Google en pas-
/r/AnarchyChess
Holy hell
I’m white
If it’s not UCI compatible then it’s worthless
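For anyone wondering what that would involve: UCI is the Universal Chess Interface, a plain-text protocol an engine speaks over stdin/stdout so chess GUIs can drive it. A minimal sketch of the engine side (the engine name and canned moves are made up; a real engine would search for its move):

```python
import sys

def uci_loop():
    """Minimal engine side of the UCI handshake."""
    moves = []
    for line in sys.stdin:
        cmd = line.strip()
        if cmd == "uci":
            print("id name ToyEngine")  # made-up identity
            print("uciok")
        elif cmd == "isready":
            print("readyok")
        elif cmd.startswith("position"):
            # e.g. "position startpos moves e2e4 e7e5"
            parts = cmd.split("moves")
            moves = parts[1].split() if len(parts) > 1 else []
        elif cmd.startswith("go"):
            # A real engine searches here; we emit a canned reply.
            print("bestmove e7e5" if moves else "bestmove e2e4")
        elif cmd == "quit":
            break
        sys.stdout.flush()

if __name__ == "__main__":
    uci_loop()
```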
And yet when I try to get it to show me pawn…..
I will generously allow the opponent to start off as black.
Considering the scale of the task, I’m still pretty impressed. This looks like what a gifted but inexperienced 18-year-old coder might come up with.
White to win in…
Did you use chain of thought to break it down into units that you assembled by hand, or just tell it “make chess game 4 me”?
Did it provide code to draw the chess pieces, did it use DALL·E to make those, or did you get them from somewhere else?