14 Replies to “I asked ChatGPT to help me to make a Chess game in Python”

  1. Writing a chess game is indeed difficult, and I would be really surprised if ChatGPT were able to do it. It still doesn’t look bad, though; maybe it can fix some more issues with additional prompts, and you can fix the placement of the pieces yourself lol

  2. I’m curious about the process you went through to get this done. Did you iterate through a conversation? Did you do any of the design work or did you leave all the design up to the AI?

  3. Considering the scale of the task, I’m still pretty impressed. This looks like what a gifted but inexperienced 18-year-old coder might come up with.

  4. ChatGPT isn’t writing chess for you. There is a fundamental point that a large number of people misunderstand about how LLMs work:

    It isn’t ‘thinking’. It doesn’t know what chess is, even if you explain it in extreme detail.

    It builds sentences based on probability, word by word, out of what it has learned is a likely answer:

    “chess is a game”
    “chess boards are 8 X 8”
    “the queen is strongest”
    “There are more pawns than other pieces”

    <a million other bits of training data about **conversations** about chess>


    ‘It’ wants to give you the best and most rewarded output for whatever string of words is currently being fed in: what usually comes next. The toy sketch below illustrates the idea.
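    To make that concrete, here is a toy next-word sampler in Python. It is nothing like a real LLM internally (those use neural networks over huge token contexts, not a bigram count table), and the corpus is made up for illustration, but it shows the core mechanic of picking what usually comes next by probability:

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "training data" standing in for the fragments quoted above.
    corpus = (
        "chess is a game . chess boards are 8 x 8 . "
        "the queen is strongest . there are more pawns than other pieces ."
    ).split()

    # Count which word tends to follow which (a bigram table; real LLMs use
    # neural networks over long token contexts, but the principle is similar).
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def next_word(prev):
        # Sample the next word in proportion to how often it followed `prev`.
        counts = follows[prev]
        words = list(counts)
        return random.choices(words, weights=[counts[w] for w in words])[0]

    # Generate text one word at a time, each choice driven only by probability.
    word = "chess"
    output = [word]
    for _ in range(6):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))  # e.g. "chess is a game . the queen"
    ```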


    It can fool you, because we infer intelligence (and it is convincing).


    For a test: ask it if it knows Wordle, then ask to play a game where it chooses a word and you have to guess it. Chances are it will either drift into telling you that a letter is correct or wrong, or that you have a few letters right. Sometimes asking for the answer will make it tell you that you can’t know it yet. After a while, ask for the answer and you’ll see that it just gives you a new word that doesn’t follow the game’s logic (and that probably even contradicts letters you already guessed). It didn’t cheat you; it is giving you the most probable continuation of the string of text you fed it:

    “Is ‘E’ in the word?” -> “YES”

    (It doesn’t have memory. There is no word. But most people asking this question were told “YES” in the data it trained on.)

    It absolutely doesn’t **think** and absolutely doesn’t **remember** anything. When you talk to it, each **and every** message you send in the conversation gets replayed to it in order (the sketch below shows the pattern). It gets *stupider* over time because all your messages, strung together, eventually exhaust the token limit.
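    A minimal sketch of that replay, assuming an OpenAI-style list of role/content messages; `call_model`, `rough_tokens`, and the 4096 limit are illustrative stand-ins, not a real API:

    ```python
    # Minimal sketch of why long chats degrade. `call_model` is a hypothetical
    # stand-in for a real chat-completion API; the point is the replay pattern:
    # the client resends the ENTIRE transcript on every single turn.

    def call_model(messages):
        # Hypothetical model call; a real API would return generated text.
        return f"(reply conditioned on {len(messages)} replayed messages)"

    def rough_tokens(text):
        # Crude whitespace proxy; real services count model-specific tokens.
        return len(text.split())

    CONTEXT_LIMIT = 4096  # order-of-magnitude context size; varies by model

    history = []
    for user_text in ["Let's play Wordle", "Is 'E' in the word?", "What was the word?"]:
        history.append({"role": "user", "content": user_text})
        # The model sees only this replayed transcript. There is no hidden
        # memory holding a secret word between turns.
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        used = sum(rough_tokens(m["content"]) for m in history)
        print(f"turn {len(history) // 2}: ~{used}/{CONTEXT_LIMIT} tokens replayed")
    ```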
