Ale Pan's Blog

Alejandro Panza | Rosario | Argentina

We need OpenAI to digitally sign their API responses

If you send ChatGPT the same prompt twice, you will get two different answers. That is by design and probably well intentioned, but it makes OpenAI unaccountable: since you cannot reproduce a past response, you cannot convince anyone else that you really got a certain answer.

This variability means that the model does not always pick 'the most probable next word' (token, actually) while generating text. ChatGPT is giving you a slightly lower-quality answer to mimic some human-like randomness. With other models, like text-davinci-003, you can control how much randomization you want using the temperature param.

There is a notion that setting the temperature higher gives you a 'more creative' answer. It actually gives you a more random (i.e., lower-quality) one. If you want the response to be more creative, the right way to do it is to ask for it in the prompt.

There are a lot of screenshots out there of ChatGPT giving bizarre or worrying answers. Wouldn't it be nice for a user to be able to prove that an answer is real and not just an easy fake? Wouldn't it be reassuring for OpenAI to be able to conclusively deny that an inappropriate answer was generated by them?

All of that could easily be done if OpenAI digitally signed its API responses. A second-best solution would be for OpenAI to include a randomization seed in each answer and accept it as a param, so anyone could reproduce any outcome. The downside, of course, is that they could 'patch' the model, and you would never get that controversial result again.