Demystifying Prompts in Language Models via Perplexity Estimation
This paper tests the hypothesis that a prompt's familiarity to the model, as measured by low perplexity, is related to downstream performance: when a prompt has low perplexity under the model, the model tends to produce better answers.
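As a minimal sketch (not code from the paper), a prompt's perplexity can be computed from the per-token log-probabilities the language model assigns to it: perplexity is the exponential of the negative mean log-probability, so more familiar prompts score lower. The example values below are illustrative, not drawn from the paper.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp(-mean natural-log probability) over the prompt tokens.

    token_logprobs: log-probabilities the LM assigns to each prompt token.
    Lower perplexity means the model finds the prompt more familiar.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Hypothetical per-token log-probs: a familiar phrasing gets higher
# (less negative) log-probs, hence lower perplexity.
familiar = [-0.5, -0.7, -0.4]
unfamiliar = [-3.0, -2.5, -3.5]
print(perplexity(familiar) < perplexity(unfamiliar))  # → True
```

In practice these log-probabilities would come from scoring the prompt with the language model itself; the hypothesis predicts that, across paraphrases of the same task, lower-perplexity phrasings yield better task accuracy.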