The Power of Scale for Parameter-Efficient Prompt Tuning

Neural language models can be adapted to downstream tasks either by fine-tuning all model weights or by conditioning a frozen model on a prompt.

Fine-tuning requires retraining and storing a full copy of the large model for every task. The prompt design used by GPT-3 is remarkable, enabling few-shot learning without any weight updates, but hand-crafted prompts are somewhat arbitrary and cannot reach state-of-the-art performance.

This paper proposes prompt tuning, which keeps the pretrained model entirely frozen and learns only a small set of prompt parameters, while still achieving strong performance.

The “soft prompt” is a sequence of k tunable token embeddings prepended to the embedded input text; these embeddings are trained end-to-end while the model's weights stay fixed.
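A minimal sketch of the soft-prompt mechanism, using numpy instead of a deep-learning framework. All names and dimensions here are illustrative assumptions, not from the paper's code: the frozen embedding table stands in for the pretrained model's input embeddings, and only `soft_prompt` would receive gradient updates during training.

```python
import numpy as np

# Illustrative sizes (assumptions, not the paper's settings).
vocab_size, embed_dim, k = 100, 16, 5  # k = number of tunable prompt tokens

rng = np.random.default_rng(0)
# Frozen pretrained embedding table: never updated during prompt tuning.
frozen_embeddings = rng.normal(size=(vocab_size, embed_dim))
# The soft prompt: the ONLY trainable parameters (k vectors in embedding space).
soft_prompt = rng.normal(size=(k, embed_dim))

input_ids = np.array([3, 17, 42])            # token ids of the input text
input_embeds = frozen_embeddings[input_ids]  # (3, embed_dim), looked up, frozen

# Prepend the k soft-prompt vectors to the input embeddings before
# feeding the combined sequence into the frozen model.
model_input = np.concatenate([soft_prompt, input_embeds], axis=0)
print(model_input.shape)  # (k + 3, embed_dim) -> (8, 16)
```

Because the prompt lives in continuous embedding space rather than the discrete vocabulary, gradient descent can tune it freely, and one frozen model can serve many tasks by swapping in different k-vector prompts.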