This paper proposes prompt tuning: learning a set of continuous "soft prompts" in the model's embedding space, rather than hand-crafting prompts in natural language.

Model tuning produces a separate specialized copy of the model for each task, whereas prompt tuning keeps a single frozen base model and learns only a small set of task-specific prompt parameters.
The authors evaluate prompt tuning on T5, first adapting it to a standard language-modeling objective (since T5 was originally pretrained on a span-corruption reconstruction task).
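The core mechanic can be sketched in a few lines of PyTorch: prepend a small matrix of learnable prompt embeddings to the token embeddings and freeze everything else, so only the prompt is updated during training. This is a minimal illustration with toy modules standing in for a pretrained T5; the class and parameter names are mine, not the paper's.

```python
import torch
import torch.nn as nn

class PromptTunedModel(nn.Module):
    """Wraps a frozen base model with k learnable soft-prompt vectors
    prepended to the input token embeddings (a sketch of prompt tuning)."""
    def __init__(self, base_embed: nn.Embedding, base_body: nn.Module,
                 num_prompt_tokens: int):
        super().__init__()
        d_model = base_embed.embedding_dim
        # Learnable soft prompts: the ONLY trainable parameters per task.
        self.soft_prompt = nn.Parameter(
            torch.randn(num_prompt_tokens, d_model) * 0.02)
        self.base_embed = base_embed
        self.base_body = base_body
        # Freeze the base model; its weights are shared across all tasks.
        for p in self.base_embed.parameters():
            p.requires_grad_(False)
        for p in self.base_body.parameters():
            p.requires_grad_(False)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.base_embed(input_ids)                        # (B, T, d)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        # Concatenate soft prompt in front of the token embeddings.
        return self.base_body(torch.cat([prompt, tok], dim=1))  # (B, k+T, d)

# Toy stand-in for a pretrained model (NOT an actual T5).
embed = nn.Embedding(1000, 16)
body = nn.Linear(16, 16)
model = PromptTunedModel(embed, body, num_prompt_tokens=4)

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the soft prompt receives gradient updates
out = model(torch.randint(0, 1000, (2, 5)))
print(out.shape)  # prompt length 4 + sequence length 5
```

At inference time, batches for different tasks can share the same frozen model and differ only in which soft prompt is concatenated, which is what makes serving one base model across tasks cheap.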