Foundation model
A foundation model is a general-purpose pre-trained model that can be adapted to specific tasks through fine-tuning or prompting. It should have lots of “knowledge” about its modality and domain (e.g., LLMs understand “language” and the “world” very well). Potentially there could be a single foundation model for everything, but for now each domain has its own foundation model.
Ideally, you want one ultimate model that can handle any question or task. Specialized models may perform better than general models because they are tuned to specific tasks, but training a really good task-specific model for every task is extremely expensive and tedious. A useful idea, then, is to train one very large general model (i.e., an LLM) and use fine-tuning, few-shot prompting, or lightweight adapters to handle the specialized tasks; these are much cheaper to tune than retraining the entire model. What modern LLM research has shown is that a generative LLM can serve as a really powerful “foundation” model on which other machinery can be built, as sketched below.
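To make the two cheap adaptation routes concrete, here is a minimal sketch assuming the Hugging Face transformers and peft libraries; the model name (“gpt2”), the example prompt, and the LoRA hyperparameters are illustrative choices, not recommendations from the text.

```python
# Sketch: adapting a general pre-trained model without retraining it.
# Assumes the `transformers` and `peft` libraries; model name and
# hyperparameters are illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# (1) Few-shot prompting: specialize behavior purely through the input text,
#     with no weight updates at all.
prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> fromage\n"
    "peppermint -> "
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# (2) Lightweight adapters (LoRA): add a small number of trainable weights
#     while the original model parameters stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection module in GPT-2
    task_type="CAUSAL_LM",
)
adapted = get_peft_model(model, lora_config)
adapted.print_trainable_parameters()  # only a tiny fraction of weights train
```

The point of the sketch is the contrast: prompting touches no weights, and the adapter adds only a small trainable layer on top of the frozen foundation model, which is why both are far cheaper than training a task-specific model from scratch.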