Foundation model
Ideally, you want a single model to which you can pose any question or task. Specialized models may outperform general models because they are tuned to specific tasks, but training a really good task-specific model for every task can be extremely expensive and tedious. A useful idea, then, is to train a very large general model (i.e., an LLM) and adapt it to specific tasks with fine-tuning, few-shot prompts, or lightweight adapters, as sketched below. These adaptations are much cheaper than retraining the entire model. What modern LLM research has shown is that a generative LLM can serve as a really powerful "foundation" model on which other machinery can be built.
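To make the "lightweight adapter" idea concrete, here is a minimal sketch of a LoRA-style adapter in PyTorch. The class name, rank, and alpha values are illustrative assumptions, not taken from any particular library: the point is only that a frozen pretrained layer can be augmented with a small number of trainable parameters.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a small trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights: only the adapter is trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors A and B; their product has the shape of base.weight.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Usage sketch: wrap a linear layer from a pretrained model, then train
# only the adapter parameters on the target task.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")  # far fewer than 768 * 768
```

Because only the low-rank factors are updated, the same frozen base model can be reused across many tasks, with one small adapter per task.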