
Foundation model (FM)

A Foundation Model (FM) is a large-scale artificial intelligence model trained on a vast amount of data (often using self-supervision at scale) that can be adapted to a wide range of downstream tasks.

Coined by the Stanford Institute for Human-Centered AI (HAI), the term reflects the role of these models as a "foundation" upon which other specialized applications are built. Unlike traditional "narrow AI" models, which were trained for a single specific purpose (e.g., a model built solely to detect spam), a Foundation Model learns broad patterns, reasoning abilities, and representations from its massive training set. This allows the same model to be fine-tuned for diverse tasks, such as sentiment analysis, code generation, summarization, and translation.

To understand the significance of Foundation Models, one must recognize their core characteristics:

  • Scale: They are typically trained on petabytes of text, images, or audio and contain billions of parameters. This scale gives rise to "emergent properties": capabilities that were not explicitly programmed or anticipated (e.g., a language model learning to do basic arithmetic).
  • Self-Supervision: They are trained on unlabeled data (e.g., "predict the next word in this sentence") rather than requiring humans to manually label every example. This allows them to learn from internet-scale volumes of data.
  • Adaptability (Transfer Learning): Once pre-trained, an FM can be adapted to a specific domain (like legal or medical) with a relatively small amount of new data. This is often referred to as "fine-tuning" (see the sketch after this list).
  • Homogenization: Because so many applications are now built on top of a few powerful FMs (like GPT-4, Llama, or Claude), improvements and defects in the foundation model ripple out to every application built upon it.
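To make the adaptability point concrete, here is a minimal sketch of fine-tuning a general-purpose pre-trained model on a small task-specific dataset, using the Hugging Face transformers and datasets libraries. The model name, dataset, and training settings below are illustrative assumptions chosen for brevity, not a recommendation or part of the original text.

```python
# Minimal sketch: adapting a pre-trained model to a narrow downstream task
# (binary sentiment classification) with a small labeled dataset.
# Assumes the Hugging Face `transformers` and `datasets` libraries are installed;
# the model and dataset names here are placeholders for illustration.

from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Start from a general-purpose pre-trained model instead of training from scratch.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A relatively small task-specific dataset is enough to specialize the model.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="fm-finetuned",
        num_train_epochs=1,
        per_device_train_batch_size=8,
    ),
    train_dataset=dataset,
)
trainer.train()  # The fine-tuned model retains the broad knowledge learned during pre-training.
```

The design point is the one the list makes: the expensive, data-hungry step (pre-training) is done once by the foundation model provider, and downstream teams only supply a small labeled dataset and a short training run to specialize it.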

Strategic Impact: Foundation Models represent a paradigm shift in AI development. They lower the barrier to entry for building AI applications because organizations no longer need to train models from scratch. However, they introduce systemic risks: if a Foundation Model has a hidden bias or security vulnerability, that flaw is inherited by every single downstream application that relies on it, creating a single point of failure for the AI ecosystem.


Ready to see what security-first GRC really looks like?

The Scrut Platform helps you move fast, stay compliant, and build securely from the start.

Book a Demo