Artificial intelligence (AI) system
An artificial intelligence (AI) system is an engineered system that, for a given set of human-defined objectives, can generate outputs such as content, predictions, recommendations, or decisions that influence real or virtual environments.
This definition, aligned with those of the OECD and the EU AI Act, emphasizes that an AI system is not just the "model" (the mathematical algorithm) but the entire pipeline required to make that model useful in the real world: the hardware, the software infrastructure, the data-processing workflows, and the user interfaces.
An AI system typically operates through a cycle of three stages (a minimal code sketch follows the list):
- Perception: Gathering data from the environment using sensors, cameras, or digital inputs (e.g., a microphone on a smart speaker).
- Processing (Reasoning/Learning): Analyzing that data using machine learning algorithms or logic-based rules to identify patterns or infer meaning.
- Actuation (Output): Executing an action based on that analysis (e.g., unlocking a phone, steering a car, or generating a text response).
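
To make the three stages concrete, here is a minimal, illustrative Python sketch of a smart-speaker-style system. The function names (perceive, process, actuate) and the keyword-matching stand-in for a model are hypothetical simplifications, not a standard API.

```python
# Illustrative perception -> processing -> actuation loop.
# All names and the rule-based "model" are hypothetical examples.

from dataclasses import dataclass


@dataclass
class Command:
    action: str          # what the system decided to do
    confidence: float    # how confident the processing stage is


def perceive(audio_transcript: str) -> str:
    """Perception: gather raw input (text standing in for microphone audio)."""
    return audio_transcript.strip().lower()


def process(observation: str) -> Command:
    """Processing: a trivial rule-based stand-in for a learned model."""
    if "lights on" in observation:
        return Command(action="turn_on_lights", confidence=0.9)
    return Command(action="do_nothing", confidence=0.5)


def actuate(command: Command) -> str:
    """Actuation: execute the decided action, affecting the (virtual) environment."""
    if command.action == "turn_on_lights" and command.confidence > 0.7:
        return "Lights switched on."
    return "No action taken."


if __name__ == "__main__":
    observation = perceive("Hey, turn the lights on please")
    decision = process(observation)
    print(actuate(decision))  # Lights switched on.
```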
Strategic Impact: Distinguishing between an "AI model" and an "AI system" is critical for governance. A model might be safe in a lab, but the system wrapping it can introduce new risks, for example by connecting the model to the internet or granting it permission to delete files. Frameworks such as the EU AI Act regulate systems, not just abstract code, because the impact on human rights occurs at the system level, where the technology interacts with people.
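
As a rough illustration of why the system boundary matters, the sketch below wraps the same toy model in two different systems: one that blocks destructive actions and one that executes whatever the model proposes. All class and function names are hypothetical; the point is only that the risk profile depends on the permissions the system grants, not on the model alone.

```python
# Illustrative only: the same "model" wrapped by two different systems.

def model(prompt: str) -> str:
    """A stand-in model: it merely proposes an action as text."""
    return "delete_file:/tmp/report.txt" if "clean up" in prompt else "noop"


class RestrictedSystem:
    """System wrapper that refuses to execute destructive actions."""

    ALLOWED = {"noop"}

    def run(self, prompt: str) -> str:
        proposal = model(prompt)
        action = proposal.split(":", 1)[0]
        if action not in self.ALLOWED:
            return f"blocked: {proposal}"
        return f"executed: {proposal}"


class PermissiveSystem:
    """System wrapper that executes whatever the model proposes."""

    def run(self, prompt: str) -> str:
        return f"executed: {model(prompt)}"


if __name__ == "__main__":
    prompt = "please clean up my files"
    print(RestrictedSystem().run(prompt))   # blocked: delete_file:/tmp/report.txt
    print(PermissiveSystem().run(prompt))   # executed: delete_file:/tmp/report.txt
```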