Technology evolves rapidly, often faster than legislators can regulate it. This makes it difficult, if not impossible, to create and enforce specific rules by the time a new technology enters the market: legal processes require analysis, evaluation, and approval at national and international levels, which slows their adoption.
Artificial intelligence (AI) has been no exception. Until recently, the European Union did not have a unified regulatory framework on the subject. This gap was filled by the AI Act, whose first provisions became applicable on February 2, 2025. It is the first European regulation governing the use, commercialization, and application of AI systems, providing common rules for all member states and addressing both providers of AI services and end users.
The AI Act establishes a set of harmonized rules for the introduction, use, and sale of artificial intelligence systems in the EU. In particular, it prohibits certain AI practices considered dangerous or harmful, introduces specific requirements for systems classified as high-risk, and imposes obligations on the operators who manage them. It also lays down common rules to ensure the transparency of certain AI systems and to govern the commercialization of general-purpose AI models. The regulation further covers market monitoring and surveillance, governance, and enforcement. Finally, it includes measures to support innovation, with particular attention to small and medium-sized enterprises, including startups.
The AI Act aims to reduce regulatory fragmentation among EU Member States by ensuring uniform rules across the entire EU market. Its main goal is to promote trustworthy and human-centered artificial intelligence, while also ensuring a high level of protection for health, safety, and fundamental rights. The regulation also seeks to improve the functioning of the internal market and support innovation by creating an environment conducive to the development and adoption of AI systems in Europe.
Furthermore, the AI Act prohibits the use of practices deemed to pose an unacceptable risk, such as AI systems that manipulate people’s decisions or exploit their vulnerabilities, as well as those that assess or classify individuals based on their social behavior or personal characteristics. As of February 2, 2025, these practices are officially banned within the European Union.
The AI Act applies to anyone who develops, markets, distributes, or uses artificial intelligence systems within the EU, regardless of where they are based. It also covers AI systems used in the EU, even if they were developed elsewhere. However, certain categories are excluded from the regulation.
The AI Act does not apply to AI systems intended for military or national security purposes, nor to those used for scientific research, provided they do not infringe on fundamental rights. It also excludes AI systems used for personal and non-professional activities, and open-source systems, unless they fall under the high-risk category.
The AI Act adopts a risk-based approach, classifying AI systems into four tiers: unacceptable risk (prohibited practices), high risk (subject to strict requirements), limited risk (subject to transparency obligations), and minimal risk (largely unregulated). The higher the risk, the stricter the obligations.
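As a rough illustration of this tiered logic, the sketch below models the four categories and the broad consequence attached to each in Python. The tier names follow the regulation, but the mapping is our own simplification for illustration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (simplified labels)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # strict requirements before market entry
    LIMITED = "limited"            # transparency obligations, e.g. chatbots
    MINIMAL = "minimal"            # no specific obligations under the Act

# Broad consequence per tier -- an illustrative simplification, not legal advice.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "allowed only with risk management, documentation, and oversight",
    RiskTier.LIMITED: "allowed, but users must be informed they are interacting with AI",
    RiskTier.MINIMAL: "allowed with no additional AI Act obligations",
}

def consequence(tier: RiskTier) -> str:
    """Return the broad regulatory consequence for a given risk tier."""
    return CONSEQUENCES[tier]

print(consequence(RiskTier.HIGH))
# -> allowed only with risk management, documentation, and oversight
```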
The AI Act introduces specific obligations for companies and public administrations that use AI systems, requiring them to ensure adequate staff training.
Article 4 of the regulation states that anyone working with AI systems must possess appropriate technical knowledge, experience, and training. This requirement is not limited to tech companies; it also applies to any organization using AI systems, regardless of the industry.
As a result, companies must ensure their employees and collaborators are properly educated in AI usage, capable of assessing its impact, and aware of the context in which it is applied.
The implementation of the AI Act will be gradual, with several deadlines for compliance with its provisions.
In August 2025, rules on AI governance and obligations for general-purpose AI models will come into force. At this stage, those subject to the regulation must maintain detailed documentation on system testing and development, follow standardized procedures to ensure safety, and conduct regular evaluations to verify compliance. Failure to meet these obligations may result in significant penalties.
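To make the record-keeping duty more concrete, here is a minimal sketch of what a structured evaluation log might look like. The schema and every field name are hypothetical: the regulation mandates documentation and regular evaluation but does not prescribe this particular format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvaluationRecord:
    """One entry in a model's compliance log (hypothetical schema)."""
    model_name: str
    model_version: str
    evaluated_on: date
    test_suite: str   # e.g. an internal safety or robustness benchmark
    passed: bool
    notes: str = ""

# Example: appending a periodic evaluation for a general-purpose model.
compliance_log: list[EvaluationRecord] = []
compliance_log.append(EvaluationRecord(
    model_name="example-gp-model",   # hypothetical model identifier
    model_version="1.4.0",
    evaluated_on=date(2025, 8, 2),
    test_suite="robustness-v2",
    passed=True,
    notes="No regressions against the previous release.",
))
```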
In August 2026, the AI Act will reach full application, with the remaining rules taking effect for all AI systems, including most of those classified as high-risk (obligations for certain high-risk systems embedded in already regulated products follow in August 2027).
In business applications of artificial intelligence, most current systems fall into the minimal- or limited-risk categories. Even so, it is crucial for companies to train their workforce properly, giving employees the tools and knowledge they need to harness the potential of this technology safely, responsibly, and effectively.
For more details, you can consult the full regulation at the following link.
Discover how ShowK.AI protects your company's privacy!