
AI in business and legal compliance: How to prepare your organisation

The use of artificial intelligence in business is no longer a technological experiment. Today, AI tools are widely used in sales, marketing, HR, data analytics, customer service, content creation, and the automation of internal processes.

From a legal perspective, however, AI should no longer be treated merely as an efficiency tool, but as an area of compliance, information security, and risk management. The EU AI Act entered into force on 1 August 2024 and, as a general rule, will apply from 2 August 2026, although certain obligations have already started to apply earlier. 

AI in Business Is Now a Compliance Issue, Not Just an Innovation Issue

In practice, many companies adopt AI from the bottom up: employees use publicly available large language models, content generators, coding assistants, or analytical tools without any clear internal rules defining the scope of permitted use. This operating model increases the risk of confidentiality breaches, unlawful processing of personal data, unauthorised use of intellectual property-protected materials, and business decisions being made without adequate human oversight. The AI Act has been designed precisely around the principles of transparency, oversight, and a risk-based approach. As a result, organisations are expected to manage the use of AI consciously, systematically, and with proper documentation.

Which Obligations Already Apply Under the AI Act

At present, the most practical provision for businesses is Article 4 of the AI Act, concerning so-called AI literacy. Under this provision, providers and deployers of AI systems must take measures to ensure a sufficient level of knowledge, skills, and awareness among individuals using such systems. This obligation has applied since 2 February 2025 and means that companies should ensure not only access to AI tools, but also training, instructions, rules of use, and an appropriate alignment between users’ level of knowledge and the context in which AI is being deployed.

This marks an important practical shift: simply giving employees access to an AI tool is not enough. A business should be able to demonstrate that the individuals using AI understand the basic limitations of such systems, are capable of assessing the reliability of outputs, and know what types of data must not be entered into them. Otherwise, AI may be used without any real organisational control, even if, formally, it is being used “within” the company’s operations.

What Will Change from 2 August 2026

From 2 August 2026, the core part of the AI Act will become applicable, including provisions relating to most high-risk AI systems and the transparency obligations set out in Article 50. It is important, however, to remain precise: this does not mean that every piece of content produced with the assistance of AI must be labelled. The AI Act provides for information obligations, among others, for systems that interact directly with natural persons, for providers of systems that generate synthetic content (whose outputs must be marked in a machine-readable format), and, in certain cases, for deepfakes and for AI-generated or AI-manipulated text intended to inform the public on matters of public interest.

From a business perspective, this means that organisations should already begin to assess which business processes may fall within the scope of future transparency obligations and which use cases may qualify as high-risk. Expert-level organisational readiness therefore does not consist in implementing one generic “AI policy”, but in distinguishing between specific use cases and assigning the appropriate legal requirements to each of them.

The Main Legal Risks Associated with the Use of AI in Business

The most common practical risk is the disclosure of information that should never be entered into an external AI tool: trade secrets, know-how, negotiation materials, contract content, customer data, employee data, or information subject to confidentiality obligations. In many organisations, the issue does not stem from bad faith, but from the absence of clear guidance on what data may be used in prompts and what data must remain strictly within the internal environment. This is precisely why the use of AI should be integrated with the company’s information security framework, confidentiality rules, and controls over which tools are approved for use.

The second key area is personal data protection. If an AI system processes personal data, the organisation must assess the legal basis for processing, the scope of data involved, the necessity of using the tool, the level of security, and the role of the provider. Under the GDPR, a controller may only use processors that provide sufficient guarantees of implementing appropriate technical and organisational measures, and the relationship must be regulated by a data processing agreement or another appropriate legal instrument. The GDPR also emphasises the need to assess risk, ensure confidentiality, and implement security measures proportionate to the nature of the processing.

Where the intended use of AI is likely to result in a high risk to the rights and freedoms of individuals, the organisation should also consider carrying out a data protection impact assessment. In practice, this is particularly relevant to solutions used for profiling, behavioural assessment, automated decision-making, the analysis of large datasets, or the monitoring of employees and job applicants.

AI in HR and Employee Monitoring: An Area of Particular Risk

Particular caution is required where AI is used in recruitment, candidate assessment, task allocation, employee monitoring, performance evaluation, or decision-making affecting the course of employment.

The AI Act expressly classifies as high-risk, among others, systems intended for the recruitment and selection of individuals, the analysis of applications, candidate evaluation, decisions affecting the terms of employment, promotion, or the termination of the employment relationship, as well as systems used to monitor and assess behaviour or performance.

This means that the use of AI in HR should not be implemented solely on the basis of a tool’s functionality. A prior assessment is essential, covering the purpose of use, the scope of data involved, the impact on employees or candidates, information obligations towards affected individuals, internal documentation, and the contractual relationship with the technology provider. Otherwise, the company exposes itself not only to regulatory risks, but also to employment disputes and evidentiary difficulties in demonstrating how decisions were actually made.

What Documents and Procedures Should Be Implemented Within the Organisation

A professional AI implementation framework within a company should include at least the following layers.

First, the organisation should adopt an AI use policy or a separate internal procedure specifying which tools are permitted, for what purposes they may be used, what categories of data are prohibited in prompts, when human verification of outputs is required, and who is responsible for approving AI use in sensitive processes.
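
Some organisations additionally mirror such a policy in a machine-readable form, so that internal tooling can check a planned use against it before a prompt leaves the company. The sketch below is purely illustrative: the tool name, purposes, data categories, and approver role are hypothetical placeholders, not terms taken from the AI Act or from any real policy.

```python
# Illustrative sketch of an AI use policy in machine-readable form.
# All tool names, purposes, and data categories are hypothetical examples.
from dataclasses import dataclass

# Categories of data that must never appear in prompts to external tools
PROHIBITED_IN_PROMPTS = {
    "trade_secret", "know_how", "negotiation_material",
    "contract_content", "customer_personal_data", "employee_personal_data",
}

@dataclass
class ApprovedTool:
    name: str                     # tool approved by the organisation
    permitted_purposes: set[str]  # purposes signed off in the policy
    human_review_required: bool   # outputs must be verified by a human
    approver: str                 # role accountable for sensitive use

APPROVED_TOOLS = [
    ApprovedTool("internal-llm", {"drafting", "summarisation"}, True, "Head of Legal"),
]

def check_use(tool_name: str, purpose: str, data_categories: set[str]) -> bool:
    """Return True only if the tool, purpose, and data comply with the policy."""
    tool = next((t for t in APPROVED_TOOLS if t.name == tool_name), None)
    if tool is None or purpose not in tool.permitted_purposes:
        return False
    return not (data_categories & PROHIBITED_IN_PROMPTS)

# Example: drafting is a permitted purpose, but not with customer personal data
print(check_use("internal-llm", "drafting", {"customer_personal_data"}))  # False
```

The point of such a mirror is not automation for its own sake, but demonstrability: a rule expressed this way can be logged and audited, which supports the documentation expectations discussed above.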

Second, existing internal regulations should be updated, including the data protection policy, information security documentation, confidentiality clauses, template agreements with vendors, procurement procedures, and, where necessary, HR-related rules.

Third, the organisation should provide training and document all measures aimed at building users’ competence, since this obligation follows directly from Article 4 of the AI Act.

From a governance perspective, it is also advisable to map the AI tools actually being used within the organisation and the specific purposes for which they are deployed. Only on that basis can the company reliably assess whether it is using exclusively low-risk tools or whether it is entering areas that require additional compliance measures, particularly where such tools are used by HR teams, in monitoring activities, in essential services, or in decision-making processes affecting natural persons.
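
One way to operationalise such a mapping is a structured register of use cases with a simple triage rule that flags entries for legal review. The sketch below is a hypothetical illustration; the fields, example entries, and triage logic are assumptions for demonstration, not a classification under the AI Act.

```python
# Illustrative sketch of an internal register of AI use cases.
# The entries and the triage rule are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    tool: str
    business_purpose: str
    affects_natural_persons: bool   # e.g. candidates, employees, customers
    hr_or_monitoring_context: bool  # areas the AI Act treats with particular caution

REGISTER = [
    AIUseCase("content-assistant", "marketing copy drafts", False, False),
    AIUseCase("cv-screening-tool", "shortlisting applicants", True, True),
]

def needs_legal_review(use_case: AIUseCase) -> bool:
    """Crude triage: escalate uses that affect people or touch HR/monitoring."""
    return use_case.affects_natural_persons or use_case.hr_or_monitoring_context

for uc in REGISTER:
    track = "legal review" if needs_legal_review(uc) else "low-risk track"
    print(f"{uc.tool}: {track}")
```

Even a register this simple makes the compliance question concrete: each new tool or purpose becomes a row that must be triaged before use, rather than an informal practice discovered after the fact.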

Why Mere “Approval to Use AI” Within a Company Is Not Enough

In practice, the biggest mistake is not the use of AI itself, but the use of AI without rules, without risk classification, and without process-level accountability. A company that does not define who may use AI, under what conditions, and for what purposes has only limited ability to enforce standards internally. It is harder for such a company to demonstrate due diligence and more difficult to control the flow of information.

By contrast, a well-designed legal and governance framework allows an organisation to combine innovation with security. The company can use AI more effectively and more rapidly, but in a way that is defensible from a regulatory, contractual, and evidentiary perspective.

JP Weber

Rafał Gołąb, PhD

Partner, Attorney at Law
