In August 2026, obligations for high-risk systems will come into force, including those used in workforce management, such as recruitment, performance evaluation and other decisions that affect working conditions, including remuneration. The European law was published in July 2024 and is being implemented in phases. It is this stage — the one that directly affects how companies manage their teams — that becomes applicable in August.
This topic has yet to gain traction in Brazil’s commercial and management landscape. It is not on the agendas of sales leadership, it is not part of HR or finance discussions, and it rarely appears in the innovation forums that take place every week in the country’s main cities. The debate is largely confined to legal experts, data protection specialists and regulators — an important but still limited circle.
That concerns me. Not because the European law applies directly to Brazil — in most cases, it does not. But because we have seen this playbook before. The European GDPR was approved in 2016. Two years later, Brazil approved the LGPD. The path Europe opens today often becomes the path Brazil follows tomorrow.
The EU AI Act may be the next chapter in that story. And it feels like the right time to broaden this conversation.
What changes in practice — and why this should already be a question for your business
Before explaining what the law is, I want to be clear about what it requires — because that is the most honest way to show why this debate matters beyond the legal sphere.
If Brazilian regulation follows the same principles as the EU AI Act — and there are concrete reasons to believe it will — these are the questions any company using AI in processes that affect people would need to answer:
Do you use AI to set targets or calculate commissions?
AI systems that influence decisions about employee remuneration and performance are classified as high-risk. Companies would need to document how the algorithm arrived at its outputs, ensure human oversight with real power to intervene, and inform employees that an automated system influenced a decision about their career. Without this audit trail, the ability to defend decisions in disputes or audits is significantly weakened.
Do you use AI to screen CVs or assess candidates?
AI systems used in recruitment are explicitly listed as high-risk. Companies would need to audit the model, ensure human oversight in the final decision and document the criteria used. Candidates would have the right to know that an algorithm influenced the decision — and to challenge it.
Do you use AI-powered productivity monitoring tools?
Systems that monitor employee behaviour — screen time, communication analysis, activity scores — may be classified as systems that infer behavioural states in the workplace. European law already prohibits some of these practices. A Brazilian regulation based on similar principles would place these tools under direct scrutiny.
Do you use ChatGPT or other AI tools with employee data?
Using AI with employees’ personal data (performance reviews, salaries, target history) without a proper legal basis and formal instruments, such as supplier contracts and data processing records, already represents a risk under the LGPD.
Under AI regulation, this would likely be one of the first items on a compliance checklist. And many Brazilian companies still do not have this documentation properly structured.
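To make the documentation point concrete, here is a minimal sketch of the kind of audit-trail record a company might keep each time an AI system influences a decision about a person. This is an illustration only: every field name is hypothetical, not something prescribed by the EU AI Act, the LGPD or Bill 2,338/2023.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """Illustrative audit-trail entry for an AI-influenced people decision.
    All field names are hypothetical, not mandated by any regulation."""
    employee_id: str         # subject of the decision
    decision_type: str       # e.g. "commission_calculation", "cv_screening"
    model_version: str       # which model or ruleset produced the output
    inputs_summary: dict     # data the system actually used
    model_output: str        # what the system recommended
    human_reviewer: str      # person with real power to intervene
    human_overrode: bool     # whether the reviewer changed the outcome
    employee_notified: bool  # was the person told an AI was involved?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    employee_id="emp-042",
    decision_type="commission_calculation",
    model_version="commission-model-v3.1",
    inputs_summary={"quota_attainment": 1.12, "period": "2026-Q3"},
    model_output="commission = BRL 8,400",
    human_reviewer="manager-007",
    human_overrode=False,
    employee_notified=True,
)
print(record.decision_type)  # -> commission_calculation
```

The point is not the format but the habit: if each automated decision leaves a record like this, the questions above become answerable instead of alarming.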
As the founder and CEO of a RegTech company focused on commercial management — using AI to analyse sales performance and support decisions on variable remuneration — I do not see this as regulatory theory. I see a reflection of the day-to-day reality of hundreds of Brazilian companies that already operate these systems, but have not yet asked the right questions.
What the EU AI Act is
The EU AI Act — Regulation (EU) 2024/1689 — is the world’s first comprehensive legal framework on artificial intelligence. It is not a guideline or a recommendation. It is binding law, with clear deadlines and fines of up to €15 million or 3% of global annual turnover, whichever is higher, for breaches of the high-risk obligations, and up to €35 million or 7% for prohibited practices.
Its logic is risk-based: the greater the potential impact of an AI system on individuals and society, the stricter the obligations. There are four levels:
- Unacceptable risk: prohibited since February 2025
- High risk: strict obligations from August 2026
- Limited risk: transparency obligations
- Minimal or no risk: no specific regulation
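The tiered logic, and the way the fine ceilings scale with company size, can be sketched in a few lines. The use-case lists below are illustrative shorthand for the categories named in this article, not the Act's legal definitions; the amounts and percentages follow the penalty structure described above.

```python
# Simplified sketch of the EU AI Act's risk-based logic.
# Use-case names are illustrative, not the Act's legal definitions.

PROHIBITED = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK = {"recruitment_screening", "performance_evaluation",
             "remuneration_decisions"}
LIMITED_RISK = {"customer_chatbot"}  # transparency obligations only

def risk_tier(use_case: str) -> str:
    """Map a use case to one of the Act's four risk tiers."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"

def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Fine ceiling: the higher of a fixed amount or a share of
    global annual turnover, per the Act's penalty structure."""
    if tier == "unacceptable":
        return max(35_000_000, 0.07 * global_turnover_eur)
    if tier == "high":
        return max(15_000_000, 0.03 * global_turnover_eur)
    return 0.0  # limited/minimal tiers: no fine ceiling modeled here

tier = risk_tier("remuneration_decisions")
print(tier, max_fine_eur(tier, 2_000_000_000))  # -> high 60000000.0
```

Note what the `max` captures: for a company with €2 billion in turnover, the 3% ceiling (€60 million) dwarfs the €15 million floor. The exposure scales with the business.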
Already prohibited since February 2025 are social scoring systems, emotion recognition in the workplace, subliminal manipulation and real-time biometric surveillance in public spaces.
From August 2026, full compliance will be required for high-risk systems, including AI used in employment, workforce management and performance evaluation. These systems must have technical documentation, traceability of results and genuine human oversight, and workers must be informed before the system is put into use.
What stands out to me is that the principles underpinning these requirements — transparency, traceability, human oversight — are not new. They are the same principles any system affecting people should already have. Regulation is simply making mandatory what should already be standard.
An important nuance: the European Commission’s Digital Omnibus Package, presented in November 2025, proposes linking the application of high-risk obligations to the availability of harmonised technical standards. If approved, the practical enforcement of these obligations may be delayed, depending on when those standards are published. Even so, experts advise against waiting — the process of becoming compliant is lengthy.
Who is subject to the law — and what this means for Brazil
The EU AI Act has extraterritorial reach. If the outputs of an AI system are used or have an effect within the European Union, the company is subject to the regulation, regardless of where it is based. Brazilian companies with European clients, employees or operations are already within scope today.
For the rest of the Brazilian market — companies operating exclusively in Brazil — the European law does not apply directly. But this is where the analogy with the LGPD becomes particularly relevant.
The parallel is clear: the GDPR was approved in 2016. Two years later, Brazil approved the LGPD with a very similar structure and principles. The Brazilian Artificial Intelligence Legal Framework (Bill 2,338/2023) was approved by the Senate in December 2024 and has been under review in the Chamber of Deputies since March 2025. The National Data Protection Authority (ANPD) has included AI as one of its four main enforcement priorities for 2026–2027. The path is being paved.
Meanwhile, the market is growing faster than any regulation. Estimates from the Brazilian Internet Steering Committee (CGI.br, 2025) indicate that 50 million Brazilians already use some form of AI tool. Companies are adopting automated systems in HR, finance, healthcare and public safety, often without a clear understanding of the associated risks.
“Not regulating would leave Brazilian citizens at the mercy of systems that can be discriminatory, that often decide the future based on the past, without the possibility of correction or oversight.”
— Laura Schertel, rapporteur of the Senate-appointed legal commission
A conversation that needs to reach the commercial market — and the startup ecosystem
Brazil has an opportunity it did not have with the LGPD: to prepare in advance, to debate early, and to build a mature position on how we want to use AI in decisions that affect people.
For founders building AI solutions — in HR, sales management, recruitment, credit or healthcare — there is one point that deserves particular attention: in the EU AI Act’s own terms, you are not merely deployers of AI systems. You are providers. And the Act places heavier obligations on providers than on deployers.
Those building today have an advantage that established companies do not: they can get it right from the start. Retrofitting governance is far more expensive than embedding governance from day one.
We often talk about how AI makes everyday work easier. It is time to talk about how to use it with awareness, safety and governance. Regulation and compliance are not just bureaucracy — they are the natural consequence of technologies that affect rights, and AI already does.
This is a topic that every startup founder, every HR leader and every commercial head should be discussing now. Not because the law requires it, but because it is the responsible way to operate. And because when regulation arrives, those who have already built with these principles will not need to scramble.
If you have a view on this — whether you agree, disagree or see the debate from another angle — I would like to hear it. This is a conversation that needs to happen beyond the legal sphere.

Angelita Oliveira
Founder & CEO at Salespring
Angelita is founder and CEO of Salespring — a Brazilian RegTech focused on commercial management, an associate member of ABFintechs in the RegTech category, and founder of the Women in Sales community.


