“You can’t escape AI anymore – even if you want to”

In recent years, the rapid rise of artificial intelligence (AI) has posed new challenges for regulation. Within the European Union, there have been ongoing efforts to place legal boundaries around machine-based decision-making. The AI Act, the new EU regulation governing the use of artificial intelligence, reached its first major milestone in February of this year: a ban on placing AI systems classified as “prohibited,” i.e., those deemed most dangerous, on the EU market. We discussed these developments and more with Zoltán Karászi, head of the QTICS Group.


Zoltán Karászi is an electrical engineer and economist whose career has evolved into a leading role in the TIC (Testing, Inspection, Certification) industry. In 2010, he became Managing Director of TÜV Rheinland Hungary and Deputy Regional CEO for CEE, where under his leadership the company became the market leader in Hungary and developed into a center for international innovation. Since 2015, he has founded and co-owned several TIC-sector companies, now organized under the QTICS Group, which embodies his vision of a network-based, knowledge-driven, integrative business model.


You say no one can escape the impact of AI anymore.
That’s right—even if you wanted to, you couldn’t stay out of it anymore. This is a technological revolution that is becoming increasingly democratized. By that, I mean that artificial intelligence and deep learning technologies have existed for quite some time with significant historical foundations—but until recently, they weren’t accessible to the general public.

Technological democratization means that these tools are now available to a broad audience with low entry barriers. A classic example of this process was the widespread adoption of the internet. AI had long been used in research institutes, universities, and by scientists, but it remained largely unnoticed until a company skillfully brought a product to market. That company was OpenAI, and the product was ChatGPT. Since then, similar applications have proliferated—some arguably better than GPT—but none have managed to dethrone it.

And we shouldn’t forget forward-looking initiatives like the Panorama Project. This is a Europe-wide knowledge development program aimed at increasing AI literacy, or the public’s understanding of artificial intelligence. After all, AI systems of the future won’t just be used by engineers—they’ll be employed by lawyers, teachers, customer service agents, civil servants, and utility workers. Every user will need to understand the basics of how these systems function.

But AI still “hallucinates” quite convincingly.
These applications have taken over many tedious, time-consuming tasks like researching, searching, and organizing information. AI can now deliver results in the blink of an eye. On one hand, this is highly efficient; on the other, it enables things that were previously impossible for both traditional computing and humans. Even creative, innovative, yet somewhat lazy individuals who aren’t tech-savvy can now bring their ideas to life.

However, there’s always the risk that AI will invent data or extrapolate scenarios and statements that don’t hold up in reality. It can present entirely fictional content in impeccable Oxford English. This happens because the quality of AI-generated content depends entirely on the quality of the data it's trained on. Training AI is therefore a colossal task.

Because in the end, we’re still talking about a machine—as you often emphasize.
Many people tend to overestimate artificial intelligence. Some even believe it has beliefs, emotions, or ethics of its own. But it doesn’t. It’s a machine, and I honestly don’t understand why some presume otherwise. In the “land of miracles”—the United States—some sects ascribe superhuman powers to it. In reality, it simply draws conclusions through algorithms—though very quickly and precisely.

What does it mean when an AI tool is certified by a designated organization?
A so-called notified body is an independent, accredited organization authorized to certify that an AI application complies with EU standards. These tools can only be placed on the EU market after passing the required assessments.

QTICS Group is among the first to apply for designation as a notified body under the AI Act. We already have GDPR certification capabilities, we certify AI management systems, and we play an active role in the Panorama Project and the AIQL Consortium, which is working on developing AI compliance standards.

But what happens if the developer chooses not to follow the rules? That’s possible too, isn’t it?
In that case, legal consequences follow: financial penalties, civil liability, and even criminal liability in some instances. And “we didn’t know” won’t be a valid excuse—under the AI Act, the manufacturer is always responsible.

Who can actually set boundaries for all this?
The AI Act is the first European regulation created specifically to govern this seemingly runaway, democratized, and publicly accessible technology and its use. Its purpose is to protect people from AI applications that could cause physical, mental, or financial harm. It states clearly that there must always be a traceable human behind any AI system.

So no matter what enters the market, there will always be an accountable individual behind it. An intriguing dimension of this issue is how legal frameworks—and even the concept of legal personhood—are evolving. Some countries have already granted rights to natural entities; for instance, an attorney can now represent the interests of a forest or a lake. The same principle may be applied to AI: if, for example, ChatGPT told you to jump into a well and you did, the organization behind the application would be held liable.

This brings us back to a fundamental tenet of modern capitalist societies: market protection is based on manufacturer accountability. However, the necessary evaluations must be conducted by an independent body—a notified certification authority.

What does it mean that, as of February 2025, the EU AI Act designates certain applications as prohibited?
February 2025 marked the first major implementation milestone of the AI Act. From this point on, AI applications categorized as prohibited began to be phased out of the market. These systems cannot be developed, distributed, or tested in public environments. Examples include real-time biometric identification at large gatherings (such as iris or facial recognition), or emotion recognition systems used during HR interviews.

The EU has defined four risk categories:

· Prohibited: These systems are entirely banned. This includes certain manipulative or surveillance-based applications.

· High-risk: These include AI tools used in education, employment, or medical contexts. They must be certified by a notified body before they can be placed on the market.

· Limited-risk: These systems must notify users of their AI nature. For example, chatbots must clearly state that users are not speaking with a human.

· Minimal-risk: Examples include spam filters or AI-based translators. These require no special certification or review.
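For readers who think in code, here is a minimal sketch of the taxonomy above, assuming a simple Python model. The RiskTier enum and the market_entry_steps helper are names invented for this illustration, and the step lists compress the obligations as paraphrased in this interview, not the regulation’s actual text.

```python
from enum import Enum

# Illustrative only: the tier names follow the four AI Act categories
# described above; everything else is a simplification for this sketch.
class RiskTier(Enum):
    PROHIBITED = "prohibited"  # banned outright (manipulative or surveillance-based uses)
    HIGH = "high"              # e.g., education, employment, or medical contexts
    LIMITED = "limited"        # transparency duty: users must know it is AI
    MINIMAL = "minimal"        # e.g., spam filters, AI-based translators


def market_entry_steps(tier: RiskTier) -> list[str]:
    """Return the simplified pre-market duties for a tier, per the interview."""
    if tier is RiskTier.PROHIBITED:
        return ["may not be developed, distributed, or tested in public"]
    if tier is RiskTier.HIGH:
        return ["conformity assessment by a notified body",
                "placement on the EU market"]
    if tier is RiskTier.LIMITED:
        return ["disclose the system's AI nature to users",
                "placement on the EU market"]
    return ["placement on the EU market"]  # minimal risk: no special review


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {'; '.join(market_entry_steps(tier))}")
```

The point of the sketch is the asymmetry it makes visible: only the high-risk tier involves a third party (the notified body) before market placement, which is exactly the role QTICS Group is applying for.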

Do you think there's an AI black market—or will there be one in the future?
Yes, but the situation is more nuanced. The “black market” for AI doesn’t necessarily resemble the dark web. AI is already being developed for military or national security purposes that fall outside the scope of the AI Act. These uses aren’t secret—but they’re not subject to the same regulations.

It’s also clear that generative AI tools already offer opportunities for abuse. With minimal data, one can create avatars, deepfake videos, or voice clones. Harassment, fraud, and manipulation are becoming more sophisticated. So not only will there be a darker side to AI—there already is. The emergence of gray and black zones in the market is inevitable. That’s exactly why regulation is essential: to build defense mechanisms against such risks.

The conversation will continue in our next feature.
