How to Use Generative AI in a Legal and Profitable Way – and 5 Tips to Guide You in Practice

Published on:
February 12, 2025
|
Reading time:
7 minutes
WRITTEN BY
Frederik Them Pedersen
Assistant attorney

Want to know more about AI compliance in practice?

Get to know how the Danish DPA responded to the use of AI in healthcare

The use of generative artificial intelligence (generative AI) offers significant opportunities for innovation, but you must always take into account the regulatory requirements of the EU’s AI Act as well as the ethical implications.

By understanding the AI Act and implementing responsible practices, you and your organization can enjoy the (profitable) power of AI and, at the same time, ensure compliance and minimize risks.

And I’ll tell you how to do this in this blog post. But let’s start by defining what generative AI actually is, so you know when you’re using it.

What is generative artificial intelligence (AI)?

Generative artificial intelligence (generative AI) is a specific type of general-purpose AI system (GPAI).

While all generative AI systems fall under the broader category of general-purpose AI, not all general-purpose AI systems are generative in nature.

Therefore:

To find out how generative AI is regulated, you must look into how general-purpose AI is regulated in the AI Act.

The AI Act describes generative AI systems as those designed to create new content such as text, images, audio, or code, based on large statistical language models, and that require extensive datasets for training.

These systems are often capable of integration into other AI systems, posing potential systemic risks due to their broad applicability.

Thus, generative AI is a type of artificial intelligence that can serve a variety of purposes.

Examples of generative AI are ChatGPT, Copilot, and DeepSeek.

It's always important to understand the rationale behind the rules, because it gives us guidelines on how to interpret them:

Specific regulation of general-purpose AI is needed because such models are trained on large datasets so that they can serve a variety of purposes. For that reason, they can pose a systemic risk.

This is why we need to regulate the use of generative AI (and AI systems in general). Three requirements in particular are the regulatory focal points of the AI Act – and my advice is to make sure your provider meets them.

Three requirements you should tick off when choosing a provider  

Before using a general-purpose AI model, you should make sure that the provider can document:  

  1. Transparency, meaning that outputs are clearly marked as AI-generated and that misleading information is prevented.
  2. Compliance, including instructions and limitations, policies, technical documentation, and so on.
  3. Reporting of the AI model to the European Commission, along with evaluation strategies, human oversight measures, and so on.

When does your use of generative AI make you a provider?

It’s crucial to understand the distinction between the roles of a provider and a deployer of an AI system as it determines the level of responsibility and regulatory obligations:

A provider is the legal ‘person’ responsible for either developing or placing AI systems on the market (therefore, an organization can be a provider even if they haven’t developed the AI system themselves). Deployers use these systems within their processes.

But here’s the tricky part:

A lot of companies have integrated a generative AI system into a system or process that they use themselves.

A question we often get from these companies is: Does this make us a deployer or a provider?

If you use general-purpose AI in your internal processes, you would be a deployer of the AI system.

If you sell an AI system, or otherwise provide it commercially on the market, you might become a provider.

That is the case if you:  

  • Have defined and narrowed down the purposes of the AI
  • Put it on the market in your own brand name
  • Use it for high-risk purposes (such as critical infrastructure, education and vocational training, and recruitment).

If bullet 3 is the case, your company would be covered by Article 6 of the AI Act and would have to comply with a number of obligations for high-risk AI systems. The reason for this is Article 25, which states:

“Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16 if (...) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.”

5 practical tips to guide you when using generative AI

You must consider these five practical factors when integrating generative AI into your operations:

#1: Safeguard data privacy and security

Ensure that personal and sensitive data are not inadvertently shared with AI systems, as this could lead to privacy violations.

The main reason for this is that generative AI systems typically use the input you ‘feed’ them for their own training. You should therefore not share customer information, trade secrets, or material protected by intellectual property rights with the system.

However, it’s possible to set up some generative AI systems so that the system doesn’t use the input for its own training.  
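One technical safeguard for the advice above is to screen prompts for obvious personal data before they leave your organization. The sketch below is a minimal, hypothetical illustration using simple regular expressions; the pattern names and patterns are illustrative assumptions, and a real deployment would need a far more robust PII-detection approach.

```python
import re

# Hypothetical illustration: screen a prompt for obvious personal data
# before it is sent to a generative AI system. The patterns below are
# simplified examples, not a complete PII-detection solution.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Danish CPR numbers: six digits, optional hyphen, four digits.
    "cpr_number": re.compile(r"\b\d{6}-?\d{4}\b"),
    # Danish phone numbers: optional +45 prefix, four pairs of digits.
    "phone": re.compile(r"\b(?:\+45\s?)?\d{2}(?:\s?\d{2}){3}\b"),
}

def find_pii(prompt: str) -> list[str]:
    """Return the names of the PII patterns that match the prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Only allow prompts in which no known PII pattern was found."""
    return not find_pii(prompt)

print(safe_to_send("Summarise our Q3 marketing plan"))         # True
print(safe_to_send("Draft a letter to jane.doe@example.com"))  # False
print(find_pii("Customer CPR: 010190-1234"))                   # ['cpr_number']
```

A filter like this would typically sit in a central gateway between employees and the AI system, which also fits the corporate-account setup discussed under tip #2.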

#2: Ensure awareness and training across your organization

Mistakes happen.

Especially when humans are involved.

Thus, you should educate your employees about the risks and responsibilities associated with using AI systems to prevent misuse and ensure compliance. You can also ensure this by drafting policies on how your employees are allowed to use AI.

Other ways to ensure good cyber hygiene include reviewing your options for setting up the AI system, such as using a corporate account to centralize the licences. You should also appoint super users who can manage how the AI system is used.

#3: Be critical of the AI’s output

You should always verify the accuracy and reliability of AI-generated content to avoid potential biases or misinformation. You should be able to answer questions like these:  

  • Is the output true?
  • Is the output non-discriminatory?
  • Is the output protected from others using it (can you own the IP)?  

If the answer to any of these questions is no, you shouldn’t use the output.

If you can’t answer these questions with confidence, or if you don’t have the knowledge to critically assess the AI’s output, you shouldn’t use the generative AI system.

#4: Be aware of GDPR compliance

You should be aware of your GDPR obligations related to the use of AI, meaning that you need to make sure that you don’t share personal data with the AI system.

Therefore, you need to carry out risk assessments under the GDPR (not to be confused with the risk categorization under the AI Act). The risk assessment should include new threats that are unique to AI systems, such as bias in decision-making, lack of explainability (the AI system’s inability to explain its reasoning), or model drift (where model performance degrades due to changes in the data or in the link between input and output variables).
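Model drift can be made concrete with a simple monitoring check: compare the model’s recent accuracy against the baseline measured at deployment and flag the model for review when performance degrades. This is a minimal, hypothetical sketch; the 5-percentage-point tolerance is an illustrative assumption, not a regulatory requirement.

```python
# Hypothetical illustration of monitoring for model drift: flag the
# model for human review when its recent accuracy falls too far below
# the baseline accuracy measured at deployment time.

def drift_detected(baseline_accuracy: float,
                   recent_correct: int,
                   recent_total: int,
                   tolerance: float = 0.05) -> bool:
    """Flag drift when recent accuracy drops below baseline - tolerance."""
    recent_accuracy = recent_correct / recent_total
    return recent_accuracy < baseline_accuracy - tolerance

# The model scored 92% on its validation set at deployment time.
print(drift_detected(0.92, recent_correct=178, recent_total=200))  # 0.89 -> False
print(drift_detected(0.92, recent_correct=160, recent_total=200))  # 0.80 -> True
```

A check like this can feed directly into the documentation discussed under tip #5, since it produces an auditable record of when the system’s performance was reviewed.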

Also, you always need to carry out a data protection impact assessment (DPIA), and most likely you will need a data processing agreement (DPA) between you and the AI system provider.

Finally, you need to make sure that there’s a legal basis and purpose for the entire ‘data journey,’ so that you know for which purposes the data you put into the system is used.

#5: Ensure documentation

You must document responsible use of AI in policies and procedures such as:  

  • Policy for responsible use of generative AI
  • Procedures for personal data breach
  • Information security policy for supplier relationships
  • Privacy policy
  • DPIA
  • Risk assessment

I hope my practical walkthrough has made you feel unstoppable and equipped to use generative AI in a legal and profitable way.

If you’re curious to know more about the use of AI in practice from a GDPR point of view, feel free to check out my colleague’s blog post on the use of AI in healthcare here.  

Category:
AI Act