Advice and Legal Risk Management in the field of Artificial Intelligence
- Guidelines for the use of content to train AI systems
- Copyright protection options for the prompts used with AI systems
- Restrictions on the use of protected content as part of prompts
- The question of whether AI infringes copyright if parts of protected works are included in the output
- The legal protectability of AI-generated works and possible legal claims
- The effects of the license conditions of generative AI systems on the exploitation of AI-generated output
Privacy risks: AI systems may process sensitive personal data such as health data, financial information or personal preferences. If this data is not adequately protected, it can be misused or compromised, which can lead to serious data breaches.
Transparency and explainability: Many AI algorithms are complex and opaque, which makes it hard to understand how decisions are reached or why certain recommendations are given. Yet transparency and explainability are crucial for maintaining user trust in AI systems and for ensuring that they are not discriminatory or biased.
Legal requirements: Data protection laws and regulations, such as the General Data Protection Regulation (GDPR) in the European Union or similar regulations in other regions, set strict requirements for the processing of personal data. Companies using AI systems must ensure that their practices comply with these regulations to avoid legal consequences.
Public trust: To gain and maintain public trust in AI, it is important to take privacy concerns seriously and implement robust data protection measures. If people feel that their data is not secure or could be used inappropriately, they will be less willing to use or interact with AI systems.
Overall, adherence to data protection principles and policies is crucial for companies and AI technology developers to maximize the potential of AI without compromising the privacy and rights of individuals. We offer you solutions for all data protection issues in connection with the use of AI systems.
Erroneous decisions and malfunctions: AI systems are not error-free and can make wrong decisions or malfunction. This can lead to material or immaterial damage, be it through incorrect diagnoses in medicine, incorrect financial decisions or other negative effects.
Responsibility for development and use: Responsibility for the development and operation of AI systems lies with the companies or individuals who create or use them. This raises the question of who is liable for damage caused by the use of AI technologies: the manufacturer, the developer, the operator or even the user.
Human control and supervision: AI systems may act fully or only semi-autonomously, which is why human supervision and control remain necessary. If an AI system malfunctions or causes harm, liability may also extend to the human actors who developed, implemented or supervised it.
Ethics and legal framework: The development of AI systems raises ethical issues that also affect the question of liability. For example, who is liable if an autonomous vehicle is involved in an accident – the manufacturer of the vehicle, the developer of the AI algorithm, the driver or someone else? The legal framework must clarify these questions and establish clear liability rules.
Overall, liability in connection with artificial intelligence is a complex, multidimensional problem that touches on technical, legal, ethical and social aspects. On the one hand, the question arises as to which specific liability rules follow from the laws currently in force, for example the German Civil Code or the Product Liability Act. Independently of this, the new European rules must be taken into account, for example the EU AI Act or the ongoing efforts to create a dedicated AI Liability Directive.
Our Services
Our services in the field of Artificial Intelligence include:
- Advice on the regulatory framework and contractual structuring options;
- Solutions for the design of specific business models, including licensing and data strategy;
- Legal risk assessments for specific individual issues;
- Specific recommendations for establishing efficient compliance management;
- Drafting an AI ethics compass suitable for your company or organization.
Dr. Jana Jentzsch LL.M.
Dr. Jana Jentzsch advises clients, in German and English, on IT compliance, in particular on risk assessments relating to technology use (software licenses, data usage, IT security) and on the legal implications of digitalization strategies.
Call us: 040 22 86 83 86 0

Interested? Get in touch with us.