How will artificial intelligence shape the future for law firms? In this interview, Klitos Platis, Advocate & Partner at Kleanthous & Platis law firm, shares his thoughts on regulating AI, how to use legal technology, and what the implications could be.
With the rapid development of AI, what are the primary concerns and roles for governments and policymakers in regulating this technology?
AI’s rapid advancement has presented unique challenges for governments and policymakers. Key areas of debate include determining what interests AI regulation should protect, such as ethical use, consumer rights, and data privacy. Another critical question is whether to adapt existing regulatory frameworks or develop new ones specifically tailored to AI. Additionally, there’s a need to balance innovation and regulation, ensuring that regulatory measures are proportionate and do not stifle technological advancement. Finally, defining the role of central government in overseeing and guiding AI regulation is essential, given the global impact and reach of AI technologies. These debates will shape a well-regulated AI future that aligns with societal values and needs.
What is the EU’s approach to AI-specific laws?
The European Commission is at the forefront of crafting AI-specific laws in the EU. Two key legislative proposals are currently in progress:
The AI Act: This proposal aims to establish harmonised rules on AI across the EU, also modifying certain Union legislative acts. Its goal is to create a cohesive regulatory environment for AI development and deployment.
The AI Liability Directive: This directive focuses on adapting non-contractual civil liability rules to accommodate the unique challenges posed by AI. It seeks to clarify and establish liability frameworks in instances where AI systems cause harm or damage, ensuring legal certainty and protection for consumers.
These proposals represent a significant step by the EU in shaping the legal landscape for AI, aiming to balance innovation with ethical and societal considerations.
Can you provide an overview of the EU AI Act?
The EU AI Act, proposed in April 2021 and expected to come into force in early 2024, is a comprehensive regulatory framework for AI systems. It takes a horizontal approach, applying generally across AI systems rather than focusing on specific sectors. Its primary aim is to proactively mitigate the risks associated with AI systems. Key features include:
– Defining AI systems;
– Identifying roles in the AI supply chain;
– Outlining prohibited AI practices; and
– Establishing administrative fines.
It works in tandem with the AI Liability Directive, which addresses harm caused by AI after the event, in contrast to the AI Act’s preventive, “before the event” (ex-ante) focus. Following a marathon 37-hour trilogue negotiation, provisional agreement on the AI Act was reached on 8 December 2023. The Act is expected to complete its final technical phases and enter into law in early 2024.
How does the EU AI Act structure its administrative fines for non-compliance?
The AI Act outlines a multi-tiered system for administrative fines:
For Violating Prohibited Practices: The highest tier, with fines up to the greater of €35 million or 7% of the company’s total worldwide annual turnover.
Breach of Core Obligations by AI Providers/Users: The second tier, with fines up to the greater of €15 million or 3% of the company’s total worldwide annual turnover.
Providing Incorrect/Misleading Information: The third tier, with fines up to the greater of €7.5 million or 1.5% of the company’s total worldwide annual turnover.
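The “greater of” mechanism across these tiers can be illustrated with a short sketch (the function name and turnover figures are illustrative, not part of the Act):

```python
def ai_act_fine_cap(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Maximum fine under a 'greater of a fixed amount or a percentage of
    total worldwide annual turnover' tier, as described for the AI Act.
    Integer arithmetic is used to avoid floating-point rounding."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Tier 1 (prohibited practices): greater of €35 million or 7% of turnover.
# For a hypothetical company with €1 billion worldwide annual turnover:
cap = ai_act_fine_cap(1_000_000_000, 35_000_000, 7)
print(cap)  # → 70000000 (7% of €1bn exceeds the €35m floor)
```

For a smaller company with, say, €100 million turnover, 7% is only €7 million, so the €35 million floor applies instead; the fixed amount protects the deterrent effect of the fine at the lower end.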
Can you provide an overview of the proposed AI Liability Directive?
The Proposed AI Liability Directive aims to clarify liability for damages caused by AI systems. Given the complexity and often opaque nature of AI systems, establishing liability can be challenging. The directive introduces a rebuttable presumption of causality, meaning that under certain conditions, an AI system can be presumed to have caused the loss unless proven otherwise. This proposal is part of a broader legislative effort, alongside a revised product liability directive, to create a comprehensive framework for AI-related liability. The two directives are designed to complement each other, ensuring a more robust and clear legal environment for AI-related claims.
On 13 September 2023, President von der Leyen confirmed the EU’s commitment to finalising the AI Act and announced several AI-related initiatives, including a possible global framework for AI.
How does the General Data Protection Regulation (GDPR) apply to AI?
The GDPR’s basic principles are expected to remain applicable to AI, albeit with potential legislative reforms. Since the GDPR has extraterritorial effect, it may also apply to entities outside the EU, for example where they offer goods or services to, or monitor the behaviour of, individuals in the EU. AI introduces specific challenges in GDPR compliance, including:
– Determining whether an AI provider acts as a data processor or controller.
– Issues around data anonymisation.
– The application of AI data protection compliance tools.
– The use of research and pseudonymisation techniques.
– Profiling and automated decision-making concerns.
– Establishing a lawful basis for web scraping activities.
These issues necessitate careful consideration to ensure AI’s compliance with GDPR’s rigorous standards.
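The anonymisation/pseudonymisation distinction above can be made concrete with a minimal sketch (the function name and record are hypothetical; a keyed HMAC is just one common pseudonymisation technique):

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash. Unlike anonymisation,
    this is reversible in principle by whoever holds the key or the mapping
    table, so the output generally remains personal data under the GDPR."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "case_notes": "..."}
record["name"] = pseudonymize(record["name"], b"organisation-held-secret")
```

The same input and key always produce the same token, which preserves linkability across records; changing or destroying the key is what moves the data toward genuine anonymisation.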
How is AI impacting tort law?
AI’s role in tort law, particularly in product liability and negligence, is increasingly significant.
AI systems raise questions about the standard of care and duty of care, especially as AI capabilities may surpass human abilities in certain tasks. That said, the common law duty of care will likely extend to many AI applications, reflecting the principle that “the categories of negligence are never closed.”
How does AI impact contract law, particularly in B2B, B2C, and smart contracts?
In the realm of AI:
B2B AI Systems: Contracts for the development and use of B2B AI systems, whether licensed on-premise or provided as a cloud service, will initially mirror traditional software contracts, and similar legal issues to those found in software and data licences will need to be addressed.
B2C AI Services: For mass-market B2C AI services, such as digital personal assistants, the typical model will continue to be click-accept licensing terms for end-users.
Smart Contracts: Legal complexities arise with blockchain-enabled smart contracts, which are capable of creating real-time, interlinked contract chains. These contracts must align with basic contract law principles such as legal capacity, intention, offer, acceptance, and certainty of terms. While smart contracts introduce new questions, established contract law principles remain applicable, and there is a growing consensus that they can be enforced using these well-established principles.
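The mapping between a smart contract’s state transitions and classical offer-and-acceptance can be sketched in a toy model (all names are illustrative; real smart contracts run on a blockchain, not as ordinary program objects):

```python
from enum import Enum, auto

class State(Enum):
    OFFERED = auto()
    ACCEPTED = auto()
    PERFORMED = auto()

class ToySmartContract:
    """Illustration only: even automated, self-executing contract code must
    map onto the classical elements of offer, acceptance, and performance."""

    def __init__(self, offeror: str, offeree: str, terms: str):
        self.offeror, self.offeree, self.terms = offeror, offeree, terms
        self.state = State.OFFERED  # the coded terms constitute the offer

    def accept(self, party: str) -> None:
        # Acceptance is only valid from the offeree while the offer stands,
        # mirroring the contract-law rule.
        if party != self.offeree or self.state is not State.OFFERED:
            raise ValueError("invalid acceptance")
        self.state = State.ACCEPTED

    def perform(self) -> None:
        # Automated execution can only follow a validly formed contract.
        if self.state is not State.ACCEPTED:
            raise ValueError("cannot perform before acceptance")
        self.state = State.PERFORMED
```

The point of the sketch is that the enforceability questions sit in the transitions: code that executes regardless of capacity, intention, or valid acceptance may still fail as a contract at law.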
Who is who
Klitos Platis is an Advocate, Legal Consultant and Partner at Kleanthous & Platis law firm. He is primarily engaged in litigation, real estate, energy and fintech sectors. With a strong academic background including a BSc in Economics and the completion of the Bar Professional Training Course (BPTC) in England and Wales, Klitos applies an integrative understanding of legal and economic principles to his work. Beyond his legal practice, he contributes to the legal field as a trainer under the Human Resource Development Authority of Cyprus.