Artificial Intelligence (AI) has seemingly been in every article and conversation for the better part of 2023. Every email from an advertising or marketing technology provider promotes its use of AI in new features and functionality, all to our benefit, of course! But with the focus on this new technology has come scrutiny of how data is being used and of the potential harms to end users. Just last month we saw the ‘banning’ and then re-enabling of the popular ChatGPT product in Italy.
Against this backdrop, the European Union (EU) has introduced a new Artificial Intelligence Act (the AI Act), designed to make sure AI is used safely and ethically. But many people are worried that these rules might actually harm innovation. Let’s explore how these regulations could slow down progress in AI technology.
High Costs for Small Companies
One big problem with the AI Act is the high cost of compliance. For a company with 50 employees, the cost of following these rules could reach six-figure sums. This is a huge burden for small businesses, which rarely have much spare budget; they need to spend their resources on growing their business and developing new technologies, not on meeting expensive regulatory requirements.
Complex and Unclear Rules
The AI Act has many rules, but some of them are not clear. The act categorises AI systems by risk level: systems with minimal risk, like spam filters, are left largely unregulated, while high-risk systems, such as biometric identification, have to meet very strict requirements. The details of those requirements, however, are often vague and confusing, so businesses do not know exactly what they need to do to comply. This uncertainty makes it hard for companies to plan and to develop new AI technologies.
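To make the tiering concrete, here is a minimal sketch of how a compliance team might record which of its systems fall into which tier. The four tier names follow the Act’s risk framework; the example systems, the `SYSTEM_REGISTER` mapping, and the `obligations_for` helper are hypothetical illustrations, not anything prescribed by the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four-level risk framework (simplified)."""
    UNACCEPTABLE = "prohibited outright"            # e.g. social scoring
    HIGH = "strict requirements before market"      # e.g. biometric identification
    LIMITED = "transparency obligations"            # e.g. chatbots must disclose they are AI
    MINIMAL = "largely unregulated"                 # e.g. spam filters

# Hypothetical internal register a small company might keep while
# working out which of its systems fall under which obligations.
SYSTEM_REGISTER = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "biometric_id_service": RiskTier.HIGH,
}

def obligations_for(system_name: str) -> str:
    """Look up a system's tier and return its headline obligation."""
    # Unknown systems default to the worst case, reflecting the
    # uncertainty that vague rules create for planning.
    tier = SYSTEM_REGISTER.get(system_name, RiskTier.HIGH)
    return f"{system_name}: {tier.name} risk ({tier.value})"

if __name__ == "__main__":
    for name in SYSTEM_REGISTER:
        print(obligations_for(name))
```

In practice, placing each system in a tier requires legal review; the worst-case default in the lookup above simply reflects how a cautious team might respond to the ambiguity.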
Timeline of Enforcement
The timeline for enforcing these rules is also challenging. The AI Act enters into force in August 2024, with different rules taking effect over the following two years. By February 2025, prohibitions on “unacceptable risk” AI systems will begin. Then, by August 2025, rules for general-purpose AI models will kick in. Finally, by August 2026, high-risk AI system rules will be in place. This staggered enforcement adds more layers of complexity for businesses to navigate, making it harder for them to innovate quickly.
Global Competition
Another issue is how these rules affect Europe’s ability to compete globally. The EU wants to set the standard for AI regulation worldwide, but other jurisdictions, such as the United States and China, are working on their own AI rules. Those rules may be less strict, making it easier for their companies to innovate and grow. As a result, European companies might find it harder to compete on the global stage.
Hiring Challenges
Implementing the AI Act also requires hiring many experts to enforce the rules. The EU needs to fill 140 full-time positions, including technical staff and policy experts. But finding the right people can be difficult. Big tech companies often offer higher salaries, attracting the best talent away from regulatory bodies. Without the right experts, it will be hard for the EU to effectively enforce the AI Act, leading to potential delays and confusion.
Impact on Startups
For startups, these regulations can be particularly harmful. Startups are often the driving force behind innovation. They bring new ideas and technologies to the market. But the high cost of compliance and the complexity of the rules can stifle their growth. Small companies may choose to avoid developing new AI technologies altogether because they cannot afford to comply with the regulations.
The EU’s new AI regulations aim to ensure that AI is used safely and ethically. However, these well-intentioned rules may actually harm innovation. High compliance costs, unclear rules, and tough competition from other regions make it hard for European companies to grow and innovate. To foster innovation, it is important for regulations to be clear, affordable, and supportive of new technologies. By doing so, the EU can ensure that it remains a leader in the global AI race while also protecting its citizens.