Enabling policy environment: aligning the regulatory framework with the new EU AI Strategy
Commission President Ursula von der Leyen has called for a coordinated European approach to the human and ethical implications of AI, as well as a reflection on the better use of big data for innovation. Policy options were set out in the white paper 'On Artificial Intelligence - A European approach to excellence and trust', which argues that the technology should be subject to legislation and warns that, without it, the internal market risks fragmenting.
In response to comments received during the consultation period, the Commission has considered a temporary ban on facial recognition and remote biometric identification technology in order to protect citizens’ rights and safeguard them from discrimination. This could lead to a divergence in regulation between the EU and the US, which advocated a hands-off approach in guidance published earlier this year. Such divergence threatens to create additional costs and technical challenges for international firms and may stifle development.
Given the nascent nature of AI and the numerous sociotechnical challenges it can bring, a governance-based approach to legislation that identifies broad objectives is likely to be better received by industry than a prescriptive one. Governance processes could help developers and deployers of covered AI systems identify and quantify relevant risks of harm to individuals or society (including risks related to unfairness) and, where those risks are determined to be significant, implement measures to mitigate them.
Developers of high-risk AI systems might be required to adopt internal policies and procedures to promote the development of trustworthy AI, for example to ensure transparency with customers, users, and other affected stakeholders about the risks inherent in using the relevant AI system. Rigid regulatory and compliance procedures may nonetheless hinder innovation, especially if they are not updated as quickly as AI itself develops.
On the other hand, the EU may capitalise on its reputation as a well-regulated bloc to reinforce the image of ‘trustworthy AI’ with a greater emphasis on human values, citizen protection and ethical principles than other countries, according to the Carnegie Endowment for International Peace report ‘Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?’.
“Given the need to address the societal, ethical, and regulatory challenges posed by AI, the EU’s stated added value is in leveraging its robust regulatory and market power—the so-called ‘Brussels effect’—into a competitive edge under the banner of ‘trustworthy AI.’ Designed to alleviate potential harm as well as to permit accountability and oversight, this vision for AI-enabled technologies could set Europe apart from its global competitors. It can also serve as a key component of increasing the EU’s digital sovereignty by ensuring that European users have more choice and control,” the report says.
Any company implementing an AI strategy using European customer data will have to consider the upcoming actions on AI, the Data Governance Act, the European Data Spaces initiative, the ePrivacy Regulation, and the outcome of efforts to define a data transfer mechanism between the EU and the US.
But even after securing access to data, regulated companies need to be sure they can recover the investment spent on digital technologies. One of the most frequently cited regulatory barriers to digital transformation is the linking of cost recovery to capital expenditure rather than operating expenditure, meaning utilities are incentivised to spend money on physical infrastructure rather than on frequently more efficient software solutions. Moving to a whole-of-life total expenditure (TOTEX) review, similar to the UK’s approach, would facilitate investment in IT projects, including AI.
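The incentive distortion can be shown with a toy calculation. In the sketch below, all figures, rates, and project names are hypothetical illustrations, not drawn from any actual price review: under a CAPEX-linked regime the utility earns a regulated return only on capital spending, so a cheaper software-heavy (OPEX) solution yields less recoverable value than a costlier physical asset, while a TOTEX regime treats both kinds of spend alike.

```python
# Toy illustration (all figures hypothetical) of why CAPEX-linked
# cost recovery biases utilities toward physical assets over software.

def capex_regime_recovery(capex, opex, rate_of_return=0.05, years=10):
    """CAPEX-linked regime: a return is earned on capital spending only;
    operating expenditure is passed through with no return."""
    return capex * (1 + rate_of_return * years) + opex

def totex_regime_recovery(capex, opex, rate_of_return=0.05, years=10):
    """TOTEX regime: a return is earned on total expenditure,
    so software (OPEX) and hardware (CAPEX) are treated alike."""
    return (capex + opex) * (1 + rate_of_return * years)

# Two hypothetical projects assumed to deliver the same outcome:
pipe_upgrade = {"capex": 10_000_000, "opex": 1_000_000}      # asset-heavy
ai_leak_detection = {"capex": 1_000_000, "opex": 4_000_000}  # software-heavy

for name, p in [("pipe upgrade", pipe_upgrade),
                ("AI software", ai_leak_detection)]:
    spend = p["capex"] + p["opex"]
    print(f"{name}: spend {spend:,}, "
          f"CAPEX-regime recovery {capex_regime_recovery(**p):,.0f}, "
          f"TOTEX-regime recovery {totex_regime_recovery(**p):,.0f}")
```

On these assumed numbers the software project costs 5m against the asset project's 11m, yet under the CAPEX-linked regime it recovers only 5.5m versus 16m, while under TOTEX both recover the same multiple of what they spend, removing the bias toward physical infrastructure.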