By Talita Soares, EU Strategy and Policy Advisor | Internationalization Department, CCG/ZGDV Institute
Artificial intelligence (AI) has arrived in our lives, changing the way we live and work. Yet its emergence raises questions about what our future will look like and leaves concerns about security and ethics hanging in the air: “Can these technologies be harmful to society? How do we ensure the transparency of privately funded research? How can we ensure that systems are fair and respect human rights? Which applications could be most harmful to citizens?”
These are the questions that prompted the 27 EU countries to adopt, this year, the first legislation entirely dedicated to artificial intelligence.
What impact will the new rules have on research?
The new legislation underscores the political commitment to boost investment in AI research and innovation, with a strong impact on various EU R&D programmes, such as Horizon Europe and Digital Europe. Existing public-private partnerships are also expected to evolve to reflect new policy priorities around AI.
The European Commission and the European Parliament play an important role in defining future calls for proposals under these programmes and future partnerships in this area. Direct cooperation with EU institutions is essential to ensure that we, as research participants, remain informed and able to participate in the various initiatives.
Are citizens protected?
Partially. The regulation addresses the complexity, bias, and behavior of AI systems to ensure they are consistent with human rights. It establishes a non-exhaustive list of “high-risk AI applications” that may be banned; however, it does not specify penalties for misuse.
The European Commission has called on all EU countries to put in place procedures to assess compliance with the use of these high-risk applications, as well as possible sanctions. These measures will be essential to protect citizens throughout the development of AI technology.
So-called “high-risk AI applications” include the use of AI to influence behavior, government-run social scoring, and real-time remote biometric identification (including emotion detection) in publicly accessible spaces. Other controversial practices under regulatory scrutiny include AI-based CV-screening tools that rank job applicants and AI systems designed to prioritize the dispatching of emergency first-response services, including emergency and medical assistance.
What comes next?
The new rules will come into force over the next two years and will be binding on all EU countries. A new AI Office will be created within the European Commission to help companies, researchers, and service providers, among others, comply with the new legislation.
It is now crucial for Europe to avoid over-reliance on third countries for key AI technologies. Matching the technological capabilities of American players will require significant resources and concerted effort.
We will continue to closely monitor AI discussions, alongside regulators, to ensure the development of the most ethical and best-in-class AI technology for our economy and society.