The EU finalised negotiations on its Artificial Intelligence Act this past Friday, reaching agreement between the EU Council and EU Parliament. This means the parliamentary vote in the new year becomes a procedural formality, as these two legislative bodies are in charge of approving legislation. As could be predicted, some say it hasn't gone far enough, some feel it has gone too far, and others are reserving judgement until the technical details are worked out in full. The devil will, unsurprisingly, be in the detail. Nonetheless, organisations can start their preparations: regulation is coming, and violations could be expensive.

Summary of key agreements:

- Applications of AI deemed to threaten human rights and democracy are classified as unacceptable and will be banned, e.g. emotion recognition in the workplace and in schools; biometric classification of people using sensitive, personal or discriminatory criteria; social scoring; manipulation of human behaviour that seeks to override free will; exploitation of people's vulnerabilities; 'untargeted' harvesting of facial images to build facial recognition databases; and biometric mass surveillance. Surveillance exceptions are in place for law enforcement, and even those have been limited to certain serious crimes.
- AI systems that could harm health, violate safety and fundamental rights, damage the environment or compromise the rule of law are classified as high risk and will be required to meet certain obligations. These include undertaking impact assessments focused on fundamental human rights, evaluating and mitigating systemic risk, and conducting adversarial testing (!!!) to minimise the risk of problematic or harmful outputs, e.g. sexist, racist, antisemitic, homophobic, classist or other discriminatory outputs, as has been seen in the past (sometimes from organisations with the resources to do better). Further, citizens and consumers will have the right to lodge complaints and receive feedback on AI systems' impact on their rights.
- Regulation of base models won out over self-regulation, i.e. the decision was between regulating only the application or use of AI vs. also regulating the underlying AI models. Companies building AI models will be required to ensure transparency by drawing up technical documentation; adhere to EU copyright law (the devil will be in the 'how' here); assess and mitigate systemic risks; ensure transparency, at summary level, about the data used to train AI models; report serious incidents; and implement robust cybersecurity measures.
- There is an attempt to work against monopolisation of the AI market by larger tech companies, with their deep stores of resources and larger market share, through the provision by national authorities of infrastructure for small and medium-sized businesses to develop and train their AI models and solutions before going to market (a tough goal to meet).
- Fines of up to 7% of global turnover or €35m are on the cards.

This post was originally posted on LinkedIn in December 2023.