Artificial Intelligence
Share Your Thoughts! We’re opening the conversation on integrity and anti-corruption in new tech to our community. Each week we’ll explore one of the 2019 OECD Global Anti-Corruption & Integrity Forum themes, and we want to hear what you think about the challenges and opportunities of integrity in Artificial Intelligence, Blockchain, Big Data Analytics, and Civic Technologies. Comment below!
What is Artificial Intelligence?
Artificial intelligence refers to machine-learning algorithms that, combined with sensors and other computer programmes, sense, comprehend and act on the world, learn from experience, and adapt over time.
What are the opportunities?
- Algorithmic decision-making can identify and predict potential corruption by digesting diverse data sets (bidders’ networks of relations, locations, use of shell companies, offshore jurisdictions and banking information) to flag risks before a contract is issued; a toy sketch of such a risk model follows this list.
- AI can increase the efficiency and accuracy of due diligence and identify loopholes within regulatory frameworks:
  - Example: In the UK, the Serious Fraud Office is using an AI-powered document review system that is 2,000 times faster than a human lawyer to investigate more quickly, reduce costs and achieve a lower error rate.
  - Example: Researchers in Spain are using AI to identify previously unseen relationships between economic factors, such as rising real estate prices, and corruption cases, making it possible for policy makers to better target preventive measures to corruption risk areas.
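To make the first opportunity above concrete, here is a minimal, hypothetical sketch of a procurement-risk classifier in Python. The features, data and model choice (a scikit-learn random forest) are illustrative assumptions, not a description of any system mentioned on this page; a real tool would draw on company registries, ownership networks and banking records at far greater scale.

```python
# Hypothetical sketch of a procurement corruption-risk classifier.
# Feature names, data and the model choice are illustrative assumptions,
# not a description of any system referenced above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy training data: one row per past tender/bid.
# Features: [shared address with an official (0/1), offshore jurisdiction (0/1),
#            number of links to shell companies, single-bidder tender (0/1)]
X_train = np.array([
    [0, 0, 0, 0],
    [1, 1, 3, 1],
    [0, 1, 1, 0],
    [1, 0, 2, 1],
    [0, 0, 0, 1],
    [1, 1, 4, 1],
])
# Labels: 1 = bid later linked to a confirmed corruption case, 0 = clean.
y_train = np.array([0, 1, 0, 1, 0, 1])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new bid before the contract is issued.
new_bid = np.array([[1, 1, 2, 0]])
risk = model.predict_proba(new_bid)[0, 1]
print(f"Estimated corruption risk for the new bid: {risk:.2f}")
```

The point of the sketch is the workflow, not the model: risk scores computed from linked data sets can be surfaced to procurement officers before a contract is signed, rather than after an investigation begins.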
What are the challenges?
- The predictions and performance of algorithms are constrained by the decisions and values of those who design them, the training data they use, and the goals they are intended to serve. Systematic errors in these systems raise ethical questions about the design of AI and its use as a tool for integrity.
- Because AI systems learn from the data fed to them, their decisions risk being biased, inaccurate or unfair, especially in critical areas such as citizen risk profiling in criminal justice procedures and access to credit and insurance. This may amplify social biases and cause discrimination; a simple bias check is sketched after this list.
- The risk of inequality is exacerbated as wealth and power become concentrated in a few AI companies, raising questions about the potential impacts of AI on income distribution and on public trust in government.
- The difficulty, and sometimes technical impossibility, of understanding how an AI system has reached a given decision undermines the transparency, explainability and interpretability of these systems.
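As a purely illustrative example of the bias risk noted above, the following sketch compares how often a hypothetical risk-profiling model flags people in two groups and computes a disparate impact ratio. The decisions, group labels and the 80% threshold are assumptions for demonstration only; real fairness audits rely on richer metrics and domain-specific legal standards.

```python
# Illustrative bias check on the outputs of a hypothetical risk-profiling model.
# The decisions, group labels and the 80% threshold are assumptions for
# demonstration only; real fairness audits use richer metrics and standards.
import numpy as np

# 1 = individual flagged as high risk by the model, 0 = not flagged.
flagged = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
# Protected attribute (e.g. a demographic group) for the same individuals.
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = flagged[group == "A"].mean()
rate_b = flagged[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Flag rate, group A: {rate_a:.2f}")
print(f"Flag rate, group B: {rate_b:.2f}")
status = "potential disparate impact" if ratio < 0.8 else "within the 80% rule"
print(f"Disparate impact ratio: {ratio:.2f} ({status})")
```

A check of this kind does not fix a biased model, but it can flag when an AI-powered decision process warrants human review before it is used in areas such as criminal justice, credit or insurance.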
View the 2019 OECD Global Anti-Corruption & Integrity Forum agenda: http://www.oecd.org/corruption/integrity-forum/agenda/
See the other Forum topics: http://www.oecd.org/corruption/integrity-forum/tech-topics/