Artificial Intelligence methods are used in a growing number of areas, and their effects increasingly touch individual citizens directly. Many FinTech companies offer solutions in which AI is used to assess credit applications or to price insurance products. In Denmark, automatic algorithms have been used to grant social benefits. In the United States, the COMPAS algorithm has been used to support judges in assessing the risk of recidivism. In some schools, student results are used for the automatic evaluation of teachers.
Unfortunately, in many cases it turns out that these algorithms make mistakes that harm certain social groups. Historical bias in the data translates into discrimination on the basis of gender, age, or race.
This has led to many new legal regulations at the level of individual companies, local governments, countries, and unions. The number of these regulations and the pace of change make it very difficult to analyse the facts and to compare the regulations with one another. This is a serious difficulty for AI developers, legislators, and AI users alike.
The aim of this project is to extend institutional grammar for monitoring and analysing changes in different types of legal documents related to automatic algorithms and AI. The project involves the development of AI tools to track and interpret changes in AI regulations. The key element is to build upon an institutional grammar that allows for efficient human and automatic processing.
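To make the idea concrete, here is a minimal sketch of how a single regulative statement might be encoded in institutional-grammar (ADICO: Attribute, Deontic, aIm, Condition, Or else) form. The class, field names, and the GDPR-style example provision are illustrative assumptions for this sketch, not part of the project's actual tooling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IGStatement:
    """One regulative statement in institutional-grammar (ADICO) form."""
    attribute: str                  # A: whose behaviour is regulated
    deontic: Optional[str]          # D: must / may / must not (None for norms without one)
    aim: str                        # I: the regulated action
    condition: str                  # C: when/where the statement applies
    or_else: Optional[str] = None   # O: sanction for non-compliance, if any

# Hypothetical encoding of a GDPR-style breach-notification provision
# (field values are illustrative, not a verbatim legal text):
stmt = IGStatement(
    attribute="data controller",
    deontic="must",
    aim="notify the supervisory authority of a personal data breach",
    condition="within 72 hours of becoming aware of it",
    or_else="administrative fine",
)

def is_obligation(s: IGStatement) -> bool:
    # Statements with a 'must'/'shall' deontic are obligations;
    # a full classifier would also distinguish permissions and prohibitions.
    return s.deontic in {"must", "shall"}

print(is_obligation(stmt))  # prints: True
```

Once statements are stored in such a structured form, both humans and automatic tools can query them, e.g. to list all obligations placed on a given actor across successive versions of a regulation.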
dr hab. inż. Przemysław Biecek (explainable artificial intelligence) https://scholar.google.pl/citations?user=Af0O75cAAAAJ
dr hab. Bartosz Pieliński (institutional grammar) https://scholar.google.pl/citations?user=hnWiaVEAAAAJ
Good to have:
If you are interested, write an email to p.biecek and b.pielinski at uw.edu.pl.