Berlin – The use of artificial intelligence (AI) to make forecasts and support decisions is becoming increasingly widespread. But with the growing number of applications, criticism is growing too. In the recent past there have been repeated scandals over systematic discrimination by algorithms on the basis of criteria such as race, gender or age, for example in predicting the probability of re-offending after release from jail to inform judicial decisions (ProPublica, 23 May 2016). For consumer advocates and end users, the lack of transparency of these models is a concern: the underlying decision-making processes are difficult to comprehend, so errors or biases are detected late or not at all. There is also an ethical dimension: algorithms are making increasingly important decisions for people, yet even their developers often find them difficult to understand. A supposed dilemma arises from the widespread assumption that only with more data, higher complexity and the resulting opacity can these models achieve good performance. The apparent conclusion is a choice between transparent models, which are comprehensible but perform worse, and opaque models, which perform well but must be trusted blindly, along with the information provided by their manufacturers. However, this is by no means the case, says Prof. Dr. Gerd Gigerenzer, former Max Planck director, co-founder of Simply Rational, and one of the most influential psychologists and decision researchers worldwide.
Good decisions despite fewer data
"The algorithms used today by credit agencies, for example, to check creditworthiness often contain hundreds of variables and are very complex. Neither consumers nor end users understand how sometimes serious decisions, e.g. about the granting of credit, are made by these models. Our algorithms can achieve the same predictive power with a fraction of the variables. At the same time, they are inherently transparent and easy to understand." Gerd Gigerenzer and his team locate their approach in a branch of AI called "Augmented Intelligence". Simply Rational's consultants and scientists combine the latest findings from decision science with artificial intelligence to develop algorithms that people can understand intuitively. The resulting transparency allows the best of human and machine to be combined, and decision processes are sustainably improved. The first applications, in the field of intensive care medicine, are currently being implemented at the Charité – Universitätsmedizin Berlin. "But this is only the tip of the iceberg," says Prof. Florian Artinger, also co-founder of Simply Rational. "Our approach adds value wherever a market is highly dynamic or decisions need to be transparent."
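To illustrate the kind of model meant here: one family of transparent decision rules associated with Gigerenzer's research programme is the fast-and-frugal tree, which asks a handful of yes/no questions in a fixed order, with each question able to end in a decision. The sketch below is purely illustrative; the cues and thresholds for a credit decision are hypothetical assumptions, not a real scoring model used by Simply Rational or any credit agency.

```python
def credit_fft(applicant: dict) -> str:
    """A minimal fast-and-frugal tree for a hypothetical credit decision.

    Each level checks one cue and may exit immediately, so the whole
    rule can be read, audited and explained on a single page.
    """
    # Cue 1: a past default immediately ends in rejection.
    if applicant["has_defaulted_before"]:
        return "reject"
    # Cue 2: a very low debt-to-income ratio immediately ends in approval.
    if applicant["debt_to_income"] < 0.2:
        return "approve"
    # Cue 3 (final level): stable employment decides the remaining cases.
    return "approve" if applicant["years_employed"] >= 2 else "reject"


print(credit_fft({"has_defaulted_before": False,
                  "debt_to_income": 0.35,
                  "years_employed": 5}))  # approve
```

Unlike a model with hundreds of weighted variables, every decision this rule produces can be traced to the single cue that triggered it, which is what makes such models easy to contest and to correct.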
Bringing Augmented Intelligence to the application
As a consequence, the decision has been made to establish an advisory board. With Markus Jüttner and Johan Zevenhuizen, Simply Rational has gained two extremely important partners with many years of experience in financial services and compliance. For Simply Rational, the first behavioural and data science spin-off of the Max Planck Society, the establishment of the advisory board represents the next step in the professionalization of its core business areas of finance and compliance. Dr. Niklas Keller, Managing Partner, says: "Markus and Johan not only bring expertise and an extensive network, they understand the relevance of our scientific approach. Thus, they can help us to identify completely new business potential, for example in the application of Augmented Intelligence."
„Algorithms from artificial intelligence have an advantage wherever there is a large amount of data. People, on the other hand, make very good decisions in situations where there is relatively little data available or when circumstances change quickly. Especially in dynamic business areas such as the financial sector, it is of great importance to effectively link human and machine,“ says Johan Zevenhuizen.
In the context of compliance, Markus Jüttner particularly appreciates the evidence-based, context-driven approach and the transparency and robustness of these models: "Corporate decisions – not only legal or compliance decisions – must be justifiable and comprehensible for all parties involved. The path is the goal: the justification, the arguments and the weighing of considerations that often determine legality or integrity are of central importance. This cannot be left to a 'black box', i.e. classical algorithms from artificial intelligence, where one does not understand how they arrive at a result." Whether this new approach can establish itself will be shown by its success in concretely improving decision-making processes in practice. A higher level of transparency, however, should be welcomed by most decision makers who deal, or have to deal, with models from artificial intelligence.
Keywords: Augmented Intelligence; Finance; Compliance; Machine Learning; Transparent Algorithms; Artificial Intelligence; Max-Planck-Institute