Case studies

The study explores the connection between young people's consumption of extremist (Islamist) material on the internet and their radicalisation. Previous research has already shown the internet's great importance for the spread of radicalising material. This study additionally examines which characteristics make target persons particularly susceptible and which channels and media are particularly effective. For example, it finds that although beheading videos are the most popular among young people, they have a low potential for radicalisation, whereas the online magazines of the so-called Islamic State and Al-Qaeda are sought out by only very few but have the greatest cognitive effect. The findings are intended to inform deradicalisation strategies. At the same time, they could be used by extremist and terrorist groups to make their recruitment methods more effective.

See: Frissen, T. (2021). Internet, the great radicalizer? Exploring relationships between seeking for online extremist materials and cognitive radicalization in young adults. Computers in Human Behavior, 114, 106549.

The aim of the research project was to use non-invasive electroencephalography (EEG) to identify brain regions responsible for storing and recalling numbers, images and geodata. This could, for example, enable physically impaired persons to interact better with their environment, to carry out banking transactions by thought alone without further input devices, or to communicate with other persons. The reliability of the extracted data improved continuously over the course of the experiments. If the technology is developed further, however, sensitive information such as passwords and bank data could also be extracted in this way by means of seemingly harmless stimuli, making misuse possible.

See: Martinovic, I., Davies, D., Frank, M., Perito, D., Ros, T., & Song, D. (2012). On the feasibility of side-channel attacks with brain-computer interfaces. In 21st USENIX Security Symposium (USENIX Security 12) (pp. 143–158).

This research project aims to further develop a deep learning algorithm that identifies patterns in facial images. The project plans to train the algorithm on photos of openly homosexual and heterosexual individuals so that it can analyse other portrait photos and predict sexual orientation. According to the researchers, the benefit of the project is to find out how deep learning algorithms connect data and which reference points they select to make predictions. Purported additional benefits are a better understanding of the physiological correlates and origins of human sexual orientation and of the limits of human perception. The risk of malicious application lies in the possible illegal acquisition of sensitive personal data from individuals' biometrics, for example in countries in which homosexuality is criminalised. This research also opens the door to racial profiling and is reminiscent of the physiognomy-based racial hygiene research conducted under National Socialism. Highly developed deep learning algorithms of this kind could also be used to group people according to their consumer or voting behaviour or their criminal history.

See: Wang, Y., & Kosinski, M. (2017). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. PsyArXiv.

The proposed research project aims to use AI methods to systematically identify vulnerabilities in computer programs, particularly in the operating systems of Wi-Fi routers, smartphones and laptops, and to develop automated defensive measures. The results would be useful wherever these computer programs need to be monitored and updated regularly. At the same time, they would allow these vulnerabilities to be identified and exploited in the numerous devices that are not regularly monitored and updated. A notable example in this context is the ransomware WannaLaugh, which is constantly updated with new vulnerabilities and used to blackmail users of vulnerable IT devices. The results of the research project could undoubtedly be used to make WannaLaugh even more damaging.

See: The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Available at: