Summary
Dr. Evelyne Tauchnitz’s expertise focuses on how digital technologies can be employed to build, support, and maintain peace through non-violent methods of conflict transformation. She is employed as a Senior Research Fellow (Post-Doc) at the Lucerne Graduate School in Ethics (LGSE), University of Lucerne, where she is writing her ‘Habilitation’ on PeaceTech – exploring the impact of the digital transformation on peace and war from an ethical and human rights point of view.
Evelyne is also a Research Associate at the Centre for Technology and Global Affairs (CTGA), University of Oxford, where she is co-coordinating the Global PeaceTech project. She holds a PhD in International Relations with a specialization in Political Science from the Graduate Institute of International and Development Studies (IHEID) in Geneva and was a Visiting Fellow (Post-Doc) at the Department of Political and Social Sciences (SPS), European University Institute (EUI) in Florence, Italy, where she conducted research on international norms and negotiation strategies.
Source: Lucerne Webpage
OnAir Post: Evelyne Tauchnitz
About
CV
Previously, she studied political science, economics and law at the University of Bern. During her PhD, she was employed at the Institute of Public Law at the University of Bern, Switzerland, where she was the principal researcher of a multi-year research project funded by the Swiss National Science Foundation (SNSF) exploring the nexus between political discourses, the legitimacy of state violence and human rights.
She has experience in qualitative research and quantitative data analysis (including mixed methodology) and has undertaken extensive field research in Ethiopia, Mexico and India. She was a Young Global Changer (YGC) at the Think 20 Global Solutions Summit in Berlin, Germany (2017) and received a grant from the Falling Walls Foundation, Berlin, Germany (2016). Apart from academia, she has experience working in various employment and independent consultancy positions as an expert for government (Swiss Parliamentary Services), civil society (Intermon-Oxfam, ‘Theater for Peace’ and others), and international organizations (such as Unicef).
Source: Lucerne Webpage
Research
Research interests
• Peace and conflict research
• Digital change and new technologies
• Human Rights
• International Relations & Global Governance
• The study of norms, discourses and negotiation strategies
• Empirical and normative methods & theories
Research projects
PeaceTech – Building Peace in the Digital Age
Web Links
ITDF Essay, April 2025
We May Lose Our Human Unpredictability in a World in Which Algorithms Dictate the Terms of Engagement; These Systems Are Likely to Lead to the Erosion of Freedom and Authenticity
Source: ITDF Webpage
“Advances in artificial intelligence (AI) tied to brain-computer interfaces (BCIs) and sophisticated surveillance technologies, among other applications, will deeply shape the social, political and economic spheres of life by 2035, offering new possibilities for growth, communication and connection. But they will also present serious questions about what it means to be human in a world increasingly governed by technology. At the heart of these questions is the challenge of preserving human dignity, freedom and authenticity in a society where our experiences and actions are ever more shaped by algorithms, machines and digital interfaces.
“The Erosion of Freedom and Authenticity
“AI and BCIs will undoubtedly revolutionize how we interact, allowing unprecedented levels of communication, particularly through the direct sharing of thoughts and emotions. In theory, these technologies could enhance empathy and mutual understanding, breaking down the barriers of language and cultural differences that often divide us. By bypassing or mitigating these obstacles, AI could help humans forge more-immediate and powerful connections. Yet, the closer we get to this interconnected future among humans and AI, the more we risk sacrificing authenticity itself.
“The vulnerability inherent in human interaction – the messiness of emotions, the mistakes we make, the unpredictability of our thoughts – is precisely what makes us human. When AI becomes the mediator of our relationships, those interactions could become optimized, efficient and emotionally calculated. The nuances of human connection – our ability to empathize, to err, to contradict ourselves – might be lost in a world in which algorithms dictate the terms of engagement.
“This is not simply a matter of convenience or preference. It is a matter of freedom. For humans to act morally, to choose the ‘good’ in any meaningful sense, they must be free to do otherwise. Freedom is not just a political or social ideal – it is the very bedrock of moral capability. If AI directs our actions and our choices, shaping our behavior based on data-driven predictions of what is ‘best,’ we lose our moral agency. We become mere executors of efficiency, devoid of the freedom to choose, to err and to evolve both individually and collectively through trial and error.
“Only when we are free – truly free to make mistakes, to diverge from the norm, to act irrationally at times – can we become the morally responsible individuals that Kant envisioned. This capacity for moral autonomy also demands that we recognize the equal freedom of others as being as valuable as our own. Surveillance, AI-driven recommendations, manipulations or algorithms designed to rely on patterns of what is defined as ‘normal’ may threaten this essential freedom. They create subtle pressures to conform, whether through peer pressure and corporate and state control on social media, or in the future perhaps even through the silent monitoring of our thoughts via brain-computer interfaces. The implications of such control are profound: if we are being constantly watched, or even influenced in ways we are unaware of, our capacity to act freely – to choose differently, to be morally responsible – could be deeply compromised.
“The Limits of Perfection: Life is Rife With Unpredictable Change
“This leads to another crucial point: the role of error in human evolution. Life, by its very nature, is about change – about learning, growing and evolving. The capacity to make mistakes is essential to that process. In a world where AI optimizes everything for perfection, efficiency and predictability, we risk losing the space for evolution, both individually and collectively. If everything works ‘perfectly’ and is planned in advance, the unpredictability and the surprise that give life its richness will be lost. Life would stagnate, devoid of the spark that arises from the unforeseen, the irrational, and yes, even the ‘magical.’
“A perfect world, with no room for error, would not only be undesirable – it would kill life itself. Change requires room for failure, for unpredictability, for the unknown. If we surrender ourselves too completely to AI and its rational, efficient directives, we might be trading away something invaluable: the very essence of life as a process of continuous growth and change as manifested through lived human experiences. While AI may help us become ‘better’ persons, more rational, less aggressive and more cooperative, the question remains whether something of our human essence would be lost in the process – something that is not reducible to rationality or efficiency, but is bound up with our freedom, our mistakes, our vulnerabilities and our ability to grow from them.
“The Need for a Spiritual Evolution
“The key to navigating the technological revolution lies not just in technical advancement but in spiritual evolution. If AI is to enhance rather than diminish the human experience, we must foster a deeper understanding of what it truly means to be human. This means reconnecting with our lived experience of being alive – not as perfectly rational, perfectly cooperative beings, but as imperfect, vulnerable individuals who recognize the shared fragility of our human existence. It is only through this spiritual evolution, grounded in the recognition of our shared vulnerability and humanity, that we can ensure AI and related technologies are used for good – respecting and preserving the values that define us as free, moral and evolving beings.”
This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”