Summary
Danil Mikhailov, Ph.D., is a computer scientist and social scientist and a world-leading expert in the application of technology and data innovation for social impact. Danil’s main research interests are in Science and Technology Studies, including social media, AI, and misinformation; technology and AI ethics, particularly where informed by non-Western cultural contexts; the development of methodologies for responsible data science and AI programs, particularly in low- and middle-income countries and settings; and the anthropology of interdisciplinary science and technology teams. A second, unrelated area of research interest is the history, philosophy, and practice of traditional Chinese martial arts.
In parallel with his research, Danil serves as Executive Director of data.org, where he has launched global data-for-social-impact programs in climate, health, and financial inclusion to build digital public goods, train a new generation of purpose-driven data practitioners, and design sustainable data ecosystems across Latin America, Asia, Africa, Europe, and the US.
Source: Data.org webpage
OnAir Post: Danil Mikhailov
About
Biography
Prior to data.org, Danil was Head of Data & Innovation at Wellcome, where he founded and directed the Wellcome Data Labs, an interdisciplinary team of data scientists, software developers, and social scientists creating open-source data tools in support of Wellcome’s mission. Danil has served on the executive and advisory boards of multiple other high-profile academic, nonprofit, and public sector initiatives, including founding and chairing the Digital Strategy Forum for Science, Art, and Culture in the United Kingdom; co-founding the Research on Research Institute, which studies the processes and structures of scientific research; and co-founding and chairing the Global Pandemic Data Alliance, formed under the auspices of the G7 to improve the effectiveness of the world’s pandemic and epidemic data infrastructure.
Danil holds a Ph.D. in Sociology and Communications from Brunel University London, an MA in Philosophy from Birkbeck, University of London, an MA in Chinese Studies from SOAS, University of London, and a BSc in IT & Business Management from the University of York.
Source: Data.org webpage
Web Links
ITDF Essay, April 2025
Respect for Human Expertise and Authority Will Be Undermined, Trust Destroyed, and Utility Will Displace ‘Truth’ at a Time When Mass Unemployment Decimates Identity and Security
Source: ITDF Webpage
“It seems clear from the vantage point of 2025 that AI will be not just a once-in-a-generation but a once-in-a-hundred years transformative technology, on a par with the introduction of computers, electricity or steam power in the scale of its impact on human societies. By 2035 I expect it to fully penetrate and transform the vast majority of our industrial sectors, both destroying jobs and creating new jobs on an enormous scale.
“The issue for most individual human beings will be how to adapt and learn new skills that enable them to live and work side-by-side with AI agents. As some lose their jobs and are left behind, others will experience huge increases in productivity, benefits and creative potential. Sectors such as biomedicine, material sciences and energy will be transformed, unlocking huge latent potential.
“The issue for corporations and governments will be how to manage the asymmetry of the transition. During previous industrial revolutions, although more jobs were eventually created than destroyed and economies expanded, the transition took several decades, during which a generation of workers fell out of the economy, with ensuing social tensions.
“If you were a Luddite out there breaking steam-powered looms in the early 19th century in England to protest industrialization, telling you that there will be more jobs in 20 years’ time for the next generation did not help you feed your family in the here and now. The introduction of AI is likely to cause similar inequities and will increase social tensions, if not managed proactively and systemically. This is particularly so because of the likely vast gulf in experience of the effects of AI between the winners and losers of its industrial and societal transformation.
“In a parallel change at a more fundamental level, AI will upend the Enlightenment consensus and trust in the integrity of the human-expert-led knowledge production process, and fatally undermine the authority of experts of any kind, whether scientists, lawyers, analysts, accountants, or government officials.
“As the majority of information humans consume on a daily basis becomes at least augmented by if not completely created by AI, the prevailing assumption will be that everything could be fake, everything is subjective. This will undermine the belief in the possibility or even desirability of ‘objective’ truth and the value of its pursuit. The only yardstick to judge any given piece of information in this world will be how useful it proves in that moment to help an individual achieve their goal.
“AI will lead society 350 years back into an age of correlative, rather than causal, thinking. Data patterns and the ability to usefully exploit them will be prioritised over the need to fully understand them and what caused them. These two parallel processes of, on the one hand, social tensions caused by losses of jobs and identity for some while others prosper, coupled with the reversal of Enlightenment ways of thinking and the new dominance of utility over truth may feed off each other, in generating waves of misinformation and disinformation that will risk an acute crisis of governance in our societies, just as the promised fruits of AI in terms of new drugs, new energy and new materials are tantalisingly within reach.
“Resolving such a crisis may require a new, post-Enlightenment accommodation that accepts that human beings are far less ‘individual’ than we like to imagine, that we were enmeshed as interdependent nodes in (mis)information systems long before the Internet was invented, that we are less thinking entities than acting and reacting ones, that knowledge has never been as objective as it seemed and will never seem that way again, and that maybe all we have are patterns that we need to navigate together to reach our goals.”
This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report “Being Human in 2035.”