Anil Seth

Summary

My mission is to advance the science of consciousness, and to use its insights for the benefit of society, technology, and medicine.

I am Professor of Cognitive and Computational Neuroscience at the University of Sussex, where I am also Director of the Sussex Centre for Consciousness Science. I am also Co-Director of the Canadian Institute for Advanced Research (CIFAR) Program on Brain, Mind, and Consciousness.

I was the founding Editor-in-Chief of Neuroscience of Consciousness (Oxford University Press), a role in which I served from 2014 to 2024. I currently serve on the Editorial Board of Philosophical Transactions of the Royal Society B and on the Advisory Committees for 1907 Research and for Chile’s Congreso Futuro. I was Conference Chair for the 16th Meeting of the Association for the Scientific Study of Consciousness (ASSC16, 2012) and was an ASSC ‘member at large’ from 2014 to 2022. I previously co-directed the Leverhulme Doctoral Scholarship Programme: From Sensation and Perception to Awareness, and I was an Engagement Fellow with the Wellcome Trust (2016-2020).

My research has been supported by the EPSRC (Leadership Fellowship), the European Research Council (ERC, Advanced Investigator Grant), the Wellcome Trust, and the Canadian Institute for Advanced Research (CIFAR).  Check out these profiles of me and my research in The Observer, The New Statesman, and Quanta.

Source: Website

OnAir Post: Anil Seth

About

Outreach and engagement

My 2021 book Being You: A New Science of Consciousness was a Sunday Times Top 10 Bestseller, a New Statesman Book of the Year, an Economist Book of the Year, a Bloomberg Business Book of the Year, a Guardian Book of the Week, an El Pais Book of the Week, and a Guardian and Financial Times Science Book of the Year. My 2017 main-stage TED talk has more than 15 million views and is one of TED’s most popular science talks. My 2018 conversation with Sam Harris appeared in his recent book of 11 favourite interviews. I edited and co-authored the best-selling 30 Second Brain (Ivy Press, 2014), and I also write the blog NeuroBanter.

For more background, there’s my interview on BBC’s The Life Scientific, with Jim Al-Khalili (2015), and my Aeon essay on consciousness – The Real Problem (a 2016 editor’s pick; see also this video). Other features include the Vice/Motherboard documentary film The Most Unknown, the TED Interview with Chris Anderson, and Feel Better, Live More with Rangan Chatterjee.

Source: Website

Recognition

I am a Clarivate Highly Cited Researcher (2019-2024), a designation that recognizes the top 0.1% of scientists and social scientists in the world by the impact of their publications. In 2023, I was awarded the Royal Society’s Michael Faraday Prize, which is ‘awarded annually to the scientist or engineer whose expertise in communicating scientific ideas in lay terms is exemplary’, and Prospect Magazine listed me as one of their Top 25 Thinkers for 2024. The book Eye Benders (with Clive Gifford) won the 2014 Royal Society Young People’s Book Prize, and the radio play The Sky is Wider won the BBC Radio Drama Awards Best Single Drama (with Linda Marshall-Griffith and Nadia Molinari, 2016). I was awarded the 2019 KidSpirit Perspective prize by a jury of teenage writers, and the 2023 Segerfalk Prize from Lund University.

Sign up to my mailing list to keep informed of new projects. If you are interested in having me speak at your event, please contact me.

Source: Website

Join my mailing list

Join my newsletter for updates on new projects, including news about Being You

Emails are sent directly by me and I’ll never share your personal information.

Source: Website

Web Links

ITDF Essay, April 2025

Dangers Arise as AI Becomes Humanlike. How Do We Retain a Sense of Human Dignity? They Will Become Self-Aware and ‘Inner Lights of Consciousness Will Come On for Them’

Source: ITDF Webpage

“AI large language models [LLMs] are not actually intelligences, they are information-retrieval tools. As such they are astonishing but also fundamentally limited and even flawed. Basically, the hallucinations generated by LLMs are never going away. If you think that buggy search engines fundamentally change humanity, well, you have a weird notion of ‘fundamental.’

“Still, it is indisputable that these systems already exceed human cognition in certain domains and will keep getting better. There will be disruption that makes humans redundant in some ways. It will transform a lot, including much of human labor.

“The deeper and urgent question is: How do we retain a sense of human dignity in this situation? AI can become human-like on the inside as well as on the outside. When AI gets to the point of being super good, ethical issues become paramount.

“I have written in Nautilus about this. Being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is rooted in the fundamental biological drive within living organisms to keep on living. The distinction between consciousness and intelligence is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware – at which the inner lights of consciousness come on for them.

“There are two main reasons why creating artificial ‘consciousness,’ whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With ‘conscious’ AI, things get a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

“The second reason is even more disquieting: The dawn of ‘conscious’ machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel.

“Existential concerns aside, there are more immediate dangers to deal with as AI has become more humanlike in its behavior. These arise when AI systems give humans the unavoidable impression that they are conscious, whatever might be going on under the hood. Human psychology lurches uncomfortably between anthropocentrism – putting ourselves at the center of everything – and anthropomorphism – projecting humanlike qualities into things on the basis of some superficial similarity. It is the latter tendency that’s getting us in trouble with AI.

“Future language models won’t be so easy to catch out. They may give us the seamless and impenetrable impression of understanding and knowing things, regardless of whether they do. As this happens, we may also become unable to avoid attributing consciousness to them, too, suckered in by our anthropomorphic bias and our inbuilt inclination to associate intelligence with awareness.

“Systems like this will pass the so-called Garland Test, an idea which has passed into philosophy from Alex Garland’s perspicuous and beautiful film ‘Ex Machina.’ This test reframes the classic Turing test – usually considered a test of machine intelligence – as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

“This will land society into dangerous new territory. Our ethical attitudes will become contorted as well. When we feel that something is conscious – and conscious like us – we will come to care about it. We might value its supposed well-being above other actually conscious creatures such as non-human animals. Or perhaps the opposite will happen. We may learn to treat these systems as lacking consciousness, even though we still feel they are conscious. Then we might end up treating them like slaves – inuring ourselves to the perceived suffering of others. Scenarios like these have been best explored in science-fiction series such as ‘Westworld,’ where things don’t turn out very well for anyone.

“In short, trouble is on the way whether emerging AI merely seems conscious or actually is conscious. We need to think carefully about both possibilities, while being careful not to conflate them.

“Accelerated research is needed in social sciences and the humanities to clarify the implications of machines that merely seem conscious. And AI research should continue, too, both to aid in our attempts to understand biological consciousness and to create socially positive AI. We need to walk the line between benefiting from the many functions that consciousness offers while avoiding the pitfalls. Perhaps future AI systems could be more like oracles, as the AI expert Yoshua Bengio has suggested: systems that help us understand the world and answer our questions as truthfully as possible, without having goals – or selves – of their own.”


This essay was written in January 2025 in reply to the question: Over the next decade, what is likely to be the impact of AI advances on the experience of being human? How might the expanding interactions between humans and AI affect what many people view today as ‘core human traits and behaviors’? This and nearly 200 additional essay responses are included in the 2025 report Being Human in 2035.

More Information

Wikipedia

Anil Kumar Seth (born 11 June 1972) is a British neuroscientist and professor of Cognitive and Computational Neuroscience at the University of Sussex. A proponent of materialist explanations of consciousness,[1] he is currently amongst the most cited scholars on the topics of neuroscience and cognitive science globally.[2]

Seth holds a BA (promoted to an MA per tradition) in natural science from King’s College, Cambridge, and a PhD in computer science from the University of Sussex. Seth has published over 100 scientific papers and book chapters, and is the editor-in-chief of the journal Neuroscience of Consciousness.[3] He is a regular contributor to New Scientist, The Guardian[4] and the BBC,[5] and writes the blog NeuroBanter.[6]

He is related to the Indian novelist and poet Vikram Seth.

Early life and education

Seth was born in Oxford[7] and grew up in Letcombe Regis,[8] a village in rural South Oxfordshire. His father, Bhola Seth, obtained a BSc from Allahabad University in 1945, before migrating from India to the United Kingdom to study engineering at Cardiff. Bhola Seth subsequently obtained a PhD in Mechanical Engineering at Sheffield, was a research scientist at the Esso Research Centre in Abingdon, and won the veterans’ world doubles title in badminton in 1976. His mother, Ann Delaney, came from Yorkshire.[9]

Seth went to school at King Alfred’s Academy in Wantage. He has degrees in Natural Sciences (BA/MA, King’s College, Cambridge, 1994), Knowledge-Based Systems (M.Sc., Sussex, 1996) and Computer Science and Artificial Intelligence (D.Phil./Ph.D., Sussex, 2001). He was a postdoctoral and associate fellow at The Neurosciences Institute in San Diego, California (2001–2006).[citation needed]

Career

Since 2010 Seth has been co-director (with Hugo Critchley) of the Sussex Centre for Consciousness Science,[10] and editor-in-chief of Neuroscience of Consciousness.[3] He was conference chair of the 16th meeting of the Association for the Scientific Study of Consciousness and a continuing ‘member at large’,[11] and is on the steering group and advisory board of the Human Mind Project.[12] He was president of the Psychology Section of the British Science Association in 2017.[13][14]

Publications

Seth has published over 100 scientific papers and book chapters, and is the editor-in-chief of the journal Neuroscience of Consciousness.[3] He is a regular contributor to New Scientist, The Guardian[4] and the BBC,[5] and writes the blog NeuroBanter.[6] He also consulted for the popular science book Eye Benders, which won the 2014 Royal Society Young People’s Book Prize.[15] An introductory essay on consciousness, “The Real Problem”, has been published on Aeon – a 2016 Editor’s Pick. Seth was included in the 2019 Highly Cited Researchers List published by Clarivate Analytics.[16]

Books

  • Being You: A New Science of Consciousness (Faber and Faber, 2021)[17] – author
  • Brain Twisters (Ivy Press, 2015)[18] – consultant
  • 30 Second Brain (Ivy Press, 2014)[19] – editor and co-author
  • Eye Benders (Ivy Press, 2013)[20] – consultant
  • Modelling Natural Action Selection (Cambridge University Press, 2011)[21] – editor and co-author

Popularisation of science

Seth appeared in the 2018 Netflix documentary The Most Unknown[22] on scientific research directed by Ian Cheney.

See also

  • User illusion, an understanding of consciousness similar to Seth’s

References

  1. ^ “Being You by Anil Seth – the construction of consciousness”. www.ft.com. Retrieved 20 January 2024.
  2. ^ Lane, Vicky Trendall. “Five University of Sussex academics among top 1% of most cited researchers in the world”. The University of Sussex. Retrieved 20 January 2024.
  3. ^ a b c “Editorial Board”. academic.oup.com. Neuroscience of Consciousness. Retrieved 9 February 2018.
  4. ^ a b “Anil Seth”. The Guardian. Retrieved 9 February 2018.
  5. ^ a b “Anil Seth on consciousness, The Life Scientific”. BBC.co.uk. BBC Radio 4. Retrieved 9 February 2018.
  6. ^ a b “About”. NeuroBanter. 18 January 2014. Retrieved 9 February 2018.
  7. ^ “Anil Seth, D.Phil”. University of Sussex. Retrieved 31 August 2024.
  8. ^ Burnett, Thomas (20 June 2014). “Probing the Mystery of Consciousness”. John Templeton Foundation. Retrieved 31 August 2024.
  9. ^ Anil Seth, “Bhola Seth Obituary“, The Guardian, 3 July 2013. Accessed 21 August 2019.
  10. ^ “Anil Seth at the Sackler Centre for Consciousness Science”. sussex.ac.uk. University of Sussex. Retrieved 9 February 2018.
  11. ^ “Association of Scientific Studies of Consciousness”. theassc.org. Retrieved 9 February 2018.
  12. ^ “Advisory Board”. Human Mind Project. 30 January 2015. Retrieved 9 February 2018.
  13. ^ “Psychology Section”. British Science Association. Retrieved 9 February 2018.
  14. ^ “Who we are”. sites.google.com. BSA Psychology. Retrieved 9 February 2018.
  15. ^ GrrlScientist (17 November 2014). “Royal Society Young People’s Book Prize winner announced”. The Guardian. Retrieved 9 February 2018.
  16. ^ Vowles, Neil. “University celebrates record year for professors in global highly cited researchers list”. University of Sussex. Retrieved 27 November 2019.
  17. ^ “Being You – Anil Seth”. Retrieved 5 September 2021.
  18. ^ Gifford, Clive (2015). Brain Twisters: The Science of Feeling and Thinking. Seth, Anil. Lewes: Ivy. ISBN 9781782402046. OCLC 899705249.
  19. ^ 30-Second Brain: The 50 Most Mind-Blowing Ideas in Neuroscience, Each Explained in Half a Minute. Seth, Anil; Bekinschtein, Tristan. New York: Metro Books. 2014. ISBN 9781435147843. OCLC 875565756.
  20. ^ Gifford, Clive (2013). Eye Benders. Seth, Anil. Lewes: Ivy. ISBN 9781782400844. OCLC 861317419.
  21. ^ Seth, Anil (2011). Modelling Natural Action Selection. Prescott, Tony J.; Bryson, Joanna J. Cambridge: Cambridge University Press. ISBN 9781107000490. OCLC 934350929.
  22. ^ “The Most Unknown (2018)”. www.imdb.com. Retrieved 9 June 2021.
