Toward Cognitive Processing Elucidation via Transformers

By David Hamilton | October 28, 2025

1. Introduction
The Transformer architecture, the basis of most Large Language Models (LLMs), has
enabled remarkable linguistic capability. Far beyond simple token prediction,
Transformer-based LLMs achieve low perplexity on held-out text and, at times, even seem to
converse intelligently. Some of the emergent properties of these models present
as human-like.
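
For context, perplexity is the exponentiated average negative log-likelihood a model assigns to held-out tokens, so lower values indicate better next-token prediction. A minimal sketch of the computation (in Python, using hypothetical log-probabilities rather than figures from the poster):

```python
import math

def perplexity(token_log_probs):
    # Perplexity = exp(mean negative log-likelihood per token).
    # token_log_probs: natural-log probabilities a model assigned to
    # each observed token of a held-out sequence.
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical log-probs for a 4-token sequence.
print(perplexity([-0.5, -1.2, -0.3, -0.9]))  # ~2.07; lower is better
```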

In the relatively new field of LLM Psychometrics, psychometric techniques developed for humans are used to characterize human-like linguistic traits exhibited by LLMs. Trait examples include (but are not limited to) the Big Five personality dimensions: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
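
As an illustration of the approach, one common protocol administers Likert-scale inventory items to a model and aggregates the scored responses per trait. A minimal sketch, where ask_model is a hypothetical stand-in for any chat-completion call and the item is adapted from public Big Five inventories:

```python
def administer_item(ask_model, statement, reverse_keyed=False):
    # Present one Likert-scale item and parse a 1-5 response.
    prompt = (
        "Rate how well this statement describes you on a scale of 1 "
        "(strongly disagree) to 5 (strongly agree). Reply with one digit.\n"
        f"Statement: {statement}"
    )
    reply = ask_model(prompt).strip()
    score = int(reply[0]) if reply and reply[0] in "12345" else None
    if score is not None and reverse_keyed:
        score = 6 - score  # flip reverse-keyed items before aggregating
    return score

# Example (an Extraversion item); a trait score averages many such items:
# score = administer_item(my_llm_call, "I am the life of the party.")
```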

Given that Transformers have been implemented, with comparable performance, on virtual biological substrates (i.e., spiking rather than conventional artificial neural networks), the implication is that Transformer-based LLMs can be used, at least by rough analogy, to model human linguistic processing.
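
To make the substrate distinction concrete: a spiking neuron integrates its input over time and communicates through discrete spike events, rather than the continuous activations of a conventional artificial neuron. A minimal leaky integrate-and-fire sketch (parameters are illustrative, not drawn from the poster):

```python
def lif_neuron(input_current, dt=0.001, tau=0.02, v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire: the membrane potential v leaks toward rest,
    # integrates input, and emits a spike whenever it crosses threshold.
    v, spikes = 0.0, []
    for i_t in input_current:
        v += (dt / tau) * (-v + i_t)  # leaky integration step
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset  # reset after each spike
        else:
            spikes.append(False)
    return spikes

# Constant suprathreshold drive yields a regular spike train.
print(sum(lif_neuron([1.5] * 200)))  # spike count over 200 ms
```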

PDF of the poster to be presented at the November 2025 Society for Neuroscience conference.

Download (PDF)
