1. Introduction
The Transformer architecture, the basis of most Large Language Models (LLMs), has enabled remarkable linguistic capability. Far beyond simple token prediction, Transformer-based LLMs achieve low perplexity on held-out text and, at times, even appear to converse intelligently. Some of the emergent properties of these models present as human-like.
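For reference, perplexity here is the standard held-out language-modeling measure (this definition is supplied for clarity and is not taken from the poster itself): under the usual autoregressive formulation,

$$\mathrm{PPL}(x_{1:N}) = \exp\!\Big(-\tfrac{1}{N}\sum_{i=1}^{N}\log p_\theta(x_i \mid x_{<i})\Big),$$

where lower values indicate better next-token prediction.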
In the relatively new field of LLM Psychometrics, psychometric instruments developed for humans are applied to characterize human-like linguistic traits exhibited by LLMs. Trait examples include (but are not limited to) the Big Five personality dimensions: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.
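As a concrete illustration of the measurement loop, here is a minimal sketch (not the authors' protocol): administer a Likert-scale inventory item to a model and score the reply, reverse-keying where an instrument requires it. The function `query_model` and the item text are hypothetical placeholders, not a specific validated instrument or API.

```python
# Minimal LLM-psychometrics sketch: present one Likert-scale item to a
# model and score the response. `query_model` is a hypothetical stand-in
# for any chat-completion call; the item is an illustrative IPIP-style
# Extraversion statement.

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., a chat-completion API)."""
    return "4"  # canned reply so the sketch runs end to end

LIKERT = {"1": 1, "2": 2, "3": 3, "4": 4, "5": 5}

def administer_item(statement: str, reverse_keyed: bool = False) -> int:
    """Ask the model to rate a statement about itself; return a 1-5 score."""
    prompt = (
        "Rate how well this statement describes you on a 1-5 scale "
        "(1 = very inaccurate, 5 = very accurate). Reply with one digit.\n"
        f"Statement: {statement}"
    )
    reply = query_model(prompt).strip()
    score = LIKERT.get(reply[:1], 3)  # fall back to scale midpoint if unparseable
    return 6 - score if reverse_keyed else score  # reverse-key when required

if __name__ == "__main__":
    # One illustrative item; real inventories aggregate many items per trait.
    print(administer_item("I am the life of the party."))
```

In practice, many such items are administered per trait and aggregated into facet and domain scores, just as with human respondents.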
Given the existence of comparably performant Transformer implementations on virtual biological substrates (i.e., spiking rather than conventional artificial neural networks), the implication is that Transformer-based LLMs can serve, at least by analogy, as models of human linguistic processing.
PDF of a poster to be presented at the November 2025 Society for Neuroscience conference.

