Institutional Repository, Institute of Psychology, Chinese Academy of Sciences
Probing Large Language Models from A Human Behavioral Perspective
Xintong Wang1; Xiaoyu Li2; Xingshan Li3; Chris Biemann1
2024
Conference Name | 1st Workshop on Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning |
Proceedings Title | LREC-COLING 2024 - Workshop Proceedings |
Pages | 1-7 |
Conference Date | 2024 |
Conference Location | Not specified |
Abstract | Large Language Models (LLMs) have emerged as dominant foundational models in modern NLP. However, the understanding of their prediction processes and internal mechanisms, such as feed-forward networks (FFN) and multi-head self-attention (MHSA), remains largely unexplored. In this work, we probe LLMs from a human behavioral perspective, correlating values from LLMs with eye-tracking measures, which are widely recognized as meaningful indicators of human reading patterns. Our findings reveal that LLMs exhibit a prediction pattern similar to that of humans but distinct from that of Shallow Language Models (SLMs). Moreover, as the LLM layers ascend from the middle layers, the correlation coefficients in both FFN and MHSA increase, indicating that the logits within FFN increasingly encapsulate word semantics suitable for predicting tokens from the vocabulary. |
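The correlation analysis described in the abstract can be illustrated with a minimal sketch (not the authors' code): given per-word values extracted from one LLM component at a given layer (e.g., FFN or MHSA outputs) and the corresponding eye-tracking measures for the same words, one can compute a rank correlation per layer. The variable names and numbers below are hypothetical placeholders; Spearman correlation is assumed as the correlation measure.

```python
# Minimal illustrative sketch (hypothetical data, not from the paper):
# correlating per-word values from an LLM layer with eye-tracking measures.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-word values extracted from one LLM layer
# (e.g., FFN logits projected onto the vocabulary), word-aligned
# to the eye-tracking corpus.
llm_values = np.array([0.12, 0.85, 0.33, 0.67, 0.44, 0.91, 0.28])

# Hypothetical eye-tracking measure for the same words
# (e.g., total reading time in milliseconds).
reading_times = np.array([180.0, 420.0, 230.0, 350.0, 260.0, 480.0, 210.0])

# Spearman rank correlation: a layer whose values track human reading
# behavior more closely yields a higher coefficient.
rho, p_value = spearmanr(llm_values, reading_times)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```

Repeating this per layer would yield the layer-wise correlation trend the abstract refers to.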
Keywords | Large Language Models; Interpretation and Understanding; Eye-Tracking; Human Behavioral |
Indexed By | EI |
Language | English |
Document Type | Conference Paper |
Identifier | http://ir.psych.ac.cn/handle/311026/47857 |
Collection | Institute of Psychology, Chinese Academy of Sciences |
Affiliations | 1. Department of Informatics, Universität Hamburg 2. Institute of Psychology, Chinese Academy of Sciences 3. School of Computer Science and Technology, Beijing Institute of Technology |
Recommended Citation (GB/T 7714) | Xintong Wang, Xiaoyu Li, Xingshan Li, et al. Probing Large Language Models from A Human Behavioral Perspective[C], 2024: 1-7. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
Probing Large Langua(1166KB) | Conference Paper | | Restricted Access | CC BY-NC-SA |