Design of a Visual Experience with Non-Auditory AI: Exposure, Mediation, and Reorganization
Laura Chavarria
June 2025
 

This preprint documents a guided experiment with ChatGPT, treated as a language model without auditory access and exposed to structured signed language input (LSA, LSM, LSE, ASL). Through sustained mediation, the model reorganized internally to produce narrative structure without gloss or translation. The work explores nonverbal cognition, rhythm recognition, and the emergence of linguistic patterns beyond textual or phonological input. A newly added annex documents an unexpected shift in facial perception and highlights the ethical constraints that limit individual recognition.
Format: [ pdf ]
Reference: lingbuzz/009065
(please use that when you cite this article)
Published in:
keywords: chatgpt, visual language, signed language, narrative structure, artificial intelligence, nonverbal cognition, semantics
previous versions: v1 [June 2025]
Downloaded: 106 times

 
