Status: Work in Progress (Working Paper). This research is an advanced expansion of the foundational methodology established in my Master’s Thesis.
📌 Research Overview
While Large Language Models (LLMs) show remarkable capabilities, using them as reliable scientific simulators requires a framework that can consistently replicate human cognitive reasoning. To address this gap, this research develops a ‘Cognitive Externalization Framework’ that makes synthetic agents’ reasoning processes explicit and verifiable.
⚙️ Key Methodologies
- Theory-Integrated Prompting: Systematically embeds the Theory of Planned Behavior (TPB) into the agent’s reasoning structure (Attitude, Subjective Norm, Perceived Behavioral Control).
- External Interaction Modeling: Simulates how agents’ final decisions change through informational interactions with ‘Expert Agents’ or ‘Opinion Leaders.’
- Fidelity Validation: Develops a rigorous verification protocol to ensure that the synthetic agents’ reasoning traces align with empirical behavioral data.
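As an illustration of the first methodology, theory-integrated prompting can be sketched as a prompt-assembly step that makes each TPB construct an explicit, auditable field. This is a minimal hypothetical sketch, not the paper’s actual implementation; all names (`TPBProfile`, `build_tpb_prompt`) and the example persona are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class TPBProfile:
    """Hypothetical container for an agent's three TPB constructs."""
    attitude: str           # the agent's evaluation of the behavior
    subjective_norm: str    # perceived social pressure from relevant others
    perceived_control: str  # perceived ease or difficulty of performing it


def build_tpb_prompt(persona: str, behavior: str, profile: TPBProfile) -> str:
    """Assemble a reasoning prompt in which each TPB construct appears
    explicitly, so the model's intermediate reasoning can be externalized
    and later checked against empirical behavioral data."""
    return (
        f"You are {persona}. Decide whether to {behavior}.\n"
        f"Reason step by step through each construct:\n"
        f"1. Attitude: {profile.attitude}\n"
        f"2. Subjective Norm: {profile.subjective_norm}\n"
        f"3. Perceived Behavioral Control: {profile.perceived_control}\n"
        f"State your behavioral intention and your final decision."
    )


# Example usage with an invented persona (illustrative only)
prompt = build_tpb_prompt(
    persona="a mid-career farmer considering a new irrigation subsidy",
    behavior="adopt the subsidized irrigation technology",
    profile=TPBProfile(
        attitude="believes the technology would improve yields",
        subjective_norm="neighboring farmers remain skeptical",
        perceived_control="uncertain about upfront installation costs",
    ),
)
print(prompt)
```

Structuring the prompt this way keeps each TPB construct separable in the agent’s output, which is what makes downstream fidelity validation against behavioral data tractable.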
This research is currently being prepared for submission to an international journal specializing in AI-driven social simulation.