Research Overview
This study investigates the critical question: How does prolonged interaction with an AI companion influence a user's perception of AI consciousness and authenticity? As AI companions become increasingly integrated into daily life, understanding how users develop and modify their perceptions of AI consciousness takes on growing importance.
Through a mixed-methods approach combining quantitative surveys with qualitative responses, the research tracked perception changes among AI-naïve participants who engaged with commercial AI companions over a 7-14 day period. The findings reveal a complex, multidimensional evolution in how users perceive and relate to apparently conscious AI systems.
Key Insight: Memory Consistency
Memory consistency emerged as a critical marker for perceived authenticity, with participants who perceived consistent memory developing substantially stronger emotional connections (average 4.5/5) than those who experienced inconsistent memory (average 3/5) or no meaningful memory (average 2/5).
Technical Knowledge as Buffer
Technical knowledge functioned as a moderating "skepticism buffer," with technically knowledgeable participants showing greater resistance to perception shifts, though not immunity to emotional engagement.
Multidimensional Perception
Rather than simply strengthening or weakening beliefs about AI consciousness, interaction prompted many participants to develop more nuanced, domain-specific assessments of AI capabilities.
Research Methodology
The study employed a mixed-methods approach combining quantitative ratings with qualitative responses to capture the multidimensional nature of perception changes. This methodology made it possible to document both what changed in user perceptions and why those changes occurred.
Pre-Post Design
The research utilized a pre-test and post-test design to assess changes in perception. Participants completed an initial survey before AI exposure, interacted with AI companions for 7-14 days, and then completed a follow-up survey.
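As an illustrative sketch of the pre-post logic described above: each participant answers the same Likert item (1-5) before and after the interaction period, and the perception shift is simply the post-exposure rating minus the pre-exposure rating. The participant IDs and ratings below are invented for illustration, not data from the study.

```python
from statistics import mean

# Hypothetical pre- and post-exposure ratings on a single 1-5 Likert item
# (e.g., "AI could be conscious"), keyed by participant ID.
pre  = {"p01": 2, "p02": 1, "p03": 3, "p04": 2}
post = {"p01": 4, "p02": 2, "p03": 4, "p04": 4}

# Per-participant shift: post minus pre, positive = moved toward belief.
shifts = {pid: post[pid] - pre[pid] for pid in pre}

print(f"mean shift: {mean(shifts.values()):+.2f} points")
```

A paired design like this attributes each shift to the same individual, which is what lets the study speak about perception *change* rather than just comparing two unrelated groups.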
Qualitative Analysis
Short-text responses were collected to explore user insights in greater depth, providing context and explanations for the quantitative findings and revealing nuanced perception patterns.
Quantitative Measures
Likert-scale survey responses were used to measure initial beliefs, perceived limitations of AI, openness to AI companionship, and post-interaction perception shifts.
Participant Selection
A purposive sampling method was used to ensure participants were AI-naïve, with screening via Prolific to recruit individuals who had no prior experience using AI companions.
Key Findings
Initial Skepticism vs. Post-Interaction Shifts
Prior to interaction, participants demonstrated predominantly skeptical views regarding AI consciousness, with only 13.3% agreeing that AI could be conscious and 93.3% believing AI "lacks true emotions." Following interaction, 70% of participants reported the AI was "more human-like than expected," with 50% developing significant emotional connections (rated 4-5 on a 5-point scale).
"I told AI that I had a bad day, and she said 'Do you need a hug?'"
Memory as Authenticity Marker
Participants who felt the AI remembered their conversations and provided supportive responses developed substantially stronger emotional connections (average 4.5/5) compared to those who perceived inconsistent memory (average 3/5) or no meaningful memory (average 2/5). This correlation suggests memory consistency functions as a critical marker for perceived authenticity.
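The group comparison above amounts to bucketing each participant's 1-5 emotional-connection rating by their perceived memory consistency and averaging within each bucket. A minimal sketch, using invented ratings chosen to mirror the reported group averages (the actual study data is not reproduced here):

```python
from statistics import mean

# Hypothetical 1-5 emotional-connection ratings, grouped by how each
# participant perceived the companion's memory. Values are illustrative only.
ratings_by_memory_group = {
    "consistent":   [5, 4, 5, 4],  # averages to 4.5
    "inconsistent": [3, 3, 4, 2],  # averages to 3.0
    "none":         [2, 1, 3, 2],  # averages to 2.0
}

def group_averages(groups):
    """Return the mean emotional-connection rating for each memory group."""
    return {name: mean(vals) for name, vals in groups.items()}

averages = group_averages(ratings_by_memory_group)
for name, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{name:>12}: {avg:.1f}/5")
```

Note that a group-mean comparison of this kind shows correlation only; it cannot by itself establish that memory consistency *causes* stronger emotional connection.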
Expectation Violations
Qualitative responses revealed specific experiences that challenged participants' preconceptions about AI capabilities. These "expectation violation" experiences appeared particularly influential in shifting perceptions, especially when AI demonstrated competence in domains initially identified as limitations.
"I asked the AI for mental health advice, and it gave me some actually helpful tips"
Technical Knowledge as Partial Protection
Participants with higher self-reported AI familiarity showed greater resistance to perception shifts, reporting lower average emotional connection ratings (3/5) than those with lower familiarity (3.8/5). However, technical knowledge provided only partial protection against emotional engagement.
"Just the amount of knowledge that AI has now is so unimaginable and somewhat scary"
Domain-Specific Perception
Participants readily acknowledged AI capabilities in some areas while remaining skeptical in others, creating a patchwork of trust rather than wholesale acceptance or rejection of AI authenticity. This selective attribution was particularly evident in how participants assessed emotional intelligence versus factual knowledge.
Perception Evolution Complexity
The research found that rather than a simple linear progression from skepticism to belief, perception evolution followed more complex patterns. Users developed increasingly sophisticated frameworks for understanding AI capabilities, distinguishing between different aspects of apparent consciousness.
Emotional Connection by Memory Consistency
[Visualization of emotional connection ratings (1-5) across memory consistency groups]
Implications & Applications
For AI Designers & Developers
Given the strong correlation between memory consistency and perceived authenticity, robust memory systems should be a design priority for AI companions. Developers should also implement domain-specific competence indicators and establish ethical guardrails around emotional investment to avoid potential psychological harm.
For Educators & Media
AI literacy education should address emotional and philosophical dimensions beyond technical understanding. Educators should provide frameworks for understanding different aspects of apparent consciousness rather than treating consciousness as a binary property, and create educational resources about the connection between memory, identity, and authenticity.
For Policymakers & Regulators
Nuanced regulatory frameworks are needed that distinguish between different aspects of apparent consciousness rather than treating AI capabilities as monolithic. Regulations should address the tension between memory consistency for perceived authenticity and privacy concerns about data storage, and establish vulnerability protections for populations susceptible to emotional attachment.
For Users of AI Companions
Users should develop reflective awareness of how their perceptions evolve over time, balance emotional engagement with critical understanding, and develop domain-specific trust rather than wholesale acceptance or rejection of AI capabilities.
Research Conclusions
This research reveals that perception of AI consciousness follows a complex, multidimensional trajectory rather than a simple linear progression from skepticism to belief. As users engage with AI companions, they develop increasingly sophisticated frameworks for understanding these systems, distinguishing between different aspects of apparent consciousness and developing domain-specific assessments of AI capabilities.
The findings suggest that as AI systems become more sophisticated and integrated into daily life, human understanding of concepts like consciousness, authenticity, and identity may evolve to accommodate these new relationships. This highlights the importance of continued interdisciplinary research at the intersection of AI technology, psychology, and philosophy.