Daria Boichenko*
This blog post is a revision of a text written for the seminar “Digital Identities and Socialities” by Philipp Budka for the MA program “CREOLE – Cultural Differences and Transnational Processes” at the University of Vienna.
Over the past two years, with the growing popularity of ChatGPT, I have witnessed various ways in which people around me engage with generative AI models. These interactions encompass summarizing academic texts, requesting recipes or informal medical advice, and fact-checking. Moreover, some people form a quasi-personal relationship with the technology, while others are hesitant to acknowledge its use or even oppose it.
This divergence in perspectives has raised my curiosity about how my peers, particularly fellow master’s students, perceive generative AI. The objective of this research was not only to comprehend the practical usage of AI in academic settings but also to unravel the intricate layers of students’ perceptions regarding concepts such as authorship, authenticity, trust, and awareness of their digital footprints.
Methodological Reflection
This research project was conducted with five master’s students from diverse cultural backgrounds, all of whom are frequent users of generative AI models for various purposes. One of the methodological tools employed was observational filmmaking, which enabled the documentation of participants’ emotions and the shifts in their engagement with ChatGPT, as well as their reactions to the results of a prompt-based experiment.
In this experiment, participants asked ChatGPT to summarize the personal information it ‘knew’ about them, using their private accounts on the devices they most frequently use to access the service (laptops, mobile phones, or both). The process was filmed with an iPhone 13 camera, and the sound was recorded using an external Zoom H5 microphone.
Although the outcomes varied considerably across cases (due to differences in account settings, the frequency with which users cleared their cache, and other contextual factors), each participant received insights that prompted meaningful reflection. Rewatching their own recorded reactions allowed them to revisit and interpret these emotional responses, providing a valuable foundation for rich and insightful discussions during the interviews.

How and Why We Use ChatGPT

The majority of participants, despite their initial skepticism toward the model, were influenced by recommendations from friends or by its portrayal on social media as a “magical tool” that eases learning, which ultimately led them to try it out. In this context, it is worth asking to what extent engagement with ChatGPT truly represents a free choice.
The aspect of social participation increasingly encourages individuals to engage with digital realms (Cruz & Thornham, 2015). Although ChatGPT is regularly criticized and is far from the only AI language model available, it is promoted as a trendy, “shiny thing” and an engaging concept, which exerts a significant influence on the discourse surrounding it.
Expanding on Bourdieu’s concept, Romele’s (2024) notion of “digital habitus” describes a shared cultural perspective on technologies, shaped by visual media, that actively influences global attitudes toward technology. This shared logic fosters an intuitive understanding of AI as natural, expert, and beyond doubt, shaping our perceptions of what can be achieved or desired in the digital landscape.
Initially, many participants used AI only for academic purposes; however, four out of five eventually integrated it into their daily study routines as well as their personal lives. The imaginary ideas surrounding AI, its capabilities, limitations, and modes of thinking play a pivotal role in shaping habitus. Thus, through a process akin to testing its potential, participants’ use of ChatGPT has evolved from a tool for specific tasks into a search engine, travel companion, medical advisor, creative collaborator, and occasionally even a personal journal for emotional analysis and reflection on experiences.
Intellectual Ownership
Despite the extensive use of AI as a cognitive aid, the interviewees maintain a sense of agency and ownership over the final product. As one respondent stated, “I take the core idea or summary, then I add my own thoughts. That is why I consider myself to be the author of the text.” This approach to “intelligent revision” allows participants to incorporate AI into their learning process while simultaneously, in their opinion, maintaining their autonomy as authors, particularly in situations where language barriers exist. In such cases, ChatGPT serves as a tool for articulating pre-existing ideas rather than being the origin of those ideas.
Trust (Issues)
The perception of ChatGPT remains considerably ambiguous, particularly in terms of trust and credibility. While the current discourse surrounding AI centers on ethics and fairness, aiming to prevent individuals or consumers from feeling discriminated against (Shadbolt & Deißner, 2018), some participants explicitly emphasize the need for continuous verification: AI is frequently characterized as “superficial,” “limited,” and “imperfect,” particularly where academic precision or cultural sensitivity is required. As a result, the process of academic writing becomes even more time-consuming and energy-intensive.
Acquiescence to Oversight
It is a distinctive feature of our time that the perception of personal data protection and awareness of the traceability of one’s digital footprint has shifted. Many individuals now view total digital transparency as an unavoidable aspect of modern life, with apparent anonymity and persistent data breaches merely being a part of the digital landscape. As one respondent ironically remarked, “I’m paranoid, but in a world where my phone is an extension of my hand with cameras and microphones, those who need to know already know everything.”
This sort of acceptance also diminishes the perceived significance of personal information, with statements such as “I have nothing to hide” and “No one cares about my data” indicating a normalization of lost control over privacy. This shift in attitudes, and the strategies for adapting to the new reality it reflects, marks the formation of a new digital ethics and new habits: a synthesis of practicality and acquiescence to oversight.
Experiment with Prompts
One of the most interesting findings of this project emerged from a social experiment conducted with participants prior to the interviews. The experiment itself was documented, capturing participants’ original, unfiltered reactions, which ranged from laughter to confusion (see photos 1–5). The details ChatGPT revealed seemed too personal to have been “simply generated” by an algorithm, prompting a more thoughtful consideration of the impact of our digital presence.

Following ethical guidelines, participants were not required to share their results. Instead, I offer an example from my own experience: ChatGPT not only identified my home address, but also inferred my academic path and even mentioned a longtime friend of mine whose birthday we had recently celebrated. I never provided this information directly; rather, it emerged from a web of indirect statements, fragments of queries, references, and the context of my interactions with ChatGPT.
Even in situations where ChatGPT technically does not store personal data in an obvious way, its language generation capabilities demonstrate an uncanny ability to reconstruct personal details based on the narrative of communication. This finding confirms that the digital footprint is not merely a technical imprint, but rather a semiotic structure that reveals one’s personality through linguistic patterns, areas of interest, and cultural codes (Micheli et al., 2018).
Taken together, these patterns suggest that interacting with ChatGPT today is not merely a technical act, but a socially and ethically complex practice. Students constantly navigate a fragile balance between the advantages and fears, trust and skepticism, and the urge for advancement and the need for caution. In this context, ChatGPT serves not only as a tool, but also as a reflection of cultural constructs that embody meanings, values, perspectives, and practices (Bell, 2021).
Perception & Communication Style
Despite being aware of the non-human and non-emotional nature of ChatGPT, participants tended to use polite and “human-like” styles of communication when formulating their prompts. Many respondents emphasized the importance of treating AI with politeness, explaining this as both an instinctual and socially conditioned behavior. One participant stated, “I use the same language as I would with people in real life. I don’t treat it as a non-human that deserves less respect,” reflecting the influence of digital habitus, where communication patterns developed offline often transfer to the digital space.
It is intriguing to note that ChatGPT is often described as a “stranger,” a “tool,” or a “helper,” but not yet as a friend. This ambivalence is evident in the characterization of ChatGPT as not a person, but “someone with good English and someone who is eager to help,” and therefore someone who deserves to be treated politely. This suggests a partial humanization of the model that still lacks full emotional engagement.
Of particular interest is a specific concern about ChatGPT’s compliance. As one interviewee put it: “because of the way it’s trained, if you say ‘please’, it may be interpreted as an option.” Here we see an irrational yet significant fear that the machine might one day refuse human requests. This highlights deep-seated cultural anxieties about the development of AI, even among those who reject the idea of its consciousness. Some participants admitted to adjusting their communication style to include “hello” or “please”; as one put it, “I felt embarrassed not to do so.” At the same time, the friendly and engaging response style of ChatGPT itself is perceived as “ridiculous, inappropriate and even creepy.”
All participants emphasized that they continue to view it solely as a tool, although this may evolve into a more profound relationship in the future, particularly if it takes an embodied form, as in Kazuo Ishiguro’s Klara and the Sun. One student described using ChatGPT as an essential part of our daily lives that “accompanies progress,” comparing it to the use of a phone, the internet, and headphones. However, at this point in its evolution, ChatGPT’s level of humanization, friendliness, and communication is insufficient to bridge the crucial gap between users’ emotional and rational perceptions. The algorithmic essence of the communication remains evident.
Future with AI
Expressing confidence that AI will be a long-term part of our lives, participants admitted that it is a “revolution that we have yet to fully comprehend.” They worried that relying too heavily on these tools could simplify our thinking processes and potentially damage our cognitive abilities. Several noted that outsourcing parts of our critical thinking to machines over the long term can be harmful and may lead to a “loss of some thinking ability or skills.”
In this context, information overload is not simply an abundance of data, but a sense of confusion in the world of algorithmically generated knowledge (Seaver, 2023). ChatGPT emerges as an “information manager”, organizing content, simplifying complexity, and even assisting in decision-making. The information overload not only presents a challenge, but also provides justification for delegating responsibility to algorithms, which, in turn, significantly influences the formation of digital habits and identity. And, as one participant rightly stated, “the most valuable capital in the future will be the ability to focus, and those who can achieve this will be successful and powerful.”
Conclusion
The ambivalent perception of ChatGPT reflects the complexity of human–AI interaction, which is characterized by a combination of collaboration, skepticism, and reliance. I consider it essential to determine which aspects of human life should remain the domain of human intelligence and which can be delegated to AI—distinguishing tasks that require personal dedication, offer joy or challenge, or are simply monotonous. The trajectory of humanity’s future depends on how we define our role in an era increasingly shaped by AI.
References
- Bell, G. (2021). Talking to AI: An anthropological encounter with artificial intelligence. The SAGE Handbook of Cultural Anthropology, 442–458. https://doi.org/10.4135/9781529756449.n25
- Cruz, E. G., & Thornham, H. (2015). Selfies beyond self-representation: The (theoretical) f(r)ictions of a practice. Journal of Aesthetics & Culture, 7(1), 28073. https://doi.org/10.3402/jac.v7.28073
- Micheli, M., Lutz, C., & Büchi, M. (2018). Digital footprints: An emerging dimension of digital inequality. Journal of Information, Communication and Ethics in Society, 16(3), 242–251. https://doi.org/10.1108/jices-02-2018-0014
- MacDougall, D. (2020). Observational filmmaking: A unique practice. Visual Anthropology, 33(5), 452–458. https://doi.org/10.1080/08949468.2020.1824976
- Romele, A. (2024). Digital habitus: A critique of the imaginaries of artificial intelligence. Routledge.
- Seaver, N. (2023). Computing taste: Algorithms and the makers of music recommendation. The University of Chicago Press.
- Shadbolt, N., & Deißner, A. (2018, October 12). AI & anthropology – An interview with Sir Nigel Shadbolt and Alice Deißner. YouTube. https://www.youtube.com/watch?v=qLgyjyrMzgc&t=20s&pp=ygUTYWkgYW5kIGFudGhyb3BvbG9neQ%3D%3D
* Daria Boichenko is a master’s student in the CREOLE Program at the Department of Social-Cultural Anthropology, University of Vienna.
LinkedIn: https://www.linkedin.com/in/daria-boichenko/
