Investigating users' engagement with LLMs: A multi-case analysis study
Supplementary material
Authors
Wijayapala, Mudunkotuwa Hitivedi Vidanelage Lahiru Asanka
Degree
Master of Applied Technologies (Computing)
Grantor
Unitec, Te Pūkenga – New Zealand Institute of Skills and Technology
Date
2024
Supervisors
Kabbar, Eltahir
Barmada, Bashar
Type
Masters Thesis
Keyword
ChatGPT
prompt engineering (computer science)
human-computer interaction
natural language processing (computer science)
user interfaces (computer systems)
digital literacy
large language models
Citation
Wijayapala, M.H.V.L.A. (2024). Investigating users' engagement with LLMs: A multi-case analysis study (Unpublished document submitted in partial fulfilment of the requirements for the degree of Master of Applied Technologies (Computing)). Unitec, Te Pūkenga – New Zealand Institute of Skills and Technology
https://hdl.handle.net/10652/6848
Abstract
RESEARCH QUESTIONS
• What are the impacts of users’ prompts on their experience with ChatGPT?
• How do users with varying levels of education, computing proficiency, and AI experience perform a given task when interacting with ChatGPT?
ABSTRACT
Large Language Models (LLMs), such as OpenAI's ChatGPT, are powerful AI systems that understand and generate human-like text with fluency. They demonstrate remarkable capabilities across many applications, and their integration into everyday life is increasing rapidly. With this rising popularity, ensuring effective user engagement is crucial; however, limited research exists on how user characteristics such as education, computing proficiency, and prior AI experience affect interaction with these systems. This study explores the relationship between user attributes, prompting strategies, and user experience (UX) when interacting with ChatGPT. Specifically, it investigates how users with varying educational backgrounds, levels of technical expertise, and AI experience formulate prompts, and how these factors influence their engagement with LLMs.
The research employed a multi-case study methodology. A two-part experiment was conducted with 31 participants selected using convenience and snowball sampling. In the first part, participants completed a predefined trip-planning task using ChatGPT; in the second, their feedback was collected through a structured survey instrument. The survey captured participants' background characteristics and user experience metrics, and measured user experience constructs identified in the literature, including performance expectancy, effort expectancy, self-efficacy, social influence, and trust in the use of ChatGPT.
Exploratory factor analysis provided strong evidence for the hypothesised factor structure (performance expectancy, effort expectancy, social influence, self-efficacy, and trust), aligning the measured items with their respective theoretical constructs and supporting their validity. Quantitative and qualitative analysis of the user prompts collected in the first part of the experiment formed the core of the study. Quantitative analysis included measures such as prompt count, word count, lexical diversity, readability, and error counts. Qualitative measures were used to identify prompt attributes, prompting techniques, and the overall prompting approach taken.
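For illustration, prompt-level quantitative measures of this kind can be approximated in a few lines of Python. The sketch below is an assumption for illustration only, not the instrument used in the study: it computes word count, lexical diversity as a simple type-token ratio, and Flesch reading ease using a rough vowel-group syllable heuristic. The function name prompt_metrics and the sample prompt are hypothetical.

import re

def prompt_metrics(prompt: str) -> dict:
    """Rough, illustrative prompt measures: word count, lexical diversity, readability."""
    words = re.findall(r"[A-Za-z']+", prompt.lower())
    sentences = [s for s in re.split(r"[.!?]+", prompt) if s.strip()] or [prompt]
    word_count = len(words)
    # Lexical diversity as a simple type-token ratio (unique words / total words).
    lexical_diversity = len(set(words)) / word_count if word_count else 0.0
    # Crude syllable estimate: count vowel groups per word (a heuristic, not a linguistic parser).
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w))) for w in words)
    # Flesch reading ease: 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    flesch = (206.835 - 1.015 * (word_count / len(sentences))
              - 84.6 * (syllables / word_count)) if word_count else 0.0
    return {"word_count": word_count,
            "lexical_diversity": round(lexical_diversity, 3),
            "flesch_reading_ease": round(flesch, 1)}

print(prompt_metrics("Plan a three-day trip to Queenstown. Include budget options and a daily itinerary."))

In practice, dedicated readability libraries or manual coding schemes would give more reliable counts, particularly for syllable estimation and error tallies.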
Participants were grouped by educational level, computing proficiency, and prior AI experience, and their prompting strategies were analysed for clarity, specificity, contextual relevance, and task orientation. The findings reveal that structured, iterative prompting approaches lead to better outcomes, such as higher reported user experience and stronger intention to use ChatGPT in the future. As expected, users with higher computing proficiency and prior AI experience employed more effective prompting strategies, while novices tended to rely on less structured, trial-and-error approaches.
Interestingly, higher levels of education were associated with more precise prompting but also with lower overall user experience, suggesting that more educated users may have higher expectations or a more critical view of AI's limitations. The study also found that prior AI experience positively influenced prompting effectiveness, though it did not always correlate with fewer grammatical errors, indicating that clarity of intent matters more than linguistic precision. Furthermore, expert-level computing proficiency and prior AI experience were associated with significantly reduced trust in ChatGPT.
This research contributes to the growing field of human-AI interaction by providing insights into how user characteristics shape AI engagement. The study's findings emphasise the need for user-friendly interfaces and AI literacy to ensure that a broad range of users can effectively engage with AI systems like ChatGPT.
Copyright holder
Author
Copyright notice
All rights reserved
