Enhancing user experience and trust in advanced LLM-based conversational agents

  • Yuanyuan Xu Tongji University, Shanghai 200092, China
  • Weiting Gao Amazon, Seattle, WA 98121, USA
  • Yining Wang Bentley University, Waltham, MA 02452, USA
  • Xinyang Shan Tongji University, Shanghai 200092, China
  • Yin-Shan Lin Northeastern University, Boston, MA 02115, USA
Article ID: 1467
Keywords: large language models (LLMs); user experience (UX); conversational agents; transparency; data security

Abstract

This study explores the enhancement of user experience (UX) and trust in advanced Large Language Model (LLM)-based conversational agents such as ChatGPT. The research involves a controlled experiment comparing participants who used an LLM interface with participants who communicated with a human consultant through a traditional messaging app. The results indicate that LLM-based agents offer higher satisfaction and lower cognitive load, demonstrating the potential for LLMs to revolutionize applications ranging from customer service to healthcare consultancy and shopping assistance. Despite these positive findings, the study also highlights significant concerns regarding transparency and data security. Participants expressed a need for a clearer understanding of how LLMs process information and make decisions, and the perceived opacity of these processes can hinder user trust, especially in sensitive applications such as healthcare. Robust data protection measures are likewise crucial to ensure user privacy and foster trust in these systems. To address these issues, future research and development should focus on enhancing the transparency of LLM operations and strengthening data security protocols. Providing users with clear explanations of how their data is used and how decisions are made can build greater trust, and specialized applications may require tailored solutions to meet specific user expectations and regulatory requirements. In conclusion, while LLM-based conversational agents have demonstrated substantial advantages in improving user experience, addressing transparency and security concerns is essential for their broader acceptance and effective deployment. By focusing on these areas, developers can create more trustworthy and user-friendly AI systems, paving the way for their integration into diverse fields and everyday use.
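
As a purely illustrative aside, a between-groups comparison of the kind described above could be analyzed along the following lines. The sketch assumes hypothetical 1-7 satisfaction ratings and a Welch's t-test; the variable names and data are illustrative assumptions and do not reproduce the study's actual data or analysis code.

    # Illustrative sketch only: hypothetical ratings, not the study's data or analysis code.
    import numpy as np
    from scipy import stats

    # Hypothetical 1-7 satisfaction ratings for the two experimental conditions.
    llm_agent_ratings = np.array([6, 5, 7, 6, 6, 5, 7, 6, 5, 6])
    human_consultant_ratings = np.array([5, 4, 6, 5, 5, 4, 5, 6, 4, 5])

    # Welch's independent-samples t-test (no equal-variance assumption).
    t_stat, p_value = stats.ttest_ind(llm_agent_ratings, human_consultant_ratings, equal_var=False)

    print(f"LLM condition mean: {llm_agent_ratings.mean():.2f}")
    print(f"Human condition mean: {human_consultant_ratings.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
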

References

[1] Zhuang Y, Yu Y, Wang K, et al. ToolQA: A dataset for LLM question answering with external tools. Advances in Neural Information Processing Systems. 2024; 36.

[2] Panda S, Kaur N. Exploring the viability of ChatGPT as an alternative to traditional chatbot systems in library and information centers. Library Hi Tech News. 2023; 40(3): 22-25. doi: 10.1108/lhtn-02-2023-0032

[3] Valtolina S, Barricelli BR, Di Gaetano S. Communicability of traditional interfaces VS chatbots in healthcare and smart home domains. Behaviour & Information Technology. 2019; 39(1): 108-132. doi: 10.1080/0144929x.2019.1637025

[4] Stoeckli E, Dremel C, Uebernickel F, et al. How affordances of chatbots cross the chasm between social and traditional enterprise systems. Electronic Markets. 2019; 30(2): 369-403. doi: 10.1007/s12525-019-00359-6

[5] Topsakal O, Akinci TC. Creating Large Language Model Applications Utilizing LangChain: A Primer on Developing LLM Apps Fast. International Conference on Applied Engineering and Natural Sciences. 2023; 1(1): 1050-1056. doi: 10.59287/icaens.1127

[6] Yao Y, Duan J, Xu K, et al. A survey on large language model (LLM) security and privacy: The Good, The Bad, and The Ugly. High-Confidence Computing. 2024; 4(2): 100211. doi: 10.1016/j.hcc.2024.100211

[7] Allouch M, Azaria A, Azoulay R. Conversational Agents: Goals, Technologies, Vision and Challenges. Sensors. 2021; 21(24): 8448. doi: 10.3390/s21248448

[8] Wahde M, Virgolin M. Conversational agents: Theory and applications. World Scientific Publishing Company. 2022: 497-544.

[9] Moore RJ, Szymanski MH, Arar R, et al. Studies in Conversational UX Design. Springer International Publishing; 2018. doi: 10.1007/978-3-319-95579-7

[10] Yang X, Aurisicchio M, Baxter W. Understanding Affective Experiences with Conversational Agents. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. doi: 10.1145/3290605.3300772

[11] Moore RJ, Arar R, Ren GJ, et al. Conversational UX Design. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems. doi: 10.1145/3027063.3027077

[12] Kim CY, Lee CP, Mutlu B. Understanding Large-Language Model (LLM)-powered Human-Robot Interaction. In: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. pp. 371-380. doi: 10.1145/3610977.3634966

[13] Abbasiantaeb Z, Yuan Y, Kanoulas E, et al. Let the LLMs Talk: Simulating Human-to-Human Conversational QA via Zero-Shot LLM-to-LLM Interactions. In: Proceedings of the 17th ACM International Conference on Web Search and Data Mining; 2024. doi: 10.1145/3616855.3635856

[14] Motta I, Quaresma M. Increasing Transparency to Design Inclusive Conversational Agents (CAs): Perspectives and Open Issues. In: Proceedings of the 5th International Conference on Conversational User Interfaces; 2023. pp. 1-4. doi: 10.1145/3571884.3604304

[15] Hasal M, Nowaková J, Ahmed Saghair K, et al. Chatbots: Security, privacy, data protection, and social aspects. Concurrency and Computation: Practice and Experience. 2021; 33(19). doi: 10.1002/cpe.6426

[16] Stieglitz S, Hofeditz L, Brünker F, et al. Design principles for conversational agents to support Emergency Management Agencies. International Journal of Information Management. 2022; 63: 102469. doi: 10.1016/j.ijinfomgt.2021.102469

[17] Van Brummelen J, Kelleher M, Tian MC, et al. What Do Children and Parents Want and Perceive in Conversational Agents? Towards Transparent, Trustworthy, Democratized Agents. In: Proceedings of the 22nd Annual ACM Interaction Design and Children Conference. doi: 10.1145/3585088.3589353

[18] Rosruen N, Samanchuen T. Chatbot Utilization for Medical Consultant System. In: Proceedings of the 2018 3rd Technology Innovation Management and Engineering Science International Conference (TIMES-iCON). doi: 10.1109/times-icon.2018.8621678

[19] Godse NA, Deodhar S, Raut S, et al. Implementation of Chatbot for ITSM Application Using IBM Watson. In: Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation (ICCUBEA). doi: 10.1109/iccubea.2018.8697411

[20] Rohman MA, Subarkah P. Design and Build Chatbot Application for Tourism Object Information in Bengkulu City. TECHNOVATE: Journal of Information Technology and Strategic Innovation Management. 2024; 1(1): 28-34. doi: 10.52432/technovate.1.1.2024.28-34

[21] Chen J, Theeramunkong T, Supnithi T, et al. Knowledge and Systems Sciences. Springer Singapore; 2017. doi: 10.1007/978-981-10-6989-5

[22] Piau A, Crissey R, Brechemier D, et al. A smartphone Chatbot application to optimize monitoring of older patients with cancer. International Journal of Medical Informatics. 2019; 128: 18-23. doi: 10.1016/j.ijmedinf.2019.05.013

[23] Hassenzahl M, Diefenbach S, Göritz A. Needs, affect, and interactive products—Facets of user experience. Interacting with Computers. 2010; 22(5): 353-362. doi: 10.1016/j.intcom.2010.04.002

[24] Lamas D, Loizides F, Nacke L, et al. Human-Computer Interaction—INTERACT 2019. Springer International Publishing; 2019. doi: 10.1007/978-3-030-29390-1

[25] Berni A, Borgianni Y. Making Order in User Experience Research to Support Its Application in Design and Beyond. Applied Sciences. 2021; 11(15): 6981. doi: 10.3390/app11156981

[26] Yusof N, Hashim NL, Hussain A. A Conceptual User Experience Evaluation Model on Online Systems. International Journal of Advanced Computer Science and Applications. 2022; 13(1). doi: 10.14569/ijacsa.2022.0130153

[27] Redmiles EM. User Concerns & Tradeoffs in Technology-facilitated COVID-19 Response. Digital Government: Research and Practice. 2020; 2(1): 1-12. doi: 10.1145/3428093

[28] Williams G, Tushev M, Ebrahimi F, et al. Modeling user concerns in Sharing Economy: the case of food delivery apps. Automated Software Engineering. 2020; 27(3-4): 229-263. doi: 10.1007/s10515-020-00274-7

[29] Kim TS, Lee Y, Chang M, et al. Cells, Generators, and Lenses: Design Framework for Object-Oriented Interaction with Large Language Models. In: Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology; 2023. doi: 10.1145/3586183.3606833

[30] Wu T, Terry M, Cai CJ. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In: Proceedings of the CHI Conference on Human Factors in Computing Systems; 2022. doi: 10.1145/3491102.3517582

[31] Glikson E, Woolley AW. Human Trust in Artificial Intelligence: Review of Empirical Research. Academy of Management Annals. 2020; 14(2): 627-660. doi: 10.5465/annals.2018.0057

[32] Gillath O, Ai T, Branicky MS, et al. Attachment and trust in artificial intelligence. Computers in Human Behavior. 2021; 115: 106607. doi: 10.1016/j.chb.2020.106607

[33] Ryan M. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics. 2020; 26(5): 2749-2767. doi: 10.1007/s11948-020-00228-y

[34] Omrani N, Rivieccio G, Fiore U, et al. To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts. Technological Forecasting and Social Change. 2022; 181: 121763. doi: 10.1016/j.techfore.2022.121763

[35] Bedué P, Fritzsche A. Can we trust AI? An empirical investigation of trust requirements and guide to successful AI adoption. Journal of Enterprise Information Management. 2021; 35(2): 530-549. doi: 10.1108/jeim-06-2020-0233

[36] Vereschak O, Bailly G, Caramiaux B. How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies. Proceedings of the ACM on Human-Computer Interaction. 2021; 5(CSCW2): 1-39. doi: 10.1145/3476068

[37] Toreini E, Aitken M, Coopamootoo K, et al. The relationship between trust in AI and trustworthy machine learning technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. doi: 10.1145/3351095.3372834

[38] Ferrario A, Loi M. How Explainability Contributes to Trust in AI. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. doi: 10.1145/3531146.3533202

[39] von Eschenbach WJ. Transparency and the Black Box Problem: Why We Do Not Trust AI. Philosophy & Technology. 2021; 34(4): 1607-1622. doi: 10.1007/s13347-021-00477-0

[40] Kaplan AD, Kessler TT, Brill JC, et al. Trust in Artificial Intelligence: Meta-Analytic Findings. Human Factors: The Journal of the Human Factors and Ergonomics Society. 2021; 65(2): 337-359. doi: 10.1177/00187208211013988

[41] Emaminejad N, Maria North A, Akhavian R. Trust in AI and Implications for AEC Research: A Literature Analysis. In: Proceedings of the Computing in Civil Engineering 2021. doi: 10.1061/9780784483893.037

[42] Luo B, Lau RYK, Li C, et al. A critical review of state-of-the-art chatbot designs and applications. WIREs Data Mining and Knowledge Discovery. 2021; 12(1). doi: 10.1002/widm.1434

[43] Chaves AP, Gerosa MA. How Should My Chatbot Interact? A Survey on Social Characteristics in Human–Chatbot Interaction Design. International Journal of Human–Computer Interaction. 2020; 37(8): 729-758. doi: 10.1080/10447318.2020.1841438

[44] Zhou L, Gao J, Li D, et al. The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics. 2020; 46(1): 53-93.

[45] Rahman AM, Mamun AA, Islam A. Programming challenges of chatbot: Current and future prospective. In: Proceedings of the 2017 IEEE Region 10 Humanitarian Technology Conference (R10-HTC). doi: 10.1109/r10-htc.2017.8288910

[46] Skjuve M, Følstad A, Fostervold KI, et al. My Chatbot Companion - a Study of Human-Chatbot Relationships. International Journal of Human-Computer Studies. 2021; 149: 102601. doi: 10.1016/j.ijhcs.2021.102601

[47] Følstad A, Araujo T, Law ELC, et al. Future directions for chatbot research: an interdisciplinary research agenda. Computing. 2021; 103(12): 2915-2942. doi: 10.1007/s00607-021-01016-7

[48] Thorat SA, Jadhav V. A Review on Implementation Issues of Rule-based Chatbot Systems. SSRN Electronic Journal. 2020. doi: 10.2139/ssrn.3567047

[49] Kumar R, Ali MM. A review on chatbot design and implementation techniques. Int J Eng Technol. 2020; 7(11): 2791-2800.

[50] Nagarhalli TP, Vaze V, Rana NK. A Review of Current Trends in the Development of Chatbot Systems. In: Proceedings of the 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). doi: 10.1109/icaccs48705.2020.9074420

[51] Shingte K, Chaudhari A, Patil A, et al. Chatbot Development for Educational Institute. SSRN Electronic Journal. 2021. doi: 10.2139/ssrn.3861241

[52] Casas J, Tricot MO, Abou Khaled O, et al. Trends & Methods in Chatbot Evaluation. In: Proceedings of the 2020 International Conference on Multimodal Interaction. doi: 10.1145/3395035.3425319

[53] Santos GA, de Andrade GG, Silva GRS, et al. A Conversation-Driven Approach for Chatbot Management. IEEE Access. 2022; 10: 8474-8486. doi: 10.1109/access.2022.3143323

[54] Abdellatif A, Costa D, Badran K, et al. Challenges in Chatbot Development. In: Proceedings of the 17th International Conference on Mining Software Repositories; 2020. doi: 10.1145/3379597.3387472

[55] Ericsson KA, Simon HA. How to study thinking in everyday life: Contrasting think-aloud protocols with descriptions and explanations of thinking. Mind, Culture, and Activity. 1998; 5(3): 178-186.

Published
2024-08-17
How to Cite
Xu, Y., Gao, W., Wang, Y., Shan, X., & Lin, Y.-S. (2024). Enhancing user experience and trust in advanced LLM-based conversational agents. Computing and Artificial Intelligence, 2(2), 1467. https://doi.org/10.59400/cai.v2i2.1467
Section
Article