TY - CONF
T1 - Text Input Through Swipe Gestures Based Personalized AI Agent
AU - Qi, Xiangyu
AU - Weng, Dongdong
AU - Hao, Jie
AU - Li, Zihao
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - Text input is a common task in virtual reality (VR) interactive systems, but it can suffer from low input efficiency and high task load when users interact with VR hardware and applications. Advances in artificial intelligence interaction technologies offer new approaches to these challenges. In this paper, we propose a swipe-gesture-based text input method built on a personalized AI agent framework, combining portable devices (e.g., smartphones) with VR input and incorporating user profile information, input habits, and conversational intent. By integrating the GPT-3.5 model to train a personalized AI agent, we emphasize understanding and responding to human behavior and capabilities from the agent's perspective, enabling text prediction grounded in specific contexts. The keyboard layout is a disk divided into 8 equal regions: the outer ring is subdivided into key areas containing letters, and the inner circle serves as the input buffer. By resolving word ambiguities from user input and leveraging the context-awareness and text-prediction capabilities of large language models, the system generates complete sentences from keywords. This reduces the number of manual inputs required, improving text input efficiency and enhancing the overall user experience.
KW - Human-computer interaction (HCI)
KW - LLM
KW - Text Input
KW - Virtual Reality
UR - http://www.scopus.com/inward/record.url?scp=105004792545&partnerID=8YFLogxK
U2 - 10.1109/ICARCE63054.2024.00088
DO - 10.1109/ICARCE63054.2024.00088
M3 - Conference contribution
AN - SCOPUS:105004792545
T3 - Proceedings - 2024 3rd International Conference on Automation, Robotics and Computer Engineering, ICARCE 2024
SP - 435
EP - 439
BT - Proceedings - 2024 3rd International Conference on Automation, Robotics and Computer Engineering, ICARCE 2024
A2 - Xu, Jinyang
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 3rd International Conference on Automation, Robotics and Computer Engineering, ICARCE 2024
Y2 - 17 December 2024 through 18 December 2024
ER -