TY - GEN
T1 - EmoGen
T2 - Late breaking papers from the 27th International Conference on Human-Computer Interaction, HCI International 2025
AU - Guo, Jiayuan
AU - Lu, Zhaolin
AU - Guo, Jinyuan
AU - Chen, Weihan
AU - Yuan, Tian
AU - Zhang, Yue
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2026.
PY - 2026
Y1 - 2026
N2 - Emojis offer an intuitive and affective means of emotional communication in digital environments. However, existing AI-driven music generation systems often rely on text-based inputs, which may be cognitively demanding and less precise in conveying emotions. This study presents EmoGen, an emoji-based interactive application that supports emotional expression in generative music. Users select emojis across five categories: emotion, scene, object, style, and instrument, which are mapped to text prompts for customized music generation with audiovisual feedback. A mixed-method evaluation with eight participants (aged 21–48) was conducted, using the System Usability Scale (SUS) and thematic analysis of interview data. Results showed high emotional alignment between user inputs and generated music (mean = 7.12–7.88/10) and strong usability (SUS = 80.9). Qualitative feedback highlighted use cases such as emotional journaling, event-based music creation, and mood-based self-care. The evaluation results suggest that EmoGen supports active emotional exploration through symbolic musical interaction, externalizing emotions into personalized soundscapes. This work demonstrates how intuitive emoji-based interfaces can enhance emotional resonance and contribute to emotional interaction design in generative music systems.
AB - Emojis offer an intuitive and affective means of emotional communication in digital environments. However, existing AI-driven music generation systems often rely on text-based inputs, which may be cognitively demanding and less precise in conveying emotions. This study presents EmoGen, an emoji-based interactive application that supports emotional expression in generative music. Users select emojis across five categories: emotion, scene, object, style, and instrument, which are mapped to text prompts for customized music generation with audiovisual feedback. A mixed-method evaluation with eight participants (aged 21–48) was conducted, using the System Usability Scale (SUS) and thematic analysis of interview data. Results showed high emotional alignment between user inputs and generated music (mean = 7.12–7.88/10) and strong usability (SUS = 80.9). Qualitative feedback highlighted use cases such as emotional journaling, event-based music creation, and mood-based self-care. The evaluation results suggest that EmoGen supports active emotional exploration through symbolic musical interaction, externalizing emotions into personalized soundscapes. This work demonstrates how intuitive emoji-based interfaces can enhance emotional resonance and contribute to emotional interaction design in generative music systems.
KW - Emoji
KW - Generative music
KW - Human computer interaction
KW - Musical expression
KW - System usability
KW - User experience
UR - https://www.scopus.com/pages/publications/105028365361
U2 - 10.1007/978-3-032-13164-5_3
DO - 10.1007/978-3-032-13164-5_3
M3 - Conference contribution
AN - SCOPUS:105028365361
SN - 9783032131638
T3 - Lecture Notes in Computer Science
SP - 41
EP - 57
BT - HCI International 2025 - Late Breaking Papers - 27th International Conference on Human-Computer Interaction, HCII 2025, Proceedings
A2 - Schrepp, Martin
A2 - Rauterberg, Matthias
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 22 June 2025 through 27 June 2025
ER -