TY - GEN
T1 - Automated classification of actions in bug reports of mobile apps
AU - Liu, Hui
AU - Shen, Mingzhu
AU - Jin, Jiahao
AU - Jiang, Yanjie
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/7/18
Y1 - 2020/7/18
AB - When users encounter problems with mobile apps, they may submit such problems to developers as bug reports. To facilitate the processing of bug reports, researchers have proposed approaches that automatically validate the reported issues according to the reproduction steps specified in the bug reports. Although such approaches have achieved high success rates in reproducing the reported issues, they often rely on a predefined vocabulary to identify and classify actions in bug reports. Such a manually constructed vocabulary and classification, however, have significant limitations. It is challenging for the vocabulary to cover all potential action words because users may describe the same action with different words. In addition, classifying actions solely based on the action words could be inaccurate because the same action word, appearing in different contexts, may have different meanings and thus belong to different action categories. To this end, in this paper we propose an automated approach, called MaCa, to identify and classify action words in mobile apps' bug reports. For a given bug report, MaCa first identifies action words based on natural language processing. For each of the resulting action words, MaCa extracts its contexts, i.e., its enclosing segment, the associated UI target, and the type of its target element, by both natural language processing and static analysis of the associated app. The action word and its contexts are then fed into a machine-learning-based classifier that predicts the category of the given action word in the given context. To train the classifier, we manually labelled 1,202 action words from 525 bug reports associated with 207 apps. Our evaluation results on manually labelled data suggested that MaCa was accurate, with accuracy varying from 95% to 96.7%. We also investigated to what extent MaCa could further improve existing approaches (i.e., Yakusu and ReCDroid) in reproducing bug reports. Our evaluation results suggested that integrating MaCa into these approaches significantly improved the success rates of ReCDroid and Yakusu by 22.7% (= (69.2% - 56.4%)/56.4%) and 22.9% (= (62.7% - 51%)/51%), respectively.
KW - Bug report
KW - Classification
KW - Mobile Testing
KW - Test Case Generation
UR - http://www.scopus.com/inward/record.url?scp=85088925979&partnerID=8YFLogxK
U2 - 10.1145/3395363.3397355
DO - 10.1145/3395363.3397355
M3 - Conference contribution
AN - SCOPUS:85088925979
T3 - ISSTA 2020 - Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis
SP - 128
EP - 140
BT - ISSTA 2020 - Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis
A2 - Khurshid, Sarfraz
A2 - Pasareanu, Corina S.
PB - Association for Computing Machinery, Inc
T2 - 29th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2020
Y2 - 18 July 2020 through 22 July 2020
ER -