Better Data Labelling With EMBLEM (and how that Impacts Defect Prediction)
H. Tu, Z. Yu, and T. Menzies, IEEE Transactions on Software Engineering, 48, 278-294 (2022).
DOI: 10.1109/TSE.2020.2986415
Standard automatic methods for recognizing problematic development commits can be greatly improved via the incremental application of human+artificial expertise. In this approach, called EMBLEM, an AI tool first explores the software development process to label the commits that are most problematic. Humans then apply their expertise to check those labels (perhaps resulting in the AI updating the support vectors within its SVM learner). We recommend this human+AI partnership for several reasons. When a new domain is encountered, EMBLEM can learn better ways to label which commits refer to real problems. Also, in studies with nine open source software projects, labelling via EMBLEM's incremental application of human+AI is roughly an order of magnitude cheaper than existing methods (≈ eight times). Further, EMBLEM is very effective: for the data sets explored here, EMBLEM's better labelling significantly improved Popt20 and G-score performance in nearly all the projects studied.
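The loop sketched in the abstract (the AI proposes labels, a human verifies them, and the SVM is refit so its support vectors update) is essentially incremental active learning. Below is a minimal Python sketch of such a human+AI labelling loop, assuming TF-IDF features over commit messages and scikit-learn's linear SVC; the toy commit data, the simulated_human oracle, and the highest-score query strategy are illustrative assumptions, not the paper's exact implementation.

    # Sketch of an EMBLEM-style human+AI labelling loop (assumptions noted above).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC

    commits = [
        "fix null pointer crash in parser",
        "update README badges",
        "patch buffer overflow in decoder",
        "bump version number",
        "repair race condition in scheduler",
        "add contributor names",
    ]
    true_labels = np.array([1, 0, 1, 0, 1, 0])  # 1 = problematic commit (toy data)

    X = TfidfVectorizer().fit_transform(commits)
    labelled = {0: 1, 1: 0}  # seed: a few commits already checked by a human

    def simulated_human(i):
        """Stand-in for the human expert who checks the AI's proposed label."""
        return int(true_labels[i])

    for _ in range(3):  # a few incremental labelling rounds
        idx = list(labelled)
        svm = SVC(kernel="linear").fit(X[idx], [labelled[i] for i in idx])
        unlabelled = [i for i in range(len(commits)) if i not in labelled]
        if not unlabelled:
            break
        # The AI queries the commit it scores as most likely problematic ...
        scores = svm.decision_function(X[unlabelled])
        pick = unlabelled[int(np.argmax(scores))]
        # ... a human verifies (or corrects) that label, and the SVM is
        # refit on the grown label set, updating its support vectors.
        labelled[pick] = simulated_human(pick)

    print({commits[i]: y for i, y in labelled.items()})

In this sketch the cost saving comes from the query step: the human only inspects the few commits the learner asks about, rather than labelling the whole history up front.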