A blended metric for multi-label optimisation and evaluation

Research output: Chapter in Book / Conference Paper › Conference Paper › peer-review

Abstract

In multi-label classification, a large number of evaluation metrics exist — for example Hamming loss, exact match, and Jaccard similarity — among many others. In fact, there remains apparent uncertainty in the multi-label literature about which metrics should be considered, and when and how to optimise them. This has given rise to a proliferation of metrics, with some papers carrying out empirical evaluations under 10 or more different metrics in order to analyse method performance. We argue that further understanding of the underlying mechanisms is necessary. In this paper we tackle the challenge of obtaining a clearer view of evaluation strategies. We present a blended loss function that allows us to evaluate under the properties of several major loss functions with a single parameterisation. Furthermore, we demonstrate the successful use of this metric as a surrogate loss for other metrics. We offer experimental investigation and theoretical backing to demonstrate that optimising this surrogate loss yields better results for several different metrics than optimising those metrics directly. It simplifies, and provides insight into, the task of evaluating multi-label prediction methodologies.
Original language: English
Title of host publication: Proceedings of the European Conference on Machine Learning (ECML 2018), 10-14 September 2018, Dublin, Ireland
Publisher: Springer
Number of pages: 16
Publication status: Published - 2018
Event: European Conference on Machine Learning
Duration: 10 Sept 2018 → …

Publication series

Name
ISSN (Print): 0302-9743

Conference

Conference: European Conference on Machine Learning
Period: 10/09/18 → …

Keywords

  • prediction theory
  • metrics
  • mathematical optimization
  • data processing

