TY - GEN
T1 - Metric Learning for Categorical and Ambiguous Features: An Adversarial Method
AU - Yang, X.
AU - Dong, M.
AU - Guo, Y.
AU - Xue, J.-H.
PY - 2021/2/25
Y1 - 2021/2/25
N2 - Metric learning learns a distance metric from data and has significantly improved the classification accuracy of distance-based classifiers such as k-nearest neighbors. However, metric learning has rarely been applied to categorical data, which are prevalent in health and social sciences but inherently difficult to classify due to high feature ambiguity and small sample size. More specifically, ambiguity arises because the boundaries between ordinal or nominal levels are not always sharply defined. In this paper, we mitigate the impact of feature ambiguity by considering the worst-case perturbation of each instance and propose to learn the Mahalanobis distance through adversarial training. The geometric interpretation shows that our method dynamically divides the instance space into three regions and exploits the information in the “adversarially vulnerable” region. This information, which has not been considered in previous methods, makes our method more suitable than existing methods for small-sized data. Moreover, we establish the generalization bound for a general form of adversarial training. It suggests that the sample complexity rate remains at the same order as that of standard training only if the Mahalanobis distance is regularized with the elementwise 1-norm. Experiments on ordinal and mixed ordinal-and-nominal datasets demonstrate the effectiveness of the proposed method under high feature ambiguity and small sample size.
UR - http://www.scopus.com/inward/record.url?eid=2-s2.0-85103277273&partnerID=MN8TOARS
U2 - 10.1007/978-3-030-67661-2_14
DO - 10.1007/978-3-030-67661-2_14
M3 - Chapter in a published conference proceeding
SN - 9783030676605
BT - Lecture Notes in Computer Science, ECML/PKDD 2020
ER -