Multi-task Learning by Maximizing Statistical Dependence

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model or even the same parameter types for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.
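The record does not spell out which dependence measure is used; a standard kernel-based choice for measuring statistical dependence between two sets of predictions is the (biased) empirical Hilbert-Schmidt Independence Criterion (HSIC). The sketch below is an illustrative instantiation only, not the paper's actual algorithm: the function names (`rbf_kernel`, `hsic`), the bandwidth, and the synthetic "task estimator" outputs are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix over a 1-D vector of predictions.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC between two samples of scalar predictions.

    HSIC = trace(K H L H) / (n - 1)^2, where H centers the kernel matrices.
    Larger values indicate stronger statistical dependence.
    """
    n = len(x)
    K = rbf_kernel(x, sigma)
    L = rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Synthetic outputs of three hypothetical task estimators.
rng = np.random.default_rng(0)
f1 = rng.normal(size=100)                    # task 1 predictions
f2 = 0.9 * f1 + 0.1 * rng.normal(size=100)   # a related task
f3 = rng.normal(size=100)                    # an unrelated task

# The related pair should score markedly higher dependence.
print(hsic(f1, f2), hsic(f1, f3))
```

Under this sketch, `hsic(f1, f2)` exceeds `hsic(f1, f3)`, which is the signal an MTL objective could maximize to couple related estimators regardless of their parametric form.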
Language: English
Title of host publication: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages: 3465-3473
Number of pages: 9
Volume: 2018
ISBN (Electronic): 978-1-5386-6420-9
DOIs: 10.1109/CVPR.2018.00365
Status: E-pub ahead of print - 17 Dec 2018
Event: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Duration: 18 Jun 2018 - 22 Jun 2018

Publication series

Name: Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition)
Publisher: IEEE
ISSN (Print): 2575-7075

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Period: 18/06/18 - 22/06/18

Cite this

Alami Mejjati, Y., Cosker, D., & Kim, K. I. (2018). Multi-task Learning by Maximizing Statistical Dependence. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (Vol. 2018, pp. 3465-3473). (Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition)). https://doi.org/10.1109/CVPR.2018.00365

@inproceedings{26a7c24e738348edbc3a2312ebe9f07b,
title = "Multi-task Learning by Maximizing Statistical Dependence",
abstract = "We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model or even the same parameter types for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.",
author = "{Alami Mejjati}, Youssef and Darren Cosker and Kim, {Kwang In}",
year = "2018",
month = "12",
day = "17",
doi = "10.1109/CVPR.2018.00365",
language = "English",
isbn = "978-1-5386-6421-6",
volume = "2018",
series = "Proceedings (IEEE Computer Society Conference on Computer Vision and Pattern Recognition)",
publisher = "IEEE",
pages = "3465--3473",
booktitle = "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition",
}
