Multi-task Learning by Maximizing Statistical Dependence

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model, or even the same parameter types, for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.
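The abstract's core idea is to score the statistical dependence between the outputs of two task estimators, regardless of their parametric form. A standard way to realise such a pairwise dependence score is the Hilbert-Schmidt Independence Criterion (HSIC); the record above does not spell out the measure, so the following is a minimal illustrative sketch assuming an HSIC-style score, with `rbf_kernel` and `hsic` as hypothetical helper names rather than the authors' actual implementation.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Gaussian kernel over pairwise squared distances of 1-D outputs.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate between two task estimators'
    outputs evaluated on the same n inputs: trace(K H L H) / (n-1)^2."""
    n = x.shape[0]
    K = rbf_kernel(x, sigma)
    L = rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
z = rng.normal(size=200)
dependent = hsic(z, np.sin(z))                 # outputs share structure
independent = hsic(z, rng.normal(size=200))    # unrelated outputs
print(dependent, independent)
```

Because the score only touches the estimators' outputs, one estimator could be a neural network and the other a Gaussian process; maximising such a score across task pairs is one plausible reading of "maximizing statistical dependence" in the title.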
Language: English
Title of host publication: Proceedings of CVPR
Status: Accepted/In press - 19 Feb 2018
Event: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Duration: 18 Jun 2018 - 22 Jun 2018

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Period: 18/06/18 - 22/06/18


Cite this

Multi-task Learning by Maximizing Statistical Dependence. / Alami Mejjati, Youssef; Cosker, Darren; Kim, Kwang In.

Proceedings of CVPR. 2018.


Alami Mejjati, Y, Cosker, D & Kim, KI 2018, Multi-task Learning by Maximizing Statistical Dependence. in Proceedings of CVPR. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 18/06/18.
@inproceedings{26a7c24e738348edbc3a2312ebe9f07b,
title = "Multi-task Learning by Maximizing Statistical Dependence",
abstract = "We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model, or even the same parameter types, for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.",
author = "{Alami Mejjati}, Youssef and Darren Cosker and Kim, {Kwang In}",
year = "2018",
month = "2",
day = "19",
language = "English",
booktitle = "Proceedings of CVPR",

}

TY - GEN

T1 - Multi-task Learning by Maximizing Statistical Dependence

AU - Alami Mejjati, Youssef

AU - Cosker, Darren

AU - Kim, Kwang In

PY - 2018/2/19

Y1 - 2018/2/19

N2 - We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model, or even the same parameter types, for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.

AB - We present a new multi-task learning (MTL) approach that can be applied to multiple heterogeneous task estimators. Our motivation is that the best task estimator could change depending on the task itself. For example, we may have a deep neural network for the first task and a Gaussian process for the second task. Classical MTL approaches cannot handle this case, as they require the same model, or even the same parameter types, for all tasks. We tackle this by considering task-specific estimators as random variables. Then, the task relationships are discovered by measuring the statistical dependence between each pair of random variables. By doing so, our model is independent of the parametric nature of each task, and is even agnostic to the existence of such a parametric formulation. We compare our algorithm with existing MTL approaches on challenging real-world ranking and regression datasets, and show that our approach achieves comparable or better performance without knowing the parametric form.

M3 - Conference contribution

BT - Proceedings of CVPR

ER -