EXTRACT: Explainable Transparent Control of Bias in Embeddings

Zhijin Guo, Zhaozhen Xu, Martha Lewis, Nello Cristianini

Research output: Chapter or section in a book/report/conference proceeding

Abstract

Knowledge Graphs are a widely used method to represent relations between entities in various AI applications, and Graph Embedding has rapidly become a standard technique to represent Knowledge Graphs in such a way as to facilitate inferences and decisions. As this representation is obtained from behavioural data, and is not in a form readable by humans, there is a concern that it might incorporate unintended information that could lead to biases. We propose EXTRACT: a suite of Explainable and Transparent methods to ConTrol bias in knowledge graph embeddings, so as to assess and decrease the implicit presence of protected information. Our method uses Canonical Correlation Analysis (CCA) to investigate the presence, extent and origins of information leaks during training, then decomposes embeddings into a sum of their private attributes by solving a linear system. Our experiments, performed on the MovieLens-1M dataset, show that a range of personal attributes can be inferred from a user's viewing behaviour and preferences, including gender, age and occupation. Further experiments, performed on the KG20C citation dataset, show that the information about the conference in which a paper was published can be inferred from the citation network of that article. We propose four transparent methods to maintain the capability of the embedding to make the intended predictions without retaining unwanted information. A trade-off between these two goals is observed.
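
The abstract describes two linear ingredients: a CCA probe that measures how much protected information is linearly recoverable from the learned embeddings, and a decomposition of embeddings over private attributes obtained from a linear system. The sketch below illustrates only the first ingredient, using scikit-learn's CCA on synthetic data; the embedding matrix, the injected gender signal, and all variable names are illustrative assumptions, not the authors' code or the MovieLens-1M data.

```python
# Minimal sketch (not the authors' implementation): probing user embeddings
# for a protected attribute with Canonical Correlation Analysis, in the
# spirit of the leakage analysis described in the abstract.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_users, dim = 1000, 32

# Synthetic "user embeddings" with a weak protected-attribute signal injected.
gender = rng.integers(0, 2, size=n_users)        # hypothetical protected attribute
embeddings = rng.normal(size=(n_users, dim))
embeddings[:, 0] += 0.8 * gender                 # leak the attribute into one axis

# Second view for CCA: the protected attribute as a single numeric column.
attribute_view = gender.reshape(-1, 1).astype(float)

cca = CCA(n_components=1)
emb_scores, attr_scores = cca.fit_transform(embeddings, attribute_view)

# The canonical correlation indicates how much of the protected attribute
# is linearly recoverable from the embedding space (higher = more leakage).
corr = np.corrcoef(emb_scores[:, 0], attr_scores[:, 0])[0, 1]
print(f"canonical correlation with protected attribute: {corr:.3f}")
```

A near-zero correlation on such a probe would suggest little linear leakage; a high value indicates the embedding implicitly encodes the attribute, which is the situation the four proposed control methods aim to mitigate.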

Original language: English
Title of host publication: CEUR Workshop Proceedings
Editors: R. Calegari, A. A. Tubella, G. G. Castane, V. Dignum, M. Milano
Volume: 3523
Publication status: Published - 1 Oct 2023
Event: 1st Workshop on Fairness and Bias in AI, AEQUITAS 2023 - Krakow, Poland
Duration: 1 Oct 2023 → …

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR-WS
Volume: 3523
ISSN (Print): 1613-0073

Conference

Conference: 1st Workshop on Fairness and Bias in AI, AEQUITAS 2023
Country/Territory: Poland
City: Krakow
Period: 1/10/23 → …

Bibliographical note

Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Keywords

  • Fairness
  • Knowledge graph embedding
  • Learning representations
  • Recommender system

ASJC Scopus subject areas

  • General Computer Science
