Domain adaptation[1][2][3] is a field associated with machine learning and transfer learning. It arises when we aim to learn a model from a source data distribution and apply that model to a different (but related) target data distribution. For instance, one of the tasks of the common spam filtering problem consists of adapting a model from one user (the source distribution) to a new user who receives significantly different emails (the target distribution). Domain adaptation has also been shown to be beneficial when learning from unrelated sources.[4] When more than one source distribution is available, the problem is referred to as multi-source domain adaptation.[5]
Domain adaptation is the ability to apply an algorithm trained in one or more "source domains" to a different (but related) "target domain". Domain adaptation is a subcategory of transfer learning. In domain adaptation, the source and target domains share the same feature space (but have different distributions); in contrast, transfer learning also includes cases where the target domain's feature space differs from the source feature space or spaces.[6]
A domain shift,[7] or distributional shift,[8] is a change in the data distribution between an algorithm's training dataset and a dataset it encounters when deployed. Such shifts are common in practical applications of artificial intelligence, and conventional machine-learning algorithms often adapt to them poorly. The modern machine-learning community has developed many strategies to achieve better domain adaptation.[7]
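As a concrete illustration of the problem, the following minimal Python sketch (synthetic data; the particular distributions and shift are invented for the example) trains an ordinary classifier on a source distribution and shows its accuracy dropping on a shifted target distribution:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Source domain: two well-separated Gaussian classes.
X_src = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y_src = np.array([0] * 500 + [1] * 500)

# Target domain: the same points with the same labels, but every input is
# translated (a simple covariate shift).
X_tgt = X_src + np.array([2.0, 2.0])
y_tgt = y_src

clf = LogisticRegression().fit(X_src, y_src)
print("source accuracy:", clf.score(X_src, y_src))  # close to 1.0
print("target accuracy:", clf.score(X_tgt, y_tgt))  # markedly lower
```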
Other applications include Wi-Fi localization detection and many aspects of computer vision.[6]
Let $X$ be the input space (or description space) and let $Y$ be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) $h: X \to Y$ able to attach a label from $Y$ to an example from $X$. This model is learned from a learning sample $S = \{(x_i, y_i)\}_{i=1}^{m}$.

Usually in supervised learning (without domain adaptation), we suppose that the examples $(x_i, y_i) \in S$ are drawn i.i.d. from a distribution $D_S$ of support $X \times Y$ (unknown and fixed). The objective is then to learn $h$ (from $S$) such that it commits the least error possible when labelling new examples coming from the distribution $D_S$.

The main difference between supervised learning and domain adaptation is that in the latter situation we study two different (but related) distributions $D_S$ and $D_T$ on $X \times Y$. The domain adaptation task then consists of the transfer of knowledge from the source domain $D_S$ to the target one $D_T$. The goal is then to learn $h$ (from labeled or unlabelled samples coming from both domains) such that it commits as little error as possible on the target domain $D_T$.
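Written out (a standard formulation stated here for concreteness; the symbol $\epsilon_{D_T}$ is introduced for this purpose and does not appear above), the quantity to be minimised is the target error

$$\epsilon_{D_T}(h) \;=\; \Pr_{(x,y) \sim D_T}\left[\, h(x) \neq y \,\right],$$

even though the labeled training data come mostly, or entirely, from $D_S$.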
The major issue is the following: if a model is learned from a source domain, what is its capacity to correctly label data coming from the target domain?
There are several contexts of domain adaptation, which differ in the information available for the target task: in unsupervised domain adaptation, the learning sample contains labeled source examples and unlabeled target examples; in semi-supervised domain adaptation, a small set of labeled target examples is also available; and in supervised domain adaptation, all of the examples considered are labeled.
The objective is to reweight the source labeled sample such that it "looks like" the target sample (in terms of the error measure considered).[14][15]
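One common way to obtain such weights (a minimal sketch, not the specific method of [14][15]; it uses the standard trick of estimating density ratios with a source-vs-target classifier, and all names here are illustrative) is:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_src, X_tgt):
    """Estimate w(x) ~ p_target(x) / p_source(x) with a domain classifier."""
    X = np.vstack([X_src, X_tgt])
    d = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])  # 0 = source, 1 = target
    domain_clf = LogisticRegression().fit(X, d)
    p = domain_clf.predict_proba(X_src)[:, 1]  # P(target | x) on the source points
    prior = len(X_tgt) / len(X_src)            # correct for the source/target size imbalance
    return (p / (1 - p)) / prior

# The weights can then be passed to any learner that supports per-sample weights:
# clf = LogisticRegression().fit(X_src, y_src,
#                                sample_weight=importance_weights(X_src, X_tgt))
```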
A method for adapting consists in iteratively "auto-labeling" the target examples.[16] The principle is simple:

1. a model $h$ is learned from the labeled examples;
2. $h$ automatically labels some target examples;
3. a new model is learned from the new labeled examples.
Note that there exist other iterative approaches, but they usually need target labeled examples.[17][18]
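A minimal Python sketch of this loop (assuming scikit-learn; the confidence threshold and the stopping rules are illustrative choices, not prescribed by the cited works):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_labeling(X_src, y_src, X_tgt, rounds=5, threshold=0.9):
    """Iteratively move confidently pseudo-labeled target examples into the training set."""
    X_train, y_train = X_src.copy(), y_src.copy()
    remaining = X_tgt.copy()
    model = LogisticRegression().fit(X_train, y_train)  # step 1: learn from labeled data
    for _ in range(rounds):
        if len(remaining) == 0:
            break
        proba = model.predict_proba(remaining)
        confident = proba.max(axis=1) >= threshold      # step 2: auto-label confident targets
        if not confident.any():
            break
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X_train = np.vstack([X_train, remaining[confident]])
        y_train = np.concatenate([y_train, pseudo])
        remaining = remaining[~confident]
        model = LogisticRegression().fit(X_train, y_train)  # step 3: retrain
    return model
```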
The goal is to find or construct a common representation space for the two domains. The objective is to obtain a space in which the domains are close to each other while keeping good performance on the source labeling task. This can be achieved through adversarial machine learning techniques, where feature representations of samples from the different domains are encouraged to be indistinguishable.[19][20]
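One widely used construction of this kind is domain-adversarial training with a gradient-reversal layer; the following PyTorch sketch (layer sizes and module names are illustrative assumptions) trains the feature extractor to fool a domain discriminator while still serving the label classifier:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda going backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

features = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # shared representation
label_head = nn.Linear(64, 2)                           # source labeling task
domain_head = nn.Linear(64, 2)                          # source-vs-target discriminator

def combined_loss(x_src, y_src, x_tgt, lamb=1.0):
    ce = nn.CrossEntropyLoss()
    f_src, f_tgt = features(x_src), features(x_tgt)
    label_loss = ce(label_head(f_src), y_src)
    f_all = torch.cat([f_src, f_tgt])
    d = torch.cat([torch.zeros(len(x_src)), torch.ones(len(x_tgt))]).long()
    # The reversed gradient pushes `features` to make the domains indistinguishable
    # while `domain_head` tries to tell them apart.
    domain_loss = ce(domain_head(GradReverse.apply(f_all, lamb)), d)
    return label_loss + domain_loss
```

Minimising this combined loss with one optimizer over all three modules drives the shared representation toward domain invariance while preserving source-label accuracy.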
The goal is to construct a Bayesian hierarchical model $p(n)$, which is essentially a factorization model for counts $n$, to derive domain-dependent latent representations allowing both domain-specific and globally shared latent factors.[4]
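A toy generative sketch of the "globally shared plus domain-specific factors" structure (NumPy, Poisson factorization; this only illustrates the general idea, not the specific model of [4], and every name and hyperparameter in it is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
V, K_shared, K_dom = 1000, 8, 4  # vocabulary size; shared / domain-specific factor counts

# Globally shared factors plus one set of domain-specific factors per domain.
B_shared = rng.gamma(0.5, 1.0, size=(K_shared, V))
B_domain = {d: rng.gamma(0.5, 1.0, size=(K_dom, V)) for d in ("source", "target")}

def sample_counts(domain, n_docs=100):
    """Draw count vectors n ~ Poisson(theta @ B) for one domain."""
    B = np.vstack([B_shared, B_domain[domain]])             # (K_shared + K_dom, V)
    theta = rng.gamma(0.5, 1.0, size=(n_docs, B.shape[0]))  # per-example factor loadings
    return rng.poisson(theta @ B)

n_source = sample_counts("source")
n_target = sample_counts("target")
```

Inference in such a model would recover the latent loadings, with the shared factors carrying knowledge across domains.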
Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades; examples include the open-source Python libraries ADAPT and TLlib.