The Dice–Sørensen coefficient (see below for other names) is a statistic used to gauge the similarity of two samples. It was independently developed by the botanists Lee Raymond Dice[1] and Thorvald Sørensen,[2] who published in 1945 and 1948 respectively.
The index is known by several other names, especially Sørensen–Dice index,[3] Sørensen index and Dice's coefficient. Other variations include the "similarity coefficient" or "index", such as Dice similarity coefficient (DSC). Common alternate spellings for Sørensen are Sorenson, Soerenson and Sörenson, and all three can also be seen with the –sen ending (the Danish letter ø is phonetically equivalent to the German/Swedish ö, which can be written as oe in ASCII).
Sørensen's original formula was intended to be applied to discrete data. Given two sets, X and Y, it is defined as

$$\mathrm{DSC} = \frac{2\,|X \cap Y|}{|X| + |Y|}$$
where |X| and |Y| are the cardinalities of the two sets (i.e. the number of elements in each set). The Sørensen index equals twice the number of elements common to both sets divided by the sum of the number of elements in each set. Equivalently, the index is the size of the intersection as a fraction of the average size of the two sets.
When applied to Boolean data, using the definition of true positive (TP), false positive (FP), and false negative (FN), it can be written as

$$\mathrm{DSC} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$$
It differs from the Jaccard index, which counts true positives only once in both the numerator and the denominator. DSC is the quotient of similarity and ranges between 0 and 1.[9] It can be viewed as a similarity measure over sets.
Similarly to the Jaccard index, the set operations can be expressed in terms of vector operations over binary vectors a and b:

$$s_v = \frac{2\,|\mathbf{a} \cdot \mathbf{b}|}{|\mathbf{a}|^2 + |\mathbf{b}|^2}$$
which gives the same outcome over binary vectors and also yields a more general similarity measure over arbitrary real-valued vectors.
For sets X and Y of keywords used in information retrieval, the coefficient may be defined as twice the shared information (intersection) over the sum of cardinalities:[10]

$$\mathrm{DSC}(X, Y) = \frac{2\,|X \cap Y|}{|X| + |Y|}$$
When taken as a string similarity measure, the coefficient may be calculated for two strings, x and y, using bigrams as follows:[11]

$$s = \frac{2 n_t}{n_x + n_y}$$
where $n_t$ is the number of character bigrams found in both strings, $n_x$ is the number of bigrams in string x and $n_y$ is the number of bigrams in string y. For example, to calculate the similarity between:
night
nacht
We would find the set of bigrams in each word:
{ni, ig, gh, ht}
{na, ac, ch, ht}

Each set has four elements, and the intersection of these two sets has only one element: ht.
Inserting these numbers into the formula, we calculate s = (2 · 1) / (4 + 4) = 0.25.
For a discrete (binary) ground truth and continuous measures in the interval [0,1], the following continuous Dice coefficient can be used:[12]

$$\mathrm{cDC} = \frac{2\,|A \cap B|}{c\,|A| + |B|}$$

where $|A \cap B| = \sum_i a_i b_i$, $|A| = \sum_i a_i$ and $|B| = \sum_i b_i$, and c is computed as

$$c = \frac{\sum_i a_i b_i}{\sum_i a_i \operatorname{sign}(b_i)}.$$

If $\sum_i a_i \operatorname{sign}(b_i) = 0$, which means there is no overlap between A and B, c is set to 1 arbitrarily.
This coefficient is not very different in form from the Jaccard index. In fact, both are equivalent in the sense that given a value S for the Sørensen–Dice coefficient, one can calculate the respective Jaccard index value J and vice versa, using the equations $J = S/(2 - S)$ and $S = 2J/(1 + J)$.
Since the Sørensen–Dice coefficient does not satisfy the triangle inequality, it can be considered a semimetric version of the Jaccard index.[4]
The function ranges between zero and one, like Jaccard. Unlike Jaccard, the corresponding difference function

$$d = 1 - \frac{2\,|X \cap Y|}{|X| + |Y|}$$
is not a proper distance metric as it does not satisfy the triangle inequality.[4] The simplest counterexample of this is given by the three sets {a}, {b}, and {a,b}: the distance between the first two is 1, while the distance between the third and each of the others is one-third. To satisfy the triangle inequality, the sum of any two of these three sides must be greater than or equal to the remaining side. However, the distance between {a} and {a,b} plus the distance between {b} and {a,b} equals 2/3 and is therefore less than the distance between {a} and {b}, which is 1.
The Sørensen–Dice coefficient is useful for ecological community data (e.g. Looman & Campbell, 1960[13]). Justification for its use is primarily empirical rather than theoretical (although it can be justified theoretically as the intersection of two fuzzy sets[14]). Compared to Euclidean distance, the Sørensen distance retains sensitivity in more heterogeneous data sets and gives less weight to outliers.[15] Recently, the Dice score (and its variations, e.g. logDice, which takes a logarithm of it) has become popular in computer lexicography for measuring the lexical association score of two given words.[16] logDice is also used as part of the Mash Distance for genome and metagenome distance estimation.[17] Finally, Dice is used in image segmentation, in particular for comparing algorithm output against reference masks in medical applications.[8]
The expression is easily extended to abundance instead of presence/absence of species; this quantitative version is known by several names.