Sinkhorn Divergence
Debiased entropy-regularized optimal transport, interpolating between Wasserstein distance and MMD.
sinkhorn_divergence(samples_p, samples_q, *, epsilon=0.01, p=2, max_iter=100, tol=1e-06)
Compute the debiased Sinkhorn divergence between two sample sets.
.. math::

   S_\varepsilon(P, Q) = OT_\varepsilon(P, Q)
   - \frac{1}{2}\bigl(OT_\varepsilon(P, P) + OT_\varepsilon(Q, Q)\bigr)

where :math:`OT_\varepsilon` is the entropy-regularized optimal transport
cost with regularization parameter :math:`\varepsilon`.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `samples_p` | ndarray | Samples from distribution P, of shape `(n, d)`. | required |
| `samples_q` | ndarray | Samples from distribution Q, of shape `(m, d)`. | required |
| `epsilon` | float | Entropic regularization parameter. Smaller values approximate the Wasserstein distance more closely but require more iterations to converge. | `0.01` |
| `p` | int | Exponent for the ground cost :math:`\lVert x - y \rVert^p`. | `2` |
| `max_iter` | int | Maximum number of Sinkhorn iterations. | `100` |
| `tol` | float | Convergence tolerance for the Sinkhorn iterations. | `1e-06` |
Returns:

| Type | Description |
|---|---|
| float | The debiased Sinkhorn divergence, non-negative. |
Notes
The debiasing correction subtracts the self-transport costs to ensure that
:math:`S_\varepsilon(P, P) = 0`. Without debiasing, the entropy-regularized OT
cost is strictly positive even for identical distributions.
Examples: