Rényi Family
Entropy and divergence parameterized by order \(\alpha\), interpolating between Hartley (\(\alpha \to 0\)), Shannon (\(\alpha \to 1\)), collision (\(\alpha = 2\)), and min-entropy (\(\alpha \to \infty\)).
renyi_entropy(sample, *, alpha, base=np.e, discrete=False)
Compute the Rényi entropy of order alpha from a sample.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample` | `ndarray` | Sample from the distribution. | *required* |
| `alpha` | `float` | Order of the Rényi entropy. Must be non-negative. Special cases: `alpha = 0` (Hartley), `alpha -> 1` (Shannon), `alpha = 2` (collision), `alpha -> inf` (min-entropy). | *required* |
| `base` | `float` | Base of the logarithm: `np.e` for nats (default), 2 for bits, 10 for hartleys. | `np.e` |
| `discrete` | `bool` | If True, treat the sample as draws from a discrete distribution and compute frequencies directly. If False (default), estimate the density via kernel density estimation. | `False` |
Returns:

| Type | Description |
|---|---|
| `float` | The estimated Rényi entropy of order `alpha`. |
Notes
For a discrete distribution P = (p_1, ..., p_k), the Rényi entropy of order alpha is defined as

.. math::

    H_\alpha(P) = \frac{1}{1 - \alpha} \log\!\left(\sum_{i=1}^{k} p_i^\alpha\right)
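As a quick sanity check, the definition can be evaluated directly with NumPy for the probability vector used in the Examples section below (this is a plain evaluation of the formula above, not a call into the library):

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])  # pmf used in the Examples below
alpha = 2.0

# H_alpha(P) = 1/(1 - alpha) * log(sum_i p_i^alpha)
h2 = np.log(np.sum(p ** alpha)) / (1.0 - alpha)
print(h2)  # -log(0.38) ≈ 0.9676, consistent with the 0.97... doctest output
```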
Key properties:
- Non-negative for discrete distributions: H_alpha >= 0.
- Non-increasing in alpha: H_alpha1 >= H_alpha2 when alpha1 < alpha2.
- Reduces to Shannon entropy as alpha -> 1.
- Hartley entropy at alpha = 0: H_0 = log(|support|).
- Min-entropy at alpha = +inf: H_inf = -log(max_i p_i).
For continuous distributions, the density is estimated via KDE and the integral is computed using the trapezoidal rule on the KDE grid.
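That continuous path can be sketched in a few lines, assuming `scipy.stats.gaussian_kde` for the density estimate and trapezoidal quadrature over a fixed grid; the library's internal bandwidth and grid choices may differ:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
sample = rng.normal(size=5_000)  # draws from N(0, 1)

kde = gaussian_kde(sample)
grid = np.linspace(sample.min() - 3, sample.max() + 3, 2_048)
f = kde(grid)  # estimated density on the grid

alpha = 2.0
# H_alpha = 1/(1 - alpha) * log( integral of f(x)^alpha dx )
h2 = np.log(trapezoid(f ** alpha, grid)) / (1.0 - alpha)
print(h2)  # exact value for N(0, 1): 0.5*log(2*pi) + log(2)/2 ≈ 1.2655
```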
Examples:
>>> import numpy as np
>>> from divergence.renyi import renyi_entropy
>>> rng = np.random.default_rng(42)
>>> sample = rng.choice([0, 1, 2], size=10000, p=[0.2, 0.3, 0.5])
>>> renyi_entropy(sample, alpha=2, base=np.e, discrete=True)
0.97...
References
.. [1] Rényi, A. (1961). "On measures of entropy and information." Proc. 4th Berkeley Symp. Math. Stat. Prob., 1, 547-561.
renyi_divergence(sample_p, sample_q, *, alpha, base=np.e, discrete=False)
Compute the Rényi divergence of order alpha from samples.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sample_p` | `ndarray` | Sample from distribution P. | *required* |
| `sample_q` | `ndarray` | Sample from distribution Q. | *required* |
| `alpha` | `float` | Order of the Rényi divergence. Must be positive. Special case: `alpha -> 1` recovers the Kullback-Leibler divergence. | *required* |
| `base` | `float` | Base of the logarithm: `np.e` for nats (default), 2 for bits, 10 for hartleys. | `np.e` |
| `discrete` | `bool` | If True, treat samples as draws from discrete distributions. If False (default), estimate densities via KDE. | `False` |
Returns:

| Type | Description |
|---|---|
| `float` | The estimated Rényi divergence of order `alpha`. |
Notes
For discrete distributions P and Q, the Rényi divergence of order alpha is defined as

.. math::

    D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log\!\left(\sum_{i=1}^{k} p_i^\alpha \, q_i^{1-\alpha}\right)
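As with the entropy, the definition is easy to evaluate directly for known pmfs (a plain NumPy evaluation of the formula, not the library's sample-based estimator):

```python
import numpy as np

p = np.array([0.5, 0.5])
q = np.array([0.75, 0.25])
alpha = 2.0

# D_alpha(P||Q) = 1/(alpha - 1) * log(sum_i p_i^alpha * q_i^(1 - alpha))
d2 = np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)
print(d2)  # log(1/3 + 1) = log(4/3) ≈ 0.2877
```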
Key properties:
- Non-negative: D_alpha(P || Q) >= 0, with equality iff P = Q.
- Monotonically non-decreasing in alpha: D_alpha1 <= D_alpha2 when alpha1 < alpha2.
- Reduces to KL divergence as alpha -> 1.
For continuous distributions, the densities are estimated via KDE and the integral is computed using the trapezoidal rule. Log-space arithmetic is used for numerical stability.
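In the spirit of that note, here is a minimal sketch of a discrete plug-in estimator working in log space. It computes empirical frequencies on the union of observed symbols and uses `scipy.special.logsumexp` for stability; the helper `renyi_divergence_plugin` is illustrative only and not the library's implementation:

```python
import numpy as np
from scipy.special import logsumexp

def renyi_divergence_plugin(sample_p, sample_q, alpha):
    """Illustrative plug-in estimate of D_alpha(P||Q) in nats."""
    support = np.union1d(sample_p, sample_q)
    p = np.array([np.mean(sample_p == s) for s in support])
    q = np.array([np.mean(sample_q == s) for s in support])
    mask = p > 0  # symbols with p_i = 0 contribute nothing for alpha > 0
    log_p, log_q = np.log(p[mask]), np.log(q[mask])
    # log-space sum: 1/(alpha - 1) * log( sum_i p_i^alpha * q_i^(1 - alpha) )
    return logsumexp(alpha * log_p + (1.0 - alpha) * log_q) / (alpha - 1.0)

rng = np.random.default_rng(42)
p = rng.choice([0, 1, 2], size=10_000, p=[0.2, 0.3, 0.5])
q = rng.choice([0, 1, 2], size=10_000, p=[0.3, 0.3, 0.4])
print(renyi_divergence_plugin(p, q, alpha=2.0))
# As alpha -> 1 the estimate approaches the plug-in KL divergence:
print(renyi_divergence_plugin(p, q, alpha=1.0001))
```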
Examples:
>>> import numpy as np
>>> from divergence.renyi import renyi_divergence
>>> rng = np.random.default_rng(42)
>>> p = rng.choice([0, 1, 2], size=10000, p=[0.2, 0.3, 0.5])
>>> q = rng.choice([0, 1, 2], size=10000, p=[0.3, 0.3, 0.4])
>>> renyi_divergence(p, q, alpha=2, base=np.e, discrete=True)
0.03...
References
.. [1] Rényi, A. (1961). "On measures of entropy and information." Proc. 4th Berkeley Symp. Math. Stat. Prob., 1, 547-561.
.. [2] Van Erven, T. & Harremoës, P. (2014). "Rényi divergence and Kullback-Leibler divergence." IEEE Trans. Inform. Theory, 60(7), 3797-3820.