In information theory, the Rényi entropy is a quantity that generalizes various notions of entropy, including Hartley entropy, Shannon entropy, collision entropy, and min-entropy. The Rényi entropy is named after Alfréd Rényi, who looked for the most general way to quantify information while preserving additivity for independent events.[1][2] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.[3]
The Rényi entropy of order $\alpha$, where $0 < \alpha < \infty$ and $\alpha \neq 1$, is defined as[1]
$$\mathrm{H}_\alpha(X) = \frac{1}{1-\alpha} \log\left( \sum_{i=1}^{n} p_i^{\alpha} \right).$$
It is further defined at $\alpha = 0, 1, \infty$ as
$$\mathrm{H}_\alpha(X) = \lim_{\gamma \to \alpha} \mathrm{H}_\gamma(X).$$
Here, $X$ is a discrete random variable with possible outcomes in the set $\mathcal{A} = \{x_1, x_2, \dots, x_n\}$ and corresponding probabilities $p_i = \Pr(X = x_i)$ for $i = 1, \dots, n$. The resulting unit of information is determined by the base of the logarithm, e.g. shannon for base 2, or nat for base e.
If the probabilities are $p_i = 1/n$ for all $i = 1, \dots, n$, then all the Rényi entropies of the distribution are equal: $\mathrm{H}_\alpha(X) = \log n$.
In general, for all discrete random variables $X$, $\mathrm{H}_\alpha(X)$ is a non-increasing function in $\alpha$.
Applications often exploit the following relation between the Rényi entropy and the $\alpha$-norm of the vector of probabilities:
$$\mathrm{H}_\alpha(X) = \frac{\alpha}{1-\alpha} \log \left( \|P\|_\alpha \right).$$
Here, the discrete probability distribution $P = (p_1, \dots, p_n)$ is interpreted as a vector in $\mathbb{R}^n$ with $p_i \geq 0$ and $\sum_{i=1}^{n} p_i = 1$.
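A minimal numerical sketch of the definition and of the $\alpha$-norm relation above, in Python with NumPy (the function name `renyi_entropy` and the example distribution are illustrative, not from the cited sources):

```python
import numpy as np

def renyi_entropy(p, alpha, base=2):
    """Rényi entropy of order alpha (alpha > 0, alpha != 1) for a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # outcomes with zero probability do not contribute
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha) / np.log(base)

p = [0.5, 0.25, 0.125, 0.125]
alpha = 2.0

# Equivalent form via the alpha-norm: H_alpha = alpha/(1-alpha) * log ||P||_alpha
alpha_norm = np.sum(np.asarray(p) ** alpha) ** (1.0 / alpha)
via_norm = alpha / (1.0 - alpha) * np.log(alpha_norm) / np.log(2)

print(renyi_entropy(p, alpha))  # collision entropy H_2, in shannons
print(via_norm)                 # same value, computed from the 2-norm of P
```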
Figure: Rényi entropy of a random variable with two possible outcomes against $p_1$, where $P = (p_1, 1 - p_1)$. Shown are $\mathrm{H}_0$, $\mathrm{H}_1$, $\mathrm{H}_2$ and $\mathrm{H}_\infty$, with the unit on the vertical axis being the shannon.
As $\alpha$ approaches zero, the Rényi entropy increasingly weighs all events with nonzero probability more equally, regardless of their probabilities. In the limit for $\alpha \to 0$, the Rényi entropy is just the logarithm of the size of the support of $X$. The limit for $\alpha \to 1$ is the Shannon entropy. As $\alpha$ approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.
Hartley or max-entropy
$\mathrm{H}_0(X) = \log n$, where $n$ is the number of non-zero probabilities.[6] If the probabilities are all nonzero, it is simply the logarithm of the cardinality of the alphabet ($\mathcal{A}$) of $X$, sometimes called the Hartley entropy of $X$:
$$\mathrm{H}_0(X) = \log |\mathcal{A}|.$$
In the limit as $\alpha \to \infty$, the Rényi entropy converges to the min-entropy:
$$\mathrm{H}_\infty(X) = \min_i \left( -\log p_i \right) = -\log \max_i p_i.$$
Equivalently, the min-entropy is the largest real number $b$ such that all events occur with probability at most $2^{-b}$.
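A short sketch of the two equivalent characterizations, in illustrative Python (the distribution is a made-up example):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])

h_inf = -np.log2(p.max())        # min-entropy: -log2 of the most probable outcome
print(h_inf)                     # 1.0 shannon

# Equivalently, the largest b such that every outcome has probability at most 2**(-b):
print(np.all(p <= 2.0 ** (-h_inf)))   # True
```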
The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies.
In this sense, it is the strongest way to measure the information content of a discrete random variable.
In particular, the min-entropy is never larger than the Shannon entropy.
The min-entropy has important applications for randomness extractors in theoretical computer science:
Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.
Inequalities for different orders α
That $\mathrm{H}_\alpha$ is non-increasing in $\alpha$ for any given distribution of probabilities $p_1, \dots, p_n$ can be proven by differentiation,[8] as
$$-\frac{d \mathrm{H}_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^{n} z_i \log\left( \frac{z_i}{p_i} \right),$$
which is proportional to the Kullback–Leibler divergence (which is always non-negative), where $z_i = p_i^{\alpha} / \sum_{j=1}^{n} p_j^{\alpha}$. In particular, it is strictly positive except when the distribution is uniform.
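The identity above can be checked numerically; a sketch in illustrative Python, comparing a finite-difference derivative of $\mathrm{H}_\alpha$ with the Kullback–Leibler expression (the distribution and the order $\alpha = 1.7$ are arbitrary choices):

```python
import numpy as np

def renyi(p, a):
    return np.log(np.sum(p ** a)) / (1.0 - a)   # natural-log units

p = np.array([0.5, 0.3, 0.2])
a, eps = 1.7, 1e-6

# Finite-difference derivative of H_alpha at alpha = a
dH = (renyi(p, a + eps) - renyi(p, a - eps)) / (2 * eps)

# Right-hand side: -(1/(1-a)^2) * KL(z || p), with z_i proportional to p_i^a
z = p ** a / np.sum(p ** a)
rhs = -np.sum(z * np.log(z / p)) / (1.0 - a) ** 2

print(dH, rhs)   # the two values agree and are negative, so H_alpha is decreasing here
```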
For values of $\alpha > 1$, inequalities in the other direction also hold. In particular, we have[11][12]
$$\mathrm{H}_2 \le 2\, \mathrm{H}_\infty.$$
On the other hand, the Shannon entropy $\mathrm{H}_1$ can be arbitrarily high for a random variable $X$ that has a given min-entropy. An example of this is given by the sequence of random variables $X_n$, for $n \geq 1$, such that $\Pr(X_n = 0) = 1/2$ and $\Pr(X_n = i) = 1/(2n)$ for $i = 1, \dots, n$: here $\mathrm{H}_\infty(X_n) = \log 2$ for every $n$, but $\mathrm{H}_1(X_n) = (\log 2 + \log 2n)/2$ grows without bound.
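A quick numerical check of this example, in illustrative Python with base-2 logarithms:

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p))

def min_entropy(p):
    return -np.log2(np.max(p))

for n in (10, 100, 1000):
    # X_n: probability 1/2 on outcome 0, probability 1/(2n) on each of outcomes 1..n
    p = np.array([0.5] + [0.5 / n] * n)
    print(n, min_entropy(p), shannon_entropy(p))
    # the min-entropy stays at 1 shannon, while the Shannon entropy keeps growing
```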
Rényi divergence
As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[13]
The Rényi divergence of order $\alpha$, or alpha-divergence, of a distribution P from a distribution Q is defined to be
$$D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log \left( \sum_{i=1}^{n} \frac{p_i^{\alpha}}{q_i^{\alpha - 1}} \right)$$
when $0 < \alpha < \infty$ and $\alpha \neq 1$. We can define the Rényi divergence for the special values α = 0, 1, ∞ by taking a limit, and in particular the limit α → 1 gives the Kullback–Leibler divergence. Special cases include:
$D_2(P \| Q) = \log \mathbb{E}_{P}\!\left[ p_i / q_i \right]$: the log of the expected ratio of the probabilities;
$D_\infty(P \| Q) = \log \max_i \left( p_i / q_i \right)$: the log of the maximum ratio of the probabilities.
The Rényi divergence is indeed a divergence, meaning simply that $D_\alpha(P \| Q)$ is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order α, and it is continuous on the set of α for which it is finite.[13] Rényi also called this quantity, for the sake of brevity, the information of order α obtained if the distribution P is replaced by the distribution Q.[1]
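A small sketch of the divergence for finite distributions, in illustrative Python, together with two of its limiting cases (the distributions are made-up examples and both are assumed strictly positive):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) in nats, for alpha > 0, alpha != 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p ** alpha * q ** (1.0 - alpha))) / (alpha - 1.0)

p = np.array([0.4, 0.4, 0.2])
q = np.array([0.2, 0.5, 0.3])

print(renyi_divergence(p, q, 0.999))   # close to the Kullback-Leibler divergence
print(np.sum(p * np.log(p / q)))       # D_1 = KL(P || Q), for comparison

print(renyi_divergence(p, q, 50.0))    # approaches D_inf as alpha grows
print(np.log(np.max(p / q)))           # D_inf: log of the maximum probability ratio
```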
Financial interpretation
A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows[14]
$$\mathrm{ExpectedRate} = \frac{1}{R}\, D_1(b \| m) + \frac{R - 1}{R}\, D_{1/R}(b \| m),$$
where $m$ is the distribution defining the official odds (i.e. the "market") for the game, $b$ is the investor-believed distribution and $R$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).
If the true distribution is $p$ (not necessarily coinciding with the investor's belief $b$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure[14]
$$\mathrm{RealizedRate} = \frac{1}{R} \left( D_1(p \| m) - D_1(p \| b) \right) + \frac{R - 1}{R} \left( D_{1/R}(p \| m) - D_{1/R}(p \| b) \right).$$
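For the logarithmic-utility (Kelly) case $R = 1$ this reduces to a standard identity that can be checked directly; a sketch in illustrative Python, with made-up numbers for the market, belief and true distributions:

```python
import numpy as np

# Market-implied distribution m (fair odds 1/m_i), investor belief b, true distribution p.
m = np.array([0.5, 0.3, 0.2])
b = np.array([0.4, 0.4, 0.2])
p = np.array([0.45, 0.35, 0.2])

def kl(a, c):
    return np.sum(a * np.log(a / c))

# A Kelly (R = 1) investor stakes fraction b_i of wealth on outcome i at odds 1/m_i,
# so the long-run realized growth rate is the expectation under p of log(b_i / m_i) ...
realized_rate = np.sum(p * np.log(b / m))

# ... which equals D_1(p||m) - D_1(p||b): gain from the market's mispricing,
# minus a penalty for the investor's own misjudgement of the true distribution.
print(realized_rate, kl(p, m) - kl(p, b))
```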
The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the conditional distribution p(x|a) remains m(x|a), unchanged.
The other Rényi divergences satisfy the criteria of being positive and continuous, being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then
$$\mathrm{H}_\alpha(A, X) = \mathrm{H}_\alpha(A) + \mathrm{H}_\alpha(X)$$
and
$$D_\alpha(P(A)P(X) \,\|\, Q(A)Q(X)) = D_\alpha(P(A) \| Q(A)) + D_\alpha(P(X) \| Q(X)).$$
The Rényi relative entropy (or divergence) can be generalized in two possible ways: the straightforward quantum Rényi relative entropy
$$D_\alpha(\rho \| \sigma) = \frac{1}{\alpha - 1} \log \operatorname{Tr}\!\left[ \rho^{\alpha} \sigma^{1 - \alpha} \right]$$
and the sandwiched Rényi relative entropy
$$\widetilde{D}_\alpha(\rho \| \sigma) = \frac{1}{\alpha - 1} \log \operatorname{Tr}\!\left[ \left( \sigma^{\frac{1-\alpha}{2\alpha}} \rho\, \sigma^{\frac{1-\alpha}{2\alpha}} \right)^{\alpha} \right].$$
Both converge to the quantum relative entropy in the limit $\alpha \to 1$. However, the sandwiched Rényi relative entropy has more convenient properties when used to define conditional entropies,[16] and has found applications in quantum key distribution.[17][18]
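Both quantities can be evaluated directly for finite-dimensional, full-rank density matrices; a sketch in illustrative Python with NumPy (the example states are made up, and matrix powers are taken via eigendecomposition):

```python
import numpy as np

def mat_pow(A, s):
    """Fractional power of a Hermitian positive-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w ** s) @ V.conj().T

def straightforward_renyi(rho, sigma, a):
    """'Straightforward' quantum Rényi relative entropy, in nats."""
    return np.log(np.trace(mat_pow(rho, a) @ mat_pow(sigma, 1 - a)).real) / (a - 1)

def sandwiched_renyi(rho, sigma, a):
    """Sandwiched quantum Rényi relative entropy, in nats."""
    s = mat_pow(sigma, (1 - a) / (2 * a))
    return np.log(np.trace(mat_pow(s @ rho @ s, a)).real) / (a - 1)

# Two full-rank qubit density matrices (arbitrary example states).
rho   = np.array([[0.7, 0.2], [0.2, 0.3]])
sigma = np.array([[0.5, 0.1], [0.1, 0.5]])

for a in (0.999, 1.001, 2.0):
    print(a, straightforward_renyi(rho, sigma, a), sandwiched_renyi(rho, sigma, a))
# Near alpha = 1 both approach the quantum relative entropy Tr[rho (log rho - log sigma)].
```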
^ Devroye, Luc; Györfi, Laszlo; Lugosi, Gabor (1996-04-04). A Probabilistic Theory of Pattern Recognition (Corrected ed.). New York, NY: Springer. ISBN 978-0-387-94618-4.
^ a b Müller-Lennert, Martin; Dupuis, Frédéric; Szehr, Oleg; Fehr, Serge; Tomamichel, Marco (1 December 2013). "On quantum Rényi entropies: A new generalization and some properties". Journal of Mathematical Physics. 54 (12): 122203. arXiv:1306.3142. Bibcode:2013JMP....54l2203M. doi:10.1063/1.4838856.
^ Arqand, Amir; Hahn, Thomas A.; Tan, Ernest Y.-Z. (2024). "Generalized Rényi entropy accumulation theorem and generalized quantum probability estimation". arXiv:2405.05912 [quant-ph].
^ Chung, Rebecca R. B.; Ng, Nelly H. Y.; Cai, Yu (14 July 2025). "Generalized numerical framework for improved finite-sized key rates with Rényi entropy". Physical Review A. 112 (1): 012612. arXiv:2502.02319. Bibcode:2025PhRvA.112a2612C. doi:10.1103/tyts-8v8j.
References
Beck, Christian; Schlögl, Friedrich (1993). Thermodynamics of chaotic systems: an introduction. Cambridge University Press. ISBN 0-521-43367-3.