Use a modified version of Shannon Entropy => Cross-Entropy.
2.1 Binary Cross-Entropy Loss
The output takes only 2 classes.
yi: True label
p(yi): Predicted probability of the positive class
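The formula itself is not written out above; in its standard form, averaged over N samples, it is BCE = −(1/N) Σ_i [ y_i log p(y_i) + (1 − y_i) log(1 − p(y_i)) ]. A minimal NumPy sketch of this (the label and probability arrays below are made-up examples):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    """Binary cross-entropy averaged over N samples.

    y: true labels (0 or 1), p: predicted probabilities of the positive class.
    eps guards against log(0).
    """
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 0])            # true labels
p = np.array([0.9, 0.1, 0.8, 0.3])    # predicted probabilities
print(binary_cross_entropy(y, p))     # ~0.198: confident, mostly correct predictions
```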
2.2 Cross-Entropy Loss
The output can take n (> 2) classes.
q(yc): True label, one-hot encoded
p(yc): Predicted probability, obtained from a softmax over the model outputs
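Written out in this notation, the per-sample loss is CE = −Σ_c q(y_c) log p(y_c), where p(y_c) comes from a softmax over the model's raw outputs. A minimal sketch (the logits below are made up for illustration):

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)                 # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(q, p, eps=1e-12):
    """CE = -sum_c q(y_c) * log p(y_c) for a single sample."""
    return -np.sum(q * np.log(np.clip(p, eps, 1.0)))

logits = np.array([2.0, 0.5, -1.0])   # hypothetical model outputs
q = np.array([1.0, 0.0, 0.0])         # one-hot true label (class 0)
p = softmax(logits)
print(cross_entropy(q, p))            # ~0.24: most probability mass on the true class
```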
Comparing with Shannon Entropy
If p(yc) moves closer to q(yc) (which minimizes the cross-entropy), Cross-Entropy approaches Shannon Entropy. But Cross-Entropy is generally greater than Shannon Entropy, and the gap between the two is the Kullback-Leibler Divergence. Kullback-Leibler Divergence measures the divergence between q(yc) and p(yc).
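Spelled out, the relation is H(q, p) = H(q) + D_KL(q ‖ p) with D_KL(q ‖ p) ≥ 0, so the cross-entropy never drops below the Shannon entropy and equals it exactly when p(yc) = q(yc). A small numeric check (the two distributions are arbitrary examples):

```python
import numpy as np

def entropy(q):
    return -np.sum(q * np.log(q))

def cross_entropy(q, p):
    return -np.sum(q * np.log(p))

def kl_divergence(q, p):
    return np.sum(q * np.log(q / p))

q = np.array([0.7, 0.2, 0.1])   # "true" distribution (arbitrary example)
p = np.array([0.5, 0.3, 0.2])   # predicted distribution (arbitrary example)

print(cross_entropy(q, p))                          # ~0.887
print(entropy(q) + kl_divergence(q, p))             # same value: H(q,p) = H(q) + KL
print(np.isclose(cross_entropy(q, q), entropy(q)))  # True: equal when p == q
```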
Refer to this Q&A:
What is the difference between Cross-entropy and KL divergence?
You need some conditions to claim the equivalence between minimizing cross-entropy and minimizing KL divergence. I will put your question in the context of classification problems that use cross-entropy as the loss function.
Let us first recall that entropy is used to measure the uncertainty of a system, which is defined as

S(v) = − Σ_i p(v_i) log p(v_i),

for p(v_i) as the probabilities of the different states v_i of the system. From an information theory point of view, S(v) is the amount of information needed to remove the uncertainty. For instance, the event A = "I will die within 200 years" is almost certain (we may solve the aging problem, hence the word "almost"), therefore it has low uncertainty and requires only the information "the aging problem cannot be solved" to make it certain. However, the event B = "I will die within 50 years" is more uncertain than event A, and thus needs more information to remove its uncertainty. Here entropy can be used to quantify the uncertainty of the distribution "When will I die?", which can be regarded as the expectation of the uncertainties of individual events like A and B.
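As a quick numeric illustration of this point (the probabilities below are made up), a nearly certain distribution carries far less entropy, i.e. needs far less information to resolve, than a more uncertain one:

```python
import numpy as np

def entropy(p):
    """Shannon entropy S = -sum_i p_i * log p_i (in nats)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # treat 0 * log 0 as 0
    return -np.sum(p * np.log(p))

# Hypothetical two-outcome distributions (event happens vs. not)
almost_certain = [0.99, 0.01]       # e.g. "I will die within 200 years"
quite_uncertain = [0.6, 0.4]        # e.g. "I will die within 50 years"

print(entropy(almost_certain))      # ~0.056 nats: little information needed
print(entropy(quite_uncertain))     # ~0.673 nats: much more information needed
```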
Now look at the definition of KL divergence between distributions A and B:

D_KL(A ‖ B) = Σ_i p_A(v_i) log p_A(v_i) − Σ_i p_A(v_i) log p_B(v_i),

where the first term on the right-hand side is the (negative) entropy of distribution A, and the second term can be interpreted as the expectation of −log p_B under A. D_KL(A ‖ B) describes how different B is from A, from the perspective of A. It is worth noting that A usually stands for the data, i.e. the measured distribution, and B is the theoretical or hypothetical distribution. That means you always start from what you observed.

To relate cross entropy to entropy and KL divergence, we formalize the cross entropy in terms of distributions A and B as

H(A, B) = − Σ_i p_A(v_i) log p_B(v_i).

From the definitions, we can easily see that

H(A, B) = D_KL(A ‖ B) + S(A).

If S(A) is a constant, then minimizing H(A, B) is equivalent to minimizing D_KL(A ‖ B). A further question naturally follows: how can the entropy be a constant? In a machine learning task, we start with a dataset (denoted as P(D)) which represents the problem to be solved, and the learning purpose is to make the model-estimated distribution (denoted as P(model)) as close as possible to the true distribution of the problem (denoted as P(truth)). P(truth) is unknown and is represented by P(D). Therefore, in an ideal world, we expect
P(model) ≈ P(D) ≈ P(truth)

and minimize D_KL(P(D) ‖ P(model)). And luckily, in practice the dataset D is given, which means its entropy is fixed as a constant.

In practice, models usually work with samples packed in mini-batches. Writing p for the ground-truth distribution and q for the model's distribution, the relation between KL divergence and Cross-Entropy can be written as

D_KL(p ‖ q) = Σ_c p(y_c) log( p(y_c) / q(y_c) ) = − Σ_c p(y_c) log q(y_c) + Σ_c p(y_c) log p(y_c),

so we have

D_KL(p ‖ q) = H(p, q) − H(p).

From this equation, we can see that KL divergence decomposes into a Cross-Entropy of p and q (the first part) and the global entropy of the ground truth p (the second part). In many machine learning projects, mini-batches are used to speed up training, where the p of a mini-batch may be different from the global p. In such a case, Cross-Entropy is relatively more robust in practice, while KL divergence needs a more stable H(p) to do its job.
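A small sketch of that decomposition on a hypothetical mini-batch: per sample, KL divergence and Cross-Entropy differ only by H(p), which depends on the labels alone and not on the model, so with fixed targets both losses drive the model toward the same optimum:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical mini-batch: 4 samples, 3 classes, label-smoothed targets p.
p = np.array([[0.9, 0.05, 0.05],
              [0.05, 0.9, 0.05],
              [0.05, 0.05, 0.9],
              [0.9, 0.05, 0.05]])
q = softmax(rng.normal(size=(4, 3)))              # model predictions (random stand-in)

ce = -(p * np.log(q)).sum(axis=1).mean()          # H(p, q) averaged over the batch
kl = (p * np.log(p / q)).sum(axis=1).mean()       # D_KL(p || q) averaged over the batch
h = -(p * np.log(p)).sum(axis=1).mean()           # H(p): depends only on the labels

print(np.isclose(kl, ce - h))                     # True: the losses differ only by H(p)
```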