Cross-Entropy Loss: Meaning, Applications & Example
A loss function commonly used in classification problems.
What is Cross-Entropy Loss?
Cross-Entropy Loss, also known as Log Loss, is a loss function used for classification problems. It measures the difference between two probability distributions: the distribution predicted by the model and the true label distribution. A lower cross-entropy indicates that the model's predictions are closer to the actual labels.
Formula for Cross-Entropy Loss
The formula is:
\[ L = - \sum_{i=1}^{n} y_i \log(p_i) \]
where:
- \(n\) is the number of classes.
- \(y_i\) is the true label for class \(i\) (1 for the correct class, 0 otherwise).
- \(p_i\) is the predicted probability for class \(i\).
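As a minimal sketch of the formula (the function name `cross_entropy` and the example values are illustrative, not part of the original text), the loss can be computed directly from a one-hot label vector and a vector of predicted probabilities:

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Cross-entropy between a one-hot label vector and predicted probabilities.

    y_true: one-hot encoded true labels, shape (n_classes,)
    p_pred: predicted probabilities, shape (n_classes,), summing to 1
    eps: small constant to keep log() well-defined
    """
    p_pred = np.clip(p_pred, eps, 1.0)       # guard against log(0)
    return -np.sum(y_true * np.log(p_pred))  # L = -sum_i y_i * log(p_i)

# Example: the true class is index 2 and the model assigns it 70% probability
y_true = np.array([0.0, 0.0, 1.0])
p_pred = np.array([0.1, 0.2, 0.7])
print(cross_entropy(y_true, p_pred))  # ~0.357, i.e. -log(0.7)
```

Because the label vector is one-hot, only the term for the true class contributes, so the loss reduces to the negative log of the probability assigned to the correct class.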
Applications of Cross-Entropy Loss
- Binary Classification: Scores how well a model's predicted probabilities match binary outcomes (e.g., yes/no, spam/not spam); a binary-specific form is sketched after this list.
- Multi-class Classification: Extends to multiple categories, comparing the predicted probabilities for each class.
- Neural Networks: Often used as the loss function for classification tasks during model training.
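For the binary case mentioned above, cross-entropy reduces to the familiar log loss over a single predicted probability. Below is a minimal sketch (the function name `binary_cross_entropy` and the example values are illustrative assumptions):

```python
import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """Binary cross-entropy (log loss) for a single example.

    y_true: true label, 0 or 1
    p_pred: predicted probability that the label is 1
    """
    p_pred = np.clip(p_pred, eps, 1.0 - eps)  # keep log() well-defined
    # L = -[y * log(p) + (1 - y) * log(1 - p)]
    return -(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))

# A confident correct prediction is penalized far less than an uncertain one
print(binary_cross_entropy(1, 0.95))  # ~0.051
print(binary_cross_entropy(1, 0.55))  # ~0.598
```

The second term, \((1 - y)\log(1 - p)\), penalizes probability mass placed on the wrong class, which is why this form is standard for yes/no style predictions.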
Example of Cross-Entropy Loss
In image classification, if a model assigns a 90% probability to the correct class, the cross-entropy loss is lower than if it had assigned only 50%, indicating better model performance.
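Plugging these two predictions into the formula (using the natural logarithm, and noting that with a one-hot label only the true-class term survives) gives the worked comparison:
\[ L_{90\%} = -\log(0.9) \approx 0.105, \qquad L_{50\%} = -\log(0.5) \approx 0.693 \]
The more confident correct prediction yields a loss roughly six times smaller, which is exactly the behavior the loss is designed to reward.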