Cross-Entropy Loss

2024 | AI Dictionary

What is Cross-Entropy Loss: A loss function for classification that measures the difference between predicted and true probability distributions.

What is Cross-Entropy Loss?

Cross-Entropy Loss, also known as Log Loss, is a loss function used for classification problems. It measures the difference between two probability distributions: the predicted probability distribution and the true distribution. A lower cross-entropy indicates that the model's predictions are closer to the actual labels.

Formula for Cross-Entropy Loss

The formula is:

\[ L = - \sum_{i=1}^{n} y_i \log(p_i) \]

where:

\( n \) is the number of classes,
\( y_i \) is the true label for class \( i \) (1 for the correct class, 0 otherwise), and
\( p_i \) is the predicted probability for class \( i \).
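As a minimal sketch of how this formula can be computed in practice (assuming NumPy and a one-hot encoded true label; the function name cross_entropy is illustrative, not from any particular library):

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy between a one-hot true distribution and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)      # clip to avoid log(0)
    return -np.sum(y_true * np.log(y_pred))

# Three-class example: the correct class is index 0.
y_true = np.array([1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.05, 0.05])
print(cross_entropy(y_true, y_pred))        # ~0.105
```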

Applications of Cross-Entropy Loss

Cross-entropy loss is the standard training objective for classification tasks such as image classification, text classification, and language modeling, and it is typically paired with a softmax (multi-class) or sigmoid (binary) output layer in neural networks.

Example of Cross-Entropy Loss

In image classification, suppose the correct label for an image is "cat". If the model assigns a 90% probability to "cat", the cross-entropy loss is lower than if it had assigned only 50%, indicating better model performance.
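Using the formula above with the natural logarithm, the 90% prediction gives

\[ L = -\log(0.9) \approx 0.105, \]

while a 50% prediction gives

\[ L = -\log(0.5) \approx 0.693, \]

so the more confident correct prediction yields the smaller loss.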


