Mini-Batch: Meaning, Applications & Example
Small subset of training data used in each iteration.
What is Mini-Batch?
Mini-batch refers to a small subset of the entire training dataset used in each iteration of the training process. Instead of training on the entire dataset at once (batch gradient descent) or on one example at a time (stochastic gradient descent), mini-batch training strikes a balance between the two, improving both the efficiency and the convergence speed of machine learning algorithms, particularly in deep learning.
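To make the idea concrete, here is a minimal sketch of splitting a dataset into mini-batches (assuming NumPy; the array names, shapes, and batch size are illustrative, not taken from any particular framework):

import numpy as np

# Hypothetical dataset: 1,000 examples with 20 features each (sizes are illustrative)
X = np.random.randn(1000, 20)
y = np.random.randn(1000)

batch_size = 100  # somewhere between 1 (stochastic) and 1000 (full batch)

# Shuffle the indices, then carve the dataset into consecutive mini-batches
indices = np.random.permutation(len(X))
mini_batches = [
    (X[indices[i:i + batch_size]], y[indices[i:i + batch_size]])
    for i in range(0, len(X), batch_size)
]

print(len(mini_batches))  # 10 mini-batches, so 10 weight updates per epoch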
Benefits of Mini-Batch
- Faster Training: Mini-batch allows parallel processing of multiple data points, speeding up training compared to stochastic gradient descent.
- Memory Efficiency: By processing smaller batches, it reduces memory consumption compared to using the full dataset.
- Better Generalization: Mini-batch introduces a degree of randomness that can help avoid local minima, potentially leading to better generalization.
Example of Mini-Batch in Training
In deep learning, you typically divide your dataset into mini-batches. For example, if you have 1,000 data points and choose a mini-batch size of 100, you’ll update the model’s weights 10 times for each full pass over the dataset (one epoch).
# Example pseudo-code for mini-batch gradient descent
for epoch in range(num_epochs):
    for batch in mini_batches:
        # Compute the gradient of the loss on this mini-batch
        gradient = compute_gradient(batch)
        # Take one gradient-descent step using the mini-batch gradient
        weights -= learning_rate * gradient
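A more complete version of the same loop, as a minimal runnable sketch (assuming NumPy and a simple linear model trained with a mean-squared-error loss; the data, helper names, and hyperparameters are made up for illustration):

import numpy as np

# Toy data: 1,000 examples, 20 features, and a noisy linear target
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = X @ true_w + 0.1 * rng.normal(size=1000)

weights = np.zeros(20)
learning_rate = 0.1
batch_size = 100
num_epochs = 20

for epoch in range(num_epochs):
    # Reshuffle each epoch so every pass sees the mini-batches in a new order
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        X_batch, y_batch = X[idx], y[idx]
        # Gradient of the mean-squared error on this mini-batch only
        error = X_batch @ weights - y_batch
        gradient = 2.0 * X_batch.T @ error / len(idx)
        # One gradient-descent step per mini-batch
        weights -= learning_rate * gradient

print(np.mean((X @ weights - y) ** 2))  # final training MSE, close to the noise floor

Because the data is reshuffled every epoch, the mini-batches differ from pass to pass; this is the source of the randomness mentioned under the benefits above.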