17. Understanding Mini-batch Gradient Descent
2023. 9. 11. 12:30ㆍGoogle ML Bootcamp/2. Improving Deep Neural Networks
Batch Gradient Descent : uses all m examples (the whole training set) per update
Stochastic Gradient Descent : uses 1 example per update
- Stochastic Gradient Descent : picks 1 example at random for each step (hence "stochastic")
Mini-batch Gradient Descent : a batch size somewhere between 1 and m
- faster than Stochastic (keeps the speedup from vectorization)
- cheaper per iteration than Batch (no need to process the entire training set for every update); see the sketch below
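A minimal NumPy sketch of how a training set could be shuffled and partitioned into mini-batches. The function name `random_mini_batches` and the column-per-example shapes are assumptions for illustration, not something fixed by the post; note that `mini_batch_size=1` reduces to Stochastic GD and `mini_batch_size=m` to Batch GD.

```python
import numpy as np

def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    """Shuffle (X, Y) together, then split into mini-batches.

    Assumed shapes: X is (n_x, m), Y is (1, m) -- one column per example.
    The last mini-batch may be smaller if m is not divisible by the size.
    """
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    permutation = rng.permutation(m)      # shuffle so batches are random each epoch
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation]

    mini_batches = []
    for k in range(0, m, mini_batch_size):
        mini_batch_X = shuffled_X[:, k:k + mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k:k + mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))
    return mini_batches
```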
Tips:
1. Small training set: just use Batch Gradient Descent.
- fewer than 2,000 examples
2. Typical mini-batch sizes: 64, 128, 256, 512 (powers of 2); see the usage sketch after this list.
- depends on available compute power
- make sure one mini-batch fits in CPU/GPU memory
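To tie the tips together, a hedged sketch of a training loop on a toy linear model, assuming the `random_mini_batches` helper above. The data, model, batch-size rule, and learning rate are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.standard_normal((3, 5000))            # 3 features, m = 5000 examples
Y_train = np.array([[1.0, -2.0, 0.5]]) @ X_train    # synthetic linear targets

m = X_train.shape[1]
mini_batch_size = m if m < 2000 else 64             # tip 1: full batch for small sets; tip 2: power of 2

W = np.zeros((1, 3))                                # linear-model weights
learning_rate = 0.1
for epoch in range(10):
    for X_b, Y_b in random_mini_batches(X_train, Y_train, mini_batch_size, seed=epoch):
        mb = X_b.shape[1]
        error = W @ X_b - Y_b                       # forward pass on this mini-batch only
        dW = (error @ X_b.T) / mb                   # gradient of the (1/2)*MSE loss
        W -= learning_rate * dW                     # one parameter update per mini-batch
```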