26. Tuning Process
Hyperparameters
1. learning rate
2. beta (Adam: beta1, beta2)
3. epsilon
4. number of layers
5. number of hidden units
6. learning rate decay
7. mini-batch size
Which one is the most important?
- We don't know in advance.
Thus, try random values rather than a grid.
- If the number of hyperparameters is small, a grid is OK.
- But in really high dimensions (i.e., when the number of hyperparameters is large), it's hard to know in advance which hyperparameter matters most for your application, and random sampling tries many more distinct values of each one (see the sketch below).
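A minimal sketch of random search over the hyperparameters listed above; the ranges and the `sample_hyperparameters` helper are hypothetical, just to illustrate why random sampling beats a grid when only a few of the dimensions actually matter.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperparameters():
    """Draw one random combination (all ranges are made up for illustration)."""
    return {
        # learning rate sampled on a log scale (covered in the next post)
        "learning_rate": 10 ** rng.uniform(-4, -1),
        "beta1": rng.uniform(0.85, 0.999),
        "num_layers": int(rng.integers(2, 6)),
        "hidden_units": int(rng.integers(32, 513)),
        "mini_batch_size": int(2 ** rng.integers(5, 10)),  # 32 .. 512
    }

# 25 random trials test 25 distinct values of EVERY hyperparameter,
# whereas a 5x5 grid only tests 5 distinct values of each.
trials = [sample_hyperparameters() for _ in range(25)]
```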
Is this a combinatorial optimization problem? A coarse-to-fine scheme works well (see the sketch after this list):
- First pick combinations at random and test several of them.
- Then allocate a smaller subset region around the point that performed best, and test several more combinations within that narrower range.
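A minimal sketch of that coarse-to-fine idea for a single hyperparameter (the learning rate); `train_and_evaluate` is a hypothetical function that trains a model and returns a validation score, and the search ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def coarse_to_fine_search(train_and_evaluate, n_trials=20):
    """Random search over the learning rate, then zoom in around the best point."""
    # coarse stage: sample over a wide log-scale range
    coarse = [10 ** rng.uniform(-4, 0) for _ in range(n_trials)]
    scores = {lr: train_and_evaluate(lr) for lr in coarse}

    # fine stage: a narrower window (half a decade each way) around the best value
    best = max(scores, key=scores.get)
    center = np.log10(best)
    fine = [10 ** rng.uniform(center - 0.5, center + 0.5) for _ in range(n_trials)]
    scores.update({lr: train_and_evaluate(lr) for lr in fine})

    return max(scores, key=scores.get)
```

The same zoom-in step can be repeated more than once, shrinking the range each time.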