TY - GEN
T1 - Slotting Learning Rate in Deep Neural Networks to Build Stronger Models
AU - Sharma, Dilip Kumar
AU - Singh, Bhopendra
AU - Anam, Mamoona
AU - Villalba-Condori, Klinge Orlando
AU - Gupta, Ankur Kumar
AU - Ali, Ghassan Khazal
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021
Y1 - 2021
N2 - In recent years, deep neural networks have made substantial progress in object recognition. However, one issue with deep learning is that it is often unclear which proposed architecture is best suited to a specific problem. As a result, distinct configurations are attempted before one that produces satisfactory results is discovered. This paper describes a distributed supervised learning method for finding the best network architecture by dynamically modifying specifications for a given task. In the case of the MNIST dataset, it is shown that asynchronous supervised learning can converge within the solution space. Setting several hyperparameters can be time-consuming when constructing neural networks. In this paper, we provide tips and guidelines for better organizing the hyperparameter tuning process, which should help find a good setting for the hyperparameters much faster.
AB - In recent years, deep neural networks have made substantial progress in object recognition. However, one issue with deep learning is that it is often unclear which proposed architecture is best suited to a specific problem. As a result, distinct configurations are attempted before one that produces satisfactory results is discovered. This paper describes a distributed supervised learning method for finding the best network architecture by dynamically modifying specifications for a given task. In the case of the MNIST dataset, it is shown that asynchronous supervised learning can converge within the solution space. Setting several hyperparameters can be time-consuming when constructing neural networks. In this paper, we provide tips and guidelines for better organizing the hyperparameter tuning process, which should help find a good setting for the hyperparameters much faster.
KW - Artificial intelligence
KW - Convolutional neural network
KW - Deep learning
KW - Hyper-parameter
KW - Hyperparameter tuning
KW - Machine learning
KW - Neural networks
KW - Orthogonal array
UR - http://www.scopus.com/inward/record.url?scp=85123189981&partnerID=8YFLogxK
U2 - 10.1109/ICOSEC51865.2021.9591733
DO - 10.1109/ICOSEC51865.2021.9591733
M3 - Conference contribution
AN - SCOPUS:85123189981
T3 - Proceedings - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021
SP - 1587
EP - 1593
BT - Proceedings - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2nd International Conference on Smart Electronics and Communication, ICOSEC 2021
Y2 - 7 September 2021 through 9 September 2021
ER -