Research on Neural Cells in Artificial Intelligence
Keywords:
Artificial neuron, artificial intelligence, machine learning, artificial superintelligence, intellectual activity.

Abstract
This article describes the general classification and structure of artificial neural networks and the classes of tasks they solve. It also surveys the main areas in which artificial neural networks are applied.
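Since the article centers on the structure of the artificial neuron, the basic computation can be sketched as follows. This is a minimal illustrative example, not code from the article: it assumes a single neuron that forms a weighted sum of its inputs plus a bias and passes the result through a sigmoid activation; the function names and the sample numbers are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic activation: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: activation(w . x + b)."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(weighted_sum)

# Example with two inputs and hand-picked weights:
output = neuron([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.599
```

Networks are built by connecting many such neurons in layers, so the output of one layer becomes the input vector of the next.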