Artificial Neural Network-Powered, Driverless Vehicle Concept Development

https://doi.org/10.61710/akjs.v1i2.63

Authors

  • Sahar R. Abdul Kadeem, National University of Science and Technology, Nasiriyah - Iraq
  • Ali Naser
  • Ahmed R. Hassan, National University of Science and Technology, Nasiriyah - Iraq
  • Ghufran Abbas Betti, Electrical and Computer Engineering Department, University of Tabriz - Iran

Keywords:

Convolutional neural network, Embedded systems, Deep neural network

Abstract

Autonomous cars have become feasible thanks to significant advances in robotics and intelligent control systems, yet many navigation, vision, and control problems must be solved before these vehicles can operate safely in traffic and other hostile environments. Cost-effective and efficient techniques are needed if research and academia are to fully embrace self-driving cars; in particular, existing conventional vehicles should be convertible to autonomous operation so that researchers and experimenters can work with them. This study proposes a flexible mechanical layout that can be assembled quickly, installed in most modern automobiles, and used as a stepping stone in the development of autonomous vehicles. Conventional automobiles are converted to autonomous operation using various actuators: motors are the most common actuators in vehicle automation, and a pneumatic system was additionally developed to automate predetermined steps. The mechanical arrangement of an autonomous vehicle is crucial; it must be updated regularly and built to remain robust under dynamic conditions. To test the proposed network objectively, we re-implemented two established convolutional neural networks and compared our system's structure, computational complexity, and autonomous-driving performance against theirs. After training, the proposed network is roughly 250 times smaller than AlexNet and four times smaller than PilotNet. Its lower complexity and size yield lower latency and higher speed during inference, yet our system achieved autonomous driving with efficacy equivalent to that of the two baseline models. The proposed deep neural network therefore reduces the need for ultra-fast inference hardware, which is important for cost efficiency and scalability.
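
The two re-implemented baselines correspond to AlexNet and PilotNet, the end-to-end steering network of Bojarski et al. (2016) cited in the references below. As a rough, illustrative sketch of the parameter-count comparison made above (assuming the layer dimensions published by Bojarski et al.; the authors' own proposed network and training pipeline are not reproduced here), a minimal PyTorch version of the PilotNet baseline might look like this:

```python
import torch.nn as nn

class PilotNet(nn.Module):
    """Minimal PilotNet-style steering regressor (Bojarski et al., 2016).

    Input: normalized camera frames of shape (batch, 3, 66, 200).
    Output: a single steering-angle value per frame.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Five convolutional layers, sizes as in the published paper.
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),                        # 64 x 1 x 18 feature maps
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                    # steering angle
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = PilotNet()
n_params = sum(p.numel() for p in model.parameters())
print(f"PilotNet parameters: {n_params:,}")  # ~252k, vs. ~60M for AlexNet
```

Counting parameters this way puts PilotNet at roughly a quarter of a million weights against roughly sixty million for AlexNet; a network four times smaller still, as described in the abstract, can plausibly run inference on modest embedded hardware rather than a high-end GPU.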

References

Han, X., Zheng, H., & Zhou, M. (2022). CARD: Classification and regression diffusion models. Advances in Neural Information Processing Systems, 35, 18100-18115.

Khan, A., Sohail, A., Zahoora, U., & Qureshi, A. S. (2020). A survey of the recent architectures of deep convolutional neural networks. Artificial Intelligence Review, 53, 5455-5516.

Qi, J., Liu, X., Liu, K., Xu, F., Guo, H., Tian, X., ... & Li, Y. (2022). An improved YOLOv5 model based on visual attention mechanism: Application to recognition of tomato virus disease. Computers and Electronics in Agriculture, 194, 106780.

He, K., Chen, X., Xie, S., Li, Y., Dollár, P., & Girshick, R. (2022). Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16000-16009).

Dang, L. M., Min, K., Wang, H., Piran, M. J., Lee, C. H., & Moon, H. (2020). Sensor-based and vision-based human activity recognition: A comprehensive survey. Pattern Recognition, 108, 107561.

Zheng, G., Li, X., Zhang, R. H., & Liu, B. (2020). Purely satellite data-driven deep learning forecast of complicated tropical instability waves. Science Advances, 6(29), eaba1482.

Wang, W., Xie, E., Li, X., Fan, D. P., Song, K., Liang, D., ... & Shao, L. (2021). Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 568-578).

Van der Laak, J., Litjens, G., & Ciompi, F. (2021). Deep learning in histopathology: The path to the clinic. Nature Medicine, 27(5), 775-784.

Guo, M. H., Xu, T. X., Liu, J. J., Liu, Z. N., Jiang, P. T., Mu, T. J., ... & Hu, S. M. (2022). Attention mechanisms in computer vision: A survey. Computational Visual Media, 8(3), 331-368.

Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2023). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7464-7475).

Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., & Terzopoulos, D. (2021). Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(7), 3523-3542.

Li, Y., Zhang, Y., Timofte, R., Van Gool, L., Yu, L., Li, Y., ... & Wang, X. (2023). NTIRE 2023 challenge on efficient super-resolution: Methods and results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1921-1959).

Acuna, D., Ling, H., Kar, A., & Fidler, S. (2018). Efficient interactive annotation of segmentation datasets with Polygon-RNN++. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 859-868).

Wang, T. C., Liu, M. Y., Zhu, J. Y., Liu, G., Tao, A., Kautz, J., & Catanzaro, B. (2018). Video-to-video synthesis. arXiv preprint arXiv:1808.06601.

Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., ... & Zieba, K. (2016). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.

Bojarski, M., Yeres, P., Choromanska, A., Choromanski, K., Firner, B., Jackel, L., & Muller, U. (2017). Explaining how a deep neural network trained with end-to-end learning steers a car. arXiv preprint arXiv:1704.07911.

Mehta, A., Subramanian, A., & Subramanian, A. (2018, December). Learning end-to-end autonomous driving using guided auxiliary supervision. In Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing (pp. 1-8).

Chen, Y., Wang, J., Li, J., Lu, C., Luo, Z., Xue, H., & Wang, C. (2018). LiDAR-video driving dataset: Learning driving policies effectively. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5870-5878).

Ramezani Dooraki, A., & Lee, D. J. (2018). An end-to-end deep reinforcement learning-based intelligent agent capable of autonomous exploration in unknown environments. Sensors, 18(10), 3575.

Zhang, Y., Gao, J., & Zhou, H. (2020, February). Breeds classification with deep convolutional neural network. In Proceedings of the 2020 12th International Conference on Machine Learning and Computing (pp. 145-151).

George, K. S., Abhiram, A., Jose, A., Madhav, A. S., Najiya, N., & Aswin, S. (2022, March). Steering angle estimation for an autonomous car. In 2022 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES) (Vol. 1, pp. 168-173). IEEE.

Yao, K., & Zheng, Y. (2023). Fundamentals of machine learning. In Nanophotonics and Machine Learning: Concepts, Fundamentals, and Applications (pp. 77-112). Cham: Springer International Publishing.

Trimboli, M., Avila, L., & Rahmani-Andebili, M. (2022). Reinforcement learning techniques for MPPT control of PV system under climatic changes. In Applications of Artificial Intelligence in Planning and Operation of Smart Grids (pp. 31-73). Cham: Springer International Publishing.

Alami Machichi, M., El Mansouri, L., Imani, Y., Bourja, O., Hadria, R., Lahlou, O., ... & Bourzeix, F. (2022, November). CerealNet: A hybrid deep learning architecture for cereal crop mapping using Sentinel-2 time-series. In Informatics (Vol. 9, No. 4, p. 96). MDPI.

Fayaz, S. A., Jahangeer Sidiq, S., Zaman, M., & Butt, M. A. (2022). Machine learning: An introduction to reinforcement learning. Machine Learning and Data Science: Fundamentals and Applications, 1-22.

Zaidi, S. S. A., Ansari, M. S., Aslam, A., Kanwal, N., Asghar, M., & Lee, B. (2022). A survey of modern deep learning based object detection models. Digital Signal Processing, 126, 103514.

Chen, H. K., & Yan, D. W. (2019). Interrelationships between influential factors and behavioral intention with regard to autonomous vehicles. International Journal of Sustainable Transportation, 13(7), 511-527.

Oyekanlu, E. A., Smith, A. C., Thomas, W. P., Mulroy, G., Hitesh, D., Ramsey, M., ... & Sun, D. (2020). A review of recent advances in automated guided vehicle technologies: Integration challenges and research areas for 5G-based smart manufacturing applications. IEEE Access, 8, 202312-202353.

Jones, E., Devaraj, P. R., Smart, J. J., & Pagoti, D. K. (2020). Automation in long-haul trucking: Current state overview, limitations and disruptive effects. International Supply Chain Technology Journal, 6(03).

Prasad, J. P. (2021). AI based wireless sensor networks in real time traffic monitoring using spherical grid routing protocol.

Khan, M. A., El Sayed, H., Malik, S., Zia, M. T., Alkaabi, N., & Khan, J. (2022). A journey towards fully autonomous driving-fueled by a smart communication system. Vehicular Communications, 36, 100476.

Lu, Q., Chen, L., Li, S., & Pitt, M. (2020). Semi-automatic geometric digital twinning for existing buildings based on images and CAD drawings. Automation in Construction, 115, 103183.

Zhou, C., Liu, X. H., & Xu, F. X. (2021). Intervention criterion and control strategy of active front steering system for emergency rescue vehicle. Mechanical Systems and Signal Processing, 148, 107160.

Zhang, L., Zhang, X., Bao, C., & Ma, K. (2021, July). Wavelet J-Net: A frequency perspective on convolutional neural networks. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-8). IEEE.

Shi, Y., & Sheng, P. (2021). J-Net: Asymmetric encoder-decoder for medical semantic segmentation. Security and Communication Networks, 2021, 1-8.

Published

2023-12-14

How to Cite

Kadeem, S. R. A., Naser, A., Hassan, A. R., & Betti, G. A. (2023). Artificial Neural Network-Powered, Driverless Vehicle Concept Development. AlKadhim Journal for Computer Science, 1(2), 17–31. https://doi.org/10.61710/akjs.v1i2.63