Communication is essential to humanity, today as in the past. However, some individuals lack verbal communication ability due to congenital disabilities or physical losses caused by accidents. Sign-language communication methods have been developed so that such people can communicate, and artificial intelligence solutions are now being offered to remove the disadvantages that people with communication disabilities face in daily life. Rapidly developing image processing and artificial intelligence methods provide suitable solutions to the problem addressed in this study. Convolutional neural network techniques, which have become very popular in recent years, offer solutions to many problems, and the YOLO algorithm in particular shows very high performance in real-time object detection. In this study, we propose a method for identifying the letter that each hand gesture represents. The work covers detecting hands in images and classifying them according to hand posture. The American Sign Language (ASL) standard was used as the sign language, and YOLOv5x, the most recent YOLO version at the time of this study, was used for gesture detection. Concentrating on the static sign-language recognition problem, the study addresses the recognition of hand gestures. The letters "J" and "Z" are not included in the dataset because they require moving hand signals; apart from these two letters, a total of 24 letters are classified. The proposed model achieved a training performance of 99.45% mAP@.5 and a performance of 97.9% mAP@.5 on the test dataset. These results demonstrate that the model's object detection performance is excellent. An analysis of training time shows that it has been drastically reduced, to 4.5 hours with the proposed model, compared with existing models in the literature.
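For reference, the reported mAP@.5 metric is the mean average precision computed at an intersection-over-union (IoU) threshold of 0.5, averaged over the 24 letter classes; a standard formulation (not stated explicitly in the source) is:

\[
\mathrm{IoU}(B_p, B_{gt}) = \frac{\lvert B_p \cap B_{gt} \rvert}{\lvert B_p \cup B_{gt} \rvert},
\qquad
\mathrm{AP}_c = \int_0^1 p_c(r)\,dr,
\qquad
\mathrm{mAP@.5} = \frac{1}{24} \sum_{c=1}^{24} \mathrm{AP}_c,
\]

where a detection counts as a true positive only if its IoU with a ground-truth box is at least 0.5, and \(p_c(r)\) is the precision of letter class \(c\) as a function of recall.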
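To make the detection pipeline concrete, the following is a minimal inference sketch, not the authors' implementation: it assumes a YOLOv5x model already trained on the 24 static ASL letter classes, saved to a hypothetical weights file asl_yolov5x.pt, and loaded through the standard ultralytics/yolov5 torch.hub interface.

```python
# Minimal inference sketch (illustrative, not the paper's code).
# Assumes: a hypothetical weights file "asl_yolov5x.pt" trained on the
# 24 static ASL letters, loaded via the ultralytics/yolov5 hub interface.
import torch

# Load the custom-trained YOLOv5x model from local weights.
model = torch.hub.load("ultralytics/yolov5", "custom", path="asl_yolov5x.pt")
model.conf = 0.5  # confidence threshold for reported detections

# Run detection on a single image (file name is illustrative).
results = model("hand_gesture.jpg")

# Each row of the results table is one detected hand sign:
# xmin, ymin, xmax, ymax, confidence, class, name.
for _, det in results.pandas().xyxy[0].iterrows():
    print(f"letter: {det['name']}  confidence: {det['confidence']:.2f}")
```

In this setup each of the 24 letters is treated as a separate object class, so a single forward pass both localizes the hand and classifies the sign it shows.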