Our aim is to build a model that recognizes hand gestures and signs. We will train a simple gesture-recognition model for sign language conversion, helping hearing people converse with those who are deaf or speech-impaired. This task can be approached with several algorithms, such as k-nearest neighbors (KNN), logistic regression, Naïve Bayes classification, and support vector machines, and it can also be implemented with a convolutional neural network (CNN). We have chosen a CNN because it gives better accuracy on image data than the other methods. A program written in Python trains the model using the CNN algorithm; it recognizes hand gestures by comparing the input against a preexisting dataset built from American Sign Language (ASL). The model converts sign language into text as output, so that users can read the signs presented by the signer. It is implemented in JupyterLab, which is distributed with the Anaconda platform. To further improve the system, we convert the inputs to black and white and take input from a camera after applying background subtraction. With a mask set to detect human skin, the model does not require a plain background and can run with a basic camera and a computing device.
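To illustrate what the CNN layers in such a model compute, the sketch below implements a single convolution, ReLU, and max-pooling pass in plain NumPy. This is a minimal teaching example, not the project's actual network: the toy frame, the edge-detecting kernel, and all sizes are assumptions for illustration; a real implementation would stack many learned filters in a framework such as TensorFlow or PyTorch.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Zero out negative activations."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping size x size tiles."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 black-and-white "frame": a bright region on the left half,
# standing in for a hand against a dark background.
frame = np.zeros((6, 6))
frame[:, :3] = 1.0

# A hand-written vertical-edge kernel; in a trained CNN such filters are learned.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

feature_map = max_pool(relu(conv2d(frame, edge_kernel)))
print(feature_map)  # strong responses where the bright/dark edge lies
```

A real classifier repeats this convolve-activate-pool pattern several times and ends with dense layers that map the final feature maps to one of the ASL classes.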
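The preprocessing step described above can be sketched as follows: subtract a stored background frame, keep only pixels that also match a skin-color heuristic, and output a black-and-white hand mask. The threshold value and the RGB skin rule here are common rules of thumb, not the project's tuned parameters, and the helper names (`background_subtract`, `skin_mask`, `hand_region`) are hypothetical.

```python
import numpy as np

def background_subtract(frame, background, thresh=30):
    """Foreground mask: pixels whose mean channel difference from the
    stored background frame exceeds a threshold (assumed value)."""
    diff = np.abs(frame.astype(int) - background.astype(int)).mean(axis=2)
    return diff > thresh

def skin_mask(frame):
    """Rough RGB skin-color heuristic (a widely used rule of thumb, untuned)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    spread = frame.astype(int).max(axis=2) - frame.astype(int).min(axis=2)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (spread > 15)

def hand_region(frame, background):
    """Combine both masks into a black-and-white image of the hand."""
    mask = background_subtract(frame, background) & skin_mask(frame)
    return (mask * 255).astype(np.uint8)

# Synthetic demo: a flat gray background with a skin-toned patch as the "hand".
bg = np.full((10, 10, 3), 50, dtype=np.uint8)
frame = bg.copy()
frame[3:7, 3:7] = [200, 120, 90]  # assumed skin-like RGB value
print(hand_region(frame, bg))     # 255 inside the patch, 0 elsewhere
```

Because the skin mask suppresses non-skin foreground objects, this is what lets the system work without a plain backdrop; in practice one would capture the background frame before the hand enters the scene.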