Transcript:
Hello everybody, and welcome to part 2 of the TensorFlow Serving example. In the first part, we prepared and labeled a training data set for the model, which should predict the probability of profit. In the second part, we will create our model using Keras. As you may know, TensorFlow 2.0 contains Keras as a high-level API. Then we will export our model to the format required by TensorFlow Serving, and in the end we will deploy our model using TensorFlow Serving. I will also show how to use different versions of the model. Let's go to the code.

In this example, I am going to use TensorFlow 2.0. Let's import the modules and set the default seed for TensorFlow so we can reproduce our experiments. We are going to split our data set into a training set and a test set, but before we do this, we need to normalize our data using StandardScaler. This is required because our features have different ranges.

To create our model, I'm using the functional Keras API, but you can use the sequential model if you want. Our model is really simple: one input layer with five neurons, since we use five features, two hidden layers, and one output layer. You can see that the output layer has only one neuron. As you know from the previous video, we are using a binary classification model to get the probability of the price touching the take profit level. Here I am using the sigmoid activation function to get the pure probability of our trade. I am also using the Adam optimizer and the mean squared error as the loss function. It's not the best loss function for our goals, but it is fine for our example.

Let's train and evaluate our model. For evaluation, we will use the test set and test labels. Here you can see the error and the accuracy of our predictions. Let's store our predictions for later comparison. Keras provides us with simple methods to save and load our models. Let's check if our loaded model still predicts the same values as before. Okay, we haven't got any assertion errors, which means the values are the same.

Now let's save our model in the SavedModel format. This format is needed for TensorFlow Serving. Keras has the experimental methods export_saved_model and load_from_saved_model. As you may guess, we use the first method to export our model and the second one to restore it. We need to specify the path where we want to export our model. Here, the "1" in the path means the first version. We need this special directory structure so that TensorFlow Serving can recognize all your models and all versions of your models. Let's execute this code and check if we're still getting the same predictions. After restoring our model from the SavedModel format, we still have the same results.

Before we start deploying our model, let's create another version of it; we just add one more hidden layer. We train this model, store the predictions, and export it. This model is version 2.

Now we are going to start TensorFlow Serving from Docker. You can see how to install Docker on Windows in my previous video. Let's prepare our test environment. Before we start Docker, we need to create the models.config file. This file is self-explanatory. The model_version_policy parameter is set to "all" so we can use all versions of our models; without this configuration, TensorFlow Serving always uses the latest version of the models.
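As a reference, here is a minimal sketch of the preparation step described above. The names features and labels stand in for the data set prepared in part 1, and the seed and split ratio are illustrative assumptions:

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Fix the TensorFlow seed so the experiments are reproducible.
tf.random.set_seed(42)

# `features` and `labels` are placeholders for the data set from part 1.
# Normalize first, because the features have different ranges.
scaler = StandardScaler()
scaled_features = scaler.fit_transform(features)

train_features, test_features, train_labels, test_labels = train_test_split(
    scaled_features, labels, test_size=0.2, random_state=42)
```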
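The model itself, built with the functional Keras API as in the video, might look like the sketch below. The hidden layer sizes and the epoch count are assumptions, since the video does not state them:

```python
# Functional API: a 5-feature input, two hidden layers, one sigmoid output.
inputs = tf.keras.Input(shape=(5,))
hidden = tf.keras.layers.Dense(32, activation='relu')(inputs)   # sizes are illustrative
hidden = tf.keras.layers.Dense(16, activation='relu')(hidden)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)  # probability of hitting take profit

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

model.fit(train_features, train_labels, epochs=50)
loss, accuracy = model.evaluate(test_features, test_labels)
predictions = model.predict(test_features)  # stored for the comparisons below
```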
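Saving, reloading, and exporting to the SavedModel format could then be sketched as follows, continuing from the snippet above. The file name, model name profit_model, and directory layout are placeholders; the two experimental methods are the ones named in the video:

```python
# Plain Keras save/load round trip, then verify the predictions match.
model.save('model_v1.h5')
loaded_model = tf.keras.models.load_model('model_v1.h5')
np.testing.assert_allclose(predictions, loaded_model.predict(test_features))

# Export to the SavedModel format for TensorFlow Serving.
# The trailing "1" is the version number TensorFlow Serving looks for.
tf.keras.experimental.export_saved_model(model, 'models/profit_model/1')
restored_model = tf.keras.experimental.load_from_saved_model('models/profit_model/1')
np.testing.assert_allclose(predictions, restored_model.predict(test_features))
```

The second model version would be exported the same way to models/profit_model/2.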
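A models.config with the version policy described above might look like this; the model name and base path are assumptions that match the placeholder export path used earlier:

```
model_config_list {
  config {
    name: "profit_model"
    base_path: "/models/profit_model"
    model_platform: "tensorflow"
    model_version_policy { all {} }
  }
}
```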
Now we start the Docker container with the following parameters: we open the REST API port and bind our local models folder to a folder in the container. We also bind our config file to the same file in the container. Then we start TensorFlow Serving and tell it to use our config file; a sketch of the full command appears after the transcript. After the Docker container has started, we can check which models are available. Here we see that two versions of our model are available.

Let's check how our models work. We are going to use the features from only one event to check whether our models work correctly when we are using TensorFlow Serving. You can see the predictions we get using the REST API from the Python notebook. In the end, let's use the REST API from the Python notebook to connect to TensorFlow Serving and check whether the prediction is the same as the one we got earlier from our model. Oh, we have a problem here: as I mentioned earlier, TensorFlow Serving uses the latest model version by default, but we are comparing the prediction from our first model with the prediction from the second. Let's fix our request to use the first version of our model. Now we don't have any errors; the predictions are the same.

This is all. I hope this information was interesting and useful. See you in the next video. Bye.
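For reference, a sketch of the Docker invocation described in the video; the local paths are placeholders, and 8501 is TensorFlow Serving's REST API port:

```
docker run -p 8501:8501 \
  -v /path/to/models:/models \
  -v /path/to/models.config:/models/models.config \
  -t tensorflow/serving \
  --model_config_file=/models/models.config
```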
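And a sketch of the REST API calls from the Python notebook, continuing from the earlier snippets, including the version-pinned request that fixes the mismatch. The model name profit_model is the same placeholder used above:

```python
import json
import numpy as np
import requests

# Check which model versions are available.
status = requests.get('http://localhost:8501/v1/models/profit_model')
print(status.json())  # should report versions 1 and 2 as AVAILABLE

# Predict with a single event's five features. Without a version in the URL,
# TensorFlow Serving uses the latest version (here: version 2).
payload = json.dumps({'instances': test_features[:1].tolist()})
latest = requests.post('http://localhost:8501/v1/models/profit_model:predict',
                       data=payload)

# Pin the request to version 1 to compare against the first model's prediction.
v1 = requests.post('http://localhost:8501/v1/models/profit_model/versions/1:predict',
                   data=payload)
np.testing.assert_allclose(predictions[:1], v1.json()['predictions'], rtol=1e-5)
```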