
Aurélien Géron | TensorFlow 2.0 Changes

Transcript:

Hi, I’m Aurélien Géron, and today I’m going to present what will change in the upcoming TensorFlow version 2.0. A few weeks ago, on August 13th, Google announced that “TensorFlow 2.0 is coming”. It will focus on ease of use: in particular, eager execution will become the default mode, and there will be a big cleanup, with deprecated APIs removed and much less duplication. A preview version will be released “later this year”; the date is not defined yet. The actual release of TensorFlow 2.0 will happen a few weeks after that. My guess is that it will be somewhere around December or January.

So why is Google doing this? In March this year, Andrej Karpathy tweeted a nice graph comparing the percentage of Machine Learning papers that mention each Deep Learning library. Clearly, Caffe, Theano and Torch are going down, and MXNet, CNTK and Chainer remain pretty low, while TensorFlow and Keras are shooting up. Most Keras users use TensorFlow as the backend, so the TensorFlow curve is probably higher still. But notice that PyTorch is shooting up as well.

People who prefer PyTorch usually argue that it’s much simpler than TensorFlow, and until recently this was true. For example, consider using PyTorch to compute 1 + ½ + ¼ + ⅛ + … and so on, which converges to 2. It’s pretty straightforward code: you just execute the computations you need, very naturally. In contrast, doing the same thing in TensorFlow takes two steps. First there’s the construction phase, where you build the graph; then the execution phase, where you evaluate some nodes in the graph. You have to create a session, you have to initialize all the variables, and so on. It’s much more verbose, error-prone and harder to understand.

Plus, it’s hard to debug. For example, suppose you try to evaluate both the addition and the division operations in the same call to `sess.run()`: the result becomes unpredictable. This is because TensorFlow does not see any dependency between these operations, so it runs them in parallel, and the order is not guaranteed. This is easy to fix, but it can be hard to catch. Here is another, more common example: X is used to compute Y, which is used to compute Z. If you evaluate Z and you get a bad result, such as a NaN value or an exception, TensorFlow won’t tell you where the problem came from. In this case it’s easy to see that the problem comes from Y, but when there are many operations, it can be tricky to debug. TensorFlow actually has a dedicated debugger to help you, but it’s not that easy to use. For the same reason, graph mode is also hard to profile: you can easily measure how long it takes to compute Z, but that won’t tell you how much time it took to compute Y or X.

Fortunately, since TensorFlow 1.4, you can use eager mode instead of graph mode. In eager mode there is no construction phase, no execution phase, no graphs and no sessions: you just compute what you need and you get the results right away. This makes TensorFlow just as easy to use as PyTorch, and it simplifies coding, debugging and profiling. In this example I’ve used constants, but you could use variables instead and update them in place by using the assign() method. Now, in TensorFlow 2.0, eager execution will be the default, so you won’t need to enable it, and you will be able to create variables simply with tf.Variable(). The three sketches below illustrate the PyTorch version, the TensorFlow 1.x graph-mode version, and the eager-mode version.
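First, a minimal sketch of the PyTorch version (the code shown on screen in the video may differ; the variable names here are my own):

```python
import torch

# Eager-style computation: each line runs immediately, like plain Python
total = torch.tensor(0.0)
term = torch.tensor(1.0)
for _ in range(50):
    total = total + term  # 1 + 1/2 + 1/4 + 1/8 + ...
    term = term / 2
print(total)  # tensor(2.); the series converges to 2
```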
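Next, a sketch of the TensorFlow 1.x graph-mode equivalent, with its separate construction and execution phases; the comments point out the `sess.run()` pitfall mentioned above:

```python
import tensorflow as tf  # TensorFlow 1.x

# Construction phase: these lines only build graph nodes, nothing runs yet
total = tf.Variable(0.0)
term = tf.Variable(1.0)
addition = tf.assign(total, total + term)
division = tf.assign(term, term / 2)
init = tf.global_variables_initializer()

# Execution phase: create a session and evaluate the nodes you need
with tf.Session() as sess:
    sess.run(init)
    for _ in range(50):
        sess.run(addition)  # evaluating addition and division in a single
        sess.run(division)  # sess.run() call would be unpredictable, since
                            # TensorFlow sees no dependency between them
    print(sess.run(total))  # 2.0
```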
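And a sketch of the eager-mode version with variables and in-place assign() updates, assuming a recent TF 1.x where tf.Variable works in eager mode (some earlier releases used tf.contrib.eager.Variable):

```python
import tensorflow as tf

tf.enable_eager_execution()  # not needed in TF 2.0, where eager is the default

total = tf.Variable(0.0)  # in TF 2.0 you can just create variables like this
term = tf.Variable(1.0)
for _ in range(50):
    total.assign_add(term)  # in-place updates through the assign methods
    term.assign(term / 2)
print(total.numpy())  # 2.0
```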
Okay, so eager mode is great. It simplifies TensorFlow tremendously, so you might wonder: what’s the point of graph mode? Well, graph mode has its benefits.

The first is performance. If you place a set of operations directly on a GPU device, they will run there without all the back and forth between Python and the CPU. This can significantly boost performance, especially if you are dealing with deep models and small batch sizes. Also, once you have created a graph, you can use a great feature of TensorFlow called XLA. XLA analyzes the graph and improves its performance, both in terms of execution speed and memory footprint; for example, it can fuse together groups of operations so that they run faster on the device and use less memory. Another benefit of graphs is that they make it simpler to deploy your models to any device, whether it’s a mobile phone or a cluster of GPU servers, and you are not tied to a particular language: you could train a model using Python and run it in Java, or vice versa.

Now, the good news is that TensorFlow supports both eager mode and graph mode. For example, a program can run in eager mode but, within a `with` block that activates a graph, it runs in graph mode and is not executing eagerly; after the `with` block, the program is back in eager mode.

A simple way to mix eager mode and graph mode is to run in eager mode and use the `defun` decorator to create a function that runs in graph mode. You can call this function from eager mode, and TensorFlow will automatically take care of creating the graph, starting a session, feeding the data and getting the resulting tensor. This way you get the simplicity of eager mode with the power of graph mode. By the way, you can also create such a graph function using the `make_template()` function.

Another easy way to create a graph is the new autograph module. First, you write a regular Python function; in this example, the square_sum() function just computes the square of the sum of a and b. Then you call the to_graph() function to convert it to a TensorFlow function, in this case called tf_square_sum(). The converted function can be used with tensors both in eager mode and in graph mode. Called in eager mode, it runs eagerly. But if we create a graph and call `tf_square_sum()` within the `with` block, it runs in graph mode: it just creates the operations required to perform the computations and adds them to the graph, without actually running them yet. Then we start a session, evaluate the resulting tensor, and get the result.

In this simple example the benefit is not obvious, so let’s look at a more complex function: a simple fibonacci() function that computes the n-th number in the famous Fibonacci sequence. Notice that this function contains a loop, so when autograph converts it to a TensorFlow version, it knows that in graph mode it should create a graph that uses TensorFlow’s `while_loop()` operation; the graph itself will contain the loop. If you have ever tried to use loops in TensorFlow graphs before, you probably found it painful, so you will love autograph. Sketches of both approaches follow.
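Here is a minimal sketch of the defun approach, using the tf.contrib.eager.defun decorator from TF 1.x (the cube() function is my own toy example):

```python
import tensorflow as tf

tf.enable_eager_execution()

# The decorated function gets traced into a graph; calling it from eager
# code runs that graph, with sessions and feeding handled automatically
@tf.contrib.eager.defun
def cube(x):
    return x * x * x

print(cube(tf.constant(3.0)).numpy())  # 27.0
```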
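And a sketch of the autograph workflow with the square_sum() and fibonacci() examples described above (module paths follow TF 1.x’s tf.contrib.autograph; exact conversion details may vary by version):

```python
import tensorflow as tf
from tensorflow.contrib import autograph

def square_sum(a, b):
    return (a + b) ** 2  # a regular Python function

def fibonacci(n):
    a = tf.constant(0)
    b = tf.constant(1)
    i = tf.constant(0)
    while i < n:       # becomes a tf.while_loop in graph mode
        a, b = b, a + b
        i += 1
    return a

tf_square_sum = autograph.to_graph(square_sum)
tf_fibonacci = autograph.to_graph(fibonacci)

# Graph mode: calling the converted functions only adds ops to the graph
graph = tf.Graph()
with graph.as_default():
    result = tf_fibonacci(tf.constant(10))
    with tf.Session() as sess:
        print(sess.run(result))  # 55
```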
Okay, so TensorFlow 2.0 is starting to look really good. It uses eager execution by default, but it also supports graph mode, which can provide better performance and portability, and it comes with nice features such as function decorators and autograph that make graph mode much easier than before. But not so fast: PyTorch enthusiasts will point out that PyTorch is more pythonic and more object-oriented. In short, it has a cleaner design. For example, take variable sharing.

Suppose you want to build a neural network that is meant to compare two inputs, Xa and Xb. This is called a Siamese network. Typically, you want the two lower layers to share the same weights, since they are supposed to do the exact same thing to their inputs. So how do you do this in TensorFlow 1? Well, you create the placeholders for Xa and Xb, then you create the two lower dense layers, h1a and h1b, but you make sure they have the same name, “h1”, and you set reuse to True on the second layer, so it reuses the variables created by the layer with the same name. This name-based design is not ideal: if you use the same graph to build multiple models, you may run into name conflicts, and the implementation relies on tf.variable_scope() and tf.get_variable(), which are pretty difficult for beginners to understand. It feels brittle and unnecessarily complex.

The good news is that TensorFlow 2.0 will drop this approach altogether, avoiding reliance on global state and, more generally, being far more object-oriented. In particular, TF 2.0 will rely heavily on the Keras API. To build the same Siamese network, you start by creating two inputs, then you create a single Dense layer, which holds its own variables, and you simply call it once for Xa and once for Xb. That’s all there is to it. Sharing variables is easy, intuitive, pythonic and object-oriented. I’m just in love with Keras, and it’s really well integrated in TensorFlow.

Another improvement is the deprecation of collections. For example, consider typical TensorFlow 1 code: we create an optimizer and use it to create a training operation that will minimize the loss. But how does TensorFlow know which variables it should tweak in order to minimize the loss? Under the hood, it looks up the list of trainable_variables(), and this list simply comes from the collection named TRAINABLE_VARIABLES. Once again, this design is not ideal. It relies on a set of collections attached to the graph, a sort of global state for the graph, and using global state is usually bad practice in programming. For example, it implicitly assumes that there is only one model per graph. Plus, since eager mode does not use graphs at all, collections only work in graph mode. The good news is that TensorFlow 2.0 will deprecate collections, which will lead to cleaner code. For example, if you use tf.keras, each layer handles its own variables, so if you need the list of trainable variables, you can just ask each layer or, better, ask the model that contains the layers.

Okay, let’s summarize the improvements in TensorFlow 2.0 regarding variables. Variable scopes are removed: you must handle variable sharing in an object-oriented way, such as using the same Keras layer as many times as needed. tf.get_variable() is also removed: you should use objects, such as Keras layers and models, to handle the variables. You must now pass the list of variables to train to the optimizer’s `minimize()` method; use an object such as a Keras model to gather all the variables you need. Also, tf.assign(a, b) will be removed: you should use a.assign(b) instead. And by the way, it will soon be possible to perform item assignment using the assign() method on a variable slice. The sketches below illustrate these points.
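A sketch of the Keras version of the Siamese network described above (the input size and the comparison head are my own filler):

```python
import tensorflow as tf
from tensorflow import keras

# Two inputs to compare (the feature size 28 is an arbitrary choice)
input_a = keras.layers.Input(shape=(28,))
input_b = keras.layers.Input(shape=(28,))

# One Dense layer object owns one set of weights...
hidden = keras.layers.Dense(64, activation="relu")

# ...and calling it on both inputs shares those weights, no name scopes needed
h1_a = hidden(input_a)
h1_b = hidden(input_b)

concat = keras.layers.concatenate([h1_a, h1_b])
output = keras.layers.Dense(1, activation="sigmoid")(concat)
model = keras.models.Model(inputs=[input_a, input_b], outputs=[output])
```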
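A sketch of the object-oriented handling of trainable variables, in TF 1.x graph mode; var_list is the existing parameter of the optimizer’s minimize() method, and the tiny model and data are made up for the example:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.models.Sequential([keras.layers.Dense(1, input_shape=(3,))])

# Each layer owns its variables; ask the model instead of a graph collection
print(model.trainable_weights)  # [kernel, bias] of the Dense layer

x = tf.constant(np.random.rand(4, 3), dtype=tf.float32)
y = tf.constant(np.random.rand(4, 1), dtype=tf.float32)
loss = tf.reduce_mean(tf.square(model(x) - y))

# Pass the variables to train explicitly, instead of relying on the
# graph's TRAINABLE_VARIABLES collection
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_op = optimizer.minimize(loss, var_list=model.trainable_weights)
```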
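And a quick sketch of the assign() changes; the slice-assignment line reflects a feature the video describes as upcoming, so treat it as illustrative:

```python
import tensorflow as tf

tf.enable_eager_execution()

a = tf.Variable([1.0, 2.0, 3.0])

# tf.assign(a, b) goes away; call the method on the variable instead
a.assign([4.0, 5.0, 6.0])

# Described as upcoming: item assignment through assign() on a variable slice
a[0].assign(42.0)
print(a.numpy())  # [42.  5.  6.]
```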
All right, so TensorFlow 2.0 will support eager mode and graph mode nicely, and it will be much more object-oriented, including high-level APIs such as Keras. Wonderful!

But wait, PyTorch fans will point out that PyTorch’s API is nice and clean, while TensorFlow’s API is cluttered and full of duplicates and deprecated APIs, and that’s true. For example, you can create layers with tf.layers or tf.keras.layers, and so on. Well, TensorFlow 2.0 is getting rid of tf.layers: you should use Keras layers instead, and the Keras losses and metrics will be based on tf.losses and tf.metrics. Another problem is that tf.contrib is completely packed with various projects, some of which have already been merged into the core API, such as Keras, while others have been abandoned but are still there, and so on. Well, TensorFlow 2.0 will get rid of all of tf.contrib: some projects will be merged into the core API, others will be moved to separate projects, and some will just be deleted. Finally, the API will be better organized. For example, everything related to debugging will be placed in the tf.debugging package, and so on. Some functions will be moved, others will just get a new alias. TensorFlow 2.0 will try to better balance the need for a fairly flat hierarchy, which makes coding less verbose, against the need for well-organized packages, which make it easier to find related functions.

A migration tool will be provided to convert TensorFlow 1 code to TensorFlow 2.0 code. As always with these kinds of tools, it will probably still require some hand tuning, but at least it will take care of a significant part of the migration. Plus, there will be a tf.compat.v1 package for backwards compatibility, which will facilitate the transition.
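As a rough sketch of what code under that compatibility package might look like (the tf.compat.v1 names follow the announced plan and could still change):

```python
import tensorflow as tf

# Legacy TF1-style code kept alive under 2.0 via the compatibility package
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=(None,))
y = x * 2
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]
```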
In summary, TensorFlow 2.0 will be awesome: clean, object-oriented, pythonic, eager by default, with easy-to-use graph features making it highly portable and optimized. It will still have a rich API, including Keras, the Data API, TF Hub and much more, plus a large community, great documentation, many projects built on top of it, TPUs available on Google Cloud, and more.

One last thing: Google launched a public design review process, so anyone can propose improvements for TensorFlow 2.0, comment on existing proposals and participate in the discussions. Just go to the tensorflow/community project on GitHub and look at the pull requests labeled “RFC: Proposed”. If you have questions about the development or the migration, you can ask them in the Google Group discuss@tensorflow.org. You can also follow all the development discussions by joining the developers@tensorflow.org group.

In case you want to learn more about TensorFlow and Deep Learning in general, I am currently updating my book Hands-On Machine Learning with Scikit-Learn and TensorFlow to TensorFlow 2.0, with tf.keras, the Data API, TF Hub, how to deploy on Google Cloud using TPUs, plus a new chapter on Bayesian Learning and Bayesian Deep Learning, and more content on unsupervised learning, such as clustering, and more.

I hope you enjoyed this video. If you did, don’t hesitate to like, subscribe, share and all that, and click on the little bell if you want to be notified about my next videos. Have a great day.