Hi everyone, this is Abdul from Pythonest. Finding the cause and location of a problem is very important before trying to implement a solution. If a program is running too slowly or using too much RAM, then you will want to fix whichever parts of your code are responsible. You could, of course, directly try to fix what you believe might be the problem, but be careful, as you will often end up fixing the wrong thing. Rather than relying on your intuition, it is far more sensible to define some hypotheses, or at least a direction towards the problem, before making changes to the structure of your code. Here's where the concept of profiling comes into the picture. Profiling lets us find the bottlenecks, so we can do the least amount of work to get the biggest performance gain. Practically, you will aim for your code to run fast enough and lean enough to fit your needs. Profiling will let you make the most pragmatic decisions for the least effort. Any measurable resource can be profiled, not just CPU time and memory. In this video, we are going to look at CPU time and memory usage, but you could apply similar techniques to measure network bandwidth and disk I/O. Python provides many excellent modules for measuring the statistics of a program. These let us know where the program is spending too much time, and what to do in order to optimize that particular part. So here are some options for profiling in Python. The very first one is using timers. Timers are easy to implement, and they can be used anywhere in a program to measure execution time. By using timers, we can get the exact time taken and can improve the program where it takes too long. Let me write a very simple example to measure the execution time of a print statement; then we will expand it to measure the execution time of a function.
time is a built-in module, so if you have Python installed on your system, we can import it directly with import time. Then I will record the starting time by calling the time() function of this module, as start = time.time(). After that, I will print my statement and once again record the time, this time as the ending time: end = time.time(). Then we can take the difference between the ending and starting times to get the execution time. And here's the execution time. Now let's expand this to a function and show that making the right changes to the code really affects the execution time. So let's define a function, myfunc(), and perform some operations inside it: a = 5 + 3 and b = 4 + 4, then c = a + b. Then let's define d, save the result of dividing c by b into it, and return that d variable. It's a very simple function, but it will show you that the way you write code affects the efficiency of the program. So let's record the starting time as start = time.time(), call the function myfunc(), and finally grab the execution time once again by taking the difference between the ending and starting times. And here's the execution time of this function. Now let me remove the c variable and directly calculate the sum and division of a and b and save the result to d. With this very little tweak we optimize the code a bit; run it again and notice the execution time. You can see a noticeable drop in the execution time: this time our function takes less time compared to the previous run. The second option is the cProfile module. It's a built-in profiling tool in the standard library. It hooks into the virtual machine to measure the time taken to run every function it sees. This introduces a greater overhead, but you get correspondingly more information. Sometimes the additional information can lead to surprising insights into your code.
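The timer walkthrough above, for both the print statement and the function, might look like this (the printed text is just a placeholder):

```python
import time

def myfunc():
    # Version with the intermediate variable c, as in the first walkthrough.
    a = 5 + 3
    b = 4 + 4
    c = a + b
    d = c / b
    return d

start = time.time()              # record the starting time
print("Hello from Pythonest")    # the statement we want to time
end = time.time()                # record the ending time
print("print statement took", end - start, "seconds")

start = time.time()
result = myfunc()
end = time.time()
print("myfunc took", end - start, "seconds")
```

The optimized version from the walkthrough simply replaces the body of myfunc() with d = (a + b) / b and returns it directly.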
The cProfile module provides information about how long the program takes to execute and how many times each function gets called. So let's do some practical work with the cProfile module. We can import it with import cProfile, and let me call its run() function on a simple statement, as cProfile.run(), putting a very simple statement inside it. Okay, that's great. So let's try to measure the statistics of a function with cProfile. Let me define a very simple function named f() and print a simple statement inside it. Once again we will call the run() function of the cProfile module, but this time we will pass a call to f() inside it. The result shows us five function calls in what looks like zero seconds, because the times involved are tiny fractions of a second. The third and last module for profiling we are going to explore in this video is line_profiler. line_profiler is the strongest tool for identifying the cause of CPU-bound problems in Python code. It works by profiling individual functions on a line-by-line basis, so you should start with cProfile and use its high-level view to guide which functions to profile with line_profiler. It's worthwhile printing and annotating versions of the output from this tool as you modify your code, so you have a record of changes (successful or not) that you can quickly refer to. You can install it using the pip command, so simply run pip install line_profiler, and then we can import the LineProfiler class from the line_profiler module as from line_profiler import LineProfiler. Let's define a function with an argument, rk, and simply print that argument. Then, after creating the function, create an instance of the LineProfiler class, and now we have to call the print_stats() function to get the statistics of this function. You can see the time consumed by this function. Having looked at these profiling techniques, you should have all the tools you need to identify bottlenecks in CPU and RAM usage in your code.
In upcoming videos, we will look at how Python implements the most common containers so you can make sensible decisions about efficiently representing larger collections of data.