I've used Scalene at various times in the past few years, and have always liked it when I want to dig deeper than cProfile/profile allow. You might also want to look at:
In my experience, memory profilers that show the line or function that allocated memory are not very useful. What I usually want instead is a graph of all objects in memory, with their sizes and paths from the root, to see who is using the most memory: for example, to see that there is some cache or log that is never cleared.
For PHP and in browsers' developer tools, that kind of profiler exists. But judging by the screenshots, this profiler cannot produce such a graph, so its usefulness for solving memory leaks and high memory consumption is limited.
To summarize, what I usually need is not a line number, but the path to the objects using or holding the most memory.
PathOfEclipse 43 minutes ago [-]
You are comparing two different types of things:
1. Allocation profiler
2. Heap analyzer
Allocation profilers will capture data about what is allocating memory over time. This can be captured in real time without interrupting the process and is usually relatively low-overhead.
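Python ships an allocation profiler of this kind in the stdlib: `tracemalloc` records which source line allocated how much, while the process keeps running. A small self-contained example (the function and sizes are made up for illustration):

```python
import tracemalloc

tracemalloc.start()

def build_cache():
    # The allocation site the profiler should point at.
    return [bytes(1_000) for _ in range(1_000)]

cache = build_cache()

snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)  # file:line with total size and allocation count
```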
Heap analyzers will generally take a heap dump, construct an object graph, do various analyses, and generate an interactive report. This generally requires that you pause a program long enough to create a heap dump, which is often multiple GB or more in size, write it to disk, then do the subsequent analysis and report generation.
I agree that 2) is generally more useful but I assume both types of profilers have their place and purpose.
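For contrast with the allocation-profiler side, a heap analyzer's class histogram can be loosely approximated in-process, without a dump, by grouping live gc-tracked objects by type. This toy version omits the object graph and retained sizes that make real heap analyzers useful:

```python
import gc
from collections import Counter

def heap_histogram(top=10):
    """Group live gc-tracked objects by type, like a heap analyzer's class histogram."""
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(top)

for name, count in heap_histogram(5):
    print(f"{name:>12}: {count}")
```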
1st1 14 hours ago [-]
For profiling memory, consider the far more advanced memray.
Underneath it's still substantially similar to good old Windows NT.
There's a Linux "subsystem". Well, two of them. WSL1 is an API translation layer that ends up being cripplingly slow. Don't use it. WSL2 is more of a VM that just runs a Linux distro. This is before you get into third-party compatibility layers like Cygwin and MinGW.
emeryberger 11 hours ago [-]
(Scalene author here) Nope, but WSL2 (Windows Subsystem for Linux) is, and Scalene works great with it.
motbus3 7 hours ago [-]
I used it many times, and sometimes I used py-spy. That helped me improve several projects where people told me there were network problems, but it actually wasn't the network.
embeng4096 2 hours ago [-]
+1 for py-spy. I would love to try out Scalene but the application I have does something funky with pickling for multiprocessing, breaking Scalene in the process (my fault, not Scalene's, I'll be looking into that). Py-spy worked, including catching all the sub-processes. Feeding the py-spy JSON output into https://speedscope.app makes for a very easy way to profile in a time crunch if you don't have time to get familiar with CLI tools or can't install stuff locally but have a browser and internet connection.
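For reference, the py-spy invocation for this workflow looks roughly like the following; the PID and filename are placeholders, and py-spy must be installed separately (`pip install py-spy`):

```shell
# --subprocesses also samples child processes (e.g. multiprocessing workers);
# --format speedscope emits JSON that speedscope.app can open directly.
py-spy record --pid 1234 --subprocesses --format speedscope -o profile.speedscope.json
# Then drag profile.speedscope.json onto https://speedscope.app in a browser.
```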
emeryberger 2 hours ago [-]
Scalene author here - please file an issue!
embeng4096 2 hours ago [-]
I will! I'll try to pare down my customer's code into a minimal example I can post.
carlmr 2 hours ago [-]
>where people told me there were network problems but it was actually not
Always ask if they assume it's network bound or they have measurements. Measurements may sometimes be wrong, but assumptions are more often wrong than right in performance engineering.
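One cheap measurement that often settles the "it's the network" debate: compare wall-clock time against CPU time. If wall time is large but CPU time is near zero, the process is waiting (on the network, disk, or locks), not computing. A stdlib-only sketch, with `time.sleep` standing in for a network wait:

```python
import time

def measure(fn, *args):
    """Return (wall seconds, CPU seconds) for one call to fn(*args)."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn(*args)
    return time.perf_counter() - w0, time.process_time() - c0

wall, cpu = measure(time.sleep, 0.2)        # "waiting on the network"
print(f"waiting: wall={wall:.2f}s cpu={cpu:.2f}s")   # cpu stays near zero

wall, cpu = measure(sum, range(5_000_000))  # CPU-bound work
print(f"compute: wall={wall:.2f}s cpu={cpu:.2f}s")   # wall roughly tracks cpu
```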
alfons_foobar 6 hours ago [-]
Kudos for actually looking at what the problem is.
As a (former) NetEng, it bothers me to no end that so many people claim "it's the network" when their application is slow / broken, without understanding the actual problem.
kristianp 17 hours ago [-]
This profiler was mentioned in the context of rewriting js tools in faster languages here:
If people wonder why there are so many tools for the slowest language on the planet:
On top of the language being slow and unsuited for abstractions, people write horrible code with layers and layers of abstractions in Python. These tools can sometimes help with that.
People who do write streamlined code that necessarily uses C-extensions in Python will probably use cachegrind/helgrind/gprof etc.
Or switch to another language, which avoids many categories of other issues.
blackbear_ 6 hours ago [-]
Opening throwaway accounts just to rage against things you don't understand is really shameful.
codedokode 4 hours ago [-]
Python might not be great at using CPU time, but it saves a lot of human time writing the code compared to "fast" languages.
[1] https://github.com/joerick/pyinstrument
[2] https://github.com/benfred/py-spy
[3] https://github.com/P403n1x87/austin
[4] https://github.com/bloomberg/memray
[5] https://github.com/pyutils/line_profiler
Sigh, why infest everything with "AI".
That's a problem with many of the profiling tools around Python. They often support Windows badly or not at all.
https://lobste.rs/s/ytjc8x/why_i_m_skeptical_rewriting_javas...
The rewrite discussion is here: https://news.ycombinator.com/item?id=41898603