Actively measuring and profiling code
When profiling any code or system workload, I always think of Brendan Gregg’s definition of “casual benchmarking”: “You benchmark A, but actually measure B, and conclude you’ve measured C.” Profiling Erlang systems is frequently difficult because managing and understanding concurrency is hard. This talk suggests ways to help Erlang developers measure A, confirm they’re measuring A, and quantify how much their measurement of A is actually affecting A’s behavior. In other words, let’s profile the profilers!
* Briefly survey the tools available today in the Erlang/OTP distribution and in the larger open source world.
* Present a new analysis tool, “flame graphs”, and show how they can be useful for profiling systems and finding resource limitations.
* Use Erlang’s built-in tracing and DTrace/SystemTap dynamic tracing to measure the side effects of other profiling & tracing tools (usually slowed execution, but that’s not the only side effect!)
* Suggest hypotheses to test when running any profiling tool, create tests to confirm or refute each hypothesis, and show how to analyze the results.
* Give a small view into the flexibility and power of Erlang’s tracing frameworks. They’re big and often confusingly documented. By the end of the talk, you’ll know where & how to begin your own journey toward deeper understanding of your own apps and systems.
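As a small taste of the tracing-overhead idea above, here is a minimal sketch (module name, workload, and iteration count are all hypothetical, not from the talk) that uses Erlang's standard `dbg` module to time a workload with and without call tracing enabled, so you can see the probe effect directly:

```erlang
-module(trace_overhead).
-export([measure/0, workload/1]).

%% A hot, trivially simple function so per-call trace overhead dominates
%% the measurement.
workload(0) -> ok;
workload(N) -> workload(N - 1).

measure() ->
    {Baseline, ok} = timer:tc(fun() -> workload(100000) end),
    %% Start a tracer whose handler discards every trace message; we
    %% still pay the cost of generating and sending those messages.
    {ok, _} = dbg:tracer(process, {fun(_Msg, Acc) -> Acc end, 0}),
    dbg:p(self(), call),
    %% Local call tracing on the hot function: one trace message per call.
    dbg:tpl(?MODULE, workload, 1, []),
    {Traced, ok} = timer:tc(fun() -> workload(100000) end),
    dbg:stop_clear(),
    {Baseline, Traced}.  %% microseconds without / with tracing
```

Comparing the two timings gives a rough lower bound on the probe effect: even a do-nothing trace handler changes the behavior of the thing being measured.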
Who should attend:

* Developers trying to improve the speed of their Erlang code.
* Developers trying to understand the interactions of many Erlang processes in concurrent systems.
* Anyone who wants to understand how to find & fix flaws in performance measurement procedures, from single Erlang VMs to very large distributed systems.