Day 5: Profiling
☀️ Mood: Feeling well rested
🎵 Soundtrack: This is Enya
📚 Reading GitHub Issues
Started today by reading the 52 new GitHub issues that were still open since I read issues last Thursday. (Actually I started the day by doing an hour of expense reports that ended up being totally wrong. But that is less fun to write about.)
Trends I noticed from the issues:
- People really care about compiling: 1, 2. I continue to be shocked by how many people know how their code is compiled and actually have opinions on it.
- Lots of issues about generics: 1, 2. This is a new proposal that is currently being worked on. I haven’t dug deeply into this yet, but I expect I will sooner rather than later given the number of issues I am reading about it.
- Some cleanup work: 1. I like seeing that technical debt is continuously being taken care of. I wish I'd snagged this issue, though. I should've offered to work on it first. Darn.
👀 The one that caught my eye today, though, was this one about improving go vet to catch common memory leaks with time.Tick and time.NewTicker.
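For context, the pattern that vet check wants to catch looks roughly like this (my own sketch of the general shape, not code from the issue):

package main

import (
    "fmt"
    "time"
)

// leaky is the shape of code the proposed vet check would flag: time.Tick
// gives you no way to stop the ticker it creates, and because the channel
// operand of a select is re-evaluated on every iteration, this loop makes a
// brand new ticker each time around.
func leaky(done <-chan struct{}) {
    for {
        select {
        case t := <-time.Tick(500 * time.Millisecond):
            fmt.Println("tick at", t)
        case <-done:
            return
        }
    }
}

// fixed creates a single ticker and stops it when the function returns,
// releasing the ticker's resources.
func fixed(done <-chan struct{}) {
    ticker := time.NewTicker(500 * time.Millisecond)
    defer ticker.Stop()
    for {
        select {
        case t := <-ticker.C:
            fmt.Println("tick at", t)
        case <-done:
            return
        }
    }
}

func main() {
    done := make(chan struct{})
    time.AfterFunc(2*time.Second, func() { close(done) })
    fixed(done) // swap in leaky(done) to see the pattern vet would flag
}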
Then my thought process went like this:
- does gorouter (or any CF code) have this memory leak?
- it has a similar looking one here
- how can I prove (or disprove) whether that code is leaking memory?
- I should learn more about profiling in go!
📈 Profiling
🎥 I started by watching “GopherCon 2019: Dave Cheney - Two Go Programs, Three Different Profiling Techniques”. Dave (a former fellow VMeer!) also has it expanded as a blog post.
I followed the examples he did in his talk and used his profiling package.
Basically all it took was adding one of the following lines to the beginning of my code:
defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()
// or
defer profile.Start(profile.MemProfile, profile.ProfilePath(".")).Stop()
// or
defer profile.Start(profile.TraceProfile, profile.ProfilePath(".")).Stop()
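For completeness, here's roughly what that looks like in a full runnable main (the busy-work loop is just a stand-in for whatever code you actually want to profile):

package main

import (
    "fmt"
    "math/rand"

    "github.com/pkg/profile"
)

func main() {
    // Pick one profile kind per run; the profile file is written to the
    // current directory when the deferred Stop runs as main returns.
    defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()

    // Placeholder busy work so there is something to see in the profile.
    sum := 0
    for i := 0; i < 50_000_000; i++ {
        sum += rand.Intn(10)
    }
    fmt.Println(sum)
}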
Then, after the program ran, I would run:
go tool pprof -http=:8080 cpu.pprof
# or
go tool pprof -http=:8080 mem.pprof
# or
go tool trace trace.out
Things I learned about profiling:
- pprof can show you the source code where the memory allocation/CPU usage is happening.
- different types of profiling have different overheads.
- the memory profiling (and maybe others?) only takes samples. By default mine sampled 1 out of every 4096 allocations, but using the profiling package I was able to tune it to record every allocation (see the sketch after this list).
- go tool trace displays its output using Chrome's built-in trace viewer, so the UI only really works in Chrome.
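If I'm reading the profiling package right, the sampling knob is its MemProfileRate option, which sets runtime.MemProfileRate for the run; a rate of 1 means record every allocation (and makes the program noticeably slower):

package main

import "github.com/pkg/profile"

func main() {
    // MemProfileRate(1) turns on memory profiling and records every
    // allocation instead of sampling; expect a real slowdown.
    defer profile.Start(profile.MemProfileRate(1), profile.ProfilePath(".")).Stop()

    // Allocation-heavy placeholder work so the profile has something to show.
    buffers := make([][]byte, 0, 10_000)
    for i := 0; i < 10_000; i++ {
        buffers = append(buffers, make([]byte, 1024))
    }
    _ = buffers
}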
🧠 Profiling on my own to test my newfound profiling skills
I decided to turn to my beloved Project Euler. I use Project Euler problems to practice when I am trying to learn a new language. One of the things I like about the problems is that the naïve solution will take minutes or hours or days or years.
I settled on problem 14: Longest Collatz Sequence. I did the naïve solution with no goroutines, which takes nearly 7 minutes to run. Looking at the trace profile, it was very evident that only using 1 CPU was a huge bottleneck. (I mean, obviously, but it's fun to use the tools.)
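The naïve version looks roughly like this (a simplified sketch, with the trace profiling line from earlier added):

package main

import (
    "fmt"

    "github.com/pkg/profile"
)

// collatzLength counts the steps it takes n to reach 1.
func collatzLength(n int) int {
    length := 1
    for n != 1 {
        if n%2 == 0 {
            n /= 2
        } else {
            n = 3*n + 1
        }
        length++
    }
    return length
}

func main() {
    defer profile.Start(profile.TraceProfile, profile.ProfilePath(".")).Stop()

    bestStart, bestLength := 1, 1
    for n := 1; n < 1_000_000; n++ {
        if l := collatzLength(n); l > bestLength {
            bestStart, bestLength = n, l
        }
    }
    fmt.Println("longest chain starts at", bestStart, "with length", bestLength)
}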
Then I tried adding a goroutine for EVERY number, which just resulted in a pile of CPU-starved goroutines.
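That version is roughly this shape; it replaces everything after the defer line in the sketch above and needs "sync" added to the imports:

// One goroutine per starting number: each goroutine does a tiny amount of
// CPU-bound work, so with only a handful of CPUs most of them sit runnable
// (starved) while paying goroutine and channel overhead.
results := make(chan [2]int, 1_000_000)
var wg sync.WaitGroup
for n := 1; n < 1_000_000; n++ {
    wg.Add(1)
    go func(n int) {
        defer wg.Done()
        results <- [2]int{n, collatzLength(n)}
    }(n)
}
wg.Wait()
close(results)

bestStart, bestLength := 1, 1
for r := range results {
    if r[1] > bestLength {
        bestStart, bestLength = r[0], r[1]
    }
}
fmt.Println("longest chain starts at", bestStart, "with length", bestLength)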
Tomorrow I plan to make some more optimizations and continue profiling. Then I'll go back to where I started and use my new profiling knowledge to exacerbate and monitor the time.Tick issue.
✨ Meta Thoughts
I'm 1/13 weeks down in this sabbatical. I feel like I'm floundering a little. Am I actually contributing to golang yet? No, except some examples on day 1. Am I learning a lot about go though? 🤷‍♀️ Yes, and that's good enough for me for now.