
Magnus Stråle
Feb 8, 2010

The truth is out there…

I have just spent a couple of late evenings hacking away on a piece of performance critical code and I have naturally been profiling it extensively. Here are a few lessons learned that might help some of you doing the same.

  1. Running code under a profiler will affect the performance characteristics
    In most cases a profiler will give a good indication of your performance bottlenecks, but there are occasions when it will point you in the wrong direction. My biggest mistake was to spend 4+ hours rewriting a small (10-line) routine over and over again because dotTrace (running in Tracing profiling mode) indicated that it took > 20% of the execution time. It seems as if the Tracing profiling mode in dotTrace will skew the results heavily when small routines are called frequently. When running the same code in Sampling mode, my small method didn't even show up. Doing "custom profiling" with the Stopwatch class showed the same result as Sampling mode.
    [UPDATE: Using the EAP version of dotTrace 4 with CPU Instruction tracing, Tracing mode worked a lot better - compared to Sampling mode, the difference was usually in the range of 1-2%, which is quite acceptable.]
  2. Use real-life data when measuring performance
    This is a no-brainer, but I was running with highly artificial data for a long time since I was stuck in "just in early development stages - will fix it later" mode.
  3. Don't wait too long with performance testing
    Use performance testing early in the project to evaluate the efficiency of various approaches to the problem at hand. Doing a complete rewrite after you have a nice, working (but too slow) solution is not fun.
  4. Don't optimize too soon
    This may seem contrary to the previous point, but the idea is that early performance testing should not make you focus on fine-tuning a few methods; keep your eyes on the big picture. Keep your code as clean as possible for as long as possible.
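As a minimal sketch of the "custom profiling" cross-check mentioned in point 1, assuming a hypothetical ProcessItem method standing in for the small, frequently called routine, the Stopwatch class can be used to sanity-check what a profiler reports:

```csharp
using System;
using System.Diagnostics;

public class StopwatchProfiling
{
    // Hypothetical stand-in for a small, frequently called routine.
    public static int ProcessItem(int value) => value * value % 97;

    public static void Main()
    {
        const int iterations = 1_000_000;

        // Warm up so JIT compilation does not distort the measurement.
        for (int i = 0; i < 1000; i++) ProcessItem(i);

        var sw = Stopwatch.StartNew();
        int sink = 0;
        for (int i = 0; i < iterations; i++)
            sink ^= ProcessItem(i);   // keep the result live so the call is not optimized away
        sw.Stop();

        Console.WriteLine($"Total: {sw.ElapsedMilliseconds} ms, " +
                          $"{(double)sw.ElapsedTicks / iterations:F2} ticks/call (sink={sink})");
    }
}
```

Unlike an instrumenting profiler, this adds essentially no per-call overhead inside the measured loop, so it makes a useful second opinion when Tracing mode numbers look suspicious.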

I have personally found that "optimize too soon" is usually where I fail. The simple reason is that it is so much fun comparing measurements and seeing that you have just cut another 2% off the execution time...

Finally, a good source of inspiration is Michael Abrash's series of articles on optimizing the Quake engine. http://www.bluesnews.com/abrash/


Comments

Sep 21, 2010 10:33 AM

Interesting article Magnus!
I think #4 is worth some extra emphasis. It's all too easy and fun to make premature optimizations that increase the complexity of the code while developing a small unit. However, when doing that you don't really see the big picture, and you might be making very small improvements that will complicate or even hide greater improvements that you could make later, after taking a few steps back from the code.

/ Joel Abrahamsson
