
Measure always, optimize sparingly

April 19, 2010

At work today, I found myself falling into a classic coder mental trap, one that’s worth sharing.

We have an automated test suite – we call them our “sanity tests” – that we’re expected to run prior to checking code into the repository. It’s a quick validation run that helps keep the code moving in a positive direction – we know almost instantly if we’ve made changes that break the system, and can back out bad code before anyone else is affected by it. It’s really the only sane way to code; I honestly don’t know how people and teams stay sane without… sanity tests.

Between each test run, we try to initialize the system back to a “known good” state, so failures from a previous test don’t result in false positives in later tests. The system “reset” routine destroys some intermediary data, and depending on the amount of data generated, it can take a while.

Or so I thought.

But I digress.

The total test time is very small, a few seconds on a nice Linux box and maybe half a minute on a slower Windows system. But for whatever reason, the time it takes to reset the system has been bugging me. I keep thinking to myself, if only I could shave the 8.37 seconds down to 4.83 seconds… that would be totally awesome.

So finally, I broke down and gave in to my obsessive compulsive nature and mentally geared up for a good half hour of debug/hack/burn to optimize the reset.

More than half an hour later, I put the finishing touches on the routine that destroys the intermediary data – because that was obviously the slowest, most annoying part.

Right?

Uh… no.

I literally smacked my own head in disgust.

I had fallen for the classic engineer “infinite optimization loop.” The special OCD place we all go where everything is open for tweaking, improving, and optimizing to death. The place where ROI is always infinite and there are no time constraints.

I went back and measured the reset routine, and the piece I had optimized accounted for only a third of the total reset time. My optimization was roughly a 50% speedup of that piece, which works out to a sixth of the reset time – all of 0.3 seconds shaved off the best total system time.

Holy cow, I’m a flippin’ rockstar… move over Elvis, here comes dork boy.
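The arithmetic behind that disappointment is just Amdahl’s law in miniature, and it’s worth spelling out. A quick sketch – the 1.8-second reset time is a made-up stand-in; only the fractions come from the story above:

```python
# Amdahl's-law arithmetic for the reset "optimization".
# The 1.8 s reset time is hypothetical; the fractions match the post.

reset_time = 1.8          # total reset time in seconds (illustrative)
optimized_fraction = 1/3  # the piece I optimized was only 1/3 of the reset
speedup = 2.0             # a 50% reduction means a 2x speedup of that piece

# New total: the untouched 2/3, plus the halved 1/3.
new_time = (reset_time * (1 - optimized_fraction)
            + reset_time * optimized_fraction / speedup)

saved = reset_time - new_time
print(f"saved {saved:.2f} s, {saved / reset_time:.0%} of the reset time")
```

Half of a third is a sixth: no matter how heroic the speedup of a small piece, the savings are capped by that piece’s share of the total.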

When I was in the middle of this grand epiphany, I remembered the sage advice of my uncle – a carpenter of 30+ years: “measure twice, cut once.”

In software engineering, we’re lucky because many physical constraints just don’t apply. There is no board, no saw. So there’s no reason we can’t cut a hundred times, measure, and go back and cut some more.

But the problem is, sometimes we get so far removed from physical constraints that we can far too easily believe there are no constraints at all. Time, money, and schedules are at odds with that Utopian, ideal engineer’s optimization loop.

So I’d like to propose a motto for coders of all races, creeds, and compilers:

Measure always, optimize sparingly.

Atto

Categories: Epiphany

Clock Ticks per Lifetime

April 12, 2010

Okay, so this is a slightly random thought, but I was thinking about clock ticks. And that led to an oddly profound thought, which I’ll share in a minute.

As an engineer, perfectionist, and anal-retentive type of guy, I’m constantly measuring, analyzing, and trying to optimize everything to death.

I am a geek’s geek. The type of geek that is better kept in the basement, the kind you’d be afraid to get too close to a real life customer. I think hacking around reading the Linux kernel source code is fun.

Anyways, part of my job as a software engineer is to “profile” code. I use various nifty tools and gizmos combined with manual instrumentation and analysis to find the slowest, most time-consuming parts of a software project – and then make them faster. Usually it’s not too tricky to find the worst part, and most of the time it’s not too difficult to make it faster – the hard part is working on the right piece of code for the right amount of time.
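In Python, for instance, the standard library’s cProfile module produces this kind of ranking almost for free. A minimal sketch, with an invented workload standing in for real code:

```python
import cProfile
import pstats
import time

def slow_part():
    time.sleep(0.05)      # stand-in for genuinely slow code

def fast_part():
    sum(range(1000))      # cheap per call, but called often

def workload():
    for _ in range(3):
        slow_part()
    for _ in range(1000):
        fast_part()

profiler = cProfile.Profile()
profiler.runcall(workload)

# Sort by cumulative time so the "worst part" floats to the top.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(5)
```

That tells you where the time goes; deciding how much of your time the fix is worth is the part no profiler can do for you.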

It’s a lot easier to get lost in optimizing the wrong part of the code, the code you want to fix even though it only impacts 5% of the total runtime. You can spend a week making your program start 10% faster, and not realize that nobody flippin’ cares if your program starts in 2.1 seconds instead of 2.35.

There are other times when you completely miss an important clue – the alarm bells are ringing like crazy and the red flags are going nuts, but you (or someone you know) are asleep at the wheel. The lights are on, but nobody’s home… and you don’t optimize a small but critical piece of code. The reasons this happens are as varied as the seasons, but one common malady is lack of context. The Foo module only takes 749 µs per call (and Bar takes 200 ms), but you don’t realize that Foo runs in a tight loop and is the critical path. So you waste time and money “fixing” Bar and wonder why your software is buggy and slow.
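Multiplying those numbers out shows why the missing context hurts. A toy calculation using the Foo/Bar figures above – the call counts are hypothetical, which is exactly the information nobody bothered to check:

```python
# Per-call time is meaningless without call counts.
foo_per_call = 749e-6    # 749 microseconds per call (from the post)
bar_per_call = 200e-3    # 200 milliseconds per call (from the post)

foo_calls = 10_000       # hypothetical: Foo runs in a tight loop
bar_calls = 1            # hypothetical: Bar runs once

foo_total = foo_per_call * foo_calls    # ~7.5 seconds overall
bar_total = bar_per_call * bar_calls    # 0.2 seconds overall

print(f"Foo: {foo_total:.2f} s total, Bar: {bar_total:.2f} s total")
```

Foo looks 267 times cheaper per call, yet under these counts it dominates the runtime by well over an order of magnitude.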

It’s these little contextual clues that make all the difference. These key vital bits of information that provide a frame of reference and give shape to a project.

A good analogy that comes to mind is the corner pieces in a puzzle. One of the first things a kid learns about puzzles is that putting the corner pieces in place makes everything just flow together. Trying to build a puzzle without the corners (or sides) is much more difficult – there’s no framework to base the rest of the puzzle on, no structure to latch onto.

Getting to the point of this rambling story, I had a minor epiphany of sorts when thinking tonight about clock ticks. It wasn’t really a corner piece, maybe just a small side of the puzzle that I snapped into place. But I wanted to share it here in case anyone else finds it interesting.

I was thinking about processor clock ticks, and started to do a little math. Common processor clock speeds are in the GHz range, let’s pick 2.66 GHz just for fun. Let’s pretend you’re a respectable geek and you have a Core i7 system which you haven’t overclocked, it’s running at the stock 2.66 GHz clock.

2.66 GHz is a lot of hertz. How many hertz? Well, a gigahertz is a billion cycles per second, so 2.66 GHz is 2.66 times a billion cycles/second.

That means your Core i7 processor has a clock period of

1/(2.66*10^9) sec ≈ 376 picoseconds

A clock period that short is hard to fathom. It elapses 2.66 billion times every second, which is amazing because your Core i7 is getting something done every few clock ticks – for simplicity’s sake, let’s say your CPU gets a billion things done every second.

That is pretty neat, and totally geeks me out – but it made me think and wonder about how that relates to people. Assuming the average life expectancy is, say, 70 years, how many “clock ticks” do we get in our lifetime?

Drumroll to cue in some more math…

(2.66*10^9 Hz) * (3600 sec/hour) * (24 hours/day) * 
(365.26 days/year) * (70 years/lifetime) = 5.876186 × 10^18

That is a lot of clock ticks.
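Both numbers are easy to sanity-check in a few lines, using the same 2.66 GHz clock and 70-year life expectancy:

```python
# Sanity-check the clock-tick arithmetic: 2.66 GHz over a 70-year lifetime.
clock_hz = 2.66e9

period_s = 1 / clock_hz
print(f"clock period: {period_s * 1e12:.0f} ps")    # ~376 picoseconds

seconds_per_year = 3600 * 24 * 365.26
lifetime_ticks = clock_hz * seconds_per_year * 70
print(f"lifetime ticks: {lifetime_ticks:.4g}")      # ~5.876e+18
```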

So, if you’re still with me, and the tedious algebra didn’t send you running for safety – the thought struck me that maybe, just maybe, with approximately 6 x 10^18 clock ticks in my life, I have enough time.

There’s enough spare clock cycles for me to not be a complete perfectionist, to live a little and make mistakes. To learn from those mistakes, and make other stupid mistakes to learn from.

With 6 x 10^18 clock cycles (of which I have at least 3.5 x 10^18 remaining), there’s enough time to relax and enjoy my job, family, church, and life. To not waste time, but choose instead to live it more fully and purposefully.

So the next time someone like me is stressing about some piece of code that doesn’t matter, or stressing over little details that aren’t 100 percent relevant – tell them to relax and burn a few clock ticks being human.

There are enough clock ticks in your life to get everything done, and have fun in the process.

Cheers,

Atto

Categories: Epiphany