Archive for May, 2010

Cloning a clone: Hg vs Git

May 12, 2010

I’m in the process of learning about git and mercurial (Hg), so you’ll probably see a number of posts from me about this subject.

I know, I know, cue the “git rawks!”, “Hg is da bomb!”, and “they’re both lame, stick with SVN!” debates… anyways, I’m coming at this from a total newbie’s perspective. In the past I’ve written about TortoiseGit and some of the issues I had, so I decided to skip the GUI and go commando: command line only.

The first experiment I’ll try where I pit one DVCS against the other is the “cloning a clone” scenario. In the mercurial tutorial, this is described as the “blessed” way of doing things – see the section Making and Reviewing Changes. Whenever you have an experimental change to make, you should clone your local repo (which is itself a clone of someone else’s), push changes up to your primary clone when they’re stable, and later push them on up to the place you originally cloned from. This is also described/required by the Integrator workflow in the Pro Git book, specifically Chapter 5.1 – Distributed Workflows.

Cloning a Clone: Hg

In the spirit of brevity, I’ll just stick to a terse command line dump. I’m working on Ubuntu 9.10 with Hg 1.3.1, which is installed via

sudo apt-get install mercurial

First things first… let’s clone someone’s public repo. Heck, how about hgsubversion?

atto@zues:~$ hg clone
destination directory: hgsubversion
requesting all changes
adding changesets
adding manifests
adding file changes
added 608 changesets with 1432 changes to 181 files
updating working directory
144 files updated, 0 files merged, 0 files removed, 0 files unresolved

So far, so good. I now have a clone of hgsubversion. Now, let’s clone the clone.

atto@zues:~$ hg clone hgsubversion my-hgsubversion
updating working directory
144 files updated, 0 files merged, 0 files removed, 0 files unresolved

Yawn. This works so flawlessly, it is actually boring me to sleep.

What happens if I now make conflicting changes in each of the repos, then try to mash those conflicts together into an unholy mess?

Edit line 42 in both repos, commit, etc.

atto@zues:~/hgsubversion$ hg status
atto@zues:~/hgsubversion$ hg commit -m "Conflict in remote"
atto@zues:~/hgsubversion$ cd ../my-hgsubversion
atto@zues:~/my-hgsubversion$ gvim (make changes)
atto@zues:~/my-hgsubversion$ hg commit -m "Conflict in clone clone"

Ok, it took longer to type this post than it took to make these changes. Boring so far…

At this point, I have two conflicting changes – one in my clone of hgsubversion, and one in my clone-clone of hg-subversion. Now, a sensible thing to do would be to push changes from the clone-clone up to the clone (pretend it’s an integration branch in SVN speak, or a testing playground of sorts).

atto@zues:~/my-hgsubversion$ hg outgoing
comparing with /home/atto/hgsubversion
searching for changes
changeset:   608:5776ac5c7b12
tag:         tip
user:        Foo Bar
date:        Wed May 12 21:25:09 2010 -0700
summary:     Conflict in clone clone

atto@zues:~/my-hgsubversion$ hg push
pushing to /home/atto/hgsubversion
searching for changes
abort: push creates new remote heads!
(did you forget to merge? use push -f to force)
atto@zues:~/my-hgsubversion$ hg incoming
comparing with /home/atto/hgsubversion
searching for changes
changeset:   608:45a0d7d3e796
tag:         tip
user:        Foo Bar
date:        Wed May 12 21:24:40 2010 -0700
summary:     Conflict in remote
atto@zues:~/my-hgsubversion$ hg pull
pulling from /home/atto/hgsubversion
searching for changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 1 changes to 1 files (+1 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

So, I tried to push up some changes, but doing so would’ve created a two-headed monster, so Hg told me “did you forget to merge? use push -f to force”. That’s actually pretty friendly, maybe I should do a merge.

atto@zues:~/my-hgsubversion$ hg merge
QFSFileEngine::open: No file name specified
0 files updated, 1 files merged, 0 files removed, 0 files unresolved
(branch merge, don't forget to commit)
atto@zues:~/my-hgsubversion$ hg stat
atto@zues:~/my-hgsubversion$ hg diff
diff -r 5776ac5c7b12
(cut diff)
atto@zues:~/my-hgsubversion$ hg commit -m "Fix merge issues. Bad developer, no pizza"

The odd message about QFSFileEngine::open: was the point where kdiff3 popped up to help me resolve the merge. No surprises here – the merge went off flawlessly, as expected. Cool.

Now I can push my changes (dumb as they are) up to the original clone:

atto@zues:~/my-hgsubversion$ hg outgoing
comparing with /home/atto/hgsubversion
searching for changes
changeset:   608:5776ac5c7b12
user:        Foo Bar
date:        Wed May 12 21:25:09 2010 -0700
summary:     Conflict in clone clone

changeset:   610:b333c539af3d
tag:         tip
parent:      608:5776ac5c7b12
parent:      609:45a0d7d3e796
user:        Foo Bar
date:        Wed May 12 21:29:59 2010 -0700
summary:     Fix merge issues. Bad developer, no pizza

atto@zues:~/my-hgsubversion$ hg push
pushing to /home/atto/hgsubversion
searching for changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 2 changes to 1 files

Cool. That just… worked. I don’t think I was confused for even a second, things just worked as I would expect them to. I created a conflict I knew Hg couldn’t automatically resolve, it told me what to do, and it took <5 min total to run the experiment.

How about git?

Cloning a Clone: Git

Here I’m installing git-core and git-svn. git-svn isn’t strictly necessary, but if I get time I’d like to experiment with pulling from (and hopefully pushing to) an SVN repo. If it works, then I can use git all day long and seamlessly push to SVN when my changes are stable.

sudo apt-get install git-core git-svn

Now, let’s clone a public repo so we can experiment and have a little fun.

atto@zues:~$ git clone
Initialized empty Git repository in /home/atto/hg-git/.git/
got f5493088320f5587bcbcf701bd82ce6625a50e27
walk f5493088320f5587bcbcf701bd82ce6625a50e27
(cut 200+ lines of debug-type info)
walk 01bddec68dab48693af40968567ce4cff535f269

OK, it’s a little verbose – I mean, come on, do I really need to care about each and every changeset and its hash? But hey, at least it worked with no issues.

Now let’s clone the clone…

atto@zues:~$ git clone hg-git my-hg-git
Initialized empty Git repository in /home/atto/my-hg-git/.git

Fast forward a bit, I’m gonna skip the part where I make conflicting changes and commit them to the two different clones. BTW, don’t forget to do a “git add” before committing, which is annoying but probably a useful feature somehow.
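For anyone following along at home, here’s roughly what that add-then-commit dance looks like, as a self-contained sketch (paths and messages are made up; `-a` auto-stages files git already tracks, so you only need an explicit add for new files):

```shell
rm -rf /tmp/add-demo && mkdir /tmp/add-demo && cd /tmp/add-demo
git init -q .
git config user.email demo@example.com && git config user.name demo
echo hello > file.txt
git add file.txt                  # stage the change first...
git commit -q -m "first commit"   # ...then commit it
echo world >> file.txt
git commit -q -a -m "second"      # -a stages already-tracked files for you
```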

Now, how do we tell what changes are ready to go out without actually pushing them?

There is no “git outgoing”, but after fishing around a bit I discovered that “git status” has some useful information:

atto@zues:~/my-hg-git$ git status
# On branch master
# Your branch is ahead of 'origin/master' by 1 commit.
# Untracked files:
#   (use "git add <file>..." to include in what will be committed)
nothing added to commit but untracked files present (use "git add" to track)

Hmmm OK so I’ve got one commit that needs pushing, still not sure which one but I’ll leave that be for now.
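I later found out git can approximate “hg outgoing” with a revision range: `origin/HEAD..HEAD` means “commits reachable from here but not from origin”. A sketch with made-up repo names:

```shell
rm -rf /tmp/out-demo && mkdir /tmp/out-demo && cd /tmp/out-demo
git init -q hg-git && cd hg-git
git config user.email demo@example.com && git config user.name demo
echo a > f && git add f && git commit -q -m "base"
cd .. && git clone -q hg-git my-hg-git && cd my-hg-git
git config user.email demo@example.com && git config user.name demo
echo b >> f && git commit -q -a -m "local change"
# rough "hg outgoing": commits on this branch that origin doesn't have yet
git log --oneline origin/HEAD..HEAD
```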

Of course, doing a “git push” isn’t smart enough to figure out where to push to or what branch to push… so I finally ended up with

atto@zues:~/my-hg-git$ git push origin master
To /home/atto/hg-git
 ! [rejected]        master -> master (non-fast forward)
error: failed to push some refs to '/home/atto/hg-git'

Well, this error message looks annoyingly familiar… is this git’s unique way of telling me that I need to pull and merge first?

atto@zues:~/my-hg-git$ git pull origin master
remote: Counting objects: 5, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 3 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From /home/atto/hg-git
 * branch            master     -> FETCH_HEAD
CONFLICT (content): Merge conflict in
Automatic merge failed; fix conflicts and then commit the result.

Cool, that did something useful-ish. It didn’t pop up kdiff3, or any editor for that matter – but perhaps that’s a matter of configuration? Anyways, off I go to gvim and resolve the conflict by hand.
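It is indeed a matter of configuration – git has a `mergetool` command, and something like this should wire kdiff3 into the conflict workflow (assuming kdiff3 is actually installed):

```shell
# tell git to use kdiff3 for interactive conflict resolution
git config --global merge.tool kdiff3
# then, whenever a pull leaves conflicts behind:
#   git mergetool    # opens kdiff3 on each conflicted file
#   git commit       # record the resolution
```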

atto@zues:~/my-hg-git$ git add
atto@zues:~/my-hg-git$ git commit -m "Fix for conflict in clone clone"
[master fabff96] Fix for conflict in clone clone

With any luck, now I can finally push my two changesets (the conflict and the conflict fix) to origin master:

atto@zues:~/my-hg-git$ git push origin master
Counting objects: 10, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 730 bytes, done.
Total 6 (delta 4), reused 0 (delta 0)
Unpacking objects: 100% (6/6), done.
warning: updating the current branch
warning: Updating the currently checked out branch may cause confusion,
warning: as the index and work tree do not reflect changes that are in HEAD.
warning: As a result, you may see the changes you just pushed into it
warning: reverted when you run 'git diff' over there, and you may want
warning: to run 'git reset --hard' before starting to work to recover.
warning: You can set 'receive.denyCurrentBranch' configuration variable to
warning: 'refuse' in the remote repository to forbid pushing into its
warning: current branch.
warning: To allow pushing into the current branch, you can set it to 'ignore';
warning: but this is not recommended unless you arranged to update its work
warning: tree to match what you pushed in some other way.
warning: To squelch this message, you can set it to 'warn'.
warning: Note that the default will change in a future version of git
warning: to refuse updating the current branch unless you have the
warning: configuration variable set to either 'ignore' or 'warn'.
To /home/atto/hg-git
   be73d6a..fabff96  master -> master

Well, that’s downright confusing, but at the end of the barf message, it looks like something happened. Trotting over to the original clone and checking the log, I can see that I now have three new commits – one in the clone, two in the clone-clone.

So… I think it worked.

Just to be sure, I opened and checked the line in question… and it’s not been updated.

Well, in SVN and Hg you do an “update” command to bring the working copy into latest state… does that work for git?

atto@zues:~/hg-git$ git update
atto@zues:~/hg-git$ git help
(nothing useful)

Argh. I know this one. It’s somewhere in my brain…. processing…. processing… the lights are on but nobody’s home.

Oh yeah, the warning above said something about running “git reset --hard”. That sounds a bit scary, but these are dummy test repos, so who cares if I blow them away by accident?

atto@zues:~/hg-git$ git reset --hard
HEAD is now at fabff96 Fix for conflict in clone clone

Oooooooh yeah, now that’s what I’m talking about… suddenly, the file in question has the new, updated, correct information.

Why does that take a 2-page error/warning message?

So, at this point, I’ve successfully cloned a clone, and pushed conflicting changes up from the clone-clone to the clone – just the same as in Hg.
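In hindsight, the 2-page warning happens because origin here is a normal clone with a checked-out working tree. Pushing into a bare repository (one with no working tree at all) sidesteps the whole mess – a sketch, with made-up paths:

```shell
rm -rf /tmp/bare-demo && mkdir /tmp/bare-demo && cd /tmp/bare-demo
git init -q --bare hg-git.git             # bare repo: no working tree to go stale
git clone -q hg-git.git my-hg-git 2>/dev/null
cd my-hg-git
git config user.email demo@example.com && git config user.name demo
echo hi > file.txt && git add file.txt && git commit -q -m "first"
git push -q origin HEAD                   # no warning: nothing is checked out upstream
```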


It doesn’t take a genius to see two things right off the bat:

  1. git’s error messages range from confusing to overly verbose to completely useless
  2. git required more time reading help files and guessing which commands to run

Sure, #2 could be explained quite simply by my lack of understanding, my complete idiot self not reading enough documentation and manuals, me being a typical user. And… guess what? That’s correct.

I didn’t read any documents, manuals, tutorials, or howtos. And I shouldn’t have to, not for simple things like conflict resolution and cloning (aka checkouts in SVN speak).

Either way, git’s lack of useful error messages makes it painfully apparent that git wasn’t designed for ease of use, or to ease the transition from any other SCM. Git was designed by Linus as a change/patch management tool for serious kernel hackers, and even now, years later… it’s just not easy to switch.

And Hg?

Well, there are other issues with Hg (like HTTPS behind a corporate proxy, for example) that are well known and documented. But for now, as I sit here writing this incredibly long and boring post… Hg just works for me.

I didn’t have to read the manual, I just started typing commands. What little bit I remembered about Hg is floating around from reading the Hg book several months ago, and so I’m starting basically at ground level.

I want to like git… I’ve read good things about git… but the barriers to entry are significantly higher, such that I’m having a hard time learning how to use it. Trying to teach a team of other developers to use git? Wow, I’m not even sure I want to think about that. My head is spinning just imagining how to explain to someone why they have to “git add” before a “git commit”, much less what “origin” and “master” are and why you have to “git reset --hard” whenever you push.

So I guess that’s the bottom line… as a newbie idiot user, git confuses the heck out of me, and Hg just works.




Three Things I Love About C

May 12, 2010

Like many other coders out there, I am multilingual.

Well, I’m not really multilingual per se… I can speak English passably and I know enough Spanish to make a fool of myself. Obviously, I’m talking about programming languages.

In no particular order, I have (at one time or another) dabbled in Scheme, Pascal, QBasic, C#, Prolog, Python, PHP, PL/SQL, Perl, Ruby, x86 assembly, machine code, bash/shell, Java, GLSL, C++, and C. And probably some others I’ve forgotten.

I like Ruby and Python a good deal but don’t spend much time using them; most of the code I write goes into filesystems or drivers – so not surprisingly, I’m a big fan of C.

Here are three of my favorite things about C, things that absolutely tickle me pink and make me love coding in C.

Inline Assembly

What other language lets you do this:

asm("int $3");

Or better yet,

asm("leal -4(%esp), %ecx");

Inline assembly into C programs gives incredible flexibility and power to the system designer. Of course, in the wrong hands, inline assembly can hose up the system and make software suck. But you can’t deny the raw awesome computing power afforded by being able to inject assembly directly.

The Preprocessor

From macros to textual replacement to conditional compilation, the C preprocessor lets you do some neat things which I find both frustrating and amazing.

A few stupid examples:

#ifdef __KERNEL__
#define READ_REGISTER(addr) ioread32(addr)
#else
#define READ_REGISTER(addr) call_other_func(addr)
#endif

This can be used to support multiple OSes and/or environments without incurring runtime penalties in a common, shared codebase.

#define MAGIC_NUMBER 5
#ifdef __KERNEL__
#define printf(...)   printk(KERN_ERR __VA_ARGS__)
#endif
printf("%d", MAGIC_NUMBER);

Now to be fair, you can accomplish the same thing in other languages – you can define constants in place of magic numbers, and define your own debug printf which detects the environment and does something different based on OS or other factors. But typically other languages push this decision making process into runtime, impacting performance.

When it comes to macros, sure, you can do all kinds of stupid things – for example,

#define MIN(a, b) ((a) < (b) ? (a) : (b))
foo = MIN(var1, func1(args));

Because macro arguments get pasted in textually, func1() is evaluated twice. If func1 has side effects, or its return value can vary between invocations – say it reads a shared data structure – you just set yourself up for a big fat bug (or race condition) that could take you days to hunt down. Good luck with that.

All things considered, the preprocessor provides power and flexibility I’ve often found missing in other languages.


Pointer Math

Ok, obviously all the other languages out there have some concept of references or pointers… but C thrives on weird pointer math and pointer calculations. What other language lets you get away with code like

((unsigned char*)dword_pointer)[byte_offset] = byte;

Now at this point purists will usually be panicking and hyperventilating at the sheer craziness of such an attempt – why would ANYONE EVER want to commit such an act of sedition?!! Well, because sometimes you do. Maybe you have code that needs to iterate through an array a dword at a time, but this one function needs to change a specific byte. Heck if I know, point is this kind of flexibility rocks.

Another way this is manifest is in C’s handling of multi-dimensional arrays – or rather, C’s lack of handling multi-dimensional arrays.

int foo_array[5][10];
int bar_array[50];

foo_array[0][43] = 10;
bar_array[43] = 10;

Both statements above set element 43 (byte offset 172, with 4-byte ints) to 10. C doesn’t complain that 43 runs off the end of foo_array’s first row, because a multi-dimensional array is really just one contiguous block of memory, and indexing is plain pointer arithmetic.

For that matter, you can even do the following:

struct foo bar;
unsigned char * wacky_ptr = (unsigned char*)&bar;
*(wacky_ptr + 10) = 5;

Holy crap batman, what kind of blasphemy is going on here? I’m taking a pointer to a struct, typecasting it to a byte pointer, and setting byte 10 of the structure to 5. Why would anyone ever want to do that?!!

Who cares?

Point is, you can. And there are legitimate, actually useful reasons for wanting to do this. Sure, you can abuse this flexibility and write completely useless and unmaintainable code… but in the right hands, this type of flexibility works wonders.

And that really is the crux of the issue for me. C affords you all kinds of power and flexibility not present or possible in other languages. You want to inline assembly and muck around with processor registers? Sure, go ahead, whatever. You want to #define printf to be an infinite loop and confuse the heck out of your coworkers? Why the heck not, it’s your choice Mr. Programmer. You want to do crazy weird things with pointers? Be my guest!

The reality is, C separates the men from the mice. Coders from posers. The flexibility C provides can be just enough rope to hang yourself – or in the right hands, it can be a powerful tool that creates amazing and beautiful software.

And that is what I love about C.

Next time tune in for what I hate about C 🙂




For the Love of Code

Chris Wanstrath of github fame wrote a very intriguing gist the other day, which was picked up on proggit and other geek news sites. Naturally, being a geek myself, I picked up the feed via RSS on my phone, and started geeking out.

Chris starts out with a laundry list of tips on how to become a famous blogger and Rails rock star, and I hate to admit that he had me with the first paragraph. Being an obscure, faceless blogger I was totally eating up his ideas and making a big, long mental checklist of TODOs, planning out my trail to fame and glory as I followed his sage advice. Let’s see… I need to work on my blog template, get a domain name, start contributing to open source code, go to Ruby/Rails conferences… some other stuff… and whammo! I’m a frickin’ rock star!

Yeah, I was pretty into the blog post, having already spent my geek cred before it hatched. Or something.

Much to my surprise, Chris did a quick 180 and started talking up the true Ruby heroes, those who care about the code first and blog as an afterthought. Those people are the real rock stars, the people who are in it for the pure, unadulterated love of code.

Personally, I look up to the good developers. The people who don’t care about their RSS subscription count, who blog as an afterthought. People who aren’t concerned with how many Twitter followers they have and work on their pet project every week because they love it. Who’ve contributed to Rails for years because it’s their passion and aren’t overly concerned with their speaker’s bio.

People who care about code, first and foremost.

This paragraph hit me like a baseball bat to the temple. The way he crafted the post was so masterful, I literally groaned out loud… I walked right into that one.

Anyways, the rest of his post is fascinating and motivating, you should go read it right after you finish this page.

Thinking about Chris’s post and coding for code itself made me think about my own experiences with code, specifically how I got started and how I got hooked. So buckle down, strap in, and keep reading – cue a long boring personal story… right… now.

My first exposure to programming was about 15 yrs ago, when my little brother came home from the library with a stack of computer books. We sat down in awe and flipped through the mystical pages filled with illegible characters and symbols, and finally settled on a sample program written in QBASIC. We spent a good half hour typing up the sample program in edit.exe, and gave it a whirl. Now QBASIC is a ghastly language written to confuse and befuddle those unfortunate enough to come within 100 yards… seriously, it has got to be one of the worst languages on the planet. But I digress.

The program was supposed to be some kind of logic puzzle: you would input the dimensions of a cube, then the computer would calculate the surface area and tell you how many other boxes could fit inside, or some other stupid thing.

I was surprised and disappointed by the program; I kept waiting for it to pop up a 3D picture of the boxes and graphically show how to stack them inside each other. Instead I was greeted by some ugly text in the DOS terminal, and that was lame.

I mean, the program was at least 75 lines, that’s enough lines to do anything with a computer, right?

I walked away in frustration, vowing to never code again.

Two years later, my brother got into ASM fires (anyone remember those? Good ole days) and I was hooked. We spent hour after hour hacking on the palettes, experimenting with different blending techniques, etc. QBASIC logic puzzles? Meh. ASM fires and voxel engines? Now that’s hot. We spent an entire summer once writing a 3D “tank wars” clone, and that was awesome. I had become hooked on code.

But it really wasn’t until years later, after numerous more programs, successes, and failures that I realized I had grown to love code. Code is a language, a culture, a way of living. Once it gets under your skin, it’s there to stay. You can take a coder away from his computer, stick him on a desert island, and he’ll count the days by marking notches in the nearest tree branch… in binary.

Great code comes from the minds of those who code for code itself, from projects driven by passionate, motivated people who love the code they write.

So here’s to coding… for the love of code itself.

If you have a great story about your first experience coding or with code you loved creating, post it below for old time’s sake 🙂



Saving Time with Hudson CI

I ran across the following blog post about Hudson CI; it makes the ridiculous claim that you can install and configure Hudson in less than 5 minutes. So I decided to give Hudson a shot, if nothing else just to prove them wrong. Everyone knows software doesn’t ever just work, and always takes 10x as long to install as you think it will.

Boy, was I wrong.

From the command line on my Ubuntu server:

$ echo deb binary | sudo tee -a /etc/apt/sources.list
$ sudo apt-get update
$ sudo apt-get install hudson

It automatically installed the correct dependencies, then installed and started hudson. Within a few minutes, I was pointing my browser to http://localhost:8080 and configuring Hudson.

Surprised but still skeptical, I clicked through a few pages and within a few minutes Hudson was happily building and running our integration tests.

I was stunned.

I’ll admit, I’m not the world’s expert when it comes to validation or agile methods, but I’m no dummy either. And I’ve had some bad experiences with other validation platforms, bad enough to scare our whole team away from them for a while.

Our testing methodology hasn’t changed much – we still run the sanity tests by hand before each checkin, and we still run longer stress tests periodically… but Hudson has simplified our validation and made it… almost fun to find bugs.

After getting the sanity checks to run after every checkin, I set up our stress tests to automatically run every night, with longer validation runs over the weekend. No more nagging team members, no more “did you run the stress tests?”. Just checking the dashboard to find out how the tests ran.

It was so easy to configure, I went ahead and ran the sanity tests in a tight loop 100 times. That’s something I never would have done normally… but Hudson made it all too easy. And guess what? Hudson found a bug for me. When we run the sanity tests 100 times in a loop, they crash intermittently. Could be a test setup issue, could be a real bug, not sure yet. But we never would’ve found it without Hudson. Could we have found the bug by hand? Duh, of course… but we probably wouldn’t have. Having a CI tool like Hudson makes you step back and look at your validation holistically, and that’s a good thing.

The bottom line for me, what has me tickled pink and uber excited is the time I’m saving. And not just the few minutes of hassle nagging people to run the tests, or the collecting of log files and reports… no, that’s nice and every minute of development time saved is money in the bank. But Hudson now has us testing our software 24/7, continuously integrating… and my gut tells me this will add up in much larger savings down the road. The earlier we detect and fix bugs, the more money we save. And that makes me happy.

Now to be fair, there are still some things I haven’t gotten working – parallel builds elude me for the time being, and email notifications don’t work behind our proxy server (yet) – but today, so far, I’m loving Hudson. Are there other CI servers that would work fine? Probably, but who cares! Hudson is good to me so far.

It. Just. Flipping. Works.

So go download it and start using it. You’ll wonder why you waited so long.