Friday, August 26

Why story points are a more accurate estimating unit than hours

If you've been a developer for very long, you've more than likely been on a project that is overdue, over budget, and nowhere close to done.  There is no light at the end of the tunnel, the client is mad, management is mad, tempers are short, and no one is having much fun.

I'm not going to pretend I have the answer to that.  If I did, I'd be writing a book instead of a blog, and all my code would be written for fun and curiosity because I wouldn't need to write code for a living anymore.  However, I have a suggestion for where the problem lies, and that is in estimation.

There are lots of ways to be wrong during the estimation process... it's a lot easier to be wrong than right.  I think those reasons have been covered ad nauseam.  You know the least about the project in the beginning, you don't know if the client has described their needs correctly, you aren't sure the client even knows their needs or can articulate them, and, above all, you don't know what you don't know.  For the most part, that stuff is a given and there's very little you can do about it.

Many software companies try to use past performance on similar features to predict future results.  In general, though, past performance on a feature has very little to do with future results unless the problems are identical.  Most people are familiar with the idea that sometimes the last 5% of the work takes 90% of the time, so we know and accept that small differences in functionality can make huge differences in development time.

There are several reasons agile projects succeed where waterfall projects fail.  The first is adaptability.  It's easy to succeed when you simply modify the definition of success to fit what you can accomplish given other constraints.  However, one could also say the definition of success is usually wrong from the beginning, so hopefully it moves toward a more realistic definition.

The second is brutal honesty as early as possible... and this is probably where most agile projects that fail fall apart.  Being brutally honest with clients is not easy and is no one's idea of a good time, let alone doing it before anyone else perceives a problem.  The reasoning goes, "maybe I can fix this before anyone notices it was a problem."  Sixty-three hours later, in a task that was supposed to last eight, you are forced to admit that your attempts have failed.  You have reached the same result as honesty would have, except now 10-12 other tasks have suffered because of it.  So you work extra hours to catch up, causing more problems in your ability to perform professionally as well as in your personal responsibilities to your family and yourself.

What does that have to do with story points as an estimation tool?  The answer is, honesty matters.  Not that developers are intentionally deceptive, but we have a tendency to estimate the best case, or what we think is the worst case.  No matter how honest developers try to be in giving an hours estimate, there is always a tendency for the thought to lodge in the back of their minds that they have to give a number of hours that is acceptable rather than the number of hours they actually think the work will take.  In addition, especially at the beginning of a project or during the estimation process, the tasks are so large that most just throw a rock in the general direction and call it good, because there's no way to gather enough detail to respond with an estimate in time to be considered otherwise.

Estimating in points is an estimate of complexity... something a developer probably has a better grasp of than how many hours it will take to do something they have never done before.  If you think about the estimate in hours, the natural conclusion is, "if you've done it before, this time it will take less time to finish."  Complexity, however, does not change based on the number of times you have done something.  In fact, an estimate of complexity becomes more accurate with repetition.  Past experience may or may not have an impact on velocity, depending on the impact of the slight changes, but those should be factored in during the estimation of complexity.

The best part about estimating in points is that historical performance CAN be a valid input.  We can say that, in general, our teams finish x points per week (velocity).  That lets us build a timeline based on average velocity that is more accurate than one based on past performance by feature.  How can I make that assertion?  Quite simply, because even similar functionality is still different.  Creating a one-page checkout for an e-commerce site is not necessarily going to take even a similar amount of time across different projects.  Since points apply to overall complexity, not to individual features, the sample set is larger, so you have more data to draw on and can develop a better average.
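
As a back-of-the-envelope sketch of that arithmetic (every number here is made up, not from any real team):

backlog=120                  # story points remaining (hypothetical)
velocities="28 35 31 30"     # points finished in each recent week (hypothetical)
total=0
weeks=0
for v in $velocities; do
    total=$((total + v))
    weeks=$((weeks + 1))
done
avg=$((total / weeks))       # average velocity: 31 points per week
echo "estimated weeks remaining: $(( (backlog + avg - 1) / avg ))"   # rounds up to 4

The point isn't the script, it's that the average gets steadier as more weeks (and more projects) feed into it.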

The idea is to establish a more accurate baseline average so you deliver on time and within budget, or recognize earlier that you are off your timeline, allowing you to set customer expectations properly.  The more you can expand your sample set, the greater your chances of achieving an accurate average.  By using a point system, you have a unit of measure that stretches across many different projects with different functionality.  The result is a wider cross section and more data on which to base your timeline estimates.  I do not propose this as the magic answer to the problem of inaccurate estimates, but I do believe it is a way to be more accurate.  When paired with other agile principles such as brutal honesty and adaptability, you are in a better position to succeed in your project as a whole while maintaining happier customers and a happy, well-balanced staff.



Thursday, August 25

x2x beats Synergy in a two-computer configuration

I have used Synergy for years, so when my laptop is on my desk I can easily use one keyboard to control both my desktop and laptop.  However, whether through recent changes in X or changes in Synergy, I often have issues where left-clicking and keyboard operations are simply broken.

Enter x2x.  It has a number of limitations, not least of which are the inability to share the clipboard and a limit of two computers.  The clipboard is a slight limitation, but I've never had a need for more than 3 or 4 displays, and two computers can easily keep up with that requirement.

The most important things to me are speed and stability, and I get both of those with x2x.  I also don't need to start software on both machines to make it work; ssh does all of that for me.  However, I do need x2x installed on both machines.  On Ubuntu that's a simple command:

sudo apt-get install x2x

Next, on the computer that is sharing the keyboard and mouse (the "from" machine), execute this command:

ssh -XC user@host x2x -east -to :0.0
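
If you haven't set up key-based authentication yet, a minimal one-time setup looks something like this (user and host are placeholders, not my actual machines):

ssh-keygen -t rsa          # generate a key pair if you don't already have one
ssh-copy-id user@host      # install the public key on the remote machine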

With key authentication in place, the x2x command above will automatically log you in and start the session.  Since I specified -east, when I move my mouse off the right side of the screen, it shows up on the host I specified.  I have not had the opportunity yet, but at some point I will try the opposite direction, specifying -from instead of -to, to see if I can make this work every time I turn my laptop on while my desktop is available on the network.  You can do that with a simple startup script that pings a machine on the network once and checks the result.  Theoretically (as I said, I haven't tested it yet) this should work:


# ping the desktop once, silently; only start sharing if it responds
ping -c 1 host > /dev/null 2>&1
result=$?
if [ 0 -eq $result ]; then
    ssh -XC user@host x2x -west -from :0.0
fi

EDIT: Confirmed, the code above works quite well.  Set it up as a startup script in GNOME and it will automatically share the mouse and keyboard whenever you start the machines on the same network.
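
For reference, one way to register it is a GNOME autostart entry; the script path and names below are placeholders, not necessarily my actual setup:

# assumes the snippet above was saved as /home/user/bin/x2x-share.sh and made executable
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/x2x-share.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=x2x keyboard/mouse sharing
Exec=/home/user/bin/x2x-share.sh
EOF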


Wednesday, March 30

ActiveInbox makes Gmail perfect

I am a huge fan of GMail.  I use it for almost everything and have since it really was in beta.  I use it for personal mail, I've set up my whole family on it for their mail, and I even use it as a viewer for my mail at work.  My employer isn't cheap; they'd buy me Outlook, but besides the fact that there is no Outlook for Linux, I like GMail's tools better.  Google has its hooks in me for sure.  However, the addition of one browser extension makes GMail everything it wasn't before.  That extension is ActiveInbox.

If you haven't tried it before, ActiveInbox adds a GTD workflow that gives those of us who fight with our inbox every day a little clarity of mind.  You have all the tools: quick labels for status, contexts, references, a quick view on emails (no loading the email in a new page; the mail comes up lightbox-style), and in the Plus product ($2/month) killer features like ticklers/deadlines and notes on emails, which is something I've wanted forever and never had a good option for.

When you add these tools to newer GMail features like multiple inboxes and Smart Labels, you really have what you need to get your inbox to zero quickly and accurately without a lot of manual sorting and archiving.  That's the real killer part, because everyone starts with a plan, but unless that plan is easy to implement or you have the discipline of a Marine, the plan eventually falls apart and the inbox goes back to chaos.



Thursday, March 24

Perl 9 from the wife's perspective

In case you missed the not-so-recent interview on linuxformat.com, there was a fantastic and hilarious response from Gloria Wall (Larry Wall's wife) that I thought deserved to be highlighted.


LXF: You said that Perl 6 was your one chance to break backwards compatibility. Do you think Perl 9 might be the same thing again?
GW: There's not going to be a Perl 9.

Larry went on to talk about how Perl might eventually get to a major release 9, but the way I picture that in my head is my wife responding to a question like that about something I was working on that had consumed a major portion of my life.  To me, that is some funny stuff, in the spirit of "Wife says no, Apple says yes".

Wednesday, March 16

There's no going back now - A.K.A my newfound love for git

I have toyed with version control systems for years, seeking a single process that can accommodate the way I need to work with version control for different projects and applications.  I have done my time with SCCS, CVS, SVN, ClearCase, and so on.  I've also had a brief affair with Bazaar.  But I think I've finally found the one to settle down with... and that is git.

My requirements are pretty simple.  As a developer, I want simple things to be simple and hard things to be possible.  I want to work with SVN because that's what everyone usually uses, but I want the ability to commit and revert locally before sending it all to the central repository.  I also want to check it all in at once... there's no reason to check in the entire history of my experiments.  What I check into the central repo should be a clean change.  My version control should do its job and stay out of the way while exposing enough features that I can recover from mistakes and share my code efficiently.
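
As a rough sketch of that workflow using git-svn (the repository URL and commit messages are made up for illustration):

git svn clone http://svn.example.com/repo/trunk project
cd project
# ...experiment freely, committing and reverting locally as often as you like...
git commit -am "first attempt"
git commit -am "rework after testing"
git rebase -i git-svn     # squash the experiments into one clean commit
git svn dcommit           # send the single clean change to the central repo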

As an admin, I want the freedom to pick exactly what goes into a deployment branch.  If features a, b, and c were planned for a release but c is lagging behind and won't make it, I want the ability to take just a and b.  Git gives me that fairly easily.  It also lets me make up for the shortcomings of other version control systems.  For instance, the cherry-pick feature lets me take individual commits and pull them into another branch, as sketched below.  Git makes working with branches very simple, and while I'm not terribly proficient with it yet (i.e. I still do a lot of Google searching to get what I want), I have yet to find a way I want to manage my files that I could not do in git.
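
Here's a minimal sketch of that (branch names and commit hashes are placeholders):

git checkout -b release master    # start the deployment branch from master
git cherry-pick abc1234           # the commit(s) for feature a
git cherry-pick def5678           # the commit(s) for feature b
# feature c's commits are simply never picked, so they stay out of the release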

As an end user, and this really seals the deal, I have a need I've always wanted to manage with version control: storing other data... OpenOffice documents, notes, etc... the stuff that goes in your Documents directory.  I can do that with git too, and the file compression makes it easy to put my documents on a shared drive so I have them on all my machines: always backed up, always available, and always up to date.
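
A minimal sketch of that setup, assuming the shared drive is mounted at /mnt/shared (all paths here are placeholders):

cd ~/Documents
git init
git add .
git commit -m "initial snapshot"
git clone --bare . /mnt/shared/documents.git     # backup copy on the shared drive
git remote add shared /mnt/shared/documents.git
git push shared master
# on another machine: git clone /mnt/shared/documents.git Documents, then push/pull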

In short, if you're thinking of evaluating version control systems, I suggest trying git as your first option.  You may end up trying out others, but my guess is you'll like git enough for it to become your VCS of choice as well.

Thursday, January 6

How to extract ISBN from PDF e-books

Here's a super simple script to extract the majority of US/English ISBN numbers from the PDF files in a directory on Linux.  The one dependency, if you don't already have it, is pdftk... but if you do anything with PDF files, you probably have it.  If you need to look for other formats or you have suggestions for improving the script, please feel free to comment.

for file in *.pdf
do
    echo "--- $file"
    # uncompress the PDF streams so the raw text is searchable, then grep out
    # anything shaped like an ISBN (optional 3-digit prefix, X allowed as check digit)
    pdftk "$file" output - uncompress | grep -o "\([0-9]\{3\}-\)\?[0-9]-[0-9]\{3,5\}-[0-9]\{3,5\}-[0-9X]"
done


Quoting "$file" and looping over the *.pdf glob, as above, keeps filenames with spaces from leaving a mark on this script.  If you'd rather just remove the spaces from your filenames entirely, you can do that like this: http://techgurulive.com/2008/09/22/how-to-remove-spaces-from-filenames-linux/