Monday, December 20

A mono-free Ubuntu desktop

An interesting thing turned up with the latest release of Ubuntu.  The replacement of F-Spot with Shotwell, along with the PPA for Synapse, has freed me from my Mono dependency.  Previously, there was no way I was giving up Gnome-Do, and I had a love-hate relationship with F-Spot; since I couldn't find anything I liked better, F-Spot it was.

Now, no one is going to confuse me with RMS as far as free software goes.  I am not dumping Mono out of any free software concerns.  I am all for making money off writing software if you can, and I don't subscribe to the idea that software is knowledge that should be shared with everyone.  In my opinion, software is knowledge applied, which inherently makes it the intellectual property of the person or people who applied it.  What makes open source software so practical and, by nature, free is that many people collaborate.  So many, in fact, that you either need to form some type of organization out of the contributors if you are going to make the software proprietary, or you open source it and give it away because it just isn't worth the trouble to determine ownership shares and so on.  That's a decision that is solely the developers', and it shouldn't be anyone else's business to pressure them to make it open or closed.

That being said, the practical nature of open source usually means a better product for a number of reasons, not the least of which are the artificial timelines and resource limitations that software companies are hampered by.  Don't get me wrong: I love open source software.  I love the collaboration, I love that it reduces costs for businesses, and I love that such good software is available.  Let's face it, there are a large number of small businesses around the world that simply couldn't compete without open source software.  I just don't like the idea that the community can enforce its sometimes artificially moralistic sentiment that software should be free... a sentiment usually formed by people who just want software for free (as in beer) and don't give a rip about freedom (as in speech).

So why do I care about getting rid of Mono at all?  I just don't like useless crap on my machine.  One of these days I'll get around to writing the article on why C# was a stupid idea and basically a redundant language, but, in essence, I have seen Mono in the past as a means to an end, one of those ends being real interaction with an Exchange server.  However, that hasn't become a reality, and I don't think it will.  So right now, aside from the two programs I have needed it for in the past, Mono is mostly useless to me.

Anyway, since I did the update from 10.04 to 10.10 and have been heavily reliant on Gnome-Do, Mono came along for the ride.  However, a recent article I read on Synapse made me realize that, with the addition of Shotwell, I have replacements for all the Mono-dependent software on my machine.  So out with Tomboy, F-Spot, and Gnome-Do; in with Gnote, Shotwell, and Synapse.  So far, so good.  I don't find that I'm missing much, if anything, and it feels good to keep my system clean.


Wednesday, October 13

Why does agile need a label?

Scrum, Lean, XP... for many years people have attempted to apply these kinds of labels to the art of agile software development and delivery.  In fact, one could even go so far as to treat the meta-label of "agile" the same way.  Each of these communities has a devoted following of people who believe they have the best way to deliver software.  Can we just extract some practices and call them good software development without the brandable label, t-shirts, associated conferences, certifications, etc.?

The agile manifesto is so much simpler.  Value individuals and interactions, make working software the number one goal, collaborate with customers, respond to change... these are the concepts that so eloquently described the essence of the movement.  Yet it didn't take long for people to define what those meant in terms of processes, with tools that helped implement the processes.  It didn't take long for people to start saying, "This is how agile will work, it can't work that way," or "You MUST have a stand-up meeting," or "You must use story cards only on white 3x5 index cards that must be pinned, not stapled, glued, or otherwise fastened to a bulletin board with exposed cork only, and no one can do anything if they aren't told to do so by the index card, and if you use software, that's a tool, and that's not agile."

What we're missing here is that many of these practices were already being done in a somewhat disorganized way because they worked.  Having a colleague look at a problem with you was just called "helping" before it had the term "pair programming".  Talking to a customer to refine requirements during the development process was just being complete and paying attention to detail before it became a part of a process.

It would have been amazing to be in the room when the authors of the Agile Manifesto were defining these things, but they weren't discovering new territory.  They were defining and simplifying to the concepts that, in their experience, made the difference between successful and failed projects.  At least that's my impression of the Agile Manifesto having not actually been in the room. :)

Do what works.  If you want to label something as agile, that's it.  It means at the end of a period of time you have to spend some time evaluating if your choices worked or not, but it's a lot better than having people roll their eyes and perform a task because it's required of them, not because it does something valuable.

Saturday, October 9

A way for Linux to succeed as a gaming platform

If there's one area where I have to concede the superiority of Windows over Linux, it is gaming.  Not because DirectX is so much better than OpenGL, or because the engines are better, or anything fundamental to the technology.  It's a 100% business decision: no one is willing to spend millions of extra dollars in game development, possibly to the detriment of gameplay, to make a game cross-platform for the 1% of their market it will serve.  It's the same reason a lot of software and native hardware drivers are not available for Linux.  In most cases, the open source community writes the drivers it needs, and the projects are small enough that most hardware ends up getting support from people who need to write the code to make their own equipment work.  However, for large, complex software such as big-title games, it's tough for open source developers to produce equivalent products.  So, for the most part, the gaming landscape for Linux pretty much stinks.

There are a few major issues.  The number of different distributions is one of them.  With differences in dependencies and the fractured nature of the way linux is distributed, it is tough for a gaming vendor to predict all the different dependencies available to them.  There needs to be an abstraction layer that takes care of all this, but such an abstraction layer could definitely hinder performance.

Another issue is cohesiveness.  Just as there are lots of different dependencies on different distributions, there are lots of different choices as far as window and display managers.

There is, however, a way for Linux to succeed as a gaming platform, and that's for hardware vendors like Nvidia and ATI to use the open source nature of Linux to their advantage and write their own "console" that runs on Linux.  They have already started doing this in a lot of the quick-boot, media-only distros that ship with some laptops.  You may be familiar with these if you have a computer where you can pop in a DVD and bring up a player without booting the entire operating system.  These players are often embedded into the chipset and, more often than not, there's a Linux kernel running somewhere under the covers.  Why not take that idea further and give users a way to suspend their currently running session (regardless of hardware) to disk, then reboot into the "console" where they can install and run gaming titles?  This could work regardless of operating system, provide gaming vendors with the same consistency they now get with DirectX on Windows, give users nearly bare-metal performance since the system devotes almost all its resources to the gaming environment with no interruption for overhead from OS services, and give all users access to the full gamut of gaming titles regardless of the operating system they are using.

I know what you're going to say, technically this is not gaming on Linux in the same way you may have thought about it previously.  But it is using the Linux kernel to provide a working system for a hardware vendor to work from that already provides a massive amount of hardware support, is low on resource consumption, inherently secure, and proven to work well in environments where optimum performance is a must.  It's already proven that it can work as a quick booting, media environment on numerous PCs, and everyone knows that the fact that it is open source means that game developers can optimize their code to the environment rather than having to guess at how Windows is doing something behind the scenes.

Sunday, August 8

Did I do *anything* today?

Ever get to the end of your day and ask yourself that question?  I often have that sinking feeling that I just got nothing done because I have nothing to show for it.  I may have spent the day doing useful things like helping a colleague accomplish a task or giving direction or helping with sales, but if there's nothing to show for it at the end it's tough to know what just happened.

In the Windows world there are apps like RescueTime that monitor your environment and tell you what you've been doing... but as far as I know, there is nothing like that for Linux.  That is, until now.

The amazing thing is, I could do it all in a shell script of less than 100 lines.  It gets a little hackish in that I have to depend on the D-Bus API from gnome-screensaver, but it's a fairly accurate way to do it.  Anyway, if it's something you've been wanting for Linux, give it a shot and see if it does what you are looking for.

** Disclaimer:  This app will record what you are looking at if it is reported by xprops or dbus.  It does not log keystrokes and does not send any information off to me or anyone else.  That said, I am not responsible if this script logs you doing something you weren't supposed to be doing nor the ramifications of that action.  I use it to monitor the way I spend my time while on my computer.  If your wife catches you watching porn that is 100% your fault.

Thursday, June 17

How to set up Exchange synchronization with Gmail

Many of us prefer the tools in Gmail to those in a client like Outlook.  I love the organization tools in Gmail and the fact that I can pull up my full email environment anywhere.  As a Linux user, Microsoft's web interface doesn't work nearly as well for me as it does in IE, a fact I am more than prepared to live with given that Gmail automatically and more intelligently threads conversations and has much better search functionality.

Before starting this process, it would be helpful to get the POP3 and SMTP hosts, ports, and any special login information from your Exchange administrator.

First, set up the ability to pull your Exchange mail into your Gmail account.  You can do this from the Settings > Accounts page in Gmail.

  1. Click on "Add a mail account you own"
  2. Fill out the form.  You may need help from your Exchange admin for the values you'll need.
    1. If you are going to get your Gmail and Exchange mail in the same place, accept the option to automatically label incoming mail from that account.
    2. Where possible, it is a good idea to use SSL when retrieving mail
    3. Leaving a copy of the message on the Exchange server is a good backup in case your Gmail account isn't working properly or you want to go back to using Exchange/Outlook at some point in the future.  You need to make sure you follow your company's record retention schedule and don't allow your inbox to go over its limit.
  3.  Next you will be asked if you want to send mail from this address.  Answer yes and click on "Next Step"
  4. Follow the prompts.  When it comes to the selection for "Send mail through your SMTP server", select "Send through 's SMTP server".  You may need to get the following information from your Exchange admin: 
    1. SMTP Server: Port:
    2. Username:
    3. Password:
  5. Click "Add Account"
At this point you will get an email to your Exchange account verifying that you have the rights to send through this account.  Follow the directions in the email and you will be able to send mail via your Gmail account and the recipients of your mail won't know the difference.

If you are already an Exchange user and like to use features like rules and search folders, you can very easily replicate that functionality in Gmail.  Gmail calls rules "filters".  You can create new filters by clicking "Create a filter" in the minuscule print next to the search box.  Your other option is to select messages like the ones you want to filter, go to the "More Actions" drop-down, and select "Filter messages like these".  Unlike Outlook, you won't get as nice a wizard to walk you through the steps, but it's still fairly easy.  Gmail supports operators, special keywords you can use in your searches to increase their accuracy.  These are helpful in both filters and Quick Links, Google's answer to search folders.
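A few examples of what operators look like in practice (these are standard Gmail search operators; the addresses and label names are made up for illustration):

```
from:boss@example.com has:attachment        mail from the boss with attachments
subject:(status report) -label:archived     status reports not labeled "archived"
from:alice@example.com OR from:bob@example.com   mail from either sender
is:unread in:inbox                          unread mail sitting in your inbox
```

Any of these can be saved as a filter criterion or, as described below, turned into a one-click Quick Link.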

Quick Links are a "Labs" feature, meaning something Google is actively developing.  There are many useful Labs features that will make it easier for you to make the switch from Exchange to Gmail.  Basically, Quick Links let you turn any search you do in mail into the equivalent of a search folder: something you can get back to with one click.  Once you enable the Quick Links gadget, return to your inbox and, if you need a search that only returns unread mail in your inbox, search for: "label:unread in:inbox".  If that search returns what you're looking for, click "Add Quick Link" and you will always be able to return to this query without retyping it.

Other labs features that are useful in this setup are:
  • Canned Responses - You can use these to send out frequently used messages as well as create auto-responders that work off filters.
  • "Don't forget Bob" - If you often send mail to the same groups of people, this extension will automatically remind you if you suddenly leave one person in the group out.
  • Google Calendar Gadget - Adds a more PIM/Outlook type feel by giving you a view of your calendar agenda side by side with your mail.
  • Mail Goggles - We've all sent an email we wish we could have back.  Whether we typed it up while we are tired, angry, or otherwise incapacitated, having a clear head while communicating with others is a good thing unless you like collecting unemployment.  Goggles makes sure you are operating with a clear head when you send mail.
  • Refresh POP accounts - Google doesn't offer a schedule for rechecking your POP accounts, so your inbox may not always be in sync with your Exchange email.  This extension causes the refresh link to also check your POP accounts.
  • Undo Send - Somewhat like "Mail Goggles" but a little less demanding.  If you tend to hit the Send button a little too aggressively, this is a good extension for you.  It essentially holds onto your mail for a few seconds before actually sending it, allowing you to cancel the send if you notice something wrong.  This is the extension you wish you had for your mouth when you say something, realize your mistake mid sentence, and wish you could pull it all back.
  • "Got the wrong Bob?" - Makes sure you're sending mail to the right person when names are similar. The algorithm is based off your groups of often mailed people so it will most likely work better the more you use it.

Hopefully you'll find these suggestions useful in making the transition from Outlook to Gmail.

Tuesday, June 8

The devil you know...

For all of my chest beating and pontificating about how much Verizon sucks and how awesome the Evo 4G is and so on... last Sunday I went out and bought a Droid Incredible... selling myself and my family to Big Red for 2 more years.

The shame.  Don't get me wrong.  I still hate Verizon Wireless.  I just don't hate them enough to pay $200 more in up front fees and $40 more per month in service fees for services I will rarely, if ever, use.

Strike 1, as I've mentioned, I live in a rural area.  4G won't be here for a while, and I'm not spending $480 across our two phones ($10 service fee x 2 phones x 24-month contract) for service I might get once or twice a week when I have to go back to civilization.

Strike 2, my parents are never... ever... EVER going to use data.  Yet if I wanted the unlimited data plan, I was going to have to pay an extra $10 in service fees for each of their phones because they share a plan with me.  The whole idea of our arrangement is that they don't have to pay all the extra overhead for service and they get a cell phone in case of emergency.

Strike 3, well, when I played with the Evo, my first impression was, "Wow, this is the coolest phone I have ever seen!"  And to be honest, that is still my impression.  It blows every single phone made for the US market off the planet.  However, if signal is king then service offering is queen.  A cool phone is just a concubine... used until the next cool thing comes along.  Price is only the deciding factor when two services are equal or at least mostly equal.  As phone companies continue to find new and better ways to keep people from using services they haven't paid for, my question to them is... WHY?  Why do you do this?  The mobile hotspot was one of the key features I wanted in the Evo and then Sprint decided to upcharge it... significantly.  That makes the Evo like a world champion sprinter with his legs tied together.  They say, "Hey, we'll untie his legs for you if you pay us an extra $30 a month."  Uh... no.  I'll untie his legs myself and when he's done I'll beat you with his gold medal.

Let me be clear, Sprint... if you're reading this... which you probably aren't, but it makes me feel better to say it publicly... you lost $500 in equipment fees and a customer who would have been paying about $150 a month for service because of your stupid $30 up-charge.  You, the reader, may at this point say, "Well, it's only $30, why don't you just pay for it if you want it?"  The answer is, and I mean this in the kindest possible way, "BECAUSE I'VE ALREADY FRIGGIN PAID FOR IT!!!"  I'm paying $200 for a phone that has this capability built in.  I am buying an unlimited data plan.  At the risk of assuming you understand the meaning of the word "unlimited", why do you nickel-and-dime people because they are going to use a different device?  Again, it's what Verizon does that makes me hate them.

So, rather than pay extra money to move to a service provider who does all the same things I hate about my current provider, I decided to save a tiny bit of money, get a very good phone (the HTC Incredible), and stick with the devil I know.

Oh, and one more time just for posterity... I still hate you Verizon.

Saturday, May 22

Google TV ... SWEET!

So I can get a web browser, online TV content, and my photos and audio all in one thing.  That's so AWESOME!

Well, it would have been in 2005 anyway.

Given that you can already do this with a PS3, MythTV, Apple TV, Boxee, Windows Media Center, and a number of open source projects, one has to wonder, "What's the point?"  The only benefit I see is that Google has the power to bring in the networks and cable TV.  Otherwise, this has already been done very well, and it's a really blah announcement.

I would love for someone to show me how I'm missing the big picture and how this revolutionizes TV.  But from what I saw in their little video, it didn't seem like they were doing much if anything more than Boxee.

Thursday, May 20

Block selection in Eclipse Helios M7 - FINALLY!

If you are used to having block selection in editors like vi, jEdit, TextPad, and so on, it's probably been grating on you for years that you couldn't do the same in Eclipse.  Those days are over... block selection is here.  To toggle it on, use Shift+Alt+A.  Vrapper doesn't yet work with Shift+V, but I'm sure that's just an update away.  Finally, some real text editing tools in Eclipse!

Who needs broadband? Uh... everyone.

A recent Scientific American article outlines the National Broadband Plan and asks the question, "Who needs high-speed broadband?"  The answer is: a lot more people than need digital TV.

Ok, maybe targets like 100 Mbps for rural areas and 1 Gbps might be a little high, but what can we do with such a network?  It means we can forget cable, satellite, and over-the-air TV; all content can be available 24x7 anywhere you have a connection.  It also means we (the US) will have wasted billions on subsidizing digital tuners so our population can hook their brains into a device that does little more than transfer garbage shows, mindless entertainment, and a solid stream of marketing directly into our heads.

Look, I own a TV, and I like watching it.  I like a few shows, and there are some sports that I really love to watch.  However, do not be fooled: there is very little value in TV, and it exists in its current form for the single purpose of providing a medium to sell you stuff.  The actual shows haven't been the point of TV since cable started allowing advertisements (remember when the idea of cable was to pay some money so you could watch TV without commercials?).

Instead, we could have spent that money on a reasonable intermediary goal for this National Broadband Plan.  There are plenty of Americans who live in rural areas and don't have access to the kind of speeds that city dwellers get on their iPhone.  If they are willing to shell out $80 a month they can get a marginally faster connection via satellite but it barely qualifies as high speed.  Instead of worrying about 100Mbps, why not get them to 1Mbps and then go from there?

So, why is broadband any more worthy of a place to spend money than digital TV?  It's simple... the value of interaction on the internet is higher than TV.  Can you buy and sell stocks with just your TV?  Can you work from home via your TV?  Can your business actually sell a single product using the TV alone?  The answer to all of those is, by itself, no.  Sure there's QVC where you can buy something you don't need for the "low, low price" of 400% of what it cost to build, but you need a phone or the internet to actually make the transaction.  I'm not really in favor of the government spending money on it, but if they are going to be spending money on something, they should be spending it on something that makes sense.

Is advertising a major part of the internet?  Of course it is.  It is just as much of an advertising tool, if not more so, than TV.  But the point is, that is all TV is good for.  There are so many more uses for ubiquitous broadband that, if federal money is going to be spent on entertainment, it should not even be a contest as to where it goes.  Yet, the 2009 budget allotted over half a million for coupons so everyone could get their TV fix via 1s and 0s. 

Why does this matter to me?  I work from home.  I love working from home.  I also love living in a rural area.  Broadband internet provides a lifeline for me to be able to live where I want, work at a job I love, and not spend 4 hours a day commuting.  If you want an initiative that can, in and of itself, provide a single means of making the world a better place right now, using broadband to allow a greater number of people to work from home is it.  What would happen if the number of people on the road at rush hour were reduced by 10, 20, maybe even 30 percent?  What would happen if we spent less time commuting to work and more time either actually doing work or spending time with our families?  What if you weren't forced to live in a certain area because your job uprooted you?

From an environmental perspective, fewer cars mean less pollution, less wear and tear on roads, fewer accidents, and lower oil consumption.  I've done the mass transit thing in arguably one of the best systems in the US.  If we're honest with ourselves, it's not the panacea everyone likes to think it is, because the majority of the population is on the outside looking in.  It's ok, better than driving every day, but it's still a huge amount of time spent in a non-optimal work environment, away from my family.  It's costly, insecure, and prone to delay because of weather or other breakdowns, which cost companies millions in lost work time.  Additionally, I've never in my life been sick as often as when I rode that train every day.  Mass transit is an epidemic waiting to happen.  All that, and it still requires some form of energy, electricity or otherwise, to run.  It's creating pollution somewhere, maybe less than an equivalent number of cars, but it's still a problem.

While we may not need 100 Mbps now, eventually the time will come when we need that kind of capacity.  We shouldn't allow current need to be a barrier to working hard on getting broadband to every area of the country as soon as we possibly can.  If that means we stop tearing up perfectly good roads so DOT workers get a paycheck and the job creation statistics go up, so be it.  They can lay fiber optic cable instead of asphalt and spend their time moving forward rather than supporting the legacy operating environment.  There is simply no infrastructure investment that can affect a wider array of the issues we experience today than this.


Monday, May 10

Moblin: The 'Almost there' GUI for Netbooks

The guys at Intel have one thing right, the average netbook user is probably going to be more interested in social networking, status, and media than writing papers and such.  It's just the nature of netbooks.  With Moblin, you are getting a very nice operating system with a lot of very smart things built in. 

That said, I think another review of Moblin would be redundant.  What I think is more important is what is not in Moblin.  The major, glaring hole is Facebook.  I get Twitter, but no Facebook?

Second, synchronization, at least with Google AFYD, is poor.  I can't easily figure out how to sync my calendar, which is supposed to be the whole point of a desktop that gives you the status of your world at all times.

Third, please, no more rpm/yum.  It is definitely better than it was, but apt is just better.  If not apt, then portage... but in my experience, you are always one step from dependency hell with yum.

Side note... where the heck is Gentoo in the netbook world?  It seems like there should be people raving over Gentoo on netbooks but there just isn't.  It seems like a perfect fit to me.

Ok, back to the list.  Fourth on the wish list is easier access to configuration.  It's not terribly clear where I go to configure the thing, and it took about 5 minutes of poking around in the different screens before I found what I wanted.  One of the things I like about Ubuntu is that it's easy to find things.  For the user-level GUI that Moblin seems to want to be, the ability to find things should be one of the top priorities.

Fifth and final, it is not clear at all to me how to manage my session.  Shutdown, suspend, switch user, none of this seems to be available to me.  My wife and I both use the netbook, we need this to keep our preferences separate and so she can remain a user while I maintain admin rights.

But aside from those five things, I love the way the distro looks, I love the idea and where they seem to be headed.  I just find it a cool but useless distro until they add some of the basic features that I really need.  I guess it's back to UNR for me, at least for now.

Tuesday, April 27

Why you should NOT use a web framework

I honestly can't believe I just wrote that.  I have been a huge proponent of using frameworks for a long time now, and I still think they are valuable.  There is great value in frameworks, and proponents have written about why you should use them ad nauseam.  Every conference you attend will invariably have a session or two on the state of web frameworks, new web frameworks, and how they are fixing all the things that have ever sucked in the whole world when writing web apps.

B.  S.

Like I said, I do believe in web frameworks, but many developers start the process of building a new web app by choosing a framework before even evaluating the need for one.  You don't need a framework to build a good web application.  I find the arguments of "the plumbing is done for you" and "you get reusability" to be unrealized potential that often over-complicates very simple concepts.  Before adding complexity to an already complex system, you need to make sure you are actually going to use the features that create that complexity.  When is the last time you rewired a Struts application inside struts-config.xml so it did something different, or reused an action and forwarded somewhere else when you were done?  Those kinds of things are just generally not done, and when they are, it's a lot more painful than advertised.

One reason I like frameworks, and what I see as the most useful reason to have them, is that you can throw a whole bunch of developers from different backgrounds and skill levels at a project and, since they have to fit into the framework you choose, you generally get similar output.  That said, just because Struts or Seam or Shale or Stripes or Spring helps you implement MVC doesn't mean you actually get MVC.  I can't tell you the number of applications I've worked on where the entire concept of separation between the view and control layers is just lost.  When you tell me it's MVC and it's not, it's almost more confusing than if you just throw up your hands and say, "stuff is everywhere".

One of the major reasons not to use a framework is the amount of overhead required for additions and changes.  I know... this is supposed to be one of the benefits, but why is it that a simple change to a theme requires a change to the struts-config.xml, the tiles-defs.xml, the Action class, the ActionBean class and so on?  In some web apps, the overhead is reasonable.  In others, I can't actually justify it.

For instance, if you are working in a portal, the portal IS the framework.  Why do you need to use a framework INSIDE the framework?  Then you start matching up Spring with Struts, and the amount of indirection makes it very difficult for a new developer to come into a project and trace through what is actually happening.  If it takes more than a couple weeks for a new developer to be productive in your environment, it's time to take a look at the complexity of the application and make sure it is warranted.  Could a new developer trace through the code, or would they be stuck because it's not apparent that you are injecting values with Spring, or because the original developer used some AOP concept he thought was pretty cool?  This is also a reason I don't endorse the ExpandoMetaClass functionality in Groovy.  It's neat that you can override methods, but if you can't expect a class to function the same way every time, you are really opening yourself up to a lot of unnecessary issues.

Author and speaker Scott Davis at the 2007 No Fluff, Just Stuff conference said something so ridiculously simple that I almost dismissed it as obvious the first time I heard it.  If you are trying to decide what web framework to use, stop deciding and don't use anything.  Then if you find that you need it, add it later and do some simple refactoring.  If you hit the hardest problems first and you get through them without using a framework, it's likely you won't need it.

The one thing this type of development requires is consistency from your team.  If you are a very small team, keeping things under control is easy.  If you have more than five developers, it will be harder to make sure everyone is following good design principles, and if they aren't, you're going to end up with a mess.  The thing is, I've seen so many web apps that use frameworks and are still a mess that I'm starting to be convinced frameworks don't actually prevent the mess, they just structure it.

Lastly, if you are going to use a framework, go for simplicity over bells and whistles or configurability.  Most times, configurability is a very difficult thing to accomplish; it adds a ton of complexity, and the goals are often not realized.  By the same token, bells and whistles are often more trouble than they are worth.  The more focused your use for a framework, the more value you'll actually see from it.  I agree with the idea of deciding how to build your application first, then picking the simplest possible framework that allows you to accomplish your goals, where one of those choices is no framework at all.

Thursday, April 15

How to pick a log level

While I in no way feel that I am the expert in this area, this is how I select which log level I use for a message.  The number of applications that fail to have a coherent strategy for this is mind-boggling.  Logging should not be an afterthought or something that doesn't matter.  Especially when fixing a bug is extremely time sensitive, the more information you can concisely pack into a log message, the better.  So here are the guidelines I use for selecting the log level I am going to use.

  1. Debug - Messages necessary for debugging a piece of functionality.  A value of some sort should always be appended to the message, as there is very little that can be gleaned from the state of the application if you don't know some values.  If there is only one concatenation, I am not concerned with enclosing it in isDebugEnabled, but if there is more than one, if you have an object whose toString() gets called, or if you have multiple log statements, they should be enclosed in isDebugEnabled.
  2. Info - Informational messages.  These can be anywhere but should be used very infrequently.  They should not involve concatenation, so there is no real reason to use isInfoEnabled.  An example of an info message would be notifying a user who is watching the log that a long-running process has finished.
  3. Warn - Use these whenever the application is in danger of getting into a state that could cause an issue.  For instance, if a call to the database returns 2 values when it should have returned only one, but it is still valid to just take the top one, it would be prudent to log a warning saying what is happening so that, if it ends up causing a problem, we can see it in the log.  Also, warn is typically the default level in production applications.  That means that, from warn on, you can usually count on the message getting logged even in production.  When choosing whether to append information from the state of the application, I try to be very careful to take the impact of that decision into consideration.
  4. Error - This should be self-explanatory.  I will stop short of saying this should be in every exception handling block, but it should be in most.  At the very least it provides a way for us to know when an exception occurs where it is normally swallowed.  I will say that we should NEVER, EVER swallow exceptions.  By swallowing, I mean: try { ... } catch (Exception e) { /* no code to handle the exception */ }.  A log message is the very least we can do.  In addition, the method log.error(String message, Throwable t) is the only one to use here.  Do not append the output of e.getMessage(), as the really useful information is in the stack trace and should be captured.
  5. Fatal - I don't think there is much reason to use this in web applications, but if we do find a place for it, this is for problems that will probably take the server down.  Losing connectivity to the database would be an example of a fatal exception where we would need to restart the machine.
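
As a rough sketch of levels 1 through 4 in code (using java.util.logging as a stand-in for log4j, so FINE plays the role of debug and SEVERE the role of error; the logger wiring and message text are purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class LogLevelDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);
        log.setLevel(Level.WARNING); // a typical production default: warn and above

        // Capture records so we can see what actually gets through the threshold
        final List<LogRecord> captured = new ArrayList<>();
        log.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { captured.add(r); }
            @Override public void flush() { }
            @Override public void close() { }
        });

        int orderId = 42;

        // Debug: guard the concatenation so the String is never built in production
        if (log.isLoggable(Level.FINE)) {
            log.fine("Loaded order, id=" + orderId);
        }

        // Info: infrequent, no concatenation, no guard needed
        log.info("Nightly import finished");

        // Warn: logged even at the production default level
        log.warning("Query returned 2 rows where 1 was expected; taking the first");

        // Error: always pass the Throwable itself so the stack trace is kept
        try {
            throw new IllegalStateException("boom");
        } catch (IllegalStateException e) {
            log.log(Level.SEVERE, "Order processing failed", e);
        }

        // Only the warn and error messages survive the WARNING threshold
        System.out.println(captured.size());
    }
}
```

Running it prints 2: the guarded debug message is never even built, the info message is filtered, and only warn and error make it to the handler.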
There are two caveats to this list.  Some loggers include a trace level.  I feel this is unnecessary and, if it is actually used, should probably be done by injection with AOP.  The idea of cluttering up the code with "entering this method...", "exiting this method..." makes me shudder.  If I have to read through 25 lines of log messages in order to get to the real code... well... I don't want to do that.

The second caveat is that some applications have a monitoring tool like Nagios that looks for errors and sends out an alert.  If this is the case in your system, you probably want to either be discreet with your use of error, monitor fatal instead of error, or implement your own log level for bugs that are serious enough to set off a pager at 3 am.  In most cases, I personally like the second option, because most applications have no need for fatal and, when you do get one, it is cause for serious concern.

So that's my method.  Perhaps some of you out there have others.  Remember, sharing is a good thing. :)

Monday, April 12

James Gosling leaves Oracle

There's not really much good that can be inferred from Gosling leaving Oracle, and his comments don't do anything but solidify that inference.

"As to why I left, it's difficult to answer: Just about anything I could say that would be accurate and honest would do more harm than good," he said.

Really?  This doesn't give me great hope for the direction Oracle is steering Java.  If mod_plsql is any indication of what an Oracle influence on a programming language looks like, I think those of us who make a living writing Java code may want to start expanding our horizons with more urgency.

In my opinion, as long as they leave the JVM alone, they can have the language.  As more and more alternatives come along, the language itself seems like a detail.  It's one of the reasons I think Microsoft's decision to chase after Java with C# is fruitless: they are chasing a dying language.  Dying... but far from dead, as so many others are quick to call it.  Java is used in almost every sector of every business world-wide.  Languages like that don't just die.  Disagree?  Why then are people still writing new applications in Cobol?  I believe some businesses will still be using Java 20 years from now.  However, for better or worse, the fact that Oracle has their hands in it means it's going to change... somehow.  More and more, it seems like those changes don't jibe well with the old guard from Sun.

Wednesday, April 7

10 Useful Google Spreadsheet Formulas

Ran across this one in my reader today and I realized I could probably replace half the crappy software built for investors today with a Google spreadsheet and the Google Finance functions.
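
For instance, pulling a live quote into a cell is a one-liner (ticker chosen just for illustration):

```
=GoogleFinance("GOOG", "price")
```

Combine a column of those with a few SUM and IF formulas and you have a self-updating portfolio tracker.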

Tuesday, April 6

Staying on topic - an off topic rant

Why is it so difficult for bloggers to stay on topic?  At CheckedException, I really try hard to keep the posts focused on technology because that's what I said I was going to talk about.  To me, it's like a contract... I tell you what I'm going to say, you decide if you want to read.  You may read a few times and think I'm full of crap, but at least I've fulfilled my contract with you and, of course, you're free to stop reading at any time.

To clarify, I'm not talking about the occasional off-topic post or rant(like this one) where you have no other outlet but your blog.  I get it.  What annoys me is when the incessant posts about your trips, your skiing, your kids and all the rest of the things in your life start to outnumber your posts relevant to the reason I subscribed to you in the first place.  Most people subscribe to your blog for the content, what they might learn, to get a pulse on what people are thinking in the industry, and so on.  To me, if you are a technical blogger and your only recent posts are about your day skiing, you have broken your contract.  I told a fairly popular blogger that one time because his recent posts were all related to politics or skiing, or his kids, and just about everything else BUT technology.  His response was, "I'd be happy to give you a refund."  Cute... but missing the point.  Unsubscribe.

Of course it's your blog and you can write about whatever you want.  But if you tell people you are going to write about one thing and then you write about something completely different you have failed those of us who subscribed to your blog in the first place because we thought you might actually have some insightful things to say.  When you use your blog as a sign that says, "Hey everyone, look how frickin awesome my life is!" I start to think that if I ever had to have a conversation with you it would end with me punching you in the face for being a pompous douchebag. 

Jealous?  Sure,  I'll admit that I'd love to play hookey because we got 15 inches of powder the night before.  I just think I'd tell everyone about it on Facebook rather than over my blog where I told people I would be writing about technology.

I'm not asking for you to not have a life.  I'm simply asking you to stay on topic and, if you want to write about something else, start another blog and for the love of semicolons, keep the RSS feeds separate!!!

Thursday, March 18

Joel Spolsky expounds on the virtues of distributed version control

Not one day after I decided to unsubscribe from joelonsoftware, one of the first blogs I started reading regularly, Joel came up with a beauty, right after announcing he was "retired" from blogging.  You can read his thoughts here.

By the way, am I the only one who thinks Joel's new puppy looks a little like Joel?

I'm no fan of geek-worship.  There are a bunch of geeks out there who seriously worship other geeks and there are whole churches following the likes of Joel Spolsky and Matt Raible around.  The point is, I don't think that just because Joel Spolsky or Matt Raible say something that it becomes an indisputable law... and I don't think the fact that Joel Spolsky says distributed version control is here to stay means it is.

However, I have to say I agree with him.  Distributed version control has moved from the fringe to something more and more accepted by the corporate world.  What I don't agree with is the idea that there is a major paradigm shift between "thinking in versions" and "thinking in changesets".  I don't think the world thinks in versions without also thinking of change sets.

Even if you "think in versions", you are only doing so because you are interested in the changes that come with that version.  Shifting your thinking to "Joe's version" or "Mike's version" is really not that different.  Also, ignoring change sets is not something people on teams can do forever, because eventually everyone's change set becomes a released product and all the change sets end up merged together anyway.  The sooner you can do that and debug the issues, the better.

What a change to a distributed system does is change the workflow.  Take an example where two developers, Joe and Mary, are working on web services.  Joe is working on the producer, Mary is working on the consumer.  Since the producer does not yet exist in source control, Mary has nothing to test against but her unit tests.  That works while she's developing functionality, but eventually she will need to test the integration.  Let's say Joe isn't done with the producer yet, but he has enough done that Mary could do some stub testing.  Joe checks his code into source control.  At that point, if a build were done for QA, there would be broken functionality in the build.  No one has really tested that this code works with a consumer, and admittedly, it doesn't actually do anything yet but provide Mary a way to test her code.

In a distributed world, Joe and Mary can easily work together by taking changes from each other and then, when the final product is complete and working, they can commit the changes to the build branch without releasing anything that is not working.
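
With Bazaar (the tool my team happens to be trying; every branch location below is hypothetical), the Joe-and-Mary workflow might be sketched as:

```
# Mary merges Joe's in-progress producer straight from his branch
mary$ bzr merge bzr+ssh://joes-box/producer-work
mary$ bzr commit -m "Integrate Joe's producer stub for testing"

# Joe can take Mary's consumer changes the same way
joe$  bzr merge bzr+ssh://marys-box/consumer-work

# Only when the combined work is complete and passing does it reach the build branch
mary$ bzr push bzr+ssh://repo/project/trunk
```

Nothing touches the build branch until both halves work together, which is exactly the point.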

That, in my mind, is the benefit in the corporate world where large development teams have individuals concurrently working on small, inter-related pieces of functionality.

While I can't say I would rather switch to programming in C++ than go back to the world of centralized version control, the virtues of distributed version control make it a good addition to any development team.  As we speak, my team is experimenting with it using Bazaar.  The details of that will follow here at some point when our experiment is complete.

Wednesday, March 17

Sprint + Nexus One? Goodbye Verizon!

Today a colleague (and would-be contributor to CheckedException) passed me a link to some beautiful news.  The Nexus One is coming to Sprint, and soon.

In an official press release (more than has come from Verizon, I might add), Sprint officials have confirmed that Sprint will get the Nexus One, with availability to be announced "soon".  And with that, it's adios to Verizon.

I can't say that I've ever been as excited to get a new phone as I am to get the Nexus One.  I even considered switching to T-Mobile, but as I said before, signal is king, and there are too many dead zones for me to accept T-Mobile's lack of coverage in this area.  Sprint, on the other hand, has good coverage in my area and great data speeds.

This news, coupled with the release of the HTC Supersonic (if that comes to fruition), means that Sprint will firmly supplant Verizon as the holder of the premium Android phones on the market.

Friday, March 12

Using MS SQL from Linux

As a programmer who works primarily on Linux machines, both on the desktop and the server, I have found the number of free clients for SQL Server to be sparse at best.  There are a few... Squirrel SQL comes to mind as one.  Oracle SQL Developer works as well.  However, each of them has its difficulties.  I have had weird crashes with Squirrel, and I can't use any of the advanced features in SQL Developer.  Such is the life of a Linux-based developer trying to work with the world of Windows.

One bright spot I discovered recently is SQSH.  It is essentially a shell for SQL Server that operates something like the shells for MySQL or Oracle that I am familiar with.  Why is that exciting at all, you ask?  Ever tried to step through a large result set on the console?

The answer is, it's not so much the SQL part that is exciting, it's the shell part.  It's a real shell that goes beyond what is available in any of the other database shells I am aware of.  Sure, Oracle's SQL*Plus will allow you to do variable substitution and some macros, but it's not nearly as powerful as sqsh.  In addition, you can use pipes!  That means, as I alluded to earlier, you can step through large result sets by piping the output of a query to less, just like you would in a Linux or Unix console.

You get flow control and even backgrounding.  So if you know a query is going to take a while, you can add an & to the end, just like you would in a Linux shell, and it will run in the background.
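
As a rough sketch (server, user, and table names are made up, and this assumes sqsh is installed and can reach your server), a session combining both features might look like:

```
$ sqsh -S myserver -U myuser
1> select * from big_table
2> go | less
1> select count(*) from audit_log
2> go &
```

The pipe sends the result set through less a page at a time, and the trailing & runs the second query as a background job, just as it would in bash.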

Here are all the features.  Even more awesome is the fact that it can be installed via apt-get on Ubuntu.  There is some extra setup required to configure the data sources you connect to, but it is very easy to get going quickly.

Just another tool for your developer toolbox.

Friday, March 5

More on Apple, HTC, and competition

The more I have thought about this issue, the more it stinks.

Patents were invented to prevent competitors from stealing inventions from the original inventors for a reasonable period of time, so the original inventor could recover the cost of R&D.

The truth is, the Nexus One and the iPhone are not competitors.  Why would I make such an obviously insane statement?  I mean, look at them: there are literally hundreds of blog articles and professional reviews comparing the two, and there are more similarities than differences.  Still, as with many such things, the small differences make all the difference.

One difference is domain.  There are countries where US patent and copyright laws have no jurisdiction because there is no treaty to enforce them.  With wireless, the network is the domain.  The Nexus One is not a serious competitor on AT&T because it can't use full 3G... it is limited to the slower EDGE network.  The iPhone doesn't work with any of the carriers who offer the HTC phones in question.  And there is a significant deterrent already in place to switching networks (early cancellation fees).  In that case, it is the features of the network, not the phone itself, that create demand.  I wouldn't switch to AT&T to get an iPhone, not because I think iPhones suck, but because I don't want AT&T.  In truth, anyone with a data plan would have been insane to keep their Blackberry over the last few years instead of getting an iPhone.  The barrier was network then, and it will continue to be network now.

The other difference is community.  Apple has a community all their own and the Apple loyalists are the ones who are out there buying up every new Apple gadget the minute it comes out.  They are also the ones posting all over the internet about how much the Nexus One "sux" in comparison to the iPhone.  These people are not going to switch networks to buy something that is "just like" their iPhone.

On the other hand, people who are interested in Nexus One and other Android phones are interested in getting a lot of the same functionality of the iPhone on their current network.  Also, many of them are going to be developers who are interested in an open platform.

In addition, if the products are truly as similar as Jobs wants you to think they are, why would people switch from what they have, pay the cancellation fee, and pay the high price of a new phone just to get something they already have?  And as far as new customers go, AT&T is the breaking point there.  Signal is king; if I had been willing to accept bad signal for the phone I wanted, I would have switched to T-Mobile when the N1 was first launched.

Bottom line: different networks, different target consumers... too different for this to really matter to Apple, and too different for there to be a claim of serious competition.  What it does call into question (again, for the thousandth time) is the validity of patents on software.  I am all for protecting the rights of businessmen and inventors... what I am not in favor of is fooling the patent office into issuing a patent for something like "gestures over an icon for unlocking a device" because they obviously don't understand the technology.

Like I said, the more you think about it, the more it stinks.

Wednesday, March 3

Time to jump ship on Apple

Party's over, open-sourcers.  It was great while it lasted.

We *had* a corporate, innovative company who, while never necessarily a "friend" to open source, embraced it openly.  But we discover that they were only our friend as far as that friendship would advance them.

We *had* an ally against the great evil in the mountains of Redmond.  Their witty commercials made us laugh.  Their rising market share, eating away at Windows, was an encouragement to us all.  Maybe, we thought, the general public could leave their Windows-dominated world and try something different.

We embraced them as "enemy of my enemy".  But now, they have revealed their true nature... they are the same thing.

Apple's lawsuit against HTC validates some of the caution with which I have approached them in the years since they announced they were moving their platform to BSD.  Steve Jobs, while he gets credit for building an amazing package, is a proverbial wolf in sheep's clothing.  Sure, he likes to play the part of the super-cool techno-geek... but in truth, he's just the charismatic guy who gets the geeks to write his papers for him.  Steve Wozniak, anyone?

I'm not saying Jobs is bad at what he does... but he's no revolutionary thinker, and he's definitely not the "geek behind" Apple.  He is most certainly not worthy of the adoration, to the point of deification, that Mac enthusiasts have showered him with.  He has a lot more misses than hits.  He's sort of like an astrologer... he comes up with all kinds of wacky proclamations like "no one uses Java anymore" and "we aren't going to open our platform to external developers because no wireless provider wants a rogue app to take down their entire network".  Then there's the fact that they signed an exclusive deal with one of the worst networks as their provider for the iPhone.  But people don't remember that stuff, even though people outside of major cities are reminded of how bad AT&T is every day.  They remember his hits and think he's some kind of techno-fortune teller whose intuition on technology is not to be trifled with.

If you look at what they do best, Apple is an eye-candy company.  They have taken something technically good (BSD), put it on some average hardware, wrapped it in a nice box, and charged a premium price for it.  There's nothing wrong with that; it's a part of business to build and package something into a product people will buy.  However, let us not forget that they did it, once again, by stepping on the shoulders of geeks.

For all of Bill Gates' faults, he could at least write some code.

But, I digress.

Suing HTC over what really amounts to the Android interface is ludicrous.  One could argue that HTC modified Android with the SenseUI, but the Nexus One is included in the suit... so that argument has no merit.  Google is the maintainer of the Android project; if Apple is going to sue anyone, it should be Google.

Besides, the suit amounts to being over multitouch (not used on the N1), gestures... which have been around for a long time, and their "object oriented user interface"... uhh, yeah, had that on my Treo in 2003.

And why now?  Why just HTC?  The Blackberry Storm uses gestures and an object-oriented touch interface.  LG has a whole line of iPhone knock-offs.  This lawsuit is a CYA for Steve Jobs, because they signed a deal with a crappy network and someone else is now truly competing with them... especially as the N1 is set to launch on Verizon's network.  If Jobs were truly the technologist he claims to be, he would quit worrying about what everyone else does and go invent something else.  The fact is, he's not.  The innovation at Apple is growing pretty stale.  In fact, I think it's fitting that they tie all their products together with an i.  The iPod, iPad, and iPhone are all essentially the same thing.  Add some phone hardware and a little bit of software to an iPod and there you have it: iPhone.  Take the same idea, expand the form factor, and you've got an iPad.  The idea studio at Apple seems to be drying up, so Jobs is resorting to Microsoft-like tactics to keep the competition at bay.  In my opinion, that puts him solidly in the Microsoft category and makes him an enemy of FLOSS.  The truth is, he always was.

In the end, let's face it: this is all about self-interest.  I want my Nexus One the DAY it comes out for Verizon, and if this lawsuit delays that by a single day, I'm going to be one ticked-off geek.

Sunday, February 14

4 Simple Steps to free North American phone calls with Google Voice, Gizmo5, and Empathy

If you've got Google Voice and you have wondered, "How can I make free phone calls using this thing?", then you've no doubt seen the many articles telling you how to use Gizmo to forward to Skype.  The problem in the past has been the limited call time.

Well, I just made a 15-minute outbound call using Google Voice + Gizmo + Empathy, obviously surpassing what has been a 3-minute limit in the past.  I am not sure if the limit has been lifted or increased, or if it was something to do with forwarding Gizmo to Skype, but when I use Empathy as a softphone connection to Google Voice, it works great.  Perhaps it's a feature of the Gizmo5 acquisition by Google.

The important difference here is setting up a SIP phone rather than using call forwarding to Skype.  I had that set up at one point, but it always seemed like a shaky solution.  While I have not been a huge fan of the Ubuntu decision to move from the stalwart Pidgin to Empathy, this is the first sign (at least for me) that it will be a better long-term solution.

Here's how to set it up.
1. Sign up for a Gizmo5 account.  Sorry... if you don't already have a Gizmo5 account, you are out of luck, as they are not accepting new users until they relaunch.
2. When your account has been set up, you will need to set up Empathy (or your softphone of choice).  On the Account Overview page, make a note of your SIP number, then use the SIP settings for configuring your SIP account.  An important note: your user id is your SIP number, NOT 17471115555.  Do not copy and paste it, because it won't work.  Below is an example of what the SIP setup should look like in Empathy.
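
As a rough text sketch of those Empathy fields (the server name is an assumption from memory of Gizmo5's SIP proxy, so verify everything against your own Account Overview page):

```
Username:  <your SIP number>
Password:  <your Gizmo5 password>
Server:    proxy01.sipphone.com   (assumed; check your account page)
Port:      5060
Transport: UDP
```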

At this point it probably wouldn't be a bad idea to make a phone call to this number to make sure it works correctly.

3. Set up Google Voice

4. Once finished, you will have to verify the phone number.  Make sure you enable the dial pad on your softphone in order to enter the verification number.

And that's it.  4 simple steps to free phone calls to the US and Canada.

Wednesday, February 10

Linksys WRT310N is a FAIL

I bought this router a couple of months ago and, for the most part, it seems to do its job.  It's nothing fancy, but I have had a very strange issue for a while now where I simply end up on the wrong page.  Facebook ends up on Myspace.  Endless SSL issues where the certificate I get doesn't match the site I am trying to hit.

My first thought was that someone was messing with my router.  It's happened before, even though it's locked down to MAC addresses, the radio is off, and the router is secured.  After searching through the logs, confident that no one was in there maliciously, I headed to the next likely culprit: my ISP.  Where I live in the country, broadband internet choices are limited.  I chose DSLExtreme... funny name, but their service is good, and their support staff not only speak *good* English, they are actually knowledgeable and don't just mindlessly follow a flowchart.  However, the fact that they are not a Verizon or an AT&T sometimes makes me wonder if there may be wonkiness the big guys have already worked out.  Still, after talking to the techs there for about a half hour, we found nothing.  They suggested I clear the DNS cache on my router and my computer.  I did that and it seemed to work for a while, but the problem came back not long after.

In retrospect, now that I know what the issue is, it's pretty obvious why it worked immediately after clearing the cache; the problem just cropped up again so quickly that I thought it hadn't been fixed.  So I dealt with the issue until this morning, when I got fed up enough to go searching.  Turns out there's a whole community of people with the same problem.  The user forum for the 310N has a thread, started sometime last September, which points to the issue; it also occurs on the 160N.

The worst part about it is, Linksys is not fixing the issue.  Multiple people have gotten the "Sorry, we've never heard of this issue before." response... FAIL!  In an interconnected society like ours, with user forums, social networking, and so on, you can't tell more than one person that before we all know how badly you fail.

The workaround works.  I might even venture into the DD-WRT world, which has an experimental version of its firmware that is supposed to work.  Still, this seems like a pretty obvious issue that needs to be fixed, and not with a workaround like editing network settings on every computer.

Monday, January 25

Polyphasic sleep... an also-ran story.

Yeah, I'm trying it.  The duties of husband, father, son to aging parents, and geek simply demand more than 16 waking hours.  No, this blog is not about to become an Uberman blog... although I'd probably get more hits that way.

As I sit here at 5 am EST, I am about 5 days into the adjustment period.  It is difficult... not the most difficult thing I have ever done, but it is definitely not easy.  In doing this I have discovered some things about myself, which are always fun.

1. I am not as undisciplined as I thought.  I just really have to want something.  The idea of only having to sleep 2 hours a day and being able to maintain that over long periods of time is extremely enticing.

2. It is friggin cold in my office at 5 am.

3. I like sleeping in.  I like the feeling of knowing I'm supposed to get out of bed and rolling over instead.

4. My hardest nap to get up from is the one from 4am to 4:20am.   I usually need every alarm at my disposal to do it.

5.  In the 5 days I have done this, my time required to fall asleep has greatly decreased.  I am now falling asleep in 1-4 minutes instead of 20 minutes to an hour.

What I can't find are references to ill effects on health.  I have seen a few testimonials but no hard evidence.  I think it is worth studying.

Friday, January 22

The Door to Nowhere

Ever seen something like this:
[image: the "Door to Nowhere" (via Epic Fail pictures)]

It's one of those things you wonder... what were they thinking when they designed something silly like that?

You see examples of this all the time in code... hey, I'll admit it, I've *done* stuff like this before.  You build an opening for extension to something that can't possibly exist in your application.

This is one of the reasons I believe lean software development is so difficult.  Anyone can walk by this door and notice that it has absolutely no conceivable use besides making it easier for someone to fail at suicide.  However, layer upon layer of abstraction sits in our code today with absolutely no conceivable use except that it's "abstract" and "extensible".  Architects draw pictures of nicely abstracted applications, neatly separated concerns, and built-in abstraction for extensibility.  Then they look at the picture, pat themselves on the back, and hand it off to a peon developer.

Lord Architect: "Here developer, all you need is right here, just code it up.  It shouldn't take you longer than an hour."
Programmer: "Why do I have to build three classes just to build a calculator that can add and subtract?"
Lord Architect: "Because around here we use interfaces for everything so it is extensible."
Programmer: "But you have a method in the Calculator interface called checkSpelling... isn't it a calculator?"
Lord Architect: "We design code for re-usability and extensibility.  This fits with the long term design concept and it's already been approved.  We can remove it later, but write it like that for now."

Bottom line, building software is not like building a bridge, or as in the example, a mall with doors that make sense.  However, there are striking similarities in the finished product, the difference being that, in software, they are a lot harder to recognize until you get your hands deep into the code.

While not a silver bullet, there is a way to avoid a pitfall like this, and it comes from the book Practices of an Agile Developer: architects must write code.  It is a lot easier to see the stupidity of an idea as it is being built than to try to work with it after the fact.

An addition of my own, developers must challenge the architect when something doesn't make sense.  Most developers are just as qualified to be architects as the architects themselves and they can often see waste as they are building it.  I guarantee the construction workers who installed this door said, "This is pretty stupid."  If you are in an environment where architects design applications but don't code, the onus is even greater upon the developer to raise a concern if something doesn't make sense.  Architects, no matter what they think they are, are not gurus or gods of programming.  If the design doesn't make sense, they should be open to the idea that what they designed won't work in practice.  A mature, professional architect should not even want a programmer who is just a robotic extension of themselves. 

In addition, if you are a programmer, you are expected to think, not just type as fast as you can to get the architect's design down on paper.  It is unacceptable for a developer to say, "This is stupid," but then build it anyway.  If the developer asks a question, maybe there actually is a reason, and they can be much more satisfied with their solution.  Working together and fostering an environment of open communication, where people give and take constructive criticism, is essential to producing applications that do not succumb to problems like the Door to Nowhere.


Monday, January 18

Measuring SSD performance: Follow-up

After writing up this post I decided to go out and see what else I could do to speed things up.  I saw a suggestion for tweaking sreadahead, but ureadahead is what we use in Karmic and it already detects and optimizes for SSDs.

I did, however, see numerous suggestions that adding elevator=0 to the grub boot parameters will increase performance even more.

Here are my results:
$ sudo hdparm -t /dev/sda
 Timing buffered disk reads:  462 MB in  3.00 seconds = 153.85 MB/sec

$ sudo hdparm -t --direct /dev/sda
 Timing O_DIRECT disk reads:  642 MB in  3.00 seconds = 213.91 MB/sec

According to that, the improvement is minimal, but the dd test gives slightly different results:
$ sudo dd if=/dev/sda1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 1.04065 s, 202 MB/s

Wow, 178 to 202.  And it does feel even snappier in my environment.  To implement this tweak, edit /etc/default/grub and make the following change:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=0"

Then update the grub configuration:

sudo update-grub

Note: DO NOT make changes to /boot/grub/grub.cfg.  This file is overwritten every time update-grub is run, which happens whenever an update is made that the bootloader needs to know about (new kernel, etc.).
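If you'd rather script the edit than open an editor, the substitution can be done with sed.  This is just a sketch, run against a sample line rather than the live file; only point sed at /etc/default/grub (with -i) after backing it up.

```shell
# Demonstrate the elevator=0 substitution against a sample line,
# not against the real /etc/default/grub.
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' \
  | sed 's/quiet splash/quiet splash elevator=0/'
# prints: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=0"
```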

Wednesday, January 6

Measuring SSD performance
First off, I'd like to express my undying love for  It's a paradise of geekiness that I'm not totally sure how the world did without prior to 2001.

OK, let me compose myself here and get down to business.  I just did an upgrade of my main desktop, the one I do all my work from, to a new Phenom II X4 965 BE, added an 850W power supply, and finally upgraded to a solid state disk, opting for a 30G OCZ Vertex Turbo.

Of all the improvements, the most noticeable was the SSD by far.  Sure, my machine now compiles code while my 4 cores running at the stock 3.4GHz barely acknowledge my existence.  Sure, I have been working for the last 10 minutes and completely forgot that the machine was transcoding my entire library of videos and music in the background.

The thing I notice is the nearly instantaneous response I get when I ask for something.  Applications open instantly.  Boot time is ridiculously fast.  My computer can now beat my TV from cold boot to working environment.  It takes me 36 seconds to reboot this thing from the time I say "restart" to the time I'm back at a working screen.  POST to working desktop is about 16 seconds.

You may say, "But you're running Linux, why would you have to reboot?"  The fact is, booting is a very disk-intensive process and it's something almost everyone is familiar with.  So even though I reboot very little, maybe once a month, it's still a reminder that this disk is smokin' fast.

There are some theoretical drawbacks.  SSDs have a limited number of write cycles; once those cycles are used up, you get unusable space on your disk.  It's sort of like the argument that processor dies can crack from thermal expansion if you turn your machine off and on.  When was the last time you actually had a die crack?  My father, the miser that he is, shuts off his machine every single time he gets up to save on power.  He has a 6-year-old Dell Dimension Craptaculous with dust coming out of every air hole and his processor is still working.  If you know enough to be worried about thermal damage, you probably aren't the type of person who is going more than 6 years between processor upgrades.

The same is true of the SSD.  By the time this thing succumbs to its write cycle limit we're going to be storing data in crystals and you won't be able to buy a hard drive that's less than a terabyte.

You may have also noticed that the disk is fairly small.  I have always been a fan of small boot drives.  As recently as 2 years ago, I was still running a 10G boot drive at an awful 5400 RPM.  Still, seek time was greatly reduced just by the fact that there was less space to seek.

As with anything, there are advertised and actual performance numbers.  My drive is advertised at 100 to 145 MB/s sequential write and up to 240 MB/s sequential read.  Below are the actual numbers I get using hdparm.  Can you find the SSD?

$ sudo hdparm -t /dev/sda
 Timing buffered disk reads:  460 MB in  3.01 seconds = 152.81 MB/sec 

$ sudo hdparm -t --direct /dev/sda
 Timing O_DIRECT disk reads:  640 MB in  3.00 seconds = 213.32 MB/sec

$ sudo hdparm -t /dev/sdb
 Timing buffered disk reads:  216 MB in  3.01 seconds =  71.78 MB/sec

$ sudo hdparm -t --direct /dev/sdb
 Timing O_DIRECT disk reads:  250 MB in  3.00 seconds =  83.32 MB/sec

$ sudo hdparm -t /dev/sdc
 Timing buffered disk reads:  228 MB in  3.01 seconds =  75.69 MB/sec

$ sudo hdparm -t --direct /dev/sdc
 Timing O_DIRECT disk reads:  198 MB in  3.03 seconds =  65.41 MB/sec

$ sudo hdparm -t /dev/sdd
 Timing buffered disk reads:  172 MB in  3.02 seconds =  57.02 MB/sec

$ sudo hdparm -t --direct /dev/sdd
 Timing O_DIRECT disk reads:  176 MB in  3.02 seconds =  58.33 MB/sec

Here's more using dd:
$ sudo dd if=/dev/sda1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 1.18009 s, 178 MB/s

$ sudo dd if=/dev/sdb1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 2.3639 s, 88.7 MB/s

$ sudo dd if=/dev/sdc1 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.04812 s, 68.8 MB/s

$ sudo dd if=/dev/sdd2 of=/dev/null bs=4k skip=0 count=51200
51200+0 records in
51200+0 records out
209715200 bytes (210 MB) copied, 3.4337 s, 61.1 MB/s
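If you end up repeating these runs, the throughput figure can be pulled out of dd's summary line with awk rather than eyeballed; a small sketch, using the sda1 line above as sample input:

```shell
# Extract the MB/s figure from dd's summary line.
# The rate is the second-to-last whitespace-separated field.
line='209715200 bytes (210 MB) copied, 1.18009 s, 178 MB/s'
echo "$line" | awk '{print $(NF-1)}'   # prints: 178
```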

While I don't appear to be getting the promised 240 MB/s, it still laid waste to all the other disks in there, which are all 7200 RPM WD drives, new as of last April.  In all honesty, the test would be a lot fairer if I had a VelociRaptor to compare to, but from what I've read, it's still no contest.

So if you are looking into things and thinking now may be the time to try out an SSD with your next upgrade, I say: don't think about it again, just do it.  The numbers speak for themselves.  That said, there are some things you can do to get the most out of your drive.  Obviously, this applies to Linux users... if you're using Windows I'm sure there's some kind of freeware optimizer out there.

1. Most filesystems are, by default, set up for traditional hard drives.  In /etc/fstab, set the options noatime and nodiratime on your SSD.
2. While I did a lot to discourage you from using the write cycle argument as a reason not to get one, there are some things you can do to cut down the number of write cycles you actually make.  If you're still on the fence and thought to yourself, "What about log and tmp files?", worry no more: just create ramdisks in your fstab.  My first reaction to this idea was, "You aren't consuming memory with log files on my machine."  While I might not suggest this for a server, I did some checking on sizes; the log file was never over 6MB and the others were 5-10K.  Nothing to worry about on a modern system.  This is probably not a bad idea on any system with more than a couple of gigs of memory.

tmpfs    /var/log     tmpfs    defaults    0  0
tmpfs    /tmp         tmpfs    defaults    0  0
tmpfs    /var/tmp     tmpfs    defaults    0  0

3. Manage your swappiness.  Add the following to your /etc/rc.local file.  This will discourage a lot of swapping.  It makes your machine more memory-dependent but, again, if you're going to drop $100+ on a solid state drive, you are more than likely running more than 2GB of memory.

sysctl -w vm.swappiness=1
sysctl -w vm.vfs_cache_pressure=50
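Item 1 above names the mount options but not the line itself.  As a sketch, an fstab entry for the SSD root filesystem might look like the following; /dev/sda1 and ext4 are assumptions here, so match the device, filesystem, and any existing options to your own fstab:

```
/dev/sda1    /    ext4    defaults,noatime,nodiratime    0    1
```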


You'll notice I didn't take every suggestion I found.  I considered these to be the most obvious in relation to the SSD itself.


Monday, January 4

Update on TypeMatrix progress

[Image: QWERTY 2030 USB, by balise42, via Flickr]
This is pretty much the last update on this.  It has now been a solid two months since I received my TypeMatrix keyboard, and I decided to update based upon something I noticed the other day.  Very simply, I noticed that I was consistently and accurately finding all the keys, including the symbol keys, and that my typing speed has nearly returned to normal.  Just to verify, I went through the typing tests in gtypist and found that my speed was consistently hovering around 95 wpm.  My previous speed was 102, so I am at 93% of my previous typing speed... and those speeds include symbols.  Considering that I have started going back and forth between traditional keyboards and the TypeMatrix again, that is not bad.  I am continually progressing in speed, so I believe I will very quickly overtake my old speeds.

In my opinion, this makes the move to TypeMatrix complete.  It's been two months and several thousand lines of code, emails, Facebook updates, and whatnot... this does not seem promising for my ability to learn a completely new layout (moving to Dvorak), which is a bit disappointing.  Given that key locations are only slightly modified on the TypeMatrix, Dvorak may just be a project for my kids rather than myself.

PGP Encryption for Java

Setting up Java to work with an encryption provider can be difficult.  From the day that PGP was declared a munition, setting up encryption became a lot harder.  The saga of PGP is a fairly entertaining story and worth a read.

These steps should bring you from nothing to a working cryptographic development environment on a Linux machine using Eclipse and Maven.

 1. Create a new maven project from the quickstart archetype
 2. Make sure the project is set up to build for Java 1.6, both in the POM and the Eclipse builder
 3. If you do not already have the gpg package installed on your machine: sudo apt-get install gnupg
 4. Execute gpg --gen-key (this will walk you through the process of generating a new gpg key)
 5. Create a keys directory in your project and move the files generated during key generation there.
 6. On an Ubuntu machine, the basic Java policy files are stored in /etc/java-{major_version_number}-sun/security.  Go to the downloads page for the correct version of Java, find the JCE Unlimited Strength Jurisdiction Policy Files, unzip them, and move the jar files to that security directory with some type of tag in the name to mark them as the unrestricted files.  Then move the current files from JAVA_HOME/jre/lib/security to the same directory and tag them as restricted.
 7. Create a symbolic link to the unrestricted files in the JAVA_HOME/jre/lib/security directory.
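The command-line half of those steps can be sketched in shell.  Everything below is illustrative, not a ready-made installer: the Java major version and file names are placeholders for your own install, and the privileged copy/link commands are left as comments since they require root and the actual downloaded jars.

```shell
#!/bin/sh
# Helper: the Ubuntu system policy directory for a given Java
# major version, as described in step 6.
policy_dir() {
    echo "/etc/java-$1-sun/security"
}

JAVA_MAJOR=6                      # placeholder: your Java major version
SEC_DIR=$(policy_dir "$JAVA_MAJOR")
echo "Policy directory: $SEC_DIR"

# Step 3: install GnuPG if it isn't there yet.
#   sudo apt-get install gnupg
# Step 4: generate a key pair (interactive).
#   gpg --gen-key
# Step 6: drop the unzipped unlimited-strength jars into $SEC_DIR with a
#         tag in the name, and tag the originals from
#         $JAVA_HOME/jre/lib/security as restricted.
# Step 7: symlink the unrestricted jars back into the JRE, e.g.:
#   sudo ln -sf "$SEC_DIR/local_policy-unrestricted.jar" \
#               "$JAVA_HOME/jre/lib/security/local_policy.jar"
```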

Sunday, January 3

If you do this... well, there's no fixing you.

I can't believe it took me over a year to write about exceptions given the name of this blog, but here we go. From the "really? *facepalm*" department.

Ron White has a saying that I mostly agree with: "You can't fix stupid."  To make this technically correct, what I think he meant to say is, "You can't fix stupid people."  Stupid things you can certainly fix.  Developers make stupid mistakes all the time... that's one reason we have statically typed languages, compile-time checking, and static code analysis.  Then there's stuff like this, where you just have to shake your head.  (This is actual code, not a contrived example.)

try {
   //contents that throw an exception
} catch (LocalException e) {
   e.printStackTrace();
} catch (SystemException e) {
   e.printStackTrace();
} catch (Exception e) {
   e.printStackTrace();
}

I'm sorry, but there is no other way to say it: that is stupid, and it should require no explanation why.  There's absolutely no reason for that, even in test code or sandbox/mess-around code.  Does it have a major impact?  Not really.  If it's not high volume and it's unlikely that an exception will be thrown inside the try, then no one will probably notice.  Most importantly, it doesn't fail... but to say it works is like saying a flat tire "works".  Make no mistake, this is broken.

Besides the fact that you should NEVER print a stack trace into nothing, in the event an exception is actually thrown you will force the program to perform three operations where the output is exactly the same [wrong] thing.

I'm a big fan of writing code as simply as possible. I have been known to over-complicate things before and I've usually paid for it. But this isn't simple, it's either lazy or stupid; it might be both. I'm not sure which is worse... and I don't think there's a fix for either.