
Tuesday, 1 October 2013

Does your company choose software because it looks pretty?

I've noticed that technical departments get a bit excited when they get the opportunity to choose new software. Once the budget has been signed off, there's a feeling that all our woes will disappear with the arrival of the New Software. The prospect of a new monitoring system, business reporting system, version control system, or anything else can easily whip up heated debates fuelled by political, technical and economic visions within the company. Nevertheless, the word from management is that the New Software will save us all.

Yet, like the average Hollywood blockbuster, when it arrives it often fails to deliver. You can't do what you were able to do before, it seems more complicated, there are bugs, and worst of all, it may suck even more than the current solution. And while you're cursing the changes you have to make to get less done, you're thinking "How the hell did this happen?"

Here are my answers to this question.

1. A failure to fully understand the business problem.

This is probably the biggest mistake. People often complain about software, but are the problems more related to poor training or a bad implementation? And what exactly does the solution look like? Sometimes people get carried away trying to implement a vision of what they would like rather than solving the business problems they actually have.

2. Consulting with the wrong stakeholders.

For some reason, there is an understanding that managers are best placed to determine software requirements. I think managers are probably best at being managers, and it's not reasonable to expect them to have a detailed understanding of the current problems with monitoring virtual infrastructure, ticket management, backups, etc.

If you need to understand monitoring, talk to Operations. If you want to understand ticket management, talk to the people who work with tickets all day. That's not to say that management don't have valid requirements, but they must be weighed alongside those of the people who actually use the systems on a day-to-day basis.

3. Going with the latest craze or marketing hype.

I remember when I was evaluating monitoring systems a few years ago and I looked at a relatively new product called Zenoss. One of the big marketing claims was that it was "agent-less", so you didn't have to install any software on the client machines. Except the SNMP agent. Which doesn't have the capabilities of a decent monitoring client. So you had to write scripts that were triggered by SNMP requests or over SSH. Which, as far as I am concerned, is a worse solution than installing the monitoring clients in the first place.
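To give a flavour of what that "agent-less" workaround looks like in practice, here's a minimal sketch of the kind of check script you end up writing anyway. It assumes you're using net-snmp's snmpd and hooking the script in with an extend entry; the script name, path and threshold are made up for illustration.

    #!/usr/bin/env python
    # Hypothetical check script for the "agent-less" approach: net-snmp's snmpd
    # can run it via an "extend" line in snmpd.conf, for example:
    #   extend rootdisk /usr/local/bin/check_root_disk.py
    # and expose the output and exit status under NET-SNMP-EXTEND-MIB.
    # The path and threshold below are made up for illustration.
    import os
    import sys

    THRESHOLD = 90  # percent used before we raise an alarm

    def percent_used(path):
        stat = os.statvfs(path)
        total = stat.f_blocks * stat.f_frsize
        free = stat.f_bavail * stat.f_frsize
        return 100.0 * (total - free) / total

    if __name__ == "__main__":
        used = percent_used("/")
        print("root filesystem %.1f%% used" % used)
        sys.exit(1 if used > THRESHOLD else 0)

Multiply that by every metric you care about, on every host, and the "agent-less" saving evaporates pretty quickly.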

There's always hype, but you have to stay focused on what's going to solve your problem, not the problem some marketing department is trying to convince you that you have.

4. Not testing with key users.

I don't always know what to say when the New Software has been worked on for months under stealth (be it for some political reason, or simply because the implementation team didn't know any better), is finally unveiled, and the true awfulness of the solution comes to light. The shock is often compounded when I find out how much was paid for the turkey.

Tests should always be done with end users to make sure things stay on track. Users can often tell you quickly if a product is a wrong fit, saving everyone time and money.

5. Choosing the software with the best looking interface.

I like good-looking software, and interfaces do play a part in being productive. But I have yet to find a correlation between the aesthetics of a user interface and the quality of the software. Sometimes these interfaces aren't actually easy to use, they just look pretty in the screenshots. And like the Sirens, it's only when you take the plunge that the nightmare truly begins...

You can only get caught out if you don't really test the software or don't have a good checklist of things you need the software to do.

Have I missed anything? Let me know...