Tuesday, September 14, 2010

New York City Voting Machine Problems - Implementation? We Don't Need No Stinking Implementation Plan!

I've spoken out before on voting machines and the related quality issues that concern me. So, this article caught my eye. It seems that today in New York there were major problems (not glitches) in the rollout of new voting machines. One might think the cause was software or hardware related. However, upon reading the account, it seems that the problem was largely in the implementation.

$160 million spent and this was the result.

I like the idea of a "teachable moment," and this is one. It doesn't really matter how good the software is if you don't have the hardware plugged in!

http://newyork.cbslocal.com/2010/09/14/new-yorkers-head-to-the-polls-on-primary-day/

Wednesday, September 01, 2010

Do You Test SaaS, Or Does SaaS Test You?


Imagine you are sitting at your desk one morning and your phone rings. It's not your boss - it's your boss's boss, the CIO. He's upset because the online sales database is down and the entire sales staff is paralyzed. "The sales software is broken!" he exclaims. Then he asks, "Didn't you test this?" You take a deep breath and say, "No, that software is a service we subscribe to. We have no way to know when it changes." The CIO isn't happy, but now sees the reality of Software as a Service (SaaS).

I believe that SaaS is not a trend, but a major force that will shape the future of IT and, therefore, software testing. In fact, SaaS could dramatically change the way we think about and perform software testing from here on.

This article takes a definite angle toward the risks of SaaS. I fully embrace and appreciate the benefits; I use SaaS myself, as you will see later in this article. My aim here is to point out the risks. Only you can weigh them against your own benefits.

What's Different?

1. You have little or no control over the software, the releases, and any remedies for problems.

In some cases, you may get the choice of when to accept an upgrade. However, even when you are on a specific version of the application, the vendor may choose to make a small change without advance notice.

In other cases, you may come in to work and notice that things on the application look a little different. Or, you get the dreaded phone call that something is broken.

But here's the real rub: Even if you isolate the problem and report it, you are still at the vendor's mercy to fix the problem.

At least with commercial off-the-shelf (COTS) software, you get to choose when to deploy it, so you have the chance to test and evaluate it beforehand in a test environment.

2. You have almost zero knowledge of structure, and often no knowledge of new and changed functionality.

Forget white-box testing, unless it is for customized interfaces (for example, APIs). Now, let's say you have noticed some functional changes. What really changed? What stayed the same? What are the rules? When are they applied? Many times, none of this is published to the customers.

Not only is SaaS a black box, it is more like many black boxes, all in a cloud. Your SaaS application most likely comprises many services, each with its own logic. In fact, some of the services may be supplied by a vendor you are not even aware of.

3. There is no lifecycle process for testing.

In the past, we could test at various levels - unit, integration, system, and UAT. That's all gone with SaaS. It's all UAT. Sure, you can test low-level functionality and integration, but from the customer's perspective, not the developer's. In the lifecycle view of testing, your tests can build on each other. In the UAT or customer view, testing is a "big bang" event. So, forget "test early and often." That changes to just "test often."

4. Testing is post-deployment.

There may be exceptions, such as beta testing. But for most people, you get to see the software only after it is deployed. That's too late to prevent problems; all you can do is race to find them quickly. In other words, you are in reaction mode instead of prevention mode. And even if you find the problems before your customers do, they may not be fixed for some time.

5. Test automation is fragile (and futile).

You get little return on investment, because ROI in automation comes from repeating tests. When a new version is released, there's a good chance your automated testware will no longer work, which means you risk reworking your test automation at every release. That said, some people find that automating a few basic functions is a useful way to monitor when changes have been made (see the sketch below).
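To make that concrete, here is a minimal sketch in Python of the kind of lightweight "canary" check I mean - not a regression suite, just something that tells you when the vendor has quietly changed a page you depend on. The URLs and marker text are hypothetical; substitute the pages your own organization actually relies on.

    # A minimal change-detection canary, not a regression suite.
    # The URLs and marker strings below are hypothetical examples.
    import urllib.request

    CHECKS = {
        "login page":   ("https://app.example-saas.com/login",   "Sign in"),
        "sales report": ("https://app.example-saas.com/reports", "Pipeline Summary"),
    }

    def smoke_check():
        for name, (url, marker) in CHECKS.items():
            try:
                page = urllib.request.urlopen(url, timeout=10)
                html = page.read().decode("utf-8", errors="replace")
            except Exception as exc:
                print(f"FAIL    {name}: {exc}")
                continue
            if marker in html:
                print(f"OK      {name}")
            else:
                # The page answered but no longer looks the way we expect -
                # a hint that the vendor pushed a change overnight.
                print(f"CHANGED {name}: expected text not found")

    if __name__ == "__main__":
        smoke_check()

Run something like this on a schedule and at least you hear about a surprise release from your own script instead of from the CIO.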

6. Test planning is also futile.

This is because there is little detailed knowledge about the application in advance of using it. There is no specification to base tests on unless you write one yourself. Use cases might be effective for describing the work processes supported by the application.

Is There Any Hope?

Yes, and it's called validation. Not validation in the sense of "all forms of testing", but validation in the sense of making sure the software supports your needs.

In Point #6 I mentioned use cases. This is a form of test planning you can perform. You can create test scenarios based on work flows you perform in your organization. However, you must not think in terms of software behavior. Instead, you must describe work processes that can be tested no matter which software is being used.

You can create use cases that describe work processes, not software processes, and use them as a basis for testing. The flows map naturally onto test scenarios, as the sketch below illustrates.
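As a rough illustration (the workflow names, steps, and risk ratings are invented), the scenarios can be kept as plain data, described in business terms rather than screen-by-screen clicks, so they survive a vendor-driven UI change:

    # Work processes described in business terms, not UI terms.
    # Workflow names, steps, and risk ratings are hypothetical examples.
    WORKFLOWS = [
        {
            "name": "Enter a new sales opportunity",
            "risk": "high",
            "steps": [
                "Look up the customer account",
                "Create an opportunity with amount and close date",
                "Verify the opportunity appears in the pipeline report",
            ],
        },
        {
            "name": "Export the monthly pipeline report",
            "risk": "medium",
            "steps": [
                "Run the pipeline report for last month",
                "Export it to a spreadsheet",
                "Verify the totals match the dashboard",
            ],
        },
    ]

    # When a surprise release lands, walk the high-risk flows first.
    for flow in sorted(WORKFLOWS, key=lambda f: f["risk"] != "high"):
        print(f"[{flow['risk'].upper()}] {flow['name']}")
        for step in flow["steps"]:
            print("  -", step)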

You can assess the relative risk of each work flow to prioritize your testing. You can even use pairwise testing to reduce the number of combinations of test scenarios and test conditions; a small example follows.
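Here is a small sketch of the pairwise idea using a greedy pair-covering selection. The parameters and values are hypothetical, and real projects often use a dedicated pairwise tool instead, but it shows the kind of reduction you get.

    # Greedy pairwise (all-pairs) selection: pick rows from the full
    # cartesian product until every pair of parameter values is covered.
    # Parameter names and values are hypothetical examples.
    from itertools import combinations, product

    def pairwise_rows(parameters):
        names = list(parameters)
        all_rows = list(product(*(parameters[n] for n in names)))

        def pairs_in(row):
            # All (parameter, value) pairings present in this row.
            return {((i, row[i]), (j, row[j]))
                    for i, j in combinations(range(len(names)), 2)}

        uncovered = set().union(*(pairs_in(row) for row in all_rows))
        chosen = []
        while uncovered:
            # Take the row that covers the most still-uncovered pairs.
            best = max(all_rows, key=lambda row: len(pairs_in(row) & uncovered))
            chosen.append(best)
            uncovered -= pairs_in(best)
        return names, chosen

    params = {
        "browser": ["Chrome", "Firefox", "IE8"],
        "role":    ["admin", "sales rep", "manager"],
        "locale":  ["en-US", "fr-FR"],
    }
    names, rows = pairwise_rows(params)
    print(f"{len(rows)} scenarios instead of {3 * 3 * 2}")  # typically about 9 instead of 18
    for row in rows:
        print(dict(zip(names, row)))

The greedy cut isn't guaranteed to be optimal, but it illustrates how pairwise testing shrinks the scenario list when a release drops with no warning and you have limited time to react.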

However, this will help you avoid problems only in certain situations: 1) you are a beta tester, or 2) you get to choose when to apply an upgrade.

There Will Be Pain

You arrive at work on Monday morning and discover one of your SaaS applications has changed. You and your team scramble to test the high-risk scenarios (manually) and discover that many things are broken. You call the vendor only to learn that your wait time on hold is 30 minutes. You file issue reports and just get the auto-responder messages. Days go by. You keep trying, your manager tries calling, but hey, you're just one customer out of thousands. In many cases, there is no "Plan B."

This is the extreme end of the risk scale. Not all releases fail, and when they do fail, not all fail to this degree. However, the risk is real.

My Story

Just last week I experienced a problem that would have been very painful had it happened while I was teaching an online class. My online training services provider upgraded to a new version of the web presentation platform, but gave no notice. All they indicated was that the site would be down for maintenance on Friday evening. It turns out they introduced a totally new and upgraded platform with several improvements. However, several key things failed: 1) I could record a session but never get the file (I wasted an hour recording a session to find this out!), 2) I could not upload a file for presentation, and 3) I could not install the new screen-sharing software on Windows XP. There's not much you can do with web meeting software behaving like this.

I called tech support on Saturday. Guess what? Nobody home, even after a major release. I filed three problem tickets which, as of this writing, still have not been closed out, even though the problems were fixed about two days later. (I did get one response.) I'm happy things are working now, but troubled about the way the release and post-release were handled. If I had been scheduled to teach an online class on Monday, I would have been stressed out for sure.

My case is fairly low-impact, but it did show me the risks of SaaS. I encourage you to keep these risks on your radar because like it or not, SaaS is in your future.

I would like to hear your experiences and ideas for testing SaaS, so leave a comment!