Thursday, December 27, 2012
One practice I learned from Jim Rohn is to reflect at the end of a period of time and think about:
What went well?
What could have been better?
What can we build on?
What can we be thankful for?
What have we learned?
Mr. Rohn talked about the impact this would have over years and even a lifetime. I am convinced this is a big part of gaining true wisdom.
Prolific leadership author and coach John Maxwell says he does this on a weekly basis, sitting in his hot tub. I tend to reflect on an annual basis (not in a hot tub).
In my "Becoming an Influential Test Team Leader" tutorial, we discuss this as one of the 15 ways you can add value to your team without spending a lot of money. This really is "low hanging fruit," but we tend to miss it. Even individually, as a leader, this is a good time to think back over the year and roll the lessons into major themes to remember and value.
I would caution you about being too hard on yourself. Sometimes hard introspection is needed, but we get enough negativity from external sources. Imagine what a coach might be telling you. Not the coach who was always berating you, but the one you found encouraging, yet holding a hard line of accountability.
I don't know how many things you will have on your list. By the way, this is a good time to journal them. (You don't keep a journal? Fix that in 2013!) I typically have about ten to twelve things that stick out over the year. It is interesting to go back several years to see if I really am learning from my personal retrospectives.
In many ways, life and work is a test. So think of this as your annual "test summary report." I hope things went well for you this year. I hope in the areas they didn't go so well, that you find 2013 to be a better year. Inasmuch as things depend on you, I hope you gain the skills and knowledge to excel. In the areas that are circumstance-driven, I hope you will find peace and endurance. I'm pulling for you!
Thursday, December 20, 2012
However, my mind is going a different direction.
In our neighborhood, we have several pizza places: some small mom-and-pop shops, some large chains, some franchises. You probably have the same mix in your area. They have varying levels of quality. In fact, the "big name" is the worst by far. The smallest place was the best by far. I say "was" because they closed this week, which really disappointed me.
What has happened over the years is that the price point for a "large" one topping pizza has reached $5. Anything over that is seen as "too much". Quality isn't a factor, only speed and price. So a race to the bottom has ensued.
Really, at $5 a pie, that's in the frozen pizza range!
I see the same thing in software development and testing. How much is "expensive" for an iPad app? $1.99? $5.99? $19.99?
How much is a bug worth? $1, $5, $5,000, $40 million (if you are Nasdaq!)?
How much are testers worth? (What if they really knew your business and were the glue that holds a project together?)
How much is a good developer worth? (What if they were crazy good, not "kings and queens of their domain" and your products were amazing?)
Pizza has become a commodity and, unfortunately, so have software development, testing, and training.
How much is a really good pizza worth to you? I'll bet it's more than $5. How much would you pay in a place like Rome, or with a good friend or loved one?
Cheap pizza, like cheap testing, gives a false sense of value. Cheap doesn't necessarily mean bad. However, cheap is often bad. Value is good outcomes for a reasonable cost.
By the way, I would have gladly paid much more to keep my favorite place open. At this point, I must confess that I am part of the problem whenever I buy things on price over value. It fuels the problem.
As testers and software people, we need to provide value to a market that wants cheap and quick. When we stretch things too thin, we provide little value and even negative value at times.
Now...I know you all will suggest great places for pizza that is not cheap. Feel free. I travel a lot and keep a list. I just need a good place close to me!
I wish you all a very blessed Christmas and a very prosperous and healthy 2013!
Friday, December 14, 2012
2:00 EST - 3:00 EST
Presented by Dexter Oliver of Integritas Solutions and Randy Rice, of Rice Consulting Services, Inc.
The Centers for Medicare & Medicaid Services (CMS) has mandated that everyone covered by the Health Insurance Portability and Accountability Act (HIPAA) use the ICD-10 coding standard by October 1, 2014.
This mandate will impact all entities that report and pay claims. This includes hospitals, clinics, insurance companies and physicians. Furthermore, service providers such as medical coding services and other vendors, such as software companies will also be greatly impacted.
The U.S. Department of Health and Human Services predicts that claim errors will rise to between 6% and 10% of all claims, up from an annual 3% under ICD-9. There are many technical challenges with the ICD-10 conversion because of the breadth and depth of how the classification codes are used in both clinical and business processes throughout the healthcare ecosystem. HIM Managers must partner with their IT organizations to implement solutions as well as develop testing strategies to mitigate the major financial and operational risks associated with this transformation.
In this FREE webinar, you will learn three critical steps to help develop a testing strategy that will help organizations achieve compliance cost effectively.
Register at: http://www.anymeeting.com/PIID=E951DB80854A30
Wednesday, December 12, 2012
Copyright 2012, Dimensions: 7" x 9-1/4", Pages: 320
Click to see on Amazon.com
While your situation will likely not be the same as Google’s, there is a lot to be learned from how they do things in development and testing. That’s because they seem to have the secret formula for getting features to market quickly and with good quality.
Not only did this book give me ideas about how to make software testing more productive, it can give anyone a perspective on software testing not found anywhere else. Most other books address testing from the perspective of “Here’s how testing should be performed.” This book comes from the angle of “Here’s how we do testing.” There is a big difference.
It is tempting to skip the preface and introduction when reading a book. However, these provide critical context and a good summation of what you can expect to take away from the book.
You will see several perspectives of testing at Google:
First, there is the historical perspective of how Google matured both as a company and test organization.
Second, you will read how James Whittaker, an already accomplished and notable testing guru, joined Google and had to do innovative things of value to carry his weight there.
Third, you will read perspectives by the co-authors and their interviews with developers, testers and managers at Google about their roles and responsibilities.
Finally, the authors outline in complete detail both how Google tests and why they do things the way they do. Some key takeaways for me were:
· Using tours as a basis for exploratory testing,
· The concept of writing a 10-minute test plan,
· The value of crowdsourcing for testing,
· Getting maximum value from early testing from test engineers who are developers at their core (People always want to get better testing earlier in projects. This book explains how to do that!),
· Seeing Google's testing framework in action.
I can highly recommend this book to people who are looking for new ideas to revamp testing processes and organizations.
Monday, November 19, 2012
In the most recent version of CTE-XL, 3.1.3, the vendor has made virtually every feature except drawing a tree (which you can do with many other free tools) a "professional" option. The pro version is 1,250 Euros per node-locked user, plus VAT. So, the free version is worth what you pay for it. Is the pro version worth the cost? Probably, if you are going to use it a lot and don't plan to deploy it to a lot of users. My issue is, why bother with a "free vs. pro" option if the free version gives little idea of the concept of the tool, which is test generation?
So, it was good while it lasted. For those of you with prior versions installed, stay with them.
Just as an example, here is a tree created in an awesome, free tool called yEd:
You can't generate test cases with yEd, but you can't do that anymore with CTE-XL, either. At least with yEd you can also create flowcharts, UML models and many others.
In the meantime, I am looking for a good developer and others who would like to join in an effort to develop a similar tool to stay in the Free category of test case generation tools. If interested just comment below.
Thursday, November 01, 2012
Now it's our turn to return the help.
or by enrolling in an online e-learning course. We are donating 10% each week for all e-learning enrollments. Please understand this is not a sales ploy. It's just our way of helping by sharing what we earn.
Wednesday, October 03, 2012
I promised the people in my Tuesday tutorial at StarWest 2012 the Word versions of the two self-assessments: One on the people issues in testing and the other on building core competencies in testing. Here they are!
Core competency self-assessment
Feel free to modify for your own purposes. Thanks to everyone who attended my tutorial. It was a great time!
Friday, September 07, 2012
Don't you love catchy names?
Unfortunately, almost every store in the village, including the coffee shop, is closed because the Summer season is over. The owner of the shop (at least I think she is the owner :-) ) walked by a few minutes ago, unlocked the door to go in, and I heard her say to someone, "I just need to get the cinnamon rolls." (They have AWESOME ones.)
Me, being my vocal self, said "Sure, I'll take one," and just laughed. A few minutes later, out she comes with one on a paper plate. Keep in mind, the shop is closed!
I said, "Please let me pay you." She said, "Oh, just come back in sometime."
Isn't that great? I'll be back tomorrow morning for sure!
So, I'll just finish my awesome cinnamon roll and wish everyone a great weekend.
Friday, June 01, 2012
This one makes me go, "Hmmm."
The quote that got my attention was, "Greifeld [Nasdaq Chief Executive Robert Greifeld] also said extensive testing that was performed ahead of the deal failed to spot this problem."
I wish I had a dollar for every time I have heard or said that.
In other reports, Greifeld alluded to coding problems.
"Nasdaq’s systems fell into a “loop” that prevented the second-largest U.S. stock venue operator from opening the shares on schedule following the $16 billion deal, he [Greifeld] said."
Friday, May 25, 2012
Here are the slides in PDF:
Here is the recorded session:
Friday, May 04, 2012
The first one happened right here in my home city of Oklahoma City. We have this awesome NBA team, The Thunder, that is in the playoffs.
When the tickets went on sale online, some people in Oklahoma City were not allowed to buy a ticket. They received an error message to the effect that they didn't live in Oklahoma, Kansas or Arkansas. OK.... Turns out, there is a business rule that says priority for tickets goes to people who live in the area. Fine. But these people do live in OKC. What happened was that the front office used a file from the U.S. Postal Service to validate customers' zip codes. I mean, who better to know which zip codes are valid, right? However, a few zip codes were missing from the file.
This brings up an interesting and common testing problem. How does one find such a defect? In this case, it's a conundrum.
You could use equivalence partitioning and test each zip code as a class. But how do you identify all the zip codes, especially if the postal service info is incomplete to begin with? Plus, that would take a long time to do manually. Automation could do the job quickly, but you need a secondary source of correct codes to compare against.
My solution would be to either 1) do an automated test using the known customer zip codes (like from a mailing list), or 2) do a manual test that samples from the customer list. This still would not assure the defect would have been found.
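The first option can be sketched in a few lines. This is a hypothetical illustration only: the zip codes, the customer list, and the validation rule below are made-up stand-ins, not the team's real data or code.

```python
# Hypothetical sketch of the automated option: replay known customer zip
# codes against the validation rule. Any valid customer who would be
# rejected points to a gap in the zip code reference data.

# Zip codes the (incomplete) USPS-derived file says are in the priority area
priority_zips = {"73101", "73102", "73103"}  # "73104" is missing

# Zip codes taken from the team's own customer mailing list
customer_zips = ["73101", "73102", "73103", "73104"]

def may_buy_ticket(zip_code: str) -> bool:
    """Business rule under test: only priority-area residents may buy."""
    return zip_code in priority_zips

# Every known customer is, by definition, in the area, so any rejection
# is a suspected gap in the reference data.
suspected_gaps = [z for z in customer_zips if not may_buy_ticket(z)]
print(suspected_gaps)  # prints ['73104']
```

The key design point is the oracle: the customer list serves as the secondary source of known-good codes, which is exactly what the USPS file alone could not provide.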
In the end, the team did the right thing and sold the affected people seats at a discounted price, once they learned of the problem. Now, perhaps, hopefully, this can be a test case for the future!
The second issue occurred in Auburn, California. Here's the account according to NPR:
"Traffic jams in California - well, they're nothing new. But one recent tie-up on Interstate 80 was noteworthy since it was caused by a computer glitch. The Placer County court accidentally summoned 1,200 people to jury duty on the same morning. Taking their duty seriously, residents tried to be on time at 8:00 a.m. and were in a line of traffic with other would-be jurors. The court apologized but said a real jury summons could be coming soon."
"We apologize profusely," court executive officer Geoff Brandt said of the error.
My take: I'm skeptical. Sounds like there could be some human error involved! I mean, if I had input the wrong selection criteria for the summons, I wouldn't want to take the blame either.
I doubt we will hear of the root cause, but it would be interesting to know.
Keep taking those backups!
Thursday, April 19, 2012
Wednesday, April 18, 2012
This is the chart for short-term, mid-term and long-term goals.
Tuesday, April 03, 2012
According to the study, QA and Business Analysts are #3 in the "Top 5 Tech Professionals Businesses Want Now" category:
"With more dollars available to IT projects, managers are focusing on quality control and assembling more accurate project requirements. Quality assurance professionals can relieve developers so they can focus on coding, while business analysts can help build trust among stakeholders and serve as go-betweens for technology and business."
You can download the report here:
Want to get into QA or Business Analysis? Check out these courses:
Or, if you want to get certified in testing, we have the training for you!
Add value to your company by getting the knowledge to support better project results!
Wednesday, March 28, 2012
So many ideas are swirling about, which is a good thing. The challenge is sorting them out to make sense in your own situation.
I was observing the panel discussion about agile testing and some things struck me:
1) What happened to XP... and then, Crystal and then...? What I mean is they apparently went out of style. In 2001 I was at a conference where XP was the solution to all problems. Now, it seems there is a new kid on the software block, Kanban. So, the discussion is around Scrum vs. Kanban.
This all makes me think there is "nothing new under the sun." People seem to want to follow the new thing, which I totally get, but why? Typically, it's because the present methods have problems. Why?
The reasons go to deeper issues, I think, than the flavor of the methods being used. I keep going back to my mantra that "the magic is not in the method." It is something deeper and more esoteric than a methodology.
I am convinced I was on an agile team in 1979.
2) People, tools and process...if you gravitate to just one or two of these, you will be out of balance.
OK, so you love the people and the team. That's great, and the right people, all able to get along, are key to good projects. But even the best teams need the right tools and the right methods to get the job done.
3) The customer is king, regardless of what your process is. Your customer may be the end-user or the CEO. If they want something, you had better get it to them (thanks, Scott Barber, for making this emphatic point today!). If you stick with your process in opposition to what your customer wants or needs, you will be replaced.
I once was teaching a class when, on the afternoon of day 1, half of the class had to leave. Some of you cynics are thinking "I can totally see that," but no, it wasn't due to the teacher. It turns out they had to revert a change back to the original version. They got the requirement totally right. They implemented the change 100% correctly. However, a powerful stakeholder got so much push-back on the change that he said "pull it." So, even though everyone had done their job correctly, there was still a problem to deal with. (By the way, this was all based on a new state law that had been passed and was being implemented. The stakeholder was a state senator.)
4) Software development is a creative process and has to be managed as such. It is also a knowledge-based process, and knowledge workers cannot be managed like other types of workers. Otherwise, we could create software with software all the time, not just some of the time. I know, we have visual tools and the like, but remember CASE tools? Some principles of manufacturing can be applied well to software, but we must remember that software is intellectual in nature.
5) Release processes drive almost everything else. If you get this wrong, you can get really bound up. Some of us can recall the famous "I Love Lucy" episode where Lucy and Ethel work in a chocolate factory wrapping candy as it travels down the conveyor belt. The candy keeps coming faster and faster. They can't keep up and start eating it, stuffing it...funny stuff. I see this in software projects all the time. Too much work, too fast a pace, and people just try to get the stuff out the door. Then when things don't work, the testers get blamed.
No matter which method you use, if you can't control scope and pace, you won't succeed. You will always be behind and your customer will always be complaining.
Well, those are some thoughts in the airport. I would like to hear your thoughts.
Tuesday, March 13, 2012
Check out the newest podcast where I interview Sam Lightstone, author of the book, Making it Big in Software.
Sunday, January 22, 2012
Here is the recording:
Here are the slides:
There was a question about the need to read/understand code for the Advanced Technical Test Analyst certification. Here are two resources for those that would like to know about coding:
Here are my ISTQB Advanced course links:
Wednesday, January 11, 2012
I hold all three ISTQB advanced certifications and will explain the things you need to know about ISTQB advanced certification.
Time: 1/20/2012 1:00 PM (UTC-06:00) Central Time (US & Canada) for 60 minutes
I hope to see you there!
Thursday, January 05, 2012
There is a great "secret" I tell software professionals. That "secret" is that if you want to rise to the top of your field, it's not that hard to do because so few people do the simple things to rise to the top. You will learn those things in this book.
Making it Big in Software brings a great perspective to the idea of breaking through to becoming an elite thought leader in the software profession. The first thing that caught my eye was the stellar nature of the people Sam Lightstone interviewed for this book. These include James Gosling, the inventor of Java, Steve Wozniak, inventor of the Apple computer, Grady Booch, co-founder of Rational Software, and many other luminaries in our field.
Lightstone anticipates the question most people have right out of the gate, "Why bother?"
"But with long hours, considerable stress, and no guarantees, the obvious question is whether it’s even worth trying to make it big. I believe the answer is unequivocally yes. The most compelling reason is that, in most cases, you have to show up to the office and work like a lunatic anyway—it’s really not optional (if you want to eat). So if the difference between being a midlevel career programmer and making it big is an incremental strategic investment of time and energy, then it’s more than worth it for you and for your family. In the long run, the benefits are significant: a more satisfying career, greater influence and impact within your company and the industry, more fun, and more money. And while there may not be less “crap” to do, at least it’s strategic work rather than “grunt” work."
Some of the enticing things about making it big are:
- Fun and interesting work
- Corporate and industrial influence
- The betterment of society
- Freedom to work on what you want, when you want (Lightstone makes clear that this is what you want to work on, not how much you have to work!)
(As an aside, I humbly say that as a minor software testing celebrity, I experience these things on a regular basis and it is good. While travel can become a chore, it is cool to teach in Rome twice a year.)
Lightstone lays out all kinds of practical, real-world advice, such as "What to Look for in a Company." I like this list and it reinforces a key idea that you do not have to strike out on your own to make it big.
"1. Is this a company that has experience in building professional, high-quality systems?
2. Are there really talented people here I can learn from?
3. Is the position I’m being offered one that is interesting, with long-term growth potential on something I can believe in?
4. Do they have savvy business executives who really understand the business requirements for success and have a track record for delivering it?
5. Does the company have clarity of vision for the product it produces?
6. Is there an independent research arm?
7. How does the company innovate, and how profound has their innovation been?
8. Is the work environment pleasant and flexible, and does it suit my lifestyle?
9. Does the company seem stable? Do I believe it will still be around in ten years?
10. Is the pay in line with industry standards?"
The book goes back and forth between interviews and practical guidance, which is a good thing. The interviews let you get inside the heads and hearts of these gurus, while the guidance gives you a plan of attack.
Good economy or bad economy, it doesn't matter in terms of the importance of standing out and making your mark. Economies rise and fall. We all need to learn to not let our jobs distract us from our careers. This book helps light the way to do that. I highly recommend it to anyone in the software field!
Recently I was listening to a radio interview about genetically modified crops. During the discussion, the guest mentioned the fact that farmers are starting to see “killer weeds.” These weeds are resilient to known herbicides which makes them very difficult to control or eradicate.
Being the tester I am, I immediately thought of the “Pesticide Paradox” that Boris Beizer wrote about in the 1980’s in his book, Software Testing Techniques. That principle says that just like bugs that become resilient over time to pesticides, software tests can become ineffective at finding new defects.
Another way to express this principle is to realize that your tests will grow weaker over time in terms of finding new defects.
My way of saying this is that when you repeat tests with the same conditions and test data, the tests become more confirmatory in nature than being discovery-oriented.
Similarly, there is the minefield example: the safest way to cross a minefield is to walk in the footsteps of someone who has already crossed it successfully. Seeing software defects as mines, if we go down proven paths we don’t hit the mines.
Of course, in testing our goal is to hit the mines. As I say, “Failure is not an option, it is an objective.”
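As a small, hypothetical illustration of this principle (the function and its bug are invented for the example, not taken from Beizer), compare a confirmatory test that repeats the same data forever with discovery-oriented testing that varies its inputs, including boundary values:

```python
import random

def discount(total: float) -> float:
    """Toy function under test (made up for illustration): 10% off
    orders over $100. Bug: the boundary itself is handled wrong."""
    if total >= 100:  # spec says "over $100", so >= is a defect
        return total * 0.9
    return total

# Confirmatory test: the same condition and data every run. It passed
# once and will keep passing forever -- it walks the proven path through
# the minefield and never hits the mine at the $100 boundary.
assert discount(150.0) == 135.0

# Discovery-oriented testing: vary the data, including the boundaries.
for total in [99.99, 100.0, 100.01, round(random.uniform(0, 99), 2)]:
    expected = total * 0.9 if total > 100 else total
    if discount(total) != expected:
        print(f"defect found at total={total}")  # fires at 100.0
```

The fixed test keeps confirming what we already know; only the varied data walks off the proven path and steps on the mine.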
This discussion about killer weeds got me thinking about some other tie-ins with software testing and the “killer defects” we deal with.
Weeds are Ubiquitous
I don’t have a green thumb; I have a killer thumb. I’m sorry, but plants and I don’t get along that well. I have been successful in growing some things, like tomatoes, but my climate is harsh and the main thing I can reliably grow is grass.
However, I seem to have no problem growing weeds. Have you ever noticed how resilient weeds are? They can grow in the cracks of a sidewalk!
Similarly, software defects are everywhere. Those of us in the weed control business of software (testers), have an abundant supply of defects to find. But like the killer weeds, it seems that no matter how often we spray, or pull, or mow, the weeds return.
Even when I pull the weeds out by the roots, they return because the seeds from neighboring yards (and my own yard) float into the yard or garden. Before long, there they are again.
What are the seeds of your defects and where do they originate? Inadequate requirements? Bad code? Inadequate testing?
The good news is that in software development we can actually control the seeds of defects at their source. It’s not always easy. In fact it hardly ever is easy, which leads me to the next point.
Weed Control is Maintenance
I really wish I could just spray or weed once a year, but that doesn’t work. In Oklahoma last summer, we had 63 days that were 100 degrees or more, plus we were in a very severe drought. Basically, most of the vegetation died, then caught on fire. I told people, “Welcome to Hell.”
Every blade of grass in the vacant lot next door to me was brown. There were inch-wide cracks in clay soil that had the consistency of bricks. There were also bright green weeds! Yes, the weeds thrived even without water, in soil as hard as bricks.
Have you ever asked why, despite all your testing and other efforts, defects still occur? The best answer I have is that people are human and humans make mistakes. These mistakes can be misinterpretations of needs, incorrect implementations of solutions, or just forgetfulness.
Another reason for mistakes may be the rate and nature of change. The faster we go, the easier it is to forget something or to simply become careless.
Perhaps the number one thing I hear testers complain about is maintaining their tests. This causes me to ask, “Where did this false expectation start that says tests are maintenance-free?”
The fact is that most tests will require maintenance, just like the software we are testing. However, the unique thing we deal with as testers is that changing one line of code may require changes to hundreds of tests.
Some may say that this is why agile testing is great. I agree that agile approaches help. However, even in agile there are tests you want to remember and repeat. Yes, these may be automated. However, that may not matter in terms of maintenance since changes ripple through automated tests as well as manual ones.
Timing is Critical
Since I am horticulturally-challenged, I hire my friend Marty to spray and fertilize my yard five times a year. The first spraying in late winter is a pre-emergent. This treatment is very important to kill the weeds while they are still in the seed stage. Failure to catch weeds early means you play catch-up the rest of the year!
In software projects, early defect prevention and detection methods are key to stopping defects before they “bloom” and become larger and nastier. Activities such as reviews, walkthroughs and inspections can be your pre-emergent detection of defects.
Know Your Weeds
Marty doesn’t spray for all possible weeds in my yard. Instead, he sprays for those weeds common to our area. If rogue weeds appear, he sprays them individually as needed.
If you have ever ventured out to the hardware or garden store to buy weed killer, you know that there are many kinds, each oriented to specific types of weeds – grassy weeds, broadleaf weeds, and so on. There are treatments, such as Roundup, that will kill every bit of vegetation, but this is not a good option if you want to keep the good plants. That is overkill except when you want to clear out everything.
In testing, we need to know the types of defects we most often encounter. We also know that defects tend to cluster and that people fall into patterns of repeating mistakes.
If we orient our tests to find the defects we most commonly encounter, that’s a good first pass. In the past, I have called this “starting a bug collection.” Now, I can also call it “a weed collection.”
Then, we can drill down as needed to branch out and find new types of defects.
Basically, we can pull weeds (a.k.a. software defects), or we can work on preventing them. Pulling weeds is hard work and seems to never end (sound familiar?). Preventing weeds takes discipline as well, but spraying in advance is easier and cheaper in the long run. Similarly, preventing defects with process improvement, and finding defects early with reviews, pays off in the long run.
If you know the sources and types of your defects, you can test wide and then focus on trouble spots. Just remember that the tests that work well today may not be as productive in finding new defects the next time you test.
In the second installment of this article, I’ll explore why these killer defects are so hard to eradicate and the results of failing to find and eliminate them.