Monday, March 31, 2008
Heathrow Terminal 5 Meltdown
Not to be outdone in the airport chaos department (we can't let DFW and O'Hare get all the glory), the new Terminal 5 at London's Heathrow airport is being described not as a "glitch" but as a "meltdown". At last, a more accurate description!
I often comment in my "Process Improvement Using Root Cause Analysis" workshop that it is very interesting to see root cause analysis play out in the real world. So, the reason I'm mentioning this situation (still ongoing at the time of this writing) is that it's interesting to observe - kind of like watching a train wreck. Of course, I'm not one of the impacted passengers on British Airways.
I kept thinking how similar this situation sounds to some of the computer system implementations I have seen, just not on such a grand scale.
Here's one link for the story:
http://www.express.co.uk/posts/view/15802/BA-facing-Heathrow-baggage-meltdown-
There have been cascading problems:
1) Lack of parking for baggage handlers caused them to spend time looking for somewhere to park, so they showed up late for work, which delayed the handling of checked baggage.
2) Lack of security staff to even let the baggage handlers into the airport.
3) A coding error that prevented baggage handlers from logging onto the baggage handling computer system.
4) Lack of training for the baggage handlers, which caused confusion about where to pick up bags and how to take them to the planes.
5) Then...a breakdown of the transit system that carries people from Terminal 5 to the satellite terminal 5B.
Is this deja vu all over again? Does anyone remember Denver International Airport's "state of the art" automated baggage system, finally scrapped last year after $193 million had been spent?
http://www.thedenverchannel.com/news/4580090/detail.html
In the Evening Standard of March 28th, we find some interesting information:
1) The new automated baggage system (10 miles of conveyor belts and 140 computers, designed to process 12,000 bags per hour) had never been tested in a live terminal. That would be quite a load test, but still...many of the things that failed, both manual and automated, might have been caught by such a test.
2) Small delays in a conveyor system can have a huge impact. Remember the old "I Love Lucy" episode where the chocolates just kept on coming? (A toy simulation of this effect appears after this list.)
3) People played a huge role, although the failures weren't their fault. Lack of parking, security and training amounted to "no hands on board."
4) There was apparently another computer error that broke the ability to "stack and shelve" baggage that needs to be held for several hours for longer layovers. This required human intervention to correct (not the code, but getting the bags to the right places).
5) There was training...just not enough: the baggage handlers got five days of it.
6) There had been practice runs since September, but problems apparently were still being seen as late as last week.
7) Other problems: broken-down walkways and elevators, and airport monitors not working.
8) The clock is ticking: more flights are scheduled to move into T5 on April 30.
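To make the "small delays, huge impact" point from item 2 concrete, here's a toy queue model. It's only a sketch: the 12,000 bags per hour figure comes from the published T5 specs, but the handler count and per-bag handling times are assumptions I made up for illustration.

```python
# Toy model of a baggage line: bags arrive at a fixed rate and a pool of
# handlers works them off. Once a small per-bag slowdown pushes capacity
# below the arrival rate, the backlog grows every hour - the "I Love Lucy"
# chocolate-belt effect.

def backlog_after(hours, arrivals_per_hour, seconds_per_bag, handlers):
    """Unprocessed bags after `hours` of operation."""
    capacity_per_hour = handlers * 3600 / seconds_per_bag
    backlog = 0.0
    for _ in range(hours):
        backlog = max(0.0, backlog + arrivals_per_hour - capacity_per_hour)
    return int(backlog)

# 12,000 bags/hour is the published design load; 30 handlers and the
# per-bag times below are illustrative guesses.
for secs in (8.5, 9.0, 9.5):
    stranded = backlog_after(12, 12_000, secs, 30)
    print(f"{secs} sec/bag -> {stranded} bags stranded after 12 hours")
```

At 8.5 or 9.0 seconds per bag the system keeps up; a half-second slip to 9.5 strands thousands of bags in a day. Small delays, huge impact.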
As one person commented, "I don't think there was one person here who knew what was going on."
The situation got so bad that British Airways suspended baggage check-in at the worst of the peak.
It will be interesting to see how this all plays out. I certainly wish them the best of success!
By the way, I'll mention one of my favorite services: www.pressdisplay.com. For $10 a month you can read up to 30 issues of papers from around the world, just as they appear in print.
Stay tuned!
Friday, March 28, 2008
Census Project Going South
Those that have been reading this blog know that I'm big on two things: 1) Not calling major problems "glitches" and 2) Understanding and overcoming the human-computer interface concerns.
Well, this is a story that blends both of these. I was reading our local paper this week and came across an interesting article, "Census glitches may cost billions." So I thought, "Hmmm. This looks interesting. That's a pretty expensive glitch! What's up with that?"
The skinny is that the 2010 census project looks to be in trouble. The government figured the manual method was just too old-fashioned for 2010, so the Census Bureau decided to fund a $596 million project to use handheld computers. (The project cost is now at $647 million.) Now problems are emerging that will likely add another $2 billion to the original tally of $11 billion for the 2010 census. That works out to about $43 for every man, woman and child in the USA, using the estimated U.S. population of 303,729,132 at http://www.census.gov/.
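For anyone who wants to check the arithmetic, here's the back-of-the-envelope math in a few lines of Python (all figures as quoted above):

```python
original_budget = 11_000_000_000   # 2010 census tally, per the article
overrun         =  2_000_000_000   # likely added cost
population      =    303_729_132   # census.gov estimate at time of writing

per_person = (original_budget + overrun) / population
print(f"${per_person:.2f} per person")   # -> $42.80, i.e. about $43
```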
At risk are both the accuracy and the feasibility of conducting the count. (Think "voting machines" and you get the idea.) The problems are so serious that census officials are considering (gasp) pencil and paper!
Congressional testimony further reveals that the Census Bureau was "unprepared to manage a $600 million contract for the handheld computers that will be vital."
Unprepared?? I realize I'm on the outside barely looking in, but good grief, why embark on the project if you don't have the ability to run it? Of course, knowing how the government procurement process works, I'm sure someone was told the project would be no problem at all.
Project management may sound boring, but good PMs know how to bring a project in on time, within budget and at the agreed-upon scope. This knowledge is not kept in tightly locked vaults; it is openly available to anyone who wishes to learn. The problem is, the PMs who really need to learn project management don't think they need it!
Most projects like this, however, have plenty of blame to go around: poorly defined requirements and contracts, lack of communication, difficult project sponsors, expectations set too high, technical solutions for non-technical problems, etc.
The problems boil down to:
1) The handheld computers are too complex for some temporary workers.
2) The original programming wasn't efficient enough to transfer the high volumes of data generated.
I believe communication is the underpinning of all IT. Interestingly enough, census director Steven Murdock admitted that "communication problems" between census officials and the contractor (Harris Corp.) have caused "serious issues."
This is why I'm a big fan of pilot projects. This implementation has a huge scope, and it's good to try things in the small world before ramping up to the big world. Sure, technology changes a lot in ten years, but at least you have plenty of time to phase it in. Plus, you can pull the plug at $1 million instead of $600 million.
So, don't be surprised if you get a paper form to fill out in 2010, or see census workers using "Plan B" - pencils and paper. The sad thing is, it will most likely be a $2 billion "lesson learned" with our money that could be used for more important things. (Unfortunately, there are so many project failures that I doubt the lessons will ever be learned!)
I hope I'm wrong and they can get it sorted out - correctly, that is! Census bureau - We're counting on you! (Sorry about that.)
What project management or testing lessons do you see in this story?
Tuesday, March 25, 2008
Welcome to Oklahoma Where Your SSN is Public Information
Big rant ahead. Today's news in Oklahoma is that after 9 months of redacting what would normally be considered private information - you know, stuff like social security numbers, dates of birth, everything you need to commit identity theft - it's now deemed OK to have it out there after all.
Initially, the news that this information was on the web from the Oklahoma County Clerk's office caused quite a stir. Then, it was learned that the information has been there for quite some time and that contractors have been working for 9 months to obscure it from public view.
Then here come the judges. Today, the Oklahoma Supreme Court rescinded an earlier order, issued on March 11, that restricted access to court records containing private information.
The press here threw a fit, because court records are a good place to dig for news.
According to the court:
"The Supreme Court of Oklahoma is very aware of privacy and identity theft concerns of individuals related to personal data that may appear on the Court's Web site. We are cognizant that many businesses and individuals rely on the information court clerks have placed on our Web site. Personal privacy balanced with reliable public information is critical for every free society.
"Due to the very important issues for all concerned, the Supreme Court is hereby withdrawing its Privacy and Public Access order... handed down March 11, 2008, to give the issue further study and consideration." Free-speech advocates praised the court's decision to reverse course.
“We're happy that they withdrew the order,” said Mark Thomas, executive vice president of the Oklahoma Press Association. “A broad, sweeping closure of massive public records is not the answer to identity theft problems.”
He should have followed that with, "Go subscribe to Lifelock. You'll need it if you live in Oklahoma."
So now the whole issue will be revisited who knows when, giving crooks around the globe all the time they need to grab the information that's out there.
A spokesperson from Hackers United to Rip You Off said "We agree that private information should be made public. In fact, we work hard to make this happen on a daily basis. We applaud the Oklahoma State Supreme Court and the Oklahoma Press Association for this courageous move. Now, I've got to finish ordering my new 60" flat panel television with my new credit card from Best Buy I got today."
Okay, that's a fictitious quote, for those of you who might not get my sarcasm.
What I don't get is why a simple distinction can't be made between sensitive information (SSN, date of birth, etc.) and public information - even a basic automated redaction pass could flag the obvious patterns (see the sketch after this list). I would also like to point out that:
1) Over 30 states protect such information, and
2) Private companies (like TJX) get fined and prosecuted for losing this kind of information.
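Here's a minimal sketch of what I mean by flagging the obvious patterns. Everything here is hypothetical - real court filings hold these values in many more shapes, so a production redaction pass would need much broader patterns plus human review:

```python
import re

# Hypothetical redaction pass over document text before publishing.
# The patterns below are illustrative, not exhaustive.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\b(?:DOB|Date of Birth)[:\s]+\d{1,2}/\d{1,2}/\d{2,4}\b", re.I),
}

def redact(text: str) -> str:
    """Replace sensitive patterns with a [REDACTED-*] marker."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("John Doe, SSN 123-45-6789, DOB: 4/1/1970, owes $500."))
# -> John Doe, SSN [REDACTED-SSN], [REDACTED-DOB], owes $500.
```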
Man...
Monday, March 24, 2008
Members vs. Non-members
I saw an interesting factoid in USA Today on March 6, taken from a survey of 1,200 working adults 18 and older about the value of membership in business associations.
First, the median income of members was $77,397 vs. $52,585 for people who were not members of any business association.
Second, 74% of people who said they are members of an association said they are satisfied with their job, as compared to 50% for non-members. I think this may be the most interesting finding.
Why would this be? My take is that when you are a member of an association of peers, you feel more connected, and you take skills and ideas back to your job that you can use. This adds value to your career and affects your overall outlook on your job.
Certainly, you can hear people at association meetings complain about their jobs. In fact, one of the big reasons for belonging is that it makes finding a new job easier.
As an example of the value of associations, in my last posting I discussed the presentation at last week's meeting of the Red Earth QA SIG here in Oklahoma City. We had a good-sized group there, people had a great time visiting with each other, and we learned about starting a testing center of excellence - plus some great sandwiches from Jason's Deli.
Attention managers!! If you want free or inexpensive training, check out your local QA chapter. If you don't have one, think about starting one. That's what we did in OKC over a year ago. It's not always easy, but it's not impossible, either.
I'm toying with the idea of starting an online software QA and testing community (an "at-large" group) with a monthly teleconference meeting for people who live in places where there isn't enough interest to get a small group together. Post a comment or e-mail me if you are interested.
Finally, I'll just say that my very first association membership, in the Kansas City QA Association (KCQAA), was where I learned a lot about software QA and testing, but it was also where I connected with many other people and associations (QAI, Bill Perry, Jim Brunk and many others). I would not be where I am today had I not taken two hours a month out of my evenings to attend KCQAA meetings. I'm a believer!
Thursday, March 20, 2008
Establishing a Testing Center of Excellence
Today we had a good presentation at the Oklahoma City Red Earth QA SIG. Carey Schwaber of Forrester Research discussed the value and approaches for establishing a Software Testing Center of Excellence (COE). Thanks to Carey, as well as David Vance of Forrester, who facilitated the presentation.
I enjoyed the presentation because the Testing COE is an effective way to deal with opposite poles of how software testing is organized.
There has been a debate in the software testing community for many years now (dating back to the early 90's) about which is better - independent, centralized testing or decentralized, distributed testing. Really, there are pros and cons to each approach.
In my book, Surviving the Top Ten Challenges of Software Testing, two of the challenges are strongly rooted in centralized testing groups. One challenge is "Testing What's Thrown Over the Wall". Another challenge is the "Lose/Lose Situation", where testers are seen as solely responsible for high quality. On one hand, testers are paid to find defects. However, if they find too many defects, then they are the problem.
In both of these challenges, the isolated nature of the test team often contributes to the problem. It is important to understand that the challenges can be overcome, but there's a gravity that keeps pulling toward the problems.
Then there's the other ditch I often see when performing test assessments. That is, there are a lot of testing activities being performed throughout the organization, but they typically aren't coordinated very well. In fact, it's common in this situation for the activities to be in conflict with each other, and the value for the investment in testing is reduced to a very low level. I often say that "there's a lot of stuff laying on the floor," meaning that the testing "process" (and I use that term lightly) is in pieces and ineffective.
The great thing about a testing COE is that it can be a balance between these opposite poles. A testing COE, as I often define it, is "a facilitation team that supports the efforts of software testing across the organization and promotes effective software testing approaches for all projects."
The big takeaway for me from Carey's presentation was how the scope of the testing COE can span four levels, from establishing the guidelines and standards for testing through the fourth level, which actually performs testing except for developer testing. I think it's a good idea to have a growth path for a testing COE.
By the way, the four levels of COE scope presented by Carey are:
1) establishing the guidelines and standards for testing
2) level 1 activities, plus providing a common infrastructure for testing (such as test environments, test data, test tools, documentation templates, etc.)
3) levels 1 and 2, plus performing independent verification and validation to supplement the testing done by the project teams
4) levels 1, 2 and 3, plus performing all testing except developer testing (such as unit testing)
Just like a true QA team (one that manages quality - not a test team), a testing COE can be marginalized because of the perception that it doesn't contribute materially to projects. Although the support role of QA is hugely important, when push comes to shove, people choose testing over QA because testers find defects which can be reported and fixed.
So, a testing COE needs to add tangible value by actively engaging in projects.
Establishing a testing COE requires high-level management leadership and investment. It also requires organizational buy-in so that people will actually accept the leadership of the testing COE.
If you are feeling the pain of the extremes in software test organization, you may want to consider proposing and establishing a testing COE. If you need help in doing that, call or e-mail me!
Tuesday, March 18, 2008
Now, What Time is My Flight?
Imagine this. You're on the run at DFW airport (typically running between terminals) and you need to find out the status of your connecting flight. You look at the monitor and see this.
Fortunately, my flight to Oklahoma City scrolled up one line, so I could see the actual flight time!
By the way, notice how many flights were late that evening!
Of course, I'm a software tester, so I'm very used to seeing error messages, confirmation messages, etc. - even while making a presentation!
But I wonder what people think who (like my mother, bless her heart) call me to ask, "What does this mean?" I can just imagine someone screaming at the monitor, "No! Don't abort the script!"
Now, I'm not bashing American Airlines or the good people who work at DFW. I just think it's interesting when error messages pop up in major places (Times Square, etc.). Below is a sketch of how unattended displays can be written to fail a little more gracefully.
By the way, here is the enlarged picture of the message:
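On the failing-gracefully point, here's a minimal sketch of a defensive refresh loop for an unattended display. It's a made-up Python example (the real flight information software is surely nothing like this); the idea is simply to log failures for the operations team and keep showing the last good screen rather than popping a script-error dialog at travelers.

```python
import logging
import time

logging.basicConfig(filename="fids.log", level=logging.ERROR)

def refresh_departures() -> str:
    """Stand-in for the real fetch-and-render step (hypothetical)."""
    raise RuntimeError("flight data feed timed out")  # simulate a failure

def kiosk_loop(cycles: int = 3) -> None:
    last_good = "-- last known departures board --"
    for _ in range(cycles):
        try:
            last_good = refresh_departures()
        except Exception:
            # Log for the ops team; travelers keep seeing stale but
            # readable data instead of an error dialog.
            logging.exception("refresh failed; keeping last good screen")
        print(last_good)
        time.sleep(30)

kiosk_loop()
```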
On a somewhat-related topic, back on January 25th, I posted an observation about why there's one guy working at the postal counter and another guy working the "automated postal center".
Well, last week I found out why. After standing behind a fellow for 10 minutes while he tried to mail a package, it became apparent that some people need training to use these things. It's not confusing or hard for me to use, but this fellow apologized, "Sorry, this is my first time using this." I offered my help, but he was still struggling. In this case, the other postal worker acting as a tutor would have been very helpful.
I don't know what all this means, but I know that:
1) Usability is subjective
2) Usability issues become fewer with more experience and/or training
3) Coaching can give a false impression that people can use software more easily than they actually can.
and...
4) Perhaps this is a long societal learning curve. As people get better at using technology, maybe they won't struggle quite so much. However, as we software people keep pushing the envelope, the learning curve will keep moving.
As long as the confusion is at the post office or grocery store, not much can be harmed. However, think about the new technology being introduced into automobiles. That's scary!
Your thoughts?
Randy
Wednesday, March 12, 2008
In the March Newsletter
The March newsletter is out. Yea!! If you are not on my list, then you can get on the list at:
http://www.riceconsulting.com/newsletter.htm
If you just want to read the March issue, you can read it here:
http://www.riceconsulting.com/newsletter_march_2008.html
The feature article is about understanding and assessing stakeholder risk tolerance on a project. This is an interesting and important topic that I haven't seen addressed before.
I also review an SOA book called "SOA Approach to Integration".
By the way...if you live in Chicago, close to Chicago, or need a good excuse to visit Chicago...I am coming back to Chicago this spring to present two really valuable workshops:
Agile and Exploratory Testing (April 8 - 9, 2008)
Process Improvement Using Root Cause Analysis (April 10, 2008)
Register three or more people and get a 10% discount. To see the complete brochure, click here: http://www.riceconsulting.com/chicago-april-2008.html
That's it for today!
Randy
Monday, March 10, 2008
Learning the Ropes
On Saturday I participated in a ropes course with other members of the leadership at the South Campus of LifeChurch (www.lifechurch.tv). It was a great time and I hadn't done one of those courses for about 10 years.
I think my best lesson learned was on the island exercise. For those of you unfamiliar with that exercise, there are three wooden platforms (islands) - two of them about 3x3 feet and one about 2x2 feet. There are also three wooden planks of different sizes - long (about 8 feet), medium (about 6 feet) and short (about 3 feet). The goal is to get the entire team from one big island to the last big island without either boards or people touching the ground.
If a board touches the ground, you lose it. If a person touches the ground, they get a "disease". Our leader, Stephan, was the first to succumb to a disease. His disease was that everything he said had to be the opposite of what he really meant. So, if he thought we should use the long board, he would say, "Don't use the long board."
After over 30 minutes, we still did not successfully complete the event, but we did learn some good lessons about managing resources.
The thing I could really relate to is that as a consultant, I am used to people only following half of what I recommend (or less). It's funny. Companies pay me significant money to help them improve software testing processes, ask me how to fix them, then sometimes do just the opposite of what I recommend. Sadly, many times their efforts fail.
Never mind that I have seen many approaches succeed and fail in other companies. Some people just have an ego that says "We're too different here."
Sorry, ranting over.
The thing I took away was that when we have to filter someone's language (negative to positive, or one language to another), it takes time and concentration to get it right. It also shows how important good communication is when working on a problem - things like speaking the same language and sharing the same understanding.
The final thing I'll say was that jumping four feet across a gap at ground level is no big deal. At 30 feet above the ground, even with a line tied on, my heart beat a little faster. But then, I felt the joy of facing a fear and doing it anyway.
Face your fears today!
Randy
Friday, March 07, 2008
SOA Test Training in Tampa
Just got back in from Tampa, FL, where I conducted a private workshop on SOA testing. (BTW, thanks to those of you who were in the class for making it a good one!) It's quite a temperature change, coming home from the 70's to the 30's.
It is an interesting challenge to be both learning and teaching. Although I have been working with a variety of companies in SOA testing, some as early as 1999, even those companies are still learning. It's like many people (including myself) are on "the bleeding edge" of this topic. We're still learning as we go. I hope to be sharing many more lessons learned in SOA testing soon on this blog. I would also like to hear more of your experiences.
By the way, here is an interesting quote (not about SOA, but about learning while doing):
"You must learn in real time and in action. You cannot afford to wait until everything is perfect to go out and do what you want to do. If you wait for perfection to go out into the world and do big things, you're never going to get there - or anywhere else for that matter. Many people hold themselves back because they think they have to know everything about how to do something before they actually do it. This is not true. You can and should learn while doing."
Michael Port
If you are into testing SOA, tell others about my blog so we can all get into the discussion.
One big lesson I have learned in SOA testing is that when you go to the conferences, read the articles, etc., many of the approaches are oriented toward a particular vendor's toolset and/or methodology. While the tools are great at providing leverage in SOA testing, vendor stuff happens (like being sold, etc.), which can place your entire testing effort in limbo. Plus, not everyone has deep pockets for the tools. At least there are open source tools like soapUI that can help.
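To show what these tools are automating for you, here's a bare-bones SOAP test in Python. The endpoint, namespace and operation are all placeholders I invented; the point is only the shape of the request/response check that a tool like soapUI builds and manages for you:

```python
import urllib.request

# Hypothetical endpoint and operation, purely for illustration.
ENDPOINT = "http://example.com/services/QuoteService"  # placeholder URL
ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/quotes">
      <symbol>ACME</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/quotes/GetQuote"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    body = resp.read().decode("utf-8")
    # A first-pass check: did we get a 200 and a non-Fault body?
    assert resp.status == 200 and "<soap:Fault" not in body
```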
And tools aren't the only reason information may be skewed. It may be the variety of opinions from many people as to what works and what doesn't. These opinions are shaped by many things - the business itself, technologies, people, etc. I'm not saying to disregard anything; just test it for yourself.
It would be much like doing a Google search on "software testing" and trying to build a test approach from the wide variety of opinions and techniques that come back. Take it from a guy who has been around for a while in software testing: you still need to try, prove and adapt your own testing approaches.
When trying to make it up the learning curve on SOA testing, keep in mind that everyone is learning. As validation, just look at articles written 3 years ago on SOA and compare them to what is being written today. The approaches are maturing. What was important then may not be so important today.
One of the things I learned in teaching the class was a great tool/service called generatedata.com. This is a really cool web-based script that generates test data and lets you export it in a variety of formats, even in SQL commands. I plan to add this to my list of cheap and free test tools and also add a video tutorial soon. Thanks to Ronan Madjar for finding that one and calling it to my attention!
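To give a flavor of what that kind of service does, here's a toy version in Python. The table and column names are invented for illustration; generatedata.com itself is far more capable (realistic names, addresses, and many export formats):

```python
import random

# A tiny, hypothetical imitation of what generatedata.com does: fabricate
# rows of plausible test data and emit them as SQL INSERT statements.
FIRST = ["Alice", "Bob", "Carol", "Dave"]
LAST  = ["Jones", "Smith", "Nguyen", "Garcia"]

def fake_rows(n):
    for i in range(1, n + 1):
        name = f"{random.choice(FIRST)} {random.choice(LAST)}"
        age = random.randint(18, 80)
        yield f"INSERT INTO customers (id, name, age) VALUES ({i}, '{name}', {age});"

for stmt in fake_rows(3):
    print(stmt)
```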
I look forward to continually improving and extending this course to bridge the gap between SOA development and testing. With your help, we can do it!
Wednesday, March 05, 2008
The Loss of Two Quality Leaders
I just want to post my comments on the loss of two people who have impacted my journey of understanding what software quality is all about.
The first person I'll mention is Rodger Drabick, the author of "Best Practices for the Formal Software Testing Process."
I had the pleasure of working with Rodger and learning from his vast knowledge of software quality. When I went to my first QAI software testing conference in 1989, Rodger had already been speaking about software testing and QA for many years. After hearing Rodger speak, I truly realized how much I didn't know about software quality! I have always looked up to him as one of the people on whose shoulders we stand to practice software testing and QA today. I will remember him as one of the foundational people in my career. I will miss him!
Rodger is survived by his wife, Karen, his mother, two daughters, three grandchildren, and his four siblings. An obituary is posted here: http://www.mountvernonnews.com/notices/12/11/03.html
The family suggests memorial contributions be made to the Juvenile Diabetes Foundation, 120 Wall St., 19th Floor, New York, NY 10005: https://www.jdrf.org/index.cfm
The other person was Dr. Joseph Juran, who passed away last week from natural causes at age 103. I didn't know Dr. Juran personally, but he also shaped my view of quality through his writings, and impacted the worldwide quality movement for many, many years. Dr. Deming quoted Juran often!
I was very inspired as I read the tribute to Dr. Juran at http://www.juran.com/. At age 103, he was still making a contribution, working on another book and caring for his wife of 81 years, Sadie.
As quoted in the press release, "In 1937, Dr. Juran coined the Pareto Principle, which millions of managers rely on to help separate the “vital few” from the “useful many” in their activities. He also wrote the first standard reference work on quality management, the Quality Control Handbook, first published in 1951 and now moving into its sixth edition."
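As a small homage, here's the Pareto Principle at work in our own field, using a made-up defect log (module names and counts are invented for illustration):

```python
from collections import Counter

# Illustrative defect log: a "vital few" modules usually account
# for most of the defects found.
defects = (["billing"] * 42 + ["auth"] * 31 + ["reports"] * 9 +
           ["search"] * 6 + ["ui"] * 5 + ["export"] * 4 + ["admin"] * 3)

counts = Counter(defects).most_common()
total = len(defects)
cumulative = 0
for module, n in counts:
    cumulative += n
    print(f"{module:8s} {n:3d}  cumulative {cumulative/total:5.1%}")
# The top two modules (2 of 7) account for 73% of all defects -
# the "vital few" worth testing hardest.
```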
These men both ran the race well and they will be missed by many.