Wednesday, March 30, 2011

SCRUM - Definition of Done?

Background


The definition of done varies from one software development company to another. Some companies have additional constraints to fulfil, while others require very little. The definition of done is crucial, and one must get it right and customize it to the team's needs to ensure a continuously deliverable product. For instance, in a TDD environment, we usually need unit tests for every feature introduced, with test coverage above a predefined threshold (usually > 90%). Next, we need a code review, the code checked in after the review, the Continuous Integration build completed, and all unit tests passed.
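
To make the "unit tests per feature" point concrete, here is a minimal sketch of one such test, assuming a Python project tested with pytest; the order_pricing module and apply_discount function are hypothetical names used only for illustration:

    # A sketch of a per-feature unit test, assuming pytest.
    # The module and function under test are illustrative, not from this post.
    from order_pricing import apply_discount

    def test_apply_discount_reduces_total():
        # A loyalty customer receives 10% off a 100.00 order.
        assert apply_discount(total=100.00, rate=0.10) == 90.00

If the project also uses the pytest-cov plugin, the predefined coverage threshold can be enforced in the same run, for example with "pytest --cov=order_pricing --cov-fail-under=90".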

Do we need one in SCRUM?


Certainly we need one. Indeed, we need it to a much wider extent. In SCRUM, we create user stories and then plan out the deliverables in the release plan and the sprint plan. At the end of each sprint, we mark whichever user stories are "done" as done. The business users can then expect a somewhat "workable" solution to be delivered.

The question now is: how much do we need to constrain ourselves in SCRUM? How much do we want our business users to be able to expect from the "workable" solution delivered at each sprint?

If the business users consider a solution that works but may carry many defects, because it was not properly tested, acceptable as a potential deliverable, then you can schedule your QA tests to come in after the sprint!

On many occasions, in fact in SCRUM, this is not acceptable. You cannot say you have completed a user story when it is highly likely to carry defects, especially functional or workability defects. When you mark a user story as done, we expect it to be stable and workable, with minimal functional defects, and to meet some level of quality measure.

So in Agile or SCRUM, we test much sooner, and user stories that fail the tests can never be marked DONE. We bring the test team into our SCRUM team; they are part of our team. As soon as we complete the development work, the tester tests it according to the user story's needs. We do not delay the tests until the end of the sprint and then package the deliverable for testing. Many SCRUM teams have failed, and one of the key reasons is the post-sprint test approach: you always need to factor in an additional period of time for the tester to test, and the defects of this sprint can only get fixed in the next sprint. Your SCRUM fails; you are running a waterfall approach (miniature waterfall style). So, SCRUM Masters, please change this to in-sprint testing for better success; Product Owners, please voice your concern about having it (defects and quality) delayed to the next sprint. You don't always have that next sprint and, worse, you are always wasting an additional sprint!


Compulsory Items in SCRUM DoD List


  1. Automated Unit Tested with Good Coverage (above 90%)
  2. Continuously Integrated and Successful Build
  3. Automated Integration Tested
  4. QA Tested (with Test Team)
  5. All Defects Fixed.
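
For items 1 and 2, here is a hedged sketch of how the 90% gate might be wired into the Continuous Integration build, assuming the coverage.py API; the source module and test-runner names are illustrative assumptions:

    # A sketch of failing the CI build when coverage drops below the DoD gate.
    # Assumes coverage.py; module names are illustrative only.
    import sys
    import coverage

    cov = coverage.Coverage(source=["order_pricing"])
    cov.start()

    from tests import run_all  # hypothetical helper that executes the unit test suite
    run_all()

    cov.stop()
    cov.save()

    percent = cov.report()  # prints a summary and returns the total coverage percentage
    if percent < 90.0:
        sys.exit(f"Coverage {percent:.1f}% is below the 90% Definition of Done gate")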

Tuesday, March 29, 2011

SCRUM - Endlessly Growing Product Backlog

Background


Often, after we have run SCRUM over a period of time (a year or two), we notice the product backlog expanding at an increasing velocity: up from one or two new user stories introduced per sprint to several tens of new user stories within a sprint. Is this a good sign, or a worry that one should look into?

User stories within the product backlog indicate missing business features or value that one must look into and plan out properly to ensure business satisfaction. This usually has to be in line with the product roadmap and how we want to grow the product over time. The roadmap can change as time goes by, to ensure it meets current and near-future (not too far in the future) business needs. Remember those phrases we talked about: "Just in time", "Just enough" and "Just because"!!!

Why It Happens


  • Everyone raises a user story they "think" adds value.
  • No clear indication of a product roadmap telling where we want to be within a fixed schedule.
  • No fixed schedules planned out.
  • Overloaded with too many technically driven user stories.

How to Turn It Around


  1. Who can raise user stories?
  2. We must limit the people with the ability to raise a user story to those who possess business interests. We are dealing with delivering genuine business value that benefits the business users, not architects, developers or consultants.
  3. Where do we want to be, at a bird's eye view?
  4. There must be a fixed schedule and a fixed cost. I'm not talking about a short-term plan like a sprint or a release, but a very high-level objective that we want to achieve for the business after several releases. This is a directive measure: a pool of resources as part of the costs, and a plan for how we spend those costs to achieve our business direction or intent (the roadmap). Important to note here, I'm not suggesting a fixed or permanent roadmap that cannot be altered, but rather a roadmap for everyone to follow unless something suggests a change is needed. If there is a change, we prioritize some items into the roadmap and deprioritize others out of it. If an item is too far in the future, we may opt to drop it from the product backlog if it is not at all important or its value has depreciated over time. Why waste the effort of tracking potentially unneeded features if they do not fulfil the business now or in the near future? Remember: "Just in time", "Just enough" and "Just because"!!!
  5. Where do we want to be, at a bird's eye view, with a minimal lookahead?
  6. Just like what we discussed above, but we need to know the pipeline of the fixed schedules. However, look ahead lightly and at a higher level of abstraction than the current fixed schedule.
  7. Should we raise technical user stories?
  8. We can get an endless stream of technically driven user stories, especially for the sake of perfection. That is not to say we cannot have a task nailing down some technical or architectural aspect, but it must all be driven by business value. For example: I need an Online Store that serves all my customers, and I have 20 thousand customers holding my company's loyalty card membership. This suggests the need to load-balance the Web Application and to scale out. We may need a proper (distributed) cache, or a highly clustered database farm that is durable, reliable and efficient. However, we do not create that as a first-class citizen in the product backlog. At best it is a supporting (dependent) user story to fulfil the business user story above it. Whenever the business value gets deprioritized, all its supporting user stories go the same direction, unless you have nothing better to do.

Friday, March 18, 2011

Never write your own Message Queuing Framework

Software development is fun, especially if you are developing an Enterprise Application. There will be many challenges, world-class challenges: high throughput, efficiency, low roll-out cost, and sophisticated, complex business logic and workflows that go beyond what you have learned and practiced in the past. If you are not ready for them, you had better look elsewhere.

It is very common for enterprise solutions to support distributed applications that require fast response times and yet reliable communication and services to deliver requests. Secure, durable and reliable messages encapsulating business operations, processed at the distributed end, usually the server tiers, are common. These are normally built with ready-made messaging frameworks such as MSMQ, AppFabric Service Bus, an Enterprise Service Bus, or WebSphere MQ. There are also several open-source messaging options like RabbitMQ and ZeroMQ, many of them speaking AMQP.
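
To show how little code a ready-made broker demands, here is a minimal sketch of a producer publishing a durable message to RabbitMQ with the Python pika client; the host, queue name and payload are illustrative assumptions, not details from this post:

    # A sketch: publishing a durable business message to RabbitMQ via pika.
    # Host, queue name and payload are illustrative assumptions.
    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # A durable queue survives a broker restart.
    channel.queue_declare(queue="orders", durable=True)

    channel.basic_publish(
        exchange="",
        routing_key="orders",
        body=json.dumps({"order_id": 12345, "action": "create"}),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message to disk
    )

    connection.close()

Durability, reliability and routing come from the broker; the application code stays a few lines.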

We have more than enough variety in MQ selection; however, there are always some techy developers or architects who would like to invent one themselves within their own team. Often, the discussion revolves around freedom and lightweight modules that supposedly suit the solution, the team or the company best. It is always good to have the experience of developing such a framework on your resume, but that is often bad for the team and the company.

Never invent one when there is a ready-made solution.
It takes several years for a framework to mature and be tested against many genuine business requirements, accumulating fixes, enhancements, simplifications, optimizations, customizations, and usability and security testing, while meeting the needs of ordered, durable, reliable and secure delivery. Developing one yourself is a short-sighted move. You are ignoring the fact that a framework takes millions of hours of effort to mature and prove itself. You are assuming that one man's view is always better than the effort of a group of people. You might be right for the first few months, but as your solution matures and many more challenges come on board, you will quickly find that you shot yourself in the foot with that decision.

You will often find that your invented MQ framework does not scale as well. It has shortcomings in throttling, fail-safety and security. It has no dead-letter policy support. It loses sight of dynamic expansion of queues, priority overrides and poison-message control. Worse, you have no idea how to maintain or support disaster recovery. Worse still, you find your own MQ framework does not support distributed transactions or transaction flow (a distributed but integrated transaction model), does not log properly, and is not autonomous. Your solution may end up as just a distributed but serialized messaging gateway that processes messages from different sources in a single queued fashion and starts to suffer when the business grows large.
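
To illustrate one of those features, here is a hedged sketch of how a mature broker such as RabbitMQ provides a dead-letter policy through a couple of queue arguments, again using the pika client; the queue names and TTL value are assumptions for illustration:

    # A sketch: a dead-letter policy "for free" from RabbitMQ, via pika.
    # Queue names and the TTL value are illustrative assumptions.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()

    # Messages rejected or expired in "orders" are rerouted here for inspection.
    channel.queue_declare(queue="orders.dead-letter", durable=True)

    channel.queue_declare(
        queue="orders",
        durable=True,
        arguments={
            "x-dead-letter-exchange": "",  # default exchange
            "x-dead-letter-routing-key": "orders.dead-letter",
            "x-message-ttl": 60000,  # expire unconsumed messages after 60 seconds
        },
    )

    connection.close()

Rebuilding even this one behaviour correctly in a home-grown framework is non-trivial; here it is a few declarative arguments.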

Beyond the business considerations, you wrong your team and your company at large by requiring many innocent teammates to endlessly support and fix defects that are common but long since fixed in the mature frameworks. You find yourself chasing what other frameworks already do, doing more work and getting less throughput. You burn many pricey man-hours for the company and get less in return. You trick your team members into long working hours of support and defect fixing to deliver a sub-standard solution to the users. You make your product less competitive and shift the team's focus from product development to platform components that are no better than what has long been available. You sincerely give your competitors a chance to close your leading gap in the industry. You cause your team to lose enthusiasm. You consign your company and the team to the losing end, only to build a good resume for yourself.

From the business, team and product perspectives, there is no reason to develop your own messaging framework. Unless you want to compete with the giants in the market, this is not your cup of tea. Do not lose sight of your goals; you should and must maintain your business focus and continue your industry-leading role.

Final words: don't reinvent the wheel if it is not your strength and is not recorded in your business roadmap. Do what you do best and leave the worries to the experts.

Monday, March 14, 2011

Test Like A User

Often, for a developer to be good, one must think like a user. The same goes for testers.

So, what this means is, before you start writing your test cases, you must think: "If I were one of the users, I would want this to work like so. It must not allow me to proceed if I ever mistakenly commit certain things. Best if it can give me an alert (warning)." I know many would say to me, "this is a user requirement, my friend, and testers test the product based on the requirement!" Well, yes and no.

Do you know how many good requirements surface during test phases? Users may have overlooked them, and the tester's job is to test whether the experience will be painless for a user, beyond mere workability. I know many users have created requirements that they themselves eventually find not useful and painful to use. They may have mistaken one or two steps which, unfortunately, went unnoticed.

As a tester, you must ensure you know the context of the topic you are testing. If it involves an Integration Layer with integration testing, you must ensure you understand how the integration is done, why it was done that way (the flows and steps), and whether it can be better.

Many times, testers come back to the developers and ask things like:
"Why when I'm migrating my Database from the console application, I need to tell which server I'm migrating to and what new name my database is going to be?"

"I'm testing on a command to restore a database which optionally takes a database synonyms fixes. Why must I specified the synonyms when I'm restoring it to a different database name?"

I often ask my testers, right after they ask such questions: do you know what a database migration is? Do you know what a synonym is?

Sadly, the answers I often get from the testers are "No", or something like "I'm not certain", or something totally irrelevant. I'm left thinking: how can you do the tests as a tester if you don't even know what to expect and how the system is going to behave? Worse still when one can't even tell me what errors to expect as a tester.

Testers are not dumb "machines" who run routine work and only verify the outcome against predefined results. The role is so much broader: testers can act like a user and tell the developers/BAs or even the genuine users themselves something like this:
"Look, what is done is good, but I certainly think as a user I would want something additional to make it more useful (usability) and informational (support). I understand this is not as important compared to many other features, but certainly we need to be more painless to use it."

Hey testers, wake up. Stop your boring routine of matching results. Try stepping out of the box, thinking like a genuine user, and using the product as if you will own it later. You can make a difference!