Tester Performance Metrics

Situation:

The MSN data center backend services team suffered from poorly aligned incentives for testers and test developers, which emphasized activities that did little to improve the quality of the services.

Hindrance:

The organization's standards for evaluating the performance and productivity of testers and test developers were based on raw bug counts and on how quickly individuals handled problems in the production environment, rather than on assuring delivered customer value and preventing production problems.

Action:

I worked with program management to identify and prioritize real-world usage scenarios for the services, and helped the test teams I managed align their test efforts with the scenarios that delivered the greatest customer value. I also championed a reduced emphasis on raw bug counts and greater attention to code coverage and problem prevention.

Results:

As a result of my efforts, testers could prioritize activities they could demonstrate were improving the product rather than pointlessly inflating their bug counts, bringing tester incentives into alignment with work that verifiably improved the services.

Competencies:

This experience demonstrates my ability to identify and analyze organizational problems and to work cross-functionally to improve processes.