I recently worked with a great product management organization—great in the eyes of the management team that assembled them, at least.
This inbound team of deeply technical PMs was responsible for the company's 2-3 year product roadmap. As a result, their main constituents were the entire engineering community and, more specifically, the software and hardware architects interspersed throughout it.
Success Factor
Inside the organization, individual PMs acted as subject matter experts (SMEs) in the various disciplines that made up the software/hardware product suites sold by the company. One of the individual deliverables was the product requirement—that ubiquitous, yet fuzzy, essence of the PM’s analysis, insight and judgment of what an engineering team needs to focus on for profitable product delivery.
The 12 Blind Men Around the Elephant Problem
The feedback from the engineering team was the typical "we cannot understand what these mean" and "we can't measure that" response. PM management, faced with this pushback and lack of acceptance, realized that they really had no way to judge the quality of a requirement themselves.
Further, they realized they had no way to provide formal guidance or feedback in the form of performance reviews or other HR-oriented methods for PM ranking and motivation.
A Rubric? Isn’t that a Cube Game?
Remember writing term papers in high school? Let’s apply a well-known technique your English teacher used to determine a grade: The Rubric.
Rubrics can be created in a variety of forms and at varying levels of complexity. However, they all share common features that:
- focus on measuring a stated objective (performance, behavior, or quality);
- use a range to rate performance; and
- contain specific performance characteristics arranged in levels indicating the degree to which a standard has been met.
Let’s take a look at how one might be built to evaluate a product requirement. (Source: www.sdsu.edu).
Here, we have a list of characteristics drawn from the Immutable Laws of Requirements*. For each characteristic, the rubric defines three levels of observable, qualitative criteria that can be applied to a product requirement (or to a group of requirements taken together).
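To make that structure concrete, here is a minimal sketch of such a rubric expressed as a simple data structure. The criterion names and level descriptions below are illustrative assumptions for the sake of the example, not the actual Immutable Laws:

```python
# A minimal, illustrative rubric: each criterion (row) maps to three
# levels of observable characteristics, worst to best. The criteria
# and wording are hypothetical stand-ins, not the actual Immutable Laws.
RUBRIC = {
    "Unambiguous": [
        "Open to multiple interpretations",
        "Mostly clear, with some vague terms",
        "Single, precise interpretation",
    ],
    "Measurable": [
        "No way to verify the requirement",
        "Partially verifiable",
        "Clear, testable acceptance criteria",
    ],
    "States the what, not the how": [
        "Prescribes an implementation",
        "Mixes product need with design detail",
        "Describes only the product need",
    ],
}
```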
Note that an optional point system has been assigned to each level. Generally, I’ve found that using hard numbers tends to produce too much focus on the absolute score rather than the overall message a rubric can provide.
Next, the blank columns support a summing and weighting function. Drop the numeric scores and you can do away with the summing. The weighting function, however, can still be used to assign relative weight to each of the Immutable Laws, based on the judgment of PM management.
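If you do keep the numeric scores, the summing and weighting mechanics are straightforward. Here is a sketch under the same illustrative assumptions as above; the weights and point values are placeholders that PM management would set:

```python
# Illustrative weights reflecting PM management's judgment of the
# relative importance of each (hypothetical) Immutable Law.
WEIGHTS = {
    "Unambiguous": 3,
    "Measurable": 2,
    "States the what, not the how": 1,
}

def weighted_score(level_by_criterion: dict[str, int],
                   level_points=(1, 3, 5)) -> float:
    """Sum weight * points for the level chosen on each criterion,
    normalized so a perfect requirement scores 1.0."""
    raw = sum(WEIGHTS[c] * level_points[lvl]
              for c, lvl in level_by_criterion.items())
    best = sum(WEIGHTS[c] * level_points[-1] for c in level_by_criterion)
    return raw / best

# Example: precise wording, but hard to measure.
print(weighted_score({"Unambiguous": 2,
                      "Measurable": 0,
                      "States the what, not the how": 1}))  # ~0.67
```

Dropping the numbers simply means ignoring the final score and keeping the per-criterion levels as feedback, which is usually where the real message lies.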
So, how do we apply it? Generally, the rubric is built in collaboration with the engineering community. Doing so signals to your PM team that the rubric represents engineering's own guidance on what makes a product requirement acceptable.
How did it work for the organization mentioned at the beginning of this post? Pretty well.
Understand that many of the PMs had come from engineering and found the discipline of writing objective, concise product requirements more difficult than they had originally thought, even with their former engineering colleagues now sitting on the other side of the table!
PM management, however, now had a tool to help develop more disciplined, higher-quality requirements and to raise overall PM credibility across the entire engineering community.
The discipline of Product Management is in the details. By using the well-known Rubric method, you can offer your PMs more guidance on what you expect from them. If you would like to know more, contact me and I’ll be glad to discuss your situation in detail.
* Attributes: Immutable Laws of Requirements: www.EgressSolutions.net
Very interesting article. I particularly like the approach of quantifying the quality of product requirements. It seems to me that it's a developed form of the SMART criteria (specific, measurable, assignable, realistic, time-related) used to assess requirements in software development, among other sectors of business. It would be interesting to see whether machine learning techniques could be used to estimate the scores of requirements for the table listed in your blog. The earlier upstream that poor-quality requirements are caught, the better, so this would be a fantastic tool for authors of product requirements.
You are spot on with the example of SMART. I personally like this approach to testing goals and objectives. The framework we used for this exercise is grounded in the INVEST concept (Independent, Negotiable, Valuable, Estimable, Sized appropriately, Testable). It is very similar and has the same effect of getting product teams to improve the quality of their written requirements. As for automating and analyzing the scoring, that is very intriguing. A big hurdle to overcome would be creating a standard model that works across a wide range of software projects. I like the concept a lot!
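To make the idea concrete, a first cut could be as simple as a text classifier trained on requirements that a human evaluator has already rubric-scored. The sketch below uses toy placeholder examples and assumes scikit-learn; a real model would need a large corpus of scored requirements:

```python
# Toy sketch: predict a rubric level from requirement text.
# Training examples are fabricated placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The system shall return search results within 2 seconds.",
    "The product should be fast and easy to use.",
    "Users can export any report as a PDF in at most 3 clicks.",
    "Make the dashboard better.",
]
levels = [2, 0, 2, 0]  # rubric level assigned by a human evaluator

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, levels)

print(model.predict(["The app shall support 500 concurrent users."]))
```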
I am filling in for David… Keep watching and reading.
Hi Gerard, thanks for the comments. Agreed that SMART is certainly what many people would tend to gravitate to. The good news/bad news about these approaches is that they can be applied equally to a product requirement (what the feature is) or to an engineering requirement (how the feature is implemented). That is, they can be applied broadly.
When a Checklist is not a Checklist…
When you inspect the Immutable Laws entries, you quickly see that the rubric is really a checklist of core PM elements. In other words, it answers the question, "Did the PM actually demonstrate the diligence called for by the company, and how well was the articulation of the requirement executed?"
Evaluator = Evaluatee?
The key challenge in the project above was that the first-level managers had also been engineers – they shared the same perspective as their direct reports. It was, therefore, easy for them to fall into the same trap of approving an engineering requirement instead of a product requirement. They needed a method that allowed them to evaluate PM staff empirically, both for ranking purposes and for completeness – something that could pass muster with HR, if needed, based on the PM job profile. A tall order, indeed.
The Win-Win
One aspect that emerged as we developed the rubric was the importance of engaging a peer team from the engineering community in its creation. Without the engagement of key thought-leaders from that community, applying the whole process would have been navel-gazing. Because of that engagement, when a product manager's requirements passed the rubric evaluation, they were perceived as more valid and worthy in the engineering community. This was the win-win for the PM and engineering communities – and the company, of course.
So, while SMART and INVEST are excellent teaching methods for approaching the writing of requirements, the need here was for a measurable means of evaluation that took as much subjective judgment out of the process as possible, for both the manager and the PM.
Again, thanks for the comments – keep them coming!