A common aspect of many of my projects is assessing the current state of a given program or service in an organization to see what is working well and where the opportunities for improvement lie. In some situations I’ve found it useful to use a strongly agree/disagree scale (the Likert scale), especially when you are dealing with services that are not overly process-driven, or where you are considering a service with an eye for something other than pure optimization. I’ll explain a bit further with an example.
There are many frameworks, with CMMI being the most notable, that provide guidance on assessing the maturity of an organization. These are process-focused models, and most use an assessment scale the same as or similar to what is defined by CMMI, which has the following levels:
- Initial
- Managed
- Defined
- Quantitatively Managed
- Optimizing
The situation where I ended up using the Likert scale was one where we were considering the readiness of an organization to reach a goal of offering services outside of their region (even internationally). An early step towards this goal involved an assessment of how well-positioned they were at the present time, and from there to look at where they needed to be and the gaps that had to be addressed.
This analysis had a process aspect but also several other aspects covering areas such as governance and staff training. After struggling to find one cohesive model that could address all these areas (and do so without taking several months to complete), we eventually made use of a set of statements to which we let stakeholders react. These statements were grouped across governance, process, roles, data, and technology. For example:
- (governance) Differences in legislation will not have a significant impact on our work.
- (process) Our processes are consistent across the organization.
- (roles) Our skills and training are generalizable outside our region.
- (data) We have access today to the required data to perform our work outside of our region.
- (technology) Our systems have sufficient licenses to take on additional volume.
Within each area we defined 5-7 questions where stakeholders could choose one of five answers:
- Strongly disagree
- Disagree
- Neither agree nor disagree
- Agree
- Strongly agree
After getting answers from a range of stakeholders and heat-mapping the results, we could see some clear areas and themes on which to focus. Perhaps the biggest positive of this method was that the results were intuitive: all the meaning was there in plain-language statements, without a need to explain the levels of the scale and deal with different interpretations of them. The scale simply relayed the stakeholders' reactions to the statements, and in a crowdsourced way we were able to quickly get a good sense of the strengths and weaknesses of the organization relative to their objectives.
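To make the aggregation step concrete, here is a minimal sketch of how such responses might be rolled up into a simple text heat map. The stakeholder names, areas, and answers below are hypothetical, and the 1–5 numeric mapping is one common convention for scoring Likert answers, not something prescribed by the method itself.

```python
from statistics import mean

# One common numeric mapping for the five Likert answers
# (1 = strongly disagree, 5 = strongly agree).
SCORES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def area_scores(responses):
    """Average the numeric scores per area across all stakeholders.

    `responses` maps each stakeholder to {area: [answer, ...]}.
    """
    totals = {}
    for answers_by_area in responses.values():
        for area, answers in answers_by_area.items():
            totals.setdefault(area, []).extend(SCORES[a] for a in answers)
    return {area: mean(vals) for area, vals in totals.items()}

# Hypothetical responses from two stakeholders.
responses = {
    "stakeholder_1": {
        "governance": ["Agree", "Strongly agree"],
        "process": ["Disagree", "Neither agree nor disagree"],
    },
    "stakeholder_2": {
        "governance": ["Strongly agree", "Agree"],
        "process": ["Strongly disagree", "Disagree"],
    },
}

scores = area_scores(responses)
for area, score in sorted(scores.items(), key=lambda kv: kv[1]):
    # A crude text "heat map": weakest areas (lowest scores) surface first.
    print(f"{area:12s} {'#' * round(score)} {score:.2f}")
```

Sorting lowest-first puts the likely gaps at the top of the output, which mirrors how the heat-mapped results highlighted the themes to focus on.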
I hope you find this useful in some of your work. Even if it does not fit your task exactly, you may find it helpful when doing an initial assessment before a more full-blown one, or simply as a way to jog your thinking in a particular area.