Examining the ongoing challenges of delivering high-quality, value-added ERP services in Higher Education.
Saturday, September 21, 2013
I have been thinking a lot lately about the importance of success criteria. Every program objective, no matter how straightforward it may seem, can be misinterpreted by stakeholders. Making this worse is the fact that objectives are often boiled down to simple, jargon-free statements (something that I have advocated on this blog in the past) that work far better for marketing purposes yet conceal much of the subtlety beneath. The trick is to publish that tight, pithy, approachable statement to the masses but to simultaneously maintain a more rigorous and complete definition that uses the appropriate number of words to tell the whole story. One might draw a parallel between this exercise and the Agile technique of writing a pretty little user story on the front of the index card and writing a whole bunch of "and I’ll know when it is done" statements on the back. That audience-facing statement depends on so much more.
Let’s dissect a component of my very own pithy (or so I like to think) mission statement from my current project: "excellent user experience." This phrase, clean as it may be, contains multitudes. Unpacking each of those words could fill a page, and while that may sound rather tedious, imagine the trouble I’ll be in if I claim victory without having clearly articulated the measures first.
Is it reasonable to expect that 100% of users will think that our interface is the greatest ever? Hell, no. That would guarantee failure -- there is not a single thing in human history that has received universal praise -- no work of art, dish of food, piece of music, and certainly no graphical user interface! This will not be an easy one -- we will need to think about benchmarks, objective and subjective measures, and anecdotes. One of my project sponsors has advised against establishing a vision predicated on comparison against a weak baseline -- having user satisfaction of 50% may be an improvement over 40%, but it isn't anything to tout -- yet for something such as user experience, such flawed measures are at least better than no measure at all.
Let’s pick another that isn’t quite so fluffy -- "automate process x" -- I have plenty of words to substitute for "x" in my world (degree audit, tuition calculation) but the lesson applies to any process in any industry. The declarative sentence unadorned by conditional modifiers is the problem -- do we fail if certain manual steps survive? Or if we exclude certain use cases from automation? As any experienced process engineer knows, there is a point of diminishing returns when it costs more to automate the next case than to leave it alone. But do your stakeholders see it that way? Only if you set their expectations appropriately and gain their support for an achievable success metric.
Here, it may be best to break it down, focus on the characteristics of the use cases that are truly included, and aim for 100% completion of that set. To take a concrete example, let's say we have 50 academic programs. Of those, 40 (80%) have rules that can be codified into automated logic; 10 (20%) do not. If we successfully implement degree audit in 38 of those programs, that's 95% completion of our target. But if we never reset expectations on success, we might have stakeholders questioning us about our lousy 24% failure rate!
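The arithmetic above is worth making explicit, since the two percentages come from different denominators. A minimal sketch (illustrative numbers taken from the example, with hypothetical variable names):

```python
# Same project outcome, two very different-sounding metrics.
total_programs = 50   # all academic programs
automatable = 40      # programs whose rules can be codified (the scoped target)
implemented = 38      # programs where degree audit was actually delivered

# Measured against the scoped target, we are nearly done:
completion_vs_target = implemented / automatable                      # 38/40 = 0.95

# Measured against the naive "all programs" baseline, it looks far worse:
perceived_failure = (total_programs - implemented) / total_programs   # 12/50 = 0.24

print(f"Completion vs. scoped target: {completion_vs_target:.0%}")   # 95%
print(f"Perceived failure vs. all programs: {perceived_failure:.0%}") # 24%
```

The only difference between "95% success" and "24% failure" is which denominator the stakeholders have in their heads -- which is exactly why the scoped target has to be agreed on up front.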
I don’t mean to suggest letting yourself off the hook or taking it easy and setting the bar low. This is simply an argument in favor of explicit and transparent measures. Spend the time to reach common understanding on what success looks like or you might doom yourself to the unpleasant feeling of a project perceived simultaneously as a success and a failure, depending on whom you ask!