Yesterday I received a call from a number I didn’t recognize, and I let it go to voicemail. The message was from a guy named Nathan who sold us the car I drive to work every day. We bought the car three years ago, anticipating the arrival of our twins. About a month after we got the car, Nathan had emailed me a survey to fill out about the car and our experience with it, and included a coupon to a local business as an incentive. I completed that survey, and I hadn’t heard from Nathan since—until now. Curious, I called him back. He said he was just calling to see how our car had been treating us. It was just a casual conversation with a guy who clearly enjoys his job and believes in what he does, but it got me thinking about what he might have done with that initial survey, and about the kind of information he might have been able to gain if he had taken a more formal approach to this follow-up call.
In the initial survey, I’d been asked about the sales experience and my first few weeks with the car, all rated on a standard Likert scale. Would I describe the sales experience as positive? Was the car performing as I expected? Rate your level of satisfaction with various features of the car, and so on.
I’m guessing the reason he called yesterday was to see if I might be interested in a trade-in. But even if the motive was purely to get me to come spend some more money at the dealership, that call was an opportunity to gather some great data about the car’s longer-term usability and performance, and about my satisfaction. Given that I’ve driven the car over 10K miles and now have three children, I might have had some very interesting things to tell the manufacturer about ease of use, safety, and basic features of the car that I might not have even thought about in that first month.
It called to mind a problem we encountered at my previous job, where we developed e-learning modules for delivery via the web. We asked clients to submit a satisfaction survey post-implementation, and our work projects typically got great user reviews and great accolades from award organizations. But when it came time to submit a response to an RFP that asked for long-term performance and effectiveness data, we couldn’t do it. We simply lacked that feedback mechanism. Consequently, we have lost potential work and revenue to competitors who were better prepared to show lasting results from their trainings.
Think about your work. Aside from arguments we could entertain about today’s needs for agility versus maturity, I think we can all agree that there is value in varying windows of time for feedback, whether you’re in ITSM, design, sales, or any other line of work. When you complete a project, release a new version, or close a deal, everyone celebrates. You gather customer and press quotes, obtain user feedback, and compare your work to others in the market—but are you gathering comprehensive performance and satisfaction data to support your product’s benefits and your consumers’ experience? And if so, how are you putting that data to use? Does it inform design and upgrades? Does it launch new application performance management (APM) projects? Does it impact sales strategy? What analytics are you investing in?
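As a toy illustration of what "putting that data to use" can mean in practice, here is a minimal sketch of comparing two waves of Likert-scale survey responses to spot longer-term satisfaction drops. The question names, scores, and threshold are all invented for this example:

```python
# Hypothetical sketch: comparing initial vs. follow-up Likert survey
# responses (1-5 scale) to flag areas where satisfaction declined
# over time. All question names and data here are invented.
from statistics import mean

# One list of respondent scores per question, per survey wave.
initial = {
    "sales_experience": [5, 4, 5, 4],
    "ease_of_use":      [5, 5, 4, 5],
    "safety_features":  [4, 4, 5, 4],
}
follow_up = {
    "sales_experience": [5, 4, 4, 4],
    "ease_of_use":      [3, 4, 3, 3],
    "safety_features":  [4, 5, 4, 4],
}

def satisfaction_deltas(before, after):
    """Mean score change per question between two survey waves."""
    return {q: round(mean(after[q]) - mean(before[q]), 2)
            for q in before if q in after}

def declining(deltas, threshold=-0.5):
    """Questions whose mean score dropped at least `threshold` points."""
    return [q for q, d in deltas.items() if d <= threshold]

deltas = satisfaction_deltas(initial, follow_up)
print(declining(deltas))  # → ['ease_of_use']
```

In this made-up data, ease of use slipped from a 4.75 average to 3.25 between waves, which is exactly the kind of signal a one-time post-sale survey would never surface, and exactly what could feed design, upgrade, or sales decisions.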
Today’s consumers are presented with myriad comparable options for just about everything, from marketplaces themselves, to cars, to appliances, to e-learning, to software, to support. The winners in any vertical will be those who can demonstrate not just immediate benefits, but long-term value. Build, support, or sell things you believe in, get the data to back them up, and find ways to analyze and monetize that data. That’s what puts you ahead of the pack.
Find me at Monika Bustamante – Google+