# Weeknotes 98 (1 Nov) — Customer satisfaction score, UX Brighton, and the wider definition of intelligence
Work-wise:
This week, in addition to the ongoing wireframe development, I was asked to support a colleague in coming up with an MVP customer satisfaction reporting strategy. CES and CSAT are new acronyms for me, and I'm immediately excited and grateful for the opportunity to work on something unfamiliar and learn something new. I learned that CES stands for customer effort score. It's that 5-star rating you usually give after a transaction, a service interaction, or use of a product. It's those kiosks at places like the airport and the zoo with 5 buttons in a row showing smiley faces, ranging from one looking very upset to one looking very excited. My initial impression of them was quite positive. What a great and effortless way to capture feedback! That is, until I saw kids approaching, with no clue what the buttons were for, pressing all of them as if they were a drum set. Those colourful buttons are just too tempting not to press. The usefulness of data is tied to its accuracy. What does the data from those kiosks actually tell us? If it's positive, how can we be sure what is working so we keep doing it? If it's not very positive, what can we do about it? What should we even improve?

I know a service that prompts people to provide a satisfaction rating at the end of submitting a request. It looked like it was part of the request form, so it was no shock that the CES was overwhelmingly positive despite user research indicating otherwise. People thought they needed to give a rating in order to raise their request! Better to give a good rating, they reasoned, in case it gets factored into how well their request gets resolved.
Context is key. Data without context is the opposite of useful: it's things taken out of context, it's misleading, and it shouldn't be used at all. With CES, there's no contextual information, which makes it not very helpful on its own. That's where CSAT comes in to help. CSAT stands for customer satisfaction score. It's really a series of scores based on a survey that collects both quantitative and qualitative data. The survey is short, typically 2–5 minutes to complete, and gets sent to individuals at a specific point, usually the end, of using a service or product. It takes a bit more effort and time to complete, but because of this, there's more assurance that the data is accurate.

Time is precious. If someone is willing to take time out of their day to provide feedback to ultimately improve a service, we should listen. However, creating surveys is much more complex than the few questions the eye can see on the form. Like any good piece of writing, the intent needs to be clear, and one needs to have some understanding of the intended audience. CSAT surveys can be a lot more contextual. Where CES asks for a simple effort score at a specific touchpoint, CSAT can gather feedback on the entire journey and experience. You can go into a lot of detail, asking at which stage of the service or product they had challenges. There's free text to allow more context and individual experiences, but it's mostly quantitative to make analysis more manageable. In this sense, CSAT isn't unlike the user research surveys I've developed in the past. The goal isn't to measure everything, but what matters and can move the needle.

Asking people to provide information that doesn't get used is malpractice. Collecting information just for reporting's sake, without it being actioned, is wrong. When people realise that their time isn't valued and their feedback isn't listened to, they stop caring. Why bother? As product development teams, we need to think about the ultimate outcome we are striving for: a service or product that's fit for purpose and empowers individuals, or one that just adds another layer of complexity to people's lives?
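As a side note for future me: a CSAT score is commonly calculated as the percentage of respondents who pick one of the top two ratings on a 1–5 satisfaction scale. This is the textbook formula rather than anything specific to our MVP strategy, and the numbers below are made up, but a minimal sketch looks like this:

```python
# Minimal sketch of the common CSAT calculation: the percentage of
# respondents who rate their satisfaction 4 or 5 on a 1-5 scale.
# (Textbook formula, not the exact method in our reporting strategy.)

def csat_score(ratings: list[int]) -> float:
    """Return the CSAT score (%) for a list of 1-5 satisfaction ratings."""
    if not ratings:
        raise ValueError("No ratings to score")
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# Example: 7 out of 10 respondents rated 4 or 5
print(csat_score([5, 4, 4, 3, 5, 2, 4, 5, 1, 4]))  # 70.0
```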
When it comes to designing CSAT forms, one needs to understand the key insights we wish to uncover. Is it about identifying which points people are struggling with? We need to understand the context of the people as they're engaging with our products and services. How familiar and confident are these users when interacting with the services? Ultimately, it's about being able to identify what's working well, so we keep doing it, and what's not. Without the right level of context, we won't know the actual challenge areas to address and improve.
I quite enjoyed thinking about these questions and how to collect useful data that will make a difference.
When requirements drive and break designs: we need requirements to give steer and direction, but they need to be tied to the desired outcome. Requirements fail when they are tied to a solution. It's like creating a solution for a problem that hasn't been defined. Not a recipe for success.
Requirements should be loose and value-driven. They should be flexible and iterative as we increase our understanding of the problem and accumulate data and evidence. There are soft and hard requirements, and the need to differentiate between "needs" and "wants" is critical. I've been in too many client review sessions where new "requirements" creep in, often from the same individual, and read more like a wishlist than actual requirements. There needs to be evidence to back up the need. Those "wants" can be turned into requirements if it's determined that they reflect real needs of the people we're designing for.
Life-wise:
It was a successful Halloween. I took our nearly 3-year-old son Layton trick-or-treating for the first time and saw a different side of him. He's normally a shy boy, but when it came to candy, he was at the forefront of door-knocking and grabbing treats from the treat bowls. The power of sugar is not to be underestimated! I need to come up with a strategy to limit this child's sugar intake over the next few months… and years! 🙈
Another highlight this week was attending the UX Brighton conference today. It was nice to be in a place full of people who understand the value of human-centred design. It's reassuring that we're not alone and that user-centred thinking is still very much needed with the emergence of AI tools. The talks were all centred around AI, and many people in the audience were wondering about the usefulness of AI in our work. My takeaway is that while AI has a lot of advantages going for it (analysing patterns and processing large amounts of data in a short time), there are still things that we humans are currently better at (contextualising, intonation, nuance, lived experience, holistic thinking, reading social cues, etc.). It's all about how we can work more effectively with AI technology rather than one replacing the other.
My favourite talk was by Maggie Appleton, who talked about the emerging problem of low-quality AI-generated content on the web, which isn't always factual and gets used as training data to create even more poor-quality content on the growing scrapyard of the internet. I really don't wish for a future where all the "free" content on the web is rubbish AI-generated stuff that isn't based on reality. This is why I love conferences like this: they're essentially a platform to share learnings, raise risks, and call for a collective effort to make our future a brighter place.
The conference was also the perfect excuse for me to use my reMarkable tablet. I love having a digital copy of things to refer back to, and the reMarkable is great at that while still giving the feeling of writing on paper.
Things I came across:
My favourite article this week is a 2-year-old post by Adam Mastroianni titled "Why aren't smart people happier?" It talks about our flawed ways of measuring intelligence and the implications.
I love the following bits from the article:
“There is, unfortunately, no good word for “skill at solving poorly defined problems.” Insight, creativity, agency, self-knowledge — they’re all part of it, but not all of it. Wisdom comes the closest, but it suggests a certain fustiness and grandeur, and poorly defined problems aren’t just dramatic questions like “How do you live a good life”; they’re also everyday questions like “How do you host a good party” and “how do you figure out what to do today.””
This is something I struggle with as well. Many important things are hard to quantify, so we don't measure them or factor them into our product or service creation. That's really flawed. Just because something isn't easily measurable doesn't mean it's not worth measuring, or at least trying to.
Photo of the week:
Quote of the week via UX Brighton talks:
“Every project is a culture change project.”