I know they’re saying this about every industry right now, but it’s machine learning. Not just in terms of products and interfaces that UX designers work on, but also in terms of the UX process. One of the biggest problems with UX as an org function is that it doesn’t scale very well. I think ML can fix that, and I wrote this article about it on LinkedIn this morning:
By now, most companies of a certain size have made at least one attempt to institutionalise UX. But few (if any) have managed to truly make it a fluid part of their everyday design processes.
Oftentimes UX initiatives are doomed from the beginning because the company’s culture simply lacks sufficient interest in the user’s experience. Sometimes UX fails because of functional silos that prevent the designated “UX Team” from doing their job. Other times management just doesn’t equip its UX team with enough resources to be effective.
But what’s always ignored in the post mortem of every failed UX initiative is a simple economic flaw that underpins all of the above problems: UX doesn’t scale. At least not in its current paradigm.
Today, UX is all about qualitative research and user testing.
But the more users you have, and the more universal your products are, the harder it is to extrapolate general insights from the feedback of a few users. It gets harder to capture your users in a handful of personas. This problem is multiplied by the number of products you offer and the number of design projects you have at any given time. The bigger the company, the harder it gets to amalgamate or summarise feedback. And recruiting 500 user research participants per year is a heck of a lot harder than recruiting 5 for a small, limited-scope study.
To make things worse, UX research is essentially a separate function from the actual design process. It's a function owned by one person or a small group of people, and for them, organising and scheduling user research gets significantly more complex as research activities ramp up across different projects.
Perhaps the most painful part is that the feedback loop between research and design is too long, which puts companies in an unfortunate position: if they do as much user research as they should, they'll never ship on time. And every missed deadline adds to the backlog.
So where does that leave us? There’s no doubt that the basic concept of UX is incredibly powerful. Focus relentlessly on the user, and business success will follow.
And UX research is fundamentally needed because corporate designers and decision-makers are biased. Without objective user data, the design process quickly devolves into Groupthink, Design-By-Committee, Stakeholder Bargaining, and Convenient Assumptions.
So how can we capture all the nuance and complexity of the user in such a way that we can seamlessly insert it into the design process and make it fundamentally work at “BigCo” scale?
The answer, in my opinion, is machine learning.
Every time we collect user feedback, we should use it to “learn” and predict how users will respond to future design initiatives. Not based on our gut feel or “expertise”, but on an objective understanding of users’ preferences.
It seems counter-intuitive, but the future of UX research might be an AI. That’s certainly how we’re approaching things at EyeQuant, where our web service offers instant, personalised feedback on website UI based on past user research.
Using predictive algorithms based on past research could enable us all to get meaningful user experience insights almost instantly, with extremely low marginal cost, while radically reducing the need to co-ordinate and plan live user research sessions.
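To make the idea concrete, here’s a minimal sketch of what “learning from past research” could look like (my own illustration, not how EyeQuant or anyone else actually implements it): describe each previously tested design with a few numeric features, pair it with the score users gave it in research sessions, and fit a model that can then score an untested design instantly. The feature names and numbers below are invented purely for the example.

```python
# Toy illustration: predict a user-preference score for a new design
# from structured features of past designs and their research scores.
# All feature names and data are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Each row describes one past design:
# [visual_clutter, text_density, cta_contrast, num_form_fields]
past_designs = np.array([
    [0.8, 0.6, 0.2, 12],
    [0.3, 0.4, 0.9, 3],
    [0.5, 0.7, 0.5, 6],
    [0.2, 0.3, 0.8, 2],
    [0.9, 0.8, 0.1, 15],
    [0.4, 0.5, 0.7, 4],
])
# Average satisfaction score (0-100) from past user testing of each design.
user_scores = np.array([42, 88, 61, 91, 35, 79])

# Fit a model on everything we've learned from users so far.
model = GradientBoostingRegressor(random_state=0)
model.fit(past_designs, user_scores)

# Sanity-check generalisation before trusting the predictions.
print(cross_val_score(model, past_designs, user_scores, cv=3))

# Near-instant, near-zero-marginal-cost feedback on an untested design.
new_design = np.array([[0.35, 0.45, 0.85, 3]])
print(f"Predicted user score: {model.predict(new_design)[0]:.1f}")
```

The point isn’t the particular model. It’s that every live research session adds a row to the training data, so the feedback gets cheaper and faster the more you learn, instead of more expensive.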
There’s already a niche market for this, and I think it’s now a question of when, not if, this type of UX research goes mainstream.
What do you think: is machine learning the key to making UX scale?
[taken from here: https://www.linkedin.com/pulse/ux-doesnt-scale-how-can-we-fix-kurtis-morrison]