No offense to the Red Hot Chili Peppers, but just because I listen to alternative music and “people like me who listen to alt music” enjoy hearing the Peppers, it does not mean that I want to hear them. I am not “people like me”; I am me. And I want the smart computers behind my favorite services to know me well enough to deliver the content I want.
It doesn’t matter which service we talk about: voice-controlled apps like Alexa, Siri, or Google Home; chatbots in Messenger, Slack, or WeChat; music stations and playlists from Pandora, Spotify, Apple Music, or YouTube Red; video on Netflix, Amazon Prime, or Hulu Plus – I want them to learn MY preferences and do better for ME.
To keep the examples consistent, I’ll stick with music. But you can apply this to any of the myriad service categories:
1.) Discovery and preference are not the same thing
Almost every service out there makes the same fundamental mistake in its product focus: nearly all are designed around discovery and keeping us there to see what’s next, not around directing content based on my patterns, preferences, or input. My input is just another data point for the masses.
I have specific tastes in music. I listen to a really diverse set of music, but it’s still specific. What I play is often based on my mood at the time, or what I’m doing (writing, exercising, traveling, cooking), or triggered by something stuck in my head that just has to be played for me to move on. Most of the time, I want to listen to known music. Sometimes, I want to discover something new.
The magical algorithms that power our favorite services fail to differentiate between the two.
There is this unspoken drive to create insights from so-called big data, where a service is expected to take millions of data points and then distill down what ought to be pretty good for most people. To me, especially as a product person, that’s the saddest end goal possible. I’d prefer awesome for me, or even terrible for me (possibly great for others), but not pretty good.
Every time Pandora plays the Chili Peppers for me, even though I’ve thumbed it down many times, I am reminded of the letdown of pretty good and how far away we still are from useful machine learning in delightful products. Real AI (artificial intelligence, or as I call the current iterations, “augmented” intelligence) and deep learning should primarily be applied to me individually. Make the service awesome just for me.
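To make “applied to me individually” concrete, here is a minimal sketch of what respecting explicit feedback could look like. Everything in it is a hypothetical illustration of the idea, not how Pandora or any real service is built: an explicit thumbs-down becomes a hard per-user filter before anything gets ranked, rather than one more vote in an aggregate model.

```python
from dataclasses import dataclass, field

# Hypothetical names for illustration only; not any real service's API.

@dataclass(frozen=True)
class Track:
    title: str
    artist: str

@dataclass
class UserProfile:
    # Artists this specific user has explicitly rejected.
    thumbed_down_artists: set = field(default_factory=set)

    def thumbs_down(self, track: Track) -> None:
        # Treat the thumbs-down as a hard negative for ME,
        # not just another data point for the masses.
        self.thumbed_down_artists.add(track.artist)

def filter_for_user(profile: UserProfile, candidates: list) -> list:
    # Drop explicitly rejected artists before any ranking or discovery logic runs.
    return [t for t in candidates if t.artist not in profile.thumbed_down_artists]

# Usage: after this, the Chili Peppers never come back for this listener.
me = UserProfile()
me.thumbs_down(Track("Californication", "Red Hot Chili Peppers"))
queue = filter_for_user(me, [
    Track("Californication", "Red Hot Chili Peppers"),
    Track("Ain't No Rest for the Wicked", "Cage the Elephant"),
])
```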
2.) Surprise and discovery are not sustainable
This is a continuation of the false premise above. Because the services are all focused on discovery, their product assumption is that discovery is the singular mechanism for maintaining our engagement.
I spent many years in the game industry and have built gamification elements into business and consumer software. There are some well-documented psychological elements that drive user engagement: the Zeigarnik effect, where an unfinished list or quest creates mental tension; endowed progress, where a partially completed task list makes it more likely that I will finish the rest; and so on. The goal is a release of dopamine, the pleasure signal in your brain, so that you’ll do more, or do it again.
However, dopamine is fickle. Well, our brains are fickle when it comes to dopamine. That’s why most game “rewards” are variable, so that we don’t grow immune to the effect. But if a content service (e.g., Pandora) repeatedly uses the same mechanism to keep me listening and pleased, the effect begins to fade. And it fades really fast if it keeps playing music I don’t want to hear. “Surprise, we got it wrong” is not the dopamine hit either of us was hoping for.
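For readers who haven’t built these mechanics, here is a toy sketch of the difference between a fixed and a variable reward schedule. The function names and the one-in-five odds are my own assumptions for illustration; the only point is that the unpredictable version is the one our brains take longer to tune out.

```python
import random

def fixed_reward(action_count: int, every: int = 5) -> bool:
    # Predictable: a reward lands on every 5th action.
    # Users learn the pattern, and the dopamine hit fades quickly.
    return action_count % every == 0

def variable_reward(avg_every: int = 5) -> bool:
    # Variable ratio: on average one reward per `avg_every` actions,
    # but any single action might pay off, which is harder to grow immune to.
    return random.random() < 1.0 / avg_every
```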
A sustainable content delivery service will keep me engaged because it plays the music I like and want to hear, and then occasionally helps me discover new tunes when I don’t know what to listen to next.
3.) Mass context is not my context
When I say “mass context”, I actually mean cop-out navigation. Few, if any, of the services offer any sort of real context filtering. They usually just default to showing “these other things are similar to that thing”. This White Stripes station is similar to your Cage the Elephant one. That is a dummy stand-in for real navigation or intelligence.
Context would mean that when I’m at the gym, Pandora suggests or plays my workout music. If I’m at work, it would switch to less aggressive tunes. If I’m moving, on the bus, or in my car, then it might try artists I’ve chosen while traveling previously. THAT is context. That is smart, or at least, it feels smart. And that is delightful.
There are actually very simple uses of technology, even just time-of-day inference, that can be applied for improved context. The music I want to hear on Tuesday night at 8pm is very likely different from what I want to hear on Friday night at 8pm. The service doesn’t even have to know me, or be smart, to differentiate based on time and date. But if it does know me, and Pandora easily could, then the customizations based on my specific contexts could be wonderfully unique and amazing – to me.
And me is all that really matters to any of us.
To be clear, time-of-day inference on its own is closer to the “pretty good for most people” problem. But if it is used to create a specific context for me and my music tastes, then it becomes a powerful hit of dopamine – well-placed content that feels magical in that moment.
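Here is a rough sketch of that time-and-date idea, under my own assumptions about context names and hours (none of this reflects how Pandora actually works): the service only needs a clock to stop treating Tuesday at 8pm and Friday at 8pm as the same moment, plus a per-user mapping from context to playlists to make it feel personal.

```python
from datetime import datetime

def infer_context(now: datetime) -> str:
    # Purely illustrative buckets; a real service would learn these per user.
    hour, weekday = now.hour, now.weekday()  # Monday = 0 ... Sunday = 6
    if weekday < 5 and 9 <= hour < 17:
        return "workday"            # less aggressive tunes at the office
    if weekday in (4, 5) and hour >= 20:
        return "friday-night"       # Friday/Saturday evenings get their own mood
    if hour >= 20:
        return "weeknight"          # Tuesday at 8pm is not Friday at 8pm
    return "default"

def playlist_for(user_playlists: dict, now: datetime) -> list:
    # `user_playlists` maps MY context names to MY music, e.g.
    # {"workday": [...], "friday-night": [...], "weeknight": [...], "default": [...]}
    return user_playlists.get(infer_context(now), user_playlists["default"])
```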
Summary
We’re mostly doing AI and deep learning wrong right now. It is hard to deliver singularly intelligent and context-aware applications for each unique person who uses a product. It is easier to build pretty good for most and sell a lot of ads or raise funding dollars. But as computing grows faster, cheaper, and more easily managed, the winners (or the coming disruptors) will be those who invest in delighting each of us as our own me. They will likely sell more ads and be worth more, too.