The problem with algorithms

For those of us who spend time thinking about, building, testing, and refining personalization algorithms, it’s all too easy to ignore their key limitation: they’re impersonal. We spend enormous amounts of time adapting algorithms to our users’ particular characteristics and context, but they will likely always lack empathy, sympathy, and basic kindness.

This post from Eric Meyer captures the challenge well. If you’ve used Facebook this holiday season, you’ve undoubtedly experienced its “Year in Review” posts hogging your timeline. For Meyer, the algorithmic “feature” reminded him of events that he will not soon forget:

A picture of my daughter, who is dead.  Who died this year.

Yes, my year looked like that.  True enough.  My year looked like the now-absent face of my little girl.  It was still unkind to remind me so forcefully.

And I know, of course, that this is not a deliberate assault.  This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases, reminding people of the awesomeness of their years, showing them selfies at a party or whale spouts from sailing boats or the marina outside their vacation house.

But for those of us who lived through the death of loved ones, or spent extended time in the hospital, or were hit by divorce or losing a job or any one of a hundred crises, we might not want another look at this past year.

My experience was similar to Meyer’s. When I received the prompt to see my “Year in Review,” I worked hard to ignore it. After a challenging year, I’m thankful for that decision.

For me, this experience is an important reminder: personalization can be impersonal. When designing personalization algorithms, we have to predict both what our users want to experience and what they want to avoid.
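To make that concrete, here is a minimal sketch in Python. Everything in it is assumed for illustration: the Item fields, the enjoyment and distress scores, and the rank_for_user function are hypothetical, not any real system’s API. The idea is a ranker that models what a user wants to avoid alongside what they want to see, suppressing candidates whose predicted distress crosses a threshold.

```python
from dataclasses import dataclass


# Hypothetical candidate record; the fields and scores are illustrative
# assumptions, not part of any real recommendation API.
@dataclass
class Item:
    item_id: str
    enjoyment: float  # predicted probability the user wants to see this (0..1)
    distress: float   # predicted probability this resurfaces a painful memory (0..1)


def rank_for_user(items: list[Item], distress_threshold: float = 0.2) -> list[Item]:
    """Rank candidates, modeling what the user wants to avoid as well as
    what they want to see: items whose predicted distress exceeds the
    threshold are suppressed entirely, and the rest are ordered by
    enjoyment penalized by residual distress."""
    safe = [item for item in items if item.distress <= distress_threshold]
    return sorted(safe, key=lambda item: item.enjoyment - item.distress, reverse=True)


if __name__ == "__main__":
    candidates = [
        Item("party-selfie", enjoyment=0.9, distress=0.05),
        Item("hospital-photo", enjoyment=0.4, distress=0.8),  # suppressed
        Item("vacation-marina", enjoyment=0.7, distress=0.1),
    ]
    for item in rank_for_user(candidates):
        print(item.item_id)
```

Even a crude second signal like this changes the failure mode: instead of forcefully resurfacing a painful memory by default, the system errs toward leaving it out.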

Update: Facebook apparently agrees.