Listen: I am ideally happy. My happiness is a kind of challenge. As I wander along the streets and the squares and the paths by the canal, absently sensing the lips of dampness through my worn soles, I carry proudly my ineffable happiness. The centuries will roll by, and schoolboys will yawn over the history of our upheavals; everything will pass, but my happiness, dear, my happiness will remain, in the moist reflection of a streetlamp, in the cautious bend of stone steps that descend into the canal’s black waters, in the smiles of a dancing couple, in everything with which God so generously surrounds human loneliness.
There’s something odd about these social platforms being so neutral in so many of their operations (seemingly) in that they don’t endorse movements per se; they want to get out of the way and let users express themselves. Yet they force a non-neutral stance on every user when they make language choices such as favorite, friend, like across a set of interactions that can and do mean so much more than that.
There should be an experience of these services that doesn’t force blanket meaning on our actions, or if it does, it does so with the lightest possible meaning and the clearest possible explanation of consequences. When I like or favorite the first few times, the service should explain to me what that means and where this action lives on. “Like” sounds innocent, but it isn’t. “Favorite” is innocuous until you’re caught favoriting something offensive or dumb (like U.S. immigration policy).
Baratunde Thurston, “Kyle Kinane Vs. Pace Salsa Is Really About Failed Product Language And Design”
I love this so much. There are other parts of the post I want to challenge a bit later, but for the moment: damn straight.
The description of something is different from the thing itself. The language describing use signifies the intention of the interaction’s creator, but it does not define the user’s use. A platform that wants to encourage openness, sharing, greater participation, etc.—and it doesn’t matter whether it’s for social good or to enrich the platform’s owners—should adapt its language to the users’ behavior. A platform that wants to control its users—again, regardless of purpose—should insist on adapting its users’ behavior to its language. Twitter, Facebook, and the rest of the social web position themselves as the first kind, but their interfaces suggest they’re the second.
Maybe it turns out companies can make money in the latter but not in the former; in his post, Baratunde describes one way Facebook is doing that. That’s fine. That’s how capitalism works. But it’s insulting to us all and will continue to damage relationships among people if those companies keep using language that’s incongruous with use.
For their own sake, these companies should care about this. Interacting with others in a particular way requires people to trust the veracity of that interaction. The less we trust a particular interaction, the less honest we, the users, will be in how we use it. The less honest we are, the less advertisers will value the data about our use that lets them target fertile audiences. That targeting is efficiency. That targeting is advertising dollars well spent. Advertisers who are inefficient lose. They will go somewhere else.
Twitter, Facebook, Tumblr, and all the rest need us to trust them. Their UI should reflect that.
“I work so many hours at the factory. I need to find a way for my daughter to live a better life than me.”
“How do you do that?”
“I’m not sure. No time to think about that.”
Jean Renoir, La Règle du Jeu