I'm not a massive fan of cars. Never having learnt to drive, I find their design pretty much passes me by. But a wife working at BBC Magazines means a bathroom floor covered with old copies of Top Gear. Yesterday I came across a review of the Audi A1 (I'd link but topgear.com just returns a 500) which said:

Even its stop/start system is behind the Mini's - it keeps finding reasons not to stop at all. Not that it gives you its excuses, so I don't know how I can alter my driving style to make it more active. Too warm? It's not summer yet. Too cold? The coming of spring made no difference. Battery low? Shouldn't be. Aircon or heater on? Nope, have been careful to avoid that. Lights on? That changes nothing.

This struck a chord with a conversation going on on Twitter about personalisation and personalised recommendation, which had been triggered by an Eli Pariser article in The Guardian that said, roughly:

the increasing personalisation of information [..] threatens to limit our access to information and enclose us in a self-reinforcing world view.

The opposing view was taken in a post by Better the Mask saying, roughly:

A lot of this article, I think, reads like a digital complement to the Reithian view on broadcasting - that it should be public service, give people what they need not what they want. High-minded, certainly, and noble in a certain light, but also highly problematic. Who decides what "we" as a community need?

Much of the debate seemed to centre on the usual paternalist reading of Reith with "low culture" as the sugar to make the "high culture" pill go down. I'm not sure that's entirely accurate. I don't remember ever seeing "inform, educate and entertain" rendered with bolds or italics. And as Tony Ageh might say, scheduling Top of the Pops next to Panorama was as much about exposing Top of the Pops to Panorama viewers as it was about exposing Panorama to Top of the Pops viewers.

I'd probably go further and say any attempt to break down culture into high and low is itself paternalistic and just leads to the usual sneering at the poor old Daily Mail reader. It also ignores the connections between things. It's usually not that many skips of the graph from "low" to "high"; there are no continents in culture.

And from a personalised recommendation perspective, all the anecdotal evidence from user testing I've seen suggests that people value recommendations from outside their bubble. Obviously that doesn't mean recommending Bells on Sunday to Westwood fans (or vice versa). But neither does it mean just recommending Casualty to Holby City viewers. People like to be surprised by recommendations, not locked into content ghettos.

All that said, there is one thing that bothers me about "personalised" content services. Recommendation engines take a large graph of data and compress it into a smaller set of one-to-many recommendations; compression for recommendation is just some inference over a data set to reduce too much choice to some choice. For personalised recommendation, part of the original graph is the user's past activity. There's some truth in the adage that if you don't know your past, you don't know your future (who am I to disagree with Chuck D), and basing recommendations for future behaviour on observed past behaviour makes some sense.
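To make that concrete, here's a toy sketch in Python of that compression step: fold everyone's past activity into item co-occurrence counts, then suggest whatever co-occurs most with what a given user has already watched. The data and names are all made up for illustration; no real engine is this naive.

```python
from collections import Counter
from itertools import combinations

# Everyone's past activity: user -> set of programmes watched.
# (Invented data, purely for illustration.)
histories = {
    "alice": {"Panorama", "Newsnight", "Top of the Pops"},
    "bob": {"Top of the Pops", "Later... with Jools Holland"},
    "carol": {"Panorama", "Newsnight"},
}

# Compress the big graph of (user, programme) edges into
# pairwise co-occurrence counts between programmes.
cooccur = Counter()
for watched in histories.values():
    for a, b in combinations(sorted(watched), 2):
        cooccur[(a, b)] += 1
        cooccur[(b, a)] += 1

def recommend(user, n=3):
    """Score unseen programmes by how often they co-occur
    with programmes in this user's past activity."""
    seen = histories[user]
    scores = Counter()
    for prog in seen:
        for (a, b), count in cooccur.items():
            if a == prog and b not in seen:
                scores[b] += count
    return scores.most_common(n)

print(recommend("bob"))
# e.g. [('Newsnight', 1), ('Panorama', 1)]
```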

The problems come when some system starts making inferences and you have no idea why. Like the Audi A1's stop/start system, if you can't tell why a system is making some assumption, you can't tweak your behaviour to change that assumption, and the whole thing just becomes frustrating. For recommendation engines the measure of success tends to be what is returned. But for a useful and usable system, why is equally important. And too often why becomes a black box with the intercession of magic. A polite, useful system would explain the assumptions it's making and the logical leaps it's taking. And allow you to help it to help you.
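Continuing the toy sketch above (and reusing its histories and cooccur), surfacing the why is barely any extra work: instead of throwing the evidence away once the scores are computed, carry it along with each suggestion.

```python
def recommend_with_reasons(user, n=3):
    """Like recommend(), but keep the evidence: for every
    suggestion, record which of the user's own programmes it
    was inferred from, so the inference can be inspected (and
    later corrected) rather than presented as magic."""
    seen = histories[user]
    scores = Counter()
    reasons = {}
    for prog in seen:
        for (a, b), count in cooccur.items():
            if a == prog and b not in seen:
                scores[b] += count
                reasons.setdefault(b, set()).add(prog)
    return [
        {"suggest": prog,
         "score": score,
         "because_you_watched": sorted(reasons[prog])}
        for prog, score in scores.most_common(n)
    ]

print(recommend_with_reasons("bob"))
# e.g. [{'suggest': 'Newsnight', 'score': 1,
#        'because_you_watched': ['Top of the Pops']}, ...]
```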

So given a standard e-commerce application I might be recommended products on the basis of products I've bought in the past. Which might work right up until somebody else uses my account to buy things. At which point I start getting recommendations for things I have no interest in. Same deal for a TV recommender based on my past consumption.

All this is fine if I can see and modify any data that's been collected about me; so long as I can tell the system, "no, I didn't buy or watch that so please stop recommending me stuff on the basis that I did." Or, "yes, I did watch that, but my tastes have changed / it was crap."
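In the terms of the toy sketches above, that correction is just a per-user edit applied to the input data before any inference runs (again, the names here are mine, not any real system's API):

```python
# What the user has disowned: "no, I didn't watch that"
# (or "I did, but stop holding it against me").
disowned = {"bob": {"Later... with Jools Holland"}}

def recommend_corrected(user, n=3):
    """Recommend from the user's past activity as *they* say
    it stands, not as the raw logs say it does."""
    off_limits = disowned.get(user, set())
    seen = histories[user] - off_limits     # corrected basis for inference
    hide = histories[user] | off_limits     # never suggest these back
    scores = Counter()
    for prog in seen:
        for (a, b), count in cooccur.items():
            if a == prog and b not in hide:
                scores[b] += count
    return scores.most_common(n)
```

The point being that the disowned set belongs to the user, not the engine: editing it changes the assumptions, and the recommendations follow.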

But too often the data collected about me is hidden from view and when it is exposed I can't change it. But this is probably just me banging on about #userowneddata again...