There was something of a kerfuffle recently when it became public knowledge that travel website Orbitz were recommending different price ranges of hotels based on the user's operating system. Data mining had told them that Mac users typically pay a premium of up to 30% on a night's stay, so they were using that data to improve content recommendation, and in the process their chances of selling products at premium prices.
Dynamic personalisation of content based on data is nothing new, of course. Amazon does it all the time, generating highly customised pages that draw on your purchase history and other data, making it highly unlikely that any two of Amazon's millions of customers will see the same home page when they're logged in (a remarkable thing, if you stop to think about it). Systems that optimise merchandising and pricing decisions by modelling the result of individual discounts or promotional changes are not new either.
But whilst Orbitz may not have been showing different prices for the same goods to different customers, it seems that a growing number of retailers may be doing just that. Using sophisticated software that combines the data they already hold on customers with location and other cookie data stored in people's browsers, retailers can detect price sensitivity and adapt content in real time.
Such 'price-customisation' software uses sophisticated algorithms to identify, for example, when a user may be willing to pay more, or whether they are likely to need an additional pricing incentive before they buy. Specialists quoted in this Economist piece on the subject suggest that allocating discounts using price-customisation software can bring more than double the return of offering the same discounts randomly, and that at least six of the ten biggest US online retailers are now customising prices in some way.
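To make the idea concrete, here is a minimal sketch of how discount targeting of this kind might work. Everything in it is invented for illustration: the signals (repeat visits, abandoned carts, arriving from a voucher site), the weights, and the thresholds are assumptions, not anything the vendors quoted by the Economist have disclosed.

```python
# Illustrative sketch only: a made-up rule for steering discounts towards
# price-sensitive visitors. All signals, weights and thresholds are invented.

def choose_discount(visits_before_buying: int,
                    abandoned_carts: int,
                    arrived_via_voucher_site: bool) -> float:
    """Return a discount rate based on crude price-sensitivity signals."""
    score = 0
    score += min(visits_before_buying, 5)          # repeat browsing without buying
    score += 2 * abandoned_carts                   # baulked at the checkout price
    score += 3 if arrived_via_voucher_site else 0  # actively hunting for deals
    if score >= 6:
        return 0.15   # strong incentive for the most price-sensitive
    if score >= 3:
        return 0.05   # small nudge
    return 0.0        # likely to pay full price; no discount needed

# A deal-hunter who has abandoned a cart gets the big discount;
# a first-time visitor paying no attention to price gets none.
print(choose_discount(5, 2, True))   # price-sensitive visitor
print(choose_discount(0, 0, False))  # price-insensitive visitor
```

The point of the contrast with random discounting is that the zero-discount branch is where the extra return comes from: money isn't spent incentivising customers who would have paid full price anyway.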
Kevin Slavin, in his brilliant TED talk on how algorithms shape our world (highly recommended if you haven't yet seen it), gives an example of the potential downside of pricing algorithms. Many different merchants (two million of them) use Amazon's platform to sell goods, and some are using data-mining software and algorithms similar to those developed to inform trading decisions on the stock market. These algorithms can be set to follow simple rules (e.g. to ensure that prices track, or are always pegged below, those of a competitor) or can act in more dynamic ways, adjusting prices on the fly in response to changing conditions. As algorithms respond to each other and get locked in loops, says Slavin, anomalies can appear, such as the genetics book 'The Making Of A Fly' being inadvertently offered for sale on Amazon for more than $23 million in 2011. Similarly, algorithmic trading was believed to be responsible in 2010 for a short period of huge stock-market volatility which saw the second biggest point swing in the history of the Dow Jones Industrial Average happen in the space of twenty minutes.
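The feedback loop behind the $23 million fly book is simple enough to sketch. The rules below are hypothetical stand-ins for the two sellers' repricers: one always undercuts the competitor slightly, the other prices at a fixed markup over the competitor (perhaps counting on better seller ratings to justify it). Neither rule is absurd on its own; it is the loop between them that sends prices into the millions.

```python
# Illustrative sketch (not Amazon's actual systems): two hypothetical
# repricing rules reacting only to each other, never to a sane ceiling.

def undercut(competitor_price: float) -> float:
    """Seller A: always price just below the competitor."""
    return competitor_price * 0.998

def premium(competitor_price: float) -> float:
    """Seller B: always price at a fixed markup over the competitor."""
    return competitor_price * 1.27

price_a, price_b = 30.00, 35.00  # plausible starting prices for a textbook

for cycle in range(50):          # each iteration = one repricing cycle
    price_a = undercut(price_b)  # A reacts to B...
    price_b = premium(price_a)   # ...then B reacts to A

print(f"Seller A after 50 cycles: ${price_a:,.2f}")
print(f"Seller B after 50 cycles: ${price_b:,.2f}")
```

Each full cycle multiplies both prices by 0.998 × 1.27 ≈ 1.267, so growth is exponential: within fifty repricing cycles the book costs millions. A sanity check as basic as "never exceed ten times the list price" would break the loop, which is Slavin's point: the anomaly comes from algorithms answering only to each other.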
Slavin says in the intro to his talk that we need to transition our thinking about contemporary maths from something we derive and extract from the world to something that actually starts to shape it. Algorithmic curation of content is inevitable. Along with professional and social curation (overlaid with a layer of self-curation), I believe it will be one of the fundamental determinants of how we discover and consume content in the future. Perhaps we need to start getting used to the fact that algorithmic customisation of pricing is just as inevitable.