The Algorithm + the Crowd are Not Enough

In the last decade, the online world has been ruled by twin forces: The Crowd and The Algorithm. The collective “users” of the Internet (The Crowd) create, click, and rate, while mathematical equations add scalability and findability to these overwhelming quantities of data (The Algorithm). Like the moon over the ocean, the pull of these two forces helps create the tides of popularity (and obscurity) on the Internet. Information is more accessible, useful, and egalitarian than ever before.

But lately, at least to me, the weaknesses of this crowdsourced + algorithmic system are showing, and the next revolution feels inevitable.

[Image credit: Bolster Bolts on Flickr]

Let’s start with some examples of the crowdsourced, algorithmic advances:

Netflix – we watch, rate and review. Netflix recommends (they even crowdsourced their algorithm).

Amazon – we browse, buy, rate and review. Amazon datamines until they know exactly what to recommend (apparently a big part of their business).

Expedia – we fly, rent, sleep and vacation. Expedia’s machines analyze, set prices and determine how to maximize profit.

Google – we create content, cite the content of others and share socially. Google crawls, sorts, serves and ranks (based largely on a human-created link graph).

Facebook – we friend, update and like. Facebook builds systems to perfect ad + content targeting back to us.

Reddit/Digg/StumbleUpon – we submit and vote on content. They order the submissions and determine how long and to whom they’re visible.

Yelp – we dine, visit and shop then review and rate. Yelp classifies, ranks and recommends.

We’re not solely beholden to these algorithms – we could dig deeper, search further and collect more input before making a decision (and many times we do). But, with the machines making assumptions about what we want, it’s often easier to embrace that default decision than to choose an alternate path.

Compare this to alternative, pre-Internet methodologies:

The Video Store – The “staff picks,” “critic’s choice,” and “award-nominated” sections would help us choose what to rent.

Consumer Reports Magazine – Experts analyzing and testing every aspect of a product would rate and recommend their choices for the best. Cook’s Illustrated’s “America’s Test Kitchen” is another good example.

Travel Agents – Travel used to be booked through individuals who had unique access to databases of information about transportation and lodging.

Zagat Survey – These crimson-red guidebooks compiled restaurant reviews and ratings from surveys sent in by anonymous raters in dozens of cities.

Personal Referrals – When no reference resource was available, friends, family, co-workers, and service personnel (concierges, 411, etc.) could provide suggestions; though biased, their high relevance and value make them stalwarts even today.

Let there be no confusion – my opinion is that, most of the time, these predecessors don’t hold a candle to our modern, crowdsourced+algorithmic solutions in quality, reliability, or usefulness. Their reach was often too limited, occasionally corrupt, and sometimes just plain wrong.

But these predecessors do have advantages – most notably that everyone can understand how the recommendation was made. Compare that to the mystery of our algorithmic+crowdsourced services:

Why does a page rank first in Google for a particular query? Why does one link stay on Reddit’s homepage for hours while another, with a similar number of votes, falls off in just a few minutes? Why does Facebook show me ads for customer service jobs at Comcast? Why did Amazon recommend buying whole milk with this Badonkadonk Land Cruiser?

If we don’t understand why these suggestions were made, couldn’t that bias us against trusting future recommendations from these services?

Fred Wilson recently made a compelling case that we shouldn’t invest in something we don’t understand:

…sectors of the venture capital market are filling up with investors chasing returns. And some of them do not understand what they are investing in. I got a call a few weeks ago from an individual investor who wanted to invest in one of our portfolio companies. He asked about the company and from his questions it was pretty clear he did not understand the business very well. He went ahead and made an offer to invest. That scared me.

I’ve been visited recently by a number of foreign investment vehicles, many of whom are investing billions of dollars of sovereign wealth. They all want to get into our funds and our deals. When I talk to them about why, they can’t really articulate a cogent argument about the economic potential of the social web. But they see the returns and want some of them too. That scares me.

I’d argue that some people will find it equally hard, and perhaps similarly foolish, to trust suggestion/ranking services whose algorithms they can’t understand. These same people might turn to recommendation sources they can easily grasp and results they can logically disassemble.

My point isn’t that Google, Netflix, Amazon, Yelp or any of the others are doomed. But I do think there’s an opportunity brewing for entrepreneurs, websites and companies to add editorial components to the algo-crowd paradigm.

Plenty of startups are already investing in this space:

Quora / FormSpring / StackExchange – compared to answers from Google or even Wikipedia, where the ordering of results or the source of the answer is unknown and unaccountable, these modern Q+A sites have put the power in the hands of crowds, algorithms, and experts.

Techmeme / Memeorandum / Mediagazer – a crowd of bloggers produces content and cites each other’s work. The algorithm compiles and sorts while curators (Gabe Rivera’s editorial team) ensure quality and timeliness.

Alltop – Guy Kawasaki’s curated collection of feeds aggregates and sorts content by sector. Currently, the algorithmic portion is missing, but I suspect it won’t be long before it’s added.

Oyster / Raveable – Travel review sites are notoriously problematic: either too noisy with user comments and reviews of specious quality, or bereft of authenticity thanks to financially motivated affiliates. Oyster solves this by pounding the pavement – sending editorial reviewers to hotels to write their own reviews – then leveraging user data + algorithmic rankings/organization to help users find the ideal choice. Raveable combines light editorial curation with a powerful datamining and sorting algorithm to ease the hotel-finding process.

Groupon / LivingSocial / Gilt Groupe – Coupon and deal sites have been around the web for a decade, but these three (and a gaggle of similar ones) revolutionized the industry and are becoming some of the biggest brands online thanks to social features (crowdsourcing their marketing and deals), algorithmic filtering for personalization/localization, and editorial curation (so the deals they show are actually worth clicking).

TheSixtyOne / Pandora / Spotify – Music recommendations have been getting better and better, and the latest crop of services are all moving toward a mashup of algorithmic, crowdsourced, and editorial sources to bring symphonic rapture to users.

I think we’re going to see more of this in the future, possibly even in some of the sites/services I first referenced (Yelp, Amazon, Netflix, et al.). It’s my belief that the algorithm and the crowd can be made even stronger with the addition of a third leg – the opinionated, benevolent editor(s).