
Archive Restoration

Scoble, the Noble Bottom Feeder's Case for Noise

Original June 2008 essay arguing that early adopters have to be bottom feeders of information, processing noise rather than filtering it. Republished with a 2026 retrospective on why the bottom-feeder thesis lost ground to algorithmic curation and AI summaries.

The essay below is a 2026 editorial reconstruction of a post originally published 3 June 2008 on blog.kingsley2.com, defending Robert Scoble’s preference for “noise” over “news” in his information diet. The Kingsley 2.0 archive preserves the substance of the original and pairs it with a section on what changed for the bottom-feeder thesis after FriendFeed, Google Reader, and most of the other chronological feeds the essay assumed had shut down.

The 2008 setup

The post was a response to a single sentence Robert Scoble had written in a FriendFeed conversation that May: “interesting, but I want the noise, not the news.” The author agreed, and the essay was the explanation of why.

The thesis was straightforward. If you only read curated news, you get the same information as everyone else, at the same time, and you cannot move faster than the average. Big-bang news flattens the field. Trickle-up news, the early signal that has not yet been amplified into a headline, is the only differentiator. Therefore early adopters have to be bottom feeders, in the literal sea-creature sense: they have to sift through far more material than they need, accept that most of it is sand, and rely on the body’s gill-slit-equivalent to push the sand back out.

This framing landed because 2008 was the last moment when the public-web noise was small enough that a determined individual could process most of it. FriendFeed was open. Twitter was open. Google Reader was fast, free, and untouched by algorithmic ranking. Technorati indexed blog posts within minutes. The tools that the essay listed (Yahoo! Pipes, twhirl, Google Reader keyboard shortcuts) were available to anyone willing to set them up.

The author’s own contribution to the noise-processing toolkit, the Social Media Firehose, was described in the essay as a Yahoo! Pipes mashup that listened for mentions of salesforce.com in real time. The phrase “social media firehose” had not yet been claimed by enterprise vendors and would not be for several years.
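The Firehose itself is gone with Pipes, but its core behaviour — polling a feed and keeping only the items that mention a term — is simple enough to sketch. This is an illustrative stdlib approximation, not the original Pipe: the function name is invented, and the real mashup fanned in many sources where this handles a single RSS document.

```python
import xml.etree.ElementTree as ET

def firehose_filter(rss_xml, keyword):
    """Return (title, link) pairs for feed items that mention keyword.

    A rough, single-feed approximation of what the Pipes mashup did:
    scan each <item>'s title and description for a case-insensitive
    match and keep only the hits.
    """
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        desc = item.findtext("description", default="")
        link = item.findtext("link", default="")
        # Match against title + description, ignoring case
        if keyword.lower() in (title + " " + desc).lower():
            hits.append((title, link))
    return hits
```

Run against a feed polled on a timer, with the keyword set to a brand or product name, this reproduces the basic listening pattern; everything Pipes added on top (deduplication, merging dozens of sources, output as a new feed) is omitted here.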

The tools the essay named, and what happened to them

The 2008 essay name-checks a specific stack. Tracking the fate of each tool is the easiest way to see why the bottom-feeder strategy stopped scaling.

  • FriendFeed: acquired by Facebook in August 2009 for $47.5M, kept on life support, shut down in April 2015. The conversation pattern that the essay assumed (public, real-time, multi-source) lost its primary venue at that point.
  • Yahoo! Pipes: shut down in September 2015. The Firehose itself died with Pipes; no commercial alternative replicated its low-friction, visual-programming model.
  • Google Reader: shut down in July 2013. The decision was widely understood as the moment social platforms officially won over RSS as the way most people consumed feeds.
  • twhirl: an Adobe AIR Twitter client that the author used to keep ambient Twitter alerts on the edge of the desktop. Adobe killed AIR support in 2019; the client had been effectively dead since 2010.
  • Technorati: gradually pivoted from blog search to advertising, shut down its search index in 2014 and folded the advertising business by 2018.

The picture is unambiguous. Every piece of infrastructure the 2008 essay assumed for bottom-feeding has been retired. The closest 2026 equivalents (Reeder + Mastodon, n8n + open APIs) work for narrow domains but not at the bandwidth the essay implied.

Why “process noise” beat “filter noise” in 2008

The most-quoted phrase in the original is “find ways to process the noise, not just filter it.” The verbs were chosen deliberately. Filtering implies rejection: you set a rule and the rule discards the rest. Processing implies engagement: you let the material pass through, and you change yourself in the process.

In 2008 this was an arguable preference. Filtering was supplied by the platforms (Twitter lists, Google Reader folders, FriendFeed friends-of-friends). Processing was supplied by the user, and the cost was real: time, attention, and the cognitive overhead of holding a wide context in working memory. The argument for processing was that you got to be early, that you developed a personal taxonomy that other people did not have, and that the taxonomy compounded.

By 2010-2012 filtering started winning. Twitter’s algorithmic timeline arrived. FriendFeed shut down. Facebook’s EdgeRank became the model that every consumer social platform adopted. The user no longer chose what to process; the system chose what to filter.

By 2018 even the people who had defended processing were spending most of their time inside algorithmic feeds. The reason was simple: the noise had grown faster than personal processing capacity. A 2008 niche generated a hundred public posts a day. The 2018 equivalent generated tens of thousands.

By 2026 the question is no longer “filter or process” but “which agent processes for you”. The bottom-feeder is now an LLM with a recurring task: read this stream, summarise the anomalies, alert me when one of them crosses a threshold. The 2008 essay’s underlying claim (that the work of processing is irreducible) still holds; the agent doing the processing is no longer the human.

Continuous partial attention, then and now

The essay endorses Linda Stone’s term “continuous partial attention”. The author describes a deliberately chaotic desktop with semi-transparent twhirl windows in the peripheral vision, designed so that signals could bubble up without interrupting deep work.

In 2026 the same idea reappears in two specific places:

  • Ambient AI dashboards: tools like Granola, Cleo and Notion’s daily digest live in a desktop strip, summarising upstream activity without forcing attention. The form factor is the descendant of twhirl, though the content is now generated rather than raw.
  • Voice-first agents: Siri, Alexa, ChatGPT voice mode and the various Meta Ray-Ban surfaces have made ambient awareness audible instead of visual. The continuous partial attention now applies to a different sense.

Both descend from the 2008 desktop the essay described. Both omit the part that mattered most to the author: the raw stream. The 2008 bottom-feeder watched the noise; the 2026 ambient AI watches the noise so that you don’t have to.

What the essay got right that 2026 forgot

The 2008 essay’s most durable claim is this: the people who are paid to know about a thing arrive after the people who chose to know about it. The corollary is that organisational structure (paid-to-know roles, dashboards, reports) tends to lag the informal information practices of motivated individuals.

In 2026 the same gap exists, but the informal practitioner is now competing against generative summarisation rather than against a sluggish news cycle. The difference is qualitative. A motivated individual in 2008 could move faster than the institutional news. A motivated individual in 2026 has to move faster than an agent that costs $20 a month and never sleeps.

The 2008 essay does not solve that problem; it could not see it. But the discipline it described (process don’t filter, build your own taxonomy, accept the cognitive cost) is still the right answer for the niches where speed matters and where the agents have not yet been trained on the exact corner of the world you care about. The agents will come for those niches eventually. The window where bottom-feeding still pays off keeps shrinking, but it is not yet zero.

For the companion piece on what the author was building at the time, see Walking the Line Between Listening and Stalking, the 2008 essay on Social Media Firehose ethics. Browse more entries via the social media topic and the blogging topic, or step back to the full archive index.

FAQ

Who is Robert Scoble?
An early tech blogger and Microsoft evangelist (2003-2006) who later worked at PodTech, Fast Company TV, Rackspace, and independently. He was one of the most widely-followed tech voices on Twitter and FriendFeed in 2008 and a frequent subject of debate about early-adopter behaviour.
What was FriendFeed?
A real-time aggregator launched in 2007 by ex-Googlers Bret Taylor, Paul Buchheit and Jim Norris. It let users follow each other's activity across Twitter, Flickr, Delicious, Google Reader and dozens of other services in one stream. Facebook acquired the team in 2009 and the service shut down in April 2015.
Does the bottom-feeder strategy still work in 2026?
Only in narrow niches that retain chronological, unfiltered feeds: niche Mastodon instances, specialist Discord servers, focused RSS readers. The wide-radius version the essay described is not available because the platforms that enabled it (FriendFeed, Twitter API, Google Reader, Technorati) are all gone or closed.
What replaced continuous partial attention?
AI summaries (Perplexity, Arc Browser's Max, Notion AI feed digesters) and algorithmic feeds that prioritise engagement over recency. The trade-off is that you no longer feel the noise; the system feels it and tells you the conclusion.