Chris Bilson (@cbilson) had a good take on my post about Twitter’s scaling/architecture challenge:
“Kevin Rose and Leo Laporte tweet at the same time = crossing the streams”
I dunno if Proton Packs have exponential load challenges, but the end result for a server can feel similar. In my post I pointed out that Twitter has to determine delivery options and potentially deliver between 100 million and 1 billion updates per day.
But that’s in a day. A billion messages in a day is a piece of cake when spread evenly over 24 hours. What if those billion messages have to be delivered in an hour? Or all at once?
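The back-of-envelope arithmetic makes the difference stark. A quick sketch (the billion-message figure is from the scenario above; the even-spread assumption is mine):

```python
# Rough throughput arithmetic: the same 1 billion deliveries,
# spread over different time windows (assuming a perfectly even spread).
messages = 1_000_000_000

per_second_over_a_day = messages / (24 * 60 * 60)
per_second_over_an_hour = messages / (60 * 60)

print(f"{per_second_over_a_day:,.0f} messages/second over a day")
print(f"{per_second_over_an_hour:,.0f} messages/second over an hour")
```

Squeezing the same volume into one hour multiplies the required throughput by 24, from roughly 11,600 to roughly 278,000 messages per second.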
Take my list of the top-10 Twitter accounts and imagine them all at TED, WWDC, Google I/O, or your local unconference. If each of these ten users sends an update around the same time, that’s 321,928 messages needing delivery (the total number of followers across the top-10 accounts). This is an awesome amount of message delivery. If those ten users live-blog or get conversational and send ten updates each in an hour… 3,219,280 messages (again, from only 10 users).
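The fan-out math above is simple multiplication, but worth spelling out (321,928 is the combined follower count from my top-10 list; the rest follows from each update being delivered to every follower):

```python
# Fan-out delivery math for the top-10 scenario:
# every update must be delivered to each of the sender's followers.
total_followers = 321_928  # combined followers of the top-10 accounts

one_update_each = total_followers        # each of the 10 users tweets once
ten_updates_each = total_followers * 10  # live-blogging: 10 tweets each

print(one_update_each)   # deliveries for a single round of updates
print(ten_updates_each)  # deliveries for an hour of live-blogging
```

Note the delivery count grows with followers times updates, so a handful of heavily-followed accounts posting in a burst dominates the load.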
I don’t illustrate this to state it’s these power users’ fault. Absolutely the opposite. They’re generating amazing amounts of traffic, which is a wonderful thing; the algorithms are the problem.
It’s possible to optimize algorithms and modify systems for maximum performance. I bring up Twitter’s challenges because I’m wondering whether this is a challenge beyond present-day computing.
To open some minds, here’s an often-overlooked impossibility: the huge numbers hiding in a deck of cards (just to show that impossibilities can stem from small initial numbers).
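Assuming the deck-of-cards teaser refers to the classic fact that a standard 52-card deck has 52! possible orderings, a one-liner shows just how fast small numbers explode:

```python
import math

# The number of possible orderings of a standard 52-card deck: 52!
# (Assuming this is the "huge numbers in a deck of cards" example.)
orderings = math.factorial(52)

print(orderings)            # roughly 8 x 10^67
print(len(str(orderings)))  # a 68-digit number
```

Fifty-two objects, and the count of arrangements already dwarfs the number of seconds since the Big Bang. If a deck can do that, a hundred-million-user message graph can certainly hide combinatorial surprises.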