Amazon AWS releases CloudFront – here’s how it works

Amazon released CloudFront to public beta today. It’s a simple way to get your publicly available content out to the edge of the network (closest to the recipient downloading it).

From the AWS announcement:

Amazon CloudFront is a web service for content delivery. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no commitments.

Here’s how it works:

  1. Upload your content files to S3.
  2. Call the CloudFront API, specifying the S3 bucket.
  3. Use the distribution domain name CloudFront hands back, along with your existing S3 file names, in your website links.
  4. When a customer clicks a link, the request is automatically routed to the edge location closest to them.
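For the curious, here’s a minimal sketch of those steps using today’s boto3 SDK (the API available at launch looked different); the bucket name, file, and cache settings are hypothetical placeholders:

    # Sketch of the workflow above with boto3; names and settings are placeholders.
    import boto3

    s3 = boto3.client("s3")
    cloudfront = boto3.client("cloudfront")

    bucket = "example-media-bucket"
    s3.upload_file("logo.png", bucket, "logo.png")        # step 1: content into S3

    # Step 2: ask CloudFront to serve that bucket from its edge network.
    resp = cloudfront.create_distribution(DistributionConfig={
        "CallerReference": "cloudfront-launch-demo",      # any unique string
        "Comment": "Edge delivery for the S3 bucket",
        "Enabled": True,
        "Origins": {"Quantity": 1, "Items": [{
            "Id": "s3-origin",
            "DomainName": f"{bucket}.s3.amazonaws.com",
            "S3OriginConfig": {"OriginAccessIdentity": ""},
        }]},
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "allow-all",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    })

    # Step 3: link to the distribution's domain instead of the raw bucket URL.
    print(resp["Distribution"]["DomainName"])             # e.g. dXXXXXXXX.cloudfront.net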

UPDATE:
So, uhhh, what’s special here? From AWS’s description, CloudFront amounts to setting a flag on your S3 bucket that says whether the content gets distributed around the edge network or stays in a single location.

This should be an addition to the S3 service, not a separate service. The added step of having to tell CloudFront you want edge network coverage is an unnecessary chore. It would be better as an attribute of the S3 bucket. That would make S3 a stronger brand, keep things clearer for those navigating AWS, and simplify the process of pushing content (one less step).


Chrome after a day of use

Been using Google’s Chrome browser for a day, and so far it’s a great experience.

No. Sorry. That’s an understatement. It’s revolutionary. As the comic describes (yes, Google released a comic to introduce Chrome), this browser takes the web to impossible places. It makes the web more like an operating system: each site (tab) gets its own process and memory allocation, JavaScript runs orders of magnitude faster via a virtual machine, and it integrates more cleanly with Gears.

I’ve liked the browser enough to find Windows more useful than OS X this morning. A strange feeling indeed.

I’d mentioned in an email yesterday that the question was whether Chrome would quickly grab users away from Internet Explorer and other browsers or chip away at them incrementally. My expectation is that it will still be chipping, but much faster than anyone expects.

Just as the Google search tool grew wildly popular purely on speed and relevance, the Chrome browser will gain huge momentum because of speed and relevance. Since there are still massive numbers of IE6 installations out there (proof that not everyone rushes out to upgrade), a swing won’t happen overnight, but Chrome will gain ground more quickly than Firefox or Safari did (Opera isn’t mentioned because Chrome pretty much destroys the reason for Opera’s existence – speed).

A couple of questions present themselves… Is this where we really see web 2.0 take hold? I think so. And how does this affect Google in terms of monopolizing the web? They now own search and could quickly dominate the browser.

Easy way for spammers to follow more than 2,000 on Twitter (and get better results)

The 2,000-follower limit, it would seem, was put in place to prevent mass following and spam on Twitter. This was pretty frustrating for me since I fell into their beyond-the-limit zone (I followed over 6,000 people because I loved the information, but couldn’t add any more).

I’m not complaining too much, as I’m enjoying the more traditional use of my Twitter account for now, but this is a ridiculously short-sighted fix.

I haven’t seen much attention drawn to the following facts (pun wasn’t intended):

  1. People are more likely to reciprocate a follow request from someone with a low following/friend count.
  2. There isn’t a legitimate way to prevent someone from having multiple Twitter accounts (accounts are tied to email addresses).
  3. The Twitter API limits are based on account, not where the call is coming from (one server can make many requests on behalf of other accounts).

From the above simple observations, one can see the easy way to follow an unlimited number of people.

  1. Create a large number of accounts.
  2. Follow a smaller number of people with each account (you’ll have better reciprocation).
  3. Follow a lot of people (the API limitations will apply per account, so your follows-per-hour will actually be quite large).
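Rough math on why this works; the per-account limit below is a hypothetical placeholder, not Twitter’s published number:

    # Back-of-the-envelope math for the loophole above. The per-account limit is
    # an assumed placeholder, not Twitter's actual figure.
    follows_per_hour_per_account = 60          # assumed per-account API ceiling
    accounts = 50                              # accounts controlled by one spammer
    total_follows_per_hour = accounts * follows_per_hour_per_account
    print(total_follows_per_hour)              # 3,000 follows per hour from one server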

The people running Twitter are great. They’re really trying to do the right thing. So maybe I’m completely wrong to anticipate the above, but this limit looks like a Facebook move. Facebook’s 5,000-friend limit works for Facebook. Facebook’s API is advanced, robust, and complicated enough not to get terribly nailed by multi-account mass spam following.

Additionally, the information load on Facebook is different. You get a clear picture of who the person friending you is, and you’re given enough information to make a decision. On Twitter, this isn’t the case.

So what’s going to happen?

  1. Spammers are already adapting to the limitation, as described above.
  2. Tweeple will stop trusting low follow-count users (do you trust an eBay user without feedback?).
  3. Twitter’s servers will still be inundated and over capacity.

I blame it on Scoble

How to build a really successful web 2.0 service on top of another service and screw it all up

Twicecream – a fake service to demonstrate a point about single sign-on…

In web 2.0 there is a determination to screw up potentially great services. It’s my #1 pet peeve with software development these days. Here’s a fictitious example of a service you might create…

You’ve built a service that automatically Twitters your geo-position and the name of an ice cream parlor when you’re in front of it. Your phone buzzes when an ice cream parlor is detected and begins sending photos to SnapTweet and TwitPic, including Zagats ratings and commentary. Other patrons respond back and generate conversations. This is your social network: Twicecream – a social network for twittering ice cream enthusiasts.

In front of Ben & Jerry’s on the Wharf, Zagats 4-stars, pics: http://twicecream.com/abc123

Congratulations! You just failed.

You didn’t fail by creating a service few would use. You failed because you didn’t use the authentication mechanism your patrons preferred. You built a barrier around your garden by requiring an unnecessary account creation. Don’t do this; it’s arrogant and inefficient.

Your patrons have Twitter accounts. Twitter has an API. Your service should have asked the patron to log in with their Twitter credentials.
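At the time that meant handing over a username and password; today the same idea is “Sign in with Twitter” via OAuth. Here’s a minimal sketch using the tweepy library; the consumer key/secret and callback URL are hypothetical placeholders:

    # A minimal "sign in with Twitter" sketch using tweepy and OAuth (which
    # replaced the password-based login available when this was written).
    # The consumer key/secret and callback URL are hypothetical placeholders.
    import tweepy

    auth = tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET",
        callback="https://twicecream.example/callback",
    )

    # 1. Send the patron to Twitter to approve the app -- no new account created.
    print("Visit:", auth.get_authorization_url())

    # 2. Twitter redirects back with an oauth_verifier; trade it for tokens.
    verifier = input("oauth_verifier from the callback: ")
    access_token, access_token_secret = auth.get_access_token(verifier)

    # 3. Act on the patron's behalf using their existing Twitter identity.
    api = tweepy.API(tweepy.OAuth1UserHandler(
        "CONSUMER_KEY", "CONSUMER_SECRET", access_token, access_token_secret))
    print("Signed in as @" + api.verify_credentials().screen_name)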

This isn’t just for social networking. This goes for all web services. SaaS solutions that require secondary account creations are a bad idea. Single sign-on, whenever possible, should be used.

The whole idea is to simplify access to what the customer needs. If you’re requiring unnecessary account creations, you’re screwing it all up.

Crossing the streams – large numbers of Twitter updates

Chris Bilson (@cbilson) offered a good description of my post about Twitter’s scaling/architecture challenge:

“Kevin Rose and Leo Laporte tweet at the same time = crossing the streams”

I dunno if Proton Packs have exponential load challenges, but the end result for a server can feel similar. In my post I pointed out that Twitter has to determine delivery options and potentially deliver between 100 million and 1 billion updates per day.

But that’s in a day. 1 billion messages in a day are a piece of cake when spread over 24 hours. What if 1 billion messages have to be delivered in an hour? Or all at once?
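Just the division, to show how different those two cases are:

    # The same billion messages, spread over a day versus crammed into an hour.
    messages = 1_000_000_000
    print(messages / (24 * 60 * 60))   # ~11,574 deliveries per second over a day
    print(messages / (60 * 60))        # ~277,778 deliveries per second in one hour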

Take my list of the top-10 Twitter accounts and imagine them all at TED, WWDC, Google I/O, or your local unconference. If each of these ten users sends an update at around the same time, that’s 321,928 messages that need delivery (the total number of followers across the top-10 accounts). This is an awesome amount of message delivery. If those ten users live-blog or get conversational and send ten updates each in an hour… 3,219,280 (again, that’s from only 10 users).
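The arithmetic behind those figures, using the follower total from the top-10 list:

    # Fan-out math behind the numbers above; the follower total is the post's
    # aggregate figure for the top-10 accounts.
    total_followers_top10 = 321_928
    updates_per_user = 10
    print(total_followers_top10)                      # 321,928 deliveries for one round of tweets
    print(total_followers_top10 * updates_per_user)   # 3,219,280 deliveries in that hour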

I don’t illustrate this to say it’s these power users’ fault. Absolutely the opposite. They’re generating amazing amounts of traffic, which is a wonderful thing; the algorithms are the problem.

It’s possible to optimize algorithms and modify systems for maximum performance. I bring up Twitter’s challenges because I’m wondering if this is a challenge beyond present day computing.

To open some minds, here’s an impossibility often overlooked: Huge numbers in a deck of cards (just to show impossibilities can stem from small initial numbers).
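A one-liner makes the point; 52 cards is a small starting number, but the count of possible orderings is not:

    # Number of possible orderings of a standard 52-card deck.
    import math
    print(math.factorial(52))   # about 8.07 x 10^67 distinct shuffles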