Singpolyma

Archive for the "Google" Category

Why the SGNodeMapper is a bad idea

Don’t get me wrong, I love Google’s Social Graph API; it’s a great way to speed up the discovery of XFN data by using Google’s cache. What does not make sense to me, however, is the ‘NodeMapper’ concept built into the API. It maps multiple URLs from a site onto, not a single URL, but an SGAPI-only URI scheme. It maps using URL patterns known for each site, so it doesn’t even work on the web in general. When it does work, what is it useful for? URL consolidation. The problem is that the only things you can do with a nodemapped URI are (1) use it as a unique key or (2) turn it back into a URL to get data.
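
To make the objection concrete, here is a minimal sketch of what the mapping amounts to. The sgn:// form follows the examples in the SGAPI documentation, but the site pattern below is a hypothetical stand-in, not one of Google’s actual rules:

```typescript
// Illustrative sketch only: the real NodeMapper ships inside the
// Social Graph API; the pattern below is a hypothetical stand-in.
const patterns: Array<[RegExp, (m: RegExpMatchArray) => string]> = [
  [
    /^https?:\/\/twitter\.com\/(\w+)(?:\/.*)?$/,
    (m) => `sgn://twitter.com/?ident=${m[1]}`,
  ],
];

// Collapse any known URL onto the proprietary sgn:// key.
function toNodeKey(url: string): string | null {
  for (const [re, build] of patterns) {
    const m = url.match(re);
    if (m) return build(m);
  }
  return null; // unknown site: the mapper simply gives up
}

console.log(toNodeKey("http://twitter.com/example"));         // sgn://twitter.com/?ident=example
console.log(toNodeKey("http://twitter.com/example/friends")); // same key
```

Both inputs collapse onto the same key, yet that key cannot be dereferenced anywhere on the open web; you have to go back through the API to turn it into a URL again.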

I don’t get it, guys. How is this better? Is there even a reason to consolidate things like FOAF files back to the main page, since most people will enter the main page itself as input anyway? Even if it were useful, shouldn’t it map to the actual main page rather than to some proprietary URI scheme?

Thoughts?  Anyone see a use for this that I’m missing?  Or is this misfeature just adding a layer of data that someone might use and that we’ll have to hack around again later?

FBgmail v2 For GMail v2

Most everyone knows by now that Google released a major update to GMail some time ago. This broke most Greasemonkey scripts for GMail, including my FBgmail v1. Google, however, in their brilliance, released an “API” specifically for Greasemonkey on the new GMail! I have harnessed this to create a version of FBgmail for the new GMail.
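
For the curious, the hook looks roughly like the sketch below. The gmonkey entry point is the one Google documented for the new GMail; I am quoting the method names from memory, so treat them as approximate rather than gospel:

```typescript
// Sketch of the hook FBgmail v2 uses, assuming the gmonkey entry
// point Google documented for Greasemonkey scripts on the new GMail.
// Method names are from memory; treat them as approximate.
declare const gmonkey: {
  load(version: string, callback: (gmail: any) => void): void;
};

gmonkey.load("1.0", function (gmail) {
  // The API hands scripts a handle on GMail's chrome rather than
  // forcing them to scrape a DOM that changes with every release.
  const navPane: HTMLElement = gmail.getNavPaneElement();
  const box = document.createElement("div");
  box.textContent = "FBgmail renders Facebook data here";
  navPane.appendChild(box);
});
```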

JSON from Google

JSON (and particularly JSONP) is one of the most useful data formats for hackers. For quite some time we have had to do all sorts of hackish things to transform the data available from Google services into JSON(P). No longer.

Blogger, Google Calendar, and Google Base now all support JSONP. This was announced on Blogger Buzz and is perhaps one of the greatest advances in hackability in (at least recent) Google history.
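
As a sketch of what this enables: a plain script tag is now all it takes to pull Blogger data cross-domain. The alt=json-in-script and callback parameters are the standard GData convention; the blog address below is a placeholder, not a real endpoint:

```typescript
// JSONP by script-tag injection: Blogger wraps its JSON in the
// callback we name. "example.blogspot.com" is a placeholder blog.
function handlePosts(data: any): void {
  for (const entry of data.feed.entry ?? []) {
    console.log(entry.title.$t); // GData puts text content under $t
  }
}
(window as any).handlePosts = handlePosts;

const script = document.createElement("script");
script.src =
  "http://example.blogspot.com/feeds/posts/default" +
  "?alt=json-in-script&callback=handlePosts";
document.body.appendChild(script);
```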

On top of this, SearchMash (a Google subsidiary) now offers JSON feeds of Google’s Web, Images, Blog, and Video searches. For JSONP a proxy service is required, but this still allows for getting Google’s data out where we can hack with it.
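
The proxy has almost nothing to do, since all it must add is the callback wrapper that turns plain JSON into JSONP. A minimal sketch, with a placeholder upstream URL rather than SearchMash’s real endpoint:

```typescript
// A minimal JSONP proxy: fetch plain JSON from the upstream search
// service and wrap it in the caller's callback. The upstream URL is
// a placeholder, not SearchMash's real endpoint.
import * as http from "http";

http.createServer(async (req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  // Whitelist callback characters so callers cannot inject script.
  const cb = (url.searchParams.get("callback") ?? "cb")
    .replace(/[^\w.]/g, "");
  const upstream = await fetch("http://upstream.example/search.json");
  const json = await upstream.text();
  res.writeHead(200, { "Content-Type": "text/javascript" });
  res.end(`${cb}(${json});`);
}).listen(8080);
```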

One thing that this will mean very soon is an upgrade to peek-a-boo for BETA that makes use of this and will definitely be faster.

Don’t JUST Eat Your Dogfood

Blogger (and Google in general) has been getting some bad press lately. They decided to make a counterpost on Buzz, which was not a bad idea. Some people are cynical about the entire post, but it has good and bad points.

They’re right: Google is only a company and its employees are only human. Mistakes are made, and they’re very good at fixing them (at least the big ones). They do eat their own dogfood, and that is why bugs like this can be spotted faster.

It’s also what causes the press.

If a single Blogger blog malfunctions, no one cares. Unless that single blog is the Google blog. They can’t expect bloggers not to notice this. This isn’t the old-style press where we wait until it’s a proven, serious, do-or-die issue before printing. Bloggers write while the news is hot. If in two hours it’s all fixed, that’s not the point.

Are our expectations of Google and Blogger too high? Undoubtedly. We all know that as consumers, especially geeky consumers, we demand more than is possible. That’s just the way things are. Users of my hacks expect far more than I can deliver, and I expect far more than Blogger can deliver. The mark of greatness is handling this well.

Their post makes some good points, and I won’t disagree with them completely. We can give them some slack. However, they try to get out of responsibility a little too much. EAT your dogfood, FIX the service, and BE RESPONSIBLE for what happened, even if it’s no big deal. They’ve done their PR bit, but actions speak louder than words. It is far more powerful to say ‘we did’ than ‘we do’. So while I sympathise with them, I must side with the great penguin:

Don’t give me excuses – give me results!

Google Calendar Feed Cleaner

Google Calendar‘s feeds are not the most useful in the world. They are sorted by edit date instead of by when the events happen, and they’re only available in Atom.

No longer. Using the extended event information Google tacks into the calendar feed, it is possible to create a clean, sorted feed with the start date as the pubDate. Just go to the Google Calendar Feed Cleaner and enter the URL of your calendar’s feed, select the format you want (XHTML for testing, RSS 2.0, or JSON(P)), and enter the maximum number of items for the results (default 5). You will get a nice, clean feed of your Google Calendar data.
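
For anyone wondering what that extended information is: each Atom entry in a calendar feed carries a gd:when element whose startTime attribute holds the event’s start, so the cleaner only has to re-sort on that value and emit it as the pubDate. A rough sketch of the core transform, assuming a browser DOMParser and a placeholder feed URL:

```typescript
// Sketch of the core transform: pull gd:when startTime out of each
// Atom entry, sort on it, and emit it as the RSS pubDate.
const GD_NS = "http://schemas.google.com/g/2005"; // GData namespace

async function cleanFeed(feedUrl: string, max = 5): Promise<string> {
  const xml = await (await fetch(feedUrl)).text();
  const doc = new DOMParser().parseFromString(xml, "application/xml");
  const items = Array.from(doc.getElementsByTagName("entry"))
    .map((e) => ({
      title: e.getElementsByTagName("title")[0]?.textContent ?? "",
      start: e.getElementsByTagNameNS(GD_NS, "when")[0]
        ?.getAttribute("startTime") ?? "",
    }))
    .filter((e) => e.start)
    // RFC 3339 timestamps with a consistent offset sort as strings
    .sort((a, b) => a.start.localeCompare(b.start))
    .slice(0, max);
  return items.map((e) =>
    `<item><title>${e.title}</title>` +
    `<pubDate>${new Date(e.start).toUTCString()}</pubDate></item>`
  ).join("\n");
}

// Usage: cleanFeed("http://example.org/calendar.atom").then(console.log);
```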