So first, the plugin. I have basically ported the MT Actionstream plugin to WordPress (you can see it in action on my main page). This is pretty cool, and it means we can easily share settings with other efforts.
New in this release is the ability to import a list of services from another profile page (like MyBlogLog or FriendFeed) using the Google Social Graph API (SGAPI).
Code lesson: sometimes the WordPress docs lie. They say that you pass a function name (or an array of object reference and method name) to the wp-cron scheduling functions to set up recurring actions. Not true. You pass the name of a WordPress action hook, and attach your function to that hook with add_action().
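A minimal sketch of what actually works (the hook and function names here are placeholders, not from the plugin):

```php
<?php
// The third argument to wp_schedule_event() is NOT a callback -- it is the
// name of an action hook that wp-cron will fire on schedule.
if ( ! wp_next_scheduled( 'actionstream_poll' ) ) {
    wp_schedule_event( time(), 'hourly', 'actionstream_poll' );
}

// Nothing happens unless you attach a function to that hook with add_action().
add_action( 'actionstream_poll', 'actionstream_poll_services' );

function actionstream_poll_services() {
    // fetch new activity items from each configured service and store them
}
```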
Blogging lesson: blog your own work. This plugin has been covered by at least four blogs now (more than most of my stuff) and not yet by me. I just posted the plugin to the DiSo mailing list and people liked it. I’m not complaining, but I’ll definitely post my own stuff up front in the future!
Since APP (the Atom Publishing Protocol) is mainlined in WordPress, it makes sense to use it in DiSo efforts. I doubt that my OAuth plugin will work with it as-is, but it’s worth testing; it may mean sending the auth data in HTTP headers. With comment and discovery support we should be able to build a distributed commenting system, at least for WordPress.
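A rough sketch of what leaving a remote comment could look like, assuming the target blog exposed an AtomPub collection for comments on a post (which is exactly the part that would need building). The endpoint URL and credentials below are made up; discovering the endpoint, e.g. via a link element on the post, is the missing piece:

```php
<?php
// Hypothetical distributed comment: POST an Atom entry to a per-post
// comment collection on a remote WordPress blog.
$endpoint = 'http://example.com/wp-app.php/posts/42/comments';

$entry = '<?xml version="1.0" encoding="utf-8"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Re: your post</title>
  <author><name>Stephen</name><uri>http://example.org/</uri></author>
  <content type="text">Great point about distributed comments.</content>
</entry>';

$response = wp_remote_post( $endpoint, array(
    'headers' => array(
        'Content-Type'  => 'application/atom+xml;type=entry',
        // HTTP Basic here; an OAuth Authorization header could go in its place.
        'Authorization' => 'Basic ' . base64_encode( 'user:password' ),
    ),
    'body'    => $entry,
) );
```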
I’ve been thinking about other APIs that would be useful for DiSo — adding friends or groups, for example. APP doesn’t cover these, but its general concepts do. Perhaps APP can be abstracted into more of a CPP.
GET on main endpoint to list items (ATOM can always be the main wrapper here).
POST to main endpoint to create new items.
PUT to node to edit.
DELETE to node to delete.
Authentication unspecified (HTTP Basic or OAuth work well).
If the content of your POST and PUT requests is ATOM, you have AtomPub; the same basics can easily work with other content. (Other content types could be encapsulated in ATOM entry bodies for the GET listing, or listed as XOXO.)
For example, a POST body of XFN+hCard could add a friend. A PUT body of hCard could edit a profile (e.g. to add groups).
I would also like to suggest that POST on a node could be used to add comments (create new content on a content node).
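A rough sketch of what adding a friend might look like under this scheme. Nothing here is a real API yet — the endpoint and markup are illustrative only:

```php
<?php
// Hypothetical "CPP" request: POST an XFN+hCard fragment to a person's
// contacts endpoint to add them as a friend.
$endpoint = 'http://example.com/profile/contacts';

$fragment = '<div class="vcard">
  <a class="url fn" rel="friend met" href="http://example.org/">Jane Doe</a>
</div>';

$response = wp_remote_post( $endpoint, array(
    'headers' => array(
        'Content-Type'  => 'text/html',
        'Authorization' => 'Basic ' . base64_encode( 'user:password' ),
    ),
    'body'    => $fragment,
) );

// By the same pattern, a PUT of a full hCard to the profile node would edit
// the profile, and a POST of a comment body to a content node would comment.
```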
Don’t get me wrong, I love Google’s Social Graph API; it’s a great way to speed up the discovery of XFN data by using Google’s cache. What does not make sense to me, however, is the ‘NodeMapper’ concept that is built into the API. It maps multiple URLs from a site not onto a single canonical URL, but onto an SGAPI-only URI scheme, and it maps using URL patterns known for that particular site, so it doesn’t even work on the web in general. When it does work, what is it useful for? URL consolidation. The problem is that the only things you can do with a nodemapped URI are (1) use it as a unique key or (2) turn it back into a URL to get data.
I don’t get it, guys. How is this better? Is there even a reason to consolidate things like FOAF files back to the main page, since most people will enter the main page itself as input anyway? Even if it were useful, shouldn’t it map to the main page itself rather than to some proprietary URI scheme?
Thoughts? Anyone see a use for this that I’m missing? Or is this misfeature just adding a layer of data that someone might use and that we’ll have to hack around again later?
I got back Monday morning from SGFooCamp. This is a sort of dump of my thoughts on the event and its results.
The first thing, for me, was the actual networking that went on. I met lots of people I’ve followed online for some time, and many more besides. The informal discussions that “just happened” were, I think, informative for everyone.
The talks I attended were all excellent. I was inspired with a number of ideas that will make it into my DiSo plugins, and I gained a lot of insight into what users expect versus what I, as a geek, tend to think of.
Two specific points of awesome:
1) Talk-turned-flame-war about DataPortability Workgroup. Good to see the air cleared at least. Way too much hype there.
2) Witnessing one of the first usefully federated XMPP PubSub messages. Seeing just how fast it can be.
If I got one thing from the (un)conference it would be the value of a demo. Onward to DiSo hacking!
We all hate SPAM. We all love Akismet. Gmail is also great at killing SPAM. Why are Akismet and Gmail so great? They have huge databases of SPAM from their many users to train filters with.
Only one problem: they’re commercial and closed. Same old story: if they go down or turn evil, we’re screwed.
Solution: decentralise.
The way that I’ve been thinking this could work is threefold.
First off, write a plugin for WordPress/other things that logs all SPAM in the WordPress database and allows anyone to easily access this list in standard formats. This could hook into Akismet and other solutions to track what existing solutions mark as SPAM, as well as what users manually mark as SPAM/ham.
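A rough sketch of the logging side of that plugin. The transition_comment_status hook is a real WordPress hook; the spamlog table and column names are hypothetical:

```php
<?php
// Log every comment that gets marked as spam (whether by Akismet or by hand)
// so it can later be republished in a standard, public format.
add_action( 'transition_comment_status', 'spamlog_record', 10, 3 );

function spamlog_record( $new_status, $old_status, $comment ) {
    global $wpdb;
    if ( 'spam' !== $new_status ) {
        return;
    }
    $wpdb->insert( $wpdb->prefix . 'spamlog', array(
        'author_name'  => $comment->comment_author,
        'author_email' => $comment->comment_author_email,
        'author_ip'    => $comment->comment_author_IP,
        'content'      => $comment->comment_content,
        'logged_at'    => current_time( 'mysql' ),
    ) );
}
```

The public list could then be exposed as a feed or similar standard format at a well-known URL on the blog.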
Then, create a site that simply lists sites that are publishing SPAM data, with links.
Third, create simple server software that scrapes the publishing sites, accepts submissions of data, offers a public API for individual SPAM submissions (like Akismet’s), or does some combination of the above. This server could also include filter logic that trains itself and offers a public API, or that could be left to other servers that rely on these ones.
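A very rough sketch of the individual-submission side of such a server, in plain PHP. Nothing here is an existing API; a real server would authenticate reporters, deduplicate, and feed the data into a trainable filter:

```php
<?php
// submit.php -- accept a single SPAM report and append it to a local store.
if ( 'POST' !== $_SERVER['REQUEST_METHOD'] ) {
    header( 'HTTP/1.1 405 Method Not Allowed' );
    exit;
}

$report = array(
    'ip'      => isset( $_POST['ip'] ) ? $_POST['ip'] : '',
    'email'   => isset( $_POST['email'] ) ? $_POST['email'] : '',
    'content' => isset( $_POST['content'] ) ? $_POST['content'] : '',
    'source'  => isset( $_POST['source'] ) ? $_POST['source'] : '',
    'time'    => gmdate( 'c' ),
);

// Append as one JSON line per report; a real server would use a database.
file_put_contents( 'spam-reports.log', json_encode( $report ) . "\n", FILE_APPEND );
echo 'ok';
```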
The big thing is that this code should all be open source so that anyone can run a server. Each server would either scrape from all publishing sites, or publishing sites could keep a list of operating servers to submit to. Either way, we end up with a multiple-server environment with distributed data and load.