I am pleased to announce version 0.2 of my WordPress Actionstream plugin!
It can be downloaded from the normal place.
New in this release:
- Better microformats support in the output
- Some architecture improvements and bug fixes
- There is now a sanity check for zero items or fewer items than requested
- Posts on the host blog are now added to the actionstream
- There is a well-defined way to add stream items (say, from another plugin). Just create an array with the fields you need; be sure to specify identifier and created_on (the GUID and the unix time of publish, respectively), and usually a title and url as well. Then instantiate an object of class ActionStreamItem and save it like so (see the sketch after this list for a fuller example):
$item = new ActionStreamItem($array_data, 'service', 'data_type', $user_id);
$item->save();
- There is now a hook for other plugins to add available services. Example:
actionstream_service_register('feed',
    array(
        'name' => 'Feed',
        'url' => '%s'
    ),
    array(
        'entries' => array(
            'html_form' => '[_1] posted <a href="[_2]" rel="bookmark" class="entry-title">[_3]</a>',
            'html_fields' => array('url', 'title'),
            'url' => '{{ident}}',
        )
    ));
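As promised above, here is a minimal sketch of building and saving a stream item (the field values, and the 'example' service and 'notes' data type names, are purely illustrative):

$array_data = array(
    'identifier' => 'http://example.com/notes/42', // GUID for this item
    'created_on' => time(),                        // unix time of publish
    'title'      => 'Hello world',
    'url'        => 'http://example.com/notes/42',
);
$item = new ActionStreamItem($array_data, 'example', 'notes', $user_id);
$item->save();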
So first, the plugin. I have basically ported the Movable Type Action Streams plugin to WordPress (you can see it in action on my main page). This is pretty cool, and it means we can easily share service settings with other efforts.
New in this release is the ability to import a list of services from another page (like a MyBlogLog or FriendFeed profile) using Google’s Social Graph API (SGAPI).
Code lesson: sometimes the WordPress docs lie. They say that you pass a function name (or an array of object reference and method name) to wp-cron hooks to schedule regular actions. Not true. You pass the name of a WordPress action (one registered with add_action).
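Here is a minimal sketch of the pattern that actually works (the actionstream_poll hook name and the actionstream_update_all callback are hypothetical):

// Register a handler for a custom action...
add_action('actionstream_poll', 'actionstream_update_all');

// ...then hand wp-cron the action name, not the callback.
if (!wp_next_scheduled('actionstream_poll')) {
    wp_schedule_event(time(), 'hourly', 'actionstream_poll');
}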
Blogging lesson: blog your own work. This plugin has been covered by at least four blogs now (more than most of my stuff) and not yet by me. I just posted the plugin to the DiSo mailing list and people liked it. I’m not complaining, but I’ll definitely post my own stuff up front in the future!
Another post based on a previous tweet. This took me at least an hour to debug, so I thought it was worth sharing.
IE, apparently, gets unhappy when you append nodes to the end of a node it hasn’t finished rendering yet. In practice, this means it blows up when you call document.body.appendChild before the page has loaded. The easy solution? Append to a node that has already loaded! What node is almost guaranteed to be there while the body is rendering? The head node, of course! Here is the code:
// the head element already exists while the body is still rendering, so this is safe:
document.getElementsByTagName('head')[0].appendChild(script);
Since APP (the Atom Publishing Protocol) is mainlined in WordPress, it makes sense to use it in DiSo efforts. I doubt that my OAuth plugin will work here as-is, but it’s worth testing. It may mean using headers, but with comment and discovery support we should be able to build a distributed commenting system, at least for WordPress.
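For instance, discovery can start from the AtomPub service document; as I recall, WordPress serves it through wp-app.php (verify the exact path against your install):

GET /wp-app.php/service HTTP/1.1
Host: example.com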
I’ve thought about other APIs that would be useful for DiSo. For example, adding friends or groups. APP does not fit this, but the general concepts do. Perhaps APP can be abstracted into more of a CPP.
- GET on the main endpoint to list items (Atom can always be the main wrapper here).
- POST to the main endpoint to create new items.
- PUT to a node to edit it.
- DELETE to a node to delete it.
- Authentication unspecified (HTTP Basic or OAuth work well).
If the content of your POST and PUT requests is Atom, you have AtomPub. The same basics can easily work with other content: the other content types could be encapsulated in Atom entry bodies on the GET listing, or in XOXO.
For example, a POST body of XFN+hCard could add a friend, and a PUT body of hCard could edit a profile (e.g., to add groups).
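A concrete sketch of the add-a-friend case (the /people endpoint path and the markup are hypothetical, and authentication is omitted):

POST /people HTTP/1.1
Host: example.com
Content-Type: text/html

<div class="vcard">
  <a class="url fn" rel="contact friend" href="http://jane.example.org/">Jane Doe</a>
</div>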
I would also like to suggest that POST on a node could be used to add comments (create new content on a content node).
Don’t get me wrong, I love Google’s Social Graph API; it’s a great way to speed up the discovery of XFN data by using Google’s cache. What does not make sense to me, however, is the ‘NodeMapper’ concept that is built into the API. It maps multiple URLs from a site onto, not a single URL, but an SGAPI-only URI scheme. It maps using URL patterns that are known about the site, so it doesn’t even work on the web in general. And when it does work, what is it useful for? URL consolidation. The problem is that the only things you can do with a nodemapped URI are (1) use it as a unique key or (2) turn it back into a URL to get data.
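To illustrate (this mapping is from memory, so treat the exact form as approximate), NodeMapper collapses a profile URL into the sgn:// scheme rather than into a canonical URL:

http://brad.livejournal.com/  ->  sgn://livejournal.com/?ident=brad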
I don’t get it, guys. How is this better? Is there even a reason to consolidate things like FOAF files back to the main page, since most people will enter the main page itself as input anyway? Even if it were useful, shouldn’t it actually map to the main page and not to some proprietary URI scheme?
Thoughts? Anyone see a use for this that I’m missing? Or is this misfeature just adding a layer of data that someone might use and that we’ll have to hack around again later?