Creative Problem Solver. Programmer. Bodysurfing. Sometime Comics.
Blogger since 2001.

own yr www rn! #IndieWeb

Have I ever posted on Leap Day?

I genuinely don’t know. There have been 5 since I started the site. 2004. 2008. 2012. 2016. 2020.
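A quick sanity check of that count, as a throwaway script (Python just for brevity; this isn't part of the site):

```python
# Leap years: divisible by 4, except century years not divisible by 400.
def is_leap(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Leap years since the site started in 2001, before today.
print([y for y in range(2001, 2024) if is_leap(y)])
# [2004, 2008, 2012, 2016, 2020]
```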

And today is Leap Day and I already posted today.

Have I ever posted on Leap Day?

Let’s write a WordPress shortcode and find out!

/**
 * Expect [on_this_day month="February" day="29"]
 * Both month and day are required
 * And return a list of posts from that day grouped by year
 * @param $atts
 * @param $content
 * @param $tag
 */
add_shortcode( 'on_this_day', function ( $atts, $content, $tag ) {
    $atts = shortcode_atts([
       'month' => '',
       'day' => '',
    ], $atts, $tag);
    $month = $atts['month'];
    $day = $atts['day'];
    if (empty($month) || empty($day)) {
       return '';
    }
    // if month is non-numeric, convert it to a number
    if (!is_numeric($month)) {
       $month = date('m', strtotime($month));
    }
    // and the same with the day...
    if (!is_numeric($day)) {
       $day = date('d', strtotime($day));
    }
    $args = [
       'post_type' => 'post',
       'post_status' => 'publish',
       'posts_per_page' => -1,
       'date_query' => [
          'month' => $month,
          'day' => $day,
       ],
       'orderby' => 'date',
       'order' => 'ASC',
    ];
    $posts = get_posts($args);
    $posts_by_year = [];
    foreach ($posts as $post) {
       $year = get_the_date('Y', $post);
       $posts_by_year[$year][] = $post;
    }
    $output = '';
    foreach ($posts_by_year as $year => $year_posts) {
       $output .= sprintf('<dt>%s</dt>', $year);
       foreach ($year_posts as $post) {
          $output .= sprintf('<dd><a href="%s">%s</a></dd>',
             get_permalink($post),
             get_the_title($post)
          );
       }
    }
    return '<dl class="on-this-day">' . $output . '</dl>';
});


I add that code to my theme. And now, when I add a shortcode specifying a month and a day to a page or post, like so…

[on_this_day month="February" day="29"]

…I get output like this:

Mr. Bird
Gravatar WordPress verification for self-hosted WordPress
Have I ever posted on Leap Day?

So before today, I posted once on 29 February. In 2008. I’m surprised! That was fun.

Since it’s Leap Day it’s also Delete Your Drafts Day. I deleted 3 lingering drafts I will not miss. I turned 1 into a post which will publish tomorrow. So there are 4 drafts left, which is good enough for me. I wrote about my odd schedule for checking on drafts last week, if you care to read it.

Happy Leap Day!

Unrelated goofiness: ‘30 Rock’ Co-Creator Tells The Story Behind Leap Day Williams Watch the weirdness and check it out on Nestflix.

P.S. is there a WordPress shortcode you wish existed? What would it be?

Gravatar WordPress verification for self-hosted WordPress

Gravatar is doing something interesting and ambitious with their icon site. For a long time it’s been the place where you update a pictorial avatar once and have it propagate to services like GitHub, Slack and others. Now they are allowing you to “verify” links associated with that account by doing authentication flows with individual sites such as Instagram, Mastodon, Twitter, GitHub.

Being owned by Automattic, Gravatar has the capability to directly verify hosted blogs, but it can also do the verification against self-hosted WordPress (WordPress dot org) instances:

Other WordPress site

To make your association with the site, their instructions state:

For verifying your WordPress site, you need to add on your site a link to your Gravatar profile. This allows us to validate that you have access to your WordPress site. Please follow these steps below:

Go to your WordPress site and paste the embed code above into the editor of any post or page of your site. The code will create an invisible Gutenberg block that will help us to validate your site. (You can delete this block later after this verification process is completed.)

The code they offer looks like this:

<!-- wp:social-links --><ul class="wp-block-social-links"><!-- wp:social-link {"url":"","service":"chain","label":"test","rel":"me"} /--></ul><!-- /wp:social-links -->

As I don’t use the Gutenberg block editor I didn’t like the idea of adding a Gutenberg chunk to my site, but adding such a chunk to a post or page would work: the the_content filter is what renders the block markup, even if you use the Classic Editor.

Rather than adding the code to a page, instead I added a bit of code to my own hand-rolled theme’s footer.php file. That looks like this:

$gravatar_rel_me_verifier = '<!-- wp:social-links --><ul class="wp-block-social-links"><!-- wp:social-link {"url":"","service":"chain","label":"test","rel":"me"} /--></ul><!-- /wp:social-links -->';
echo apply_filters('the_content', $gravatar_rel_me_verifier);

Note that if you use a theme that gets updates, this code edit will be clobbered by the next update. If you want the change to be permanent, make it in a child theme of the theme you use, copying whatever code the theme uses for the footer and making the edit there.
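For reference, a minimal child theme needs only a style.css whose Template header names the parent theme’s folder, plus a copy of the parent’s footer.php containing your edit. (The names below are placeholders, not my actual theme.)

```css
/*
Theme Name: My Theme Child
Template:   my-parent-theme
*/
```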

Digital Gardens

I attended the Homebrew Website Club Europe this morning (evening in the UK and Europe). These Zoom calls are always an excellent opportunity to hear folks’ ideas, thoughts, and ambitions.

Jo is often on the HWC meetings and her site uses the .garden top-level domain. Her website is a kind of digital garden: her art and ideas, things she watched, and pages she liked.

In the meeting today the name Maggie Appleton came up (I think maybe by James?). I had no idea that the term “digital garden” has a rich history. (It feels great to learn new history about the internet. I feel happy that people are thinking deeply about how we use these digital spaces.) Maggie Appleton wrote an extensive backgrounder, “A Brief History & Ethos of the Digital Garden,” 3 years ago. Charmingly, the piece is attributed as “Planted 3 years ago.” Appleton describes a “digital garden” as…

a collection of evolving ideas that aren’t strictly organised by their publication date. They’re inherently exploratory – notes are linked through contextual associations. They aren’t refined or complete – notes are published as half-finished thoughts that will grow and evolve over time. They’re less rigid, less performative, and less perfect than the personal websites we’re used to seeing.

She is not the originator of the term. For that, she credits Mark Bernstein.

Mark Bernstein’s 1998 essay Hypertext Gardens appears to be the first recorded mention of the term. Mark was part of the early hypertext crowd – the developers figuring out how to arrange and present this new medium.

That presentation, Hypertext Gardens: Delightful Vistas, is great. This sentence (from 1998!) stood out to me:

Today’s Web designers are taught to avoid irregularity, but in a hypertext, as in a garden, it is the artful combination of regularity and irregularity that awakens interest and maintains attention.

I love that.

The short presentation is filled with metaphors for experience that resonate for me. I love delight and serendipity.

The whole presentation is a kind of meditation on the notion of a gentle stroll through a garden. When you take your time you might notice things you never noticed before. I have a comic, “Was that there before?”, on that theme. I’m a big fan of garden as metaphor. I’ve been thinking in terms of human relationships along those lines for several years. I wrote about that recently in Al & My Friendship-as-Garden Theory.

I am glad to learn more history on the term digital garden.

In looking more at gardens I discovered that in August last year there was an IndieWeb Carnival on the topic. So… What’s an IndieWeb Carnival?

Each month, the carnival has a different host. At the beginning of the month, the host comes up with the topic, and posts it both on their website and here. Then, other people post their submissions and alert the host about them. At the end of the month, the host collects all the received submissions and posts an overview of it.

Source: IndieWeb Carnival, on the IndieWeb Wiki.

Mark Sutherland hosted the IndieWeb Carnival for August 2023 with the theme of Gardening.

I found 5 participating posts from that IndieWeb Carnival by searching. There may be more out there, but if so they may not have been “spidered” by search engines, either inadvertently or intentionally. (Jo’s post on Search Engine Hostility is worth your time.)

Mistrust-Based Technology Choices

The latest news in tech has been excitement over reports that Automattic intends to sell Tumblr and website data (read that as “your blog posts”) to a Large Language Model company for big dollars. Putting your stuff on a company’s servers usually means they get to do what they want with it, including making money selling it, even if they don’t cut you in on the sale. This got me thinking about how little content I have on those servers, and how I don’t feel strongly about whether what I do have there gets sold. I do use WordPress – the dot org version – which does not flow through, and is not subject to, the servers that Automattic runs. That’s my own situation, but it also got me thinking about why I have never gone all-in on services that host my stuff on their servers, where it appears only there.

I chose to start blogging back in 2001. I was late to the blogging world, but I had a fine list of blogs I checked regularly, and writing things regularly appealed to me.

Blogger gave me two choices as to where I could send those posts. I could choose a subdomain (or maybe even set up a whole new domain) and Blogger would be the web host for it. Or I could do a trickier thing: using FTP to send the blog files somewhere when I published.

By that point I had used both AOL’s and Earthlink’s hosting (yes, ISPs used to give you web space! Do any do that anymore? I’m pretty sure Spectrum and Verizon don’t give me any web hosting with my cable modem and cell phone). And I’d had troubles with each. Sometimes it was flaky, or I hit limits. Using someone else’s server was inherently limited.

I chose the trickier thing out of an instinct: these were my words. They ought to live on my server. And I will be less limited by what I want to do with them if I have it on my host.

FTP publishing meant that sometimes I could not post because of downtime. As the blog grew it took longer to publish. Each time I published it would have to send all those individual HTML files to my server, and sometimes that took more than just a few seconds.

But it worked. And I put enough trust in Blogger to use their fancy user interface and editing tools, but not enough to serve the content.

And it’s turned out that, for me, making decisions based on mistrust has been the right choice. When I don’t have a thing as files on a thumb drive I don’t trust it. So how do I get that backup and keep it up to date?

Part of why I am involved with #IndieWeb things is my strong belief that people should own their stuff. And for me, that means if I put something up on someone else’s website I want a copy for myself at some point. In the case of this website, for example, anything I put on Instagram I pull back into my personal website.

That’s mistrust.

And I just remembered that I did the same thing with my bookmarks. That bookmarking site was sold, sucked, and died, and was exhumed more or less. Kind of a zombie case. Even if something goes on living, it might never be what it once was.

How does a site death happen? Products come out and the excitement is so strong. “LOOK AT THIS AWESOME NEW WEBSITE! USE IT! IT’S FREE! IT’S CHEAP! IT’S A FLOOR WAX! IT’S A DESSERT TOPPING!” Then the money runs out. Or the investors and owners cash out and sell the product or the whole company. Or maybe they just get beaten in the marketplace. They don’t view it as their job to keep things online when they run out of money. Shimmer will go away. In a college marketing class I learned of the “product lifecycle”: introduction, growth, maturity, and decline. Teenage me was a natural contrarian, so I couldn’t help but think of Coca-Cola and McDonald’s. But exceptional products are exceptional. Not many companies, let alone products, live longer than an average human lifespan.

When a product I use dies I tend to put the shutdown email’s text up as a blog post on this website. The shutdown tag includes mentions of Google Reader. Google Plus. tvtag. This is My Jam. And from 2008 here’s Yahoo! Mash Beta – a social media site I don’t remember. The IndieWeb website has a page called Site Deaths which documents the history of the phenomenon far better. It’s pretty sad.

If you find yourself thinking that maybe this time, this product, will be different: don’t count on it. And it’s not new to this or even the last century: The Dead Media Project is a list of dead media that goes back millennia.

In the long view, mistrust is a regrettably useful strategy to use when making technology choices.


Another day, another…

This* is why I pay money to a registrar for my own domain and pay money to a hosting service for a web server and application server and databases and use open software that makes my data portable to myself as much as is practical.

* Different reason each week.

( originally published at )

2023 Top Albums

I like this image, from my account, from 2023. I forgot I grabbed this image and saved it. But it includes many of my favorites: Zappa, Kitty, Lawrence, De La Soul, 2-Tone Ska, Roxy Music, and things like Metric and Selena Gomez and Tessa Violet, and an artist I never would have appreciated until I got older: Blossom Dearie.

People share their “wrap ups” from services like Spotify or Apple, and they are so ephemeral. I’m a fan of keeping images of them and putting them on my blog so I can look back later and remember the music that, maybe a few years later, I’d forgotten worked for me.

I’m pretty sure Missing Persons is one I came back to, looking at old lists from old years after not listening to them for a long while.

Dark skies afternoon

Extracting CSS Colors from Screenshots of Web Pages

I’ve talked a lot about the headers on this site over the years. It’s one of my favorite long-term projects. One thing that I’ve considered over the years is making associated styling for the site’s footer, or other parts of the site, based on the colors in them.

Try 1: parse HTML for colors

The first thing I did was write code to parse the HTML for the headers and seek out colors. It wasn’t bad code, and I was able to find colors in whatever format: keywords like ‘blue’ or ‘black’ or ‘white’, rgb() and rgba() calls, and all kinds of inline style attributes. All fine and good.
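The gist of that first pass, as a sketch (shown in Python for brevity rather than the PHP the site actually uses, with made-up sample markup; the real code handled more formats):

```python
import re

def find_colors(html):
    """Collect color-like tokens from markup: hex codes,
    rgb()/rgba() calls, and a few common CSS color keywords."""
    hex_codes = r"#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b"
    functions = r"rgba?\([^)]*\)"
    keywords = r"\b(?:black|white|blue|red|green)\b"
    return re.findall(f"{hex_codes}|{functions}|{keywords}", html)

print(find_colors('<div style="color: #fff; background: rgb(10, 20, 30)">blue</div>'))
# ['#fff', 'rgb(10, 20, 30)', 'blue']
```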

Try 1, Part 2: parse images for colors

So I began to write code to capture each image locally, parse it for colors, and see what the results were. It was weirdly straightforward to write code that reads the image into a GD object and runs over every pixel.

$im = imagecreatefrompng($directory . '/' . $file);
// blur the image
// save the image
$uniqueColorsAndCount = [];
for ($x = 0; $x < imagesx($im); $x++) {
	for ($y = 0; $y < imagesy($im); $y++) {
		$rgb = imagecolorat($im, $x, $y);
		$r = ($rgb >> 16) & 0xFF;
		$g = ($rgb >> 8) & 0xFF;
		$b = $rgb & 0xFF;
		$hex = sprintf("#%02x%02x%02x", $r, $g, $b);
		if (in_array($hex, $ignoreColors)) {
			continue;
		}
		if (!array_key_exists($hex, $uniqueColorsAndCount)) {
			$uniqueColorsAndCount[$hex] = 1;
		} else {
			$uniqueColorsAndCount[$hex]++;
		}
	}
}
And doing this for each image definitely gave me results which seemed like valid answers and some ok colors.

But the images live on backgrounds, and just because a color appears in a CSS gradient doesn’t mean it’s significant, and some of the colors were essentially black. Pixel count is not significance.

Realization A: Significant Colors are not in the code or images, they’re part of the whole

I got frustrated. I was doing all this work to interpret the HTML and component images, when what I wanted was to look at the totality of the way the headers look. So I went looking for other solutions to this problem. And I immediately found a great PHP library I could install with Composer, via the question Detect main colors in an image with PHP. The PHP League’s Color Extractor library looked great, and I could run it on my header screenshots, previously grabbed with the excellent shot-scraper library.

Try 2: Use Color Extractor

Trying some examples with that library, the colors it chose were far more representative of the overall look. I ran it on a few screenshots and I liked the variety of colors it provided. I wanted to decide for myself which foreground and background colors to use, so I just dumped out the colors in CSS blocks for review in my editor.

The code looks like:

use League\ColorExtractor\Color;
use League\ColorExtractor\ColorExtractor;
use League\ColorExtractor\Palette;

$palette = Palette::fromGD($im);
$topColors = $palette->getMostUsedColors(12);
$colorCount = count($palette);
// an extractor is built from a palette
$extractor = new ColorExtractor($palette);
// the League\ColorExtractor\ColorExtractor library does the work
// to get the most representative colors
$colors = $extractor->extract(5);
// populate the css file with colors
foreach ($colors as $color) {
	printf("--color: %s;\n", Color::fromIntToHex($color));
	printf("--backgroundColor: %s;\n", Color::fromIntToHex($color));
}

The full code is now in a GitHub Gist.

Opening the resulting CSS file looks like:

My editor PhpStorm is reminding me with that highlighting that I have duplicate CSS variable definitions. It also has an excellent feature that shows a swatch of the color itself next to any mention of a CSS color. Once I had that, my editorial task was to go through each block and leave one color and one backgroundColor to suit my tastes. I went through the many headers and did that, and integrated the results into the site’s CSS. And now, for all blog posts and date-based archive pages, the footer has colors defined. I may yet use those CSS variables in more places. For now, just the footer:
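Wiring the variables up takes just a couple of declarations (the selector here is illustrative; the real rule targets this site’s own footer markup):

```css
.site-footer {
	color: var(--color);
	background-color: var(--backgroundColor);
}
```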

My aesthetic is on the quirky side so these fit in great and they add some personality to archive pages that I like.

It’s also a reminder that for any task it’s helpful to think aloud “what am I trying to solve?” In this case I started with the question “what are the colors in these header files and header images?” when the better question was “what are the most iconically representative colors in each header as it appears on screen?”

Of course, it often takes experimentation to get from the first question to the final question!

Open Shelves

capjamesg has made public a promising tool to read bookshelves for book cataloging. It’s called Open Shelves. He wrote about it more in a post called Photograph a bookshelf, get a list of the book titles.

It’s wonderful to see that a single person can build an interesting computer vision tool. I plan to do this a bit more with my bookshelves with better lighting and more intention – it’s a great start on a great tool. Great job James!

Film Threat Video Recommendations: Duplass

This post is a draft from last year, after San Diego Comic Con 2023.

In my heyday of subscribing to and purchasing paper magazines, one of my favorites was Film Threat. It’s not a perfect magazine by any means, but it is a brand that has survived and adapted. And film has certainly changed. Production has changed radically since the 1990s. Financing has changed. Distribution has changed. In-person film has changed. And so, film has changed. Chris Gore spoke at SDCC 2023 at a panel on independent film and how to make movies and get the word out in 2023 (mostly: build your audience online and make films any fricking way you can; you probably have a filmmaking tool in your pocket right now).

Chris Gore recommended a few things; here are the two that stood out for me. The first includes this quote:

“The Cavalry Is Not Coming”

That quote from Mark Duplass is the key, and it’s a lesson: there’s no deus ex machina for making movies. And the lesson is exactly what Chris Gore amplified: make movies any which way you can, if you want to make movies:

Mark Duplass keynote from SXSW 2015

And here’s a trailer for a Duplass movie: The Puffy Chair (film) (wikipedia)