Notes

Short-form thoughts, observations and musings

Yesterday, I came across an article on Engadget where the author pointed out a very odd discrepancy in Apple’s use of fonts:

There are a lot of rumors flying around about a big iOS and macOS redesign coming this year, perhaps as a distraction to the continued issues around Apple Intelligence. And while I’m game for a fresh coat of paint across the software I use every single day, I have one plea while Apple’s at it: Please, for the love of god, make the Notes app render the letter “a” properly.

Let me back up a bit. Apple first introduced the San Francisco typeface with the first Apple Watch in 2015; a few years later it became the default on basically every device Apple sells. The text you see in Messages, Apple Music, Maps and many other system apps are all different San Francisco fonts, and for the most part the multiple variations all feel consistent and cohesive.

But, at some point in the last seven or eight years I noticed something odd in the Apple Notes app. The font appears the same as the other San Francisco fonts, but something just felt “off.” It took forever before I put my finger on it: the lowercase “a” renders differently in the Notes app than it does anywhere else across the entire system.

You see, the Notes app uses a “single storey a,” the sort of “a” that most people use when writing by hand. That’s the only first-party app, as far as I can tell, where you’ll find a single-storey a. The rest of the time, it uses the double-storey a (just as you’ll see on this website and almost everywhere else a lowercase a is used these days outside of handwriting).

Engadget

I had never noticed it until I read the article, but it definitely is that way. I just tried it out and took a screenshot:

The different a’s

You can even see that the title bar of the Notes app has the standard double-storey a whereas the note itself does not. Not pictured here is that the rest of the fonts used in these apps appear to look exactly the same. The article has a much better example of it side-by-side, but I don’t want to copy their image, so I’ll just recommend you take a look at it on Engadget.

This has never really bothered me because I never noticed it, but now it does. As the author pointed out, something always felt a little bit off with the font in Notes, but I hadn’t noticed what it was until now. Maybe Apple will fix this? Or is it a feature? Presumably so, if they went through the effort of having a separate font for Notes.

Here is the original article on Engadget: https://www.engadget.com/apps/the-only-thing-i-want-from-apples-big-2025-redesign-is-a-120023531.html

Both 9to5Mac and The Verge report on a new bill being proposed by U.S. Representative Kat Cammack that would force companies to allow third-party app stores:

A Florida congresswoman has introduced a new bill targeting Apple, aiming to boost competition and expand consumer choice by mandating third-party marketplaces like the EU.

U.S. Representative Kat Cammack (R-FL) has introduced the App Store Freedom Act (via The Verge), a bill aimed at increasing competition and consumer choice in the mobile app marketplace. The legislation targets major app store operators—those with over 100 million U.S. users—including the App Store.

If enacted, the bill would require these companies to allow users to install third-party app stores and designate them as default, grant developers equal access to development tools, and permit the use of third-party payment systems. Additionally, it mandates the ability to remove or hide pre-installed apps—something Apple already does.

The bill also seeks to prevent app stores from forcing developers to use the company’s in-app payment systems, imposing pricing parity requirements, or punishing developers for distributing their apps elsewhere. Violations could lead to penalties from the Federal Trade Commission and civil fines up to $1 million per infraction.

9to5Mac

First of all, it’s always weird reading about Kat Cammack in the news because I went to school with her. Secondly, I have mixed feelings about this. On the one hand, more competition might lead to better pricing if developers don’t have to compensate for Apple’s fees.

On the other hand, I have a feeling this might lead to a fracturing of the ecosystem which would lead to a much more disjointed user experience. Frankly, I don’t want to have to download and install multiple app stores just to access the apps that I want. Plus, if Apple forces apps to use its in-app payment system, it makes the purchasing experience for users much more consistent and convenient. The current system is particularly useful with subscriptions since you can see them all in one place and easily cancel them at any time.

That said, maybe the issue that really needs to be tackled here is Apple’s fee structure. If Apple were forced to give up the 15%/30% fees it takes from every in-app purchase, that would solve both problems: prices would fall, developers would earn more, and users would still get a consistent experience.

As far as being able to install apps with alternative app stores goes, I would prefer the ability to download them directly from the internet and install them much like you can for macOS, Android or any other operating system. Apple can still notarize the apps (like they do for macOS), but that would allow developers to offer apps directly to users without having to install multiple app stores.

You can read the article from 9to5Mac here and The Verge here.

A few days ago, an article was published on Ars Technica that, as a creative person, I thought I should share here. As the title implies, it is about protecting human creativity from the onslaught of AI.

Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.

Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.

A limited resource

By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.

But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.

[…]

Ars Technica

I create a lot of content in various forms: writing, coding, music, woodworking, etc. While I’m not too concerned about AI being able to do woodworking any time soon the way it can generate text, it is still concerning how much of what it produces is flooding the internet while being advertised as art.

I do have to admit that I use the occasional AI-generated image for the title images of some of my blog posts, but I label them clearly and don’t try to pass them off as my own work because, well, they’re not. They’re machine-generated, and I only use them if I can’t find something suitable on Unsplash or any of the other royalty-free image websites with images created by humans.

In any case, I can recommend reading the full article on Ars Technica here and giving the subject matter some thought as well.

The Windows 2000 default start menu

I recently ran into an article over at Ars Technica that traces the history of the Windows Start menu, complete with screenshots of each major release of Windows, including a few beta versions. The article is a decade old by now and was spurred by the twentieth anniversary of the release of Windows 95 as well as the release of Windows 10, which reintroduced the Start menu after the Windows 8 debacle. Given its age, it doesn’t include Windows 11, but it is interesting nonetheless for anyone curious about the evolution of Windows.

Windows 95 launched exactly 20 years ago today, and though the operating system is long dead many of its ideas still live on. In celebration of 95’s 20th birthday, we’re revisiting our piece on the evolution of the Windows Start menu. Cut yourself a slice of cake and relive the memories.

One of the first Windows 10 features we learned about was the return of the Start menu, which is sort of funny, since the concept of the Start menu is over two decades old. Microsoft tried to replace it with the Start screen in Windows 8, and you only have to look at the adoption numbers to see how most consumers and businesses felt about it.

The Start menu has changed a lot over the years, but there are a handful of common elements that have made it all the way from Windows 95 to Windows 10. We fired up some virtual machines and traveled back in time to before there was a Start menu to track its evolution from the mid ’90s to now.

Ars Technica

For some, it might be interesting to see its humble origins in Windows 95 while for others, it will be a trip down memory lane. I remember each version distinctly even though I hardly used Windows Vista or Windows 8. I also remember Windows 3.1 before the Start menu made its appearance, although it was already obsolete by the time I was old enough to really use a computer.

Regardless of which camp you’re in, I can recommend taking a look through the screenshots and explanations. It’s definitely an interesting bit of computing history! You can find the article here: https://arstechnica.com/gadgets/2015/08/the-windows-start-menu-saga-from-1993-to-today

AI-generated image of a 1980s woman using a computer

A few days ago, I ran into a blog post with the author’s thoughts on why retro computing is becoming so popular amongst computer enthusiasts. Essentially, he waxes poetic about how computers used to have a fairly high entry barrier despite their relative simplicity when compared to today’s technology. This barrier made for a sort of exclusive club that anyone who enjoyed computers was automatically a member of.

Modern smartphones, tablets, laptops and even desktops have become so easy to use and their ubiquity means that it’s rare to meet anyone who has never used a computer in any form. By extension, that “exclusive club” no longer exists. However, the rise of retro computing has, to an extent, recreated this nostalgic club.

Beyond that, he presents some points that I can fully relate to:

There’s a lot of discussion about why retro computing has become so popular and why – or even if – things were more fun in the golden era of home computers. Those of us who subscribe to the notion that things were indeed better before usually give a few standard reasons as to why, all of them true in their own right:

Old hardware was simpler. This meant that a single person could learn all, or at least most, of its features by heart.

Old hardware was limited. Slow processors, low resolutions and cheap sound chips impose restrictions that are fun to overcome with creative problem solving.

Old computers were offline. No attention economy, no SaaS subscription models. You could learn a piece of software and keep using it for as long as a decade without experiencing any major overhauls.

The Internet was mostly text-based. Things were comparably snappy and focused on human-to-human communication rather than passive content consumption and bloated advertising.

[…]

datagubbe

I have to admit that I was also bitten by the bug and bought myself a PowerMac G3 Blue and White that I recently worked on. I already owned an old blue G3 iMac from around 2000 that I bought about ten years ago. Both of them are fun to use because of the first three points above. It’s crazy to think that such slow, limited computers were state-of-the-art, even though I lived through the era and remember it very clearly.

I can recommend reading through the original article here: https://www.datagubbe.se/aficion. It’s good food for thought for anyone who is interested in retro computing.

Since Angular’s new Signals API became stable with the release of version 17, I have been a little confused about when to use them versus standard class variables in components and services. To be honest, I just hadn’t gotten around to reading the documentation properly and had therefore only seen them on the periphery of my work. That changed yesterday, though, when I thought I might have a good use case for them and so I finally read the documentation.

However, they weren’t exactly what I was expecting. In my naïve understanding of what they were, I thought they would basically be a drop-in replacement for RxJS Observables. While you can certainly use them instead of Observables in some cases, the more striking thing for me was that they seem to be more oriented towards replacing class variables in components and services.

So, I asked Microsoft Copilot about when to use class variables vs Signals. This was part of its answer:

When to Use Angular Signals

  • Automatic Reactivity: Signals are ideal for state that directly feeds into your component’s templates. When state changes, signals automatically update any computed values or dependent views without having to hook into Angular’s change detection manually. This means less boilerplate and a more intuitive reactivity model. Example: When you have a counter or any state where you want the UI to instantly reflect the latest value, a signal is the natural choice.
  • Computed and Derived State: If your component has derived values that depend on one or more pieces of state, signals let you define computed signals that recalculate as soon as their dependencies change. This pattern reduces the need for manual subscriptions or extra logic to update dependent data. Example: In a shopping cart, recalculating the total price automatically based on item signals makes the code both elegant and error-free.
  • Simplifying Change Detection: Signals reduce the overhead of Angular’s change detection mechanism by providing fine-grained reactivity. Rather than re-evaluating large portions of your component tree, only those parts bound to a signal update when the signal changes. This is particularly useful in performance-sensitive applications.

When to Use Normal Class Variables

  • Static or Low-Churn Data: If the data is either static or not tightly bound to the template (i.e., it doesn’t need to trigger an automatic view update), normal class variables are sufficient. They work well for values that don’t change frequently, or when you’re handling state using other reactive libraries like RxJS for asynchronous operations.
  • Legacy Code and Explicit Control: In applications already using class variables—with change detection either running manually or via zone.js—it might not be necessary to refactor everything over to signals. Class variables can give you the explicit control needed in scenarios where you manage state changes imperatively.
  • Complex Asynchronous Streams: For handling advanced asynchronous workflows (say, with operators like map, filter, or complex subscription logic), Observables (often stored as class variables or service members) might be a better fit than signals. Signals are synchronous by nature, so if your state management involves a lot of asynchronous behavior, you may continue to lean on established patterns with normal variables and RxJS.

[…]

Summary

  • Angular Signals: Best used when you need fine-grained, automatic reactivity for state that’s frequently updated and directly impacts the UI. They shine in contexts where derived or computed state is beneficial, simplifying the logic and reducing the need for manual change detection.
  • Normal Class Variables: Ideal for static or infrequently updated state, or when you need explicit control over change detection, or are already integrated with other reactive patterns (such as full RxJS flows for handling asynchronous data).

Ultimately, the decision hinges on the nature and complexity of your state management needs. If you’re building a component where reactivity is a priority, signals can streamline your code, but for scenarios that call for explicit control or complex asynchronous orchestration, traditional variables (often in combination with RxJS Observables) remain robust and reliable.

Microsoft Copilot

Say what you will about AI, but this does seem to make a lot of sense to me. I will, of course, have to try it out in my daily development and run some tests on our frontend to see whether I can measure any performance gains, since we deal with large quantities of data and data manipulation in our components and services.

Beyond the explanation, Copilot also included some code examples for components and services. Having them in context is nice since the code in Angular’s documentation is isolated and provides no context whatsoever.

In a component:

import { Component, signal } from '@angular/core';

@Component({
  selector: 'app-counter',
  template: `
    <div>Count: {{ count() }}</div>
    <button (click)="increment()">Increment</button>
  `
})
export class CounterComponent {
  count = signal(0);

  increment() {
    this.count.update((value) => value + 1);
  }
}

In a service:

import { Injectable, signal } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class CartService {
  // A signal to hold the shopping cart items
  items = signal<string[]>([]);

  addItem(item: string) {
    this.items.update((currentItems) => [...currentItems, item]);
  }
}
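
Copilot’s point about derived state (the shopping-cart total) isn’t covered by its examples, so here is a minimal sketch of my own showing how a computed signal could handle it. The priced items and the PricedCartService name are hypothetical additions, not part of Copilot’s answer:

import { Injectable, signal, computed } from '@angular/core';

// Hypothetical cart item shape with a price, used only for this sketch
interface CartItem {
  name: string;
  price: number;
}

@Injectable({ providedIn: 'root' })
export class PricedCartService {
  // Source signal holding the cart items
  items = signal<CartItem[]>([]);

  // Derived signal: recalculated automatically whenever `items` changes
  total = computed(() =>
    this.items().reduce((sum, item) => sum + item.price, 0)
  );

  addItem(item: CartItem) {
    this.items.update((currentItems) => [...currentItems, item]);
  }
}

In a template, {{ total() }} would then update automatically whenever an item is added, with no manual subscription or change-detection hook.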

I hope they will bring the promised performance boost when manipulating data, but testing will certainly be necessary. I will also have to play around with them to see in which contexts they work best.
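
Related to Copilot’s point about asynchronous streams: signals and RxJS aren’t mutually exclusive. Angular’s rxjs-interop package provides toSignal, which bridges an Observable into a signal. Below is a rough sketch of how that could look; the component, the /api/products endpoint and the assumption that HttpClient is provided are all mine, not part of Copilot’s answer:

import { Component, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { toSignal } from '@angular/core/rxjs-interop';

@Component({
  selector: 'app-product-list',
  template: `<div>Products loaded: {{ products().length }}</div>`
})
export class ProductListComponent {
  private http = inject(HttpClient);

  // toSignal subscribes to the Observable and exposes its latest value as a signal.
  // '/api/products' is a hypothetical endpoint used only for illustration.
  products = toSignal(this.http.get<string[]>('/api/products'), {
    initialValue: [] as string[]
  });
}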

I’ve included most of Copilot’s answer in this post, but I still recommend taking a look at the original answer here: https://copilot.microsoft.com/shares/mYg3gptgbpf4sZ8KX8uUu

MacLynx

Every once in a while, you run into a project that makes you scratch your head. You have to wonder why someone would work on it, and yet at the same time you have to admire the fact that they are working on it and are so passionate about it. That’s how I feel about the project MacLynx.

A new post that recently appeared on the blog Old Vintage Computing Research discusses this odd, text-based browser project for the classic Mac OS that is still seeing active development. The author even calls it a “remarkably narrow niche,” which is probably an understatement.

In any case, I love that people are working on things that they feel passionate about no matter how “niche.” Here is an excerpt from the blog post announcing the release of beta 6:

Time for another MacLynx save point in its slow, measured evolution to become your best choice within the remarkably narrow niche of “classic MacOS text browsers.” Refer to prior articles for more of the history, but MacLynx is a throwback port of the venerable Lynx 2.7.1 to the classic Mac OS last updated in 1997 which I picked up again in 2020. Rather than try to replicate its patches against a more current Lynx which may not even build, I’ve been improving its interface and Mac integration along with the browser core, incorporating later code and patching the old stuff.

The biggest change in beta 6 is bringing it back to the Power Macintosh with a native PowerPC build, shown here running on my 1.8GHz Power Mac G4 MDD. This is built with a much later version of CodeWarrior (Pro 7.1), the same release used for Classilla and generating better optimized code than the older fat binary, and was an opportunity to simultaneously wring out various incompatibilities. Before the 68000 users howl, the 68K build is still supported!

However, beta 6 is not a fat binary — the two builds are intentionally separate. One reason is so I can use a later CodeWarrior for better code that didn’t have to support 68K, but the main one is to consider different code on Power Macs which may be expensive or infeasible on 68K Macs. The primary use case for this — which may occur as soon as the next beta — is adding a built-in vendored copy of Crypto Ancienne for onboard TLS without a proxy. On all but upper-tier 68040s, setting up the TLS connection takes longer than many servers will wait, but even the lowliest Performa 6100 with a barrel-bottom 60MHz 601 can do so reasonably quickly.

Old Vintage Computing Research

The rest of the blog post goes into much more detail than just the first couple of paragraphs I quoted here. There are also screenshots he posted of the browser and its development, one of which I, um, borrowed above. I can recommend taking a look at it for anyone interested in odd but nonetheless interesting projects.

AI-generated image of a robot lost in a maze

I recently stumbled upon an article at Ars Technica about Cloudflare turning AI against itself. I thought it was a very interesting strategy in the battle to try to protect creative content against AI training models.

On Wednesday, web infrastructure provider Cloudflare announced a new feature called “AI Labyrinth” that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.

[…]

Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven). Cloudflare creates this content using its Workers AI service, a commercial platform that runs AI tasks.

Ars Technica

The article itself is based on a blog post by Cloudflare which you can find here.

Of course, not everyone cares about whether their websites are scraped by AI but there are plenty of people who do. Large media companies like the New York Times are even going so far as to sue companies like OpenAI for using their intellectual property without payment or permission.

I think it’s an interesting approach to solving a problem that is unsurprisingly getting more and more relevant as more models are trained on real data found on the internet. That said, I don’t bother opting into such things for my blogs because I figure that if I publish it openly on the internet, it’s fair game. I understand why someone would be upset if it managed to get through any paywalls though.


I just discovered that, after years of development, Express.js 5 has finally been released as stable. It’s been in development for so long that I’ve forgotten how long it’s been. The reason this is newsworthy for me though is that it used to be my go-to server for Node development. However, because development seemed to have stalled years ago, I switched over to Fastify not only for my personal projects, but also for my professional projects.

The catalyst was support for HTTP/2. About five years ago, I had to integrate HTTP/2 support into our backend because Google Cloud Run only allows file uploads over 32 MB if your service uses HTTP/2. Otherwise, it’s restricted. Since Express.js didn’t support HTTP/2 and the recommended way of adding it with the deprecated spdy package doesn’t work with modern versions of Node, we had to move away from Express.js.
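
For comparison, Fastify exposes HTTP/2 support directly through its factory options. This is only a rough sketch from memory rather than our actual setup, with the certificate paths as placeholders:

import Fastify from 'fastify';
import fs from 'fs';

// Enabling HTTP/2 in Fastify is a flag on the factory function;
// pairing it with the https option gives you HTTP/2 over TLS.
const fastify = Fastify({
  http2: true,
  https: {
    key: fs.readFileSync('path/to/your/private-key.pem'),
    cert: fs.readFileSync('path/to/your/certificate.pem')
  }
});

fastify.get('/', async () => 'Hello, HTTP/2!');

await fastify.listen({ port: 3000 });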

I haven’t been able to find out whether Express.js 5 actually supports HTTP/2, but considering that there doesn’t seem to be any documentation on it, I’d wager it doesn’t. I did, however, ask AI whether you could integrate it with Node’s http2 module, and this is the code it suggested:

import express from 'express';
import http2 from 'http2';
import fs from 'fs';

const app = express();

// Example route
app.get('/', (req, res) => {
  res.send('Hello, HTTP/2!');
});

// Load SSL/TLS certificates for HTTP/2
const options = {
  key: fs.readFileSync('path/to/your/private-key.pem'),
  cert: fs.readFileSync('path/to/your/certificate.pem')
};

// Create an HTTP/2 server
const server = http2.createSecureServer(options, app);

// Start the server
server.listen(3000, () => {
  console.log('HTTP/2 server running on https://localhost:3000');
});

I haven’t tested the code, so I don’t know whether it works, but if it does, it would be an even better approach than Fastify’s in my opinion, as it would mean the Express.js project wouldn’t have to reinvent the wheel and could just piggyback on Node’s standard library. That would make a lot of sense.

If anyone is interested in trying it out, definitely let me know what the results are. I may test it in the future and if I do, I’ll write another post here about it.

Luca Bramè at LibreNews writes about why he would recommend against using the Brave Browser despite it being known for protecting its users’ privacy:

If you are keen on personal privacy, you might have come across Brave Browser. Brave is a Chromium-based browser that promises to deliver privacy with built-in ad-blocking and content-blocking protection. It also offers several quality-of-life features and services, like a VPN and Tor access. I mean, it’s even listed on the reputable PrivacyTools website. Why am I telling you to steer clear of this browser, then?

[…]

But yeah, if you are a big fan of AI and crypto, and are okay with having advertisements in the user interface out of the box, are okay with past attempts to steal money from websites and collect donations towards people who wouldn’t necessarily even receive it, plus you can put up with occasional privacy mistakes… use Brave!

Luca Bramè at LibreNews

I’ve chosen to include the first and the last paragraphs of the article because they do a great job summarizing it. In essence, Brave might be known for being a browser that protects its users’ privacy, but some of the practices the company behind it (and its founder and CEO) have engaged in are questionable at best. At worst, they harm user privacy rather than help it.

I won’t go into all the details of each controversy because Luca does an excellent job with that. I can highly recommend that you read the full article for all of the juicy details. What I will say, though, is that I agree with him fully. I experimented with Brave for a while but ultimately decided to give it up because of all the shady practices. They just didn’t sit right with me.

Browsers are such a fundamental piece of software in our modern lives and even more so for me as a web developer. I need one I can trust, and Brave is most certainly not that one.

Here’s the link to the full article: https://thelibre.news/no-really-dont-use-brave/

It was almost a year ago that I introduced the section on my blog that I’ve called “Notes.” It is the part of the blog where I write small posts about various things I’ve found on the internet or thoughts I’ve had. In contrast to the rest of the blog, the posts are fairly short and frequently include block quotes from other sources.

In any case, the page has used the same design as all of the other categories on the site: a list of article titles, summaries and images. I’ve now changed that to use a traditional blog-style format. That means you can now read the Notes section without having to click on each individual post.

This makes more sense for this part of the website because Notes are designed to be quick and easily readable. The traditional blog design makes less sense for the rest of the website where the articles tend to be longer and include more images.

You can check out the new design on the Notes page.

AI-generated image of King Arthur riding a horse above a medieval manuscript

It’s incredible how technology can help us rediscover documents we thought were lost to time. This time, researchers found a lost manuscript about the tales of King Arthur and were able to fully restore a digital version without harming the original. That is especially impressive given that the manuscript had been reworked into the cover of a book, which required the researchers to find a way to scan the text trapped in the binding.

The BBC reports on it:

An intriguing sequel to the tale of Merlin has sat unseen within the bindings of an Elizabethan deeds register for nearly 400 years. Researchers have finally been able to reveal it with cutting-edge techniques.

It is the only surviving fragment of a lost medieval manuscript telling the tale of Merlin and the early heroic years of King Arthur’s court.

[…]

For over 400 years, this fragile remnant of a celebrated medieval story lay undisturbed and unnoticed, repurposed as a book cover by Elizabethans to help protect an archival register of property deeds.

Now, the 700-year-old fragment of Suite Vulgate du Merlin – an Old French manuscript so rare there are less than 40 surviving copies in the world – has been discovered by an archivist in Cambridge University Library, folded and stitched into the binding of the 16th-Century register.

Using groundbreaking new technology, researchers at the library were able to digitally capture the most inaccessible parts of the fragile parchment without unfolding or unstitching it. This preserved the manuscript in situ and avoided irreparable damage – while simultaneously allowing the heavily faded fragment to be virtually unfolded, digitally enhanced and read for the first time in centuries.

BBC

And in case you were wondering what the sequel is about, there is a summary of it from the same article:

In it, the magician becomes a blind harpist who later vanishes into thin air. He will then reappear as a balding child who issues edicts to King Arthur wearing no underwear.

The shape-shifting Merlin – whose powers apparently stem from being the son of a woman impregnated by the devil – asks to bear Arthur’s standard (a flag bearing his coat of arms) on the battlefield. The king agrees – a good decision it turns out – for Merlin is destined to turn up with a handy secret weapon: a magic, fire-breathing dragon. 

BBC
