Notes

Short-form thoughts, observations and musings
MacLynx

Every once in a while, you run into a project that makes you scratch your head. You have to wonder why someone would work on it, and yet at the same time you have to admire the fact that they are working on it and are so passionate about it. That’s how I feel about the project MacLynx.

A recent post on the blog Old Vintage Computing Research discusses this odd, text-based browser project for the classic Mac OS, which is still under active development. The author even calls it a “remarkably narrow niche,” which is probably an understatement.

In any case, I love that people are working on things that they feel passionate about no matter how “niche.” Here is an excerpt from the blog post announcing the release of beta 6:

Time for another MacLynx save point in its slow, measured evolution to become your best choice within the remarkably narrow niche of “classic MacOS text browsers.” Refer to prior articles for more of the history, but MacLynx is a throwback port of the venerable Lynx 2.7.1 to the classic Mac OS last updated in 1997 which I picked up again in 2020. Rather than try to replicate its patches against a more current Lynx which may not even build, I’ve been improving its interface and Mac integration along with the browser core, incorporating later code and patching the old stuff.

The biggest change in beta 6 is bringing it back to the Power Macintosh with a native PowerPC build, shown here running on my 1.8GHz Power Mac G4 MDD. This is built with a much later version of CodeWarrior (Pro 7.1), the same release used for Classilla and generating better optimized code than the older fat binary, and was an opportunity to simultaneously wring out various incompatibilities. Before the 68000 users howl, the 68K build is still supported!

However, beta 6 is not a fat binary — the two builds are intentionally separate. One reason is so I can use a later CodeWarrior for better code that didn’t have to support 68K, but the main one is to consider different code on Power Macs which may be expensive or infeasible on 68K Macs. The primary use case for this — which may occur as soon as the next beta — is adding a built-in vendored copy of Crypto Ancienne for onboard TLS without a proxy. On all but upper-tier 68040s, setting up the TLS connection takes longer than many servers will wait, but even the lowliest Performa 6100 with a barrel-bottom 60MHz 601 can do so reasonably quickly.

Old Vintage Computing Research

The rest of the blog post goes into much more detail than the few paragraphs I quoted here. It also includes screenshots of the browser and its development, one of which I, um, borrowed above. I recommend taking a look if you’re interested in odd but nonetheless fascinating projects.

AI-generated image of a robot lost in a maze

I recently stumbled upon an article at Ars Technica about Cloudflare turning AI against itself. I found it a very interesting strategy in the battle to protect creative content from being scraped to train AI models.

On Wednesday, web infrastructure provider Cloudflare announced a new feature called “AI Labyrinth” that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.

[…]

Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven). Cloudflare creates this content using its Workers AI service, a commercial platform that runs AI tasks.

Ars Technica

The article itself is based on a blog post by Cloudflare, which you can find here.

Of course, not everyone cares whether their websites are scraped by AI, but plenty of people do. Large media companies like the New York Times are even going so far as to sue companies like OpenAI for using their intellectual property without payment or permission.

I think it’s an interesting approach to a problem that is, unsurprisingly, becoming more and more relevant as more models are trained on real data found on the internet. That said, I don’t bother opting into such protections for my own blogs: if I publish something openly on the internet, I figure it’s fair game. I do understand why someone would be upset if a crawler managed to get around a paywall, though.


I just discovered that, after years of development, Express.js 5 has finally been released as stable. It’s been in the works for so long that I’ve forgotten exactly how long it’s been. The reason this is newsworthy for me, though, is that Express used to be my go-to server framework for Node development. Because its development seemed to have stalled years ago, I switched over to Fastify not only for my personal projects but also for my professional ones.

The catalyst was support for HTTP/2. About five years ago, I had to add HTTP/2 support to our backend because Google Cloud Run only allows file uploads over 32 MB if your service uses HTTP/2; otherwise, requests are capped at that size. Since Express.js didn’t support HTTP/2, and the recommended workaround of using the deprecated spdy package doesn’t work with modern versions of Node, we had to move away from Express.js.
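For comparison, here is roughly what that looks like in Fastify, which exposes HTTP/2 through a constructor option. This is just a minimal sketch rather than our actual setup, and the certificate paths are placeholders:

import Fastify from 'fastify';
import fs from 'fs';

// Enable HTTP/2 over TLS; allowHTTP1 keeps the server reachable
// for clients that can only speak HTTP/1.1.
const fastify = Fastify({
  http2: true,
  https: {
    allowHTTP1: true,
    key: fs.readFileSync('path/to/your/private-key.pem'),
    cert: fs.readFileSync('path/to/your/certificate.pem')
  }
});

fastify.get('/', async () => {
  return { hello: 'HTTP/2' };
});

// Note: environments like Cloud Run expect the server to listen on 0.0.0.0
fastify.listen({ port: 3000, host: '0.0.0.0' }, (err, address) => {
  if (err) throw err;
  console.log(`HTTP/2 server running at ${address}`);
});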

I haven’t been able to find out whether Express.js 5 actually supports HTTP/2, but considering that there doesn’t seem to be any documentation on it, I’d wager it doesn’t. I did, however, ask A.I. whether you could integrate it with Node’s built-in http2 module, and this is the code it suggested:

import express from 'express';
import http2 from 'http2';
import fs from 'fs';

const app = express();

// Example middleware
app.get('/', (req, res) => {
  res.send('Hello, HTTP/2!');
});

// Load SSL/TLS certificates for HTTP/2
const options = {
  key: fs.readFileSync('path/to/your/private-key.pem'),
  cert: fs.readFileSync('path/to/your/certificate.pem')
};

// Create an HTTP/2 server
const server = http2.createSecureServer(options, app);

// Start the server
server.listen(3000, () => {
  console.log('HTTP/2 server running on https://localhost:3000');
});

I haven’t tested the code, so I don’t know whether it works. If it does, though, I’d consider it an even better approach than Fastify’s, because the Express.js project wouldn’t have to reinvent the wheel and could simply piggyback on Node’s standard library. That would make a lot of sense.

If anyone is interested in trying it out, definitely let me know what the results are. I may test it in the future and if I do, I’ll write another post here about it.

Luca Bramè at LibreNews writes about why he would recommend against using the Brave Browser despite it being known for protecting its users’ privacy:

If you are keen on personal privacy, you might have come across Brave Browser. Brave is a Chromium-based browser that promises to deliver privacy with built-in ad-blocking and content-blocking protection. It also offers several quality-of-life features and services, like a VPN and Tor access. I mean, it’s even listed on the reputable PrivacyTools website. Why am I telling you to steer clear of this browser, then?

[…]

But yeah, if you are a big fan of AI and crypto, and are okay with having advertisements in the user interface out of the box, are okay with past attempts to steal money from websites and collect donations towards people who wouldn’t necessarily even receive it, plus you can put up with occasional privacy mistakes… use Brave!

Luca Bramè at LibreNews

I’ve chosen to include the first and the last paragraphs of the article because they do a great job summarizing it. In essence, Brave might be known for being a browser that protects its users’ privacy, but some of the practices the company behind it (and its founder and CEO) have engaged in are questionable at best. At worst, they harm user privacy rather than help it.

I won’t go into the details of each controversy because Luca does an excellent job of that. I highly recommend reading the full article for all of the juicy details. What I will say, though, is that I agree with him fully. I experimented with Brave for a while but ultimately gave it up because of all the shady practices; they just didn’t sit right with me.

Browsers are such a fundamental piece of software in our modern lives and even more so for me as a web developer. I need one I can trust, and Brave is most certainly not that one.

Here’s the link to the full article: https://thelibre.news/no-really-dont-use-brave/

It was almost a year ago that I introduced the section on my blog that I’ve called “Notes.” It is the part of the blog where I write small posts about various things I’ve found on the internet or thoughts I’ve had. In contrast to the rest of the blog, the posts are fairly short and frequently include block quotes from other sources.

In any case, the page has used the same design as all of the other categories on the site: a list of article titles, summaries and images. I’ve now changed that to a traditional blog-style format, which means you can read the Notes section without having to click into each individual post.

This makes more sense for this part of the website because Notes are designed to be quick and easily readable. The traditional blog design makes less sense for the rest of the website where the articles tend to be longer and include more images.

You can check out the new design on the Notes page.

AI-generated image of King Arthur riding a horse above a medieval manuscript

It’s incredible how technology can help us rediscover documents we thought were lost to time. This time, researchers found a lost manuscript containing tales of King Arthur and were able to fully restore a digital version without harming the original. That is especially impressive given that the manuscript had been reworked into the cover of a book, which required the researchers to find a way to scan text trapped inside the binding.

The BBC reports on it:

An intriguing sequel to the tale of Merlin has sat unseen within the bindings of an Elizabethan deeds register for nearly 400 years. Researchers have finally been able to reveal it with cutting-edge techniques.

It is the only surviving fragment of a lost medieval manuscript telling the tale of Merlin and the early heroic years of King Arthur’s court.

[…]

For over 400 years, this fragile remnant of a celebrated medieval story lay undisturbed and unnoticed, repurposed as a book cover by Elizabethans to help protect an archival register of property deeds.

Now, the 700-year-old fragment of Suite Vulgate du Merlin – an Old French manuscript so rare there are less than 40 surviving copies in the world – has been discovered by an archivist in Cambridge University Library, folded and stitched into the binding of the 16th-Century register.

Using groundbreaking new technology, researchers at the library were able to digitally capture the most inaccessible parts of the fragile parchment without unfolding or unstitching it. This preserved the manuscript in situ and avoided irreparable damage – while simultaneously allowing the heavily faded fragment to be virtually unfolded, digitally enhanced and read for the first time in centuries.

BBC

And in case you were wondering what the sequel is about, here is a summary from the same article:

In it, the magician becomes a blind harpist who later vanishes into thin air. He will then reappear as a balding child who issues edicts to King Arthur wearing no underwear.

The shape-shifting Merlin – whose powers apparently stem from being the son of a woman impregnated by the devil – asks to bear Arthur’s standard (a flag bearing his coat of arms) on the battlefield. The king agrees – a good decision it turns out – for Merlin is destined to turn up with a handy secret weapon: a magic, fire-breathing dragon. 

BBC

AI-generated image of servers in the cloud

Matt Burgess at Ars Technica:

There are early signs that some European companies and governments are souring on their use of American cloud services provided by the three so-called hyperscalers. Between them, Google Cloud, Microsoft Azure, and Amazon Web Services (AWS) host vast swathes of the Internet and keep thousands of businesses running. However, some organizations appear to be reconsidering their use of these companies’ cloud services—including servers, storage, and databases—citing uncertainties around privacy and data access fears under the Trump administration.

“There’s a huge appetite in Europe to de-risk or decouple the over-dependence on US tech companies, because there is a concern that they could be weaponized against European interests,” says Marietje Schaake, a nonresident fellow at Stanford’s Cyber Policy Center and a former decadelong member of the European Parliament.

Ars Technica

I’ve seen a lot of news like this recently from US-based media outlets. As someone living and working in Germany for a German company, I find it particularly interesting that I haven’t heard anything about this elsewhere, not even at my current position, where AWS is heavily used. There has been no discussion about it whatsoever.

That isn’t to say it isn’t happening. The article does cite a few examples, such as the Dutch government trying to move some of its services to European providers, but I have yet to see any difference in my own experience.

Moving clouds is a messy, expensive business if you are heavily invested in one of the major American providers. I suspect most companies would need a huge financial or political incentive to invest the time and money to do so. A few barbs traded by politicians aren’t going to be enough in most cases.

Mac OS X Snow Leopard

This is a post that will most certainly betray my age: I fondly remember the release of Mac OS X 10.6 Snow Leopard. The previous release, 10.5 Leopard, was a significant upgrade in visuals and features after 10.4 Tiger’s many years of service. Unfortunately, with large upgrades come large numbers of bugs. While I can’t remember Leopard being unstable, I do remember that Snow Leopard made the system feel much more stable and polished, which was exactly the point of the release: no new features, just more stability.

That said, 9to5Mac raises the question of whether another Snow Leopard-like release is what the Apple community needs:

The last few days have been very busy when it comes to Apple news. That’s because the company has confirmed that the new Siri experience has been delayed while sources suggest that the new features promised at last year’s WWDC won’t be ready any time soon. Given everything that’s going on at Apple recently, there’s one thing that could really help: another Snow Leopard.

9to5Mac

Frankly, I think we are definitely lacking a good stability release. As someone who relies on my Macs, iPhone and iPad every single day for work and play, it is crucial to me that the operating system and its accompanying software be as unproblematic and stable as possible.

Mozilla has recently faced a lot of blowback over several changes they’ve made to Firefox. They have introduced a Terms of Service and updated their privacy policy to include vague language about data collection. I won’t go into detail here, though, so if you don’t know what’s going on, this is a great summary:

Watch the video on YouTube

In reaction to this video, Ryan Sipes, a member of the Thunderbird team, recorded a video of his own to clarify the situation. It’s definitely worth watching if you are interested in what is going on:

Watch the video on YouTube

I tend to give Mozilla the benefit of the doubt in this situation. They were quick to react to the feedback they received, and since they are such a prominent name in the open source community, their every move will continue to be heavily scrutinized online, which I’m absolutely certain they are aware of.

In an effort to speed up TypeScript compilation, Microsoft has decided to rewrite the compiler as a native application. There are, of course, a number of languages they could have chosen for this endeavor, and here is why they chose Go:

Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it’s worth explaining a few of them.

By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we’re undertaking this more as a port that maintains the existing behavior and critical optimizations we’ve built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren’t particularly salient in our codebase. We don’t have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when “logical” times to run the GC will be. Go’s model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.

We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.

Acknowledging some weak spots, Go’s in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We’ve been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.

Why Go?

As the first comment in the discussion points out, it may seem strange that Microsoft didn’t choose C# over Go. At the same time, I suspect the decision comes down to the points they made about keeping a similar codebase structure and coding patterns for the sake of an easier, quicker port.

In my opinion, it’s interesting that they are choosing to rewrite the TypeScript compiler at all. I understand that they want it to be faster, which would be great for large frontend projects using TypeScript, but for backend Node applications the TypeScript compiler may soon become superfluous, since Node 23 includes native support for running TypeScript files.
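As a quick illustration (this assumes Node 23.6 or newer, where type stripping is enabled by default; on Node 22 it sits behind the --experimental-strip-types flag), you can run a TypeScript file directly without a build step:

// greet.ts — run with `node greet.ts`, no tsc or build step required
type User = { name: string };

function greet(user: User): string {
  return `Hello, ${user.name}!`;
}

console.log(greet({ name: 'TypeScript' }));

Keep in mind that this only strips erasable type syntax; TypeScript-only runtime features like enums or namespaces still require an extra flag or a traditional compile step.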

You can view the whole thread on GitHub: https://github.com/microsoft/typescript-go/discussions/411

It’s been almost 20 years since I started my first blog. The year was 2006 when I signed up for my first blog on WordPress.com. I had had a personal website since 1998, but I had mostly just added random stuff to it without any consistent structure or content, so it certainly couldn’t be considered a blog by any stretch of the imagination.

So if I don’t consider my personal website to be a blog, what exactly is a blog?

A blog is one of those things people just seem to recognize when they see it, but that wasn’t enough for Harvard Law School. In 2003, when blogs (or weblogs, as they were called at the time) were still a new concept, an article was posted to its website that attempted to define what exactly a blog was. While the article is 22 years old, I still think the definition stands.

Technically, what is a weblog?

Now on to the technical features and a definition only a mathematician could love. 

A weblog is a hierarchy of text, images, media objects and data, arranged chronologically, that can be viewed in an HTML browser. 

There’s a little more to say. The center of the hierarchy, in some sense, is a sequence of weblog “posts” — explained below — that forms the index of the weblog, that link to all the content in sequence.

What is a weblog post? 

A weblog post has three basic attributes: title, link and description. All are optional. Some weblogs only have descriptions. Others always have all three. On my own weblog, Scripting News, all items have descriptions, a few have titles, and most have links, some have several links. Generally, a title cannot contain markup, but the description can. 

Most weblog tools require titles. Manila is fairly unique in not requiring them. The tradeoff is simplicity vs flexibility. It’s simpler from a user interface standpoint to require the presence of all three basic attributes, but writers can find this limiting. 

If one of the basic attributes is optional it’s the link. In that case the title of the post is often linked to a permalink for the item (see below).

Most weblog posts are short, a paragraph or two. Some weblog tools provide for longer articles or stories, often by including a place for a summary in the form for a weblog post. If available, there should also be an option for only including the summary in the RSS feed for the weblog.

Dave Winer

The article goes into a lot more detail about what a blog actually is and what it consists of. I found this interesting because I have kept various blogs on different platforms for so long but never actually took the time to stop and think about what makes a blog a blog rather than just any other website.

As an interesting side note, the author of the original article, Dave Winer, still writes daily on Scripting News, the blog he mentions above.

Here is the link to the original article: https://archive.blogs.harvard.edu/whatmakesaweblogaweblog

What do you think a blog is? Do you agree with this article? Let me know in the comments below!

I somewhat recently ran into a screenshot of a rant Linus Torvalds went on about C++ and why he thinks C is the better programming language. I usually don’t care much for the drama, because I think each programming language has its purpose and there will always be developers who love it and developers who hate it. What I did find interesting about this particular rant, however, is that it not only came from a very well-respected developer with a deep understanding of low-level languages, but also that his arguments fly in the face of conventional, modern software architecture.

What I mean by that is that he argues against object-oriented programming, class inheritance and other core principles of most modern languages. I am, of course, aware that his rant is nearly twenty years old, but even then those were considered important principles of software architecture.

You can read his rant here:

Linus Torvalds on C++
