Earlier this month, a report emerged that Denmark's Ministry of Digital Affairs would shift away from using Windows and Microsoft Office in favor of Linux and LibreOffice. Now it appears the ministry will only move away from Office while continuing to use Windows.
Politiken, which reported on the situation, has amended its original piece, as spotted by PC Gamer. Denmark's Ministry of Digital Affairs will migrate from Microsoft Office to LibreOffice gradually over the coming months.
For anyone following tech news, this won’t be new. As a long-time Apple user, I do, however, have a few things to say about it.
My feelings are mixed. I'm the type of person who needs order. Call me slightly autistic or OCD, but I've thoroughly enjoyed the fact that Apple has mostly been consistent with its versioning when it comes to its software. I do wish they had stopped with the Mac OS X 10.x nonsense sooner and moved on to Mac OS 11, but at least it was still in sequence.
Contrast that to Microsoft’s chaotic and nonsensical versioning system for Windows. I realize that, internally, Windows NT is still versioned sequentially, but you don’t see that anywhere unless you know where to look.
As such, I am a bit dismayed that Apple made such a huge leap in version numbers.
That said, I understand why. It makes absolute sense to unify the versions across their multiple platforms so that both consumers and developers know which feature set to expect. Since Apple already indicates Mac models by the year they were released, this makes sense to do on the software side as well. Now, they just need to carry this over to their iPads, iPhones, Apple Watches, etc.
Ubuntu has been using a YEAR.MONTH versioning scheme for its OS releases from the beginning — 24.04, for example, denotes the April 2024 release. This has never bothered me because it is logical and sequential.
It will take me a little bit of time to get used to Apple’s new versioning, but as long as they stay consistent and avoid Microsoft’s versioning follies, I think it’s for the better. In the end, it’s just a number that indicates a specific release of a piece of software, and version numbers don’t really matter. They are just labels to identify a specific piece of software.
I don’t know how many times I’ve advocated for people to finally drop the “learnings” crap. I’ve hated it since day one, but have had to listen to it for years. I just wish I had had the idea to create a website like this. Someone else did it, though, and that makes me happy.
Publishers face an existential threat in the AI era and need to take action to make sure they are fairly compensated for their content, Cloudflare CEO Matthew Prince told Axios at an event in Cannes on Thursday.
Why it matters: Search traffic referrals have plummeted as people increasingly rely on AI summaries to answer their queries, forcing many publishers to reevaluate their business models.
This is something I recently wrote about on my personal blog. If I had to rely on any of my blogs for income, I would be panicking about now. My biggest fear is that AI and the handful of companies that are working on it are going to irreversibly change the internet so that most people stay in their walled gardens. As someone who has been a web developer since the beginning of the internet, that worries me.
Earlier this week, Meta officially flipped the switch on in-app advertising for WhatsApp users worldwide, marking the first time ads have appeared inside the messaging platform. But if you’re in the European Union, there’s now an important update: the rollout won’t be happening for you… yet.
Europe’s privacy guardrails hold yet again
In comments to reporters today (via Politico), Ireland’s Data Protection Commission (DPC) said WhatsApp has informed them that the new ad model won’t go live in the EU until next year at the earliest. Previously, Meta had stated that they would be “rolling this out slowly over the next several months”, with no mention of the European rollout timeline.
[…]
To nobody’s surprise, that cross-platform data-sharing element in particular raised immediate concerns from European privacy advocates and regulators.
I generally try to avoid Meta’s products because of their abysmal attitude towards user privacy, so I won’t be terribly affected by it when it does come to the EU. I only use WhatsApp for the two or three people I know who don’t have anything else. Otherwise, it’s Apple Messages or Signal for me.
This year at Google I/O 2025, a new mode of search was announced: AI Mode. The idea behind it is simple: Google is going to add an AI-generated answer to search queries at the top of the results page so that users don’t have to click through websites to try to find the answer or information they are looking for.
I am very torn on this. There are both positives and negatives to this approach and I can see it from both perspectives.
Positives
We’ll start with the positive aspects. This is actually a great feature for users. It should, in theory, save them time so that they can immediately get what they are looking for without having to manually click on links, close cookie banners, close newsletter modals, close chatbots and finally comb through the ad-infested content of a website just to realize the information they want isn’t there and they have to repeat the process on the next website.
The amount of enshittification that has occurred on so many websites in the name of marketing or “helping the user” is astounding and leads to a terrible user experience. That’s not even mentioning all of the keyword-driven SEO content written purely so that a website ranks in the search results without providing much real information.
In theory, AI answers should enable the user to skip all of this. The AI will have done the job of combing through websites’ contents and will provide a nice, neat summary of the information you’re looking for. It’s a win for the user — in theory, if it works properly.
Negatives
As great as all that sounds, there is a nefarious side to it as well. We’ll start with the fact that AI’s reliability is currently abysmal. It frequently hallucinates information that is flat out wrong and could potentially be harmful to the user.
Imagine if the user searches for the temperature considered to be a high fever for a baby. Instead of visiting an accredited website such as the UK’s NHS, which says a high fever is 38°C, AI tells them a high fever is 40°C. That could have fatal consequences.
In essence, as AI is now, users have to double-check the answers it provides, which defeats the entire purpose of it. They might as well just click through the websites and save themselves the extra step of reading the AI summary.
Part of the problem is that AI as it stands now with its large language models (LLMs) can only regurgitate what its models have been trained on. As we all know, not every website on the internet is reliable or accurate since anyone can write about anything whether they are an expert or not.
While not all AI hallucinations are a result of bad training data, there is so much inaccurate data on the internet that the models are bound to be full of it. This is unavoidable and makes the reliability of its summaries questionable at best.
That isn’t the only issue at play either. As a person who keeps multiple blogs, it is likely that I, like all other website owners, will see a significant drop in traffic. I write on my blogs for fun, not for profit, so the direct impact on me will be minimal. The problem is really for websites that rely on traffic to earn money. News organizations and commercial blogs will likely be the most highly impacted by AI summaries since they generally rely on ad revenue or subscriptions — both of which require users to visit the organization’s website for it to earn money.
This leads to a break in the current paradigm of how the internet works. To simplify it: a user searches for something and visits a website. Both the search engine and the website have now earned ad revenue. It’s a win-win. AI summaries have the potential to disrupt this as the user will stay on the search engine’s website and never visit the website with the content. The search engine therefore earns all the profit despite having used the other website’s content to train its models.
This is not only not fair to the producer of the content, it threatens the very production of content. If organizations can no longer earn a profit from producing content, they will go out of business and there won’t be any new content. This is a lose-lose situation for both the organizations and the search engines.
It seems awfully short-sighted on the part of the search engines to kill off the very content that they rely on to train their models. In the end, everyone loses.
That isn’t even to mention the psychological impact on people like me who keep websites for fun. Essentially, I write free content for AI bots to train on. It’s free labor that these companies are going to generate revenue from. I don’t get paid, but I put in the work and they reap the profit. As you can imagine, I resent that.
Conclusion
As you may be able to tell, the negatives still far outweigh the positives. While I am torn on it in that I think it would be wonderful to use AI summaries as a user, the unreliability and the potential impact on content creators are too great an issue to simply ignore. I can’t, in good conscience, use them exclusively.
That said, I’ve found AI can often get you pointed in the right direction. That is especially true if the information you’re looking for is obscure or you aren’t versed enough in the subject matter to formulate a decent query. I call that a hybrid approach as I use AI summaries to refine my query, then, once I am confident that I am heading in the right direction, I start looking through websites to verify the answer given by AI. Of course, that method is really only tenable for queries where you aren’t entirely sure of how to phrase what you’re looking for. It’s too complex and unnecessary for simple searches.
I’m neither a doomsayer nor an AI-enthusiast. It’s just another tool in the toolbox and you have to figure out how to use it best for your purposes. I can certainly see the potential AI has to benefit users, but we have to be wary about its reliability and impact and enjoy it with caution.
Subscribers to Adobe’s multi-app subscription plan, Creative Cloud All Apps, will be charged more starting on June 17 to accommodate new generative AI features.
Adobe’s announcement, spotted by MakeUseOf, says the change will affect North American subscribers to the Creative Cloud All Apps plan, which Adobe is renaming Creative Cloud Pro. Starting on June 17, Adobe will automatically renew Creative Cloud All Apps subscribers into the Creative Cloud Pro subscription, which will be $70 per month for individuals who commit to an annual plan, up from $60 for Creative Cloud All Apps. Annual plans for students and teachers plans are moving from $35/month to $40/month, and annual teams pricing will go from $90/month to $100/month. Monthly (non-annual) subscriptions are also increasing, from $90 to $105.
Further, in an apparent attempt to push generative AI users to more expensive subscriptions, as of June 17, Adobe will give new single-app subscribers just 25 generative AI credits instead of the current 500.
The current trend of general AI enshittification of services and software is really astounding. While I do like AI and I do use it for some things, I really dislike the fact that it is creeping into every single aspect of computing. It’s becoming increasingly difficult to avoid having it shoved down your throat whether it makes sense in a certain context or not.
It’s no surprise that companies are using it as an excuse to significantly increase the cost of their subscription services. I already massively dislike subscriptions and avoid them whenever I possibly can. This just reinforces that view. In this particular case, I use the products from Affinity for my digital media needs because they are a one-time purchase.
I’m not the only one with this view of it though:
“I don’t need more AI in my life”
By automatically forcing customers onto a more expensive plan, Adobe risks upsetting, disrupting, and confusing customers, even though the company is emailing customers about the change. The changes also give credence to fears that many customers had when Adobe started incorporating generative AI into its offerings.
[…]
Another Redditor, Bmorgan1983, commented, “This is dumb. I don’t need more AI in my life.”
Adobe isn’t the only creative software company seeking to use generative AI to pull more money from customers. Last year, Canva announced new AI capabilities that led to 300 percent price hikes for some business customers. After customer backlash, Canva partially relented in October by allowing teams users to add up to four users for free instead of charging them per user.
Like Canva, Adobe is trying to introduce new features and be part of the generative AI boom while maintaining interest from creative customers, who are often long-time users who may not be interested in many of these added capabilities. For now, it seems that the former is being prioritized.
Yesterday, I came across an article on Engadget where the author pointed out a very odd discrepancy in Apple’s use of fonts:
There are a lot of rumors flying around about a big iOS and macOS redesign coming this year, perhaps as a distraction to the continued issues around Apple Intelligence. And while I’m game for a fresh coat of paint across the software I use every single day, I have one plea while Apple’s at it: Please, for the love of god, make the Notes app render the letter “a” properly.
Let me back up a bit. Apple first introduced the San Francisco typeface with the first Apple Watch in 2015; a few years later it became the default on basically every device Apple sells. The text you see in Messages, Apple Music, Maps and many other system apps are all different San Francisco fonts, and for the most part the multiple variations all feel consistent and cohesive.
But, at some point in the last seven or eight years I noticed something odd in the Apple Notes app. The font appears the same as the other San Francisco fonts, but something just felt “off.” It took forever before I put my finger on it: the lowercase “a” renders differently in the Notes app than it does anywhere else across the entire system.
You see, the Notes app uses a “single storey a,” the sort of “a” that most people use when writing by hand. That’s the only first-party app, as far as I can tell, where you’ll find a single-storey a. The rest of the time, it uses the double-storey a (just as you’ll see on this website and almost everywhere else a lowercase a is used these days outside of handwriting).
I had never noticed it until I read the article, but it definitely is that way. I just tried it out and took a screenshot:
The different a’s
You can even see that the title bar of the Notes app has the standard double-storey a whereas the note itself does not. Not pictured here is that the rest of the fonts used in these apps appear to look exactly the same. The article has a much better example of it side-by-side, but I don’t want to copy their image, so I’ll just recommend you take a look at it on Engadget.
This has never really bothered me because I never noticed it, but now it does. As the author pointed out, something always felt a little bit off with the font in Notes, but I hadn’t noticed what it was until now. Maybe Apple will fix this? Or is it a feature? Presumably so, if they went through the effort of having a separate font for Notes.
Both 9to5Mac and The Verge report on a new bill being proposed by U.S. Representative Kat Cammack that would force companies to allow third-party app stores:
A Florida congresswoman has introduced a new bill targeting Apple, aiming to boost competition and expand consumer choice by mandating third-party marketplaces like the EU.
U.S. Representative Kat Cammack (R-FL) has introduced the App Store Freedom Act (via The Verge), a bill aimed at increasing competition and consumer choice in the mobile app marketplace. The legislation targets major app store operators—those with over 100 million U.S. users—including the App Store.
If enacted, the bill would require these companies to allow users to install third-party app stores and designate them as default, grant developers equal access to development tools, and permit the use of third-party payment systems. Additionally, it mandates the ability to remove or hide pre-installed apps—something Apple already does.
The bill also seeks to prevent app stores from forcing developers to use the company’s in-app payment systems, imposing pricing parity requirements, or punishing developers for distributing their apps elsewhere. Violations could lead to penalties from the Federal Trade Commission and civil fines up to $1 million per infraction.
First of all, it’s always weird reading about Kat Cammack in the news because I went to school with her. Secondly, I have mixed feelings about this. On the one hand, more competition might lead to better pricing if developers don’t have to compensate for Apple’s fees.
On the other hand, I have a feeling this might lead to a fracturing of the ecosystem which would lead to a much more disjointed user experience. Frankly, I don’t want to have to download and install multiple app stores just to access the apps that I want. Plus, if Apple forces apps to use its in-app payment system, it makes the purchasing experience for users much more consistent and convenient. The current system is particularly useful with subscriptions since you can see them all in one place and easily cancel them at any time.
That said, maybe the issue that really needs to be tackled here is Apple’s fee structure. If Apple were forced to give up the 15%/30% fees it takes from every in-app purchase, then it would solve both problems. Prices would fall, developers would earn more, and users would still get a consistent experience.
As far as being able to install apps with alternative app stores goes, I would prefer the ability to download them directly from the internet and install them much like you can for macOS, Android or any other operating system. Apple can still notarize the apps (like they do for macOS), but that would allow developers to offer apps directly to users without having to install multiple app stores.
You can read the article from 9to5Mac here and The Verge here.
A few days ago, an article was published on Ars Technica that, as a creative person, I thought I should share here. As the title implies, it is about protecting human creativity from the onslaught of AI.
Ironically, our present AI age has shone a bright spotlight on the immense value of human creativity as breakthroughs in technology threaten to undermine it. As tech giants rush to build newer AI models, their web crawlers vacuum up creative content, and those same models spew floods of synthetic media, risking drowning out the human creative spark in an ocean of pablum.
Given this trajectory, AI-generated content may soon exceed the entire corpus of historical human creative works, making the preservation of the human creative ecosystem not just an ethical concern but an urgent imperative. The alternative is nothing less than a gradual homogenization of our cultural landscape, where machine learning flattens the richness of human expression into a mediocre statistical average.
A limited resource
By ingesting billions of creations, chatbots learn to talk, and image synthesizers learn to draw. Along the way, the AI companies behind them treat our shared culture like an inexhaustible resource to be strip-mined, with little thought for the consequences.
But human creativity isn’t the product of an industrial process; it’s inherently throttled precisely because we are finite biological beings who draw inspiration from real lived experiences while balancing creativity with the necessities of life—sleep, emotional recovery, and limited lifespans. Creativity comes from making connections, and it takes energy, time, and insight for those connections to be meaningful. Until recently, a human brain was a prerequisite for making those kinds of connections, and there’s a reason why that is valuable.
I create a lot of content in various forms: writing, coding, music, woodworking, etc. While I’m not too concerned about AI casually being able to do woodworking any time soon like it can generate text, it’s still concerning how much of what it produces is filling the internet and being advertised as art.
I do have to admit that I use the occasional AI-generated image for the title images of some of my blog posts, but I label them clearly and don’t try to pass them off as my own work because, well, they’re not. They’re machine-generated, and I only use them if I can’t find something suitable on Unsplash or any of the other royalty-free image websites with images created by humans.
In any case, I can recommend reading the full article on Ars Technica here and giving the subject matter some thought as well.
I recently ran into an article over at Ars Technica that contains the history of the Windows Start menu complete with screenshots of each major release of Windows, including a few beta versions. The article is a decade old by now and was spurred on by the twentieth anniversary of the release of Windows 95 as well as the release of Windows 10, which reintroduced the Start menu after the Windows 8 debacle. Given its age, it doesn’t include Windows 11, but it is still interesting nonetheless for those interested in the evolution of Windows.
Windows 95 launched exactly 20 years ago today, and though the operating system is long dead many of its ideas still live on. In celebration of 95’s 20th birthday, we’re revisiting our piece on the evolution of the Windows Start menu. Cut yourself a slice of cake and relive the memories.
One of the first Windows 10 features we learned about was the return of the Start menu, which is sort of funny, since the concept of the Start menu is over two decades old. Microsoft tried to replace it with the Start screen in Windows 8, and you only have to look at the adoption numbers to see how most consumers and businesses felt about it.
The Start menu has changed a lot over the years, but there are a handful of common elements that have made it all the way from Windows 95 to Windows 10. We fired up some virtual machines and traveled back in time to before there was a Start menu to track its evolution from the mid ’90s to now.
For some, it might be interesting to see its humble origins in Windows 95 while for others, it will be a trip down memory lane. I remember each version distinctly even though I hardly used Windows Vista or Windows 8. I also remember Windows 3.1 before the Start menu made its appearance, although it was already obsolete by the time I was old enough to really use a computer.
Alex is a developer, a drummer and an amateur historian. He enjoys being on the stage in front of a large crowd, but also sitting in a room alone, programming something or reading a scary story.