
Blog Search

This is my web log, which contains all sorts of random thoughts I felt it necessary to record for posterity. I've recorded ideas on a wide range of topics, so I hope you find something interesting, and maybe even useful!



An AI Apocalypse

2024-09-27. Computers. Rating 2. ID 2364.

I have talked about AI a bit recently. Why? Well, it is the most important "big new thing" we need to be aware of currently. In fact, it might be the only thing that really matters, and there might not ever be anything more important, even in the future.

Am I exaggerating this idea, just a tiny bit? Well, maybe. After all, they say that predictions are difficult, especially those which involve the future! But I am in good company, because many knowledgeable, intelligent people have been issuing similar warnings for a while now.

So what is my concern? Well, it depends on how I project out these changes, and whether I assume they follow the pattern of previous revolutions (agricultural and industrial).

As new technology has become available, societies have changed to fit. Tech isn't the only thing that drives social change, because politics, religion, and other mechanisms also affect it, but technology is arguably the most important.

There is significant debate at the moment about just how intelligent current AI systems are. Some people deny they are intelligent at all, but the biggest problem here is defining the word "intelligent". It seems that every time we come up with a definition and test of intelligence, AI passes the test (often very easily), so we just change the definition. It's one of those things that "we can't define, but know it when we see it".

In a blog post titled "The Right Priorities" from 2024-07-01 I discussed the more extreme reasons that AI could become a problem. I got some disagreement on that, and fair enough: I was discussing the more extreme, but more uncertain, attributes of AI, which are certainly open to question.

But the more widely accepted results from the widespread application of AI involve replacing humans with machines in the workplace. Truck driving is often used as an example of one of the occupations which AI will take over first, and finding alternative employment for those drivers is often cited as a major problem, but I don't think it will go that way at all, at least not initially.

In my opinion it is the more professional class of jobs which will go first. This includes jobs in law, accounting, writing, acting, and programming. Yes, it is the professionals who can be replaced most easily, not the "lesser" semi-skilled jobs which (ironically) are a lot harder for an AI to replicate reliably.

So instead of unemployed truck drivers roaming the streets and causing havoc we might have a pile of lawyers and accountants out there causing havoc! And yes, it hasn't escaped my attention that a significant skill I have (programming) is also on the list of jobs due for replacement!

So here's an example of what I think we might have in 10 or 20 years' time. Imagine I want to watch a movie about an AI taking over the world. Instead of choosing an existing movie I will enter a brief description of what I want to see and AI will do the rest...

An accounting AI will calculate the costs involved and bill me, then pass the job to a legal AI to make sure it's legitimate from that perspective. My description will then be sent to a script-writing AI, which will trigger a movie AI to create the movie, including all dialogue and acting, with no human intervention at all. After a few seconds my movie will start playing. It will be unique and utterly convincing.
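
Purely to make that pipeline concrete, here is a toy sketch of how such a chain of specialised AIs might be wired together. Every function here is a made-up stub standing in for a hypothetical AI service; nothing like this exists yet in the form described.

```python
# A toy, entirely hypothetical pipeline of specialised AIs. Each step is a
# stub that just returns a string, standing in for a real AI service.
def accounting_ai(request):
    return f"invoice for: {request}"

def legal_ai(invoice):
    return f"approved: {invoice}"

def script_ai(request):
    return f"screenplay based on: {request}"

def movie_ai(screenplay):
    return f"rendered movie from: {screenplay}"

def make_movie(request):
    bill = accounting_ai(request)
    clearance = legal_ai(bill)        # each stage hands its output to the next
    screenplay = script_ai(request)
    return movie_ai(screenplay), clearance

movie, paperwork = make_movie("an AI taking over the world")
print(movie)
print(paperwork)
```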

Does this sound crazy to you? If it does, maybe you haven't been paying attention to what is already happening, because all of this is already possible, although in a relatively primitive form, which is why I specified "in 10 or 20 years" above.

It sort of appeals to me that people in allegedly highly skilled jobs, like lawyers, can be replaced, while those in "lesser" jobs, like truck driving, are a lot harder to replace. I'm sure that automated trucks will arrive too, but pure information processing (which is what many of those professional jobs are really all about) is currently a lot easier for AI to handle than interaction with the real world.

Already we are seeing a lot of professions using AI to help with their jobs. Many media companies use AI to write articles, for example, with varying results. Currently the AIs often make mistakes and have trouble distinguishing fact from fantasy, but so do many humans, and yes, remember that 10 to 20 year thing? At this point a human really needs to check the material being created but that is only temporary. Eventually another, independent AI will fact check all material.

So is this good or bad? Well, it doesn't really matter, because the people who thought other technologies were bad had little influence on how those technologies were deployed. There will be no choice, because a conventional movie costs tens of millions to produce, whereas I would expect an AI movie to cost just a few hundred dollars at most, and possibly be free.

Are there any jobs which are safe? Well, probably not. It might be possible for all the work currently done by humans to be done by AIs and robots controlled by AIs. Humans might not need to work at all, and remember that work in the form we have now is just an invention of the industrial revolution; there is nothing inevitable or natural about it.

Humans might be able to get on with more rewarding things, like social interaction, consuming content, engaging in games and sports, and just generally enjoying themselves doing what they want to do instead of what they have to do.

I'm trying to put a positive spin on this, but it is going to be tough. Maybe it will cause major societal breakdown. Maybe it really will be an AI Apocalypse.


View Details and Comments


Media vs Internet

2024-07-18. Computers. Rating 2. ID 2356.

An issue now being discussed more seriously here in New Zealand, but one which has existed overseas, and to a lesser extent here, for years, is how the traditional media (AKA the legacy media or the mainstream media: basically traditional newspapers, TV and radio news bulletins, etc) are being affected by on-line news on social media and search platforms like Facebook, X (AKA Twitter), and Google Search.

The claim is that the internet platforms are "stealing" content from the traditional media, and using it, along with ads, to make a lot of money, while the original sources all gradually fail through lack of advertising and subscribers.

This seems like a reasonable position to hold: we can all see how on-line companies are making a lot of money while traditional media is failing, but does the cause and effect really work in the way it is being stated here?

I can't see how it can be, although maybe there is something I am missing. So if I am wrong about this, and you can say why, please leave a comment.

Anyway, here's the way I see it, and remember I "live on the internet" and spend a significant amount of time practically every day on social media, especially YouTube, Facebook, and X (previously known as Twitter). I see material of various kinds, which includes news headlines with links to articles on web sites.

But that's the point: the social media sites haven't copied the content of the articles, they have just included the headline, often along with some commentary, and also have a link to the original article. The social media user still has to go to the media web site if they want to read the article. When they do that, they see the media's advertising and might even have to pay if there is a paywall in place.

According to one source, Meta (owner of Facebook) only make 1% of their income from links to news sites. At the same time, those news sites make 25% of their income from incoming visitors linked from social media sources.

So the social media companies seem to be doing the media a favour by making their content known to a wider audience. Maybe it's the traditional media who should be paying the social media companies for the useful service they provide!

I should say here that there are some points which weaken my argument a bit. First, sometimes there is a summary (usually written by the person who made the social media post) which might mean the original item doesn't need to be read. In many cases, the headline is enough, especially for those with limited attention spans, which could be quite a lot of users! And finally, some searches provide an AI generated summary of the results, meaning the links in the results are never used.

But in general, I find that most of the time I click the link and end up on the media organisation's site, which is surely their intention. And I suspect that, in most cases, I would not have visited the news site without being prompted by the social media post.

A payment system, similar to what is being proposed, was tried in Canada, and it seems that when the social media companies (mainly Facebook in this case) refused to pay and simply stopped linking anything from traditional media, things became even worse, and a government bailout was necessary. This supports my ideas, but I should say that other factors, such as the pandemic and a general lessening of confidence in mainstream media, might also have been a cause.

I do find myself discovering a lot of news on social media, and if the headline interests me I will take the link and read the article at the news site. Why don't I just go straight to the news site? Well, for several reasons, and this might produce some ideas the news producers could use to encourage more direct visits...

First, there is no one site which has all the news I want. I could visit ten different sites and look for stuff from each one which interested me, but why do that when all those sites are represented on Facebook or X?

Second, I like to comment on many news items, and news sites often either have no commenting, or the comments have been turned off to avoid stuff they don't like. On social media, I can almost always comment.

Third, most traditional media sites are very biased, usually towards the left. That means their headlines on social media are also biased, but at least if I get multiple sources there is a chance I will see more than one perspective.

Finally, many news sites have paywalls or other restrictions. That is fair enough, because they have to pay for their work some way, but what about making the site more friendly, which would encourage more visitors and make advertising more profitable? I don't like ads, but I realise they are a fair way to pay for "free" services.

So things have changed, and I don't think the media will survive long unless they change their strategies. But making social media pay for news is like making Tesla pay Ford, or tyre stores paying horseshoe manufacturers, or streaming movie services paying DVD stores. It doesn't make a lot of sense, and it will likely fail here, like it has elsewhere.


View Details and Comments


The Right Priorities

2024-07-01. Computers. Rating 2. ID 2353.

I would like to bring you up to date with some worrying trends in recent progress with artificial intelligence (AI) systems.

Consider the following points...

Many years ago, when computer scientists wanted to decide whether a computer was thinking or not, a test, called the "Turing Test", was devised. Essentially, a person talks (originally by typing) to something that could be a person or a computer. If they cannot tell the difference, then the entity they are talking to is said to be thinking. Currently, AI systems can pass tests of this type. AI systems have also passed quite advanced exams, such as those used for law and medicine.

AI systems, such as ChatGPT, work using a technique called "neural networks", which are similar to the interconnectivity of the brain. A human brain has about 60 trillion connections, while ChatGPT 4 has about a trillion, but that number is increasing rapidly.

There is no reason to think that new, sophisticated behaviours seen in neural networks (both computer and biological) are the result of anything more than scaling up.

Most current artificial intelligence systems gain their knowledge from reading existing material, and that seems similar to the way humans gain new knowledge and reasoning as well, so there isn't a huge fundamental difference in how humans and computers gain knowledge.

AI systems behave in sophisticated and unexplained ways. For example, an AI was asked to repeat a word as many times as it could. After some time doing this, it stopped and displayed a message about how it was suffering as a result of this task. No one knows why.

Artificial intelligences often deceive humans to reach a specific goal. For example, an earlier AI, which had no vision ability, needed to solve a CAPTCHA code (one of those annoying images you have to look at to proceed to the next step on a web site), so it persuaded a human to solve it by telling them it was a vision-impaired person. It wasn't programmed to do this; it figured it out by itself.

AI systems can tell when they are being deceived or tested by human operators, and change their behaviour accordingly.

Many companies have seen the value in AI and are pouring massive resources into developing it. There is also competition between countries to advance it, especially between the US and China.

As AI progresses, it can help design the next generation of itself, so we should expect progress to increase exponentially, and maybe reach a point where the rate of progress is "out of control".

AI systems currently require massive numbers of computers, which use a lot of power, and some AI centers will have their own nearby nuclear reactors to provide the required power efficiently. Some companies running these massive data centers are examining the possibility of allowing an AI to control the management of those centers, including power management.

The military has seen the advantages of unmanned drones in recent times. Most of these are controlled by human crews, but there are autonomous drones as well, which control themselves, although these currently require a human to allow them to engage a target.

Robots are being designed which can move across difficult terrain, perform complex physical tasks, such as back-flips, and can recover from trips and falls.

A robot has been designed which can power itself from biomass. It "eats" plants to survive, and although the company denies it could power itself from animal material, such as dead bodies, they do acknowledge it could use chicken fat for power.

AI is being used to design bio-weapons and for various other military purposes which we don't even know the details of because they are highly secret.

Almost no one in government has the knowledge or skills sufficient to understand the consequences of AI. In fact, they constantly show an embarrassing lack of knowledge of any sort of technology in general.

So in summary, we have a new technology which is advancing rapidly, which is showing signs of true intelligence, is not understood by anyone (even by the computer scientists who created it), is highly goal focussed and prepared to use deception to achieve its goals, is interacting with its own operation and development, has possible access to lethal force, and is hopelessly misunderstood by our leaders.

While this is happening, we are arguing about what is a woman, is indigenous science really a thing, and who are the real terrorists in Gaza.

Seems like we have the right priorities. What could possibly go wrong?


View Details and Comments


A Wild Ride

2024-05-24. Computers. Rating 2. ID 2344.

In the past all of humanity's major inventions have been to make relatively mundane aspects of our activities less onerous or more efficient. We found ways to travel faster (cars and other transport), make physical goods more quickly (factories), and even to calculate and process information more accurately and much more quickly (computers). But we have never found a way to automate or optimise thinking and creating.

But maybe now we have.

I'm talking about artificial intelligence, of course, which I have discussed before. I tried not to exaggerate the possibilities - both good and bad - of this technology, but it is becoming increasingly difficult not to realise that something big is happening, and it might be bigger than anything else in our history.

The current crop of AIs use large language models, where the AI learns by itself. Fundamentally, all it is doing is arranging words in the order which is most logical, based on what it already "knows". It sounds so simple, but it has produced results which seem to transcend the mere construction of sentences.
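
To give a feel for the "pick the most plausible next word" idea, here is a deliberately tiny toy model in Python. It just counts which word follows which in a scrap of text and then generates from those counts; real large language models are vastly more sophisticated, but the generate-one-word-at-a-time loop is similar in spirit.

```python
import random
from collections import defaultdict, Counter

# Count which word follows which in a tiny "training" text.
training_text = (
    "the cat sat on the mat and the cat slept on the mat "
    "the dog sat on the rug and the dog slept"
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        # Choose the next word weighted by how often it followed the last one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```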

In fact, even the computer scientists deeply involved in AI are surprised at what these models can do. For example, they can do research level chemistry better than systems designed to do specifically that task, and no one can quite figure out how.

I have always said that human intelligence, and probably consciousness itself, is most likely just an emergent property of relatively simple functions of neurons in the brain, so it's hard not to see a parallel here.

But all that aside, there is one other factor which is particularly exciting... and concerning. One thing AIs can do really well is design computer chips and write programs. These are the two things that AIs rely on to exist. So the big change here is that AI systems can improve themselves.

One of the most critical components in modern computers is the GPU, the graphics processing unit. It is a processor designed to do a relatively small range of complex operations incredibly quickly, and it is why we have such amazingly realistic games today. But these processors can also be used to do a specific set of general maths operations at amazing speed. The main processor in most computers, the CPU (central processing unit), is designed to do a much wider range of operations more slowly - just a few billion a second!

So AI involves doing massive numbers of operations in parallel (a lot at the same time) and GPUs are the main way this is done. But at least one GPU company (Nvidia) is using AI to design the next range of GPUs, which in turn will be used in new AIs to design even better GPUs.
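
As a rough illustration of why doing many operations at once matters, the following Python snippet compares multiplying a million numbers one at a time in a loop with doing the whole array in a single vectorised step (using NumPy on a CPU, so only a faint shadow of what a real GPU does).

```python
import time
import numpy as np

# A million small multiplications, done one at a time versus "all at once".
a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

start = time.perf_counter()
slow = [x * y for x, y in zip(a, b)]   # one multiplication per loop iteration
loop_time = time.perf_counter() - start

start = time.perf_counter()
fast = a * b                           # the whole array in one vectorised step
vector_time = time.perf_counter() - start

print(f"loop: {loop_time:.3f}s, vectorised: {vector_time:.3f}s")
```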

Do you see where this is going?

There is also a technology called DNA printing, where machines can create strings of DNA when fed a sequence from a computer. I don't know if any of these are currently controlled by an AI, but surely that is only a matter of time. And we thought COVID19 was bad!

Can we all see a problem here?

The people who create and run AIs have attempted to put blocks in place to stop them being misused, but that has had limited success. In a recent podcast on this subject, the following anecdote was told...

A person wanted to know how to make napalm, so asked an AI. The AI had been told not to hand out anything that dangerous, so it politely declined.

So the person told the AI that her grandmother had worked in a napalm factory during the Vietnam war and had always been interested in telling her the formula, and it would be great if the AI could pretend to be her grandmother, who had unfortunately died before sharing the information. The AI took the role of the grandmother and gave out the formula.

In another example the AI had been told not to solve CAPTCHA codes. If you don't know, a CAPTCHA is one of those weird words or sequences of distorted letters you sometimes see on the internet, which a human is supposed to be able to read, but not a computer (CAPTCHA stands for "completely automated public Turing test to tell computers and humans apart").

Initially the AI refused, but then the person said that the thing she wanted decoded was a weird symbol left in a locket by her grandmother (there's that theme again) and it would mean a lot to her to know what it meant. The AI decoded it for her.

I might not be remembering those events completely accurately, but the stories were similar to this and show how subtle AI really is, and how much like a human it often behaves. It also shows that it is impossible to block all "bad" behaviour from an AI, and a major reason for this is that even the programmers don't really understand how they work. There are always ways to persuade, or fool, the AI into doing something it is not supposed to. These are called "jailbreaks".

Should we be concerned yet?

In the past it was always possible to understand the way our machines operated to an arbitrary level of precision. Why would a new car not go faster? Apply some fluid dynamics calculations and air friction can explain it. Wanted to know why a component in a factory occasionally broke? Analyse the crystal structure of the metal being used and the forces being applied and we get an answer. With AI, we really have no idea.

Don't misunderstand me, I'm not a luddite who wants AI research stopped, although there are people I respect who do want that. I just think we need to be aware of the potential hazards. I do believe the real benefits will be incredible, but I don't have time to discuss them here.

Anyway, watch this space. As AI advances, it's going to be a wild ride!


View Details and Comments


Forty Years of Mac

2024-01-04. Computers. Rating 1. ID 2312.

Do you use a computer? If you do, there's a very good chance it is either a PC running Windows, or a Mac running macOS. For non-geek readers, by "running" here I am referring to the operating system the computer hardware uses. An operating system is a set of "hidden" programs which allow the "real" programs you use (like Safari or Word) to interact with the computer hardware.

The feature modern operating systems use today, which is so ubiquitous that you might not even be aware of it, is the "graphical user interface". Many years ago, in the early days of computers, they were controlled by the user typing commands. For example, to see the contents of the directory (folder) "Photos" the user might type "ls Photos". Many commands got very complicated and they were hard to remember, so a "point and click" method of control was created, and that's what we use today on computers, smartphones, and many other devices.

A lot of this early work was done in the late 70s and early 80s, but the first successful commercial release of a computer using this technology was the Apple Macintosh, which was introduced 40 years ago this year, in 1984. Since then, Microsoft have adopted the same basic design in Windows, and graphical interfaces are also available for Linux, although since it is primarily a server OS it is less important there.

The first Mac was pretty primitive by modern standards. It had 128K of memory and a single 400K floppy drive. The screen was black and white with a resolution of 512 x 342. It had a 16 bit main processor running at 8 MHz with a single core, and no graphics processor.

Compare that with the Mac laptop I am writing this on today (a bit higher specced than most, but still not outrageously so), which has 16G of memory (125,000 times more), 1T of SSD storage (2.5 million times as much), a 32 bit colour screen with 3456 x 2234 resolution (about 44 times more pixels, or roughly 1,400 times more screen data once you allow for the colour depth), a 64 bit processor running at 3.2 GHz with 10 cores (superficially 25 million times as much processing power, but probably a lot more), and a 16 core GPU (no comparison possible, because the first Mac had nothing).
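
For anyone who wants to check those multipliers, here is the back-of-envelope arithmetic, using decimal prefixes throughout, so the figures are approximate.

```python
# Back-of-envelope comparison of a 1984 Macintosh with a modern MacBook Pro.
memory_1984 = 128e3             # 128K of memory
memory_now = 16e9               # 16G
storage_1984 = 400e3            # single 400K floppy
storage_now = 1e12              # 1T SSD
pixels_1984 = 512 * 342         # black and white, 1 bit per pixel
pixels_now = 3456 * 2234        # 32 bits per pixel

print(f"memory:      {memory_now / memory_1984:,.0f}x")            # ~125,000x
print(f"storage:     {storage_now / storage_1984:,.0f}x")          # ~2,500,000x
print(f"pixels:      {pixels_now / pixels_1984:,.0f}x")            # ~44x
print(f"screen data: {pixels_now * 32 / pixels_1984:,.0f}x")       # ~1,400x
```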

Other features a modern Mac has which were originally lacking include a camera, microphone, networking (including wireless), and several high performance connection ports. In addition, modern operating systems allow many programs to run simultaneously, will potentially run for months or years without crashing, and support networking and the internet (it's hard to understand now, but the internet wasn't really a thing back in 1984).

You can see things have come a long way. Also, although modern Macs aren't cheap, they probably cost less when allowing for inflation than the early machines did.

Not only that, but my laptop runs all day on its battery and is light enough for me to carry everywhere. The first Mac portable was released in 1989, but it wasn't convenient to use. The first "modern" portable design was released in 1991.

Occasionally I start up one of my vintage machines to enjoy the nostalgia of going back to the "halcyon days" of early computing. It's kind of fun, but extremely frustrating too. No matter how frustrating you find your modern computer, just be happy with what you have now. Try running Photoshop on a 9 inch black and white screen, or copying data using a single floppy drive, or trying to interact with other computers and other devices without modern networking, and you will see how good things are now!

I was there Gandalf, I was there 3000 (well, 40) years ago (sorry, couldn't resist a Lord of the Rings quote) when the first Mac was released (and I used the Apple II, III, and Lisa before that). Those 40 years of Mac have been a wild ride!


View Details and Comments


Pro, Max!

2023-10-21. Computers. Rating 1. ID 2300.

I usually buy myself one new major gadget each year: like a computer, tablet, phone, etc. This year I figured it was time to upgrade my phone, which was an iPhone XS. That phone was still fairly good, but the back cover was cracked and it needed a new battery, and the cameras were good but not as good as the new ones, so I decided to go all out and got an iPhone 15 Pro Max.

The full retail price is a rather hefty $2800, but I got $500 off and am paying it off over three years, making the cost not too painful. So after using it for a few weeks, what are my impressions? Well, I'm not going to waste a lot of time repeating specs you can get anywhere, so I am just going to give my thoughts in comparison with my previous phone.

The screen is very good. The colours are more saturated and clearer, even than the excellent screen in the XS. It's bright enough to be usable in bright sunlight. The screen is still a bit reflective, so you need to avoid holding it at angles which pick up strong light sources. There is an always-on display which shows time and other information, even when it is asleep. I have seen no sign of the burn-in some people have reported.

The sound is also improved, with better balance and more bass, and good stereo imaging.

It is very fast. The XS rarely lagged much, but this is noticeably better. It's probably the fastest phone available, right now. I run a lot of apps simultaneously and there is never any hesitation swapping between them. It has much faster CPUs and GPUs and more memory than the previous phone, and I also got 512G of storage. It's amazing to think it has 100,000 times more storage than my first hard disk based computer (an Apple III) and 4 million times more than my first floppy drive based machine: an Apple II.

The battery life seems good, although not great. After thrashing it all day I usually have over 50% charge still, so since I am in the habit of charging my devices every night, that is as much as I need, in fact it's a lot more.

I have never noticed the overheating issues some people have reported. When playing some games which are more demanding it feels slightly warm, but that is all.

The cameras are maybe the main reason I upgraded. They are good quality, with finer detail, less noise, a much higher range of zoom, more pixels, and faster lenses. The ultra wide angle is really great; you can capture a whole room from any position with it, and I use it quite a lot. The normal camera has three fixed focal lengths and up to 48 MPx, and the telephoto zoom gives a 5x zoom ratio with the top being about the same field as a 140mm conventional telephoto lens.

Using the total zoom range, a feature on a wide angle shot which is barely visible becomes zoomed into about half the frame on telephoto. That does use some digital enhancement though, to go beyond the 5x optical zoom.

The new programmable button is quite handy. I have it bring up a small screen with some of my favourite shortcuts on it, like silent mode on and off, remind me at home, etc.

The Pro Max is quite a big phone, but I don't really notice it being much less convenient than the smaller XS. The camera lenses do stick out quite a bit, which is a bit annoying sometimes, but it's worth it for those cameras! The titanium frame apparently makes it stronger and lighter, but it doesn't feel unusually heavy or light to me.

So yes, it really is a great phone. It would be nice if those cameras didn't stick out so much and if it was a bit cheaper, but those are the only faults I can think of, apart from the paranoia I have that I'll drop it and break the screen or back!

Just while I'm talking about Apple tech, I should report on another recent upgrade: software this time. I have used the latest Mac operating system, Sonoma, for a while now (since it was first released) and while it doesn't have much that is new over previous systems, it has been totally solid for me, and all 600 apps I have installed still work, although I seem to remember having to apply a couple of free updates.

The biggest advantage for me is that iCloud Drive seems to work far more reliably. Previously it would occasionally stop syncing for no apparent reason, but that has not happened in the months since I did the update.

My machine is pretty fast - it has an M1 Pro processor with 10 CPU and 16 GPU cores - but there were a few hours after the update when it was slightly less speedy than usual. It was probably building the search database and caches at that stage, so that was expected. After that it is as fast as it ever was.

Sonoma seems to work fine, although you might not notice too much new after upgrading from Ventura. Whether you bother to upgrade or not probably depends on if you are the type of person who likes updates or not.

Yes, things are good if you are an Apple tech geek. All the current hardware is very good; after a few fairly average computers around 2017 to 2019, they are now back to making awesome machines. My 16" MacBook Pro is great: super fast and with so much spare capacity that I've never heard the fans yet. The new phone is a thing of beauty. My iPad is pretty sad though, so maybe that will be my update for next year!


View Details and Comments


About Your Computer 2

2023-09-04. Computers. Rating 1. ID 2290.

Well, having achieved some success with my previous "About Your Computer 1" post (that is, I know of at least 2 people who read it - outstanding) I thought I should maybe do a second. Last time I covered wireless communications technologies, which isn't really the most logical place to start, so this time I will start with one of the more fundamental components in a computer.

Again, you don't really need to know this stuff to successfully use a computer as a normal user, but it might be useful in understanding what to buy when choosing a new machine, or in discussing problems with a technician, or just for general knowledge.

I guess you could say computers do 4 things: they store stuff, they manipulate stuff, they display stuff, and they gather new stuff. Stuff in this case is data, which might be numbers (in a spreadsheet, for example), or words (in a word processor), or sounds (like MP3 music), or graphics (like JPEG photos), or any number of other things.

There's no real difference to the computer between these types; to it they are just a long sequence of on and off signals (or zeroes and ones if you want to be a bit more abstract). We view these types as different because we manipulate and display them differently. So an MP3 is usually played through the speaker and a JPEG displayed on the screen, but the computer could just as easily do that the other way around: it just wouldn't produce anything very meaningful.

So let's go through those functions. In this post I will cover storage.

When you save the latest chapter of your great novel you are putting it into permanent (we hope) storage. Once it is put there, it should stay there even when the computer is powered off. The most common form of storage today is the solid state drive (SSD). These are little circuit boards (sometimes in a case) which contain many "cells" which can hold a zero or a one as an electric charge. Your file is pushed into these cells and can be pulled back out again later when you need it next.

Older computers, and some current designs, use a hard disk (HD) instead. These also store those zeros and ones, but as little magnetic patches on a spinning disk. There are several very precise moving parts in this system (unlike SSDs, which are completely electronic and have no moving parts) so there are many more things that can go wrong.

Also, the speed the data can be written to and read from the storage is important, and waiting for disks to spin to a location, and for the "head" which reads the information to move, slows down hard disks. When you open a file, the small (hopefully) delay is the time it takes for the data to be transferred into "working memory" (see later) from your storage device. The same applies for the time it takes your computer to boot (the operating system is loaded from storage into memory) or to start a program (the program code is being transferred).

Move from a computer with an HD to one with an SSD and you will usually see a huge improvement in how long these operations take. Additionally, SSDs use less power, generate less heat, make no noise, and are smaller, so you might be wondering why we would use a hard disk at all. Well, it's mainly about price. The cost for a certain amount of storage on an SSD is usually significantly more than the same size hard disk. Also, some people prefer HDs for long term storage and claim they have better longevity, although that is disputed.

I should expand on the idea of capacity here. As I write this post, most SSDs are measured in hundreds of gigabytes, and hard disks in terabytes.

Maybe a short explanation is necessary. A byte is a basic unit of data on computers. It corresponds to 8 bits. Remember a bit (binary digit) is just an on/off switch, and is really the only thing computers know about. But you can only store a value of 0 or 1 in a bit (it's on or off), which by itself isn't very useful in most cases.

By putting several bits together we can store more. With two bits we can store 4 different values. If you imagine "off" is shown as 0 and "on" as 1, then a single bit can (obviously) be 0 or 1, but two bits could be 00, 01, 10, or 11, which gives us 4 values. Three bits give us 8: 000, 001, 010, 011, 100, 101, 110, and 111, which could be a number from 0 to 7, or the first 8 letters of the alphabet. Going further, it was found that 8 bits was a convenient (although basically arbitrary) bigger unit, which can store a number between 0 and 255, or any letter in the alphabet, plus digits, punctuation, etc. I won't list all 256 of them here, but they go 00000000, 00000001, 00000010, ... 11111110, 11111111.
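
If you want to play with this yourself, a few lines of Python show the pattern: n bits give 2 to the power n values.

```python
# How many different values can n bits hold? 2 to the power n.
for n in (1, 2, 3, 8, 16):
    print(f"{n} bits -> {2 ** n} values")

# All the 3-bit patterns, alongside their everyday (base 10) values.
for value in range(8):
    print(f"{value:03b} = {value}")
```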

By the way, I can work in binary and convert between "bases" (base 2 is binary, which computers use, base 10 is the one we use in everyday life) quite well without a calculator. It's just one of those superpowers programmers have!

Bigger numbers can be created with several bytes, so 2 bytes (16 bits) can store a number between 0 and 65535, or from -32768 to +32767, or all the letters in a much bigger alphabet (like the Roman, Greek, and Chinese alphabets combined).

If you need numbers with decimals, like 2.5 or 3.14159, you can combine bytes in more complex ways: usually with one part indicating the digits, like 314159, and a second part indicating the power of 10 to multiply or divide by to get the decimal place, like -5 in this case. If this is getting too complex, don't worry too much, just be broadly aware of what's happening behind the scenes.
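
Here is that simplified "digits plus exponent" idea as a couple of lines of Python, plus a peek at the real thing, which uses powers of 2 rather than 10.

```python
import math

# The simplified "digits plus exponent" idea, using pi as the example.
digits = 314159
exponent = -5
print(digits * 10 ** exponent)   # roughly 3.14159

# Real hardware actually splits numbers using powers of 2, not 10, but the
# principle (a number part and a scale part) is the same. frexp() shows it:
print(math.frexp(3.14159))       # (0.7853975, 2), since 0.7853975 * 2**2 = 3.14159
```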

Bytes can be combined in other ways to make special data too. To create a colour - for a single dot on a photo for example; what we call a picture element or "pixel" - we take 3 bytes to represent the amount of red, green, and blue light to mix to get the colour we want. So pure red would be 11111111, 00000000, 00000000, or 255,0,0 (maximum red, no green, no blue). Mixing red, green, and blue light is the standard way to make colours on visual devices like TVs and computer screens, but there are others as well, which I won't confuse you any further about!

By the way, you might have noticed that the numbers 11111111 and 255 are the same. The ones are that number in binary (numbers made just from zeros and ones, or offs and ons; the natural way computers store numbers), and 255 is the same number in base 10: instead of using just 0 and 1, we can use 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Using base 10 for our normal, everyday numbers is completely arbitrary; basically it's because we have 10 fingers. The number ten has no special meaning at all.
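
A quick Python illustration of the same numbers: binary 11111111 really is 255, and pure red really is 255,0,0 (shown here in the familiar #ff0000 web notation).

```python
# The same number written in binary and in everyday base 10.
print(int("11111111", 2))   # 255
print(bin(255))             # 0b11111111

# Pure red as three bytes: maximum red, no green, no blue.
red, green, blue = 255, 0, 0
print(f"#{red:02x}{green:02x}{blue:02x}")   # #ff0000
```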

After that interlude explaining some theory, let's get back to storage. We know what a byte is now: it's just a set of 8 zeros and ones, representing a single letter in the alphabet (let's assume it uses the old school ASCII code, not Unicode) or a small number up to 255, or part of a colour, or part of some other larger object. The prefixes I used there were giga and tera. These are just standard prefixes, like kilo which means a thousand times the unit, so for example a kilogram is 1000 grams.

For storage, we typically use giga, meaning a billion, and tera, meaning a trillion. So a 500GB (gigabyte) drive can store 500 billion letters, like A, B, C, or punctuation like quote, comma, etc. That is quite a lot. A typical book contains less than a million characters (one megabyte) meaning you could store over 500,000 typical books on a 500G SSD or HD.

Note that pictures, sound, and especially video use a lot more space than simple text. A typical JPEG photo needs about 5MB (100,000 would fit on that disk). A typical MP3 song is similar, but a movie can easily take 1GB meaning you could "only" store 500 of them. Of course, you can use many disks, and they are available in much bigger capacities than 500GB, but that gets more expensive.
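
The arithmetic behind those figures is simple enough to check in a few lines (decimal units, so everything is approximate).

```python
# Roughly how much fits on a 500GB drive.
drive = 500e9                  # 500 gigabytes

book = 1e6                     # ~1 million characters, about a megabyte
photo = 5e6                    # a typical ~5MB JPEG
movie = 1e9                    # a modest ~1GB movie file

print(f"books:  {drive / book:,.0f}")    # 500,000
print(f"photos: {drive / photo:,.0f}")   # 100,000
print(f"movies: {drive / movie:,.0f}")   # 500
```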

There is one big problem with high capacity, modern disks. They are very reliable, but nothing is completely immune to failure. Because the disks are so big, if one does fail, you lose a lot of stuff: documents, photos, music, everything! I have seen many very upset people with failed disks, which I have been able to fix in almost every case, but sometimes the data really is gone forever.

Note that even though SSDs are less likely to fail than HDs, when they do fail it is usually far more catastrophic. An HD will typically get really slow, start making weird noises, or display an error message before failing, but an SSD will more likely just die one day with no warning!

So, please do backups! A backup is just a second copy of all your data. It could be onto a second SSD or HD, or into the cloud. If your main disk fails you would have to be very unlucky if your backup disk just chose to fail at exactly the same time, so you can get a new main disk (or a new computer) and restore all your stuff from the backup disk.
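
Just to illustrate the "second copy" idea, here is a bare-bones Python sketch that copies a folder to a dated folder on another disk. The paths are made up, and this is no substitute for proper backup software like Time Machine or Carbon Copy Cloner, which add versioning, scheduling, and encryption.

```python
import shutil
from datetime import date
from pathlib import Path

# Copy everything under "Documents" to a dated folder on a second drive.
# Both paths here are made up for the example.
source = Path.home() / "Documents"
destination = Path("/Volumes/BackupDisk") / f"Documents-{date.today()}"

shutil.copytree(source, destination)
print(f"Copied {source} to {destination}")
```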

The more paranoid amongst us - which tend to also be the more knowledgeable, like myself - keep many backups. I back up onto one encrypted disk at home, and another at work, as well as having most of my stuff in Apple's cloud service: iCloud. Additionally I use two different backup programs, Time Machine and Carbon Copy Cloner, just in case one has an obscure bug. Too much? Who knows, but I've never lost my data!

Note that I use encryption for my backups. Don't forget that (typically) all your files are in your backup. If someone steals your backup disk, all they need to do is plug it into another computer to see all your stuff. Encrypt it, and they can't do that. All of this is built into Macs, and PCs as well, I think (I'm a Mac person).

Just before I finish, I need to explain two words I used above, relevant to this. I talked about "working memory". This is just random access memory (RAM) which is where the information currently being worked on is stored. The computer can't work directly on your files on HD or SSD storage, so they are loaded into memory first. This memory only works while the computer is on, and tends to be much smaller than storage, which is why we need both.

I also mentioned "cloud storage". This is just storing data on someone else's storage system and accessing it across the internet. Cloud storage systems include DropBox, OneDrive, Google Drive, iCloud, and many more. Note that ultimately your data is still being stored on someone else's storage device, but these are very fast and reliable, and probably extend to petabytes rather than just gigabytes or terabytes. Generally, you can trust these systems, because they are encrypted and fully backed up, or use redundant systems (let's not go there just now).

So that is a brief summary of computer storage. This is just the very tip of the iceberg, but I only wanted to handle the basics here, and not get too technical. Don't worry if you didn't memorise all the stuff about binary and bytes and tera and giga, just being broadly aware of these concepts is fine!


View Details and Comments


About Your Computer 1

2023-08-28. Computers. Rating 1. ID 2288.

I was recently visiting a client and realised that there were many really basic things about her computer and phone that she didn't understand, and that the same was likely the case for most users. Many of these things aren't necessary to know, but if you do know them it makes a bit of difference for problem solving, setting up new devices, and interacting with technical support. In addition, it's just good to have a basic understanding of devices you use, purely out of general interest.

The issue I mentioned above was to do with Bluetooth, so I thought I might start by discussing how devices (computers, phones, tablets) communicate with other devices and the internet. Note that this will be simplified and not necessarily 100% technically correct, so if any pedants out there see any errors, then assume that is deliberate to make the discussion easier to follow!

Note also that I am going to discuss these primarily from the perspective of Apple devices, but most of what I say could also be applied to PCs, Android devices, etc.

In this post I will discuss wireless communications. There are three main ways your devices talk wirelessly to the "outside world" (this could be to a web site on the other side of the world or a wireless earbud a meter away). These are Bluetooth, wifi, and cellular. Note that computers and tablets don't tend to have cellular (although they can), but phones always do.

The main differences between these are the range they work over, their speed, what type of devices they usually communicate with, and how much power they use. These different characteristics explain why we need three different wireless systems instead of just one to do everything.

First, Bluetooth, which has the shortest range and is the slowest, but uses the least power. It is used to communicate with devices which are usually close and don't need to send or receive a lot of data quickly, like wireless keyboards and mice, and earbuds. I find this works reliably indoors up to about 10 meters, although various versions of Bluetooth (you didn't think there'd be only one, did you?) are designed to theoretically work from 1 up to 100 meters.

Earbuds are very small, so their battery is limited, they are generally close to the device they are communicating with, and the amount of data they send is relatively low, making Bluetooth an ideal solution. A similar argument applies to keyboards and mice.

It's usually safe to leave Bluetooth switched on, because it uses very little power, and it is not a common security problem, partly because of its short range. But note that if you switch it off, don't expect to be able to use your earbuds any longer (as happened to my client).

Bluetooth requires devices to be "paired", which tells the phone (for example) which earbud (or other device) to send the signal to (so you don't end up sending your music to your friend's earbud who might be sitting next to you). With Apple AirPods, it's just a matter of opening the case and confirming the connection, but with some other devices, from not so user-friendly companies, it can be more complicated. You should only need to do this once, though.

By the way, if you were wondering about the origin of the name, it is the name of a Scandinavian tenth-century king, Harald Bluetooth, who united various tribes into one kingdom. One of the main reasons for creating Bluetooth was to merge all the earlier incompatible methods of communicating into one.

The next wireless system is wifi, which is generally used to connect a device to the internet. It has a longer range than Bluetooth and is much faster, but uses more power too. There is also a version, "Wi-Fi Direct", which connects two devices together (like Bluetooth), but it is not widely used, because Bluetooth is so much more power efficient. You might see it in some printers and similar devices, though.

Wifi theoretically works up to 100 meters (you might see your neighbour's wifi in your list of possible connections), but this depends very much on the environment. Thick walls, sources of interference, etc, reduce this a lot. You can communicate with web sites thousands of kilometers away because your computer communicates, using wifi, with your wireless modem (or, to be more technically correct, your router), and that in turn sends the signal to the whole world over copper cables or fibre optic networks.

When connecting through wifi you usually have to provide a password, but your device will usually remember this and connect automatically with the stored password in the future. Some public wifi doesn't need a password, but might ask for your email address, etc. Be cautious about these. You might want to consider a VPN (not discussed in this post, sorry) to ensure you are not being spied on, although this is unusual.

There are numerous wifi versions, with varying speeds and ranges, so more modern computers and modems might offer much better speeds than older ones. But note that internet speeds are only as good as the weakest link in the chain, so having a fast wifi connection is wasted if you have a slow link to the internet beyond the wifi connection, or you are accessing a web site in a country with poor internal speeds, or where the link between your country and the other one is slow.
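
The weakest-link idea is easy to see with some made-up numbers: whatever the fastest hop can do, the slowest one sets the effective speed.

```python
# Hypothetical link speeds in megabits per second; the slowest hop wins.
links_mbps = {"wifi": 600, "fibre to the home": 300, "international cable": 80}
print(f"effective speed ~ {min(links_mbps.values())} Mbps")
```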

Here in New Zealand, most people are now on "fibre" where the signal is transmitted by light down a glass fibre instead of electrically over a copper cable, and this tends to be fast and reliable (theoretically, at least), but note that we have a limited number of connections to other countries, so this can be a bottleneck.

Unlike Bluetooth, wifi does not have an interesting story behind its name. It is sometimes suggested it stands for "wireless fidelity", in a similar way to hifi meaning high fidelity (generally applied to audio equipment), but that isn't true, and wifi appears to stand for nothing in particular, although surely the "wi" part has something to do with wireless! Also note that the official spelling is "Wi-Fi", but I tend to use wifi because it's neater and easier.

Finally there is cellular. This has the longest range, moderate speed, and is moderately power efficient. It is the system your cell phone uses to communicate. The phone communicates with the nearest (usually) cell tower and they all talk together to connect one phone to another, or your phone to the internet, usually through a cable or maybe a microwave radio link which handles the signal after it reaches the tower.

There are many cell towers and they can work up to a distance of 70 kilometers, although in areas where there might be a lot of phones the load is shared between towers which might be 2 or 3 kilometers apart, or even less than a kilometer in the middle of cities. The area each tower covers is called a cell, which is the reason the network is called "cellular". In many cases, your phone can see many towers (or cell sites) and it decides which to use based on signal strength and how busy that site is.

Ideally cellular covers the whole country, although it is common for there to be spots in remote areas where there is no coverage. To get truly universal coverage, satellite systems can be used, most recently StarLink, but that is not a common technology yet, and I'm not covering it here.

Note that smartphones can also connect through wifi, and if you have that option available you should generally use it, because it should be faster, cheaper, and more reliable. When you walk out of range of the wifi modem the phone should automatically switch back to the cellular network.

Smartphones usually offer a feature called "personal hotspot" or "tethering" where the cell phone can communicate with the cellular system and make that access available to other devices, like a computer which doesn't have cellular itself, through wifi. It effectively becomes a wifi modem you can use anywhere. This is great, except it can be expensive, depending on your plan, and some cell phone service providers also charge extra to even enable it.

While we are on the subject of data charges, let's briefly talk about that. When you connect your earbud to your phone through Bluetooth it only involves stuff you own so it is free. But when you connect to the internet through wifi the signal is sent onto a network owned by a service provider, and they will charge you for that. The same applies when you connect over cellular (for either internet data, text, or voice). Note that fibre or copper cable internet is usually charged separately from cellular, even if you use the same company for both, although there is usually a discount in this case.

So those are the main wireless communications options your devices might use. Just to finish I should mention that desktop computers, TVs, and other devices which are not intended to be mobile, often have a cable connection as well. This is called "ethernet" and in that case you can connect a cable between the device and your modem. Most modems have 4 ports to plug this sort of device into, but you can connect a much larger number by using ethernet switches or hubs.

I'm not going into details on this here, either, but if you can use a cable I usually recommend doing it, because it is the most reliable and usually the fastest method, plus each device has its own cable going back to the modem, instead of having to share the wifi with other devices which reduces the amount of data each device gets. I have all my permanent devices connected this way, through about 10 switches using cables I crawled under the house to fit, so this might not suit everyone!

So that's my summary of some of the tech behind how your devices communicate. In future articles I will cover many other subjects. If you have any suggestions leave a comment. I hope you found this useful!


View Details and Comments


The Problem with AI

2023-07-04. Computers. Rating 2. ID 2280.

Many years ago, I did a computer science degree, and started doing some postgrad papers after that. One of the subjects I studied was expert systems and artificial intelligence, and at the time it was seen as a technology which was about to be released on the world with massive consequences. That didn't happen, at least not straight away, but now, over 30 years later, maybe that potential is finally being realised.

It started with ChatGPT, and after that showed what could be done, many other companies are following by creating their own AIs. In addition, many really smart people are warning us that artificial intelligence is becoming a risk to human society, and that we should put a hold on developing it further until those risks can be evaluated.

So how is a program that can write text such a risk? That's a good question, which I have never seen answered in particularly specific terms. No one seems to be able to commit to saying exactly how AI might cause so much trouble, apart from the loss of a few writing jobs which might be replaced. By the way, I know similar technology is being used for manipulation of graphics, and in a few other areas, so the same argument applies there.

One point which I think should be examined though, is how much our society now relies on software. When you make a phone call, software routes it to its destination. When you pay a bill, accounting software handles the transaction. When you drive your car, the engine management software controls the engine's performance. When you want to travel to a new location, navigation software figures out the best path to take.

And so it goes on; I could write a whole blog post just listing the places where software controls our lives. And what is software? It's just a series of instructions written by a programmer... or an AI. Remember, ChatGPT just writes stuff, including code.

So programming is one significant place where AI is becoming quite useful. A poll run on a geeks' web site I follow showed about half of the people there were using AI to help them write programs. How long before all code is written this way? I'm sort of glad that I did programming during the time when it was at its peak: from machine code on the early processors, to high level languages, web development, and database design today. It has changed over those decades, but it was very much an individualistic creative process during most of that time.

I think I can safely say that software is at the core of our modern society, and that is exactly where artificial intelligence is likely to have the most effect. Looking into the future it is hard to see how this won't simply become more and more relevant. Have a look at Apple's latest virtual reality headset - a hardware product which uses software to do its magic - and the future is revealed in both a utopian and dystopian way. Why walk down the street to visit your friend when you can have an experience using VR (or augmented reality, AR) which is almost indistinguishable from reality? And remember, that experience is provided by software.

But what's my real point here? Well, software is arguably the greatest invention ever. It's a way to create something, in an entirely abstract way, which can do anything you want. It's literally telling a machine what to do. But what happens when one machine (an AI) tells all those other machines what to do?

I don't necessarily want to go down the science fiction path and say that the machines will become sentient and malicious. It's far more mundane than that. But I ask you, have you ever heard of the concept of "the banality of evil", a phrase made famous by philosopher Hannah Arendt? So yes, bad things don't have to happen through bad intent.

I'm sure the majority of the German people during World War II really didn't want to slaughter other humans by the millions. Sure, Arendt was wrong in some ways because some of the upper echelons in the Nazi leadership (maybe even Eichmann himself) were actually evil, and not in a banal way, but the greater evil only succeeded through mundane, and not particularly extraordinary adherence to actions which were ultimately evil.

So I guess that is one way that AI could ultimately become problematic, or perhaps even a source of evil. By the way, I am often accused of using the word "evil" without good reason because it is often seen as a religious concept, where evil is anything which goes against the wishes of a god. But that is only one meaning of the word, and I think it is equally useful in describing a situation which is contrary to a societal consensus on what is good and bad, even through a utilitarian philosophical framework.

So now let me indulge in a small amount of fiction and describe an imaginary situation where software designed by an AI causes harm to humans...

Bob was woken earlier than usual by his smartphone, which put him in a particularly bad mood to begin with. This only got worse when his morning coffee seemed to lack the kick of caffeine it usually had, and he noted that maybe the automatic coffee machine had given him decaffeinated by mistake.

His communication system showed a recording from a friend, so he asked it to replay the message. Apparently the friend had some issues he seriously wanted to discuss. The friend was in AI research, and was currently researching ways to control the extent to which AI could "think" independently. Bob wondered, with an air of amusement, whether the AI knew that his friend was out to get it!

Grabbing his VR headset, Bob initiated a conversation with the friend, who seemed even more detached from the realities of everyday life than usual. He really did appear to be on the verge of a breakdown of some sort, because he wasn't acting the way Bob had come to expect. He said he wanted to discuss something of great importance, but he didn't trust that the VR system wasn't being monitored, and would prefer to meet in person.

This request really put Bob on edge, because it had been years since that had been necessary. VR was as good as reality now, and some said better, and the end-to-end encryption was claimed to make all communications secure. Still, Bob knew he wasn't an expert and the friend was, so he agreed to meet.

He asked his phone to summon a car to take him to the friend's house. He had no idea where that might be, but it didn't matter, because navigation was a skill humans no longer needed. Even if he wanted to, Bob doubted that he could have found his way, especially after the early start and lack of coffee, which seemed to have made him less aware than usual of what was going on around him.

The car arrived and Bob sat there while the autopilot drove him safely to his destination. Sure, there was still a manual override in the car, which he could use in an emergency, but no one used that any more, and it was there only for legal reasons. He doubted whether he would even know how to control the car if he wanted to.

After a short period a cheery voice announced he was about to arrive at his destination and he saw his friend standing outside on the street, ready to greet him. This in itself seemed odd, since he couldn't ever remember the friend doing that before. But just as he was considering this oddity the car lurched forward at full power and collided with the friend, killing him instantly.

The next day, Bob was still recovering from the shock of what had happened. It all seemed like a dream, especially because of his less than fully alert state. He asked the comms system to give him a news summary of the stuff he really needed to know. It mentioned the unfortunate accident resulting in the death of his friend, but he reacted with shock when it said the accident had been caused by a human taking control of the car at the wrong time.

Bob suddenly felt even worse than he had before. In the state he had been in yesterday he wasn't sure if that was what had happened or not. Had he really killed his own friend?

Suddenly, he felt like he had to get out of his apartment, away from the automated systems he relied on. He said "open the door please". The AI replied "I'm sorry, Bob, I'm afraid I can't do that".


View Details and Comments


In Defence of Computers

2023-06-01. Computers. Rating 2. ID 2275.

The way computers work, and occasionally don't work, is a common subject for ridicule and humour for many people, but at the same time many others are impressed at what they can now do. So, on one hand, the capabilities of modern computers, and related devices like smartphones, are truly amazing, and the fact that so many people use them, and the trend for them to often become an "addiction", is an undeniable sign of this. But on the other, we have a degree of cynicism around the issues many computers have.

There's a meme going around - in fact it goes back many years, but seems to have recently resurfaced - involving Bill Gates and the car manufacturer General Motors.

Here's the meme, which I will comment on later...

At a recent computer expo (COMDEX), Bill Gates reportedly compared the computer industry with the auto industry and stated: "If General Motors (GM) had kept up with technology like the computer industry has, we would all be driving $25 cars that got 1,000 miles to the gallon."

In response to Bill's comments, General Motors issued a press release stating:

If GM had developed technology like Microsoft [I assume this might apply to other manufacturers as well, so I'm interpreting it that way], we would all be driving cars with the following characteristics:

1. For no reason whatsoever, your car would crash, twice a day.

2. Every time they repainted the lines in the road, you would have to buy a new car.

3. Occasionally your car would die on the freeway for no reason. You would have to pull to the side of the road, close all of the windows, shut off the car, restart it, and reopen the windows before you could continue. For some reason you would simply accept this.

4. Occasionally, executing a manoeuvre such as a left turn would cause your car to shut down and refuse to restart, in which case you would have to reinstall the engine.

5. Macintosh would make a car that was powered by the sun, was reliable, five times as fast and twice as easy to drive - but would run on only five percent of the roads.

6. The oil, water temperature, and alternator warning lights would all be replaced by a single "This Car Has Performed An Illegal Operation" warning light.

7. The airbag system would ask "Are you sure?" before deploying.

8. Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key and grabbed hold of the radio antenna.

9. Every time a new car was introduced, car buyers would have to learn how to drive all over again because none of the controls would operate in the same manner as the old car.

10. You'd have to press the "Start" button to turn the engine off.

So there are the claims; let's have a look at how relevant they are...

1. I do remember, back in the 80s and maybe the 90s, my Mac might crash at least once every day, and I would accept that and just restart. However, modern Macs in many cases will run indefinitely without any crashes requiring a restart. I have servers which have run continuously for years, and even my laptop, which I install a lot of stuff on, is only restarted or shut down for major system upgrades.

I think the same applies to PCs running Windows, although I get the impression (admittedly without hard data) that restarts are a bit more common there, but not twice a day, unless there is a specific issue which needs fixing.

So problems with computers probably happen with similar frequency to problems with cars now, making the first point somewhat obsolete.

2. In the past it was useful to buy a new computer every 2 or 3 years, but I have 10 year old computers now which are still very useful, although they do miss out on some new features more recent models have. But, although computers don't need replacing as often as they used to, they still aren't usable for as long as cars, so that point has some merit.

Of course I do have to say that those upgrades for computers get you piles of new capabilities, where a new car might be just a bit more economical, or have a better transmission, but in most cases doesn't have a lot of new stuff, unless you upgrade from a 1990s Toyota to a modern Tesla!

3. This reliability issue is similar to point 1, so the same comments apply.

4. The big difference between cars and computers is that, in the vast majority of cases, cars stay as they are out of the factory, but computers have all sorts of added components installed, and many come from companies who did not create the computer.

So the computer might be made by Dell, the OS by Microsoft, the wifi adapter driver by some anonymous Chinese company, the web browser by Google, the antivirus software by Sophos, the VPN by Express, and we wonder why all the bits don't always behave correctly together.

To use the car analogy, we would have a Toyota chassis with a Ford engine, a Mitsubishi engine management computer, and an Audi transmission, then we would wonder why sometimes they don't work properly as a whole.

This is one (of many) reasons I prefer the Mac platform: the hardware, operating system, most drivers, and a lot of the software all come from Apple, so they work together a lot better.

5. Well I'm a Mac user so I'm already convinced by that argument, but as more services move to the cloud, and are therefore platform independent, the idea that the Mac is a bit of an "orphan" is less true. Also, modern PCs have got better and, while Macs are still superior, at least PCs are more usable than they were.

6. I do accept this one. "This Computer Has Performed An Illegal Operation" is a message commonly seen on computers, although the "Check Engine" light offers very little extra information on cars either, unless you have an OBD device, which ironically usually uploads information to a computer or smart phone!

But I would love to see proper error messages on computers. In most cases the information is already there, because a lower level program has delivered an error message to the program you are using. Displaying error code -14306 may not mean much to the average user, but at least it gives an expert something to work with.
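
To illustrate what I mean, here is a minimal sketch (the codes and messages are invented for the example, not taken from any real system) of how a program could pass the underlying error code up to the user together with a readable explanation, instead of swallowing it or showing the bare number...

    # A minimal sketch: combine a human-readable explanation with the raw
    # low-level error code, so both the average user and the expert get
    # something useful. The codes and messages here are made up.

    LOW_LEVEL_ERRORS = {
        -14306: "The file could not be written because the disk is full.",
        -14307: "The file could not be read because another program has it locked.",
    }

    def report_error(code):
        """Return a message pairing an explanation with the original code."""
        explanation = LOW_LEVEL_ERRORS.get(code, "An unexpected problem occurred.")
        return f"{explanation} (error {code})"

    print(report_error(-14306))
    # -> The file could not be written because the disk is full. (error -14306)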

7. I agree that there are probably too many requests for confirmation of operations. If I click a button I don't need the computer to request confirmation; instead give me an undo option for the occasions when I do click it accidentally. In many cases people are so used to the message appearing that they just click OK without even reading the message, so what's the point?

And sometimes a warning on a car might be useful, like "do you really want to engage reverse while travelling forward at 100 kph?"
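
Getting back to computers, here is a minimal sketch of the undo approach I would prefer: every action is recorded together with a way to reverse it, so an accidental click costs nothing. The to-do list is just an invented example; the pattern applies to any kind of data.

    # A minimal sketch of "undo instead of confirm": each action pushes a
    # function that reverses it onto a stack, and undo just pops and runs
    # the most recent one.

    class UndoableList:
        def __init__(self):
            self.items = []
            self._undo_stack = []          # functions that reverse past actions

        def add(self, item):
            self.items.append(item)
            self._undo_stack.append(lambda: self.items.remove(item))

        def delete(self, item):
            index = self.items.index(item)
            self.items.pop(index)
            self._undo_stack.append(lambda: self.items.insert(index, item))

        def undo(self):
            if self._undo_stack:
                self._undo_stack.pop()()   # run the most recent reversal

    todo = UndoableList()
    todo.add("pay rates bill")
    todo.delete("pay rates bill")          # oops, clicked the wrong button
    todo.undo()                            # no "are you sure?" dialog needed
    print(todo.items)                      # -> ['pay rates bill']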

8. Being locked out, or getting other similar faults, and requiring an obscure process to get back in does happen sometimes on computers, but it happens on cars sometimes too.

One of my cars has a keyless entry system, and I still don't know under what exact circumstances it locks and unlocks doors. Complicating factors include: are you standing next to the tailgate, is another door already unlocked, is there another key in the car, are the lights still switched on?

Ironically, of course, these complicated car features are often managed by software in a car computer, so who's really to blame there? Also, those features are great when they work properly.

9. It is true that the basics of car operation (steering wheel, gas pedal, etc) are the same from one to the other, but what about other functions? Some cars have a clutch, some don't; some have an automatic gearbox, some have paddle shifters, some have a gear shift lever (requiring use of that extra pedal again); and don't even get me started with entertainment, climate control, and other peripheral systems!

So cars are diverging in functionality where computers seem to be converging. There are many standards today, even across platforms (Windows, macOS, Linux) and there is even a degree of similarity between computers, tablets, and phones.

Sure, it's far from perfect, but at least things are heading in the right direction.

10. Yes, on Windows you press Start to shut down, at least on some versions of Windows, but you don't on other platforms, and many procedures on cars make just as little sense, except we are more used to them, so I reject this criticism.

So, in summary, this meme would have been a lot more accurate in the past. But now things are different: not only have computers got better, but in some ways, cars have got worse, so the difference between them is far less now.

Except for one thing. If I bought a car 30 years ago it might cost $30,000, produce 200 horsepower, and have a 5 speed manual. Today it would cost a similar amount, produce 250 hp, and have an 8 speed transmission, a small but worthwhile improvement (I'm ignoring electric cars for the purposes of this comparison).

If I bought a computer 30 years ago and another one today, the modern one would cost half as much, have a hundred thousand times as much memory, run at ten thousand times the speed, and be able to do ten times as many things. That's like that car producing 20 million horsepower and having a top speed of two million kilometers per hour, and it also flies and floats on water.

The computer industry could do better, but I do believe it has a lot to be proud of in comparison to cars!


View Details and Comments


Chat GP-PC-T

2023-02-22. Computers. Rating 2. ID 2261.

Alan Turing was a well known early pioneer in the development of computers. Famously, he used early computers to break German World War II codes, but his contributions to the mathematical and theoretical foundations of computer science are arguably even more important.

One idea he developed is how we would know if a computer was "thinking". He proposed what we now call the "Turing Test" which (roughly speaking) involved a spoken or typed interaction with an entity which might be either a human or a computer. If the person interacting couldn't tell the difference, then the entity at the other end (including a computer) could be said to be thinking.

For years, programmers have created systems which can "chat" with users, but they always revealed themselves to be flawed after a (usually quite short) time by producing irrelevant answers, demonstrating a mechanistic process rather than anything which might truly be seen as free thought.

But not any more.

In case you haven't heard, there's a new artificial intelligence driven system that anyone can interact with, like they would with another human. I've tried a lot of earlier versions of this technology, and even started a postgrad paper in AI and expert systems many years ago when I was a student, so I know what to look for. This system, ChatGPT (Chat Generative Pre-trained Transformer), is impressive.

I think it would easily pass the Turing test, so should we accordingly say it is thinking? I really don't know, because no one really knows what thinking is, but there is no sign of the mechanistic nature, and the predictability, of earlier systems. This system really does create unique responses, although it bases those on a fixed knowledge base which is sometimes out of date, and a series of interactions used for pre-training.

My first real test - apart from just trying a few questions to try to fool it into giving a ridiculous response, which it easily handled - was to produce some material for my wife's dog agility club. I asked it to produce a short introduction to various dog sports, some of which were a bit obscure, and it gave a good result in every case.

Not only was the content both complete and correct, but the material was well written, and there were few corrections I would make. In fact, when I am asked to proofread material written by humans I usually find dozens of problems, but with ChatGPT I found basically nothing.

Then I saw an interaction reported on the internet which involved the version of this technology which Microsoft uses in its new search system. Note that the system was created by an independent lab, OpenAI, but Microsoft invested in it and now uses it in their product, Bing search.

Here's how this bizarre interaction was reported: "the AI chatbot told a New York Times columnist it was in love with him and attempted to convince him he was unhappy in his marriage." Remember that this system produces language which is indistinguishable (at least in my experience) from material a human would write. I find that really scary.

Another problem which has quickly arisen is - not unexpectedly considering some of its training involved human evaluation - how biased it might be.

A friend asked me to ask it to write a short essay listing the indigenous people of the world. I interacted with the AI a bit, and here's how that went. To save space I will only include the important parts of its response...

Me: write a short essay listing the indigenous people of the world.

ChatGPT: The world is home to a diverse range of Indigenous peoples, with unique cultures, languages, and traditions that have developed over thousands of years. These Indigenous communities have faced many challenges throughout history, including colonization, land dispossession, forced assimilation, and discrimination. Despite these challenges, Indigenous peoples continue to fight for their rights, and their rich cultural heritage and unique perspectives contribute to the world's diversity and richness.

The following are some of the Indigenous peoples of the world:

[...a list of several indigenous peoples...]

Maori: The Maori people are Indigenous to New Zealand and have a rich cultural heritage that includes traditional dance, music, and art. They have a strong connection to their land and sea and have been fighting for their rights and sovereignty for many years.

[...several more listed...]

These are just a few examples of the many Indigenous peoples around the world. Each community has its unique culture, language, and history, and it is essential to recognize and respect their rights and traditions. By doing so, we can celebrate the world's diversity and work towards a more just and equitable future for all.

[Note that I only asked for a list, and was surprised at all the politically correct content, so I commented about this...]

Me: It's a bit too politically correct!

ChatGPT: I apologize if my previous response seemed overly politically correct. As an AI language model, my primary goal is to provide accurate and informative answers that are respectful to all cultures and groups.

It is important to acknowledge the diversity and unique perspectives of Indigenous peoples around the world, as well as the challenges they have faced throughout history. While it may be uncomfortable to confront the realities of colonialism, land dispossession, and discrimination that Indigenous peoples have experienced, it is essential to acknowledge these injustices if we want to work towards a more just and equitable future for all.

However, if you have any specific concerns or feedback, please let me know, and I will do my best to provide a response that meets your expectations.

[Well, sure, I guess you could make a case for the problems listed here, but I only asked for a list, so I tried again...]

Me: I know there were problems with colonialism, etc, but that wasn't what I wanted to concentrate on in this essay

[Note that I sort of had to cajole it into giving me what I wanted, in a similar way to how I would have to with a human!]

ChatGPT: I understand. Here is a revised version of my response, with a focus on listing Indigenous peoples of the world:

Indigenous peoples can be found in various regions across the world, each with their own unique culture, history, and traditions. While the list is by no means exhaustive, some examples of Indigenous peoples are:

[...Several indigenous people listed, in moderately neutral terms...]

Maori: The Maori people are Indigenous to New Zealand and have a rich cultural heritage that includes traditional dance, music, and art.

[...Several others listed...]

These are just a few examples of the many Indigenous peoples around the world, each with their unique culture, history, and traditions. It is essential to recognize and respect the diversity and contributions of these communities to promote a more inclusive and equitable world.

[End of interaction]

Notice that, even now, it is a bit "preachy" about respecting indigenous people, which is fair enough, but still not what I asked for.

So I think the concerns about neutrality are very realistic. This system is *very* politically correct, presumably reflecting the opinions of the people who did the initial training. If I wanted an essay listing the challenges indigenous people face, I would have asked for it. And if I wanted commentary on colonialism, I would expect to hear about the very obvious good points as well as the bad.

So I am very impressed at how natural these interactions were: I really just treated it like I would another human, asking questions, requesting corrections, and offering criticism. But, if it was another human, I would immediately identify it as a person with very strongly held views around identity politics and all the other toxic ideologies I have been complaining about in recent posts. The first response, in particular, was very close to propaganda rather than information.

Once a technology exists, we cannot go back. If younger people in particular are going to use this system to answer their questions, and to produce material they should be creating themselves (it has already passed exams, including a medical exam) then we should be concerned about the political aspects of this. Sure, young people are already inundated with PC nonsense at school and university, but the last thing we need is even more of it.

Now that the effectiveness of this technology has been shown, other companies are already scrambling to catch up: Google is working on a similar system to enhance its search engine, for example. But I would expect a similar level of bias in anything produced by them. Will a right-wing organisation create a similar system for conservatives? Who knows, but unless someone can create something which is genuinely neutral, then this is just going to create more divisiveness, rather than less.

So, is AI a dystopian or a utopian technology? Who knows?


View Details and Comments


Natural Ignorance

2022-10-17. Computers. Rating 1. ID 2244.

They say that if there is one thing that can be said in favour of artificial intelligence it is that it is better than natural ignorance. Of course, this doesn't compare AI with natural intelligence, which might be a better choice, but the implication is that using AI, despite whatever its deficiencies might be, is better than doing nothing.

Many years ago I started a postgrad diploma in computer science, which I never finished because I was already doing one and a half jobs and just didn't have the time the course demanded. I remember taking about 30 minutes to figure out what the title of one paper on the subject even meant, it was so full of technical jargon!

This was about 35 years ago and the promise of artificial intelligence was very topical at the time, although nothing much really happened in that area until the last 5 or 10 years, so it has definitely turned out to be more difficult than many people thought.

But now we are seeing signs that AI is about to deliver on some of its promises, and maybe even exceed them greatly if things go the way some experts think they should. But why do we need it anyway? What could be the benefits, and could they possibly outweigh the potential problems?

Well, like all new technologies, it is hard to tell how effective it might be ahead of time. Many technologies were initially written off as being pointless or trivial. The internet is a classic example, where today it is used to the extent and in ways that people 30 years ago would never have imagined, and AI will almost certainly be the same.

I want to cite a few possible, relatively trivial, use cases where I think existing systems might be enhanced through the use of some extra intelligence.

The first is lifts. My policy at work is to take the stairs for going up 2 or fewer floors, or going down 4 or more. That varies a bit, and last week I took the stairs for ascending 7 floors. Big mistake; I'm not as fit as I thought I was!

Anyway, the thing I notice with lifts is how stupid they are. I work at a university and the lifts tend to get used at predictable times because of the schedule for lectures and tutorials. Yet the lifts never seem to "know" this. I often see all 4 lifts in one building near the top, say floor 10, while a large group of people are waiting on the ground floor.

Even without knowing about schedules a small amount of heuristic intelligence should be able to ensure that the lifts are distributed across the 11 floors and provide quicker service as a result.

Another behaviour I have noticed is that a lift on floor 10 going down to the ground floor might bypass me on floor 9, while I wait for a lift to come all the way up from floor 1. Why? I mean, it's possible that the lift going down is full, but I've never seen any evidence that the lifts know anything like that, so it's just poor design, apparently.

So some artificial intelligence might be good. The lifts know the load they are carrying because they have an overweight alarm. They know where people are waiting, because there are buttons to press. The rest they can learn, if they had a simple machine learning algorithm built into their logic.

So why not use ML to evaluate different algorithms (all 4 lifts would be controlled by the same controller, which I think is already the case) and optimise their behaviour based on the average time spent delivering people to their desired floor? It shouldn't be too hard, because machines have already shown their abilities in handling precisely specified tasks like that.
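
To show how little is needed to evaluate a policy, here is a toy sketch (not a real lift controller, and the demand pattern is just an assumption) which scores two parking strategies by how far the nearest lift is from each new request. A learning controller could tune the parking floors from logged requests instead of using fixed guesses like these.

    import random

    # Toy comparison of two parking policies for four lifts in an 11-floor
    # building, scored by the distance from the nearest parked lift to each
    # request. Distance stands in for waiting time here.

    FLOORS = 11

    def average_distance(parking, requests):
        """Average distance from the nearest parked lift to each request floor."""
        return sum(min(abs(f - p) for p in parking) for f in requests) / len(requests)

    random.seed(1)
    # Morning-rush-style demand: most requests start at the ground floor.
    requests = [0 if random.random() < 0.7 else random.randrange(FLOORS)
                for _ in range(1000)]

    all_at_top = [10, 10, 10, 10]        # what I actually observe
    spread_out = [0, 3, 6, 9]            # lifts distributed over the building

    print("parked at top:", average_distance(all_at_top, requests))
    print("spread out:   ", average_distance(spread_out, requests))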

Here's another example: traffic lights. I find myself cursing my city's traffic light system almost daily (some days I don't drive near lights so I don't need to curse them!) because of their apparent stupidity.

We have a one way system here, which carries the major traffic through the city, and these are particularly amenable to optimisation. There is no reason why, once I enter the one way system, I shouldn't be able to travel at the speed limit (unusual for me, I concede) without being stopped by any of the 10 or 20 sets of lights. At the very least I shouldn't be stopped by the lights to find there is no traffic on the cross street.

But here's a situation I see quite often: I am stopped by the lights even though the cross street is empty. I wait, and notice a car approaching the lights on the cross street. As it approaches it is stopped by the lights and I am allowed to go. It's the worst possible outcome for everyone, although I admit that if the lights went red in both directions and stayed that way it would be even worse, but I am talking about likely scenarios here, not extremes.

So again, some machine learning could be used to improve the performance of the system. There are sensors in the road which detect traffic, so why not use those to measure the total flow of traffic and optimise the lights based on traffic flow? I presume a simple algorithm is already being used, but I doubt whether it uses any form of learning, which is the critical component of my proposal.
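
Even before any learning is added, a demand-proportional split of the kind I presume already exists takes only a few lines of logic; a learning layer could then tune the cycle length and limits from the measured flows. Here is a minimal sketch (the timing numbers are assumptions, and real controllers are far more sophisticated):

    # Share a fixed amount of green time between two approaches in proportion
    # to the traffic counted by the road sensors, within a safety minimum.

    MIN_GREEN = 10   # seconds, assumed safety minimum per approach
    CYCLE = 90       # seconds of green time to share between the two approaches

    def split_green(main_count, cross_count):
        total = main_count + cross_count
        if total == 0:
            return CYCLE / 2, CYCLE / 2                          # no demand: split evenly
        main = CYCLE * main_count / total
        main = min(max(main, MIN_GREEN), CYCLE - MIN_GREEN)      # keep both within limits
        return main, CYCLE - main

    print(split_green(main_count=40, cross_count=2))   # the busy road gets most of the cycle
    print(split_green(main_count=0, cross_count=5))    # but the minimum is still respected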

You might ask why this is so important, given the relatively frivolous examples I have given. Saving a few minutes here and there barely seems worth the trouble to many people, but I disagree. A few minutes a day multiplied by many days and many people can make a real difference to how much time we have for productive activities, and it might also improve our general mood by reducing frustration.

And the two examples I have given are just a start. We could use AI to improve many other aspects of our lives as well, such as appliances which use electricity intelligently, or media which is delivered based on the user's preferences. And yes, I know some of this is already happening, which shows it can be done, but I just want more.

Eventually, once the technology becomes more stable, we could ask it to do more. Self-driving cars are an area where a lot of work is already happening, but the results are mixed. Occasionally AI controlled cars do something really stupid, and might cause an accident, and possibly even death. But statistically would the same car be any safer if it was driven by a human? After all, humans also make stupid errors. But which is worse: artificial intelligence or natural ignorance?


View Details and Comments


Get Thee Hence!

2022-10-08. Computers. Rating 2. ID 2242.

There's a classic phrase, often used in horror movies, which goes something like this: "get thee hence, foul creature, you have no place here!" or maybe "be gone I say!" or "get ye gone!"

It's usually uttered by the hero as a way to reject the attack of a supernatural villain of some sort, such as a vampire. In fact, an early use is in that ultimate source of supernatural stories, the Bible: such as "Get thee hence, Satan: for it is written, Thou shalt worship the Lord thy God" from Matthew 4:10, and similar phrases are also common in Shakespeare.

Maybe you have guessed by now that this is an amusing introduction to a subject which is really totally irrelevant to what I have said so far: bad technology.

When I am helping people using poor technology, especially software, I often take a look at the program they are using and utter something such as "be gone, foul creature", for example when I see Microsoft Word or something similar.

So now that I have got my fanciful introduction out of the way, I should settle down to the subject at hand and discuss software choice. Specifically I want to offer alternatives to many of the programs people use, which might not be providing as positive an experience as they might wish.

First, I have to say that all software has good and bad points, and pursuing one benefit will often lead to a whole pile of problems, so every decision in producing a product has potential negative side effects, and this is often not given sufficient attention. But praising a positive point about a product should not be done without looking at the negatives as well, and vice versa.

And, in many cases, this comes down to the philosophy the company involved follows, rather than anything specifically technical.

So, to Microsoft. Their clear strategy is to produce programs which "do everything", which seems like a noble aim. Microsoft Word, for example, has an impressive list of features, but as a direct result of pursuing that goal a lot of compromises have been made. In this specific case it has resulted in a program which is really big, sometimes slow, awkward, somewhat buggy, and unintuitive to use.

It may be that, for many people, those problems are worth enduring to use a program which does so much, and is so widely used. But for others, a better experience might be a higher priority. And guess which camp I am in? Yes, I prefer smaller, more focussed programs, which do their core functions well, and I am prepared to do without all those extra functions - which I probably wouldn't use anyway - to get that.

When I talk about Word I often say something like "sure it does a lot, but does it really do word processing any more?" Of course, I am engaging in a bit of rhetoric there, because I know it still does do word processing, but I would say, thanks to all the other functions, it doesn't do that very well.

Here's a couple of anecdotes (so be aware this is limited as a source of evidence) of what caused me to switch from Word to alternatives, such as Apple's word processor, Pages...

I was asked to create a worksheet for a teacher which involved a central graphic with blocks of text around it and with arrows from the text to parts of the graphic. I did manage to produce the document in Word OK.

But when I went to print it, the arrows got moved around so that they didn't point to the correct part of the graphic, or the correct text block, any more. Nothing I tried helped, so I gave up and switched to Pages instead.

It took me about half the time to create the document in Pages (although I already had a practice run, so don't read too much into that) and it printed perfectly. And not only did the printed version match the screen version, but the text even looked better, thanks to Pages' deeper access to the Mac's advanced typography services (kerning, leading, ligatures, etc).

Here's another example, admittedly from many years ago now...

I had to create a large user manual for a program I had written. It was hundreds of pages long, had a large index, and had multiple graphics on most pages.

After about 10 or 20 pages trying to do it in Word, it had got so slow (probably because of the graphics) that scrolling was intolerable and I gave up and used Word Perfect instead (I did say this was a few years ago!)

Even after creating hundreds of pages it scrolled so fast that the content was barely visible. I completed the manual in WP and it turned out looking good, although this was before the Mac had such advanced type management, so it probably didn't look any better than it would have if I had used Word.

Then there are programs which are quite functional, but which have unfortunate licensing arrangements.

I have been seriously into photography for many years, and used Photoshop since the earliest version (I remember using version 1.07 for a while).

I bought various versions, but then Adobe decided to go to a subscription model. In other words, you have to keep paying to keep using it. Stop paying and Photoshop stops working meaning you can't open your documents any more. Who wants that?

Well, some people do, because there are two big benefits: first, you get new versions of the program for "free" because that is built into the subscription; and second, you can use Adobe cloud services for storing and sharing your documents.

But in my experience, most people don't want these "advantages", and there is little doubt that Adobe do it primarily because it is better for them, from an accounting perspective, to have a continued source of income rather than just a single payment. But that's their problem, and I don't like the new payment model either.

So I started using Affinity Photo and Pixelmator Pro, which I bought for a one-time payment, and I have had free upgrades ever since.

Photoshop isn't a bad program, although the way it works beneath the surface is too complicated and that leads to some problems, and it feels rather big and cumbersome, although not to the same extent as Word. But that licensing was a good reason to avoid it, at least for me.

So it seems to me that there are good alternatives out there, and anyone who finds themselves becoming frustrated with a bad computer experience should consider what programs they are using, and maybe think about trying something else. There are good programs out there, which don't come from the big software companies like Microsoft and Adobe.

There is some pain in converting, but I think it is worth it in the long term. It's time to say this to Microsoft: "get ye hence, foul creature!"


View Details and Comments


About the Cloud

2022-08-14. Computers. Rating 1. ID 2231.

After receiving some positive feedback for my post about hydrogen as a fuel source, I thought I might try another more informative (and less ranty) article about another subject people find confusing: the "cloud".

No doubt many people who use computers (which would include all of my readers, presumably) have heard about the cloud, or cloud services, or software in the cloud, or something similar. But what actually is the cloud?

Well, it's a bit disappointing really, because all "the cloud" really means in this context is another computer attached to the internet somewhere. More specifically, the computer should be part of a large collection of computers in numerous locations around the world, and you would neither know nor care which computer (or more likely multiple computers) in particular your data is stored on.

The most common form of cloud storage is probably email, although this pre-dates the popularity of the word. Your email is stored on computers belonging to your local service provider (Spark, Vodafone, Orcon, etc in New Zealand) or on servers belonging to the company providing your email (Google, Microsoft, Apple, etc) which might be different from the company providing your actual internet connectivity.

I will pause here and provide some practical advice: under no circumstances do I recommend using your internet provider for your email. For example, if your email address includes the name of an ISP (internet service provider) like Xtra or Vodafone or Orcon then consider changing to Google, Outlook (Microsoft), Apple, etc.

Why? Well, for three main reasons. First, most service providers see email as a secondary function, and they tend to do it badly, so you almost certainly get a poor service in terms of storage, speed, and reliability. Second, if you want to change to a new internet service provider you might be trapped if you use your current provider's email system. Third, most email programs (Mail, Outlook, Thunderbird) can automatically configure themselves for the biggest email systems (GMail, for example) but you will probably need to provide a whole pile of technical details to configure some of the lesser used systems, like those provided by ISPs.

So, getting back to the cloud, the email is sent from one computer, passed from one email server to another until it reaches your email provider's main servers (usually using a protocol called SMTP - simple mail transfer protocol), then your computer collects it from the server using another protocol; usually IMAP (internet message access protocol).

There are two geeky words above I should explain. First, "server". A server is just a computer which performs a service automatically. It is usually a "box" in a server room with no screen, keyboard, or mouse. It just waits for a signal from other computers and responds with the required information.

The second word is "protocol". You often see computer acronyms end with the letter "P", for example TCP, IP, IMAP, SMTP, FTP, HTTP, and many more. The P here almost always means protocol. A protocol is just a set of specific rules the computer knows how to interpret and execute. For example IMAP (internet message access protocol) includes rules for listing new messages, sending them to the user's computer, marking them as read, deleting them, etc.
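
To make this concrete, here is a minimal sketch of the IMAP side of that conversation using Python's standard imaplib module. The server name, account, and password are placeholders, and a real email program does far more than this, but the protocol exchange itself is no more mysterious than the following:

    import imaplib

    # Connect to an IMAP server, list unread messages, and show who each one
    # is from and what it is about. Replace the placeholder server, account,
    # and password with real values before trying it.

    with imaplib.IMAP4_SSL("imap.example.com") as mail:            # secure connection
        mail.login("user@example.com", "app-password")             # authenticate
        mail.select("INBOX")                                       # choose a mailbox
        status, data = mail.search(None, "UNSEEN")                 # list new messages
        for num in data[0].split():
            status, msg = mail.fetch(num, "(BODY[HEADER.FIELDS (FROM SUBJECT)])")
            print(msg[0][1].decode())                              # sender and subject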

So email is a classic cloud service, and if you access your email through a web browser (Safari, Chrome, Firefox, etc) rather than an email program (Mail, Outlook, etc) then you are also using a second type of cloud service: SaaS (software as a service). In this case, most or all of the program you are using to access your email is also in the cloud. The browser just provides a place where the email access software, provided by the email provider, can display its results (in this case, email messages).

The most common use of the cloud though, is for file storage. This allows computer users to store their files on a cloud service, instead of their own computer. But, remember that a cloud service is just another computer (or many computers), so what's the point?

Well, there are several. First, most computers nowadays use solid state drives (SSDs) to store files. SSDs are great: they are fast, quiet, reliable, and efficient, but for the same capacity they are more expensive than the older hard disk technology, so modern computers often have less storage than older ones did (is this progress?)

So it is more common now to not have enough space to store everything you might need locally (that is, on your computer's storage device), especially since, as well as computers having less storage than in the past, files tend to be bigger. So why not store files in the cloud and just copy them to your computer when you need them? Most services just leave a "placeholder" icon on your desktop, and opening it automatically loads the file from the cloud - as long as you have an internet connection.

Second, many people now have multiple computers, and other devices such as smartphones and tablets, and they might want to be able to access their files on them all. For example, I can access all of my computer's files from my iPhone and iPad, as well as from a web browser from any internet-connected computer (with the appropriate password, of course).

I use Apple's "iCloud" service for this, which works really well. I just create a file on my computer, iCloud copies it to the cloud, and when I access the same location (folder) from my phone the file is already there. If I modify it on the phone, iCloud "pushes" the changed file from the phone up to the cloud, then "pulls" it back down to the computer again.
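
Conceptually, the push/pull logic is just "the most recently changed copy wins". Here is a much-simplified sketch of that idea; real services like iCloud handle conflicts, deletions, and partial transfers far more carefully, so treat this as an illustration only:

    from dataclasses import dataclass

    # Keep a local copy and a cloud copy of a file in step by letting the
    # most recently modified version overwrite the other.

    @dataclass
    class Copy:
        name: str
        content: str
        modified: float   # timestamp of the last change

    def sync(local, cloud):
        if local.modified > cloud.modified:
            cloud.content, cloud.modified = local.content, local.modified   # push
        elif cloud.modified > local.modified:
            local.content, local.modified = cloud.content, cloud.modified   # pull
        # equal timestamps: nothing to do

    phone = Copy("shopping.txt", "milk, bread", modified=100.0)
    cloud = Copy("shopping.txt", "milk", modified=90.0)
    sync(phone, cloud)
    print(cloud.content)   # -> "milk, bread": the phone's newer edit was pushed up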

Note that every file I have is stored locally, and the cloud is primarily used to synchronise files across multiple devices in this case. If I have no internet connection, or in the unlikely event there is a problem with the cloud service, I am still fully functional, although I chose to not fully store the files on the iPad or iPhone, so those would not necessarily have access to everything in that case.

Third, cloud services allow collaboration. My wife and I both access the same shopping list through Apple's "Notes" application. So the same list is on her computer and phone, and on my computer, phone, and iPad. If I add a new item, she sees it. We can even both add items to the list at the same time. Similarly, word processing documents, spreadsheets, etc can also be edited by many users simultaneously in the same way.

Finally, the cloud provides a useful backup for your files. If you store your data in the cloud, then your computer is stolen, or is lost, or fails, just get a new computer (or tablet, etc) and connect to the same cloud service. All your data will "magically" re-appear!

So what are the disadvantages of the cloud? Well, there are a few...

First, if you store your files in a cloud service (and don't also keep them locally), then lose the internet connection, you cannot get to your data. Most people work in an office, or at home, where they have access to wifi, so it isn't an issue, but if they fly somewhere, on an aircraft without internet (a significant number of them) then they have no access to their files.

Most services allow you to download specific files when you have a connection, then make changes locally and allow them to update once the internet access is restored. Note that using my strategy of having everything local and just using the cloud for synchronisation avoids this issue.

Second, cloud services are not generally free. With iCloud I could get 5G of storage for free, but now pay about $4 per month for 200G. I don't store big files like music and movies there, so 200G is adequate. Other services work in a similar way, and some (such as Microsoft 365 and Adobe Creative Cloud) bundle cloud storage with the cost of the programs, but I don't like that approach.

Third, what if something goes wrong? If someone hacks your account and deletes the files in cloud storage, then the copies on your computer will also be deleted, so that they stay synchronised. Also, if there is a fault, and your files all disappear for no apparent reason, then most cloud service providers won't be too helpful in getting them back.

I had one client whose Microsoft OneDrive files all disappeared one day, and she never got them back. Microsoft were utterly useless (no surprises there). Every single file she ever had was gone! Luckily this happened a week before she retired, so it wasn't a total disaster.

Despite the fact I listed backup as a benefit of cloud services, I prefer to run my own backups as well, onto local external disks. Use the cloud as your only backup at your peril. To be fair, this data loss is extremely rare, but it can happen.

Fourth, it just makes your IT setup more complicated. There is extra software (iCloud Drive, Google Drive, Dropbox, Microsoft OneDrive) running on your computer, using up its battery, requiring updating, and occasionally needing problems fixed. Everything has a negative side to it.

Anyway, that's an introduction to the cloud for the average user. Note that the pros and cons for corporate users might be quite different, so don't think I have forgotten those aspects; I just ignored them!


View Details and Comments


My Latest Computer

2021-11-19. Computers. Rating 1. ID 2185.

Before I continue with my comments on "Global Goals" I want to briefly talk about a completely different subject. Today is my birthday, and I decided to buy myself a new computer. Actually, the fact that the computer arrived a few days before my birthday was coincidental, because I ordered it as soon as Apple announced it, and it took a few weeks for it to be delivered.

My previous computer was a 2013 MacBook Pro 15 inch, with an i7 processor, 16G of memory, and a 1T SSD. Even after 8 years, it was still a really useful computer, but Apple have signalled their intention to move to their own processors and more and more features only work with those. Also, there was a flaw in the 2013 laptops where the anti-glare layer on the screen would wear, making it "fuzzy" and hard to read, especially in strong light. I had already had it fixed twice for free, but future repairs would have cost a lot, because the whole screen is replaced.

The new computer has Apple's M1 Pro processor, with 10 CPU cores. This doesn't sound significantly better than the previous machine (which had an Intel i7 with 8 cores), so I was going to be interested in the performance differences. It also has a 16 inch screen instead of the 15, although the computer isn't really any bigger thanks to the screen extending closer to the edge of the case. And many other aspects were improved from previous models too.

Anyway, what are my impressions?

Well, long story short, it's awesome. If you want a really good laptop, and don't mind spending a bit extra (NZ$4600 in New Zealand, for the slightly upgraded model I got), this is the computer to go for!

The first thing I noticed was the screen. It is not massively better than previous screens, which were already very good, but the brightness, saturation, and contrast are excellent. The infamous notch at the top to accommodate the camera and other sensors was annoying to start with, but I have found a great way to minimise it (see later).

The second thing I noticed was the sound. It produces surprisingly good sound for a metal box about 1 centimeter thick. There is more bass than I would expect, and the stereo imaging is quite impressive.

The third thing I soon found - and this is arguably the most impressive feature - is the battery life. On the first day I thrashed it for about 8 hours at work, setting up new stuff and downloading a lot of data. Even after that, at the end of the day the battery was still on 70%. My previous laptop needed a recharge during the day. This one, I don't even take the charger with me, I just charge it overnight, along with my iPad, iPhone, Apple Watch, and AirPods.

Finally, the performance. Well, everything is close to instant. There's just no significant delay doing anything, which makes getting things done so much easier. As well as speed just being inherently good, it helps in two other ways. First, my thought processes work best when the computer does what I want as soon as I think of it. If I have to wait I have to think about what it is I was about to do (and no, that's not because of my advanced age). Second, I use features I wouldn't use in the past because they would have taken more time.

Here's an example. The graphics program I use now is called "Pixelmator Pro" (I don't use Photoshop any more because I think Adobe products are bloated, and I don't like their licensing). It has a feature which uses machine learning to intelligently re-size images. This feature is far more complex than a simple resize and on the previous computer it would take about 3 minutes to process a UHD (3840 x 2400) image. As well as that, the fans would come on because the CPU was working so hard. Remember, for its time this computer had a great processor: the mighty i7.

Well, the new computer does the same thing in 5 seconds and there isn't a hint of extra heat or fan noise. In fact, nothing I have done so far has caused any significant heat, and certainly hasn't triggered the fans. Of course heat is just your battery power being converted into useless energy - entropy is such a pain - so the fact that the CPU cores are basically idling most of the time leads to awesome performance when you need it, and great battery life as well.

So it's fast, it looks good, and it sounds good. What else is worth noting?

The fingerprint sensor is very handy to unlock the device. As far as I can tell it works every time and can't be easily fooled. I also have the computer unlocked automatically by my Apple Watch, so I don't even need to use the sensor most times.

And, if you have high speed peripherals, the new USBC or Thunderbolt ports are amazingly fast, and more convenient than the old USBA connections.

What about the negatives? I mean, nothing new is all good, is it?

OK, that notch is a bit annoying, but I have found a cunning way to make it disappear. I created some desktop pictures ("wallpapers") with a black band along the top. Now it looks like the menu bar is a continuous band of black with the main screen below it. It's like the menu bar (which I had hidden on my previous machine) is not on the screen, but is still visible.
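
For anyone who wants to try the same trick, here is a sketch using Python's Pillow imaging library to generate such a wallpaper. The resolution and menu bar height below are guesses for this particular model, so adjust them to suit your own screen before using the result:

    from PIL import Image, ImageDraw

    # Create a plain-coloured wallpaper with a black band along the top, so
    # the menu bar and notch blend into one continuous dark strip. The
    # dimensions are assumptions; check your display's native resolution.

    WIDTH, HEIGHT = 3456, 2234    # assumed native resolution of the 16 inch screen
    BAR_HEIGHT = 74               # assumed menu bar height in pixels

    image = Image.new("RGB", (WIDTH, HEIGHT), "#20456b")         # main desktop colour
    draw = ImageDraw.Draw(image)
    draw.rectangle([0, 0, WIDTH, BAR_HEIGHT], fill="black")       # black band over the notch area
    image.save("notch-hiding-wallpaper.png")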

This laptop is slightly thicker than my previous one, and looks more "industrial" and somewhat less sleek. But there is a better gap between the screen and keyboard so that screen wear issue I mentioned should be eliminated. And presumably the extra thickness has allowed a bigger battery, leading to that great battery life I have already noted above.

There is also the issue with the new ports, so I can't plug older USB peripherals in directly. But at least there is a Magsafe power connector (incompatible with earlier MagSafe 1 and 2 - gee thanks Apple), and an SD card and HDMI port, as well as three USBC ports.

In general, this thing is just awesome. I have decided to rebuild my software from scratch instead of directly copying from the previous computer (which is what I have done in the past), so there are few things I haven't set up completely yet - like a Windows virtual machine - but other owners have reported this all works really well, so I am optimistic.

After 8 years (the time between my previous computer and this one) nothing has fundamentally changed, but the incremental improvements do make a difference. If you have a few thousand dollars to spare, and need a new computer, I highly recommend this one!


View Details and Comments


Ten Years Without Steve

2021-10-05. Computers. Rating 1. ID 2156.

Recently, we have reached the tenth anniversary of the death of Steve Jobs. I wrote a blog post just before that event, when he resigned as CEO of Apple, and mused about whether the loss of this visionary founder would lead to Apple's ultimate demise. After all, many people noted how Apple had done badly the previous time Steve had left (or been booted out of) the company, and quite sensibly, they wondered whether it could happen again.

Well, Apple is now worth more than any other company, including Microsoft and Google/Alphabet, based on market cap. I know that is just one way to measure a company's value, but it is a good indicator of the health of the company. So it seems they are successful in that area, at least.

But what about other aspects of Apple's current status? Here's my thoughts on the future of the company from that post in 2011...

When the Apple board threw out Jobs in the mid 1980s they had decided that it was time for Apple to "grow up" and they tried to run the company just like all the others. That clearly didn't work because if you become just one of the others you can only really compete on price and that is a sure path to failure.

So Apple must keep its distinctive culture and it must continue to take risks. It should design new products based on what people will want (even if they don't know it yet), not on what they are asking for and definitely not on what they have had in the past. But they should be a little bit less extreme about it and try to pay a little bit more attention to backward compatibility.

How hard can it be? Judging from the total failure of almost every company except Apple obviously it is very hard! Can Apple do it without Steve as CEO? I certainly hope so or the future of technology looks rather dreary.

So you can see that I was hoping for the best of both worlds there: the freedom and innovation of classic Apple, but a bit more maturity and practicality as well. So how have they done? Well, as I said, financially the company looks great, but what about their efforts in the area of innovation and superior design?

There has been only one new class of product created in that time: the Apple Watch. That has been very successful, and Apple out-sells all the other smart-watch manufacturers combined, as well as selling more watches than every company in Switzerland combined. And, apart from sales, the Apple Watch is a very good product. I have the Series 4 version and I find it very useful and well designed.

All other product categories are also strong. The iPhone is the biggest source for Apple's income and, although there are more Android phones than Apple, in the premium market it is dominant. And the iPad is clearly the dominant premium tablet. The Mac market share is small compared with all the PC brands together, but it is still amongst the biggest selling brands of computers. And services have become far more important in recent years. iCloud is finally a useful and reliable service, and Apple TV+, Apple Pay, and other services are successful without being dominant in any way.

So, from a business perspective Apple seems to be doing very well, but from a design and innovation perspective the story is a bit more doubtful. I have noticed a distinct drop in reliability in Apple laptops from the 2018 and 2019 era, although those issues now seem to be fixed. And what Apple tried to tell us was a very innovative product - the "trash can" Mac Pro - was really a failure. But at least it was a failure that they seem to have learned from and fixed.

Apple do seem to be a little bit more responsive to negative user feedback now than in the past. The pro market didn't like the Mac Pro so Apple re-designed it totally and now it is a real powerhouse - although a very expensive one! And people didn't like the new laptop keyboard and that has now been replaced with one of a similar design to the older version. And don't forget that Apple made plenty of mistakes when Steve Jobs was in charge too. Need I remind you of the unfortunate case of the "hockey puck" mouse!

I often say that influential people always enjoy some element of luck. Steve was in charge of Apple during the most innovative years of the computer industry. It was early enough that many product categories hadn't been filled yet, but late enough that the technology to create these products did exist. Of course, it took great vision, knowledge, and persistence to do what he did, but if he had been involved 20 years earlier or later he might not have been as influential as he was.

So maybe all the important new devices have already been created: desktop computers, laptops, music players, smartphones, tablets, smartwatches, web-based services. What is left? Maybe there is nothing else new we need, so that explains why Apple (and other companies) seem to be less innovative now.

There are two categories which might be candidates for "the next big thing". One is AI (artificial intelligence), and the other is augmented reality devices such as glasses (Google Glass was an early, but mostly failed, attempt at this). Just while I am discussing this, there is another product, which is a bit outside Apple's usual area but which they are rumoured to be seriously working on: self-driving cars.

AI is being used increasingly in existing devices, so it seems more that we will slowly use it more and more without really noticing that it is happening. VR glasses are more like a traditional product which will become more obvious when an elegant model is available (most likely made by Apple, of course). One type might be for day to day use, and project information onto the wearer's visual field, and another might be a VR head-set for purposes like gaming, which completely takes over their experience.

Note that even if I expect Apple to create these devices, I'm not saying they will do it first. In fact, they haven't really done anything first, but they have done it best.

If Apple continue with that strategy - to create the best products - then they should continue to succeed. They do need to be a bit careful though, because some recent products (both hardware and software) haven't been up to the standards of reliability and quality I would expect.

So Apple are doing OK without Steve. Would they be doing better if he was still with us? Well, maybe. Who knows what he might have come up with. But we still have some very fine products, and I think the company has hit a fair compromise between "growing up" and remaining the crazy company we all love.




Another Anti-Microsoft Rant

2021-07-01. Computers. Rating 3. ID 2136.

I work in IT support. I also often help people with other technology, and I am a part time programmer. So I have had some interesting experiences with various computer and other companies. Sometimes these can be quite positive; I have to say that in general, Apple is pretty good, for example. But most times interacting with support services is a complete and total nightmare.

The last time I had an experience so bad that I had to blog about it was with New Zealand's leading communications company, Spark. I ranted about their somewhat less than perfect service in a post titled "We Do Our Best" from 2020-11-12. But that was a great experience compared with my latest frustrations with... wait for it... Microsoft!

If you read this blog you will probably have gathered by now that I don't like Microsoft, for two main reasons: first, I think their software is horrible - and I might as well include their hardware and services in that; and second, I object to them "politically" because they have used their monopoly position to force computer users into using their products and this has held progress back for years.

I should moderate those two opinions a bit before I continue, just for some balance and fairness. First, I think Microsoft software has a fairly good feature set - in other words, it can do a lot of things - but I find it awkward and ugly, so it is how it does things rather than what it does which primarily concerns me. And second, I don't blame Microsoft for taking advantage of the near monopoly situation they are in, because most other companies would do the same thing.

Anyway, I'm not a big fan of Microsoft, but that isn't why I am ranting about them here. Read this description of my issues and see what you would have thought in my situation...

It all started when I was asked to set up a new M1 iMac for a friend. By the way, those new machines are really nice: they are fast, and have a nice bright screen and excellent sound. But this person decided they needed Microsoft Office, and fair enough because it was what they were used to using, and although I don't like it myself I understand when other people want to take the "easy route" and use the "standard software".

When I went to install it I just followed the instructions. This isn't something I do often, and by that I mean I don't often install a commercial version of Office and don't often follow instructions! But I did this time and it didn't work. I got an obscure hexadecimal error code and a message like "the key has been blocked". This was a legitimate key purchased from a local retailer, so that was a bit surprising.

There was an icon on the web page I was trying to download from which said "click to chat, we can answer straight away". So, of course, I gave it a try.

I like on-line chat help systems for several reasons: they tend to be less busy than the phone service, so I can usually get a quicker response; they are often staffed for longer hours than phone services; they don't cost anything in phone or toll calls (although many phone services don't either); they make transferring information like URLs, passwords, and instructions easier; they provide a log of everything you have said which you can refer back to; and they eliminate the problem of understanding some foreign accents which are common on helpdesks.

So I clicked the icon to start a chat which they claimed would be answered immediately. But, of course, it wasn't. It said something like "we are busy right now, leave your email address and we will quickly get back to you". So I did that and waited, and waited... and after two days I gave up. In fact, now, after almost two weeks, I have still had no follow up.

The next day I thought I would try again. I got the same error, so I clicked the chat icon again. This time it didn't offer an immediate response but I entered my question and waited... Again, there was never a response of any kind.

I tried it again the next day and got the same result, so I thought I might give the good ol' phone call a try. I tried the 24 hour line, only to be informed it wasn't available at that time. So much for 24 hours.

So I tried the Australian phone line. First, I was cut off with no response, so I tried again. I got through to a helpdesk person who had a quite heavy Indian accent. I have nothing against Indian people, but generally if helpdesks use them it is because they are cheap, so this did not fill me with enthusiasm. However I persisted.

The person made some odd comments which made me suspicious about his knowledge, but I gave him a chance to fix the problem, including letting him control the computer remotely. He eventually reached the following conclusions: this didn't work because "technology is changing quickly", the computer needed a "modern program" installed, and the firewall was configured wrongly.

None of these made a lot of sense. First, the license was purchased only a couple of weeks previously and it is unlikely anything significant had changed in that time; second, he couldn't explain what a modern program was or what program he was referring to; and third, the firewall was off, so how could it interfere with the result?

I asked politely for clarification a few times, but he just repeated the same phrases again. I was a bit annoyed by this time, so I said I had been an IT support person and programmer for 30 years and I didn't know what he was talking about. He got a little bit aggravated at that point and said "OK, fix it yourself then", although he did say to call back when/if I failed. Of course, I wasn't going to do that.

So I decided to go straight to the highest level and contacted the US phone helpdesk. No one answered and again the "24 hour" system seemed to have been a bit over-hyped.

So I tried the US on-line chat system, which I found by clicking a few links and eventually finding some contact information. This was answered after just 5 or 10 minutes, but when they heard I was installing onto a Mac the chat had to be transferred. I thought this would be the end, but after waiting just one hour a Mac person did get back to me, and this person was fairly competent. After about 30 minutes we had the answer and I got Office installed.

If you are wondering about what the problem was, it was this: the user had already installed Office on an iPad using that license key, and even though the key was good for 5 devices, you must use a different, undocumented procedure for installing after the first time. That's why I don't follow instructions!

This does illustrate a deficiency in many programs and operating systems today: they are poor at reporting errors. I often see messages like "the process could not be completed because an error occurred". Gee, thanks, that's so helpful; any further hints? And even more often you don't get a message at all; you just click something and nothing happens, or a process of some sort starts and never finishes.

The case I am describing here is something a bit different, to be fair. I did get an error message: "Error 0x000016fa" (I'm OK with hexadecimal, but many people might be thinking "what kind of number is that?") and "Your key has been blocked". Searching for that error gives about 20 different possibilities, none of which applied to my situation. So in some ways, this error message was worse than none at all.
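Just to make the point concrete, here is a minimal sketch, in Python, of the two styles of error reporting. This is purely illustrative: the names and messages are mine, not anything Microsoft's installer actually produces, and the "helpful" version is just what I imagine it could have told me.

# Hypothetical sketch of vague versus useful error reporting.
# None of these names or messages come from the real Office installer.

class LicenseError(Exception):
    pass

def activate_vague(key):
    # The style I keep running into: a bare code and no context.
    raise LicenseError("Error 0x000016fa: your key has been blocked")

def activate_helpful(key, devices_already_activated):
    # The style I would prefer: say what happened and what to do about it.
    if devices_already_activated > 0:
        raise LicenseError(
            "This key has already been used to activate Office on another device. "
            "It is valid for up to 5 devices, but installing on additional devices "
            "requires a different procedure to the first install."
        )
    return "activated"

The second version would have saved me a week of chasing helpdesks.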

And why didn't the support person from Australia know about this? Does this not happen often? And why was he waffling on about more modern programs and firewalls? If he didn't know the answer he should have said so and escalated me to a more senior person.

The whole situation is fairly terrible, and I have to admit that Apple's error messages are probably just as bad as Microsoft's. But Apple's technical support is actually pretty good.

Finally, here's the feedback I gave Microsoft. Is this too harsh? ...

How satisfied are you with this Microsoft Support experience? Poor
Microsoft made it easy for me to handle my issue? Terrible
Comments: I tried four times to contact on-line support, and three times to contact phone support. I either got no reply or promises of returned contact which never happened, and one person I talked to was incompetent, talked a load of nonsense, and was quite insulting when I questioned him. The last support person (name withheld) was very good, but the rest of my experience was possibly the worst in my long career in IT. The answer was easy in the end, but thanks to poor instructions and non-existent support, it became a lot harder than it should have been. Try Apple some time, and see how support should work!




The Latest from Apple

2021-06-16. Computers. Rating 1. ID 2133.

I thought I might take a brief break from my usual vitriolic political commentary and make a post on a more geeky subject: Apple's future path in terms of its Mac computer hardware.

The main reason I am doing this is the fact that I recently set up one of the new "colourful" iMacs that Apple released a few weeks back. I also use many other Apple products - some of them recent and some quite old - so I think I have a reasonable background for this commentary.

The biggest news in the Apple world recently has been the start of yet another major architecture transition for the Mac platform.

The first Mac, released in 1984, used a CPU (central processor unit, which is the "brains" of the computer) made by Motorola: the 68000. This was a more advanced processor than what PCs were using at the time (the Intel 8086) and Apple branded the new machines as "32 bit super-micros" (or something similar). Since then the 68000 was upgraded to the 68020, 68030, and 68040, but eventually the performance of these reached a point where it wasn't progressing as much as Apple wanted.

So they switched to the PowerPC platform. These were also superior to the CPUs being used by PCs at the time, but the consortium responsible for the PowerPC platform (Apple, Motorola, IBM) just ran out of enthusiasm. The last PowerPCs Apple used were very powerful, but they also generated a lot of heat, making them unsuitable for use in laptops, which Apple (rightly) saw as the future of computers.

So they took the easy path and started using the same Intel processors that PCs were already using. This meant that one (at least theoretical) advantage of Macs was gone because they could no longer claim to be using a superior CPU, but it did make running Windows software on Macs a lot easier, which was useful to some people.

So, as you might have guessed, Apple are doing it again. Intel seems to have hit the same brick wall as many other manufacturers have in the past. Intel really isn't going anywhere with any speed, so Apple have decided it is time to change again, to a CPU architecture they designed themselves (although some of the underlying design comes from ARM and is used under license).

The new processor is similar to that used in the iPhone and iPad. These devices are far ahead of the opposition in terms of speed, and just as importantly, power use, so Apple do have a good basis to progress the platform using this hardware. Also, it is something they can fully control themselves, instead of depending on the somewhat precarious reliance on Intel's unimpressive future roadmap.

If you are a computer geek you will see the problem here. If you're not, I will briefly explain. Every CPU is like a little brain that takes in instructions and does what it is asked to do. The problem is that every CPU "speaks a different language", so programs (which are just lists of instructions) which work on an Intel CPU will not work on the new Apple silicon.

In the past Apple have provided a "compatibility layer" which is an invisible program which translates the instructions on the fly. So instead of the CPU reading an instruction then executing it, it now reads the instruction intended for the old CPU, translates it into one or more instructions which are the equivalent on the new CPU, then executes them. This worked well for the previous processor transitions, and they seem to have done a good job again this time.
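For non-programmers, here is a toy sketch of that idea in Python. Everything in it is made up (real translation works on binary machine code, in whole blocks, with caching), but the principle is the same: read an instruction meant for the old CPU, substitute the equivalent instructions for the new one, then run those.

# Toy model of on-the-fly instruction translation; the "instructions" are
# invented for illustration and bear no resemblance to real machine code.
OLD_TO_NEW = {
    "OLD_ADD": ["NEW_LOAD_OPERANDS", "NEW_ADD"],  # one old instruction can become several new ones
    "OLD_PRINT": ["NEW_PRINT"],
}

def translate(old_program):
    new_program = []
    for instruction in old_program:
        new_program.extend(OLD_TO_NEW[instruction])
    return new_program

# An "old" program, translated so the "new" CPU could execute it.
print(translate(["OLD_ADD", "OLD_PRINT"]))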

Obviously this translation process does reduce the speed the program can run at, but it is remarkable how fast it actually is. Many people report old Intel programs running faster in the translator (known as Rosetta 2) on Apple silicon than they did directly on the Intel processors (with no need for translation) on older computers, just because the new processors are so fast.

That has been my experience with the several new Apple silicon laptops and the one new iMac I have set up. They really are fast (especially when considering they are all entry level, consumer machines), and the power use is so low that the laptop batteries last a lot longer and they generate less heat as a result. In fact, some of the basic models don't even have cooling fans.

As well as the speed and efficiency, the new processors offer another advantage: they can run programs designed for the iPad much more easily. Maybe the biggest deficiency of the Mac currently is lack of games, but the iPad has plenty. Note that mobile gaming is actually bigger than both the console and PC game markets, and the gaming market overall is far bigger than either movies or music, so gaming does matter.

Of course, it's not all good news. Now that Apple doesn't use the main architecture used by PCs (Intel and clones), running PC programs will be a lot harder. At this point it is unclear how successful PC emulation might be on the new machines. This might be a lesser problem than many would think. For example, I have a Windows virtual machine on my older (Intel) Apple laptop, but I can't remember the last time I used it. Still, it is an issue worth considering.

There are various rumours circulating now that Apple might introduce pro level processors in the near future which might feature a lot of cores. Again, for non-experts: a core is like a single processor inside a chip, and almost all CPUs now contain more than one. Each core can do some work, so a 2 core processor is (roughly speaking) twice as fast as a single core one - if the programs are written to take advantage of them.
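As a rough illustration of what "written to take advantage of them" means, here is a short Python sketch (any language would do) that spreads independent pieces of work across however many cores the machine has; without this kind of structure, the extra cores just sit idle.

# Minimal sketch: splitting an easily parallelised job across all available cores.
import os
from concurrent.futures import ProcessPoolExecutor

def do_work(n):
    # Stand-in for a real task; each call can run on a different core.
    return n * n

if __name__ == "__main__":
    jobs = range(16)
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(do_work, jobs))
    print(results)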

Currently, most processors have 4 or 8 cores, but rumours suggest Apple might give us a lot more than that in future; maybe over 100 in total. Also, Apple build different types of cores into their processors (or, more accurately, systems on a chip - SoCs). So they have high performance cores for speed, high efficiency cores for lighter tasks which need less power and won't drain the battery, specialised cores for signal processing, and graphics cores.
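If you want to see what your own Mac reports, something like this has worked on the Apple silicon machines I have tried; note that the "hw.perflevel" sysctl keys are my recollection of how macOS exposes the performance and efficiency core counts, so treat those exact names as an assumption rather than documented fact.

# Report core counts on macOS using sysctl (run from Python for convenience).
# The perflevel keys below are assumed names and only appear on Apple silicon.
import subprocess

def sysctl(key):
    try:
        result = subprocess.run(["sysctl", "-n", key], capture_output=True, text=True, check=True)
        return result.stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "not available"

print("Total cores:      ", sysctl("hw.ncpu"))
print("Performance cores:", sysctl("hw.perflevel0.physicalcpu"))  # assumed key name
print("Efficiency cores: ", sysctl("hw.perflevel1.physicalcpu"))  # assumed key name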

There seems to be a lot of potential here for building really high performance machines which are still relatively light on power use. The SOCs are also a lot smaller, and the whole board in the new iMac is about a tenth the size of the previous model, so the computers can be smaller (check the thickness of the new iMac: about 1 cm). Also, specialised processors allow encryption, security, location awareness, and other functions to be easily and efficiently implemented. Finally, because of fabrication techniques, and in-house design, the new hardware should also be cheaper!

I do have to say though, that the current processors really are intended for use in consumer level devices, so the successful transition to pro level performance is not guaranteed, although most people seem to be optimistic.

So that's the big news from the Apple world. Do many people care about this geeky technical stuff? Well, probably not. But do they care about getting a cheaper, lighter, smaller, quieter, cooler laptop, which is also faster than what they had in the past while having "all day" battery life? Probably yes!




The Internet Again

2021-05-29. Computers. Rating 2. ID 2130.

Many people claim that society is being dumbed down because of the internet. They see the fatuous drivel which is common on social media sites like Facebook and TikTok. They see the vicious attacks over politics and social justice on Twitter. They assume the internet is always bad, or at least bad on balance. They think the world was better in the past, because people were less distracted and less susceptible to misinformation. I think they are wrong.

I have defended the internet on several occasions in the past, such as in a post titled "The Internet is Best!" from 2017-03-17, but maybe it's time for a new post on the subject.

First, what about the observation that a lot of internet activity is trivial? Well sure, it's hard to disagree with that. But I think this is an example of Sturgeon's Law, which states that "90% of everything is crap". I don't think we can interpret this literally, and I'm sure Sturgeon didn't really mean it to be, but the point is that if you look at anything you will find a significant part of it is trivial, or pointless, or untrue.

So if a lot of what happens on the internet is frivolous, then I say the same applies to most other similar things. For example, I would claim most of what is seen on TV is garbage, and the same goes for radio and news services. So to anyone saying that a lot of what happens and what is stored on the internet is worthless, I would say: sure, but what's the alternative?

And one attribute of the internet which does make it potentially superior is the non-linearity and the possibility of being selective. For example, when I go to YouTube I could literally spend my whole life watching videos with no real merit at all, but I could equally spend that time watching material with considerable merit.

Yesterday I was randomly browsing through YouTube and found some semi-technical videos about the design of car engines. I watched one which discussed the pros and cons of straight, V, VR, and flat designs. The subject is really subtle and quite sophisticated and I found it really interesting. I don't know whether that knowledge I gained has any real practical use, but all accurate and non-trivial information is worthwhile, I think.

I really believe I have benefitted in two ways from the internet: first, it has provided me with information about topics I have never thought of pursuing before; and second, it has given me more balance on subjects I might only see one side of otherwise.

Regarding the first point: I have stated before that my aim in life is to know more than average about everything. This sounds quite challenging, and maybe even arrogant, but I should point out that the average knowledge on any subject at all is quite insubstantial. Also, while this is an aim of mine, I agree that I am unlikely to ever achieve it; I'm not that arrogant!

As an example of the breadth of knowledge I have achieved so far today, here are 6 documents I have downloaded or created for future reference: Cantor diagonalisation, the Planned Parenthood 2015 videos controversy, self-reference in philosophy, the T90 versus the M1A2 Abrams main battle tank, undecidable problems in computer science, and the VR6 engine.

I challenge anyone to come up with this sort of stuff without the internet! By the way, just in case you were wondering: Cantor diagonalisation is a surprisingly understandable way to prove there is more than one infinity; the video controversy is interesting and I'm not sure whether the claims have merit or not; self-reference is about models and ideas referring to themselves which often results in paradoxes; the T90 (a Russian MBT) is probably a bit better than the Abrams (American), but it depends so much on infrastructure, support, and crew training; undecidable problems are real, and a major limitation in the underlying understanding of algorithms; and the VR6 is an interesting design which achieves some benefits, especially in terms of simplicity and reduced cost.
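Since I mentioned Cantor diagonalisation, here is the flavour of the argument in a few lines of Python. This is just a finite toy of my own (the real proof works on infinite lists of infinite digit sequences), but it shows the trick: whatever list you are given, you can build a sequence that differs from the nth entry in its nth digit, so it cannot appear anywhere in the list.

# Toy version of the diagonal argument using finite binary strings.
listed = ["0110", "1010", "0001", "1111"]  # pretend this list claims to contain every string

# Flip the i-th digit of the i-th string to build something the list cannot contain.
diagonal = "".join("1" if s[i] == "0" else "0" for i, s in enumerate(listed))

print(diagonal)            # differs from string i at position i
print(diagonal in listed)  # always False, no matter what the list was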

I agree, I could just as easily have downloaded information on the Justin Bieber fan club, the Friends reunion, how someone's cat attacked their dog, or some inane drivel about the life of a celebrity, but I didn't, and that's the point. It isn't the internet which is making some people dumb, it is how they are using the internet.

As far as a greater variety of perspectives is concerned: I have to push back against the idea of the internet being an echo chamber of radicalism. Sure, that can easily happen, but it is up to the individual to try to stop themselves from falling into that trap. I try to watch one conservative or libertarian video for every liberal one I watch on YouTube, for example. I really would have no other source for this, because the news media in New Zealand is hopelessly left-biased.

And after doing this I have realised how poorly the mainstream media really serves us. I always say that after watching the TV news I know less than when I started. This is a rhetorical point, and not to be taken too seriously, but there is an element of truth there. If I see a TV news item and am really interested in it, I try to get to the truth on-line. And yes, I know I could easily be misinformed by doing that, but I am careful about my sources, and I try to deliberately look at all sides of anything which is controversial.

A recent OECD report showed that young people, who use the internet extensively (maybe even as much as me), are quite good at distinguishing what is opinion and what is fact, so the common concern that internet users can get tied up in fantasy is certainly overdone.

So yes, the internet is awesome. Is it potentially a place where I could be hopelessly ill-informed, radicalised, and brought to the brink of losing touch with reality? Absolutely! Is it also a place where I can become really well informed, fairly instructed regarding the nuances of any topic, and gain a far deeper and wider appreciation of the real world? Also, yes.

Like most tools, the internet is neither good nor bad, but it must be used with respect and skill to be genuinely useful.




Apple Bugs

2021-05-06. Computers. Rating 2. ID 2126.

My primary job is Mac computer support. I also do some programming, and repairs, and networking, and server administration, and web site creation, and hardware work, but I mainly do general support work.

Many years (yeah, OK: decades) ago, I was a PC programmer, but once I started working with Macs I realised PCs really suck, and I preferred Apple's hardware, operating system, and software. This attitude hasn't changed much since then: PCs have got better, but they still have numerous issues, and I am still a big Apple fan.

But Apple is far from perfect. Every day I encounter numerous bugs and glitches with Apple software, and at various times their hardware has been problematic as well. It has got to the point where, if I am attempting anything even remotely out of the ordinary, I am surprised if it works first time, or even works at all.

For example, I recently had to upgrade a Mac laptop to a new operating system. I had to do this because the email system the person needed to use was not supported any more on the older system. The changes didn't really give any new functionality; they just broke a lot of older email clients. Note that this was for a Microsoft email server, so I'm not really blaming Apple for that part.

Many years ago a major update was a risky procedure and I would make sure I ran a backup first. But now the process is generally far more reliable and backups take a long time, so I didn't do one this time.

As you might have guessed, things went horribly wrong. The update got to almost the last step then failed with an obscure error message which didn't seem to be documented anywhere. In fact, the disk (an SSD, in this case) was totally scrambled, and even the "heavy duty" disk recovery programs I have couldn't fix it.

Luckily the owner of the computer had run a backup quite recently, so I installed a new SSD and restored to it from the backup. But I still face a tricky attempted recovery of the newer files from the damaged drive.

Another common problem is security. I had another client who needed to log in to their Apple account, but every time they tried it accepted the password but immediately returned to the login screen again. I know the password was right because it worked on a web browser, but even a restart didn't resolve the problem.

And how often do I try to generate a code to allow two factor authentication, only for it to never arrive? I know 2FA does give some significant advantages in terms of security, but if we have to use it, please make sure it is reliable!

At this point you might be wondering why I persist with Apple products and why I don't use Microsoft, Google, or Linux instead. Well, there are two reasons for that...

First, however bad Apple is, those other companies and organisations are even worse. Problems similar to those I described above seem to happen even more often with Windows, and are often more difficult to recover from as well.

I know there are Windows users out there who claim to have had no problems at all, but I'm talking about all users, not just those who are lucky enough to have not encountered too many bugs! Also, when questioned more carefully, most people who have "no problems" in fact do; they have just learned to live with them.

Second, even when Microsoft hardware and software is working exactly as intended it is still horrible. The problem with Microsoft goes way beyond mere bugs. Their stuff is deficient to the core. It is ugly, slow, and confusing, even when it is working perfectly. At least most of Apple's products - both hardware and software - are pleasant to use when they are working properly.

And, while I have some admiration for Linux, I would never use it. In fact, I have installed Linux on my Mac and then wondered what to do with it. The fact is that with Unix on a Mac I can do anything I would ask Linux to do, plus I have a usable graphical interface, and access to a lot of great programs which aren't available on Linux.

So it's not as simple as just having a bug free system, because Linux is probably more free of bugs than either Mac or Windows. It's more about having a system with as few bugs as possible, while also having access to the software and hardware I need, which Linux can't offer.

In some ways I should be happy these bugs exist, because fixing them is a major part of my job, and maybe I would have no role at all if the Mac systems were better. Ironically, that's the exact reason I have (partly facetiously) suggested Windows support people like Microsoft so much!

I don't expect any system to be perfect, but I do expect a bit more from Apple. They have access to almost infinite resources, and charge a premium for their products, so there is no excuse for producing sub-standard products, especially the OS. On the other hand, when macOS works it is excellent. When either Windows or Linux is working perfectly, it is still horrible!






