Linux's update handling was obviously designed by a bunch of kids in their mommy's basement who spend all day on the internet, and whose mommy pays for them to have a high-speed broadband connection.
Down here in Mexico, I don't have an ultra-fast broadband connection. I don't have internet at home either. I'm just a recovering Linux fanboy trying to develop a life beyond sitting around on Slashdot and Digg all day flaming anyone who doesn't think Linux is God's gift to earth.
So, this morning, I booted into the Ubuntu install I set up last weekend (I had some time to kill before a date Saturday evening). Got on the network, then started seeing if I could download the updates to bring my system up to date.
Well, except the connection here at work was seriously lagging (it does that sometimes). The DNS broke down about halfway through the painfully slow process of seeing what packages were available to be updated.
Does the update manager bother to cache the IPs of the update servers it connects to? Nope. Does the default Ubuntu install run a caching DNS server on localhost? Nope. Does Ubuntu come with a usable compiler and development environment so I could at least build my own caching DNS server? Nope, you have to apt-get it.
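For the record, a caching resolver on localhost is a one-package fix. A minimal sketch with dnsmasq (assuming the package is already installed, which of course means the network had to work at least once):

```shell
# Minimal sketch, assuming dnsmasq is installed: cache up to 1000
# DNS answers on 127.0.0.1, then point the system resolver at it.
sudo dnsmasq --listen-address=127.0.0.1 --cache-size=1000
echo "nameserver 127.0.0.1" | sudo tee /etc/resolv.conf
```

With that in place, a flaky upstream DNS server mid-download at least doesn't kill lookups for names you've already resolved.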
apt-get is another usability nightmare. You would think apt is smart enough to figure out that anyone who wants a C compiler also wants to, you know, compile programs that run. But no. Once you get gcc, you also have to hunt down the "libc-dev" package before you can compile anything.
So, anyway, this process of looking for updates failed halfway through. I had a list of packages to update, but no idea whether some critical security update was missed. I had better things to do with my time than restart the whole "stare at the big huge package lists to see what updates I need" process.
So I booted back into Windows.
Let's compare this to Microsoft. Microsoft's update process is perfectly usable even on a dialup connection: it runs updates in the background, giving low priority to the packets used for downloading them. That requires something Linux isn't very good at: coordination. For Microsoft, getting the people responsible for keeping the operating system up to date talking to the people who implement the TCP stack is as simple as putting a few of them in a meeting room somewhere in Redmond to discuss how update downloads should behave on a slow, often-offline connection without wrecking the user's internet experience.
It's a process where, if something fails, like DNS dying halfway through, the task stops where the failure happened and can be painlessly restarted without going back to square one. Downloads can be interrupted and resumed at any time.
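There's no magic in resume-at-any-offset, by the way; it's just a byte-range request. Here's a toy local sketch of the logic (no network, hypothetical file names in /tmp), appending only the missing tail to a partial copy:

```shell
# Toy sketch of resume-from-offset: fake a "remote" file, pretend the
# download died after 4 bytes, then fetch only the remaining tail
# (this is what an HTTP Range request does for a real download).
printf 'ABCDEFGHIJ' > /tmp/remote.bin
head -c 4 /tmp/remote.bin > /tmp/local.bin          # interrupted copy
have=$(wc -c < /tmp/local.bin)                      # bytes we already have
tail -c +$((have + 1)) /tmp/remote.bin >> /tmp/local.bin   # resume the rest
cmp /tmp/remote.bin /tmp/local.bin && echo resumed  # prints "resumed"
```

In the real world this is wget -c, or an HTTP Range header; the point is the client only ever re-requests the bytes it's missing.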
The closest Linux gets is with CentOS, where I can just go here and download the updates by hand. Should the download be interrupted, I only have to re-download one package instead of the whole spiel. Once I download the updates, I then have to check, by hand, which of those RPMs are installed on my system and upgrade them. OK, I can kinda-sorta automate this with something like for a in *rpm ; do if rpm -qa | grep $( echo $a | awk -F- '{print $1}' ) ; then rpm --upgrade --nodeps $a ; fi ; done but that's a little unreliable and buggy.
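For what it's worth, rpm already has a mode that does what that loop is groping for: rpm --freshen *.rpm upgrades only the packages that are already installed. And if you do want to script it yourself, splitting the filename on the first hyphen is the buggy part, since package names themselves contain hyphens. A sketch of a more careful extraction, with a hypothetical pkg_name helper:

```shell
# Hypothetical helper: RPM filenames look like name-version-release.arch.rpm,
# and "name" may itself contain hyphens (kernel-devel, ...), so strip the
# LAST two hyphen-separated fields instead of keeping only the first one.
pkg_name() {
    basename "$1" .rpm | sed 's/-[^-]*-[^-]*$//'
}
pkg_name kernel-devel-2.6.18-8.el5.x86_64.rpm   # -> kernel-devel
pkg_name bash-3.2-33.el5.rpm                    # -> bash
```

The loop then becomes for a in *.rpm ; do rpm -q "$(pkg_name "$a")" >/dev/null && rpm --upgrade "$a" ; done, with an exact-name query instead of a substring grep, and no --nodeps needed.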
But even that doesn't hold a candle to Microsoft's update process. Don't get me started on the distributions whose makers one day lost interest in maintaining them, leaving users with no security updates whatsoever.