Thursday, April 27, 2006

The Myth of the Password Change

Eugene Spafford has a recent blog post arguing that security "best practices" are often just myths, passed along over the years with no remaining basis in fact. The example he gives is the required monthly password change, a holdover from the non-networked mainframe days that does nothing to truly increase password security in today's world. He recommends one-time passwords or two-factor authentication (tokens):

In summary, forcing periodic password changes given today's resources is unlikely to significantly reduce the overall threat - unless the password is immediately changed after each use. This is precisely the nature of one-time passwords or tokens, and these are clearly the better method to use for authentication, although they do introduce additional cost and, in some cases, increase the chance of certain forms of lost password.

I mentioned previously how dangerous simple password authentication can be, in the context of securing SSH servers. Spafford's article goes into much more detail than I did on the risks of using passwords (I only addressed one of his seven failure modes: cracking); it's definitely worth reading if you are an admin.


Tuesday, April 25, 2006

Restoring Files From an Amanda Tape Backup

I've used the University of Maryland's open source Amanda tape backup system for some time now. There is some documentation on restoring entire disks or partitions with Amanda, but restoring individual files from tape or image file wasn't that intuitive, so I thought I'd share my experiences.

Here are the assumptions I make - it shouldn't be too hard to replace these values with your own in the synopsis below.

  • Your tape drive is on host 'backupbox' as device /dev/st0
  • Your Amanda backup set is named 'DailySet1'
  • Your tape dump files are stored on backupbox in /dumps/amanda/work/...
  • The host you are restoring data to is named 'gandalf'
  • We're restoring the file /etc/samba/smbpasswd to host gandalf. The (Linux) partition this file lives on is sda8

We will use amrestore (8), not the nicer amrecover (8), simply because amrestore is universal: it works even if indexes were not generated during the backup process (i.e., if "index yes" is not set in the amanda.conf dumptype sections).

First, from a shell on backupbox, cd to the /tmp directory and rewind the most recent tape:
cd /tmp
mt -f /dev/st0 rewind
Next, we use 'amadmin' to pull dump data on the partition in question. We're doing this to find out which tape to use for the restore operation. Our best bet will be the most recent level 0 (full) backup, since we just want to restore a single file. If you were restoring an entire disk or partition, you would probably need to restore the last full backup, as well as multiple, incremental backups.

backupbox:/tmp# amadmin DailySet1 find gandalf 'sda8'
Scanning /dumps/amanda/work...
20060401: found Amanda directory.

date        host     disk  lv  tape or file                                file  status
2006-03-31  gandalf  sda8   0  TLPC01                                        12  OK
2006-04-01  gandalf  sda8   1  /dumps/amanda/work/20060401/gandalf.sda8.1    0  OK
2006-04-02  gandalf  sda8   1  TLPC03                                         7  OK
2006-04-03  gandalf  sda8   2  TLPC02                                        11  OK
2006-04-06  gandalf  sda8   2  TLPC04                                        10  OK
2006-04-07  gandalf  sda8   0  TLPC05                                        12  OK
...

backupbox:/tmp# amadmin DailySet1 info gandalf 'sda8'

Current info for gandalf sda8:
  Stats: dump rates (kps), Full:  495.0, 734.0, 740.0
                    Incremental:  479.0, 205.0, 302.0
        compressed size, Full: 17.9%, 21.3%, 21.4%
                  Incremental: 22.9%, 10.3%, 11.9%
  Dumps: lev  datestmp  tape    file  origK     compK    secs
          0   20060407  TLPC05    12  10709970  1920768  3877
...
Now we put the proper tape in the drive and rewind again. We'll use tape TLPC05, since it holds the most recent full backup.

mt -f /dev/st0 rewind
amrestore -p /dev/nst0 gandalf sda8 | restore -i -v -b 2 -f -

Note that for the amrestore command, we use the "non-rewinding" version of our tape device, /dev/nst0. This will dump us into an interactive restore mode. Here are some of the commands we can use from within the restore shell:

Command      Description
add [path]   Adds files or directories to an extraction queue
ls, cd       Show a directory listing or change directories; these work as expected
extract      Extracts queued files (including the leading path) to the current directory

We'll see some of these commands in action now as we go over how to use an Amanda dump file with amrestore.

In the output above, we saw that the file /dumps/amanda/work/20060401/gandalf.sda8.1 was available as a level-1 backup, but had not yet been dumped to tape. This happens from time to time if there is a tape error of some sort, or if the admin forgets to switch tapes. Amrestore works just as well with these files as it does with tape devices. The command to use is:

amrestore -p /dumps/amanda/work/20060401/gandalf.sda8.1 | restore -i -v -b 2 -f -
Here is what our session looks like as we restore /etc/samba/smbpasswd from this file:

amrestore:   0: restoring gandalf.sda8.20060401.1
Verify tape and initialize maps
Input is from file/pipe
Input block size is 2
Dump   date: Thu Apr  1 01:28:06 2006
Dumped from: Tue Mar 30 00:51:29 2006
Level 1 dump of / on gandalf:/dev/sda8
Label: none
Extract directories from tape
Initialize symbol table.
restore > cd etc/samba
restore > add smbpasswd
Make node ./etc
Make node ./etc/samba
restore > extract
Extract requested files
restoring ./etc/samba/smbpasswd
extract file ./etc/samba/smbpasswd
Add links
Set directory mode, owner, and times.
set owner/mode for '.'? [yn] n
restore > quit
backupbox:/tmp# ls -lR etc/
etc/:
total 4
drwxr-xr-x  2 root root  4096 Jul  7 12:37 samba

etc/samba:
total 12
-rw-------  1 root root 10622 Aug 18 16:13 smbpasswd
backupbox:/tmp#
That's it. Amrestore has now copied the smbpasswd file into the /tmp/etc/samba directory for us. Now we would just have to copy the restored file over to the host gandalf.
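A one-liner along these lines (hostnames and paths per the assumptions above; adjust to taste) finishes the job, with -p preserving the file's mode and timestamps:

backupbox:/tmp# scp -p etc/samba/smbpasswd root@gandalf:/etc/samba/smbpasswd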


Desktop Linux and Microsoft's OEM Power

TechNewsWorld has an opinion piece up about how Linux May Never be a True Desktop OS. In it, Rob Enderle reiterates all the tired, old reasons for thinking this, like "Free" not meaning "Free", and how Linux installations are too "different", raising end-user support costs. All nonsense. I've talked about this before. The real reason Linux hasn't made inroads into the desktop (and may never) is Microsoft's power in the OEM market, enforced and maintained by their draconian OEM licensing agreements. That example is rather dated, and MS has reportedly relaxed the terms to allow some cosmetic desktop and Start menu customization after the US v. Microsoft antitrust case was settled. Head over to Kuro5hin for one of the best explanations I've seen of how MS uses its OEM power to squash desktop competition. As that article indicates, MS still controls the end-user bootup process, something that killed off BeOS and, according to the US DOJ, seems to be a pending concern with Vista:

Plaintiffs have received a complaint regarding the ability of OEM's to customize the first-boot experience in Vista, and in particular concerning the Welcome Center, a new interface that presents the user with various setup options and commercial offers (presented by Microsoft and OEMs) at the end of the initial out-of-the-box experience. Plaintiffs are also talking with several industry members who have expressed additional concerns regarding aspects of Windows Vista.


Saturday, April 22, 2006

Article Roundup

The Program Manager for Microsoft Platform Strategy thinks the GPL is more restrictive than, say, this. No, really.
My general opinion on GPL version 2 or version 3 is that it's the most restrictive license you could possibly have. So the idea of freedom and the GPL are not one and the same. It restricts the author.

A comparison of the GPL and Microsoft's EULAs.

Dorkbot meetings. These sound kind of cool...

Some heavy-hitters (including Red Hat, IBM, HP, Intel, Sun, Google, and MySQL) are getting behind the Free Standards Group. Ian Murdock of Debian fame has been appointed their CTO.

There's an interesting thread over at Perlmonks about "Finding, Hiring, Inspiring and Keeping" tech workers.

McAfee: Stop Blaming Open Source Culture for Malware

McAfee has posted a whitepaper that discusses the increasing proliferation of rootkits. Nothing unusual here, especially for a major anti-malware vendor. The paper basically says that there has been a large increase in both the number and the complexity (as measured by the raw number of components per rootkit) of Windows rootkits over the last three to five years, and that the easy availability of rootkit code has made rootkits proliferate and grow more complex. They basically finger open source and the Internet as the culprits:

The "open-source" environment, along with online collaboration sites and blogs, is largely to blame for the increased proliferation and complexity of rootkit components. [p. 3]
...
Collaboration does more than just spread stealth technologies. It also fosters the development of new and more sophisticated stealth techniques. [p. 5]

I think proliferation through collaboration is so obvious that it's not worth mentioning. Crackers have been sharing malicious code for decades, first via BBSs and even printed magazines, then via the early WWW and IRC channels, and now blogs. The point is that bad guys communicate; they always have. What the paper misses is that it is probably easier now for the average script-kiddie to find exploit code, given the huge improvements in search quality over the last decade and the Internet's worldwide penetration. On the other hand, easy access to exploit code works both ways. Academic researchers, curious hackers, and even companies like McAfee also have easy access, enabling them to see how such code works and perhaps ferret out new threats earlier than they otherwise could have. This exposes a flawed (but unstated) assumption the whitepaper relies on: that most of those accessing malicious source code online will use it for malicious purposes.

As far as complexity goes, I'm not sure I see even a correlation between increased complexity and increased collaboration. Common sense says that what has made rootkits more complex is simply the increasing complexity of the modern operating system and modern countermeasures; simple necessity. In DOS times, for example, trojans and viruses were simple because the OS was simple. Remember the floppy boot-sector viruses? 512 bytes' worth of virus code.

Finally, placing the blame for rootkit proliferation on the "open source environment" is crazy. The whitepaper glosses over the fact that Linux rootkits have decreased sharply over the very same period, despite obvious growth in Linux deployments and a pre-existing culture of sharing and collaboration among Linux users.

Marcus Ranum had this to say on the very same subject in an interview last year:

If we consider the Internet as a big local network, we will see that some of our neighbours keep getting exploited by spyware, virus, and so on. Who should we blame? OS producers? Or our neighbours that chose that particular software and then run it without an appropriate secure setup?

There's enough blame for everyone.

Blame the users who don't secure their systems and applications.

Blame the vendors who write and distribute insecure shovel-ware.

Blame the sleazebags who make their living infecting innocent people with spyware, or sending spam.

Blame Microsoft for producing an operating system that is bloated and has an ineffective permissions model and poor default configurations.

Blame the IT managers who overrule their security practitioners' advice and put their systems at risk in the interest of convenience. Etc.

Truly, the only people who deserve a complete helping of blame are the hackers (emphasis added). Let's not forget that they're the ones doing this to us. They're the ones who are annoying an entire planet. They're the ones who are costing us billions of dollars a year to secure our systems against them. They're the ones who place their desire for fun ahead of everyone on earth's desire for peace and [the] right to privacy.

Friday, April 21, 2006

Does Ease of Use Make for Bad Security?

Something I've been wondering for a while: has the proliferation of web-based and GUI firewall and security appliance interfaces over the past few years actually helped network security administrators? Take as examples Check Point's policy editor, Fwbuilder, or m0n0wall. All make administering firewalls pretty easy at this point. But do they really help administrators learn their craft? Is it too easy to administer a firewall or security appliance these days? While that sounds like an odd question (easy is good, right?), what if the ease of these products instills a false sense of security in their less experienced users? To put it another way, does the "security black box" that many of these appliances have become contribute to security blunders?

This may be just another variant of "I walked uphill to school in two feet of snow every day when I was a kid...", but I learned firewall administration and networking by digging into low-level stuff: hand-editing firewall rulesets (Linux ipfwadm back then), dissecting pcap traces and syslog output, sometimes staring at driver code to figure out what an obscure error meant, and generally fixing problems by trial and error. Through this process, I learned a lot about how the security mechanisms I was using worked under the hood, which has helped me quite a bit over the years when confronted with an odd problem hidden behind a convenient interface. Other people I know had the same experience fiddling with router ACLs; their low-level work made them better administrators. This reminds me of the Joel on Software article on The Perils of Java Schools, where he argues that Java is not a "hard" enough programming language to distinguish great programmers from mediocre ones. Similarly, perhaps experience with the latest black-box appliance is not a good indicator of skill in security administration.

Does this matter anymore? Can you take someone fresh out of a CS program, plop them in front of a Check Point firewall with a CD of PDF manuals, and expect them to create a coherent and effective security policy? I don't think so. Good security is still too hard; some of the most egregious security mistakes I've seen have come from inexperienced admins using tools meant to make security "easy". If anything, these tools encourage the hiring of inexperienced security staff. It's true that everyone makes mistakes, but I think it's a matter of degree in this case.

One example I've seen plenty of over the years, in various forms, is allowing bi-directional traffic flow between hosts or networks when only one direction is needed. It usually shows up in rules like "Allow all TCP traffic on ports 1025-65535 to and from these two hosts", or "We have to allow the replies from our ISP's DNS server, so open up all traffic into our network with a source port of UDP 53". This stems from a lack of understanding of how stateful firewalls track connections, and only a basic understanding of how the underlying protocols work. But it opens up avenues of attack that would otherwise not have been present.
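To make the DNS example concrete, here is a minimal iptables sketch (hypothetical rules, not a complete policy; assumes a Linux firewall with the stateful 'state' match available). The first rule is the blunder: anything claiming a source port of UDP 53 is let in, so an attacker only has to bind their tool to port 53. The pair below it admits only actual replies to queries we initiated:

# The blunder: trust anything that claims to be a DNS reply
iptables -A INPUT -p udp --sport 53 -j ACCEPT

# The stateful alternative: permit our outbound queries...
iptables -A OUTPUT -p udp --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
# ...and then only the replies that match a query we actually sent
iptables -A INPUT -p udp --sport 53 -m state --state ESTABLISHED -j ACCEPT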


Thursday, April 20, 2006

Comments on SSH Security

There were a lot of interesting comments on my SSH post from yesterday, both here and on Digg. I thought I'd address some of the points raised, and go into a bit more detail on why you should use key-based authentication.

Shallow Defense

One poster advocated leaving password authentication in place and just using strong passwords. This is a bad idea for a few reasons. First, even if you pick a password strong enough to make brute-force attacks next to impossible, you are still relying on a single authentication factor. Depending on your circumstances, you should also account for non-brute-force attacks that can compromise passwords, like social engineering or exploiting weaknesses in the underlying key-exchange or encryption algorithms. Again, this depends on your circumstances; if you are the only SSH user and are willing to accept the risks of password authentication, it may work fine for you. Not relying on a single authentication method, applying defense in depth whenever possible, is what underlies two-factor (key + passphrase) authentication. Brute-force attacks become impossible unless an attacker also has your private key.

Automated Blocking

Quite a few posters advocated using password authentication plus a tool like Denyhosts. I've never been a big fan of this approach; I think using the security features in SSH itself makes for a simpler and cleaner solution, for a few reasons.

Single Point of Failure

Denyhosts works by periodically scanning your access logs for evidence of failed logins. What happens if this process fails (cron dies, or a software bug) and you haven't taken other steps to secure your SSH installation? Worse, such a failure would most likely be silent: you would think you were protected from brute-force attacks, but would not be.

To the developer's credit, the Denyhosts FAQ lists a few other ways of securing SSH, among them the ones I mentioned in my article yesterday. I would say to just implement key-based authentication and forget about trying to intelligently block brute-force logins. The defense-in-depth principle argues for using both Denyhosts and OpenSSH's security features, but once you implement two-factor authentication, brute-force attacks become impossible without your private key, so why bother? You could argue that it gives the bad guys a hard time no matter what, and there is something compelling about that, like using greylisting to slow down spammers. But once you eliminate the possibility of a brute-force attack succeeding, Denyhosts doesn't buy you any more security.

False Positives

Blocking a source IP address is prone to false positives. An anecdote might illustrate the point: about seven years ago, the Check Point managed security provider I worked with had a policy of adding 'bad guy' hosts to a group that was summarily dropped at perimeter firewalls. A 'bad guy' was anyone who port-scanned. The process was initially manual, and it became too time-consuming to add and remove IP addresses from the block lists. Some "port scans" were legitimate traffic; others were real port scans, but run as part of authorized security audits. The procedure was eventually dropped as too difficult to maintain. Later, they experimented with Check Point's "Malicious Activity Detection" (MAD), a scheme much like Denyhosts, which could automatically drop or reject traffic sources matching certain patterns, like port scans or SYN floods. The problem was that it was too easy to deny valid traffic, and administrators of busy networks spent too much time un-blocking legitimate users. Like other source-IP blocking methods, it was replaced by methods that filter application- or network-layer content and summarily drop traffic matching certain patterns, regardless of source IP address (these methods still have false-positive problems, but they don't pick on individual sources, so they are easier to manage).

Circumventing the IP Blockers

This is just a thought experiment, but there may be at least one way to circumvent tools like Denyhosts. Consider an attacker who routinely uses something like The Onion Router (Tor) to anonymize arbitrary TCP connections. Every minute or so, their source IP address changes (as seen from the target). The attacker could configure their brute-forcing software to make a few login attempts, then wait a minute or so before trying again from a new source IP address. They might be able to attack a host continually this way for an indefinite period without notice, depending on the Denyhosts threshold settings. Most admins won't use settings that are too draconian, because of the time spent un-blocking legitimate users who have just fat-fingered their passwords, so this probably has a good chance of working against non-root accounts, where the failure threshold is higher and strong passwords are not being used. It probably won't work at all against root accounts, or where a valid user account is not known. Given the level of underground cooperation among botnet operators, I'd be surprised if this weren't already being tried, but there is really no way to know. (It would also work against purely connection-counting methods, although in that case an anonymizer isn't needed, just a way to throttle brute-force attempts down to a few per minute.)

Anyway, this is not to suggest that Denyhosts is a bad project, or doesn't work. It obviously works well for quite a few people. It's just worth keeping in mind the points I've raised above before leaving password authentication enabled.


Wednesday, April 19, 2006

Five-Minutes to a More Secure SSH

Note: Updated (twice) below

Here is a quick way to drastically improve the security of your OpenSSH server installations. Apart from past flaws in the OpenSSH daemon itself that have allowed remote compromise (very rare), most break-ins result from successful brute-force attacks. You can see them in your firewall, system, or auth logs; they are an extremely common form of attack. Here is an excerpt from the /var/log/messages file on a CentOS Linux box (the attacking hostname has been obfuscated). You can see multiple attempts to log in as users root and ftp. Also note the time between repeated attempts, one second or less: much too quick to be a human. This is an automated attack.

Apr 9 09:34:51 server1 sshd(pam_unix)[1511]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl
Apr 9 09:34:52 server1 sshd(pam_unix)[1513]: check pass; user unknown
Apr 9 09:34:52 server1 sshd(pam_unix)[1513]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl
Apr 9 09:34:53 server1 sshd(pam_unix)[1515]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl user=ftp
Apr 9 09:34:53 server1 sshd(pam_unix)[1517]: check pass; user unknown
Apr 9 09:34:53 server1 sshd(pam_unix)[1517]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl
Apr 9 09:34:54 server1 sshd(pam_unix)[1519]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl user=root
Apr 9 09:34:55 server1 sshd(pam_unix)[1521]: check pass; user unknown
Apr 9 09:34:55 server1 sshd(pam_unix)[1521]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl
Apr 9 09:34:56 server1 sshd(pam_unix)[1523]: check pass; user unknown
Apr 9 09:34:56 server1 sshd(pam_unix)[1523]: authentication failure; logname= uid=0 euid=0 tty=NODEVssh ruser= rhost=host52-182.foobar.pl

From personal experience with clients over the years, I have found that most administrators tend to install an SSH server and leave it at its default settings, which typically allow password authentication and root logins. Many don't even know that there is an alternative (key-based authentication), or they think the alternative is too hard to use. It is not; it takes all of five minutes to configure key-based authentication and disable root logins, and the security gains are enormous. Below, I'll step you through the process, adding comments where a step may not be needed.

Configuring and Testing Key-Based Authentication

This is not really as hard as it seems. By default, all recent OpenSSH configurations allow public key authentication. You just have to generate a pair of SSH keys on the client, and append the generated public key to a file named authorized_keys on the server. This works on any Unix/Linux with OpenSSH, or even under Windows with Cygwin's OpenSSH port (I won't mention it further here, but if you are stuck on a Windows client without Cygwin, use PuTTY and check out Using PuTTYgen).

Let's say you routinely use SSH to log in from your home workstation, sshclient, as user obiwan, to a remote SSH server as user vader. For consistency's sake, we'll call the remote server sshserver. I'm assuming you already have working password authentication.

On sshclient, log in under your user account obiwan, and run the following three commands.

obiwan@sshclient:~$ ssh-keygen -t dsa

This first command generates the public/private key pair, assigns a passphrase to the private key, and stores both keys in the directory ~/.ssh by default. Just accept the defaults, and enter a passphrase when prompted.
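The exchange should look roughly like this (prompts vary slightly between OpenSSH versions; the key fingerprint output is omitted here):

Generating public/private dsa key pair.
Enter file in which to save the key (/home/obiwan/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/obiwan/.ssh/id_dsa.
Your public key has been saved in /home/obiwan/.ssh/id_dsa.pub.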
obiwan@sshclient:~$ cat ~/.ssh/id_dsa.pub | ssh vader@sshserver "cat >> .ssh/authorized_keys"

This command appends your public key, which is stored in ~/.ssh/id_dsa.pub by default, to the authorized_keys file on the remote host. One caveat: you may not have a '.ssh' directory on the remote server in vader's home directory. Create it if needed, as shown below.
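If the directory is missing, a command like this (run before the append above) creates it with safe permissions:

obiwan@sshclient:~$ ssh vader@sshserver "mkdir -p ~/.ssh && chmod 700 ~/.ssh"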
obiwan@sshclient:~$ ssh vader@sshserver chmod 600 .ssh/authorized_keys

This last command tightens the permissions on the remote authorized_keys file. This is both good practice and a fix for a common problem: with StrictModes enabled (the default), sshd will ignore key files whose permissions are too loose, silently breaking key-based authentication.

Now, test an SSH login to 'vader@sshserver'. You should be prompted for a passphrase this time, not a password. Enter the passphrase you chose when you created your keys, and you should be logged in.
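If everything is working, the login attempt should look something like this (the path reflects the default key location), asking for the key's passphrase rather than vader's password:

obiwan@sshclient:~$ ssh vader@sshserver
Enter passphrase for key '/home/obiwan/.ssh/id_dsa':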

Disabling Password Authentication

Once key-based logins are working, disable password authentication. Find your SSH server's configuration file. This is usually in /etc/ssh/sshd_config. Find the line that says:

PasswordAuthentication yes
and change it to

PasswordAuthentication no
You should also disable challenge-response authentication, in case your version of OpenSSH is using PAM to authenticate (see updates below for an explanation):

ChallengeResponseAuthentication no

Disabling Root-Logins

Finally, disable root logins. Kudos go to NetBSD and FreeBSD, whose default configurations do not allow root logins, so there will be nothing for you to do in this step if you are using a recent version of either one. OpenBSD and most Linux distributions allow root logins by default. Again in /etc/ssh/sshd_config, find the line that reads

PermitRootLogin yes
and change it to

PermitRootLogin no
Reload your SSH server's configuration with /etc/init.d/ssh reload (Debian-based Linux), service sshd reload (Red Hat-based Linux), or /etc/rc.d/sshd reload (NetBSD/FreeBSD). Alternatively, just send sshd a HUP signal with something like kill -HUP pid, where pid is the process ID of the SSH server; you can find it quickly on any Unix platform by running something like ps ax | grep sshd.
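On systems where sshd writes a pid file, a one-liner like this avoids the ps step; the path below is the usual default, but treat it as an assumption and check your own system:

kill -HUP $(cat /var/run/sshd.pid)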

That's it. There is much more to OpenSSH configuration and security, but these few simple steps will go a long way toward preventing brute-force attacks and letting you sleep at night. Repeat them on all your SSH servers, copying your SSH public key to each server as needed (it may not be obvious if you've never used key-based authentication before, but don't regenerate your key pair for each host; one pair is all you need).

Update:

One Digg reader commented that if UsePAM were set to 'yes', this would override the 'PasswordAuthentication no' setting. This is not quite true (see Update II below). I did some poking around, and it turns out this is the case only if your version of OpenSSH was compiled with PAM support. Of the OSes I looked into for this article, only FreeBSD enables PAM support in OpenSSH and sets UsePAM to 'yes' by default (since version 4.x? Anyone know for sure?). NetBSD and OpenBSD don't include PAM support in OpenSSH, and in Debian/Ubuntu/Red Hat, UsePAM defaults to 'no'. So be sure to check your sshd_config for a UsePAM directive, and change it to 'UsePAM no' if needed.

Update II:

Thanks to Darren Tucker, one of the OpenSSH/OpenBSD developers, who commented and clarified the 'UsePAM' directive. It turns out that you need to set 'ChallengeResponseAuthentication' to 'no' in order to disable PAM authentication. He correctly points out that while OpenSSH with PAM support may allow password authentication (among other methods), it uses a different method internally than standard OpenSSH password authentication. The latest FreeBSD man pages have this to say, which may help clarify the situation a bit:

Note that if ChallengeResponseAuthentication is 'yes', and the PAM authentication policy for sshd includes pam_unix(8), password authentication will be allowed through the challenge-response mechanism regardless of the value of PasswordAuthentication.


Thursday, April 13, 2006

Quit Slashdot.org Today!

This is pretty funny (especially their Top Nine Reasons to Quit Slashdot.org), although I think the producers of this website are quite serious. I have to admit spending much less time at Slashdot, and more at sites like Reddit lately.

Technical Writers and FOSS Adoption

Over at NewsForge, Bruce Byfield tells us why Technical Writers Aren't Using FOSS. What he found, after some real discussions on a technical writing mailing list, is that most tech writers don't use Free/Open-source software professionally, and that one underlying attitude emerged:

However, what is more interesting about the comments is the attitudes they reveal. To start with, none show any interest in the philosophies of either free software or open source. Most had no understanding of them. Encouraged to ask questions, those who accepted the invitation asked the most basic of questions, such as what incentive developers would have if they didn't get paid. A few attempted to debunk FOSS based on secondhand knowledge. Even more disavowed any interest in the philosophies, claiming that they were only interested in practical results. Posada spoke for many when he responded to my question about the role of philosophies by saying, "I don't care about philosophy.... I'm more interested in the speed that I can get my documentation written."

I wrote about this attitude before; it is something many geeks have a hard time understanding, even when you tell proprietary software users that FOSS will make their lives easier (I was speaking in the context of Windows malware in my previous post, but the principle applies to other benefits of FOSS). It also shows the difficulty of displacing an entrenched standard, even one that is genuinely harmful. The initial barrier to adoption (business and consumer PCs don't typically ship with FOSS), coupled with the initial learning curve (not hard anymore, just different enough to be annoying), is still way too steep for those with purely practical concerns. They've learned to deal with all the warts their proprietary software has, and don't want a new set to worry about.


Friday, April 07, 2006

Apt - Debian's Killer App

In SUSE, Fedora or Debian for sys admins: A closer look, Tom Adelstein says that while doing research for an upcoming book, he found that most Linux sysadmins prefer Debian. The reason? Apt.

Overwhelmingly, system administrators preferred apt-get for adding, removing and updating their servers. We also discovered that system admins added ports of apt-get to Fedora and SUSE. So much for yast -i. The preferred Debian administration utility drove people who used the non-commercial distributions to Debian.

I have to say I agree. I've used Debian for years and prefer it for everyday use over other Linux distros or the BSD systems. The combination of dpkg/apt and the related Aptitude/Synaptic front-ends is at once easy to use for the most common sysadmin activities, updating and installing packages, while offering plenty of features for advanced use. While RPM and dpkg are pretty much equivalent in functionality, Apt offers many more features than yum, including package pinning (sketched below) and the ability to do major system upgrades. I also find yum rather slow compared to Apt.
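As an aside, here is a minimal sketch of what pinning looks like in /etc/apt/preferences: track stable by default, but allow individual packages to be pulled in from testing with apt-get -t testing install. The priorities and release names here are illustrative, not a recommendation:

# /etc/apt/preferences
Package: *
Pin: release a=stable
Pin-Priority: 900

Package: *
Pin: release a=testing
Pin-Priority: 400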

I'll add some thoughts on the various BSDs, too. FreeBSD's binary package system is pretty easy to use, as is NetBSD's pkgsrc, but both still lack features compared to Apt. The BSDs' adherence to a unified kernel/userland makes it impossible to upgrade between major OS versions without going through a formal installation procedure, while Debian lets you do it with two commands; you can even keep your old kernel with the new userland if you want. OpenBSD always recommends using the installer's upgrade option, with plenty of manual labor both before and after.
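For the record, the two commands in question, run after pointing /etc/apt/sources.list at the new release, are:

apt-get update
apt-get dist-upgrade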

There are other reasons to use Debian, too, like sticking to Free Software ideals, security, and ease of administration, but package management with Apt seems to be the "killer feature" that keeps people coming back.


Wednesday, April 05, 2006

Article Roundup

Bruce Schneier has a contest up on his blog for people to come up with the most unlikely, but still plausible movie-plot terrorist threat. Some of the ideas are, well, interesting.

The Internet Storm Center has an interesting post detailing some anecdotes about when companies started to take network security seriously. As you might imagine, most of the stories indicate that companies are reactive when it comes to security.

The US lags behind many Asian and European nations in IPv6 migration.

All you ever wanted to know about Debian's secure Apt.

Marcus Ranum's list of the six dumbest ideas in computer security.