Friday, December 22, 2006

Article Roundup

Mark Pilgrim reviews gNewSense, the FSF-approved, entirely Free GNU/Linux distro.

Eben Moglen weighs in on the MS-Novell patent deal, as does Bradley Kuhn. From the twilight zone, Ballmer spouts off. One view from Canada says "No big deal"; Nicholas Petreley strongly disagrees.

The city of Munich is completing its desktop Linux migration.

Defmacro reminds us that LaTeX is still useful.

Interesting commentary on The New Dependency Hell over at LWN.

This is really clever. The War on Terror - as Viewed from the Bourne Shell.

Sunday, November 19, 2006

Linux, Binary Blobs and Restricted Media

There is some interesting discussion over at LWN about binary blobs used in hardware drivers. The impetus was this article, a review of Fedora Core 6 that basically said "It sucks because it has no support for modern-day web content".

I appreciate the fact that distributions like Fedora Core are still focused on free-as-in-rights software, but today's Web content requires more proprietary browser plugins than yesterday's did, and today's hardware is increasingly designed to be dependent on proprietary binary blobs in the form of firmware and driver packages. Programmers are not falling over themselves to write free replacements for these things (or they are unable to because of a lack of documentation from hardware manufacturers), and the projects that do exist are non-operational and/or several generations behind current technology.

I can certainly relate to this, but the anger is misguided. It should be directed at the hardware vendors who refuse to release specs or, at the very least, allow unrestricted redistribution of binary blobs. There are distributions that have tried to address this issue: Debian, for example, does not treat binary blobs as software, and so allows them to be shipped with the distribution without source code (for the upcoming Etch release, and assuming the blob can be legally redistributed). OpenBSD is probably at the forefront of the fight against binary blobs; some of its developers have invested significant time and energy into writing blob-free drivers for closed hardware. The list of supported wireless chipsets in the latest OpenBSD release is impressive.

There are completely different issues at work when it comes to web/media formats: patents and restrictive laws like the DMCA. The former prohibit things like mp3 support from being shipped with Fedora; the latter prohibits (in the US, anyway) support for DVD decoding (hence playback). Still, it's not that hard to get mp3 playback and DVD decoding working immediately after installation; this page seems to be the best available resource for FC6 at present. Ubuntu has a "restricted" repository that can be accessed after installation, and there is always EasyUbuntu.
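
If you're on Ubuntu, enabling the extra repositories is a one-line edit. Here's a sketch from the Dapper era - the repository components are real, though the codec package named below is just my guess at the usual suspect:

# /etc/apt/sources.list - 'multiverse' is where most
# patent-encumbered codec packages live
deb http://archive.ubuntu.com/ubuntu dapper main restricted universe multiverse

Then something like 'apt-get update && apt-get install gstreamer0.10-plugins-ugly' pulls in mp3 support for the GNOME media apps.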

One final point has to be made: I've seen too many reviews that harp on "ease of installation" (another point the aforementioned review makes). The truth is that modern Linux distributions are much, much easier to install than, say, Windows, and have better hardware support out-of-the-box, thanks to the latest Linux kernels. I think we're lucky the latest Red Hat and Ubuntu installers work as well as they do. Post-install, the same hardware that tends to be problematic under Linux requires a third-party driver download under Windows. This issue of drivers in Windows tends to get overlooked, because the vast majority of hardware you buy comes with Windows pre-installed and pre-tested. Just take a look at the driver download section of Dell's website if you want to see how frightening the situation is. As for web content, even under Windows you still have to install the Flash, RealPlayer, and Java plugins, if you want them.


Tuesday, September 26, 2006

The Perfect (Ubuntu) Linux Laptop

So I recently purchased another laptop, after surviving the last Dell fiasco. I bought an Acer Aspire 3620, and it runs Ubuntu Dapper beautifully. Every piece of hardware is supported out of the box, including the onboard wireless (I did a bit more research this time before I bought it). Here are the basic specs:

  • Memory: 512MB
  • HDD: 80GB (IDE, not SATA)
  • Ethernet: Realtek 8139
  • Wireless: Atheros 5212
  • Video: Intel 915 (1280x800)
  • Sound: Intel ICH6 AC'97
  • CD: CD-RW/DVD-ROM

So the resolution is that odd 1280x800, but I didn't need the 915resolution package to get it to work with Xorg in Dapper. Also, ACPI hibernate works perfectly, even properly unloading/reloading the wireless drivers for you. No configuration was necessary after installation.

The only thing unusual about the install was that Ubuntu's live installer crashed during partitioning, while I was trying to wipe out the original Windows partitions. I used the Debian Sarge text-based installer to create the initial Linux partitions and swap, then went back and installed Dapper with no issues.

If you are wondering about Debian proper on this laptop: Sarge still uses XFree86, as opposed to the latest Xorg, and so does not support the 915 video chipset. A way around this would be to install Xorg from Etch or backports, then the 915resolution package. Etch installed on this laptop when I tried it, but Etch networking (via udev) is, well, flaky at the moment. I also had to manually install the madwifi wireless drivers under Etch to get the Atheros chipset recognized.


Saturday, August 26, 2006

Comments on "Involuntary Ubuntu"

There is an interesting article at tbray.org where Tim Bray recounts his recent experiences with Ubuntu. He discusses a few things I've talked about before - one, that apt-get is Debian/Ubuntu's main strength, and why something like it should be in Solaris:

You know, this has been said a lot, but it bears repeating: Apt-get is just so unreasonably fucking great. Why aren’t we using it for Solaris updates? I managed to pull together the whole witches’ brew of OSS that makes ongoing go without ever leaving Synaptic. Oops, not quite true, I cruised past CPAN to get DBI and DBD::MySQL, but I’m not sure I needed to, because when I got MySQL, I saw a lot of perl-related stuff go flying by.

He's right, he did not need to use CPAN for DBI/DBD. The package 'libdbd-mysql-perl' would have pulled it in for him. 'apt-cache search dbi mysql' would list relevant packages.
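
For anyone following along at home, the whole exercise is two commands (package names are from the Debian/Ubuntu archives of this era):

# See what packaged Perl/MySQL glue is available
apt-cache search dbi mysql

# Install DBD::mysql; apt pulls in libdbi-perl (DBI) as a dependency
apt-get install libdbd-mysql-perl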


Article Roundup

Ex-parrot brings us fun with wireless Internet access thieves. This one is really funny, a real-life "Revenge of the Nerds". Of course, someone who can fiddle fluently with iptables and Perl should be using some variant of WPA, but that would not be nearly as fun.

NeoBinaries lists Five very useful Firefox extensions.

Windows DRM is done for, thanks to the program FairUse4WM. I don't think the DRM peddlers will ever learn - DRM doesn't work.

Red Hat vs. Ubuntu - my take: I don't think Red Hat is going under anytime soon. They have too much buy-in from the Fortune 500, the ex-proprietary-Unix clients.

Information Week lists the best software ever written. The verdict? Unix is #1, of course.

A programmer's Bill of Rights. Yes, coding in noisy cubicle-land with an ancient PC sucks.

International crime rings are much more of a concern than the lone 'hacker'. No surprise there, really. Identity theft is too easy, too lucrative to ignore.

Sunday, August 13, 2006

Perl Script that Alerts on Clam Anti-Virus Errors

One of the reasons I like Perl so much is CPAN, and how easy it makes writing scripts for system administration. One of my clients uses Clam AV to screen incoming mail for viruses. The updater, called 'freshclam', runs periodically and updates the virus definitions database, and also checks to see that the installed version of Clam AV is not out-of-date with respect to the database. If it is, the freshclam log file fills with messages that start like this:

ClamAV update process started at Sat May 6 04:02:09 2006
WARNING: Your ClamAV installation is OUTDATED!
WARNING: Local version: 0.88 Recommended version: 0.88.2

It turns out the messages also get returned by the Clam AV daemon when it is scanning mail. This isn't usually a big deal, but in this case, the client was using a home-grown mail system that died if the Clam AV daemon returned this error while scanning mail. As a temporary workaround (until the MTA could be fixed), I put together the following Perl script to alert me whenever this happened, and had it tested and installed within an hour. When run from the command line, it automatically daemonizes itself and scans the freshclam logfile for the above message. If the message is found, it sends an email alert (most cell phones have an email-to-SMS gateway address, which is what I use to get text alerts on my cell phone). It does not need to be run as root (and shouldn't); it only needs enough permission to read the freshclam log file.

You will need to edit the variables 'logfile' and 'recipient' at the top of the script, and you probably want to add it to your target system's startup sequence. You can download it here:

Perl script to check for and alert on freshclam errors

It's worth mentioning that there are quite a few projects that handle parsing of logfiles for certain patterns (logcheck and swatch come to mind), but they are very general, and in this case I felt a targeted solution was preferable (and faster to implement).

#!/usr/bin/perl -wT
#
# $Id: clammon.pl,v 1.3 2006/08/13 17:19:42 dmaxwell Exp $
#
# Parses the freshclam updater log, looking for messages like this
# one:
#
# --------------------------------------
# ClamAV update process started at Sat May 6 04:02:09 2006
# WARNING: Your ClamAV installation is OUTDATED!
# WARNING: Local version: 0.88 Recommended version: 0.88.2
# --------------------------------------
#
# If found, it sends an alert via email.
#
# Copyright (c) 2006, Doug Maxwell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
use strict;
use File::Tail;
use Mail::Mailer;
use Proc::Daemon;
use Unix::Syslog qw(:subs);
use Unix::Syslog qw(:macros);

# Fork
Proc::Daemon::Init;

# Clean up our environment for taint mode
delete @ENV{qw(IFS CDPATH ENV BASH_ENV)};
$ENV{PATH} = "/bin:/usr/bin";

# The logfile we are monitoring
my $logfile = "/var/log/clamav/freshclam.log";

# The regex we will test against each new line
my $pattern = qr/Recommended version:/o;

# Where to send alerts
my $recipient = '8605551212@vtext.com';

my $file = File::Tail->new(name=>$logfile, maxinterval=>120, adjustafter=>3)
    or die;

while (defined(my $line = $file->read)) {
    send_alert($recipient,$line) if ($line =~ /$pattern/);
}

sub send_alert {
    my ($recipient,$body) = @_;
    my $from = 'root@example.com';
    my $subject = "Clam AV is outdated!";
    my $mailer = Mail::Mailer->new("sendmail");
    $mailer->open({ From    => $from,
                    To      => $recipient,
                    Subject => $subject,
                  }) or log_error($!);
    print $mailer $body or log_error($!);
    $mailer->close();
    return;
}

sub log_error {
    my $text = shift;
    openlog("clammon.pl", LOG_PERROR|LOG_CONS, LOG_LOCAL7);
    syslog(LOG_INFO, "$text");
    closelog();
    return;
}

Tuesday, August 08, 2006

Questions About the Legitimacy of the Lieberman Website Takedown

Being a CT resident, I'm taking some interest in the story of Joe Lieberman's "hacked" website. According to the Lieberman campaign, their website and email have been offline for about 18 hours now. They are also claiming that this is a DoS (Denial of Service) attack, and suggesting Ned Lamont supporters' involvement (Update: now denied).

(Note: More updates below)

A few extra pieces of info you can glean from public databases, apart from what is in the linked post:

1) The hosting provider for joe2006.com (myhostcamp.com) has a /30 IP block assigned to them, meaning only two usable IP addresses, one of which is www.joe2006.com (69.56.129.130).

2) A hosting provider that has only a /30 assigned to them is not very big - most likely, they are using virtual hosting on one or two servers to provide websites for all their clients.

3) The assigned range of IP addresses, 69.56.129.128/30, is part of a much bigger range assigned to theplanet.com - a large hosting provider and hosting reseller.

4) www.myhostcamp.com - the website of the hosting provider - is offline as well, also redirecting to a 'suspended' page. This is the biggest clue to what happened.

Given the above, it looks like a small-time web hosting provider was overwhelmed on election eve/day by traffic to one of their hosted websites, namely joe2006.com. The hosting provider's (myhostcamp.com) bandwidth allocation was exceeded, causing the upstream provider (theplanet.com) to shut them down. Until some money is forthcoming from myhostcamp.com to theplanet.com, the site won't be back up (at least under the original hosting provider). We can't know for sure this is what happened; the facts just seem to point in that direction. It is certainly possible that a DoS attack took place last night/this AM but has since stopped. It would only have needed to run long enough to exhaust myhostcamp's monthly bandwidth quota.

Contrary to what others are saying, the Lieberman camp could probably still make updates to the site, since most hosting providers will use some sort of policy routing or QoS (quality-of-service) to restrict web bandwidth only. This would also explain why echo-requests (ICMP pings) sent to the IP address of www.joe2006.com have an RTT of 10ms or so - very fast in Internet terms. There must be very little traffic to that domain right now - only web traffic is being redirected to the suspended pages.
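
None of the above requires privileged access, by the way. For anyone who wants to reproduce this kind of digging, the lookups are all standard tools (the domain and IP are the ones discussed above; output elided):

# Resolve the site, then see whose address space it lives in
dig +short www.joe2006.com
whois 69.56.129.130

# Measure the RTT mentioned above
ping -c 3 69.56.129.130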

A few things are odd about all this:

1) Given that Senator Lieberman's website and associated email have been offline for over 18 hours, on the eve of a contentious election, why has the Lieberman camp allowed this to continue? As the link above suggests, a competent sysadmin could get them back online with another provider in an hour or so.

2) Why is the website being handled by such a small operation, and why were no contingency plans put in place for a race that has drawn national interest? I'd say they got some very bad advice from their hosting provider/Internet consultant.

3) Email for joe2006.com is down because the email is handled by the same server as the web traffic - not something usually done with larger domains, precisely because it's a single point of failure for the domain. Again, it would be very simple to redirect mail to another server temporarily. Why wasn't this done?

Now, we have to be careful not to blame the victim - if joe2006.com was DoS'd, there is simply no excuse, and those responsible should pay. If not, the Lieberman campaign got some very bad hosting and capacity planning advice from their Internet consultant, and should not be pointing their collective fingers anywhere but at themselves.

UPDATES: An update from DailyKos, from someone who did even more digging...and here.


Friday, July 28, 2006

Article Roundup

In honor of Sysadmin Appreciation Day (July 28th), the classic song and some humor.

A very useful site with backported Debian installer images. They mainly contain updated kernels, meaning they support lots of newer hardware, including Dell's latest PowerEdge servers.

SearchSecurity.com gives us an analysis of security patch times for various Linux distros. The fastest patchers? Ubuntu, Fedora and RHEL, with Debian a close fourth.

Yet another high-profile switch from Mac to Ubuntu Linux. This is interesting:

It has been my experience that the Mac "community" (ie, the most vocal and active of the Macintosh enthusiast and power users) tend to be incredibly negative and expect much more than they deserve.

Stifflog interviews some well-known programmers.

What the World-Wide-Web looked like in 1996, when the Internet Archive started archiving web pages. I'd forgotten how bad some of these sites looked.

Saturday, July 15, 2006

Article Roundup

The US is unprepared for a major attack on the Internet infrastructure.

A good tutorial on Securing Apache with mod_security.

Tim O'Reilly comments on recent high-profile switches from Mac to Ubuntu.

Captchas are slowly but surely falling prey to computers.

The shock of using IE when you are used to Firefox.

Randsinrepose brings us A Nerd in a Cave. Interesting commentary on Geek work habits. Those of us who are knowledge workers can relate to this:

Once I've successfully traversed my morning routine and have entered The Zone, I am OFF LIMITS. I mean it. Intruding into The Cave and disrupting The Zone is no different than standing up in the middle of the first ever showing of The Empire Strikes Back, jumping up and down, and yelling, "DARTH VADER IS LUKE'S FATHER! DARTH VADER IS LUKE'S FATHER!" Not only are you ruining the mood, you're killing a major creative work.

Quality Online Resources for Perl Beginners

Perl beginners might be tempted by online books like this one, simply because they are readily available. In this case, I would not recommend the book, even after just skimming the online chapters. One of the biggest things that jumped out at me was the chapter on CGI form processing: hand-rolled variable processing and no taint-checking are big no-no's. What explained this was the book's copyright date - 1996. A lot has happened in Perl in the past ten years, including CGI.pm and many fine templating systems.

A much better and more recent (2000) online Perl book is Beginning Perl. For simple CGI, Ovid's Perl CGI course is excellent. Beginners to Perl should also check out learn.perl.org and the perlmonks.org tutorials, in particular the Getting Started With Perl Section.


Saturday, July 08, 2006

Adventures in Linux-Laptop-Land

What is it about Dell and their laptops that they have to change the hardware every few months, even among the same model lines? I used to have a Dell Inspiron 600m, which worked quite well under Debian Sarge, with a decent X screen resolution (1400x1050), working sound, Ethernet and wireless, and working PCMCIA.

I recently got a new laptop, the next model up in the Inspiron line (the 630m), since the 600m was not made anymore. The 630m has proven to be *very* Linux- and free-Unix-unfriendly. The video is an Intel 915GM, with the LCD screen only able to do 1280x800 (a strange resolution, very wide but short). Pretty much every distro I've tried on it needs the 915resolution hack to work at anything but 1024x768. Then there is the sound card, an Intel ICH6, which needs the latest ALSA drivers (meaning a compile from source) to get working headset muting and mic. The wireless is a Broadcom bcm4318 (re-branded as an Intel, so you can't tell ahead of time), which doesn't work unless you use ndiswrapper. And there is no more PCMCIA - this thing has an ExpressCard slot, which completely pissed me off since it turned my collection of useful PCMCIA cards into junk. The SATA hard drive is not supported by the 2.6.8 kernel in Debian Sarge (oddly enough, it *is* supported by the 2.4 kernel Sarge installs by default), meaning you run 2.4 or something like Ubuntu or Red Hat. ACPI suspend worked under Breezy, but hibernate has never worked properly on this thing (perhaps I didn't try hard enough).

I originally had Ubuntu Breezy installed on it, but never could get the sound card to work properly - a problem, since I rely on a SIP phone for work. I upgraded to Dapper when it was released; still no-go on the sound card (even after installing the latest CVS ALSA drivers). Same for Debian Sarge and FreeBSD. I poked around and found that others had had some success with sound on this particular laptop under Fedora Core 5 with a particular ALSA version, so I installed it. Now sound works fully, including mic and headphone muting, but it is unusably choppy with Ekiga or any other Linux SIP phone. Grrr...

So for the past month or so I've settled on Fedora Core 5 on my laptop, sick of all the fiddling, and have been working off my cell phone when I travel. In my office I have an old PC running Debian that runs Ekiga just fine, but it doesn't travel, which defeats the biggest advantage of using an IP phone - being able to transplant your office anywhere. Fedora has its own warts (especially for someone so used to Debian after many years), but I'll save that for another post.

So, again, what is it with the current laptop market, Dell in particular? It's full of proprietary hardware that changes every few months (if I had known I was getting a Broadcom wireless chipset, I never would have bought it). I keep hoping that the laptop market will settle on some sort of standard group of hardware components with open specs, but this will probably never happen.


Friday, June 30, 2006

Article Roundup

Ivan Ristic (the author of ModSecurity), talks about some of ModSecurity's new features.

Some decent tips for securing Linux distributions, mainly concerned with Red Hat-like distributions.

Over at DesktopLinux.com, Jem Matzan comments on how desktop Linux distros are headed in the wrong direction. His main point is that developers are trying to compete with Vista and Mac OS X by incorporating eye-candy into their desktops, when they should be trying to innovate in the application space. This is probably more about KDE and GNOME, the widely-used desktops in most popular distros - they are agonizingly slow on older hardware, and even sluggish at times on newer hardware. Of course, if you are developing a commercial Linux distribution, snazzy graphics make for good screenshot fodder and increase general interest.

Bruce Byfield talks about how GPL enforcement may have a chilling effect on the smaller Linux distributions.

Mark Pilgrim updates his Ubuntu essential software list after his switch from Mac. His first one is funny:

1. Ubuntu, which is an ancient African word meaning "can't install Debian".

Interesting commentary on whether or not Computer Science Majors make good programmers. My take - not necessarily, but what is learned in a compsci degree program makes a good foundation if one is so inclined.

Monday, June 26, 2006

Requiring IE for Security Reasons - Huh?

I had an amusing exchange recently when I was calling a big-name security vendor for support on behalf of a client. I had been mildly irritated that I couldn't access their support portal with Firefox, since I wanted to open a ticket online. I suspected it was one of those 'IE-only' sites you hear about. Becoming less-and-less frequent, those. No easy way for me to test it, not a Windows box in sight...

Me: "Are you aware your support portal doesn't work with Firefox? Do other people complain about it?"
Engineer: "Yes, we get complaints about that. It only works with Internet Explorer, for security reasons".
Me: [Laughing out loud]..."You realize how that sounds?"
Engineer: "Yes, I use Firefox, too - for security reasons."
Me: "[More laughs]... Did they say if they would fix it?"
Engineer: "I don't think they will. Anyway, I don't have much of a say in the decision."

Wow. You would think a security company would know better.


Saturday, June 24, 2006

Article Roundup

Cracking buggy wireless drivers. Makes you glad some operating systems don't ship with binary-only drivers.

Debian Administration tells us about stack-smashing protection (SSP) now in Debian Sid. Also a good overview of shellcode exploits.

Yet another person who doesn't understand that the choice between Free and commercial software is a false dichotomy. I'm glad that in the end, he saw the value in and was able to make use of GNU Privacy Guard (GPG).

Steve Yegge tells us about some of the new features in the development version of Emacs. I've been using CVS Emacs (what will become Emacs 22) for a while - in fact, I'm typing this post in a version I compiled last week - and it really is a pleasure to use. Here is a detailed feature list, and instructions for checking out your own copy. Read the 'INSTALL.CVS' file after you check out the source. Be sure to report any bugs you find (I haven't found one yet in daily use).

Fedora Core 5 is one of the better Linux distributions around, according to this opinion piece. I still think most of these articles and distro reviews need to distinguish between desktop and server use (but see my next link). I guess a shell prompt doesn't make a good screenshot.

TechTarget talks about Ubuntu Dapper's bid for the enterprise server market. A good quote:

"The problem with the subscription model is that they feel a lot like licenses," Zachary said. "For Ubuntu to be different, it needs to focus on enterprise support deals. Whether Ubuntu is installed on five or more computers and they charge X amount a year on support, it doesn't matter because it decouples the install from services and makes the customers feel that they are more in control of their choices," he said.

Thursday, June 22, 2006

OpenBSD VPN Goodness

Well, OpenBSD keeps getting better and better as a firewall platform. First, pf, CARP and pfsync for failover or load-balanced firewall clusters, and now IPSec VPN failover. Sounds like it will be ready for the next release this fall. While this has been available as a feature in expensive, proprietary firewalls for some time (think Check Point), I don't know of any free-software implementation that offers this. Add to this OpenBSD's BGP and OSPF implementations, and you have a very nice, open redundant routing platform. Developments like this are a welcome relief to small businesses and others that have a hard time affording proprietary solutions, and I'm not just talking about the monetary costs. After all, you still need someone with a clue to install and support your firewalls, and those people don't come cheap. I'm really talking about the hidden costs - like vendor lock-in, license management and disturbingly bad support.
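
For a flavor of how little configuration CARP itself needs, here is a minimal sketch of a shared virtual IP on OpenBSD (the addresses, vhid and password are made up, and the pfsync interface setup is omitted):

# /etc/hostname.carp0 on the master firewall
inet 192.0.2.1 255.255.255.0 192.0.2.255 vhid 1 pass s3cret

# On the backup, same vhid and password, but a higher advskew:
# inet 192.0.2.1 255.255.255.0 192.0.2.255 vhid 1 pass s3cret advskew 100

Clients point at the shared 192.0.2.1, and whichever box currently holds the CARP master role answers for it.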


Tuesday, June 20, 2006

Article Roundup

A good interview with Eugene Spafford about the prevalence of network security risks, and how current trends are increasing them. He points to three factors:

  • Deployment of cost-saving technologies without thinking through the consequences (VOIP, wireless)
  • The disappearance of the network perimeter
  • Relying on one security vendor for all your products.

He has one interesting comment concerning the dangers of losing Net Neutrality:

A threat that is not so much technology as it is governance is the trend toward preferential treatment for commercial traffic. Big ISPs and companies are installing spam filters that block traffic from other countries, companies, ISPs or domains. It's effectively a breakdown of the end-to-end model. You cannot depend on your e-mail going through. You've got some countries setting up their own domain roots. We're losing the underlying commonality that the Internet grew on.

In No sex please, robot, just clean the floor, researchers are already starting to wrestle with a robotic code of ethics.

Yet another reason not to leave Emacs...an elisp version of Sudoku with about 200 built-in puzzles and the ability to get more from the 'Net. Four difficulty levels are supported. Put the following in your .emacs:

(add-to-list 'load-path "~/elisp")
(autoload 'sudoku "sudoku" "Play a game of Sudoku" t)
Then put the 'sudoku.el' you downloaded into ~/elisp, and run 'M-x sudoku'. You can customize the options with 'M-x customize-group RET sudoku RET'. Here's what it looks like:

[Screenshot: Sudoku running under Emacs 22]

Andy Lester talks about how geek culture can be harmful. I can definitely relate to the phrase 'flipping the bozo bit':

The Bozo Bit was introduced in Dynamics of Software Development. It's the mythical switch you flip on someone after they've done or said something that you deem stupid. It's a permanent black mark against that person, and once it's set, anything else coming from that person is deemed worthless. "And as far as his making a contribution is concerned, he's just dead weight, a bozo."

An interesting interview with Debian project leader Anthony Towns and his deputy Steve McIntyre. They talk about what lies ahead for Debian, and how well the Debian and Ubuntu projects work together.

Tuesday, June 13, 2006

Article Roundup

I came across this nifty Perl script for starting services in /etc/rc.d on Slackware (easily modified to run on other Linux or *BSD variants). This is like Red Hat's service command (e.g. 'service sshd restart'), just more concise and with fewer options, but still very usable.
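
I won't reproduce the script here, but the core idea fits in a few lines of shell, since Slackware keeps one script per service as /etc/rc.d/rc.<name> (a hypothetical minimal analogue, without the linked script's niceties):

#!/bin/sh
# svc - run a Slackware rc.d service script, e.g.: svc sshd restart
rc="/etc/rc.d/rc.$1"
[ -x "$rc" ] || { echo "no such service: $1" >&2; exit 1; }
"$rc" "$2"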

There are two articles about switching back to Linux from Mac OS X: one by chromatic at the Linux DevCenter, the other at Mark Pilgrim's blog (the author of Dive Into Python). Having never used Mac OS X, I can't say I know how they feel, but as a free-software advocate, I have never felt the urge to go the Apple route. To me, Apple is just another proprietary OS vendor, complete with closed hardware, DRM and vendor lock-in. No thanks.

I mentioned Firefox for Emacs Users in a previous post. Here is part II, with lots of tips.

Why Enterprises are Adopting Open Source.

Linux.com tells us about Using Debconf to configure a Debian system.

This is a doozy - from Google Research, Nearly All Binary Searches and Mergesorts are Broken. This is decades-old software.

Finally, a bit of humor.

Monday, June 12, 2006

Comments on Dapper Drake

There is a not-so-nice review of Dapper Drake, Ubuntu's new release, over at Tectonic. A few comments: I'm typing this on my laptop running Dapper as we speak, and it has been pretty stable for me, once I got it installed. One complaint I did share was the mysterious removal of some packages during my dist-upgrade from Breezy - Evolution, OpenOffice.org and Gaim, for example. It wasn't that big a deal - I just re-installed the missing packages afterwards - but it was still rather odd. I also tried to upgrade using the live CD, but found the installer horribly slow and the partitioning tool almost unusable; I actually rebooted into my old system to do the command-line dist-upgrade, which worked apart from the few missing packages. While the live-CD installer is a nice feature, I much prefer something like Debian's text-based installer, which is much more responsive. Text-based installers are underrated.

I do share some of their concerns about stability:

The first mistake, I think, was its desire to be a bleeding edge distribution, rather than a leading edge distro. Basing itself on Etch, Debian's unstable release, could be a problem. When the early versions of Ubuntu used the then-unstable Sarge as their foundations, it wasn't a risk -- Sarge was on the cusp of being released. Etch, on the other hand, is brand-new, and far from getting a thumbs-up as a stable distribution.

I think it's important to specify desktop vs. server here; I see a lot of reviews that make assumptions about how a system is being used. Debian Etch makes a fine desktop, even without the Ubuntu touch. I do wonder how Canonical can keep their server and desktop variants in sync with one another, however - the two have different goals. Desktop users tend to value applications and bleeding-edge hardware support, while server admins value stability. It's difficult to reconcile the two under the same codebase. Debian has been dealing with this issue for years ('Stable' is out of date, etc.), and has dealt with it pretty decently, I think (you run Debian testing or unstable if you want an up-to-date desktop). It seems difficult, if not impossible, to produce a well-tested and stable server distribution if key components like the compiler, kernel and C library are less than six months old. I've always thought of Ubuntu as a polished Debian meant for desktops, anyway, and reserve Debian stable for production servers. In the end, Ubuntu is still a rather new distribution, and it remains to be seen if they can break into the enterprise server space in a meaningful way.


Thursday, June 08, 2006

Article Roundup

I guess Emacs really can be used as an operating system.

Over at O'Reilly blogs, Brian Jepson gives us some more humor as he is outsmarted by a chatterbot - this one's for fans of Monty Python.

It seems bloggers really like Ubuntu.

Two good articles on Pre-seeding Debian installations: Part I and Part II.

A former NSA cryptologist gives us a fascinating look at breaking a 137 year-old Confederate code.

Linux.com tells us how to suspend and hibernate a laptop under Linux.

Hewlett-Packard has registered Debian Sarge as a Carrier-Grade Linux. Here is the list of other registered Linux distributions.

Vitavonni.de tells us how to easily and quickly optimize Linux ext2/ext3 filesystems.

Tuesday, June 06, 2006

Safe, Remote Firewall Management

One of the hazards of remote firewall administration is the possibility of locking yourself out after an erroneous rulebase change. It can happen with any firewall. There are various ways around this; I'm going to go over a few of them.

Traditionally, what I've used when making major (or first-time) firewall policy changes via a remote SSH session or remote GUI (e.g., fwbuilder or Check Point's "Smart" Dashboard) is a rather ugly hack: a cron job that unloads or clears the firewall policy every five minutes. If I retain remote access after a policy update, I just disable the cron entry. If I accidentally lock myself out, I wait a few minutes and establish an SSH session again. This is insecure, but presumably the "open" firewall policy would be corrected after a few minutes anyway. The cron entry (in root's crontab, of course) looks like this:

*/5 * * * * /usr/local/bin/fw-unload.sh
If we are using iptables, fw-unload.sh is something like this:

#!/bin/sh
#
# fw-unload.sh
# Clears an iptables firewall and allows all traffic
#
IPT="/sbin/iptables"

# You may want to disable forwarding until the firewall
# policy is fixed
# /bin/echo "0" > /proc/sys/net/ipv4/ip_forward

# Clear the builtin chains and
# delete any user-defined chains
$IPT -F
$IPT -X

# Flush the nat and mangle tables
for table in nat mangle
do
    $IPT -t $table -F
    $IPT -t $table -X
done

# Default ACCEPT policies
$IPT -P INPUT ACCEPT
$IPT -P OUTPUT ACCEPT
$IPT -P FORWARD ACCEPT
It can also strictly allow SSH from a single host to the firewall, which is obviously a much more secure fallback policy:

#!/bin/sh
#
# fw-unload.sh
# Clears an iptables firewall and allows only SSH
# from a single host to the firewall itself
#
IPT="/sbin/iptables"
MGMT_IP="10.1.1.1"

# Clear the builtin chains and
# delete any user-defined chains
$IPT -F
$IPT -X

# Flush the nat and mangle tables
for table in nat mangle
do
    $IPT -t $table -F
    $IPT -t $table -X
done

# Default drop policies
$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Allow SSH to the firewall
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A INPUT -p tcp --dport 22 -s $MGMT_IP -m state --state NEW -j ACCEPT
You could also modify it slightly to be a known-good policy that allows not only SSH inbound to the firewall, but also outbound traffic and NAT from a protected network, just to keep your users happy. In fact, it would be ideal if the fallback policy were our previous one... read on to see how we can do that. In any case, given that fw-unload.sh enables a firewall ruleset in itself, you should test it to make sure you can rely on it in an emergency.

For a Check Point firewall on Linux or Nokia IPSO, this would work as a cron entry by itself:

*/5 * * * * $FWDIR/bin/fw unloadlocal
If you use fwbuilder, it has a very nice feature where you can choose to always allow SSH access from a single host, regardless of the rulebase that has been applied. From the fwbuilder GUI, highlight the firewall object in question, right-click and choose Edit from the context menu, then click on Firewall Settings... (setting highlighted below):

Ensuring Firewall SSH access
Finally, here is what I think is the best solution for those using iptables-save/restore. Martin Krafft (author of the excellent book The Debian System) has posted a script that solves this problem quite nicely. I like it because it is so simple - you take a firewall ruleset in iptables-save (8) format, and feed it to the iptables-apply.sh script. It prompts you after applying the new ruleset - if you cannot reply at the prompt (within 10 seconds by default), it reverts to your old ruleset:

root@stealth:~# ./iptables-apply.sh firewall-rules.txt
Applying new ruleset... done.
Ruleset applied; are you seeing this message?
apparently not...
Timeout. Something happened (or did not). Better play it safe...
Reverting to old ruleset... done.
root@stealth:~#

Sunday, June 04, 2006

Article Roundup

GNU grep may have been around a while, but the developers are still adding new features. Among the most notable are a '-P' switch that allows the use of Perl-style regexps, and a '-o' option that causes grep to print only the matched patterns (as opposed to entire lines).
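
For example (GNU grep only, and '-P' support is not always compiled in; the logfile names are illustrative):

# Print only the matched IP addresses, one per line
grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' access.log

# Perl-style regexp syntax, e.g. \d
grep -P 'Local version: \d+\.\d+' freshclam.log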

Debian/Ubuntu Tips & Tricks tells us how to use debootstrap to Install Debian Etch From a Running Debian-based System. This is nothing new (debootstrap has been around since 2001), but it is still quite useful for installing or testing multiple Debian releases. Here are more generic instructions on using debootstrap from any RPM-based Linux distribution.
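
Basic debootstrap usage is a one-liner; the suite, target directory and mirror below are just examples:

# Build a minimal Debian Etch tree under /mnt/etch
debootstrap etch /mnt/etch http://ftp.debian.org/debian

# Then work inside it:
chroot /mnt/etch /bin/bash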

There's an interesting discussion at Martin Brown's blog on whether GNU/Linux distribution choices are based on fads or favoritism. In some ways this is a chicken-and-egg type question - popular distros like Ubuntu get that way because they are easy to install and use on many different hardware platforms. It's also probably less of an issue when you're choosing a server OS. The preponderance of laptops with proprietary and fast-changing hardware tends to guide (desktop) distro choices.

Nine live mini GNU/Linux distributions on one CD. You can choose which one to use at boot time.

Bruce Schneier talks about Aligning Interest with Capability, and opens up the can of worms (again) that is software liability (further down in the first post). Marcus Ranum has some interesting comments on software liability in another of Bruce's posts last year.

Wednesday, May 31, 2006

Article Roundup

TechNewsWorld tells us how low security risk is still a selling point for Linux.

This is very cool if you are a die-hard Emacs user - Bill Clementson discusses Conkeror in Firefox for Emacs Users. Throw away your mouse for good. Here's a screenshot showing the numbered links for easy keyboard navigation:

Conkeror

Debian Administration tells us how procmail can help handle Debian mailing lists easily.

The title says it all - Microsoft launches security for Windows. Apparently, MS now thinks it should "protect people who use its Windows operating system from Internet attacks". They are just thinking about this now? Worse, it costs $49.95 per year for up to three computers. They should be giving this away for free, since they're the ones who have been subjecting the world to their horridly insecure software for years.

Kerneltrap gives us a two-part interview with the many developers at the 2006 OpenBSD Hackathon.

Simon Cozens shares the contents of his bin directory.

Tuesday, May 30, 2006

Perl and Network Security Auditing

Network Auditing on a Shoestring tells the story of two auditors who wrote a custom, web-enabled, database-backed front end in Perl to handle the task of auditing share permissions on a 2000-user Windows network.

I have also found Perl tremendously useful for network auditing, although I tend to use it for data-munging. One of the modules I wrote is NetAddr::IP::Obfuscate, which I use to generate obfuscated Nessus reports, but which will work on any text file with IP addresses in it. I've also posted a Perl script I use to do bulk reverse-DNS lookups.
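
That reverse-DNS script is Perl, but the idea is simple enough to sketch in shell (hypothetical; it expects one IP address per line on stdin):

#!/bin/sh
# Bulk reverse-DNS: print "IP<tab>PTR" for each address read
while read ip; do
    printf '%s\t%s\n' "$ip" "$(dig +short -x "$ip")"
done

Usage would be something like './rdns.sh < ips.txt'.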

One quick story: I had a client who suspected one of his network admins was reading other employees' email, using Outlook to open and view MS Exchange mailboxes. They wanted to know who had been accessing certain mailboxes and when. I had them send me daily event logs, exported as text, then used Perl to parse the logs, looking for the mailbox events and specific user accounts, and finally generating a CSV report. (Of course, Exchange can't distinguish between accesses of a mailbox, calendar, journal, etc., but the data was still useful.)


Saturday, May 27, 2006

Comments on "Does Installing SSH Enable More Exploits Than it Solves?"

There is an article up at InformIT by John Tränkenschuh titled SSH Issues: Does Installing SSH Enable More Exploits Than it Solves?. The basic premise of the article is that SSH usage quietly enables security holes that otherwise would not have been present. The specific example given is that of SSH agent forwarding, and how compromise of the agent-forwarding host would give an intruder access to any of the end systems. While I agree that SSH can be configured in a way that makes it less secure, I don't think ceasing use of SSH is the answer (the author never explicitly states that this is his goal, but the article's title certainly suggests it). SSH can be configured securely; like any other complex security system, it just takes a little effort.

I think most admins would accept the basic premise that remote connectivity is a must in today's always-on IT environment. Widespread adoption of SSH (and OpenSSH in particular) has been responsible for a welcome downturn in the use of telnet and the Berkeley r-tools (rsh, rlogin, etc.). While most admins would also agree that discontinuing use of any remote connection protocol would enhance security, it is unrealistic to assume that suddenly discontinuing SSH usage would fix anything. Most sysadmins would find a way to do work remotely, whether by falling back to insecure protocols or by using VPN clients, and the same or worse risks would be present. Interestingly, the compromise of the intermediate agent-forwarding host in the author's example may not be the worst security risk in that case - the admin's client may be a weak link in the authentication chain if it has, say, SSH root login and password authentication enabled. An unsophisticated attacker who compromises an admin's home workstation and non-root user account with an SSH brute-force login script would be able to jump to other systems by simply scanning a shell history file and setting a few environment variables (assuming a running ssh-agent with cached credentials). The same problem exists with home VPNs used by telecommuters: a compromise of the VPN client while the tunnel is active leads to corporate LAN access. It's why companies like Check Point provide VPN clients that can be remotely configured during connection initiation to disallow any non-VPN traffic while a tunnel is active.
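
Most of the weak links named above close with one-line configuration changes. As a sketch, using standard OpenSSH options:

# /etc/ssh/sshd_config, on the servers
PermitRootLogin no
PasswordAuthentication no

# ~/.ssh/config, on clients - forward the agent only where needed
Host *
    ForwardAgent no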

Anyway, raising awareness of insecure SSH usage is certainly beneficial, so in that respect, I think the article is a good one (it is the reason I wrote Five-Minutes to a More Secure SSH, after all). I think the title could have been a bit less sensational, however.


Friday, May 26, 2006

Article Roundup

Reporting vulnerabilities is for the brave. I talked about this recently.

From the Linux Devcenter, How Shellcode Exploits Work.

How Not to Manage Geeks.

RMS enlightens us on Sun's recent decision to allow binary distribution of their JDK with the various Linux/Unix distributions. It's still not free software (in fact, it will live in Debian's non-free archive); we still have the Java Trap to worry about.

Google's Picasa is now available for Linux. Ian Murdock comments.

Ubuntu 6.06 release-candidate screenshots. OSDir also has a set for the latest Kubuntu RC.

Tuesday, May 23, 2006

Streamlining Iptables for FTP and SMB/CIFS Traffic

There is an article at nixCraft on Connecting a Linux or UNIX system to Network attached storage device. The article itself is a good one, except for the part about iptables firewall rules to permit FTP and SMB/CIFS traffic between the Linux client and NAS. The errors are common misconceptions, so I thought I'd mention them, and show the standard iptables usage.

First, iptables, like all modern firewalling systems, is a stateful firewall. That means it records the "state" of new network connections, and allows future packets that are related to or part of an established connection to traverse the firewall rules. While iptables can be used as a simple packet filter, it usually is not, since using it that way results in more complex, less secure firewall rulesets. See the resources at the end of the post for more details.

Anyway, the article in question says this:

Please note that when configuring a firewall, the high order ports (1024-65535) are often used for outgoing connections and therefore should be permitted through the firewall. It is prudent to block incoming packets on the high order ports except for established connections.

This is actually information from the Securing Samba Howto. It is misleading: if you are using a stateful firewall, you don't need to open the high ports for return traffic. A properly configured stateful ruleset will allow it automatically.

Next, the list of ports the article recommends opening is too broad. FTP and Samba/CIFS use the following ports:

TCP 21 - FTP control
TCP 20 - FTP data
TCP 135, 139, 445 - smbd
UDP 137, 138 - nmbd
We don't care about the FTP data connection (TCP 20), since it will be handled by iptables' FTP connection helper. The UDP ports 137 and 138 are used for domain browsing, and are not needed for mounting remote SMB shares. Of the three TCP ports, 445 is used by the smbmount (8) command, with a fallback to port 139 if 445 is not available.

In the network diagram given in the article, there is a Linux client with a (presumably) host-based firewall, directly connected to a NAS box. The iptables rules given for FTP and SMB/CIFS communication between the two boxes have a lot of unnecessary cruft in them, including the TCP high ports. Most host-based firewalls allow all outbound traffic, so you can simply do this:

iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state NEW -j ACCEPT
This allows all new outbound connections from the Linux host itself, plus any outbound packets that are part of established or related connections. The use of an unqualified "NEW" state here allows all but invalid packets. In fact, the INPUT chain, which is hit by packets coming into the Linux host directly (including replies to our outbound traffic), can be safely closed off to all but established or related packets in this instance:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
Just remember that you have blocked all (state NEW) inbound traffic here, so don't do this remotely!

If you want to filter outbound traffic explicitly by port, the following OUTPUT chain rules will allow FTP and SMB/CIFS mounts from the Linux host to the NAS box (I assume you have the IP address of the NAS box in the shell variable $NAS). It doesn't make sense to specify a source address here, since the OUTPUT chain is only hit by packets leaving the local host:

iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -d $NAS --dport 21 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp -d $NAS -m multiport --destination-port 139,445 -m state --state NEW -j ACCEPT

One note: don't forget to set the default chain policies to "DROP" anytime you use iptables:

iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
Finally, if you have a modular kernel (as in any Debian-based installation), you will have to load the FTP connection helper somewhere near the top of your firewall script:

/sbin/modprobe ip_conntrack_ftp
Related links:

Linux Iptables Firewall Scripts, TCP/IP and Linux Network Security with Iptables, Using Samba as a File Server, PDC or Domain Client, Accessing Windows Shares From a GNU/Linux Desktop, Iptables tutorial


Thursday, May 18, 2006

Article Roundup

Thoughts on Perl in the Enterprise.

Interesting take on the reality of Enterprise Software.

Some software humor, and more.

Why good package management is important.

Tom Adelstein on the New Era of Linux System Administration.

Automating Debian Etch installs via pre-seeding (part one).

Comments on Debian, Ubuntu, and the Future of Linux

There's an interesting blog post by Stephen O'Grady of RedMonk about Debian, Ubuntu and the Future of Linux. Basically, the author says that Debian is poised to become much more relevant in the large-enterprise Linux space, taking its place alongside Red Hat and SuSE, with a little help from Ubuntu:

...from where I sit it seems entirely possible that Ubuntu and its corporate parent Canonical could be tabbed as the corporate interface into the Debian community. It's difficult to imagine large ISVs such as IBM or Oracle dealing effectively with the Debian community, due to the cultural gap alone. Canonical, however, would seem to be an effective bridge between the two parties, having as they do one foot solidly in the Debian community with the other in business models the ISVs would understand.

I agree with part of this - right now, Debian and Ubuntu don't exist without one another. What's good for one is generally good for the other. This may have been incredible foresight, as Sun recently announced it was going to support Ubuntu.

On the other hand, the reality of Debian is that it is probably used in many more "large enterprises" than we know (I know of one Fortune-100 firm that has a large Debian server deployment, although they don't make it publicly known). While it's easy for Red Hat and Novell to track large-scale Linux deployments, it's nearly impossible for Debian, unless that information is offered. For server deployments, Debian Stable is, well, seriously stable. This is obviously something large companies look for in server deployments. As far as big-name support goes, HP has been offering Debian support on their own hardware for some time now - much as Sun will be providing support for Ubuntu on their hardware. There are also lots of smaller firms and independent consultants (many of them Debian developers) that provide Debian support.


Wednesday, May 17, 2006

Stealth and Security With Filtering Bridges

There is a good tutorial on bridging under Linux at Nepotismia. For those who don't know, bridging is a way to transparently connect and forward traffic between two networks. Because they operate at layer 2 (the data-link layer), bridges work independently of the protocols above them. Here is another good overview of Linux bridging. Pure bridging devices have largely been replaced by switches, but dedicated bridges can still be useful when combined with packet filtering. I've used bridging firewalls in a few situations over the years.
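
On Linux, building a two-port bridge takes only a few commands with bridge-utils (interface names below are examples); packets crossing it can then be filtered with ebtables or iptables' physdev match:

# Join eth0 and eth1 into a bridge; note the bridge needs
# no IP address to forward traffic
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up
ifconfig br0 up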

In one, a client had a proprietary application housed on a dedicated server (Windows-based) that was supplied and pre-configured by the vendor. One interface on this device had a public address used for remote management, and the other interface had to be connected to their LAN. Despite perimeter firewall rules that limited access to the device from the Internet, the customer did not trust the security of the device - it was basically an unknown risk to them. What we ended up doing was placing a bridging firewall between the device and the rest of the LAN (really the switch port it was connected to), allowing transparent filtering of packet flow to and from the device. The bridge in this case was a m0n0wall (a great firewall in its own right, also with bridging capability) on a Soekris box.

In another, I was doing remote penetration testing and had to satisfy the client's demands that the testing platform was segmented from the rest of our network, so that any data collected during the test could be kept secure. In this instance, I opted for a spare PC running OpenBSD, configured as a filtering bridge. The advantage of doing this was that it did not impact the layout of the LAN, as the bridge had no IP addresses - basically an "invisible" firewall. The testing server was allowed full outbound access, but no inbound network access to the server was permitted.

Bridging firewalls also turn out to be useful in honeynets. All-in-all, a very useful addition to your networking toolkit.


Monday, May 15, 2006

Article Roundup

Taosecurity on why Prevention Can Never Completely Replace Detection. The IPS hype.

Using Ftester to test your Linux firewall. This is useful with any firewall, not just Netfilter.

IBM Developerworks brings us SELinux From Scratch.

Becoming productive quickly as a Unix developer.

More Emacs goodness - Emacs/del.icio.us integration.

Nothing whatsoever to do with sysadmin or security, but laugh-out-loud funny.

Sunday, May 14, 2006

Is Anti-virus Software Really Necessary?

There is a blog post from May 9th titled Linux Security - The Illusion of Invulnerability over at viruslist.com (Kaspersky Lab's blog). This quote sums up the theme of the post:

At the Kaspersky stand we talked to a lot of visitors. Pretty soon, it dawned on us exactly what the biggest threat to Linux systems is: the almost overwhelming belief in the invulnerability of Linux

I think they have it wrong - it's not belief in invulnerability, and it's not just Linux. It's a belief that "Yeah, it could happen to me, but it probably won't" - and you could envision users of OS X, Windows, or any OS saying this. But the quote presupposes that there is a need for anti-virus software at all. Sound crazy? Perhaps not. Here are a few questions to think about:

  • For some reason, the metric used to judge anti-virus products is how quickly they release signature updates to counter new threats. This seems backwards to me. Has any anti-virus vendor ever done research on how many infections their software has prevented, and what the impact could have been?
  • More to the point, is anti-virus software really a valuable part of an IT security policy, or are there better ways of preventing viruses/malware?
  • Why does it seem that despite the entire world running Windows desktops, and almost all of those running some form of anti-virus software, there are still major virus outbreaks?
  • Does it help to divide malware threats into known and unknown categories? Clearly, antivirus software protects against the former, but not the latter.
  • Does reliance on a single security product give a false sense of security? For example, a common misconception is that a firewall is all one needs for protection against external network threats. The truth is much more complicated than that, as most security practitioners know.

This question of whether or not you really need anti-virus software is answered quite well at vmyths.com:

If an expert proclaims you need antivirus software to protect you from a virus, you can counter with the following argument:

If we'd turned off automatic macro execution in Word before Melissa came along, then our PCs wouldn't have gotten infected. If we'd turned off Windows Visual Basic Scripting before ILoveYou came along, then our PCs wouldn't have gotten infected. This means our PCs could have protected us even when antivirus software failed to do its job. Perhaps we don't need to update our antivirus software so often -- maybe we really just need to update our antivirus experts.



Comments on Switching From Solaris to Linux

There's an interesting post and discussion at Blog O' Matty about why people are switching from Solaris to Linux. I suppose it's a matter of familiarity, but I could never get used to the Solaris way of doing things. Why can't Sun just ship their OS with all the GNU and other packages like OpenSSH that everyone just installs anyway?

Redhat Linux ships and provides regular updates for numerous opensource software (e.g., postgres, MySQL, Apache, Samba, Bind, Sendmail, openssh, openssl, etc), where Sun keeps trying to sell customers the Sun Java One stack, "modifies" an opensource package and diverges the product from what is available everywhere else, and fails to provide timely bug fixes and security patches for the opensource packages that are shipped (Apache, MySQL and Samba are perfect examples) with Solaris.

I agree that the support for most free/open source software under Solaris seems incomplete at best. Sun's SunSSH is a perfect example of this. Red Hat is very involved in the open source community, providing upstream patches regularly to major projects. The software they ship with RHEL and Fedora is also reasonably current.

As for package management:

5. Managing applications and patches on Solaris systems is a disaster, and redhat's up2date utility is not only efficient, but has numerous options to control the patch notification and update process...

6. Staying on the cutting edge with Nevada is difficult, since there is currently no way to easily and automatically upgrade from one release of Nevada to another. On Fedora Core servers, you can run 'yum upgrade' to get the latest bits. Having to download archives and BFU is tedious, and most admins don't want to spend their few spare cycles BFU'ing to new releases.

Too true - yum is very nice - although I think Fedora is a bit too bleeding edge for production server use. Admins who have not discovered Debian's package management don't know what they are missing. I'm not sure why admins still feel the need to waste time manually upgrading and patching systems. Debian stable and RHEL automate these mundane tasks quite nicely, and both provide timely security updates to reasonably up-to-date software.
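For what it's worth, "automate quite nicely" boils down to a one-liner on either platform. A minimal sketch (assuming a stock Debian stable box and a yum-based Red Hat system; run as root):

# Debian stable - refresh the package lists, then apply pending updates
apt-get update && apt-get upgrade

# RHEL/Fedora - the same idea with yum
yum update

Run these from cron (or use a helper like cron-apt on Debian) and those few spare cycles can go to something more interesting than BFU'ing.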


Friday, May 12, 2006

Article Roundup

Given the seemingly abysmal state of network security these days, especially web application security, I thought I'd share an old (in Internet terms) but useful link for those developing Perl CGI applications with an eye towards security.

A very good article explaining why Lisp is the way it is and why you, the programmer, should care. Lisp advocacy with a different approach.

Debian-Administration (an excellent site, BTW) gives us Automating new Debian installations with preseeding.

Linux Format interviews Novell's Greg Mancusi-Ungaro, the Director of Marketing for Linux. One of his comments is interesting:

LXF: Do you think that any company can be the Linux equivalent of Microsoft, given that it's an open source OS and people can do pretty much what they want?

GMU: Well, if we ever woke up one day and said 'Wow, Novell is the Microsoft of Linux' or 'Red Hat is the Microsoft of Linux', then the Linux movement would be over. What you want to say is, Novell is the enabler - the company that's enabling Linux to be successful. But Linux is largely held in trust by the community, and Novell is making Linux work for large enterprises. That's a very different thing. Microsoft controls everything; it can make nations change their mind, and not in a good way I don't think.

I agree with this completely - no one company has a lock on Linux (the kernel proper), nor on any of the most useful server, development or desktop apps, like GCC, OpenOffice.org or Apache. I suppose you could lock yourself into one Linux vendor by becoming completely reliant on a proprietary app that only supported one distribution, but that would be the fault of the application, not the operating system.

Wednesday, May 10, 2006

Security Research and Computer Crime - Where do we Draw the Line?

This is interesting - the case of Eric McCarty, a security researcher and sysadmin charged by Federal prosecutors last month with "knowingly having transmitted a code or command to intentionally cause damage" to the University of Southern California's applicant website (I noticed the FBI press release uses the word "sequel" instead of SQL. I hope that wording didn't come from the complaint itself...).

Apparently, McCarty exploited a SQL injection flaw to access student data (which included social security numbers and dates of birth) in the database backing USC's website. He then notified SecurityFocus via email, who notified USC of the vulnerability. USC shut their site down for two weeks while it was being fixed (my guess is the "damage" comes from the fact that USC had to take their applicant website offline, since McCarty didn't do anything malicious with the information). Here is the text of the statute he is alleged to have violated (see section (5)(A)(i)).

This case, and others like it, shows the ethical conflict involved in some computer crime prosecutions. In particular, it reminds me of Randal Schwartz's case from way back in 1993, which raised the same issues - Schwartz was running the crack program to disclose weak passwords, but without authorization. In the end, he was convicted of three state felony charges.

Unfortunately, the law is pretty clear in these cases. It appears McCarty violated Federal law, committing a felony that could land him in jail for up to ten years. That seems quite excessive given McCarty's intent. Perhaps various Federal and state computer crime statutes need exceptions for legitimate security research, although the question of what is legitimate and what is not could be unclear. Sure, USC had to take down their site for two weeks (does anyone else think that's a long time to fix a SQL injection bug?), but just think of how long it would have been down after a real compromise. Then again, while such an exception might encourage security research, it could also become a loophole for crackers with malicious intent to escape prosecution: "Really, detective, I was just testing their security". Web application security testing can also be dangerous, unintentionally resulting in database or web server outages. For now, the risks of doing "stealth" research just are not worth it, whatever the intent.


Tuesday, May 09, 2006

Article Roundup

Three hackers from the University of Toronto have developed distributed proxy software that is set to be released at the end of this month. Called Psiphon, it is meant for use inside countries with restrictive Internet access policies.

NetBSD developer Hubert Feyrer describes a cool way to use qemu during sysadmin training. See also my qemu howto.

Perl surprised some at the Coverity open source code analysis project by having the lowest defect density of the three LAMP languages - Perl, Python and PHP. Here is the quote:

...surprised, however, by the performance of the Perl language. Of the LAMP stack, Perl had the best defect density well passed standard deviation and better than the average, Chelf said. Perl had a defect density of only 0.186. In comparison Python had a defect density of 0.372 and PHP was actually above both the baseline and LAMP averages at 0.474.


Speaking of Perl, Leo Lapworth brings us The Ultimate Perl Module List.

Interesting article and discussion on Why Business Needs More Geeks.

How even geeks can stay in shape.

Is the US lagging behind in open source adoption?

Sunday, May 07, 2006

It's the Other Apps, Stupid

I came across this funny blog post - Emacs Key Bindings Make You Retarded. Of course the post is decent satire, but I think it's the other apps' key bindings that make you retarded. Really. It drives me crazy when I'm in Emacs after having used vi for a few agonizing hours on some lame, minimalist system and "Esc :wq [Enter]" gives me a Lisp evaluation error. So I just try to do everything in Emacs (including my blog posts). It's easier that way. For those times when I can't be in Emacs, Firefox can get Emacs key bindings with a few tweaks (easiest under GTK+/Gnome; a sketch of the tweak is below) and OpenOffice.org's key bindings are completely customizable (Tools->Customize->Keyboard).
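For the curious, the Firefox/GTK+ tweak I'm referring to amounts to switching the GTK+ key theme to Emacs. A minimal sketch, assuming GTK+ 2 (the gconf key is for Gnome desktops):

# ~/.gtkrc-2.0 - give all GTK+ 2 apps (including Firefox) Emacs-style key bindings
gtk-key-theme-name = "Emacs"

# or, under Gnome, set the equivalent gconf key:
gconftool-2 --type string --set /desktop/gnome/interface/gtk_key_theme Emacs

After restarting Firefox, C-a, C-e, C-k and friends behave sensibly in text fields.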


Friday, May 05, 2006

Article Roundup

Here's an article about the next version of Ubuntu being ready for the Enterprise. Of course, that phrase "ready for the Enterprise" is over-used marketing hype (what does it really mean?). I suppose if it means that Ubuntu is stable, well-supported, with predictable release cycles, then I would wholeheartedly agree.

The Firefox power-users guide. A great list of tips and useful plugins.

Does Wardriving matter? Is wireless security as bad as we are told? I think it's probably worse. If you ran a business, would you make a corporate security breach public (absent statutory requirements, of course)?

This is hilarious satire.

Linux has gotten fat.

Thursday, May 04, 2006

Save the Internet - Sign the Online Petition

Head on over to MoveOn.org and sign this online petition to help prevent the large telcos from destroying net-neutrality.

Tuesday, May 02, 2006

Accessing Windows Shares From a GNU/Linux Desktop

Many people run a GNU/Linux desktop at companies with a Windows-based infrastructure. You could use something like VNC or rdesktop to get at files on Windows shares, but a couple of command-line utilities from the Samba suite let you reach those shares right from your desktop (for a more in-depth treatment of Samba, see the presentation Using Samba as a File Server, PDC or Domain Client).

Let's say your company has a Win2k3-based fileserver, call it "FILESRV", in the domain "DOM". The problem is that you have a Linux workstation, don't have access to a Windows terminal server, and don't have Administrative rights on the server in question, so you can't install VNC. The 'mount' command (which uses 'smbmount' under the hood when asked to mount a Windows share), 'smbclient', and OpenOffice.org's import/export filters provide a nice alternative. On a Debian-based system, run 'apt-get install sudo smbfs smbclient' to get the command-line utilities described below (Ubuntu systems will already have sudo).

File Transfers With Smbclient

The 'smbclient' command from the Samba suite of tools provides an FTP-like command-line interface to a Windows fileshare. To make it easier to use, and so we don't have to remember all the command-line options, we are going to define a shell alias. Put the following in your ~/.bashrc file (using .bashrc means this alias will be available to you in any interactive shell, not just a login shell).

alias files='smbclient //FILESRV/path -d 3 -A ~/.dom.txt'
Where the file ".dom.txt" in your home directory is in this format (see smbclient(1) for details):

username = username
password = password
domain = DOM
Some tips:
  • To make this alias apply to all users, put it in /etc/bashrc, not ~/.bashrc. Each user will still have to have their own ~/.dom.txt file, however.
  • The "-d 3" sets the debug level to something useful, in case the command fails - you can remove this once your alias works correctly
  • Using the "-A" option like this keeps you from having to give a username and password every time you run the alias (but see the note about security, below).
  • You can put the share path in double quotes if it contains spaces, as in "//FILESERV/Tech Docs".
  • You can use an IP address in place of the server name if you want.
  • Running the 'alias' command will display a list of aliases that are currently defined.


When you are done editing ~/.bashrc (or after each editing session), you need to re-evaluate your .bashrc by running '. ~/.bashrc' or 'source ~/.bashrc'. Now when you run the command 'files', you should get dumped into an FTP-like interface from which you can get or put files from the remote share:

dmaxwell@stealth:~$ . .bashrc
dmaxwell@stealth:~$ alias
alias files='smbclient //10.0.0.2/Tech -A ~/.dom.txt'
dmaxwell@stealth:~$ files
Domain=[DOM] OS=[Windows Server 2003 3790 Service Pack 1] Server=...
smb: \> get "ftp fix.doc"
getting file \ftp fix.doc of size 24064 as ftp fix.doc (195.8 kb/s)
smb: \> quit
dmaxwell@stealth:~$

Transparent Access With Mount

Accessing Windows shares with the mount command is a little more convenient, since once mounted, you can use 'ls', 'cp', 'mv', 'mkdir' and all the other Unix filesystem commands you are used to. Let's say that you want transparent access to the Windows fileshare noted above (//FILESRV/path). First, create a directory with something like 'mkdir ~/files'. This directory is where the remote filesystem will get mounted. Then add the following to your ~/.bashrc (we are using sudo to let us run the mount command with the required root privileges):

alias filemount='sudo mount -t cifs //FILESRV/path ~/files -o username=user,password=pass,workgroup=DOM'

Make sure your alias definition is on one line - your browser might wrap the display, above. 'DOM' here is the name of your Windows domain, 'user' and 'pass' are your Windows domain credentials. When you are done, source your .bashrc again, and run the command 'filemount'. You should now be able to access the files in "//FILESRV/path" from within your ~/files directory.
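A hypothetical session once the alias is in place (the file names here are made up for illustration):

dmaxwell@stealth:~$ filemount
dmaxwell@stealth:~$ ls ~/files
budget.xls  install-notes.txt  specs
dmaxwell@stealth:~$ cp ~/files/install-notes.txt ~/docs/
dmaxwell@stealth:~$ sudo umount ~/files

The last command unmounts the share when you are done - you could wrap it in a 'fileumount' alias as well.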

Accessing HOME$ Shares

Sometimes a Windows admin will map a hidden share to everyone's Windows desktop, to use as a private storage space. This is usually called the "HOME$" share, although you won't see it if you try to browse the network (in a Windows-sense). To access it, make a directory in your home directory, again as a filesystem mount point. I'll use 'z' as the name of the directory, since a lot of Windows admins map the Z: drive to users' home shares at login.

Add the following to your .bashrc:

alias homez='sudo mount -t cifs //FILESRV/HOME$ ~/z -o username=user,password=pass,workgroup=DOM'

That's it. When you source your .bashrc again you should be able to run the command 'homez' and have full access to your private Windows home share from within your ~/z directory.

Security

A final word about security is in order. You may want to change the permissions on the "~/.dom.txt" and "~/.bashrc" files to 0600, to prevent other non-root users on your workstation from reading the passwords stored in those files. Even though this is really just 'security through obscurity', the alternative - typing the password in every time you mount or access one of the remote shares - is less convenient. The smbmount(8) command (and mount(8), since mount just passes its options on to smbmount) supports a 'credentials' option that allows the use of a file when authenticating, much like the '-A' option to smbclient.
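A minimal sketch of that approach, reusing the ~/.dom.txt file from the smbclient alias above (the credentials file format is essentially the same - check smbmount(8) if in doubt - and the /home/user path is a placeholder, since '~' may not expand the way you expect inside an alias passed to sudo):

chmod 0600 ~/.dom.txt
alias filemount='sudo mount -t cifs //FILESRV/path ~/files -o credentials=/home/user/.dom.txt,workgroup=DOM'

This keeps the password out of ~/.bashrc entirely, so only ~/.dom.txt needs the restrictive permissions.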

Compatibility

These aliases will work on pretty much any (recent) GNU/Linux system with Samba 3.x. I've written them with the Bash shell in mind, as it's the installed default on most of these systems. What may differ under other shells is the name of the shell startup file and the syntax for defining aliases (although the above syntax will work under any Bourne-compatible shell).

If you have trouble getting the mount command to recognize the '-t cifs' option, try it with '-t smbfs' instead, but you may not be able to access Win2k3 shares easily if you do this. Some Unix systems' mount commands don't support passing the smbfs or cifs filesystem types onto smbmount - in this case, you can use smbmount(8) directly.
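For example (a sketch - the exact options accepted vary with your Samba version, so see smbmount(8)):

sudo smbmount //FILESRV/path ~/files -o username=user,password=pass,workgroup=DOM

This skips mount's filesystem-type dispatching and invokes the Samba helper directly.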


Thursday, April 27, 2006

The Myth of the Password Change

Eugene Spafford has a recent blog post on how security "best practices" are often just myths that have been passed on over the years, and have no current basis as a true best practice. The example he gives is the required monthly password change, which is a holdover from the non-networked mainframe days of old, and does nothing to truly increase password security in today's world. He recommends one-time passwords or two-factor authentication (tokens):

In summary, forcing periodic password changes given today's resources is unlikely to significantly reduce the overall threat - unless the password is immediately changed after each use. This is precisely the nature of one-time passwords or tokens, and these are clearly the better method to use for authentication, although they do introduce additional cost and, in some cases, increase the chance of certain forms of lost password.

I mentioned previously how dangerous simple password authentication is in the context of securing SSH servers. Spafford's article goes into much more detail than I did on the risks of using passwords (I only addressed one of his seven failure modes - cracking); it's definitely worth reading if you are an admin.
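As a refresher, the change I recommended boils down to two lines in /etc/ssh/sshd_config (this assumes OpenSSH, and that you have already distributed public keys - turning passwords off without keys in place will lock you out):

# /etc/ssh/sshd_config - refuse passwords, require public keys
PasswordAuthentication no
PubkeyAuthentication yes

Restart sshd after editing, and test from a second session before logging out.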
