Thursday, March 30, 2006

Linux Iptables Firewall Scripts

Here are two iptables firewall scripts you can use to quickly firewall any recent (kernel 2.4 or 2.6) Linux system (I posted an update, below).

The first, 'firewall.sh', is a script meant to protect a SOHO or home office network behind a dual-homed (two interface) firewall. It doesn't support DMZ hosts, but does support the most common scenario of SOHO or home firewalls doing double-duty as SSH or web servers, for example. It features syn-flood protection and rate-limiting for log entries.
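To give a sense of what those two features involve, here is a minimal sketch of the kind of rules used (illustrative numbers only, not the actual values from the script):

# send all new TCP connections through a dedicated chain
iptables -N SYN_FLOOD
iptables -A INPUT -p tcp --syn -j SYN_FLOOD
# allow a modest rate of new connections; drop anything beyond that
iptables -A SYN_FLOOD -m limit --limit 2/second --limit-burst 6 -j RETURN
iptables -A SYN_FLOOD -j DROP
# log what is about to be dropped, but never more than a few entries per minute
iptables -A INPUT -m limit --limit 5/minute -j LOG --log-prefix "INPUT drop: "
iptables -A INPUT -j DROP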

The next script, 'bastion-host.sh', is much simpler, and is meant to be used on any singly-homed host directly connected to the Internet, like a home workstation or laptop. It drops all inbound connections by default.

You can download the scripts here:

Dual-homed Linux firewall script
Singly-homed Linux firewall script (bastion host)

The way I like to use these scripts on a Debian or Ubuntu system is as follows (you can see some other methods in the Securing Debian Manual):

Put your chosen script in /sbin, and make it owned by root with mode 0700:
sudo cp ./firewall.sh /sbin && sudo chown root:root /sbin/firewall.sh && sudo chmod 0700 /sbin/firewall.sh
Edit /etc/network/interfaces, and add the following line to the interface stanza of your external interface (usually eth0):
pre-up /sbin/firewall.sh
So the stanza for your external interface will probably look something like this when you are done:
iface eth0 inet dhcp
    pre-up /sbin/bastion-host.sh
or
iface eth0 inet static
    address 10.1.1.254
    netmask 255.255.255.0
    gateway 10.1.1.1
    pre-up /sbin/firewall.sh
Update: You can use shell syntax to make working with iptables a bit easier when you have a lot of IP addresses to consolidate under one access rule. For example, you can allow SSH connections to your firewall from only certain IPs like this (where $IPT is the iptables command, as defined in the script):

SSH_IN="192.168.1.1 172.16.0.1 10.1.1.1"
for ip in $SSH_IN; do
    $IPT -A INPUT -p tcp --dport 22 -s $ip -m state --state NEW -j ACCEPT
done

Then you just need to edit the SSH_IN variable when you need to alter the access list. You can also use the Ipset extension to match multiple ports and IP addresses directly in iptables rules.
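Here is a rough sketch of the ipset approach (the set name is arbitrary, and the exact syntax varies between ipset/iptables versions - newer releases spell these commands 'ipset create', 'ipset add' and '-m set --match-set', so check your man pages):

# create a hash-of-IPs set and populate it with the allowed SSH clients
ipset -N ssh_in iphash
for ip in 192.168.1.1 172.16.0.1 10.1.1.1; do ipset -A ssh_in $ip; done
# a single rule then matches the whole set
$IPT -A INPUT -p tcp --dport 22 -m set --set ssh_in src -m state --state NEW -j ACCEPT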

Technorati Tags: , , ,

Tuesday, March 28, 2006

Open Source Advocacy and Looking Good

The former CIO of Massachusetts, Peter Quinn, says that Open Source advocates don't dress the part for business, and this hampers Linux and Open Source adoption by businesses and governments.
"Open source has an unprofessional appearance, and the community needs to be more business-savvy in order to start to make inroads in areas traditionally dominated by commercial software vendors. (Having) a face on a project or agenda makes it attractive for politicians (to consider open source)."

He went on to suggest that while the open-source community was slowly beginning to come to terms with the need to dress for success, doing so is a "huge education process."

Business-savvy, yes. Dress code, no. As someone who has been part of both the Open Source and business cultures, I can tell you that idea is mostly crap. Yes, the "sandal and ponytail set" doesn't make a good impression on potential customers, but neither do most developers want to play the role of pre-sales engineer, or really have anything to do with clients. It has nothing to do with how they dress; it has to do with how they interact with other people on a social level. Putting hordes of Open Source developers in suits and sticking them in front of government panels won't improve Open Source adoption. I'd bet that if you put Theo de Raadt in a suit he would still manage to piss off DARPA. (There has been a lot written on dress codes for techies; forcing geeks to dress in business-casual isn't the way to improve their image.)

A previous employer of mine instituted a business-casual dress code that applied to everyone, even developers and engineers. The official reason was that it looked more professional when clients or potential clients visited. All it really accomplished was pissing off people who had been used to jeans and t-shirts for years. Managers, salespeople and pre-sales engineers already dressed formally for client visits, and here's a surprise - they made use of technical talent by phone only, never putting them in front of clients unless it was a dire emergency, and even then, it was usually to fix some hairy technical problem where they didn't have to speak to anyone. Just quickly usher them into the server room and leave them alone.

Another quote by Mr. Quinn:

"(I blame them) for not understanding what it is that they do, for spending too much time talking and thinking in technology terms, and not thinking in terms of business"

Sheesh. Here's a clue - it's what they do; most are not capable of, or not interested in, the effort of speaking down (as they see it) to non-tech types. Be happy that there are a few Open Source developers who have business sense. You should use them as a resource.

Technorati Tags: ,

Sunday, March 26, 2006

An Introduction to Perl Programming

The last presentation I'm posting is an Introduction to Perl, another talk I gave to the GHGLUG recently. It is geared towards those with some programming experience, perhaps just not Perl. You can download it here in the original Oo.org or PDF formats, or check it out online:

An Introduction to Perl - OpenOffice.org
An Introduction to Perl - PDF
An Introduction to Perl - Online Version

The program discussed in the presentation can also be downloaded:

Popcount.pl

Feel free to modify and use this (and any of the others I've posted, as well) for your own user's group presentations - they are all licensed under the GFDL. Feedback is welcome.

Technorati Tags: , ,

Saturday, March 25, 2006

TCP/IP and Linux Network Security with Iptables

This next presentation is an overview of TCP/IP and network security with the Linux Netfilter (iptables) framework. I've always thought that a network or firewall administrator needed a good grounding in networking basics, so this was part of a two-hour presentation that was designed to touch on TCP/IP before talking about iptables rulesets. You can download it here in the original Oo.org or PDF formats, or check it out online:

TCP/IP and Network Security with Iptables - OpenOffice.org
TCP/IP and Network Security with Iptables - PDF
TCP/IP and Network Security with Iptables - Online Version

One of the books I recommend to anyone wanting to seriously broaden their networking knowledge is Richard Stevens' masterpiece The Protocols (TCP/IP Illustrated, Volume 1).

Technorati Tags: , , , ,

Friday, March 24, 2006

Introduction to Regular Expressions

The next presentation I'm putting online is an introduction to regular expressions, which are the bane - er, joy - of every programmer's existence. It goes over basic use and syntax, and offers examples with sed/awk/grep, Perl and Emacs. You can download it here in the original Oo.org or PDF formats, or check it out online:

Intro to Regular Expressions - OpenOffice.org
Intro to Regular Expressions - PDF
Intro to Regular Expressions - Online Version

Technorati Tags: , , ,

Wednesday, March 22, 2006

Using Samba as a File Server, PDC or Domain Client

I have a bunch of presentations I've given to the Greater Hartford GNU/Linux User's Group (GHGLUG) over the past two years; I'm updating them individually and putting them back online.

The first is a presentation on Samba, and how to use it for secure file sharing (as an NFS replacement), as a Windows domain controller, or as a Windows domain client. You can download it here in the original Oo.org or PDF formats, or just check it out online:

Samba Howto - OpenOffice.org
Samba Howto - PDF
Samba Howto - Online Version

The Samba configuration files referenced in the presentation can also be downloaded:

smb.conf (File Server)
smb.conf (Domain Controller)
smb.conf (Domain Member)

Note: The presentations are licensed under the GNU Free Documentation License (GFDL), so feel free to modify, share or use them for your own user's group tutorials.

Technorati Tags: , , ,

Tuesday, March 21, 2006

Groklaw: Wallace Anti-GPL Complaint Dismissed

Groklaw has the details up about the dismissal of the Wallace v. FSF complaint. Back in April of 2005, Daniel Wallace filed a complaint in Federal Court against the Free Software Foundation, alleging that the GPL amounted to "price fixing", and prevented him from earning a living selling software. No, really.

Here is the meat of the complaint:
"The Defendant FREE SOFTWARE FOUNDATION INC has entered into contract and otherwise conspired and agreed with individual software authors and commercial distributors of commodity software products such as Red Hat Inc. and Novell Inc. to artificially fix the prices charged for computer software programs through the promotion and use of...the GNU GENERAL PUBLIC LICENSE."

Anyway, the complaint was dismissed today, with the Judge saying this in the ruling:
"As alleged, the GPL in no way forecloses other operating systems from entering the market. Instead, it merely acts as a means by which certain software may be copied, modified and redistributed without violating the software's copyright protection. As such, the GPL encourages, rather than discourages, free competition and the distribution of computer operating systems, the benefits of which directly pass to consumers. These benefits include lower prices, better access and more innovation."

All I can say is, it's about time. I've written about how proprietary software harms competition; it's nice to see what seems like common sense backed up by the courts. I'm not sure that this will actually strengthen the GPL, since this case doesn't appear to have touched on copyright law, but it will make it a bit harder to promote anti-GPL FUD.

Technorati Tags: , ,

Sunday, March 19, 2006

Monitoring Unix System Processes with Psmon

Some time ago I found a useful tool for monitoring and restarting Unix processes should they unexpectedly die. Psmon is a system monitoring script written in Perl and licensed under the Apache license that is quite useful if you run servers with critical processes on them. You can download the latest version from the psmon homepage (Version 1.39 as of this writing). Read on for tips on installing and configuring it.

There are some Perl module requirements, but the install script will try to install any modules not present. The installer is a shell script 'install.sh', located in the 'support' directory. When you run it, you'll need root privileges. Something like this should do the trick:
tar xvzf psmon-1.39.tar.gz
cd psmon-1.39
sudo support/install.sh
Given that it's pure Perl, it should run on any Unix, although I've only run it on Debian and Red Hat Linux servers. If the install script doesn't work, or fails to pull in the required modules, you can install the modules manually like this (again, you'll need root privileges):
for m in Config::General Proc::ProcessTable Net::SMTP Unix::Syslog Getopt::Long; do perl -MCPAN -e"install $m";done
You can use this Perl one-liner to check definitively that all the requirements are in place, once the install is finished:
perl -e 'foreach ((Config::General, Proc::ProcessTable, Net::SMTP, Unix::Syslog, Getopt::Long)) { eval("use $_;"); print "Module $_ not present\n" if $@; }'
The psmon binary is installed in /usr/bin. Once installed, psmon can be configured through the file /etc/psmon.conf. You'll have to change both lines in the config file that read Disabled True to Disabled False before psmon will function. Once that is done, configuration is pretty simple, especially if you want psmon to just monitor processes and restart them if they die. There are a lot of other things you can do, like monitor CPU usage of a given process and restart it if it is above a certain percentage. See the /etc/psmon.conf that gets installed by default for lots of examples, or the online docs. Here is a simple config file snippet that tells psmon to monitor sshd:
<Process sshd>
    LogLevel LOG_CRITICAL
    SpawnCmd /etc/init.d/ssh restart
    PidFile /var/run/sshd.pid
</Process>
The paths in this example are specific to Debian or Ubuntu; here is an example crafted for Red Hat's Fedora:
<Process sshd>
    LogLevel LOG_CRITICAL
    SpawnCmd /etc/init.d/sshd restart
    PidFile /var/run/sshd.pid
</Process>
Modify the paths as appropriate for your distribution.

Once configured, you can run psmon periodically via a cron entry, like this:
*/5 * * * * /usr/bin/psmon --daemon --cron
The above will run psmon every five minutes. It will actually spawn a daemon the first time it runs; thereafter it will do nothing if it detects that it is already running, or respawn itself if needed. The '--cron' switch disables 'already running' warnings.

You can configure psmon to send email every time it has to restart a process - the config file directive is this:
AdminEmail you@yourdomain.com
This will use Net::SMTP to send mail via localhost by default - use the 'SMTPHost' directive to configure a mail relay if needed:
SMTPHost mail.yourisp.net
That's really all there is to using psmon for basic system monitoring. See the config file and docs that come with the tarball for details on other features.

Technorati Tags: , , , ,

Saturday, March 18, 2006

Using Emacs to Edit Blog Posts

I use a modified HTML mode in Emacs to write most of my Blogger posts. I've been using GNU Emacs for years, so I'm comfortable with it, and the Blogger post editor is quite lacking as far as editors go (or perhaps the standard I'm used to with Emacs is just way too high).

You can add the Emacs lisp code shown below to your .emacs file; feel free to change the key bindings to suit - I chose most of them based on what would be mnemonic, or what was not already taken by HTML mode. One note: you could use html-helper-mode, but I find that mode overkill for simple blog posts, and HTML mode is already built in to Emacs' SGML mode, while html-helper-mode is an addon. The tags I tend to use most are href anchors (<a href...) and the eight that I've defined new key bindings for below, so standard HTML mode works fine.

Update: It seems copying & pasting the text from the code section at the end of this post results in a large block of newline-less Emacs lisp. I've only tested this under Linux copying from Firefox to Emacs, but I've made the elisp available for download anyway.

Update II: There is also a comment from Jason Dunsmore, below, that links to a short tutorial on using atom-blogger mode to post directly from Emacs. I've tested it and it works great, although the key bindings I have defined below will have to change, using atom-blogger-mode-map and atom-blogger-mode-hook instead of their html-mode equivalents. It's also possible to post from Emacs via email, using Blogger's email-to-post gateway (Settings->Email) and Emacs' message mode. The drawback to this is that it is text-only, no HTML. Atom-blogger mode has the advantage of allowing you to edit/delete posts within Emacs.

To get started, create a new buffer in Emacs and give it a ".html" extension, or if you've just started up, just switch the *scratch* buffer over to HTML mode by doing 'M-x html-mode'.

Once in HTML mode, just start typing away. If you have to enter one of the tags below, just use the key sequence listed. So, for example, if you need a <blockquote>...</blockquote> section, just hold down the Control key, hit the letter 'c' twice, then release the Control key and hit the letter 'q'. The cursor will be placed between the start and end tags for you. Control-c is a common key-sequence in Emacs for user-defined keys, and in the case of HTML mode, there are already some other tag insertion commands defined. To see them, type 'C-h m' while in HTML mode, and the screen will split in two, with the bottom half showing a help buffer that describes the current major mode (in this case, HTML mode). Type 'C-M-v' to scroll the help window from your current editing session, until you get to the 'C-c C-c' key bindings (Type 'C-x 1' to close the help window):

HTML mode help

You can see that 'C-c C-c h' is bound to html-href-anchor, which will insert <a href=""></a> into the current buffer, prompting you for a URL as it does so. 'C-c C-c u' might also be useful - it inserts an unordered list template into the current buffer:
<ul>
  <li>
</ul>

So what is the 'html-span-fullpost' function for? I use the expandable post summaries technique to allow my longer posts to have the 'Read More...' links in them. So when I type 'C-c C-c s', I get this:
<span class="fullpost"></span>
with the cursor sitting between the '><'. Anything I type between these start and end span tags becomes part of the 'hidden' post, and is only shown when the reader clicks on the link to read more.

Finally, the html-lt, html-gt, and html-amp functions just insert common html-entities into the current buffer. I used this quite heavily when I was editing this post, for example, since anytime I wanted a literal '<' or '>' in the code snippets, I had to use the corresponding HTML entities, which are &lt; and &gt;, respectively.

Once you are done with your post, just mark and copy the entire buffer with 'C-x h M-w', then switch to your browser window and click the middle mouse button. You should see the entire post appear in Blogger's editor window (this is rather Unix-centric, I know. No apologies, but if you are stuck with NTemacs, I think just the standard Windows paste key-sequence 'C-v' will work). If you want, you can spell check your post from within Emacs first with 'M-x ispell-buffer'.

Here is the elisp code for your .emacs. If you don't want to restart Emacs, just mark the region containing the code below, and do 'M-x eval-region'. The idea for this code was taken from the excellent chapter on Emacs Lisp programming in the third edition of Learning GNU Emacs.
;; HTML mode customization
(add-hook 'html-mode-hook
          '(lambda ()
             (define-key html-mode-map "\C-c\C-cd" 'html-code)
             (define-key html-mode-map "\C-c\C-cq" 'html-blockquote)
             (define-key html-mode-map "\C-c\C-cb" 'html-bold)
             (define-key html-mode-map "\C-c\C-ct" 'html-italic)
             (define-key html-mode-map "\C-c\C-cs" 'html-span-fullpost)
             (define-key html-mode-map "\C-cl" 'html-lt)
             (define-key html-mode-map "\C-cg" 'html-gt)
             (define-key html-mode-map "\C-ca" 'html-amp)))

(define-skeleton html-code
  "HTML code block" nil
  "<code>" _ "</code>")

(define-skeleton html-blockquote
  "HTML blockquote" nil
  "<blockquote>" _ "</blockquote>")

(define-skeleton html-bold
  "Bold text" nil
  "<span style=\"font-weight:bold;\">" _ "</span>")

(define-skeleton html-italic
  "Italic text" nil
  "<span style=\"font-style:italic;\">" _ "</span>")

(define-skeleton html-span-fullpost
  "HTML fullpost span" nil
  "<span class=\"fullpost\">" _ "</span>")

(defun html-lt ()
  "HTML less-than entity (<)"
  (interactive)
  (insert "&lt;"))

(defun html-gt ()
  "HTML greater-than entity (>)"
  (interactive)
  (insert "&gt;"))

(defun html-amp ()
  "HTML ampersand entity (&)"
  (interactive)
  (insert "&amp;"))
;; End HTML mode customization

Technorati Tags: , , ,

Friday, March 17, 2006

SCO Claims They Own Part of MySQL's Code?

So what is SCO up to? They partnered with MySQL some time ago, and are now trumpeting their latest brain-child SCAMP, an assault on LAMP. I can't imagine anyone actually buying their arguments that they are somehow adding value to LAMP by replacing the "L" and plunking the rest on top of their own OS. But this is odd, from this PDF doc on MySQL at SCO's site (my emphasis added):
9. OPEN SOURCE FREEDOM AND 24 X 7 SUPPORT Many corporations are hesitant to fully commit to open source software because they believe they can't get the type of support or professional service safety nets they currently rely on with proprietary software to ensure the overall success of their key applications. The questions of indemnification come up often as well. These worries can be put to rest with MySQL as complete around-the-clock support as well as indemnification is available through MySQL Network. MySQL is not a typical open source project as all the software is owned and supported by both SCO and MySQL AB, and because of this, a unique cost and support model are available that provides a unique combination of open source freedom and trusted software with support.
Ignore the non-sequitur about proprietary software support being somehow better than open source software support; this seems to pretty clearly state that SCO owns part of the MySQL code base. Anyone have a clue why they would say that? This is what MySQL's site says about the ownership of the MySQL code base:
The company was founded in Sweden by two Swedes and a Finn: David Axmark, Allan Larsson and Michael "Monty" Widenius who have worked together since the 80's. MySQL AB is the sole owner of the MySQL server source code, the MySQL trademark and the mysql.com domain worldwide.
They are right about MySQL offering indemnification, but MySQL offers this independently of SCO. They should just take "SCO" out of the sentence in question, since it's MySQL giving support and owning the code, not SCO.

Technorati Tags: , ,

Math For Programmers

Another good post by Steve talks about Math For Programmers. His basic premise is that Math is useful for programmers, and that it is taught wrong in school. He talks of teaching it breadth-first, rather than depth-first. I have to say I agree - I didn't really grok certain areas of Math until I took some courses designed to teach the underlying concepts. You still need the depth-first in certain areas, but that can come later, after your foundation in mathematical concepts is solid. An example is a course in discrete math I took as a requirement for my CS degree. The first book we had to read was How to Read and Do Proofs - a remarkably eye-opening book that teaches logical thought while showing you basic proof techniques (highly recommended, although I have an older edition). Understanding the two helped me with future courses (like the Theory of Computation), and also with programming, which at its core requires logical thought. I touched on this in a previous post on What Should be in a CS Curriculum.

This is similar to the mistaken way in which programming is taught. High school or first-year college students are taught the syntax of a popular language, then do some simple exercises designed to cement the syntax in their head. The best programming course I took way-back-when was the "Theory of Programming Languages", which gave a breadth-first overview of programming techniques (procedural, functional, logic, etc.), and sampled various languages from each domain with programming exercises that showed how each domain lent itself to solving particular problems. Once you know the underlying concepts, picking up whatever the latest, hot language is turns out to be a snap. Here is a good article on the terrible way students are taught programming that says basically the same thing.

Technorati Tags: , ,

Effective Emacs

I've always been an Emacs diehard, although I use vi from time-to-time when I have no choice (usually a fresh server install). So I've read the Seven habits of effective text editing, but just can't bear to use vi/vim for anything serious. Anyway, not to be outdone by a mere seven tips, Steve Yegge brings us 10 Specific Ways to Improve Your Productivity With Emacs. There are some particularly good tips in this - I can relate to #7 'Lose the UI', since I don't use the tool- or menubar anyway. Why display it if you don't use it?
You don't need a menu bar. It's just a crutch placed there for disoriented newbies. You also don't need a toolbar with big happy icons, nor do you need a scrollbar. All of these things are for losers, and they are just taking up precious screen real-estate.
One tip I didn't see that I use frequently is the 'mark-whole-buffer' command, bound to 'C-x h' by default. It's great for quickly copying entire buffers - just do 'C-x h M-w' to copy an entire buffer. In X, you can do this and then paste into another application (like Blogger edit windows in Firefox) with the middle-mouse button. Another good tip from the list at the bottom of his post is the 'align-regexp' function. That one was new to me, but after trying it out, I can see how useful it would be. One of the great things about Emacs is the embedded help - I typed 'C-h f align-regexp<RET>' and got a description of the function with an example:
align-regexp is an interactive compiled Lisp function in 'align.el'.

(align-regexp beg end regexp &optional group spacing repeat)

Align the current region using an ad-hoc rule read from the minibuffer. beg and end mark the limits of the region. This function will prompt for the regexp to align with. If no prefix arg was specified, you only need to supply the characters to be lined up and any preceding whitespace is replaced. If a prefix arg was specified, the full regexp with parenthesized whitespace should be supplied; it will also prompt for which parenthesis group within regexp to modify, the amount of spacing to use, and whether or not to repeat the rule throughout the line. See 'align-rules-list' for more information about these options.

For example, let's say you had a list of phone numbers, and wanted to align them so that the opening parentheses would line up:

Fred (123) 456-7890
Alice (123) 456-7890
Mary-Anne (123) 456-7890
Joe (123) 456-7890

There is no predefined rule to handle this, but you could easily do it using a regexp like "(". All you would have to do is to mark the region, call 'align-regexp' and type in that regular expression.
Technorati Tags: , ,

Thursday, March 16, 2006

Gates Whines that Windows Won't be on $100 Laptops

Seems Bill Gates has his shorts in a bunch over MIT's $100 laptop program. The laptops are meant for developing countries, with the intent that governments will purchase the laptops and give them to children for free. They will come with GNU/Linux installed, and with the ability to mesh wirelessly with other laptops nearby, allowing for sharing of (possibly rare) Internet connections. They are also designed to be rugged, and have a hand-crank for use without power. The LCD has a high-contrast mode for use in bright sunlight. They will have 500MB flash drives, but no hard drive. Obviously, they are designed for use in environments that normal laptops would last about 5 minutes in. According to the article, Gates is quoted as saying:
"The last thing you want to do for a shared use computer is have it be something without a disk ... and with a tiny little screen..."
Well, 500MB would hold about one or two Word documents of almost any size (anyone else noticed that?), but plenty of OpenOffice.org docs or program text. And:
"Hardware is a small part of the cost" of providing computing capabilities, he said, adding that the big costs come from network connectivity, applications and support."
I think Bill is just a little out of touch with the mainstream. Has he forgotten that software is becoming a commodity? Does he get that Windows would cost more than the laptop itself? Applications? Every Linux distro I've used has come with more usable apps than I can shake a stick at, no licenses required. Support? I suppose Bill would love it if kids in developing countries called MS's support line for help every time they blue-screened, oh, wait, most of them don't have phones, let alone a credit card to pay for the support. They can support themselves with Linux, let's give them some credit, kids will be resourceful if offered the chance. Network connectivity? The idea is that kids will be able to form ad-hoc mesh networks amongst themselves, and perhaps share a single Internet connection, if it is available. But the best quote is this one:
"If you are going to go have people share the computer, get a broadband connection and have somebody there who can help support the user, geez, get a decent computer where you can actually read the text and you're not sitting there cranking the thing while you're trying to type..."
I suppose he wants to pay for the infrastructure to bring broadband to developing nations (the scary thing is, he could probably afford it). I know he's being "funny" with the crank comment, but you crank it first, then type. And I suppose Bill has just such a "decent" computer he can offer? Oh, yeah:
...a new "ultra-mobile computer" which runs Microsoft Windows on a seven-inch (17.78-centimeter) touch screen. Those machines are expected to sell for between $599 and $999...
I suppose we could get MS to give away a few million units for "charity", let's throw in the OS license for free and MS Office licenses and some commercial educational software, since notepad and solitaire don't cut it in the classroom. Don't forget MS Visual Studio, unless you don't want the kids to hack on code in their spare time. Seven-inch screen, eh? Looks to be the same size screen as the $100 laptop's planned design, anyway. So much for the "tiny little screen". I guess the text would look just as small, at least for the first few minutes of use, until the shiny "ultra-mobile" computer was dropped and its screen broke, or the hard drive crashed, or... Well, let's put Windows on the $100 laptop, shall we? I suppose I don't even have to mention how well Windows XP/Vista/whatever would run on a 500 MHz laptop with 128MB of RAM. He could just come out and say that he wishes they had chosen Windows for the $100 laptops, but he has no real way to justify this. At least Steve Jobs kept his mouth shut.

Technorati Tags: , ,

Wednesday, March 15, 2006

Perl Script that Does Bulk Reverse-DNS Lookups

Speaking of security and pen-testing, below is a Perl script I wrote and use to do bulk reverse-DNS (PTR) lookups on a specified network, during the discovery phase of a network assessment. Just cut-n-paste it into a text editor and save; instructions are in the header comments (Update: You can also download the script here).

#!/usr/bin/perl
#
# netdns.pl: Simple script to do bulk PTR lookups on a network of IP's
#
# Requires Net::DNS, NetAddr::IP
#
# perl -MCPAN -e 'install Net::DNS; install NetAddr::IP' should do the
# trick on any Unix OS. On Debian/Ubuntu, do 'apt-get install
# libnet-dns-perl libnetaddr-ip-perl'
#
# Usage: Takes an IP network or single IP (as per the NetAddr::IP docs
# at http://search.cpan.org/~luismunoz/NetAddr-IP-3.028/IP.pm). Output
# is a comma-delimited list of the IP addresses and the hostname they
# resolved to, or NXDOMAIN if no PTR record exists, or if the IP
# address is not well-formed, or error text if there is some other
# error with the DNS query.
#
# Examples:
#
# ./netdns.pl 10.0.0.1/24 > ptr-list.csv
# ./netdns.pl 10.0.0.1
#
# Copyright (c) 2006, Doug Maxwell <doug@unixlore.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#

use strict;
use warnings;
use Net::DNS;
use NetAddr::IP;

my $ip = new NetAddr::IP (shift) || die "Unable to create NetAddr::IP object\n";
my $res = Net::DNS::Resolver->new;
my $num = $ip->num();

for (my $i=0; $i<=$num; ++$i) {
    my $ip_address = $ip->addr();
    if ($ip_address) {
        my $query = $res->search("$ip_address");
        if ($query) {
            foreach my $rr ($query->answer) {
                next unless $rr->type eq "PTR";
                print "$ip_address,", $rr->ptrdname, "\n";
            }
        } else {
            print "$ip_address,", $res->errorstring, "\n";
        }
    }
    ++$ip;
}

Technorati Tags: , ,

10 Best Security Live CD Distros (Pen-Test, Forensics & Recovery)

Darknet has a list of the 10 best security-related live CD distros. Some I've used, but some of these I wasn't even aware of, so this is a nice list to have. Includes a summary and links to download each one.

Technorati Tags: ,

Tuesday, March 14, 2006

Shaded Text Boxes in Blogger Templates

I recently edited my Blogger template to enhance the look of blockquotes and code snippets, by enclosing them in shaded boxes. Hopefully others will find it useful. To use this, just enclose the text you want boxed-in with
<blockquote>...</blockquote>
or
<code>...</code>
tags in the HTML editor. The difference between the two is that text in a code block will have its formatting preserved (the white-space: pre attribute). Here is the template code itself - I had to add the code block; the blockquote was already present in my template CSS (post section), it just needed to be edited.

blockquote {
  margin: .75em 0;
  border: 1px solid #596;
  border-width: 1px 1px;
  padding: 5px 15px;
  display: block;
  background-color: #dedede;
}

code {
  font-family: Courier;
  margin: .75em 0;
  border: 1px solid #596;
  border-width: 1px 1px;
  padding: 5px 15px;
  display: block;
  background-color: #dedede;
  white-space: pre;
}

Technorati Tags: , , ,

Battles with Proprietary Software Support

It always amazes me how much pain reliance on proprietary IT solutions can cause, yet people go back for more time and time again. Having worked in the past with a large, managed security vendor, I've seen first-hand the kind of trouble relying on proprietary solutions can cause. Here are just a couple of examples:

  • This vendor's managed security service relies on some closed components at its core, including backend databases, ticketing, and alerting. Over the past year, the security industry has seen a drastic change in what people expect from managed security vendors (which is now a lot more than it used to be) - this vendor has been unable to change quickly enough for the demands of the market, and is now feeling competitive pressures from those firms that are using Open Source components and can quickly adapt to what the market demands. Internal development resources are underfunded to begin with (the thought process was "Oh, good, an all-in-one solution, I don't need developers, let's lay them all off". They then regretted it when a solution had to be developed, because the old bunch of proprietary components no longer fit with, well, anything).
  • Providing support for proprietary software is a losing battle. The aforementioned vendor provides some of the best Check Point firewall support around, and has in previous years taken direct support business away from Check Point themselves. Where the support falls over, though, is the inevitable problem caused by software bugs (there seem to be about 30 of those nasty Check Point bugs that impact this vendor's clients each year), which they can only get resolved by going to Check Point for support. As you might imagine, since Check Point is the only company in the position of fixing software bugs in their closed-source product, they have no impetus to provide good support. Delays of a month or more are common, regardless of the severity. Non-critical bugs sometimes never get fixed. Nokia's IPSO platform is the same way - while based on FreeBSD originally, it is completely proprietary, and support vendors and clients are unable to get bugfix support elsewhere. I have personally been involved with two IPSO kernel bug-fixes (simple ones, at that) that have taken over a year each to resolve.

Now, you might think "So what, don't buy it if you don't like the support lock-in". But at some point this is like sticking your head in the sand. If you believe Check Point, every single Fortune-100 firm uses their firewall software. How long will it be before a serious bug or security vulnerability affecting Check Point impacts not just the firm using their software, but that firm's clients and customers? At what point do we look at the risks imposed on users of technology, and give the potential impact of those risks more weight than the rights of the technology implementers? This question is at the heart of most hot-button topics centered around Free/Open Source software (FOSS), including the BSD/GPL debate. The BSD license favors the rights of the developers, while the GPL favors the rights of the users. Keep in mind that "developer" and "user" might be third or fourth generation, so a BSD-licensed piece of software that stayed that way through three vendors can suddenly become proprietary - the users have just lost most rights they had prior to that, if they use that software. A GPL-licensed piece of software allows its end-users the same rights, no matter how many vendors have touched it (assuming they each honor the license).

I had personal experience with some incidents that made me really see the direct value that FOSS can bring to a business (as opposed to just reading about such ideas). No really new ideas here, but experiencing the problems with proprietary software first- or second-hand, while being in the middle of a support nightmare as a product support provider - is, I think, much more enlightening than presenting theories.

In one incident, a large financial institution had many Nokia/Check Point firewalls, configured in active-passive failover. In one of their many two-firewall sites, they use dynamic routing (OSPF) to direct traffic through a primary firewall, while a secondary firewall sits idle, waiting to take over for the first, should it fail.

Check Point's state synchronization is commonly used in these scenarios to share connection table information between two or more firewalls, so if one fails, no users experience dropped connections.

At 1 AM one morning, the state sync between one such pair of firewalls failed, causing the firewall daemon's log files on the active firewall to be flooded with errors. The logging pushed the firewall's CPU over the edge, and it was running at 90-99% CPU, with 1-3 second packet latencies.

Needless to say, the client was not happy, as it was impacting their business.

Still nothing too unusual there, except that this company, like many others, pays quite a bit of money to keep their hardware and software support "up-to-date". In this particular case, the Check Point version had hit "end-of-life" during the time they were trying to resolve the incident. Despite the support ending in June, and this event occurring in May, the company was told that they would not be able to get code support, should it rise to that level. So, if a bug was found, Check Point refused to even entertain the idea of providing a patch for their software.

The solution, of course, as presented by the vendor, was to upgrade to the newest, supported version of Check Point, while any bug found would not even be guaranteed to be fixed. The client has to accept the software vendor's word that the latest version of their software is "much more reliable", and wouldn't cause the firewall to fall over.

This particular company had no choice, they had to accept these terms, regardless of the dire straits they found themselves in. Sure, legal action could be possible, but they need help now, and any such action would take months or years. Besides, we all know how EULAs are written - certainly suing for defective software is not an option.

Now, why is it significant that this is all proprietary software? Because if it were truly FOSS, the client would have several options (outside of an upgrade):

  • They could fix the bug themselves, assuming they had skilled people on staff.
  • They could hire someone to fix the bug, and do so in a cost-effective manner, since everyone has access to the application source, and presumably more than one vendor could provide a fix.

(Note that this applies to using software provided by large vendors of FOSS, like Red Hat, even though they might follow similar "end-of-life" practices. The source code is still free to obtain, modify and distribute.)

The second point is important, as the availability of source code, along with the rights to change and distribute it (for money, or not), actually results in competition. In this scenario, the vendors providing the best support make the most money. Sounds pretty sensible, right? In today's current software market, software vendors who write and sell proprietary software hinge their entire business model on the software itself, the marketing hype that surrounds it, and the assumption that people will pay to upgrade when the time comes. Support is always an afterthought, a "necessary evil", and getting good support for such software is almost impossible. There is absolutely no incentive to provide good support once the software is sold and the customer is locked-in to one vendor for support. It is important to note at this point that I'm referring to software support, which is different from product support. The latter can be quite competitive, unlike the former. As an example, plenty of people provide Check Point product support, some better than others, but only one company can provide Check Point software support (i.e. bug-fixes).

It always amazes me that many people see FOSS as "anti-capitalist", when the correct term should be "anti-monopolist". No one could ever convince me that the vendor lock-in that many companies subject themselves to (as in this case) was somehow "the capitalist way". Why else would MS be fighting the corporate adoption of GNU/Linux servers so strongly - such adoption threatens their monopolistic business model. I would say that the current software model is harming consumers and the overall world economy, and this can only get worse as we become more and more dependent on proprietary software.

Why do companies put up with this? My belief is that it's a lack of awareness, coupled with the usual corporate stigma against anything that is "free" being poor quality - really just an artifact of the term "free" in the English language having more than one meaning, as has been noted by others. While the term is not perfect, I've seen firsthand that "Open Source" seems to be received without too much explanation much more readily than "Free Software" in most businesses.

Technorati Tags: , , ,

Monday, March 13, 2006

Google Mars

This is really cool - Google has a new site up called Google Mars. Based on Google Maps, it allows you to see Mars in three formats - Elevation, Visible, and Infrared. The map is tagged with the locations of spacecraft, and links to articles about certain regions. It's better viewed than described...

Technorati Tags: ,

RMS Interview: Free Software as a Social Movement

There is an interesting interview with Richard Stallman (RMS) at Znet. It seems they have been pondering the idea of converting to 100% free software, and spoke to RMS about it. For those that have heard RMS speak, the first part of the interview is a condensed version of his History of GNU and Free Software speech that he gives frequently. It gets interesting near the end, however, venturing into territory most interviewers neglect.

Unlike some confused thinkers (and nicely debunked here), I've long thought that the Free Software movement was decidedly capitalist, in that it accepted software as a commodity, encouraged competition, and offered a profit motive for those who offered high-quality services around Free Software. RMS touches on this:

JP: I have read other interviews with you in which you said you are not anti-capitalist. I think a definition of capitalism might help here.

RMS: Capitalism is organizing society mainly around business that people are free to do within certain rules. ...

JP: But "anti-capitalists" use a different definition. They see capitalism as markets, private property, and, fundamentally, class hierarchy and class division. Do you see class as fundamental to capitalism?

RMS: No. We have had a lot of social mobility, class mobility, in the United States. Fixed classes--which I do not like--are not a necessary aspect of capitalism.

However, I don't believe that you can use social mobility as an excuse for poverty. If someone who is very poor has a 5% chance of getting rich, that does not justify denying that person food, shelter, clothing, medical care, or education. I believe in the welfare state.

JP: But you are not for equality of outcomes?

RMS: No, I'm not for equality of outcomes. I want to prevent horrible outcomes. But aside from keeping people safe from excruciating outcomes, I believe some inequality is unavoidable.

...

I'm a Liberal, in US terms (not Canadian terms). I'm against fascism.

JP: A definition would help here too.

RMS: Fascism is a system of government that sucks up to business and has no respect for human rights. So the Bush regime is an example, but there are lots of others. In fact, it seems we are moving towards more fascism globally.

Something else I hadn't heard about RMS before is that he discounted political action in favor of direct action (i.e. coding) early on, given that this was where his strengths were:

JP: It is interesting that you used the term "escape" at the beginning of the interview. Most people who think about "movements" think in terms of building an opposition, changing public opinion, and forcing concessions from the powerful.

RMS: What we are doing is direct action. I did not think I could get anywhere convincing the software companies to make free software if I did political activities, and in any case I did not have any talent or skills for it. So I just started writing software. I said, if those companies won't respect our freedom, we'll develop our own software that does.

Still, RMS doesn't do too badly at politics, either, as he deftly fits the Free Software movement in with the ideals of Znet's reader base:

JP: Many of ZNet's readers see themselves as part of some movement -- anti-poverty, or anti-war, or for some other form of social change. Can you say something about why such folks ought to pay attention and relate to the free software movement?

RMS: If you are against the globalization of business power, you should be for free software.

JP: -- But it isn't the global aspect of business power, is it? If it were local business power, that wouldn't be acceptable?

RMS: -- People who say they are against globalization are really against the globalization of business power. They are not actually against globalization as such, because there are other kinds of globalization, the globalization of cooperation and sharing knowledge, which they are not against. Free software replaces business power with cooperation and the sharing of knowledge.

That one idea, replacing business power with cooperation, is one place where I think the Free Software model doesn't fit with the definition of "perfect" capitalism, which is defined so as to exclude altruism. There is no such thing as "perfect" capitalism, however, and there are many businesses that have made hefty profits and still have been socially responsible.

Technorati Tags: , , ,

Creating Your Own Debian Package Mirror for Use With Apt

Creating a Debian mirror is fairly easy using the debmirror command. In this article, I'll take you step-by-step through the process, including showing you how to configure the mirror server for use over FTP or HTTP/HTTPS. Note: The following was tested under Debian Sarge (stable).

Installation

First, we install debmirror and gnupg, if the latter is not already installed. When I initially tried to get this working, I ran into problems trying to use debmirror, without having imported the Debian master archive signing key - the quickest solution was to just import the public key into my GPG keyring. Note that this does not imply that you trust the key (in the GPG sense), it just imports it so the debmirror script will run.
apt-get install debmirror gnupg
Import the Debian master archive signing key:
gpg --recv-keys 2D230C5F
OR
wget http://ftp-master.debian.org/ziyi_key_2006.asc
then import this key into your keyring with
gpg --import ziyi_key_2006.asc

Building the Mirror

You'll need about 9GB of space for the full i386 sarge archive, all components (main, contrib, and non-free). Note that this does not include any source packages. Here is the command syntax:

debmirror -v -a i386 -h ftp.us.debian.org -d sarge /path/to/mirror --nosource --progress

where /path/to/mirror is the path on your server where the mirror is going to be housed.

Archive Access Methods

Apache v1 Edit /etc/apache/httpd.conf or /etc/apache-ssl/httpd.conf:
Alias /debian /path/to/mirror

<Location /debian>
order deny,allow
deny from all
allow from all
Options Indexes FollowSymLinks MultiViews
</Location>
Apache v2 Edit /etc/apache2/apache2.conf (Apache v2 does not have a separate SSL directory for config files, just an ssl.conf in /etc/apache2. See Setting up an SSL Server with Apache2 if you need it).
Alias /debian "/path/to/mirror"

<Directory /path/to/mirror>
AllowOverride FileInfo AuthConfig Limit
Options Indexes SymLinksIfOwnerMatch IncludesNoExec
</Directory>

FTP Install vsftpd on the apt server:
apt-get install vsftpd
Change the home directory of the "ftp" user to /path/to/mirror using the vipw command.
ftp:x:108:65534::/path/to/mirror:/bin/false
The vsftpd installation is automatically anonymous-FTP enabled on Debian, so you don't have to do anything else to get apt-get to work with this FTP setup. Then, put the following in your client's /etc/apt/sources.list (substituting your mirror server's hostname):

deb http(s)://your.mirror.host/debian/ sarge main contrib non-free

OR

deb ftp://your.mirror.host/ sarge main contrib non-free

Then run:

apt-get update
apt-get dist-upgrade

You can re-run the above debmirror command from cron to automatically update the mirror however often you like. Make sure to use the Debian mirror list. Cron sample - this updates the mirror every morning at 2AM:

0 2 * * * debmirror -v -a i386 -h ftp.us.debian.org -e ftp --passive -d sarge /opt/debian --nosource --progress > /dev/null 2>&1

Technorati Tags: , ,

Schneier on Security: Huge Vulnerability in GPG

Read about Bruce's take on this vulnerability in the way GPG verifies signatures. When Bruce Schneier calls it "huge", there is certainly cause to be worried. It seems that non-detached digital signatures (like those used in email communications, or embedded into signed documents) will still come back as valid when checked with GPG, even when bogus data has been added to the original, signed content. GNU Privacy Guard implements signatures according to the OpenPGP message format, which allows for multiple signatures to be part of one document, each possibly signing different data blocks. The code that implements this works as advertised, but is a bit too lenient in how it processes certain legacy signature formats, or handles malformed data. The end result is that an attacker could alter signed data and have it appear genuine. From the security announcement:
Signature verification of non-detached signatures may give a positive result but when extracting the signed data, this data may be prepended or appended with extra data not covered by the signature. Thus it is possible for an attacker to take any signed message and inject extra arbitrary data. Detached signatures (a separate signature file) are not affected. All versions of gnupg prior to 1.4.2.2 are affected.
So the fix is to upgrade, but you can use detached signatures in the meantime.
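For reference, creating and checking a detached signature is straightforward (document.txt is just a placeholder filename):

# write an ASCII-armored detached signature to document.txt.asc
gpg --armor --detach-sign document.txt
# verify the signature against exactly the named file
gpg --verify document.txt.asc document.txt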

Technorati Tags: , , ,

The Worm That Didn't Turn Up

Read (yet again) about the prevalence of worms, spyware and viruses among Windows users. This pretty much sums up how I feel about the dangers of the MS Windows computing monoculture [PDF] (although I've been Windows-free since 1995). A choice quote:
In my case, for example, I have not used a Windows machine for any serious purpose since 1999. And in those six years, I have never had a computer virus, trojan or worm. Not a single one. Neither has any adware or spyware taken over my browser (which also comes with a facility for automatically blocking pop-up windows as well as the ability to do tabbed browsing). And all this despite being connected to the net 24 hours a day, seven days a week.
The author asks the question: why do people put up with a constant barrage of spyware, adware, viruses and other malware? I think the answer is simple - people will use whatever software and operating system comes with their PC, and MS's exclusive OEM agreements with consumer hardware retailers pretty much guarantee what that will be. As is often painfully recognized by most geeks, the vast majority of consumers just don't care, don't have time, or don't even want to know that there are alternatives. They use what comes with the PC they buy, period. The business world is slightly different, in that the people provisioning workplace PC's are generally more tech-savvy than the average consumer. So what drives the madness in the workplace? One word - Office. For most people, it's just not convenient to work with anything but MS Office. In any given workday, they are guaranteed to have to receive and send all sorts of email attachments with their business peers, and they don't want to have to think about document format conversion, since their peers use Office, too. It's even worse, for example, in accounting departments, because they tend to make heavy use of Excel spreadsheet features that are just not supported by alternatives like OpenOffice.org. The networking community's exclusive use of Visio is another example, but even worse, since there are no pieces of FOSS that can import or export Visio file formats.

Technorati Tags: , , ,

Sunday, March 12, 2006

Quickly Re-configuring a Source-Based PHP Installation

I recently had the experience of having to add several extensions to an installed version of PHP (v4) that was originally compiled from source on a Red Hat Linux box. I came up with the procedure below to save myself the time and hassle of re-typing the long "configure" command with all of its options.

First, make a backup of your original php shared library - this will usually be in the "libexec" directory under your main Apache root. In my case, it was in /usr/local/apache/libexec/libphp4.so. This will be replaced by the install process.

Next, I had to figure out what options the original PHP was compiled with. This wasn't too hard. To start, cd into your PHP build directory (this should be where you originally unpacked the PHP source code), and take a look at the output of 'php -i'. At the top, you'll see a line like this:

Configure Command => './configure'

'--with-apxs=/usr/local/apache/bin/apxs' '--with-xml' '--with-mm' '--enable-bcmath' '--enable-calendar' '--enable-ftp' '--with-gd' '--with-jpeg-dir=/usr/local' '--with-png-dir=/usr' '--with-xpm-dir=/usr/X11R6' '--enable-magic-quotes' '--with-mm' '--with-mysql=/usr' '--enable-discard-path' '--with-pear' '--enable-sockets' '--enable-track-vars' '--with-ttf' '--with-freetype-dir=/usr' '--enable-gd-native-ttf' '--enable-versioning' '--with-zip' '--with-zlib' '--with-mime-magic'

This is great, but doesn't lend itself to copy-n-paste, with the extra quotes and characters present. A simple text filter will do nicely. The exact configure command to use as a base can be extracted with this:

php -i | grep Configure | sed -e "s/'//g" -e "s/^Conf.*=> //g"

(Note that there are two spaces after the =>)

Here is what the output looks like:

root@server [~/]# php -i | grep Configure | sed -e "s/'//g" -e "s/^Conf.*=> //g"

./configure --with-apxs=/usr/local/apache/bin/apxs --with-xml --with-mm --enable-bcmath --enable-calendar --enable-ftp --with-gd --with-jpeg-dir=/usr/local --with-png-dir=/usr --with-xpm-dir=/usr/X11R6 --enable-magic-quotes --with-mm --with-mysql=/usr --enable-discard-path --with-pear --enable-sockets --enable-track-vars --with-ttf --with-freetype-dir=/usr --enable-gd-native-ttf --enable-versioning --with-zip --with-zlib --with-mime-magic

At this point, just copy-n-paste the line output by the filter to a command prompt, add the extensions you want to the end of the list, then hit enter (in this case, I added ' --with-curl --with-mcrypt' to the end of the list). Then type 'make' - if there are no errors, type 'make install' and the new php shared library is now installed.

Type 'apachectl graceful', or reload your Apache config some other way (this Apache was also installed from source; the binary packaged version of Apache can have its configuration reloaded on Red Hat with 'service httpd reload'). The new PHP modules should now be active and ready for use.
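Pulling the steps together, the whole cycle looks roughly like this (a sketch only - the build directory path is hypothetical, the Apache path is the one used above, and --with-curl/--with-mcrypt are the extensions added in this example):

# run from the original PHP build directory
cd /usr/local/src/php-4.x.y
# capture the original configure line and re-run it with the new extensions
CONF=$(php -i | grep Configure | sed -e "s/'//g" -e "s/^Conf.*=> //g")
eval "$CONF --with-curl --with-mcrypt"
make && make install
/usr/local/apache/bin/apachectl graceful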

Technorati Tags: , ,

Remotely Administering Groups of Servers With Dsh and SSH

This article details using Dsh (the distributed/dancer's shell) and SSH to remotely administer groups of machines.

Overview

Dsh is a tool that lets you use SSH to run commands or spawn login shells on multiple machines at once. When combined with ssh-agent, dsh can run unattended commands across many different hosts. I assume you have a working su and sudo setup; just in case, here is a tutorial on configuring and using sudo.

Installing Dsh

On a Debian GNU/Linux-based system, you can install dsh quite simply with apt-get install dsh.

If you aren't using a Debian system, or are on another Unix-based system, you can always install from the dsh sources using the instructions at the dsh home page.

Configuring Dsh

The main dsh configuration file is /etc/dsh/dsh.conf (or $HOME/.dsh/dsh.conf). Here is a working sample (note that this file is installed for you when you install the Debian dsh package):

#default configuration file for dsh.
#supplied as part of dancer's shell
verbose = 0
remoteshell = ssh
showmachinenames = 0
waitshell=0
#remoteshellopt=...
# default config file end.

Note: In the default config file supplied with Debian Sarge or Ubuntu Hoary, you will have to change the remoteshell option from rsh to ssh, and the waitshell option from 1 to 0. You can also just cut-n-paste the version listed above.

Other Dsh Configuration Files

There are several other files located in /etc/dsh. Here are their descriptions:

  • /etc/dsh/machines.list: A list of all hostnames or IP addresses, one per line. These are the remote servers you wish to run commands on.
  • /etc/dsh/group/all: This is really a symlink to ../machines.list. You can create this with the command cd /etc/dsh/group && ln -s ../machines.list all
  • /etc/dsh/group/servers: A chosen subset of the IPs or hostnames in machines.list, one per line. We will use the group name servers as part of our dsh commands. You can choose any name for this file, and you can have multiple group files (see the sketch after this list).
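
A minimal sketch of setting these files up, using the example IPs that appear later in this article (substitute your own hosts):

sudo mkdir -p /etc/dsh/group
printf '10.1.1.101\n10.1.1.102\n10.1.1.103\n' | sudo tee /etc/dsh/machines.list
cd /etc/dsh/group && sudo ln -s ../machines.list all
printf '10.1.1.101\n10.1.1.102\n10.1.1.103\n' | sudo tee /etc/dsh/group/servers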

Dsh Command Line Switches

The following are the most useful options to dsh:

  • -a: Run the command on all machines in machines.list
  • -g servers: Run the command on the group named servers
  • -c: Use concurrent connections
  • -w: Wait for one machine to finish before moving on to the next
  • -v: Verbose output
  • -M: Show the machine name on each line of output, useful with -c

Using Dsh - An Example

Setting Up SSH and the SSH Authentication Agent

In this example, we will use dsh to update and install any packages available via apt-get update and apt-get dist-upgrade. Since we need root access on each remote host, we will create a public/private key pair on the machine we are launching dsh from, under the root account (using ssh-keygen -t dsa). Next, append the public key you just created (in /root/.ssh/id_dsa.pub by default) to each remote host's /root/.ssh/authorized_keys file. Make sure the remote authorized_keys files all have a permission mode of 0600. Once this is done, you should be able to run remote commands on all the configured machines without being prompted for a passphrase. Here are the steps in detail (a condensed sketch follows the list):

  • Login to your own workstation, use su to become root
  • If you don't already have a public/private SSH keypair for the root user, run ssh-keygen -t dsa as root. Use the default locations/filenames, and be sure to use a secure passphrase
  • Append /root/.ssh/id_dsa.pub to the /root/.ssh/authorized_keys file of the other servers you wish to run commands on
  • Make sure the remote authorized_keys files have permissions of 0600 by running chmod 0600 /root/.ssh/authorized_keys on each remote box
  • Exit the root shell you entered with su, above. If you are running in console mode (versus X, which always runs under an ssh-agent in Debian and most other GNU/Linux distributions), type ssh-agent bash (or whatever your preferred shell is) at the shell prompt. This will start another shell under its own ssh-agent process that you can access from the newly spawned shell
  • Type sudo ssh-add, this will add your root private key to the local authentication agent
  • Verify that the identity is added with ssh-add -l
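
Here is a condensed sketch of the key distribution steps above, run from a root shell on the dsh machine (the IPs are the example hosts used later in this article; you will be asked for each host's root password once):

for host in 10.1.1.101 10.1.1.102 10.1.1.103; do
    cat /root/.ssh/id_dsa.pub | ssh root@$host 'cat >> /root/.ssh/authorized_keys && chmod 0600 /root/.ssh/authorized_keys'
done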

Running Dsh

Now we are ready to actually run dsh. Test access to the machines you specified in one of your dsh groups with: sudo dsh -M -w -g servers -- 'w'

You may be prompted to accept the remote host key if this is the first time you are logging in to the remote host(s). We have used the -w switch to ensure we process one host at a time, so that we can answer y to the host key question at each machine in turn. You will not be prompted for a passphrase, since it is cached in your local SSH authentication agent.

Assuming access works (you should see the output of the w command from all three hosts in your console or xterm), let's try running apt-get update and apt-get dist-upgrade on each host. This time, you will not be prompted to accept each remote host key:

sudo dsh -M -g servers -c -- 'apt-get update'

sudo dsh -M -g servers -c -- 'apt-get -y dist-upgrade'

You should see the output of the commands mixed together, since they all run concurrently. The -M switch helps in this regard, since it prepends the machine name to each output line. When the commands finish, all machines in the host group servers should be updated. Because the dist-upgrade may occasionally prompt for input (despite the -y), or even abort with an error, you may wish to run it first with -u and -s, coupled with the dsh -w switch, to show which packages would be upgraded while simulating what will happen if the apt-get command is actually run (sudo dsh -M -g servers -w -- 'apt-get -u -s dist-upgrade'). If you only want to download the packages to be updated, without installing them, use the -d switch to apt-get.
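
Putting those options together, a cautious upgrade sequence might look like this (a sketch based on the switches described above):

sudo dsh -M -g servers -w -- 'apt-get -u -s dist-upgrade'     # dry run, one host at a time
sudo dsh -M -g servers -c -- 'apt-get -d -y dist-upgrade'     # download the packages only
sudo dsh -M -g servers -c -- 'apt-get -y dist-upgrade'        # perform the actual upgrade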

Sample Dsh Session

In this example, the /etc/dsh/group/servers file looks like this:

10.1.1.101
10.1.1.102
10.1.1.103

dmaxwell@lab0:~$ sudo ssh-add
Could not open a connection to your authentication agent.
dmaxwell@lab0:~$ ssh-agent bash
dmaxwell@lab0:~$ sudo ssh-add
Enter passphrase for /root/.ssh/id_dsa:
Identity added: /root/.ssh/id_dsa (/root/.ssh/id_dsa)
dmaxwell@lab0:~$ ssh-add -l
1024 dd:a9:8f:bd:e8:9e:7e:5d:cc:10:9f:e4:a4:c2:52:22 /root/.ssh/id_dsa (DSA)
dmaxwell@lab0:~$ sudo dsh -M -w -g servers -- 'w'
10.1.1.101:  12:58:51 up 239 days,  3:28,  1 user,  load average: 0.00, 0.00, 0.00
10.1.1.101: USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
10.1.1.101: dmaxwell pts/0    lab0.turinglabs  03Mar05 16:11m  0.00s  0.00s -bash
10.1.1.102:  12:59:09 up 324 days, 18:07,  1 user,  load average: 0.00, 0.00, 0.00
10.1.1.102: USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
10.1.1.102: dmaxwell pts/0    lab0.turinglabs  15Mar05 16:12m  0.06s  0.06s -bash
10.1.1.103:  12:57:06 up 234 days,  3:10,  1 user,  load average: 0.00, 0.00, 0.00
10.1.1.103: USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
10.1.1.103: dmaxwell pts/1    lab0.turinglabs  27Apr05 15:47m  0.01s  0.01s -bash
dmaxwell@lab0:~$ exit

Technorati Tags: , , , ,

Saturday, March 11, 2006

What Should be in a CS Curriculum?

I came across Dan Zambonini's blog posting "What would you put in a Computer Science Curriculum?" recently. Definitely interesting, but I think he misses the point of a CS degree. It's not meant to teach all sorts of low-level, specific, real-world tasks.

The problem is that for every list of skills Dan can come up with (and a few are heavily slanted towards his specialties, I noticed), I can come up with another that would arguably be just as useful to someone in the "real world" trying to get a programming job. The true point of a CS degree is to teach the underlying theory of computation, algorithms, networks, and so on. And don't forget the math - all of the aforementioned topics require a good understanding of mathematics. Once the new grad starts working, they can apply what they learned and will pick up what they need over time (and will do it much more quickly than if they had gone to a "trade school" CS program like Dan describes).

Let's face it - some real-world skills require an understanding of theory: algorithm optimization, for example, or crafting a parser for a domain-specific language. If you are looking for real-world experience in a college grad, your hiring practices need adjustment - or go ahead and hire someone with an MIS degree, who learned very little math and theory but plenty of Java. Just know that you may be getting someone incapable of venturing outside the domain you hired them for.

As someone who graduated from a CS program myself (UMass Amherst), years later I am glad my curriculum included plenty of theory. That's not to say I didn't program - I did plenty of that while I was there, too. My senior course on compiler design covered theory (it presumed discrete math, the theory of languages and grammars, and algorithm design), but also included a semester-long project building a real compiler. We built it up in pieces as the semester progressed, learning each part as we needed it.

To be fair, Dan's list of topics does include some good basics - like database theory and writing skills. I would be less worried about experience with language "X" - again, someone well-versed in theory can pick up a new computer language very quickly (this article talks about the flawed way in which many students are taught to "program" - by learning a specific language's syntax, rather than learning to solve problems with computers). If I were hiring a programmer fresh out of a CS program, here is what I would like to see at a minimum:
  • Calculus, discrete math, statistics
  • Technical writing
  • Algorithm design and analysis
  • Computer architecture
  • Theory of Networks
  • Theory of computer languages
  • Theory of computation
  • Some experience with the various types of languages by domain - functional, logic, procedural
  • Database theory
  • Software engineering
  • Computer Security


Technorati Tags: ,

Friday, March 10, 2006

Using Qemu and Kqemu Under a Debian or Ubuntu Linux Host

In this article, I'll show you how to get the excellent FOSS emulator qemu and its accelerator module, kqemu, up and running on a Debian Sarge or Sid system (you won't find Debian packages for kqemu, and the packaged version of qemu tends to be out of date). I'll cover networking setup, and present a script that automates much of the qemu startup.

Preparation

First, you need the 2.6 or 2.4 kernel sources in /usr/src/linux. To get up and running more quickly, install the kernel source for the version you are currently running - "uname -r" will tell you what that is. I'll use the 2.6.8 kernel as an example, since that's what my Debian system is currently running.

apt-get install kernel-source-2.6.8
cd /usr/src
tar xjf kernel-source-2.6.8.tar.bz2
ln -s kernel-source-2.6.8 linux

Despite what is on the qemu website, I've found that the kqemu accelerator does not compile if you have not at least compiled a kernel in the kernel source directory. The easiest way to do this is to use the installed config file in the /boot directory - in this case, it is /boot/config-2.6.8-1-386 (assuming you downloaded the kernel source for the version of the kernel you are running).

cp /boot/config-2.6.8-1-386 /usr/src/linux/.config
cd /usr/src/linux && make oldconfig && make bzImage

Qemu will use SDL if you have the SDL devel package:

apt-get install libsdl1.2-dev

Also, you'll need sudo to run the script below as a non-root user.

apt-get install sudo

Don't forget to add your user to /etc/sudoers using the "visudo" command - just copy the one line in the default file for the root user, changing "root" to your user name.
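
For example, the resulting line might look like this (using my user name - substitute your own):

dmaxwell ALL=(ALL) ALL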

On a stock Ubuntu system, you'll need to install GCC and tools (sudo is installed already):

sudo apt-get install build-essential automake autoconf

Building & Installing Qemu and Kqemu

Download qemu-0.8.0.

Download kqemu-0.7.2 (the accelerator) from the same site. Note that this is not free software, as in FSF free, so don't use it if this bothers you.

Untar qemu somewhere, then untar kqemu inside the top-level qemu-0.8.0 directory.

Do

./configure make sudo make install from within the qemu directory. qemu and its associated binaries will be installed in /usr/local/bin.

Note: You need GCC version 3.x to build qemu; GCC version 4.x will not work. You can pass a different compiler to the configure script as './configure --cc=/usr/bin/gcc-3.4'. Run 'apt-cache search gcc' to view a list of installable gcc versions. Further complicating things, if you are running a kernel compiled with gcc 4.x, you need to re-compile kqemu with gcc 4.x after compiling qemu with gcc 3.x. You can do this by re-running the 'configure' script without the '--cc' option and with the '--disable-gcc-check' option from the main qemu source directory, then typing 'cd kqemu && make clean && sudo make install'. This will build and install kqemu using gcc 4.x.
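
Taken together, the build on a gcc 4.x system might look something like this (a sketch of the steps above; the gcc-3.4 path is an example):

./configure --cc=/usr/bin/gcc-3.4               # build qemu itself with gcc 3.x
make
sudo make install
./configure --disable-gcc-check                 # re-run configure with the default (4.x) compiler
cd kqemu && make clean && sudo make install     # rebuild just kqemu to match the kernel's compiler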

Post-Install Setup

Do "modprobe kqemu" to load the accelerator. You may need to create kqemu's device file first (NOTE: in the latest version of kqemu (0.7.2) the kqemu device and permissions are created and set properly for you during the 'make install'):

mknod /dev/kqemu c 250 0
chmod 666 /dev/kqemu

On Debian, you can have the kqemu module loaded automatically at boot by adding the line "kqemu" to /etc/modules. You'll need the "tun" module as well, for networking support:

cat >> /etc/modules
kqemu
tun
^D

You may need to load the tun module with "modprobe tun"; check that it is not already loaded with "lsmod | grep tun".

Creating a Disk Image & Running Qemu

Now you're ready to get started. Create a 10GB qemu disk image in some spare directory:

qemu-img create disk.img 10G

The nice thing is that this 10GB virtual disk is a sparse file, so it only uses as much disk space as qemu has actually written to it. The installer sees a flat, 10GB disk, though.
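
You can see the difference between the apparent and actual size like this (just an illustration):

ls -lh disk.img     # apparent size: 10G
du -h disk.img      # blocks actually allocated, far smaller at first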

Download any OS ISO image. I used the Debian 3.1 net installer and renamed it "deb.iso". I have a script "run.sh" (below) that will either boot from the cdrom ISO image, or from the qemu disk image, depending on how it is called. Just run the script from the same directory as the deb.iso:

run.sh install - Boots from the cdrom/ISO file - use to install
run.sh run - Boots from the disk image - use after installation

The "-m" switch to the qemu command controls virtual memory, 128MB is the default, but I put it in anyway to remind me how to increase qemu's memory allocation if needed. Feel free to change it.

My script sets up a networked environment using /dev/net/tun, so you may need to create this device as follows:

mknod /dev/net/tun c 10 200

The host IP is 172.20.0.1, and the guest virtual IP should be something on the same network (by default, Qemu sets this up with a class B netmask - 255.255.0.0). I use a static address of 172.20.0.2, but you could just as easily run a DHCP server on the host OS and use that to assign an IP address, gateway, and DNS client settings. I enable forwarding and NAT in run.sh, so the guest OS session should have Internet access through the host, as long as the host OS is connected to the Internet.
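
For example, a static setup inside a Debian guest might use an /etc/network/interfaces stanza like this (these addresses match the ones above; DNS settings are up to you):

auto eth0
iface eth0 inet static
    address 172.20.0.2
    netmask 255.255.0.0
    gateway 172.20.0.1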

Run a DHCP, web, FTP, NFS, or any other network server on the host and access it from the guest OS as needed. You can also SSH back and forth between OS's. If you install the package dnsmasq on the host OS, you can use that as both a DHCP server and a DNS caching forwarder, allowing fully automatic, install-time network configuration of your Qemu guest OS.
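
A minimal dnsmasq setup for this might look like the following sketch - the interface name tun0 and the address range are assumptions, so check which tun interface qemu actually creates on your host:

sudo apt-get install dnsmasq
# then, in /etc/dnsmasq.conf:
#   interface=tun0
#   dhcp-range=172.20.0.10,172.20.0.100,12h
sudo /etc/init.d/dnsmasq restart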

Taking Screenshots of the Qemu Session

Qemu is great for making screenshots of installers. I use ImageMagick's "import" command from the host OS:

import -frame file.png

Then click on the window to capture - it captures just the window you click on into the file "file.png". Use ImageMagick's "display" command to view them quickly.
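
If you're taking a series of screenshots, a timestamped filename saves some typing (just a suggestion):

import -frame "qemu-$(date +%H%M%S).png"
display qemu-*.png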

Tips & Scripts

Once qemu is up and running, the qemu monitor will be shown in the console window you actually ran qemu from. You can change cdroms from this prompt, just type Ctrl-Alt to escape qemu itself, then type "change cdrom deb2.iso", or whatever the second ISO image filename is. You can also switch virtual consoles in the guest OS by typing "sendkey ctrl-alt-f2", for example, in the qemu monitor window.

In any qemu session, use "ctrl-alt" to release the keyboard and mouse grab and escape out to the host OS.

You should have this two-line shell script as /etc/qemu-ifup. It won't be present if you compiled from source, but may already exist if you have ever installed the qemu Debian binary package:

#!/bin/sh
sudo -p "Password for $0:" /sbin/ifconfig $1 172.20.0.1

Here is run.sh:

#!/bin/sh
# Set the mode for the tun device,
# turn on IP forwarding,
# and setup NAT for guest OS connections
sudo -p "Password for sudo access:" chmod 666 /dev/net/tun
sudo sysctl -n -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -o eth0 -s 172.20.0.0/16 -j MASQUERADE

if [ -z "$1" ]
then
    echo "Usage: `basename $0` [install|run]"
    exit 64
fi

case "$1" in
    "run" ) qemu -boot c -net nic -net tun -cdrom deb.iso -hda disk.img -monitor stdio -m 128;;
    "install" ) qemu -boot d -net nic -net tun -cdrom deb.iso -hda disk.img -monitor stdio -m 128;;
    * ) echo "Usage: `basename $0` [install|run]";;
esac

exit 0

Technorati Tags: , , ,

Dubious Network Scanners

If you click on the “Scan Now” link at this website, you get some HTML output from a simple portscan conducted against your connecting IP address (at least, the IP that makes the connection to Sygate’s web server). It appears to be just a basic nmap scan, but the report says things like “SMTP is used to send email across the internet. This allows an attacker to verify user accounts on your system, send anonymous (spam) email, or even access files on your hard drive”. Of course, the company offering this “service” sells personal firewalls, so this kind of alarmist nonsense doesn’t surprise me.

Interestingly, for those using a SOCKS proxy, Tor, or some other kind of Internet anonymizer, the scan will actually target the source IP of the last hop that made the connection to Sygate’s web server - not your own machine.

Technorati Tags: ,