Wednesday, May 31, 2006

Article Roundup

TechNewsWorld tells us how a low security risk is still a selling point for Linux.

This is very cool if you are a die-hard Emacs user - Bill Clementson discusses Conkeror in Firefox for Emacs Users. Throw away your mouse for good. Here's a screenshot showing the numbered links for easy keyboard navigation:


Debian Administration tells us how procmail can help handle Debian mailing lists easily.

The title says it all - Microsoft launches security for Windows. Apparently, MS now thinks they should "protect people who use its Windows operating system from Internet attacks". They are just thinking about this now? Worse, it costs $49.95 per year for up to three computers. They should be giving this away for free, since they're the ones that have been subjecting the world to their horridly insecure software for years.

Kerneltrap gives us a two-part interview with the many developers at the 2006 OpenBSD Hackathon.

Simon Cozens shares the contents of his bin directory.

Tuesday, May 30, 2006

Perl and Network Security Auditing

Network Auditing on a Shoestring tells the story of two auditors who wrote a custom, web-enabled, database-backed front end in Perl to handle the task of auditing share permissions on a 2000-user Windows network.

I have also found Perl tremendously useful for network auditing, although I tend to use it for data-munging. One of the modules I wrote is NetAddr::IP::Obfuscate, which I use to generate obfuscated Nessus reports, but which will work on any text file with IP addresses in it. I've also posted a Perl script I use to do bulk reverse-DNS lookups.
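
The obfuscation idea is easy to sketch in shell. This is not the NetAddr::IP::Obfuscate module itself, just a rough sed analog (the file name report.txt is invented for the example) that maps every dotted quad into 10.0.x.x while keeping the last two octets:

```shell
# A sample report line; in practice this would be a full Nessus text report.
printf 'open port on 192.168.1.5 and 172.16.20.30\n' > report.txt

# Map every IPv4 address into 10.0.x.x, keeping the last two octets so
# hosts remain (mostly) distinguishable. Unlike the real module, this
# can collide when two networks share their last two octets.
sed -E 's/[0-9]{1,3}\.[0-9]{1,3}\.([0-9]{1,3}\.[0-9]{1,3})/10.0.\1/g' report.txt
# -> open port on 10.0.1.5 and 10.0.20.30
```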

One quick story - I had a client who suspected one of his network admins was reading other employees' email, using Outlook to open and view MS Exchange mailboxes. They wanted to know who had been accessing certain mailboxes and when. I had them send me daily event logs, exported as text, then used Perl to parse the logs, looking for the mailbox events and specific user accounts, finally generating a CSV report (of course, Exchange can't distinguish between accesses of a mailbox, calendar, journal, etc., but the data was still useful).
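
That log-munging step can be sketched in awk just as easily as Perl. Everything below is hypothetical - the tab-separated field layout (date, time, event ID, account, mailbox) and the event ID are stand-ins for the example, since a real exported Exchange log has its own columns:

```shell
# Build a tiny stand-in for an exported, tab-separated event log.
printf '05/01/2006\t09:12\t1016\tDOM\\jsmith\tMailbox - Jane Doe\n' >  events.txt
printf '05/01/2006\t09:40\t1000\tDOM\\jsmith\tMailbox - Jane Doe\n' >> events.txt

# Keep only the mailbox-access events for the account in question and
# emit date,time,mailbox as CSV.
awk -F'\t' '$3 == "1016" && $4 == "DOM\\jsmith" { printf "%s,%s,%s\n", $1, $2, $5 }' events.txt
# -> 05/01/2006,09:12,Mailbox - Jane Doe
```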


Saturday, May 27, 2006

Comments on "Does Installing SSH Enable More Exploits Than it Solves?"

There is an article up at InformIT by John Tränkenschuh titled SSH Issues: Does Installing SSH Enable More Exploits Than it Solves?. The basic premise of the article is that SSH usage is enabling security holes, in most cases quietly, that otherwise would not have been present. The specific example given is that of SSH agent forwarding, and how compromise of the agent-forwarding host would result in intruder access to any of the end systems. While I agree that SSH can be configured in a way that makes it less secure, I don't think ceasing use of SSH is the answer (the author never explicitly states that this is his goal, but the article's title certainly suggests it). SSH can be configured securely, but like any other complex security system, it takes a little effort.

I think most admins would accept the basic premise that remote connectivity is a must in today's always-on IT environment. Widespread adoption of SSH (and OpenSSH in particular) has been responsible for a welcome downturn in the use of telnet and the Berkeley r-tools (rsh, rlogin, etc.). While most admins would also agree that discontinuing use of any remote connection protocol would enhance security, I think it is unrealistic to assume suddenly discontinuing SSH usage would fix anything. Most sysadmins would find a way to do work remotely, whether by falling back to insecure protocols, or by using VPN clients. In any case, the same or worse risks would be present as with SSH.

Interestingly, the compromise of the intermediate agent-forwarding host in the author's example may not be the worst security risk in that case - the admin's client may be a weak link in the authentication chain if it has, say, SSH root login and password authentication enabled. The unsophisticated attacker who compromises an admin's home workstation and non-root user account with an SSH brute-force login script would be able to jump to other systems by simply scanning a shell history file and setting a few environment variables (assuming an ssh-agent running that had cached credentials). The same problem exists with home VPNs used by telecommuters. A compromise of the VPN client while the VPN tunnel was active would lead to corporate LAN access. It's why companies like Check Point provide VPN clients that can be remotely configured during connection initiation to disallow any non-VPN traffic while a tunnel is active.
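
To make the "scanning a shell history file" step concrete, here is a sketch (the file name is invented) of how little work that harvest takes; with a live ssh-agent, pointing SSH_AUTH_SOCK at the agent's socket under /tmp is the only other step:

```shell
# Simulate a captured shell history from a compromised account.
printf 'ls -l\nssh root@db1.internal\nssh backup@web2\nssh root@db1.internal\n' > hist_sample

# Harvest the unique ssh targets -- the history-scanning step
# described above takes one pipeline.
grep -oE 'ssh [A-Za-z0-9._@-]+' hist_sample | awk '{print $2}' | sort -u
# -> backup@web2
#    root@db1.internal
```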

Anyway, raising awareness of insecure SSH usage is certainly beneficial, so in that respect, I think the article is a good one (it is the reason I wrote Five-Minutes to a More Secure SSH, after all). I think the title could have been a bit less sensational, however.


Friday, May 26, 2006

Article Roundup

Reporting vulnerabilities is for the brave. I talked about this recently.

From the Linux Devcenter, How Shellcode Exploits Work.

How Not to Manage Geeks.

RMS enlightens us on Sun's recent decision to allow binary distribution of their JDK with the various Linux/Unix distributions. Still not free software (in fact, it will be in Debian's non-free archive); we still have the Java Trap to worry about.

Google's Picasa is now available for Linux. Ian Murdock comments.

Ubuntu 6.06 Release-candidate screenshots. OSDir also has a set of screenshots up for the latest Kubuntu RC.

Tuesday, May 23, 2006

Streamlining Iptables for FTP and SMB/CIFS Traffic

There is an article at nixCraft on Connecting a Linux or UNIX system to Network attached storage device. The article itself is a good one, except for the part about iptables firewall rules to permit FTP and SMB/CIFS traffic between the Linux client and NAS. The errors are common misconceptions, so I thought I'd mention them, and show the standard iptables usage.

First, iptables, along with all modern firewalling systems, is a stateful firewall. That means it will record the "state" of new network connections, and allow future packets that are related to or part of an established connection to traverse the firewall rules. While iptables can be used as a simple packet filter, it usually is not, since using it in this way results in more complex, less secure firewall rulesets. See the resources at the end of the post for more details.

Anyway, the article in question says this:

Please note that when configuring a firewall, the high order ports (1024-65535) are often used for outgoing connections and therefore should be permitted through the firewall. It is prudent to block incoming packets on the high order ports except for established connections.

This is actually information from the Securing Samba Howto. It is misleading, in that if you are using a stateful firewall, you don't need to allow return traffic on high ports. It will be allowed by a properly configured stateful ruleset.

Next, the list of ports the authors recommend opening is too broad. For FTP and Samba/CIFS, the following ports are used:

TCP 21 - FTP control
TCP 20 - FTP data
TCP 135, 139, 445 - smbd
UDP 137, 138 - nmbd
We don't care about the FTP data connection (TCP 20), since it will be handled by iptables' FTP connection helper. The UDP ports 137 and 138 are used for domain browsing, and are not needed for mounting remote SMB shares. Of the three TCP ports, 445 is used by the smbmount(8) command, with a fallback to port 139 if 445 is not available.

In the network diagram given in the article, there is a Linux client with a (presumably) host-based firewall, directly connected to a NAS box. The iptables rules given for FTP and SMB/CIFS communication between the two boxes have a lot of unnecessary cruft in them, including the TCP high ports. Most host-based firewalls allow all outbound traffic, so you can simply do this:

iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state NEW -j ACCEPT
This will allow all outbound traffic from the Linux host itself, and statefully allow other outbound traffic as needed. The use of an unqualified state "NEW" here allows all but invalid packets. In fact, the INPUT chain, which is hit by packets coming into the Linux host directly (including replies to our outbound traffic), can be safely closed off to all but established or related packets in this instance:

iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j DROP
Just remember that you have blocked all (state NEW) inbound traffic here, so don't do this remotely!
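
If you do need to experiment with rules on a remote machine, one common safety net (my own suggestion, not from the original article) is to schedule an automatic policy reset with at(1) before loading anything restrictive, then cancel the job once you have confirmed you can still log in:

```shell
# Schedule a flush of the INPUT chain five minutes out (requires root
# and a running atd), then load and test the new ruleset.
echo 'iptables -P INPUT ACCEPT; iptables -F INPUT' | at now + 5 minutes

# If you can still get in, list the pending job and remove it:
atq
# atrm <job-number>
```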

If you want to filter outbound traffic explicitly by port, the following OUTPUT chain rules will allow FTP and SMB/CIFS mounts from the Linux host to the NAS box (I assume you have the IP address of the NAS box in the shell variable $NAS). It doesn't make sense to specify a source address here, since the OUTPUT chain is only hit by packets leaving the local host:

iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p tcp -d $NAS --dport 21 -m state --state NEW -j ACCEPT
iptables -A OUTPUT -p tcp -d $NAS -m multiport --destination-port 139,445 -m state --state NEW -j ACCEPT

One note, don't forget to set the default chain policies to "DROP" anytime you use iptables:

iptables -P INPUT DROP
iptables -P OUTPUT DROP
iptables -P FORWARD DROP
Finally, if you have a modular kernel (as in any Debian-based installation), you will have to load the FTP connection helper somewhere near the top of your firewall script:

/sbin/modprobe ip_conntrack_ftp
Related links:

Linux Iptables Firewall Scripts, TCP/IP and Linux Network Security with Iptables, Using Samba as a File Server, PDC or Domain Client, Accessing Windows Shares From a GNU/Linux Desktop, Iptables tutorial


Thursday, May 18, 2006

Article Roundup

Thoughts on Perl in the Enterprise.

Interesting take on the reality of Enterprise Software.

Some software humor, and more.

Why good package management is important.

Tom Adelstein on the New Era of Linux System Administration.

Automating Debian Etch installs via pre-seeding (part one).

Comments on Debian, Ubuntu, and the Future of Linux

There's an interesting blog post by Stephen O'Grady of RedMonk about Debian, Ubuntu and the Future of Linux. Basically, the author says that Debian is poised to become much more relevant in the large-enterprise Linux space, taking its place alongside Red Hat and SUSE, with a little help from Ubuntu:

...from where I sit it seems entirely possible that Ubuntu and its corporate parent Canonical could be tabbed as the corporate interface into the Debian community. It's difficult to imagine large ISVs such as IBM or Oracle dealing effectively with the Debian community, due to the cultural gap alone. Canonical, however, would seem to be an effective bridge between the two parties, having as they do one foot solidly in the Debian community with the other in business models the ISVs would understand.

I agree with part of this - right now, Debian and Ubuntu don't exist without one another. What's good for one is generally good for the other. The post may also prove prescient: Sun recently announced it was going to support Ubuntu.

On the other hand, the reality of Debian is that it is probably used in many more "large enterprises" than we know (I know of one Fortune-100 firm that has a large Debian server deployment, although they don't make it publicly known). While it's easy for Red Hat and Novell to track large-scale Linux deployments, it's nearly impossible for Debian, unless that information is offered. For server deployments, Debian Stable is, well, seriously stable. This is obviously something large companies look for in server deployments. As far as big-name support goes, HP has been offering Debian support on their own hardware for some time now - much as Sun will be providing support for Ubuntu on their hardware. There are also lots of smaller firms and independent consultants (many of them Debian developers) that provide Debian support.


Wednesday, May 17, 2006

Stealth and Security With Filtering Bridges

There is a good tutorial on bridging under Linux at Nepotismia. For those that don't know, bridging is a way to transparently connect and forward data between two networks. Because they operate at layer 2 (the data-link layer), bridges can operate independently of the protocols above them. Here is another good overview of Linux bridging. Pure bridging devices have largely been replaced by switches, but dedicated bridges can still be useful when they are combined with packet filtering. I've used bridging firewalls in a few situations over the years.

In one, a client had a proprietary application housed on a dedicated server (Windows-based) that was supplied and pre-configured by the vendor. One interface on this device had a public address used for remote management, and the other interface had to be connected to their LAN. Despite perimeter firewall rules that limited access to the device from the Internet, the customer did not trust the security of the device - it was basically an unknown risk to them. What we ended up doing was placing a bridging firewall between the device and the rest of the LAN (really the switch port it was connected to), allowing transparent filtering of packet flow to and from the device. The bridge in this case was a m0n0wall (a great firewall in its own right, also with bridging capability) on a Soekris box.

In another, I was doing remote penetration testing and had to satisfy the client's demands that the testing platform was segmented from the rest of our network, so that any data collected during the test could be kept secure. In this instance, I opted for a spare PC running OpenBSD, configured as a filtering bridge. The advantage of doing this was that it did not impact the layout of the LAN, as the bridge had no IP addresses - basically an "invisible" firewall. The testing server was allowed full outbound access, but no inbound network access to the server was permitted.

Bridging firewalls also turn out to be useful in honeynets. All in all, a very useful addition to your networking toolkit.


Monday, May 15, 2006

Article Roundup

Taosecurity on why Prevention Can Never Completely Replace Detection. The IPS hype.

Using Ftester to test your Linux firewall. This is useful with any firewall, not just Netfilter.

IBM Developerworks brings us SELinux From Scratch.

Becoming productive quickly as a Unix developer.

More Emacs goodness - Emacs/ integration.

Nothing whatsoever to do with sysadmin or security, but laugh-out-loud funny.

Sunday, May 14, 2006

Is Anti-virus Software Really Necessary?

There is a blog post from May 9th titled Linux Security - The Illusion of Invulnerability over at Kaspersky Lab's blog. This quote sums up the theme of the post:

At the Kaspersky stand we talked to a lot of visitors. Pretty soon, it dawned on us exactly what the biggest threat to Linux systems is: the almost overwhelming belief in the invulnerability of Linux

I think they have it wrong - it's not a belief in invulnerability, and it's not specific to Linux. It's the belief that "Yeah, it could happen to me, but it probably won't" - and you could imagine users of OS X, Windows, or any OS saying exactly that. But the quote presupposes that there is a need for anti-virus software at all. Sound crazy? Perhaps not. Here are a few questions to think about:

  • For some reason, the metric used to judge anti-virus products is how quickly they release signature updates to counter new threats. This seems backwards to me. Has any anti-virus vendor ever done research on how many infections their software has prevented, and what the impact could have been?
  • More to the point, is anti-virus software really a valuable part of an IT security policy, or are there better ways of preventing viruses/malware?
  • Why does it seem that despite the entire world running Windows desktops, and almost all of those running some form of anti-virus software, there are still major virus outbreaks?
  • Does it help to divide malware threats into known and unknown categories? Clearly, antivirus software protects against the former, but not the latter.
  • Does reliance on a single security product give a false sense of security? For example, a common misconception is that a firewall is all one needs for protection against external network threats. The truth is much more complicated than that, as most security practitioners know.

This question of whether or not you really need anti-virus software has been answered quite well before:

If an expert proclaims you need antivirus software to protect you from a virus, you can counter with the following argument:

If we'd turned off automatic macro execution in Word before Melissa came along, then our PCs wouldn't have gotten infected. If we'd turned off Windows Visual Basic Scripting before ILoveYou came along, then our PCs wouldn't have gotten infected. This means our PCs could have protected us even when antivirus software failed to do its job. Perhaps we don't need to update our antivirus software so often -- maybe we really just need to update our antivirus experts.


Comments on Switching From Solaris to Linux

There's an interesting post and discussion at Blog O' Matty about why people are switching from Solaris to Linux. I suppose it's a matter of familiarity, but I could never get used to the Solaris way of doing things. Why can't Sun just ship their OS with all the GNU and other packages like OpenSSH that everyone just installs anyway?

Redhat Linux ships and provides regular updates for numerous opensource software (e.g., postgres, MySQL, Apache, Samba, Bind, Sendmail, openssh, openssl, etc), where Sun keeps trying to sell customers the Sun Java One stack, "modifies" an opensource package and diverges the product from what is available everywhere else, and fails to provide timely bug fixes and security patches for the opensource packages that are shipped (Apache, MySQL and Samba are perfect examples) with Solaris.

I agree that the support for most free/open source software under Solaris seems incomplete at best. Sun's SunSSH is a perfect example of this. Red Hat is very involved in the open source community, providing upstream patches regularly to major projects. The software they ship with RHEL and Fedora is also reasonably current.

As for package management:

5. Managing applications and patches on Solaris systems is a disaster, and redhat's up2date utility is not only efficient, but has numerous options to control the patch notification and update process...

6. Staying on the cutting edge with Nevada is difficult, since there is currently no way to easily and automatically upgrade from one release of Nevada to another. On Fedora Core servers, you can run 'yum upgrade' to get the latest bits. Having to download archives and BFU is tedious, and most admins don't want to spend their few spare cycles BFU'ing to new releases.

Too true - yum is very nice - although I think Fedora is a bit too bleeding edge for production server use. For those admins who have not discovered Debian's package management, you don't know what you are missing. I'm not sure why admins feel the need to waste all sorts of time manually upgrading and patching systems anymore. Debian stable and RHEL automate these mundane tasks quite nicely, and both provide timely security updates to reasonably up-to-date software.


Friday, May 12, 2006

Article Roundup

Given the seemingly abysmal state of network security these days, especially web application security, I thought I'd share an old (in Internet terms), but useful link for those developing Perl CGI applications with an eye towards security.

A very good article explaining why Lisp is the way it is and why you, the programmer, should care. Lisp advocacy with a different approach.

Debian-Administration (an excellent site, BTW) gives us Automating new Debian installations with preseeding.

Linux Format interviews Novell's Greg Mancusi-Ungaro, the Director of Marketing for Linux. One of his comments is interesting:

LXF: Do you think that any company can be the Linux equivalent of Microsoft, given that it's an open source OS and people can do pretty much what they want?

GMU: Well, if we ever woke up one day and said 'Wow, Novell is the Microsoft of Linux' or 'Red Hat is the Microsoft of Linux', then the Linux movement would be over. What you want to say is, Novell is the enabler - the company that's enabling Linux to be successful. But Linux is largely held in trust by the community, and Novell is making Linux work for large enterprises. That's a very different thing. Microsoft controls everything; it can make nations change their mind, and not in a good way I don't think.

I agree with this completely - No one company has a lock on Linux (the kernel proper), nor any of the most useful server, development or desktop apps, like GCC, or Apache. I suppose you could lock yourself into one Linux vendor by becoming completely reliant on a proprietary app that only supported one Linux distribution, but then that would be the fault of the application, not the operating system.

Wednesday, May 10, 2006

Security Research and Computer Crime - Where do we Draw the Line?

This is interesting - the case of Eric McCarty, a security researcher and sysadmin charged by Federal prosecutors last month with "knowingly having transmitted a code or command to intentionally cause damage" to the University of Southern California's applicant website (I noticed the FBI press release uses the word "sequel" instead of SQL. I hope that wording didn't come from the complaint itself...).

Apparently, McCarty exploited a SQL injection flaw to access student data (which included social security numbers and dates of birth) in the database backing USC's website. He then notified SecurityFocus via email, who notified USC of the vulnerability. USC shut their site down for two weeks while it was being fixed (my guess is the "damage" comes from the fact that USC had to take their applicant website offline, since McCarty didn't do anything malicious with the information). Here is the text of the statute he is alleged to have violated (see section (5)(A)(i)).

The case, and others like it, show the ethical conflict involved in some computer crime prosecutions. In particular, this case reminds me of Randal Schwartz's case from way back in 1993. The same issues were raised back then - in that case, Schwartz was running the crack program to disclose weak passwords, but without authorization. In the end, he was convicted of three state felony charges.

Unfortunately, the law is pretty clear in these cases. It appears McCarty violated Federal law, a felony that could land him up to ten years in jail. This seems quite excessive, however, given McCarty's intent. Perhaps some exceptions are needed in various Federal and state computer crime statutes to allow for legitimate security research, although the question of what is legitimate and what is not could be unclear. Sure, USC had to take down their site for two weeks (does anyone else but me think that's a long time to fix a SQL injection bug?), but just think of how long it would have been down after a real compromise. While such an exception might encourage legitimate security research, it could also become a loophole that crackers with malicious intent could use to escape prosecution. "Really, detective, I was just testing their security". Web application security testing can also be dangerous, unintentionally resulting in database or web server outages. For now, the risks of doing "stealth" research just are not worth it, whatever the intent.


Tuesday, May 09, 2006

Article Roundup

Three hackers from the University of Toronto have developed distributed proxy software that is set to be released at the end of this month. Called Psiphon, it is meant for use inside countries with restrictive Internet access policies.

NetBSD developer Hubert Feyrer describes a cool way to use qemu during sysadmin training. See also my qemu howto.

Perl surprises some at the Coverity open source code analysis defect project by having the fewest defects of the three LAMP languages - Python, PHP and Perl. Here is the quote:

...surprised, however, by the performance of the Perl language. Of the LAMP stack, Perl had the best defect density well passed standard deviation and better than the average, Chelf said. Perl had a defect density of only 0.186. In comparison Python had a defect density of 0.372 and PHP was actually above both the baseline and LAMP averages at 0.474.

Speaking of Perl, Leo Lapworth brings us The Ultimate Perl Module List.

Interesting article and discussion on Why Business Needs More Geeks.

How even geeks can stay in shape.

Is the US lagging behind in open source adoption?

Sunday, May 07, 2006

It's the Other Apps, Stupid

I came across this funny blog post - Emacs Key Bindings Make You Retarded. Of course the post is decent satire, but I think it's the other apps' key bindings that make you retarded. Really. It drives me crazy when I'm in Emacs after having used vi for a few agonizing hours on some lame, minimalist system and "Esc :wq [Enter]" gives me a Lisp evaluation error. So I just try to do everything in Emacs (including my blog posts). It's easier that way. For those times when I can't be in Emacs, Firefox can get Emacs key bindings with a few tweaks (easiest under GTK+/Gnome), and OpenOffice.org's key bindings are completely customizable (Tools->Customize->Keyboard).


Friday, May 05, 2006

Article Roundup

Here's an article about the next version of Ubuntu being ready for the Enterprise. Of course, that phrase "ready for the Enterprise" is over-used marketing hype (what does it really mean?). I suppose if it means that Ubuntu is stable, well-supported, with predictable release cycles, then I would wholeheartedly agree.

The Firefox power-users guide. A great list of tips and useful plugins.

Does Wardriving matter? Is wireless security as bad as we are told? I think it's probably worse. If you ran a business, would you make a corporate security breach public (absent statutory requirements, of course)?

This is hilarious satire.

Linux has gotten fat.

Thursday, May 04, 2006

Save the Internet - Sign the Online Petition

Head on over and sign the online petition to help prevent the large telcos from destroying net neutrality.

Tuesday, May 02, 2006

Accessing Windows Shares From a GNU/Linux Desktop

Many people work for companies that have a Windows-based infrastructure, while they have a GNU/Linux desktop. You could use something like VNC or rdesktop to gain access to files on Windows shares, but a couple of command-line utilities from the Samba suite are a nice option that let you gain access right from your desktop (for a more in-depth treatment of Samba, see the presentation Using Samba as a File Server, PDC or Domain Client).

Let's say your company has a Win2k3-based fileserver, call it "FILESRV", in the domain "DOM". The problem is that you have a Linux workstation, and don't have access to a Windows terminal server, or you don't have Administrative rights on the server in question, so you can't install VNC. The 'mount' command (which uses 'smbmount' under the hood when it is asked to mount a Windows share), 'smbclient', and OpenOffice.org's import/export filters provide a nice alternative. On a Debian-based system, run 'apt-get install sudo smbfs smbclient' to get the required command-line utilities described below, although Ubuntu systems will already have sudo.

File Transfers With Smbclient

The 'smbclient' command from the Samba suite of tools provides an FTP-like command-line interface to a Windows fileshare. To make it easier to use, and so we don't have to remember all the command-line options, we are going to define a shell alias. Put the following in your ~/.bashrc file (using .bashrc means this alias will be available to you in any interactive shell, not just a login shell).

alias files='smbclient //FILESRV/path -d 3 -A ~/.dom.txt'
Where the file ".dom.txt" in your home directory is in this format (see smbclient(1) for details):

username = username
password = password
domain = DOM
Some tips:
  • To make this alias apply to all users, put it in /etc/bashrc, not ~/.bashrc. Each user will still have to have their own ~/.dom.txt file, however.
  • The "-d 3" sets the debug level to something useful, in case the command fails - you can remove this once your alias works correctly.
  • Using the "-A" option like this keeps you from having to give a username and password every time you run the alias (but see the note about security, below).
  • You can put the share path in double quotes if it contains spaces, as in "//FILESERV/Tech Docs".
  • You can use an IP address in place of the server name if you want.
  • Running the 'alias' command will display a list of aliases that are currently defined.

When you are done editing ~/.bashrc (or after each editing session), you need to re-evaluate your .bashrc by running '. ~/.bashrc' or 'source ~/.bashrc'. Now when you run the command 'files', you should get dumped into an FTP-like interface from which you can get or put files from the remote share:

dmaxwell@stealth:~$ . .bashrc
dmaxwell@stealth:~$ alias
alias files='smbclient // -A ~/.dom.txt'
dmaxwell@stealth:~$ files
Domain=[DOM] OS=[Windows Server 2003 3790 Service Pack 1] Server=...
smb: \> get "ftp fix.doc"
getting file \ftp fix.doc of size 24064 as ftp fix.doc (195.8 kb/s)
smb: \> quit
dmaxwell@stealth:~$

Transparent Access With Mount

Accessing Windows shares with the mount command is a little more convenient, since once mounted, you can use 'ls', 'cp', 'mv', 'mkdir' and all the other Unix filesystem commands you are used to. Let's say that you want transparent access to the Windows fileshare noted above (//FILESRV/path). First, create a directory with something like 'mkdir ~/files'. This directory is where the remote filesystem will get mounted. Then add the following to your ~/.bashrc (we are using sudo to let us run the mount command with the required root privileges):

alias filemount='sudo mount -t cifs //FILESRV/path ~/files -o username=user,password=pass,workgroup=DOM'

Make sure your alias definition is on one line - your browser might wrap the display, above. 'DOM' here is the name of your Windows domain, 'user' and 'pass' are your Windows domain credentials. When you are done, source your .bashrc again, and run the command 'filemount'. You should now be able to access the files in "//FILESRV/path" from within your ~/files directory.

Accessing HOME$ Shares

Sometimes a Windows admin will map a hidden share to everyone's Windows desktop, to use as a private storage space. This is usually called the "HOME$" share, although you won't see it if you try to browse the network (in a Windows-sense). To access it, make a directory in your home directory, again as a filesystem mount point. I'll use 'z' as the name of the directory, since a lot of Windows admins map the Z: drive to user's home shares at login.

Add the following to your .bashrc:

alias homez='sudo mount -t cifs //FILESRV/HOME$ ~/z -o username=user,password=pass,workgroup=DOM'

That's it. When you source your .bashrc again you should be able to run the command 'homez' and have full access to your private Windows home share from within your ~/z directory.


A final word about security is in order. You may want to change the permissions on the "~/.dom.txt" and "~/.bashrc" files to 0600, to prevent any other non-root users on your workstation from reading the passwords stored in those files. Even though this is really just 'security through obscurity', the alternative, which is less convenient, is to type the password in every time you mount or access one of the remote shares. The smbmount(8) (and mount(8), since mount just passes its options onto smbmount) command supports a 'credentials' option that allows the use of a file when authenticating, much like the '-A' option to smbclient.
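
Here is what that credentials-file approach looks like in practice (the file name and contents are examples; for mount's 'credentials' option, write the lines as key=value). It keeps the password out of ~/.bashrc itself, though the credentials file still needs restrictive permissions:

```shell
# Store the domain credentials in a dedicated, mode-0600 file.
cat > smbcreds.txt <<'EOF'
username=user
password=pass
domain=DOM
EOF
chmod 0600 smbcreds.txt

# The mount alias then becomes (run via sudo, as before):
# mount -t cifs //FILESRV/path ~/files -o credentials=/path/to/smbcreds.txt
```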


These aliases will work on pretty much any (recent) GNU/Linux system under any shell, with Samba 3.x. I've written them with the Bash shell in mind, as it's the installed default on most of these systems. What may differ under other shells is the name of the shell startup file, and the syntax for defining shell aliases (although the above syntax will work under any Bourne-compatible shell).

If you have trouble getting the mount command to recognize the '-t cifs' option, try it with '-t smbfs' instead, but you may not be able to access Win2k3 shares easily if you do this. Some Unix systems' mount commands don't support passing the smbfs or cifs filesystem types onto smbmount - in this case, you can use smbmount(8) directly.
