Linux Gazette... making Linux just a little more fun!

Copyright © 1996-98 Specialized Systems Consultants, Inc.


Welcome to Linux Gazette!(tm)


Published by:

Linux Journal


Sponsored by:

InfoMagic

S.u.S.E.

Red Hat

Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com.

Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher.


Table of Contents
May 1998 Issue #28


The Answer Guy

The Graphics Muse Will Return


TWDT 1 (text)
TWDT 2 (HTML)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.


Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas.


This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com



 The Mailbag!

Write the Gazette at gazette@ssc.com

Contents:


Help Wanted -- Article Ideas


 Date: Wed, 1 Apr 1998 23:13:13 +0200
From: Tomas Valusek, tvalusek@vs.inext.cz
Subject: MIDI on Linux

I'm trying to understand how MIDI is supported on Linux. Can you write a detailed article about it?

Tomas Valusek


 Date: Thu, 02 Apr 1998 15:59:18 +0800
From: Kevin Ng, kng@HK.Super.NET
Subject: Patch troubleshooting

It is common nowadays for s/w to be delivered in the form of patches, which makes sense in terms of saving network bandwidth and time. However, as an end user, when a patch somehow fails, I don't know what to do, except e-mail the original author.

I'd therefore like to see an article describing how patches work and what to do when one fails.

Kevin (from Hong Kong)


 Date: Mon, 30 Mar 1998 16:51:09 -0800
From: Nate Daiger, daiger@newdream.net
Subject: HELP--Utility for changing NTFS partition sizes

I want to dynamically change my NTFS partition to install Linux, but can only find resizing utilities for FAT. If no such utility exists, is there a way to install Linux on an NTFS partition?

Nate Daiger


 Date: Tue, 31 Mar 1998 22:33:39 -0500
From: Ahmad Faiz, AFAIZ@cstp.umkc.edu
Subject: Printing with Linux

I'm running Red Hat 5.0 on my machine, and I've just bought an HP DeskJet 722C printer, but I couldn't get it to work. I asked around on the IRC channels, and so far everyone has answered that Linux does not support it -- is it a Windows-only printer?

If so, is it possible to write a driver for it? Or does anyone know where I can get my hands on the driver (if it's already been written, of course)? I would love to try to write one, but unfortunately I'm new to Linux and to programming.

any help would be appreciated...thanks!

Faiz


 Date: Wed, 1 Apr 1998 16:00:19 -0500 (EST)
From: Nordic Boy, jklaas@cs.albany.edu
Subject: SysV init for Slackware

I am wondering if someone out there knows of a package to change Slackware's BSD-ish inittab (and rc.d/rc.*) files to a SysV-type structure with separate rc0.d, rc1.d, etc. init directories. I am asking because I recently installed KDE, which I really like, and I was thinking of using the SysV init editor that comes with it; it would be nice to have something to start with rather than starting from scratch.

Thanks,

James Klaas


 Date: Thu, 02 Apr 1998 16:03:00 +0800
From: Kevin Ng, kng@HK.Super.NET
Subject: How to enable swapping

My machine, which is a Pentium Pro with 64MB memory, reports no swap space being used. procinfo always reports 0K of swap space.

I did a fdisk on /dev/hda and verified that a 64MB partition of type Linux swap (83) is actually there.

So why is the swap never being used?

Kevin


 Date: Fri, 3 Apr 1998 09:35:37 +0200 (CEST)
From: K. Nikolaj Berntsen, berntsen@bkm.dtu.dk
Subject: finite elements programs for Linux

At the department where I work they are planning to buy a PC lab, and they intend to put NT on the machines. I would benefit from their putting Linux on them, since I could then use them for simulations overnight.

I started talking to the people buying them, and my arguments stopped when they said that one reason for using NT was that the machines would be running finite element programs, and that the frontier for those programs is now on the Windows platform. I don't know s... about that, so I am looking for info: should I accept their argument, or do they just not know what can be gotten for Linux? Commercial Finite Element Method (FEM) programs are also of interest!

Happy Computing, Nikolaj


 Date: Mon, 06 Apr 1998 13:42:35 -0700
From: Peter D'Souza, dsouza@panix.com
Subject: Btrieve Port?

Our company runs two major apps using a Btrieve database. I was wondering if anybody has ported either Btrieve server or client to Linux. It is an extremely fast database (and highly underrated too) which would be excellent if ported to Linux. I'm not too sure if the developers of our Btrieve applications would move to Linux, but if I could test a Linux-based solution with sample datasets, perhaps they'd be more amenable to the idea of moving to a Linux platform (as an alternative, at least).

Peter D'Souza


 Date: Wed, 08 Apr 1998 11:12:53 +0200
From: Denny Åberg, Denny@ele.kth.se

Hi, I'm tired of starting my X session with 'startx -- -bpp 16' to get 16 bitplanes instead of the default 8. How do I get xdm to run with 16 bpp? If I use it now, it starts X with 8 bpp on my Red Hat 5 installation.

cheers,
Denny Åberg, Sweden


 Date: Sat, 11 Apr 1998 17:18:11 +0000
From: pheret, pheret@linex.com
Subject: floppy problems

Hi there. Okay, i don't know if this is a floppy problem, or what, but here goes.

I am able to mount my diskette, but when I try to copy something from the disk to my hard drive I get this error:

floppy0: disk absent or changed during operation
end_request:  I/O error, dev 02:00, sector 1
bread in fat_access failed
cp: <file name>:  I/O error
Is this because it is mounted umsdos? Should I mount it as something else?

I am running Linux 2.0.0 on an AST Ascentia 950N. I only have my basic system right now because I can't get my floppies to copy! Arrgh.

anyhow, if you can help me, could you please send suggestions to pheret@linex.com? Thanks!


 Date: Sun, 19 Apr 1998 17:53:47 +0200
From: letromb@tin.it
Subject: cd rom

Hello. I have Slackware (kernel 2.0.30) from Walnut Creek. I installed it on a Pentium 200 MMX with a 24x CD-ROM. During the installation I had to enter "ramdisk hdd=cdrom" for the CD-ROM to be read, but after the installation Linux doesn't see the CD-ROM. I have an ATAPI CD-ROM, and when I tried to recompile my kernel, I saw that ATAPI is the default! So I don't understand where the problem is. What can I do?

Thank you for your reply,
Leonardo


 Date: Sun, 19 Apr 1998 13:45:54 +0000
From: Jason Powell, jay@Lauren.dyn.ml.org
Subject: Red Hat Linux 5

Anyone know when Red Hat Linux 5.1 is coming out? I'm running a severely modified version of 5.0 now, and needless to say it stinks. I can't compile anything that uses sockets because of broken headers. Suffice it to say, I find it quite an annoyance.


 Date: Fri, 24 Apr 1998 16:02:09 +0200
From: Lambert van Eijck, eijck@iri.tudelft.nl

I'm having a problem with my menus in X. I can access all the menus (by mouse), but the items of those menus which are WITHIN an "X-box" are somehow not selectable. The menus I'm talking about are ones like the 'VT Fonts', 'Main Options' and 'VT Options' menus in xterm, or the 'File' and 'Page' menus of Ghostscript.

If anyone has a suggestion as to why I can select the menu but not a menu item, please send me a mail. I'm using Debian 1.3.

Lambert van Eijck


 Date: Sat, 18 Apr 1998 13:12:53 +0800
From: Guan Yang, guan@wk.dk
Subject: How do I set up XDM?

I have heard that one can log in to Linux via XDM. How is this done? Also, I have heard that you can get a Linux penguin at boot time or something like that. Tell!


 Date: Sun, 26 Apr 1998 14:42:28 +0200
From: Ola Ekdahl, ola.ekdahl@swipnet.se
Subject: Modem

I am a real Linux newbie, and I wonder how I configure my modem. It's a Sportster Flash modem.


 Date: Sat, 18 Apr 1998 17:01:59 -0700
From: tng, tng@sosweb.com
Subject: getting ppp-2.3.3 to work

Anyway, I finally decided to migrate to Linux kernel 2.1.94, mainly because the .94 indicates that they are almost ready for the next stable release...

The problem I have is with the ppp-2.3.3 I downloaded. I read the README, compiled the required parts, and it installed flawlessly... Now I CANNOT connect to my ISP. They are running a Linux network, with Red Hat 5 for web hosting and Slackware controlling the RAID and passwords. I'm running Slackware. (Red Hat would crash every couple of days, wiping out my hard disk... got tired of rebuilding my system... got real good at backups : ) )

With the ppp-2.2 I was using, I had to use the +ua <file> switch, where the file contained the username and password for PAP authentication. After upgrading, this switch was no longer available, so I simply added the credentials to my /etc/ppp/pap-secrets file:

 
username    *    password
this didn't work. So, I tried the following:
 
localhost    *    username:password
*                 *    username:password
My ISP hangs up on me. I changed the order of the fields every which way I could think of, but nothing worked. I would like to get my Linux box back on the net because of better transfer times and a more stable environment. (Linux connected at 33.6 and Windows connects at 24.# with the same serial settings, modem init, etc.)

Please help... I hate to downgrade after hours of work upgrading.


 Date: Tue, 28 Apr 1998 10:31:14 +0800
From: Stephen Lee, sljm@pobox.org.sg
Subject: Help Slackware

I am running Slackware 3.2, and I want my machine to have a name like stephen.merlin.com when people dial into it using PPP or SLIP. (My idea is to run some sort of an intranet BBS, with people dialing in using Dial-Up Networking and then telnetting in.) Apart from setting /etc/hostname, do I need to run "named"? Perhaps you could have an article on how to set up this type of service.


General Mail


 Date: Wed, 01 Apr 1998 09:45:11 -0600
From: Mike Hammel, mjhammel@graphics-muse.org
To: STunney@ahcpr.gov
Subject: grammer sites?

You recently wrote to the Linux Gazette to express your aggravation about the use of apostrophes and the word "alot" in many articles and letters. You are correct -- both of these are often misused in email, even more so in general email not destined for an online magazine. I often find myself trying to reword a sentence to avoid "alot", and am aggravated with myself for having used it so often I can't think of more proper wording! You also mentioned that there were online dictionaries available. My only problem with your letter was that you didn't mention where these could be found. If you have a few references, a follow-up letter to the Gazette would be greatly appreciated. I know I often have need of a dictionary and a thesaurus in my own writing. Although I have one of each, they are pocket editions and somewhat limited. I realize I could look for references via Yahoo or other online search engines, but I thought since you had mentioned their existence you might already have the references.

Thanks.
Michael J. Hammel, The Graphics Muse


 Date: Wed, 01 Apr 1998 11:55:05 +0100
From: John Hartnup, slim@ladle.demon.co.uk
Subject: Regular Expressions

Great April issue. Thanks.

The further reading section for the Regular Expressions in C++ section misses out the *excellent* O'Reilly book Mastering Regular Expressions.

I suspect that most people, like me before I read the book, don't realise the sheer power behind regexes. It's revolutionised my coding methods (especially in Perl!).

John


 Date: Thu, 2 Apr 1998 19:51:30 -0500 (EST)
From: Casimer P. Zakrzewski, zak@acadia.net
Subject: IBM 8514 Monitor and X

I hope you have the space to publish all of this letter. I would certainly appreciate it if you did. Back in the Feb 98 issue of LG, my request for help with installing X on the old IBM monitor I have was published, and I received a number of replies, from all over the world as you'll see. I wish to thank:

        Corey G. <chds652@BOTCC.COM>;
        Todd Jamison <jamison@littonos.com>;
        "War Hound" <warhound@worldnet.att.net>;
        Justin Dossey <dossey@ou.edu>;
        Martin Vermeer <mv@fgi.fi>;
        Alexy Yurchenko <ayurchen@bell.aispbu.spb.su>;
        Robert Reid <reid@astro.utoronto.ca>; and,
        Miss Valarie Frizzle
Many advised using 'xvidtune' to get the proper settings, and a couple advised me to get RH 5.0. I only got around to trying anything out about two weeks ago.

Now this may come in handy for anyone else with a monitor like mine. It was so simple it was foolish. First, I couldn't find 'xvidtune' after reinstalling RH4.2, so I figured I'd play around with the X configuration. If I blew the monitor, well.....

In the RH installation, when I got to the selection of monitors, I bit the bullet and selected 'custom'. A new menu came up, and guess what? In it was a listing for an 'IBM 8514 or compatible'. (As the younger people say today, I said "Duh?") I kind of figured my monitor was as compatible as it could get!

After I clicked on that and popped in what freqs I knew, X worked perfectly. Which is a nice end to the tale, but doesn't address the problem. The problem was that I was afraid to (as Ms. Frizzle says) 'Take chances; get messy.' I was too happy webbing along in the Win95 world. To newbies like me out there, all I can say is: do just that. I advise having a notebook and pen handy at all times, though, to write down anything you change and where you changed it.

Does RTFM sound familiar? Do that, too. A lot. Linux can be confusing, especially when you're trying to do something supposedly simple like installing PPP (I'm *still* working on that) and at different web sites you find three or four different ways to do that, and none seem to work in your case.

That's when you take chances and get messy. And you may well (as I've had to do) hit the big RESET button when it's a total SNAFU, and maybe have to reinstall. Breaks of the game. And that's where the notebook you've been writing all your changes in comes in very handy. If you try to keep it all in your head, the kumpewter will win every time.

In addition, there is a lot of help from off-line sources, like library book sales. Last year, for example, I picked up an 'outdated' SAMS book entitled "X Window System Programming". That was before I even thought about putting together another 'puter -- over eight years since I'd touched a keyboard. I may never use it, but it only cost $.50. Local gurus: if you're lucky enough to have them, be subtle in your approach. Like, 'Uh, gee, you can really get your (whatever it is) really whipping up a storm. Mine kinda...', and let it drag out. Ten years ago when I was a supposed 'guru', that *always* got me going. And I learned from a guy who had a really modern system back in the '80s, so I got one just like it.

When you say, "TRASH-80", you better smile, pardner! Mod-1, no less. 4K RAM. It could do just about anything.

Your ISP may or may not be a help, but try it. Where I am, when I walked in to sign up and the word Linux passed my lips, I thought they'd hang balls of garlic around their necks.

But if you want to do it, you will. I still don't have PPP on Linux, for example, so under Win95, if I find something tempting on the web, I still download it. It can always be put on a disk, if necessary -- say you don't have a DOS mount -- and then tarred over to your Linux partition.

But write it down; write it all down.

That's all I have to say, except to thank again all those who sent me help. That's what Linux is all about anyway, isn't it?

PS: I hope the above was correct enough to please the English purists. If not: mea culpa, mea culpa, mea maxima culpa.

Zak


 Date: Mon, 6 Apr 1998 14:30:29 -0600 (MDT)
From: Dale K. Hawkins, dhawkins@teton.Mines.EDU
Subject: Bazaar ISP...

Hello. I was wondering if anyone has ever considered the idea of a bazaar model for running an ISP. By a bazaar model, I of course refer to the infamous Cathedral vs. Bazaar model for software development. So what do I really mean? I mean an ISP by the people, for the people. I have found that most ISPs are very restrictive in how things are run; i.e., many of the interesting utilities are strictly off limits. For example, I was recently trying to set up CVS to work as a server. The normal way to do this is by adding a line to inetd.conf. However, being only a "user" on my ISP, I had no way to accomplish this. So I thought of a more complex way to set it up, but that method required the use of crontab. Again, this service is not available to Joe User.

I am very aware of the obvious security issues, but surely there must be a way to improve the situation in some way. I cannot help but think about rms (Richard Stallman) and some of his lectures on the evils of the sysadmin, and thinking, "how true". But how can one deal with the open-system issue while still maintaining a certain level of system security? I would be very pleased to see this erupt into a deep and lengthy thread somewhere. Just my 2 cents.

-Dale


 Date: Sun, 5 Apr 1998 21:12:00 +0100
From: William Chesters, williamc@dai.ed.ac.uk
Subject: Linux is not ready for the desktop

David Wagle ("Evangelism: A Unix Bigot and Linux Advocate's Spewings", Linux Gazette #27) points out some good reasons why converting people to Linux can be harder than we expect.

But he seems to shy away from the natural conclusion. It is not currently possible to put together a setup which makes it possible for people to do normal day-to-day work and simple admin without serious trouble---whether or not they care about abandoning their existing Windows software. Ergo, Linux is simply not, in all conscience, a suitable platform for unsupported users who just want to get their jobs done.

It very nearly is. I run the maximally friendly Linux installation with Red Hat, linuxconf, KDE, Netscape and WordPerfect; my experience is that intelligent non-Unix users can manage fine 90% of the time. The remaining problems are very obvious, but here they are anyway, spelt out in order of seriousness:

Yes, progress over the last year or two has been breathtaking. The developer community has shown itself capable of coming up with really lovely utilities and tools for non-initiates, and it no longer seems implausible that Linux will soon develop into something that rivals NT for ease of use. But in the mean time, proposing Linux to anyone not already conversant with Unix is tantamount to suggesting a new hobby: one with tangible rewards, to be sure, but let's admit that's what it is. Linux is not ready for the desktop.


Published in Linux Gazette Issue 28, May 1998





More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


Contents:


Re: Shutdown and Root

Date: Wed, 1 Apr 1998 06:31:04 -0500
From: Buz Cory, adm@bzsys.dyn.ml.org

From the Linux Gazette, #27

Guido Socher, eedgus@eed.ericsson.se wrote:
I noticed that many people still login as root before they power down their system in order to run the command 'shutdown -h now'. This is really not necessary and it may cause problems if everybody working on a machine knows the root password.
Very true.
Most Linux distributions are configured to reboot if ctrl-alt-delete is pressed, but this can be changed to run 'shutdown -h now'. Edit your /etc/inittab ...
 
[snip inittab]
Now you can just press ctrl-alt-delete as a normal user and your system comes down clean and halts.
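For reference, the snipped inittab change is typically a single line; this is a sketch of its common form (the "ca" label and the empty runlevels field are conventions that vary between distributions):

```
ca::ctrlaltdel:/sbin/shutdown -h now
```

After editing, "telinit q" tells init to re-read the file.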
Not necessarily the best solution.

It is perfectly safe to simply do a "three-finger salute", allow a normal shutdown, and then power down the machine any time after you get the message "unmounting filesystems" and before you get the message during reboot saying "mounting all filesystems". Probably the easiest time would be at the LILO boot prompt (assuming you are using LILO).

An alternative I used once on a system that did *not* have <ctrl-alt-del> enabled was to provide a special login that *just* did a shutdown. There is such a line in my /etc/passwd now that I didn't put there, so I guess it's from RedHat two years ago.

Regards, ==Buz :)


Re: Core Dumps

Date: Wed, 1 Apr 1998 14:31:24 -0500 (EST)
From: Claude Morin, klode@isgtec.com

Neat idea!

Christoph Spiel says: I'd like to paste some sample output here, but neither can I find a core dump on my machine, nor do I know a program that generates one.
How to generate a core dump in one easy lesson: run cat with no arguments, then press Ctrl-\ (which sends SIGQUIT). You've just generated a core dump by sending SIGQUIT to cat.

If this doesn't work, you probably have core dumps disabled. To check:
within bash: ulimit -a
within tcsh: limit

Lastly, you can kill -QUIT various running processes; if they don't handle the signal, they'll dump core. Remember kids: don't try this as root :-)

Claude


Easter Egg in Netscape

Date: Thu, 2 Apr 1998 11:25:56 +0800 (HKT)
From: Romel Flores, rom@elsi.i-manila.com.ph

Remember the "about:mozilla" egg? Try it again and the usual egg appears. Now, click on the "N" logo. This will open Netscape's home page as usual, but the meteor shower on the "N" logo is replaced with Godzilla.

--Romel Flores


Host Name Completion

Date: Fri, 3 Apr 1998 01:57:43 -0500 (EST)
From: John Taylor, john@pollux.cs.uga.edu

Host name completion with BASH.

Synopsis : This is how you can use host name completion, which is similar to file name completion.

Put your favorite telnet, ftp, and rlogin hosts into $HOME/.hosts, in /etc/hosts format.

example :

 
206.184.214.34   linux.kernel.org  
then put into .bashrc :

------ cut here ------

 
export HOSTFILE="$HOME/.hosts"

# see HOSTFILE in bash man page 
UseHosts()
{
  for i in $* ; do
    eval `echo "$i() { local IFS=\"@\\$IFS\"; set -- \\$1; eval command $i \\\\\\${\\$#} ; }"`
  done
}

UseHosts telnet rlogin ftp
------ cut here ------

Now do a ". .bashrc" to re-source the rc file. You should have 3 new shell functions defined: telnet, rlogin, and ftp. Do a "set | less" to verify this.

now try this [notice the @]:
ftp @lin<tab-key> which completes to linux.kernel.org

Well, this breaks doing just a plain "ftp", but that can be fixed by doing a "command ftp" (maybe alias this?), which will give you the ftp> prompt. rlogin will also break if you have to use the -l switch. This could be incorporated into UseHosts(); I just haven't had time to do it.

If you change the .hosts file, you have to log out and log in again to use the new hosts... don't ask me why.

John Taylor


Running Without Logging In

Date: Thu, 2 Apr 1998 22:50:26 -0800 (PST)
From: Jakob Kaivo, jkaivo@nodomainname.net

I noticed a lot of discussion in Issue 27 of running shells on VTs without logging in. I'm sure that there are some great solutions, but I would like to add my 1/50 of a dollar to the heap. A while ago I had a need to keep a telnet session open on a VT, so I hacked mingetty to do it. Then I figured, "Hey, why stop there?" So I hacked a little more and came up with rungetty, which can run any program on a VT. It can also (in the newest release) run as any user, so a login is no problem, but you can also tell it to, say, keep a top session running on another VT. It is available from ftp://ftp.nodomainname.net/pub/rungetty/current (home site) and ftp://sunsite.unc.edu/pub/Linux/system/serial/getty, and should find its way into ftp://ftp.redhat.com/pub/contrib soon. It is available in tarball, source RPM, and binary RPM for Alpha (glibc2) and i386 (libc5 and glibc2) on nodomainname, and tarball on sunsite.

Jakob Kaivo


Animation Easter Eggs in Netscape

Date: Mon, 06 Apr 1998 12:03:41 +0100 (IST)
From: Caolan McNamara, Caolan.McNamara@ul.ie

With the release of the Netscape source, the most important fact is now known: if your web page is not under
http://home.netscape.com/people/
http://www.netscape.com/people/
http://people.netscape.com/

then you can't have a Mozilla as the animation in the X version of Netscape, the way http://people.netscape.com/briano and 20 others do, and only Jamie Zawinski under that tree gets the compass: http://people.netscape.com/jwz

Sigh, and I really hoped that I could have one too. :-(

The source for this is lines 292-319 in ns/cmd/xfe/src/Logo.cpp. The list of names with possible animation easter eggs follows: akkana briano bstell converse djw dora dp francis kin jwz lwei mcafee radha ramiro rhess rodt slamm spence tao toshok zjanbay

The list of URLs under which the animation can take place:
http://home.netscape.com/people/
http://www.netscape.com/people/
http://people.netscape.com/

The usual format is
http://people.netscape.com/username

Caolan McNamara


Re: Usershell on Console Without Logging In

Date: Wed, 08 Apr 1998 20:21:42 +0200
From: Soenke J. Peters, soenke@pc1.sjp.de

In LG 27, Kragen@pobox.com announced some utilities to do an automatic login. Besides the fact that this might be a security risk, I use his program "own-tty" to have my dosemu running on a tty. Add the following line (or something adequate) to "/etc/inittab":

 
  6:23:respawn:/sbin/own-tty /dev/tty6 /usr/bin/dos dos
From inside X, CTRL-ALT-F6 beams you into dosemu; from the console, ALT-F6 does the same. Press CTRL-ALT-Fx from inside dosemu to go back to ttyx. But be warned: doing this causes a pretty high CPU load, because dosemu is _always_ running. To solve this problem, I inserted a "getchar();" into the source "own-tty.c" right before the "execv()" is done. This makes "own-tty" wait for a key being pressed before firing up dosemu.

Soenke J. Peters, Hamburg, Germany


Backing Up Win95 Files

Date: Fri, 10 Apr 1998 07:51:38 -0400
From: Donald Harter Jr., harter@mufn.org

Here is a shell script that will back up some of the Windows 95 registry files on your VFAT partition. You may not want to back up all the files in the script, since the *.da0 files are backups themselves. There may be others that I do not know about. You can use cron to run this script on a regular basis.

Donald Harter Jr.

 
#!/bin/sh
#
# This script will backup your windows 95 registry.
# If you ever have problems with windows95, restoring the registry
# might fix the problem.
# By using this script you might not have to reinstall all your software.
# BASE_DIR is the directory where you want the tar.gz archive to be written.
# WIN_PATH is the base path of your windows 95 partition in the /etc/fstab file.
# Change these to suit your own needs.
# written by Donald Harter Jr.
#
BASE_DIR=$HOME
WIN_PATH=/dosc
#
#
REGISTRY_STEM=registry_`date +%m_%d_%Y`
tar -c  -f /tmp/$REGISTRY_STEM.tar --files-from=/dev/null
# some of these files may not be needed
#tar -rPf /tmp/$REGISTRY_STEM.tar  file_to_backup
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/windows/system.dat
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/windows/*.da0
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/windows/user.dat
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/windows/*.ini
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/autoexec.bat
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/*.sys
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/windows/command.com
tar -rPf /tmp/$REGISTRY_STEM.tar $WIN_PATH/Program\
Files/Netscape/Users/harter/bookmark.htm
gzip /tmp/$REGISTRY_STEM.tar
mv /tmp/$REGISTRY_STEM.tar.gz $BASE_DIR/$REGISTRY_STEM.tar.gz
echo "To restore your win95 registry type:"
echo  "tar -zPxvf $BASE_DIR/$REGISTRY_STEM.tar.gz  "


Re: X-term for MS-Windows

Date: Fri, 10 Apr 1998 23:47:44 +0000
From: Milton L. Hankins, mlh@swl.msd.ray.com

What it sounds like you want is an X *server*.

You have several options. There are a few commercial X servers out there: Hummingbird eXceed and LAN Workplace are two I know of. There's also a free X server (with far fewer features) called MI/X. You should be able to find these on the web.

You may also opt to use something like VNC, the virtual network computer. You can also find that on the web.

Milton L. Hankins


Re: Shutdown and Root Again

Date: Wed, 15 Apr 1998 19:16:23 -0600
From: Bob van der Poel, bvdpoel@kootenay.com

In last month's 2 cent tips:

------------

In the March issue, you have a tip on using X programs when you've run su to root. By far the easiest method is to simply
 
setenv XAUTHORITY ~khera/.Xauthority
for your own user name, of course... No need to run any other programs or cut and paste anything.

Vivek Khera, Ph.D.

----------

Just adding the needed commands took me more than a few minutes. Part of the problem is that I'm using bash, not csh as Dr. Khera is. My solution was:

  1. Add the following to the .bashrc script for root:
     
    	eval OLDHOME=~$USER
    	RCFILE=$OLDHOME/.rootrc
    	if [ -e $RCFILE ] 
            	then source $RCFILE
    	fi
    
  2. Create a file in each user's home directory called .rootrc. In this have the following line:
     
    	export XAUTHORITY=$OLDHOME/.Xauthority
    
Hope this helps someone.

Bob van der Poel, bvdpoel@kootenay.com


Running an ATAPI Zip Drive

Date: Sun, 19 Apr 1998 01:41:34 +0000
From: Steve Beach, asb4@psu.edu

I just bought an IDE ATAPI iomega Zip drive, and I couldn't find any help at all on how to use it. So, I slogged through, got a great hint from Jeff Tranter (maintainer of the 'eject' utility), and managed to get it working. In the spirit of giving back to the community, here's my (maybe even) five cent tip.

Here's how to use an IDE ATAPI zip drive on Linux.

First, the kernel: do _not_ use the "IDE FLOPPY" option (officially CONFIG_BLK_DEV_IDEFLOPPY). It will work perfectly for reading and writing, but it will not work for ejecting. What you need to do is say yes to the option CONFIG_BLK_DEV_IDESCSI. When this is set, you treat the IDE ATAPI drive just like a SCSI drive, except without the SCSI card and all that other garbage.
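In .config terms, the choice above amounts to two lines (a sketch against a 2.0/2.1-era kernel configuration; option names as given in the text):

```
# IDE floppy driver left out so SCSI emulation can claim the drive
# CONFIG_BLK_DEV_IDEFLOPPY is not set
CONFIG_BLK_DEV_IDESCSI=y
```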

After building your kernel, you should see these lines among your startup messages (type dmesg at the prompt if they go by too fast to read):

 
hda: WDC AC34000L, 3815MB w/256kB Cache, CHS=969/128/63
hdb: WDC AC34000L, 3815MB w/256kB Cache, CHS=969/128/63
hdc: TOSHIBA CD-ROM XM-6102B, ATAPI CDROM drive
hdd: IOMEGA ZIP 100 ATAPI, ATAPI FLOPPY drive - enabling SCSI emulation
ide2: ports already in use, skipping probe
ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
ide1 at 0x170-0x177,0x376 on irq 15
Floppy drive(s): fd0 is 1.44M
FDC 0 is a post-1991 82077
scsi0 : SCSI host adapter emulation for IDE ATAPI devices
scsi : 1 host.
  Vendor: IOMEGA    Model: ZIP 100           Rev: 24.D
  Type:   Direct-Access                      ANSI SCSI revision: 00
Detected scsi removable disk sda at scsi0, channel 0, id 0, lun 0
scsi : detected 1 SCSI disk total.
SCSI device sda: hdwr sector= 512 bytes. Sectors= 196608 [96 MB] [0.1 GB]
sda: Write Protect is off
.
.
.
Partition check:
 sda: sda4
 hda: hda1 hda2 hda3 hda4
 hdb: hdb1 hdb2 hdb3
The key is that SCSI emulation will be used only if a native ATAPI driver for that device isn't found. So, since the ATAPI CD driver was compiled into the kernel, it was used. Since the ATAPI removable-disk driver wasn't, SCSI emulation was used.

Second, the device: If you want to have non-root users be able to mount, unmount, and eject the Zip disks, you've got to make a couple of changes to the default configuration. First thing to do is to change the permissions on the device. As root, type the following:

 
chmod a+rw /dev/sda4
The next thing to do is to create a shortcut (it makes ejecting easier). Again, as root, type the following:
 
ln -s /dev/sda4 /dev/zip
Third, the mount point: Create a mount point for your drive. I like /mnt/zip, so I just do a mkdir /mnt/zip. For ease, you now want to put this into your /etc/fstab. Put a line in that file that looks like
 
/dev/sda4                 /mnt/zip                  auto   user,noauto 0 0
The first column is the device, followed by the mount point. The 'auto' in the third column means the file system type will be detected when the disk is mounted. (Hence, you can read not only ext2fs, but also FAT, VFAT, etc.) The 'user' keyword allows average users to mount the disk, and the 'noauto' means that it will not be mounted at startup. I don't know what the two zeros mean, but it works for me.
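For the record, those two zeros are standard fstab fields: the fifth is the dump(8) backup frequency and the sixth is the fsck(8) pass number at boot; zero disables both. Annotated, the line reads:

```
# device     mount point   type   options       dump  fsck-pass
/dev/sda4    /mnt/zip      auto   user,noauto   0     0
```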

Now, at this point, any user should be able to mount the Zip disk by typing

 
mount /mnt/zip
Unmounting would just be umount /mnt/zip.

Fourth, formatting the disks: The Zip disks you buy at your corner computer store are formatted for MS-DOS. Personally, I prefer ext2fs-formatted disks, so I don't have to worry about file name conflicts. Hence, I have to reformat them. There are two other oddities. First, the writable partition will be number 4. This is a Macintosh-ism, which you might as well leave. You can run fdisk and change the partitioning, but it will be much easier to just leave all your disks the same; that way you won't have to change the line in /etc/fstab for each disk. Second, the initial permissions are not set to be writable by the user.

To handle all this, I do the following, as root (new disk, initially unmounted): (WARNING: This will erase all data on the disk!)

 
/sbin/mke2fs -m 0 /dev/sda4
mount /mnt/zip
chmod a+w /mnt/zip
umount /mnt/zip
Now, whenever the user mounts that disk, she will be able to write to it.

Fifth, ejecting: The entire reason for using SCSI emulation is to make it easy to eject the disk. It's easy now:

 
eject zip
You can also say 'eject /dev/sda4', but since you created the symbolic link '/dev/zip', eject knows what you mean when you just say 'zip'.

One thing about eject is that the average user does not have permission to use it. So, set the setuid bit on it:

 
chmod a+s /usr/bin/eject
That should allow any user to eject any disk.

Sixth, zip tools: Jarrod A. Smith (jsmith@scripps.edu) has written a really nifty little program to make mounting, unmounting, ejecting, documenting, and write protecting Zip disks really easy. The name is jaZip, and it is available as an RPM package (jaZip-0.22-3.i386.rpm) from the usual download sites, including ftp://ftp.redhat.com. Go ahead and download it -- it's only 24 K!

I hope that covers everything -- if anybody has any questions, please let me know!

Steve Beach


New Binaries Script

Date: Sat, 25 Apr 1998 01:06:03 -0700
From: Keith Humphreys, keith@SpeakerKits.com

A friend installed Linux and was mystified by the abundance of new binaries. This little script was written to help introduce him to the family members. May need bash >= 2.

 
#!/bin/bash
###########################################################################
#
# mkcontents.b (c) 1998 Keith Humphreys (keith@SpeakerKits.com)  GNUed
#
# 1998.04.22
#
# This little script will create a list of descriptions for your main bins.
# It depends on whatis, which consults the binaries' man pages.
# Intended as a learning aid for newbies and as a memory crutch (for oldbies.)
#
###########################################################################

# These are the directories to scan:

checkhere='/sbin /bin /usr/sbin /usr/bin'

###########################################################################

if ! [ -f /usr/bin/whatis ]
then
  echo '
    You appear to be missing the /usr/bin/whatis program.

    Sorry charlie,
    only the finest tuna get to be Chicken of the Sea.
    '
  exit 1
fi

for dir in $checkhere
do
  outFile=contents${dir//\//.}

  echo '------------------------------------------------------'
  if [ -f $outFile ]
  then
    rm $outFile
    echo "Removing old $outFile"
  fi
  echo "Scanning $dir and creating $outFile"
  echo '------------------------------------------------------'

  sleep 1   #To see message.

  for file in $(ls $dir)
  do
    echo $file  #For entertainment
    whatis $file >> $outFile
  done
done

exit 0

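The bash >= 2 requirement comes from the `${dir//\//.}` pattern substitution used to build each output file name. A quick sketch of what it does:

```shell
#!/bin/bash
# ${var//pattern/replacement} (bash >= 2) replaces every match,
# so each "/" in the directory name becomes a "." in the file name.
dir=/usr/sbin
outFile=contents${dir//\//.}
echo "$outFile"    # prints: contents.usr.sbin
```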

Script Contributions

Date: Sat, 18 Apr 1998 20:52:07 +0200 (SAT)
From: Stefan van der Walt, trax@the-force.ml.org

In the last few months, I wrote these simple scripts to enhance my Linux environment. I believe some other users might find them useful too, so I'm sending you a copy.

Here are the 4 scripts provided in tar files with a README.

Thanx a mil!
BTW Keep up the great work with the Gazette. You rule :)

Stefan


Re: Core Dumps Again

Date: Sun, 26 Apr 1998 21:21:50 -0700 (PDT)
From: macker, macker@netmagic.net

In issue #26, Marty was saying "I was annoyed on Linux that file(1) couldn't tell what file dumped core if a core dump was seen.", and mentioned size(1). gdb(1) will also do the job...

gdb -c core will show the program and its calling arguments, as well as the signal generated when it died, usually signal 11 (segmentation fault). Typing quit will exit the debugger.

-macker


Published in Linux Gazette Issue 28, May 1998




This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:


News in General


 June Linux Journal

The June issue of Linux Journal will be hitting the newsstands May 8. The focus of this issue is Connectivity with articles on setting up PLIP, NFS and NIS, using Linux with the PalmPilot, a user-friendly GUI for PPP and much more. Check out the Table of Contents. To subscribe to Linux Journal, click here.


 Linux in the News


 Scientific Software Packaging Feedback

Date: Fri, 3 Apr 1998 07:19:12 -0700 (MST)

Purpose?

We at Kachina Technologies, Inc. are very excited about the tremendously increasing popularity of our SAL (Scientific Applications on Linux) web site. Based on encouraging user feedback, we want to go the extra mile to provide more services to the Linux and the scientific and engineering communities.

What is SAL?

SAL (Scientific Applications on Linux) web sites collect thousands of software links for scientists, engineers, and Linux enthusiasts. There is no doubt that SAL has become one of the most important, popular, and exciting software resources for the Linux and UNIX community. SAL was developed by Dr. Herng-Jeng Jou, along with many others at Kachina Technologies, Inc.

While most system/network level applications are packaged by Linux distributors and provided on their CD-ROM distributions, most commonly used free scientific packages are still provided in source code format. Although many users have contributed packages, which are often found in /contrib directories on Linux distributors' FTP sites, the lack of version tracking and centralized control hinders the appreciation of these individual efforts. We are fascinated by the teamwork and coordination of the GNU/Debian Linux people and want to pursue the same goal in software packaging for the scientific and engineering community.

Our initial goal is to archive packages in Debian (.deb), Red Hat (.rpm), and simple binary tarball (.tgz) formats, the last used by many distributions including the popular Slackware distribution. At the same time, SAL's web site will function as a repository, and users can download the packages and install them on their systems immediately.

However, we'd like to listen to your opinions on the best policies and procedures to get this job done correctly.

Please tell us what scientific and engineering software (those with freely available source code, of course) you would like to see packaged.

Please email us at sal@kachinatech.com and simply include:

  1. Software name(s)
  2. Preferred packaging format (RPM, DEB, or others)
Although this is just a survey, we are quite serious and excited about doing this. The established packages will be available through SAL.

Currently, there are 14 SAL sites installed worldwide, and their URLs are:

     Austria             http://nswt.tuwien.ac.at/scicomp/sal/
     Finland             http://sal.jyu.fi/
     Germany             http://ftp.llp.fu-berlin.de/lsoft/ 
     Italy               http://chpc06.ch.unito.it/linux/
     Japan               http://ec.tmit.ac.jp/koyama/linux/SAL/
     Poland              http://www.tuniv.szczecin.pl/linux/doc/other/SAL
     Portugal            http://www.idite-minho.pt/SAL/
     Russia              http://www.sai.msu.su/sal/
     South Africa        http://web.ee.up.ac.za/sal/
     South Korea         http://infosite.kordic.re.kr/sal
     Spain               http://ceu.fi.udc.es/SAL/
     Turkey              http://sal.raksnet.com.tr
     United Kingdom      http://www.ch.qub.ac.uk/SAL/
     USA                 http://SAL.KachinaTech.COM
We welcome your feedback, comments, and suggestions. Please send your messages (including mirroring requests) to sal@kachinatech.com

For more information:
Herng-Jeng Jou, hjjou@KachinaTech.COM
Team SAL, Kachina Technologies, Inc.


 MAILING-LIST: Linux Speech Recognition

Date: Fri, 3 Apr 1998 07:19:12 -0700 (MST)
All interested in speech recognition software under Linux are invited to join this new mailing list. The emphasis is on (though discussion is not limited to) finding a means of porting preexisting applications (especially DragonDictate-style ones, or possibly NaturallySpeaking-style) to Linux, rather than developing one from scratch.

To subscribe, remove the spaces from the following address and send mail to:

ddlinux-request @ arborius.net

For more information:
Shore.Net/Eco Software, Inc; info@shore.net


 SAS for Linux

Date: Fri, 3 Apr 1998 07:19:12 -0700 (MST)
SAS is a powerful and popular reporting, analysis, and application development system. The SAS for Linux site is dedicated to the support of SAS on the Linux operating system. Tune in there for information, news, to voice your opinion, and contribute to the cause. Of particular interest is the SAS User Linux Interest Profile, a survey to measure the amount of interest (or not) for SAS on Linux. Stop on by!

For more information:
Karsten M. Self, kmself@ix.netcom.com


 New Pages for Linux Users

Date: Fri, 3 Apr 1998 07:19:12 -0700 (MST)
Here's a page that could prove worthwhile for newbies: "Basic Linux Training"

Here's another new site dedicated to Linux. Find links to RPMs, X window managers, HOWTOs and more. Check it out at:
http://vdpower.gamesmania.com/demoreviews/linux/linux.html


 The KDE Free Qt Foundation

Date: Wed, 15 Apr 1998 14:50:59 GMT
The KDE project and Troll Tech AS, the creators of Qt, are pleased to announce the founding of the "KDE Free Qt Foundation". The purpose of this foundation is to guarantee the availability of Qt for free software development now and in the future.

The foundation will control the rights to the Qt Free Edition and ensure that current and future releases of Qt will be available for free software development at all times. Should Troll Tech ever discontinue the Qt Free Edition, the foundation has the right to release it under the BSD license.

We believe the founding of the KDE Free Qt Foundation to be a ground-breaking step, helping to usher in a new era of software development, allowing the KDE project, the free software community, and free software developers as well as commercial software developers to prosper in a mutually supportive fashion.

For more information:
Bernd Johannes Wuebben, The KDE Project, wuebben@kde.org
Eirik Eng, Troll Tech CEO, Eirik.Eng@troll.no


 Linux Resources -- we need your help

Date: Thu, 16 Apr 1998 19:06:04 GMT
Linux Resources is a community effort brought together by Specialized Systems Consultants (SSC), publishers of Linux Journal, to promote the Linux operating system.

We strive to produce the most informative one-stop Linux resource. We do that with the help of many enthusiastic individuals and companies who produce selected content for Linux Resources. If you or your company would like to contribute content or maintain a page within Linux Resources please e-mail us at webmaster@ssc.com.

After browsing through http://www.linuxresources.com/ please e-mail us and let us know if it addresses your needs; if not, tell us what we can add or do differently. For example, perhaps there are other existing sites that you feel we should incorporate into Linux Resources -- e-mail us!

Last, Linux Resources is advertising-free. Let us know if this is an important factor for you. Again, we want Linux Resources to be *your* Linux resource. Please let us know how we can be of assistance.

For more information:
Carlie Fairchild, mktg@ssc.com, Linux Journal Sales and Marketing


 Open Source Journal, the Magazine for Free Software

Date: Mon, 20 Apr 1998 13:19:03 GMT
The Free Software Union is proud to announce the release of the premiere issue of the 'Open Source Journal, the Magazine for Free Software'.

The Journal is volunteer written and produced, and available free from the Web at:

http://osj.fslu.org/

The Free Software Union ("Free Software Lovers Unite!" = FSLU) is a democratic, non-profit group dedicated to the Free Software/Open-Source community.

For more information:
Braddock Gaskill, braddock@braddock.com


 Linux Conference Announcement

Date: Tue, 28 Apr 1998 09:24:54 +0200
The first Linux conference in Denmark will be held by Denmark's Unix User Group and the Skåne/Sjælland Linux User Group on May 16, 1998. Among the main speakers are Jesper Pedersen, the creator of the Dotfile Generator, and Image Scandinavia, a large ISP in Denmark that uses Linux as its main platform. Moreover, experienced Linux users will help novice users by installing Linux on their computers.

The conference homepage is at http://sslug.imm.dtu.dk/konference.html, where you can find the programme and more information. Official languages are Swedish and Danish.

For more information:
Kenneth Geisshirt, kge@kb.dk, The Royal Library
Linux conference in Copenhagen on May 16, 1998


Software Announcements


 JCam - Digital Camera Software for Linux (with Java)!

Date: Wed, 15 Apr 1998 14:52:50 GMT
JCam - a single software program for (almost) all OSes and (almost) all Digital Cameras ...

The first public release of JCam is now available from "www.jcam.com", featuring support for Digital Still Cameras from Epson, Casio, Kodak and, coming soon, Fuji, Samsung and Olympus.

JCam is currently available for Win95, WinNT and Linux 2.0 ... future versions will offer - subject to demand - support for Mac, OS/2, PPC and a range of other Unices. JCam requires Java 1.1 to be installed on the host machine; later versions will be available bundled with the JRE, simplifying installation for non-Java users.

For more information:
info@jcam.com, http://www.jcam.com/


 GTK+ 1.0.0 GUI Library Released!

Date: Thu, 16 Apr 1998 19:03:14 GMT
The GTK+ Team is proud to announce the release of GTK+ 1.0.0. GTK+, which stands for the GIMP Toolkit, is a library for creating graphical user interfaces for the X Window System. It is designed to be small, efficient, and flexible. GTK+ is written in C with a very object-oriented approach.

The official ftp site is: ftp://ftp.gtk.org/pub/gtk

The official web site is: http://www.gtk.org/

A mailing list is located at: gtk-list@redhat.com

To subscribe: mail -s subscribe gtk-list-request@redhat.com < /dev/null (Send mail to gtk-list-request@redhat.com with the subject "subscribe")

GTK+ was written by Peter Mattis, Spencer Kimball, and Josh MacDonald. Many enhancements and bug fixes have been made by the GTK+ Team. See the AUTHORS file in the distribution for more information.

For more information:
The GTK+ Team, gtk-list@redhat.com


 Poppy 1.3 - Simple POP3 mail program to view/delete/save messages

Date: Thu, 16 Apr 1998 18:51:59 GMT
This is the announcement for Version 1.3 of Poppy. It is a simple Perl script that allows you to view only the headers of messages from a POP3 server and then allows you to selectively view, save, or delete each message. It is ideal for limited resource systems.

A good use is to delete or skip over huge emails on a POP3 server when on a slow link. You may also view only a specified number of lines from a message to see if you would like to download the whole message.

To download:
http://home.sprynet.com/sprynet/cbagwell/projects.html

For more information:
Chris Bagwell, Fujitsu Network Communications, cbagwell@fujitsu-fnc.com


 juju/juen - uu/xx/Base64/BinHex-Decoder/Encoder

Date: Thu, 16 Apr 1998 18:48:53 GMT
The first release of juju has been announced. It is yet another uu- and Base64-decoder, which also decodes xxencoded and BinHexed data.

It should work with MIME data as well.

Also included is juen, a similarly powerful encoder, which supports uuencoding, Base64-encoding, xxencoding and MIME. It supports automated mailing and posting if sendmail and inews are present.

The current version is 0.2.0, which is the first public release. The program is available as source code only, but should compile on any Unix platform, at least Linux ;-). It is GPL'ed.

The juju home page is: http://hottemax.uni-muenster.de/~grover/juju.html

For more information:
Christoph Gröver, grover@hottemax.uni-muenster.de


 mon-0.37i - Service Monitoring Daemon

Date: Thu, 16 Apr 1998 18:45:18 GMT
mon-0.37i is an extensible service monitoring daemon which can be used to monitor network or non-network resources. Written in Perl 5, this code should be able to run out-of-the-box on many platforms. It supports a flexible configuration file, and can send out email, alphanumeric pages, or any other type of alert when it detects the failure of a service. Service monitors that come with the distribution can test ping, telnet, ftp, smtp, http, nntp, pop3, imap, disk space, SNMP queries, and arbitrary TCP services.

http://ftp.kernel.org/software/mon/ ftp://ftp.kernel.org/pub/software/admin/mon/mon-0.37i.tar.gz

For more information:
Jim Trocki, trockij@transmeta.com


 newsfetch-1.2 - pull news via NNTP to a mailbox

Date: Thu, 16 Apr 1998 18:54:53 GMT
newsfetch is a compact and powerful utility that downloads news from an NNTP server, filters it, and stores it in mailbox format.

Available from http://ulf.wep.net/newsfetch.html

The new version of newsfetch (1.2) has been uploaded to sunsite.unc.edu:

newsfetch-1.2.tar.gz newsfetch-1.2-1.i386.rpm newsfetch-1.2-1.src.rpm

available in ftp://sunsite.unc.edu/pub/Linux/Incoming/ and in the proper place (/pub/Linux/system/news/readers) once the files are moved. The new version is available in both .tar.gz and .rpm formats.

For more information:
ymotiwala@hss.hns.com


 Mesa/Vista Project Collaboration Intranet for Linux

Warwick, RI -- April 14, 1998 -- Mesa Systems Guild, Inc. announced today the immediate availability of Mesa/Vista for the Linux operating system. Mesa's flagship product line, Mesa/Vista provides web-enabled project management automation for development teams who need to collaborate with access to all data related to their project.

Mesa/Vista provides a way to tie all of the project management and product development tools already in use together on the web so that the information can be accessed using a web browser on any platform, from any location. This enables project managers to make better, faster decisions based on the most up-to-date information and increases the productivity of development engineers by providing immediate access to information they need to complete their tasks.

For more information:
http://www.mesasys.com/
Maribeth McNair, Mesa Systems Guild, Inc. mbm@mesasys.com


 Blender 3d beta release

Date: Thu, 16 Apr 1998 19:20:29 GMT
NeoGeo is happy to announce the Beta release of a Linux and FreeBSD version of Blender. We expect the first Beta users to help us complete testing and evaluating, especially for the various PC configurations. An official version will be released 4 to 6 weeks later.

Blender is the freeware 3D package - up until now only available for SGI - that has become very popular with students, artists and at universities. Being the in-house software of a high quality animation studio, it has proven to be an extremely fast and versatile design instrument. Use Blender to create TV commercials, to make technical visualizations, business graphics, to do some morphing, or design user interfaces. You can easily build and manage complex environments. The renderer is reliable and extremely fast. All basic animation principles are well implemented.

For more information:
http://www.neogeo.nl/, blender@neogeo.nl


 consd 1.0: virtual console management daemon

Date: Fri, 17 Apr 1998 07:48:06 GMT

consd manages virtual consoles silently in the background. It starts and kills gettys there depending on how many gettys are just sitting around and waiting (and wasting resources). Usually, consd ensures there's always one (and only one) getty waiting for someone to log in. The virtual consoles with lower numbers are preferred.

consd does not interfere with gettys started by init.

As always - if you can't find it on the ftp servers listed below, try the 'incoming' directories. http://sunsite.unc.edu/pub/Linux/utils/ in the file called consd-1.0.tgz (12KB).

For more information:
Frank Gockel, gockel@etecs4.uni-duisburg.de


 dancing_linux - a rendered 3D-Linux-animation / eyecatcher

Date: Fri, 17 Apr 1998 08:06:23 GMT
The animation shows a nice Linux logo, consisting of the five letters and additional artwork. Everything is in motion and twisting around (glass, metal, and light effects!). It has a black background and is VERY suitable as an eyecatcher for shop windows or your own Linux box.

=================================================
Format:         *.flc movie
Resolution:     320*200, 8bit color
                120 frames (about 20 fps)
Renderplatform: Linux-Povray 3.01  (of course)
Rendertime:     15 min/frame on a P133
=================================================
The movie may also be viewed with xanim, but it looks _much_ better fullscreen! I included John Remyn's SVGA-Player "flip" (binary).

The animation is available at:
http://sunsite.unc.edu/pub/Linux/logos/raytraced/dancing_linux.lsm http://sunsite.unc.edu/pub/Linux/logos/raytraced/dancing_linux.tar.gz (1.07 MB) and mirrors...

For more information:
Roland Berger robe@cip.e-technik.uni-erlangen.de


 Port of InterBase Database to Linux

Date: Wed, 15 Apr 1998 15:03:18 GMT
InterBase Software Corp has ported InterBase 4.0 to the Linux platform. We plan to allow this database software to be downloaded for free use as of April 29, 1998.

The primary download site will be http://www.interbase.com/

In the July timeframe, we expect to release a commercial version of InterBase 5 for Linux.

There is a monitored news group borland.public.interbase.com available for the users of InterBase.

For more information:
Wyliam Holder, wholder@interbase.com


 SECURITY: procps 1.2.7 fixes security hole

Date: Mon, 20 Apr 1998 13:18:42 GMT
A file creation and corruption bug in XConsole included in procps-X11 versions 1.2.6 and earlier has been found. To fix it, you can either remove the XConsole program or upgrade to procps-1.2.7, available from ftp://tsx-11.mit.edu/pub/linux/sources/usr.bin/procps-1.2.7.tar.gz

Thanks to Alan Iwi for finding the bug.

A few other bugs have been fixed in this version. Read the NEWS file for details.

If you have Red Hat Linux or another RPM-based distribution, libc5-based RPM packages are available from ftp://ftp.redhat.com/updates/4.2/ and glibc-based RPM packages are available from ftp://ftp.redhat.com/updates/5.0/

Fuller upgrade instructions for Red Hat Linux users have been given in a separate post to redhat-announce-list@redhat.com

For more information:
Michael K. Johnson, johnsonm@redhat.com


Published in Linux Gazette Issue 28, May 1998




This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


The Answer Guy


By James T. Dennis, answerguy@ssc.com
Starshine Technical Services, http://www.starshine.org/


Contents:

(!)Greetings from Jim Dennis

(?) Problems with SCSI-CDROM and Audio CDs --or--
Sinister 'xmcd' Permanently Disables Right Speaker Channel
(?)Email Alpha-Paging software
(?)xdm in 16bpp Mode
(?)Bad cluster in HDD
(?)Complex network and NetBIOS
(?) Let's vote for Linus --or--
Some Thoughts on "The Man of the Century"
(?) How do I setup gateway server? --or--
Linux as a General Purpose SOHO to Internet Gateway
(?) Linux.bat --or--
LOADLIN.EXE, Plug & "Pray" and "Win(Lose)Modems"
(?)'sendmail' Masquerading: What and Why
(?)Tools for converting X output to java
(?) Fwd: Please Be Careful --or--
"Good Times" Are Here Again? NOT!
(?) LinuxGazette Mar 1998: xdm Login doesn't!


Linux Gazette: The Answer Guy for May, 1998

Well, plenty has happened in the world of Linux this month:

My wife has decided to take responsibility for marking up the mail that I do as "The Answer Guy" (I didn't pick the name, honest!). Traditionally I've just answered the e-mail that was forwarded to me by the Linux Gazette editors and copied them on it. Marjorie and her husband have normally done the rest.

This has been O.K. since we've focused on content rather than form. However, I've wanted to improve it a little bit ever since I first found out that the answers I was giving were being included in LG. (Mainly I want the URL references I make to other various web sites to be rendered as links so you don't have to cut and paste those into your "Go" or "Open Location" prompt).

Obviously I've procrastinated on that for over a year. Yes, I fiddled with Hypermail and MHonArc. Finally, Heather took matters into her own hands and modified a copy of 'babymail' (a Perl script) to do most of the work. Unfortunately it appears that this still requires quite a bit of hand tweaking. Oh well.

So, I hope everyone likes the new look. [Me too! -- Heather] To any of you that have written to me and been ignored or never received your responses I'd like to apologize. Sometimes I procrastinate on more than just the cosmetics and I certainly hope you eventually got your answers from other venues like comp.os.linux.* or the L.U.S.T. (Linux Users Support Team) mailing list, or even (horrors!) from one of the LDP (Linux Documentation Project) mirrors.

Another budding new source of support info for Linux users will hopefully be the "self-service" Linux Search Engine, which may eventually become a complete replacement for Yahoo! (the source for most of the answers I've ever given here).

Well, enough of my rambling and onto my usual collection of questions and answers. As usual I've also included a couple of items which are my responses to posts in newsgroups or mailing lists --- items that I personally think are important enough to be restated here.


Jim Dennis


(?)Sinister 'xmcd' Permanently Disables Right Speaker Channel

From Birger Koblitz on Fri, 24 Apr 1998

Hi,
I've got a strange problem with my Toshiba 12X SCSI CD-ROM and xmcd. Since I started to use this program, music from audio CDs is only played through the left speaker; the right speaker is dead. The strange thing is, all this worked well on Windoze before. Now even the Windoze player uses only the left channel. This doesn't seem to be a hardware problem, although there is now only one channel available from the headphone connector on the front of the device too, since I tried the program at a friend's with a Sanyo SCSI CD-ROM, resulting in the same problem (but both channels available from the front plug there). My friend is now quite angry since everything worked fine under Windoze for him before... It seems that xmcd turns off one of the channels of the CD-ROM. Sadly, using the balance control within xmcd doesn't turn it on any more. Is there a way to get things working again?

Yours, Birger.

(!)That's very odd. I've never heard of any CD drives or sound cards with NVRAM in them. I presume you've powered off the affected systems, let them sit for a minute or two, and tried again (under the formerly "known working" configuration).

I suppose it would give offense to suggest that you actually check the wires that lead to that speaker?

Traditionally I've been a curmudgeon about "toys" like CD players and sound cards (never used them under DOS, Windows or Linux). My traditional opinion has been that CD-ROM's are for data --- and that there are perfectly good, inexpensive, devices for playing audio CD's --- devices that require no special drivers and have no opportunity to conflict with your other equipment and software. (You don't want to know how I feel about those loathsome bandwidth robbers with their "Internet Telephone" and "Cyber Video Phone" toys either. That's our bandwidth they're hogging).

However, yesterday (by coincidence) I bit the bullet and spent a little time compiling a new kernel with sound support. Then I went into the CMOS and re-enabled the sound support that's on the motherboard of that machine I bought from VAResearch.

So, I slipped in a copy of Aaron Copland's Greatest Hits, logged in to my virtual console (I still prefer text consoles for most of my work, especially for e-mail), fired up xmcd (X Motif CD Player) and let it loose.

Strains of "Celebration" are streaming out of both speakers as I type this. (Yes, I [Ctrl]+[Alt]+[F1]'d back to my text console after starting xmcd).

So, it's not inherently a problem with xmcd under Linux. This particular installation is a S.u.S.E. 5.1 running under a 2.1.97 kernel that I just grabbed off of kernel.org yesterday.

So, that leaves us with other questions.

Do you have a sound card, or are you playing this through a headphone jack on the front of your CD player? (I'm not familiar with the specific CD drives to which you refer, but many of them have built-in headphone jacks. Mine is a Toshiba 3801, which I gather is sold as a 15x drive.)

Are there any configuration or diagnostic utilities for your CD drive and/or sound card? (Presumably they would be DOS or Win '95 utilities that shipped with the device or that you might get from their web site, ftp site, or BBS).

Have you called your CD-ROM or sound card vendors (or BCC'd their support on this e-mail)?

Did you do an Alta Vista or Yahoo! search? (I used "+xmcd +sound +problem") or check out the xmcd home page:
http://sunsite.unc.edu/~cddb/xmcd/

... which has links to their FAQ (and other useful) info.

There was an FAQ entry about Toshiba drives and "sometimes" getting "no sound." Although it doesn't sound like it matches your symptoms exactly you might read that and try the suggestions they list.

Just off hand I don't know of any newsgroups or mailing lists that are particularly good venues for this question (which I suspect is why you sent it to me). news:comp.os.linux.hardware might be one. Another might be news:comp.sys.ibm.pc.hardware.cd-rom or alt.cd-rom.

Hope that helps. However, it's still hard to imagine any problem that would match these symptoms and persist through a power cycle (not just a reboot -- a power cycle).


(?)Email Alpha-Paging software

How to build a mail to pager gateway

From John DiSpirito on Sat, 18 Apr 1998

Hello Answerguy,

I was wondering if you could help me with something? I was looking for a package that sits on my Linux machine and will do email alpha-paging. I'm sure you know what this is, but just in case:

A person emails an account: johndoe_page@somemail.com, and it pages them...

I know they are out there, but I don't know where they are. Could you lend some assistance?

Thanks.

(!)John,

There are several ways to do this, as you suspected.

First you could just use the TAP (Telocator Alphanumeric Protocol) script that was published in Frank da Cruz's book on C-Kermit. (The paging can be done as a Kermit script and the mail gateway would be a quick procmail script to call it.)

That approach requires a little bit of coding but uses tools you hopefully already have around. You can get out of the Kermit coding/typing by looking at:
http://fohnix.metronet.com/~tye/textpage.html
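As a sketch of the mail-gateway half: assuming a hypothetical TAP-dialing script installed as /usr/local/bin/textpage (that name, the path, and the "johndoe_page" address are all placeholders, not part of either package), a recipe in ~/.procmailrc could hand each incoming page request to it:

```
# Hypothetical recipe: copy (c) any message addressed to
# "johndoe_page" and pipe its Subject line to the paging script.
:0 c
* ^To:.*johndoe_page
| formail -xSubject: | /usr/local/bin/textpage johndoe
```

formail (bundled with procmail) extracts just the Subject header, so only a short string goes out over the pager.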

For more specialized tools to do this, I just went to the Linux Software Map search engine at: http://www.boutell.com/lsm/ ... selected the search by "keyword" options and typed in "pager"

I expected this to hit dozens of entries for 'more' 'less' 'most' and other Unix "pagers" (that is, programs for "paging" through a file). However, only Xless showed up under that false hit category.

The first "real" hit was a program by a Joshua Koplik. The LSM entry for it has some typos (or is just out-of-date from some directory restructuring at sunsite) so I had to chase down the real URL with a few judicious clicks:
http://sunsite.unc.edu/pub/Linux/system/mail/mailhandlers/!INDEX.html

... gets you to the right directory.

The other few links returned on this search were for 'man' pagers.

Now I'm also sure I recently saw another news article somewhere about telecom/paging software for Linux so I decided to hunt further.

So, I hit my old standby, Yahoo! (most of the answers I give are researched through Yahoo!). I used the string:
"+Linux +pager +alpha"

... and rapidly found a mini-HOWTO on this very topic at:
http://ir.parks.lv/li/Resources/HOWTO/mini/Pager

... by Chris Snell.

Despite Chris's "disclaimer" (the first line of the HOWTO reads "This document sucks.") the directions are very clear and seem to be very complete. I gather that it used to be listed among the LDP mini-HOWTOs and I'd like to see it re-appear there. (There are old, out-of-date mirrors of the LDP pages that have it and the current ones at:
http://sunsite.unc.edu/LDP/

... and at:
http://www.linuxresources.com/LDP/

don't show it.)

In this mini-HOWTO Chris refers to a package called "sendpage" (with URL's).

If you get this, I'd suggest that there are easier ways of configuring 'sendmail'. You really don't need to do any of that (writing custom rulesets) with a modern sendmail. Something similar can be done via m4 configuration macros and built-in features (or easily handled with a simple one-line procmail script).

Another great set of links is on Celeste Stokely's widely acclaimed "Serial Ports Resources" for Unix:
http://www.stokely.com/unix.serial.port.resources/fax.pager.html#pager.unix.link

(which suggests that HylaFax supports pagers in some way!)

It turns out that there is apparently a mailing list devoted to this topic at ixo-request@plts.org. (IXO is one of the other protocols that modems use to talk to alpha pagers -- I don't know the details).

In retrospect I think the recent posting I saw on the subject may have been at the "Linux Weekly News" site (http://www.eklektix.com/lwn/). Hitting their search engine revealed links to:
QuickPage (ftp.it.mtu.edu:/pub/QuickPage) (in a comment to their staff)

... but, oddly, didn't find the paragraph in their previous issue. It turns out that they didn't know about any of the links I've discussed above and were referring readers to a commercial package (of which there are several --- the most well-known being at http://www.spatch.com/).

[I've copied the LWN staff as well. This really wasn't meant to "scoop" them, since I think that LWN is the best thing since Linux Gazette --- and it comes out four times as often! Every LG reader should also check it out! I just can't figure out where they get all the time to work on it.]

Finally, the oldest freely available package for this that I know of is a Perl script called 'tpage' (Tom's Pager), a.k.a. ixobeeper.gz, at:

http://www.oasis.leo.org/perl/exts/date-time/scripts/comm/ixobeeper.dsc.html

Anyway I hope that helps. Obviously you have plenty of options ("there's more than one way to do it," as the Perl motto goes).


(?)xdm in 16bpp Mode

From Aubrey Pic on Wed, 15 Apr 1998

How do you get XDM to run in 16 bpp? I believe it runs 8bpp by default. I have a .xserverrc file that forces 16 bpp. Whenever I ran XBANNER, it would default to 8 bpp. Thank you.

(!)Did you try putting it into your XF86Config file? That seems to work for me. I presume this is an XFree86 installation (rather than Xi Graphics/X Inside or MetroX).

Does it work when you use startx -- -bpp 16 from a shell command line?
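One more thought (an assumption about your setup, since you mention .xserverrc): xdm doesn't read .xserverrc at all --- that file is only consulted by startx/xinit. xdm starts the server from its Xservers file instead, so the depth flag has to go there. On many XFree86 installations that file lives at /usr/X11R6/lib/X11/xdm/Xservers (or /etc/X11/xdm/Xservers), and the server line would look like:

```
:0 local /usr/X11R6/bin/X -bpp 16
```

xbanner, run from xdm's Xsetup script, should then inherit the 16bpp visual.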


(?)Bad cluster in HDD

From Thomas Vavra on Wed, 15 Apr 1998

Hi there!

I got a neat, fast 1.6GB HDD (WD IDE) with one "bad cluster" as DOS calls it. Is there any way of using it for Linux (marking the cluster as bad or something like that)?

(!)No problem. Linux distributions come with a program named 'badblocks' that handles this for you. The best way to do it is to let 'mkfs' call badblocks using its internally supported switches.

For ext2 filesystems you'd use mke2fs or mkfs.ext2 (usually links to the same file). Just add the -c switch to the command when you invoke it (and read the man page for details).

If you already have an ext2fs on a drive and you suspect that new bad blocks have developed (for example you've dropped the drive or the machine's been through an earthquake) you can run e2fsck (or fsck.ext2 as it may be linked) with the -c switch.

Like I said, easy!

(Naturally I suggest you do these from single user mode, and do proper backups).
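For the command-shy, here's the whole sequence sketched against a scratch file, so it's safe to try. On the real drive you'd substitute the device name (e.g. /dev/hda2) for /tmp/scratch.img; tool locations vary by distribution.

```shell
# The e2fsprogs tools usually live in /sbin.
export PATH="$PATH:/sbin:/usr/sbin"

# A 2MB scratch file stands in for the disk.
dd if=/dev/zero of=/tmp/scratch.img bs=1024 count=2048 2>/dev/null

# Read-test every block, writing any bad ones found to a list file.
badblocks -o /tmp/badlist /tmp/scratch.img

# Build an ext2 filesystem, checking for bad blocks as it goes (-c);
# -F forces mke2fs to operate on something that isn't a block device.
mke2fs -q -F -c /tmp/scratch.img

# Later, to recheck an existing filesystem (add -c to rescan for
# newly developed bad blocks -- from single user mode, after backups):
e2fsck -p -f /tmp/scratch.img
```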


(?)Complex network and NetBIOS

From Kate Stecenko on Tue, 14 Apr 1998

Hi !

I have some problem, can you help me?

Our network has 2 segments. Each segment has a lot of stations running Win 95 & Win NT. The segments are connected via a router. The router is a Linux box with Mars NWE for IPX routing & internal kernel IP routing.

I need all computers in all segments to be visible to each other via NetBIOS (in Network Neighborhood/Microsoft Windows Network). Not all computers in our network have a TCP/IP stack (that's impossible for important reasons), so I cannot use NetBIOS over TCP/IP. Is there any way to make my Linux box and Samba work with NetBEUI, or run NetBIOS over IPX?

(!)Last I heard NetBEUI is not routable. Novell's IPX/SPX is routable to about 16 hops --- and a properly configured Netware system should automatically route IPX. I don't know about IPX routing through the Linux kernel (it might require some static tweaking).

I don't know of any way to tunnel NetBIOS traffic over IP or IPX.

Other Options:
Bridging
I think you can configure Linux to do ethernet bridging (it seems that an experimental config option for this has crept into the recent 2.0.x kernels). Bridging is a process where ethernet frames are copied from one interface (segment) to another. This is different from routing in that the router works at a higher level in the OSI reference model (routing is at the network layer, while bridging occurs at the data link layer and normal ethernet hubs work at the physical layer).

One cost of this is that the bandwidth from one segment is usually no longer isolated from the other (meaning that your utilization may become unacceptably high). Some bridges are more "intelligent" than others --- and they "learn" which ethernet cards are on which segment (by promiscuously watching the MAC --- media access control --- addresses on all ethernet frames on each interface).

The smart switches or bridges then selectively forward frames between the segments. (I use the term frames to refer to ethernet data structures or transmission units and "packets" to discuss those from the upper layers).

Some switching hubs (like the Kalpana) are quite expensive but perform all of this in hardware/firmware. The advantage is that traffic that's local to a segment won't be copied to the other --- which should reduce the overall bandwidth utilization of this approach.

The disadvantages involve NetBIOS and Netware/IPX. NetBIOS is a "chatty" protocol involving lots of broadcasts, particularly by servers (which in '95, NT, and WfW is every machine with any "shares"). IPX is better, for the most part, but most of the servers and services utilized by Netware require SAP's (service advertising packets). These are broadcasts as well.

(SAP's are why you don't have to configure a Netware client system with information about default routers, DNS servers, and things like that. The client listens to the wire for some period of time and hears a list of these periodic SAP's. The disadvantage in large networks with lots of servers, print servers, and other services is that the SAP's can chew up a sizable portion of your bandwidth --- and they are routed).
Gateways:
Rather than trying to get this to work at a layer below the transport (NetBIOS, TCP/IP, IPX/SPX) you could try to get above it, into the presentation, session or application layers. These approaches are generically called "gateways."

However, I don't know of any gateways that are appropriate to SMB servers.
Warning:
The rumors I've been hearing are that Microsoft will be phasing NetBEUI out in favor of TCP/IP. So your organization's constraint may not be feasible in the long run (the next year or two).

(?)Please tell me what to do.

(!)Conclusion:

Question your management's constraint about TCP/IP. NT and '95 both include it (so it can't be a cost issue).

TCP/IP is the most widely used and deployed set of networking protocols in the world --- and has been around longer than anything else in current use. It is clearly scaleable (despite the naysayers and doomsdayers -- "the Death of the Internet" is not imminent). It doesn't suffer from the limitations of IPX and NetBIOS.

I suspect that your management's proscription is based on ignorance. They probably think they know just enough about TCP/IP to worry about security and not enough to know that protocol selection has little to do with a system's security. I've seen this discussed several times on the comp.unix.security newsgroup and BugTraq mailing list.

If they are concerned about where to get IP addresses it's simply a non-issue. They should read RFC 1918. This RFC establishes several sets of IP addresses to be used by "disconnected" networks. In this case "disconnected" means "behind a firewall" or "not connected to the Internet" (your choice).
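For reference, the three blocks that RFC 1918 sets aside are:

```
10.0.0.0    -  10.255.255.255   (one class A network)
172.16.0.0  -  172.31.255.255   (16 contiguous class B networks)
192.168.0.0 -  192.168.255.255  (256 contiguous class C networks)
```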

You can use any of these that you want --- you don't have to ask anyone's permission. It is your responsibility to prevent any such packets from being routed to the Internet (which is where we get all the discussion of "IP masquerading," "NAT: network address translation," and "application proxies" --- a form of "gateway").

If their concern is about preventing propagation of "forbidden" protocols (applications layer) or "sensitive" information across their routers --- there are well established ways of doing that (built right into the Linux kernel, among other places). It's much easier to prevent all propagation than it is to selectively allow access to specific protocols like HTTP (web), SMTP (e-mail) and especially FTP (which is an ugly protocol for firewall designers to support --- but just as easy as any other to block).

So, I have to question their "important" reasons and suggest that if those reasons really are that important and bridging is not feasible, then they probably have an unresolvable conflict in their requirements.

(They might consider running polling processes on the Linux Samba/NWE server to replicate/mirror all of the data that must be accessible between the segments. This would be a big win in a couple of ways --- if feasible given their usage patterns. It cuts down traffic across the routers (speed/latency benefits for all) and ensures that an extra "backup" of all the relevant data is available. The obvious problems involve concurrency if you allow write access on both sides of the fence. However, if the data is of a type that can be "maintained" on one side and published across in a read-only fashion it is worth a look).

In many ways I'd even question their requirement to share these as files. If you have a few well-managed servers it may be reasonable to make them all "dual homed" (put two ethernet cards in every server and let them all straddle the segments). If they are requiring the propagation of shares created and maintained by desktop users then they probably have a major management problem already.

(?)TIA, Kate Stecenko.

(!)I hope the explanation helps. Just off hand it sounds like you've been saddled with a poorly considered set of constraints and requirements.

It happens to a lot of sysadmins and netadmins. While it exercises our creativity and encourages us to socialize (in our mailing lists and newsgroups) --- it also leads to premature graying (or baldness in my case).

Sorry there's no magic bullet for this one.


(?)Some Thoughts on "The Man of the Century"

From Brian Schramm on Sat, 11 Apr 1998 on the Linux Users Support Team (L.U.S.T) Mailing List

Hello,

I think this might interest you... It arrived without the original sender ID.


The DALnet #Linux channel has started a movement to get Linus Torvalds voted as Man of the Century. Their idea is to get a massive number of votes for Linus, which would at least get attention for Linux if nothing else. They estimate that they need about 1 million votes to pull it off.

They've requested everybody to vote for Linus and to pass it along. The category in which Linus is being placed also has a mention of Bill Gates, so we've got some competition. If you would like more information, see the URLs below.
Vote for Linus Torvalds:
http://www.pathfinder.com/time/time100/time100poll.html
Linus as Man of the Century Mailing List:
linusmotc-request@merconline.com

When you vote the system gives you the present ratings. The category where Linus shows now:

(!)While I have the utmost respect for Linus and feel greatly indebted to him for Linux, I have reservations about this suggestion.

First I have to say that the computer has not, in my opinion, been the dominant development of this century. Although microcomputers are the basis for my career and the principal tool in my hobbies (writing and participating in newsgroups and mailing lists) --- I have to step back and try to achieve a more objective view.

I'd rate the development of the telephone and our world wide telecommunications infrastructure as roughly an order of magnitude more important worldwide. Granted that modern telephony would be impossible without the computer. The underlying importance of the telephone has driven computers in large part (specifically in the development of Unix --- at AT&T Bell Labs!). However, my sense of history suggests that the impact of telephony was already evident before that (when the vast majority of it was run by mechanical relays and even by human switchboard operators).

Despite this I wouldn't even say that telephony is the most important development of our century. I think that broadcast media (radio and television) have at least twice as much impact as the phone. The reason is that telephones primarily extend our ability to communicate and shrink our time scales --- but they are still mostly localised geographically and socially. The fact that the technology allows me to call someone in Japan as easily as I could call the local pizza parlor doesn't matter much when I have no acquaintances in Japan. The telephone doesn't allow most of us to really connect with a significantly broader or larger set of associates than was possible with old-fashioned postal correspondence.

Broadcast television has had quite a bit of effect on this country and on most of the rest of the world. The results are fundamentally different than anything that could have been accomplished by correspondence or other forms of individual association. Prior to radio and television we didn't even have the word "broadcast."

I'd put publishing in the same league as broadcast media for potential. However it is several centuries old. Also its potential has never been as widely realized as broadcast media due to the simple hurdle of literacy. This is not so simple as functional literacy. Many people have sufficient academic skills to participate in our (or their) culture --- but are not affected enough by any publications to really move society. I personally consider television to have had a greater effect on our culture based largely on the sheer number of hours that people spend absorbing its emanations.

It doesn't matter how trite most of the "content" has been --- the fact is that a large percentage of the world's population has been pacified for an astounding number of hours by TV's and movies (the silver screen). I'm not nearly so concerned by what television has caused people to do as by how much it may have prevented through its diversion.

Despite its greater importance I still wouldn't say that television, movies and other broadcast media are the most important development of our century. There is one thing that's had even more effect over more of the world than those.

I think I'd have to give the award to Henry Ford. Not only is the automobile one of the most important and ubiquitous developments of this century, but the manufacturing techniques and organisational structures associated with Ford dominate the world's economy and literally shape our cities.

So, despite the fact that Ford appears to have been anti-semitic and to have held elitist views that would disgust many people today --- I'd have to vote for him for this century.

That brings me back to Linus. I think we just might see the real effects of the FSF and Linux later. It may be that people in 2098 will look back and remark on how the spirit of co-operation that was fostered by Richard Stallman and Linus Torvalds (among many others) in the field of microcomputer software fundamentally changed our culture's ethic and economy. We might see radical changes to the publishing industry as more content moves onto the 'web' (by which I don't just mean HTML carried over HTTP --- but in a broader sense I mean to include the multi-cast communications we see in netnews and on these mailing lists).

This would have to be accompanied by radical solutions to the real problems we face in the world today. We cannot continue to allow our population and resource utilization to grow through another century. In addition the current allocation of natural resources must be rationalized before we can have a better world. If we continue to have less than 5% of the population accounting for 80% of the world's resource consumption and continue to allow individuals to rape the land that they "own" and discard it when they've extracted the value from it, then most of the world's population will remain poor and miserable (and most of the "developed" nations will see large parts of their own populations degenerate into "third world" conditions). This is not "doom and gloom" prophecy --- it's a simple matter of arithmetic. The question is not "if" but "when" and I think the argument is over decades rather than centuries.

So, if we're still in a position to concern ourselves about a "person of the century" contest ten decades from now, I hope that the standard of living for the rest of the world has improved to the point where we can get more than a .05% participation in the selection process. (There were less than 3 million votes listed in the table that Paul quoted, and that's only 1% of just the U.S. population --- which is about 5% of the world population last I heard).

Who knows, we might then see a bit of national, racial, or even gender diversity in the candidates! (Unfortunately that might take way more than a century).

It's not very likely --- but I'd like to live to see the next century and be astounded by the spread of altruistic collaboration from software into other endeavors.

While I can't vote for Linus Torvalds as the man of this century I can mark his accomplishment as one of the most amazing things I've ever seen. He might leave a legacy that makes him the man of the next century!


(?)Linux as a General Purpose SOHO to Internet Gateway

From Ron Smith on Sat, 11 Apr 1998 on a newsgroup

I looked through the FAQ and didn't find any answers to this question. I hope this is the right forum.

(!)"The" FAQ. There are a huge number of Linux FAQ and HOW-TO documents. I haven't read them all and I'm "The Answer Guy."

I am a fairly experienced UNIX developer but I usually leave the difficult administrative stuff to the SysAdmins. I have been running a small LAN for my business using Slackware Linux (currently version 3.2) for some time now. What I really want to do is use the Linux server as a gateway to the Internet for the rest of my LAN. I can connect via PPP to my ISP from the Linux box with no problems, but what I haven't found any good books or documentation on is:

How do I set up the Linux server to bridge between my local LAN and the Internet?

(!) You probably want to read up on IP Masquerading. In its simplest form you use ipfw (the kernel packet filtering features) and configure it with a command like:
ipfwadm -F -a accept -m -S 192.168.1.0/24 -D any
... which says:
add a rule to accept packets for forwarding from the 192.168.1.* range of addresses, and masquerade them to wherever they are going.
This assumes you have all your internal systems already configured with RFC 1918 IP addresses like 192.168.1.* or 172.16.*.* or 10.*.*.*, and that you have them all configured to use the Linux system as their default router. It also assumes that you are running a reasonably recent kernel with the ipfw options enabled.

There's quite a bit more to it than that --- but that is the core command that makes it work. Note that some protocols --- ftp in particular --- don't work reliably through masquerading. It is often better to get a copy of the TIS FWTK or SOCKS (application layer proxies) to support these (*).
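Pulling those assumptions together, a boot-time sketch might look like the following (the interface and addresses are examples only; this assumes ipfwadm and masquerading support are present in your kernel):

```
#!/bin/sh
# Sketch: eth0 on the 192.168.1.0/24 LAN, ppp0 to the ISP.
# On 2.0.x kernels forwarding is a compile-time option; kernels
# with the proc toggle turn it on like this:
echo 1 > /proc/sys/net/ipv4/ip_forward
ipfwadm -F -p deny                                  # forward nothing by default
ipfwadm -F -a accept -m -S 192.168.1.0/24 -D any    # masquerade the LAN
```

The deny default policy means only the masqueraded LAN traffic gets forwarded.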

Suggestions: run a caching nameserver and a good caching web proxy (like squid) on the router (the Linux box). Make a "best effort" to "harden" the router's configuration and contract to have a thorough security audit performed on it. If at all possible isolate the gateway on the "outside" of an interior perimeter router (which can be another Linux box running no services, not even inetd).

Adding the caching for DNS and other protocols can greatly reduce the traffic over the network link and only costs a tiny investment in configuration time, RAM, and disk space. Any traffic that's handled by the cache is a bit less contention for everyone else using the link and everyone between you and the servers that you're accessing (i.e. the whole 'net benefits).

(?)I would appreciate any help that you can give...I will check back here periodically or, if possible, email me directly. Thanks in advance.

(!) Feh! I'll try to remember to spool off a copy via e-mail. Find a good consultant in your area. A good one will show you how to do all of this and will be able to explain quite a bit more because he or she will ask quite a bit more about your requirements. I've glossed over quite a bit here -- in particular regarding the security issues.


(?)LOADLIN.EXE, Plug & "Pray" and "Win(Lose)Modems"

From Allen R Gunnerson on Sat, 11 Apr 1998

I was told by several people that I can configure my loadlin so that my plug n play stuff in Win95 would be detected by Linux. Right now, if I use DOS mode, I lose all my hardware. I have tried to configure my LTWin modem for Linux with no luck.......

(!)I think you have two different issues embedded in here. Plug -n- Play (hardware) is a fairly lame attempt in recent years to create PC hardware that autoconfigures itself. When talking about ISA cards this is mostly just marketing fluff that fails in many configurations -- and is widely called "Plug -n- Pray" by many of the support reps that I know.

"WinModems" are another issue.

Let's start with the first issue:

A typical PC has two (or three(*)) buses. A system "bus" is a hardware interface, with slots or connectors for multiple devices. The original IBM PC (and XT) had 62-pin (8-bit) slots. With the introduction of the AT, IBM placed another connector "end-to-end" with the original 62-pin slots -- which allowed many old "8-bit" cards to be used in AT and even in modern systems. These are called "16-bit ISA slots." (The term ISA, or "industry standard architecture," was coined after the fact -- near the introduction of MCA (micro-channel architecture) and EISA (Extended ISA). Those hardware specifications have almost completely disappeared).

As the industry fought over MCA vs. EISA (largely resulting in the market's rejection of both of them -- due to the crass attempts at exploiting proprietary designs by major vendors of each) the clone manufacturers -- particularly the motherboard and video engineers -- created a high speed 32-bit bus called "VESA local bus" or 'VLB' for short. VESA is the "video electronics standards association," although there were eventually a variety of disk and network controllers that plugged into VLB slots.

These were the rule for late 386 and throughout most of the 486 era (if a period of only 5 years can be called an "era").

With the introduction of the Pentium, Intel also created a number of chipsets and introduced a new bus/interface called "PCI" (Peripheral Component Interconnect).

I don't know a lot of the low level details about PCI vs. VLB. I've heard that there were very good technical reasons why VLB couldn't be used in Pentium systems. I've also heard that Intel rammed their spec down everyone's throats in a way that has resulted in their clear domination of the chipset market as well as the CPU market.

Prior to this there were a number of companies selling chipsets (all the support circuitry that connects the CPU, the memory, the bus(es), and other interfaces to the motherboard, like the keyboard connector). Now there are practically no other companies selling chipsets. It seems that all of the motherboard manufacturers have been forced to use various Intel chipsets (Neptune, Triton, etc). I've heard that some of these have had bugs as notorious as some of their CPU's.

One problem that has persisted through all of this is that a typical PC owner has had to manually keep track of how each device on the system was integrated with the others. Any individual card might require an IRQ (interrupt request), some I/O port addresses, a DMA channel, and/or some "reserved address space" (for memory mapped I/O between the 0xA0000 and the 0xFFFFF regions).

There are only a pitifully limited number of each of these resources. The original PC only had 8 IRQ lines on a single PIC (programmable interrupt controller). A modern PC still only has 15 -- accomplished by "cascading" one PIC off of the IRQ 2 of the other.

Of these the system timer, keyboard, the real-time clock and the FPU (floating point unit) are already taken up -- as well as two serial ports and a hard disk controller (IDE, SCSI, or any other). Usually there is also one associated with each LPT port and one for any bus mouse interface we have. That leaves only about a half dozen to be distributed among our SCSI, ethernet, sound and other cards. Sound cards often take up two of these incredibly scarce resources.

As if the scarcity weren't enough of a problem, complexity -- the fact that every user has to keep track of these for every system -- was a major kicker. This has been a major failing of the PC architecture. The priority of "backward compatibility" has left us with a "backwards architecture."

Plug and Pray was an attempt to relieve some of that complexity (though it does nothing to resolve the underlying problems of scarcity -- which are deeply ingrained design limitations). It has helped somewhat -- but it requires that all of the components of the systems (hardware and OS) conform to the same spec. A PnP system can work with some old ISA cards, some of the time. The real problem comes when you use multi-boot configuration (as you're doing between DOS and Linux) -- since each of these may try to "tune" the configuration to itself.

The "universal serial bus" (USB) and the "Firewire" specifications offer some hope of relieving the issue of scarcity. Like SCSI these provide an adapter to a semi-intelligent "bus" of external peripherals. In effect the adapter uses one PC IRQ and I/O port range -- and negotiates/arbitrates among many devices on its own bus using its own discrimination logic.

However, it looks like it will take some time for practical devices to become widely available in USB form. So far there are a few digital cameras and scanners that support it --- and no modems, ISDN TA's, terminals (or null modem adapters), X-10 powerhouse, or other toys. Ideally someone would make a couple of models of parallel-USB and RS232-USB bridges so I could use existing devices (like parallel port Zip drives and flatbed scanners) with my new USB equipment. It looks like the hardware companies would much rather force us to all buy all new peripherals --- and to get peripherals that aren't usable on any platform other than a PC.

Naturally we can see that Microsoft will benefit from these and any form of "WinModem" or proprietary software drivers for peripherals. I can't think of anything that will perpetuate the status quo of this market more effectively than that short-sighted attitude among hardware vendors.


(?)'sendmail' Masquerading: What and Why

From Stephen Oberther on Tue, 07 Apr 1998

First of all let me say that I love the magazine and your column. This problem has been bothering me for quite some time now and I can't seem to figure out how to remedy it. I have a dial-up internet account but use my local sendmail for email distribution.

My question is this: Is there a way to have my actual email address stamped onto my email so that the recipient can just reply to the email normally and have the reply go to my account with my ISP? Currently (with the exception of this message, if Netscape works properly) the From field is posted with my username and my local machine name, as it should be. Is this possible at all or is it just a lost cause?

(!)Yes, there is a way to have your system "masquerade" as some other system or domain. In fact this is what most organizations do.

Note: this 'sendmail' masquerading feature should not be confused with "IP masquerading" (which is a form of TCP/IP network address translation -- or NAT). In the context of mail the term refers purely to how the headers of each piece of mail are constructed. (IP masquerading is at the transport layer of the OSI reference model while 'sendmail' masquerading is at the applications layer).

Now the fact that you mention Netscape (presumably NS Navigator or Communicator) raises a different issue. Some MUA's --- particularly those that have been ported from or significantly influenced by non-Unix code --- will bypass the normal conventions for mail handling under Unix and deliver their own mail directly to the apparent recipient (by doing the appropriate DNS query for MX records and engaging in a direct TCP dialog with that host's SMTP port). In many cases you can configure these to relay mail through some other system --- such as 'localhost' --- which will allow your 'sendmail' (or qmail, or vmail, or some other local MTA (mail transport agent)) to apply your local policies (like header rewrites) to the mail.

Host "hiding" via 'sendmail' masquerading is such a local policy. Assuming you're using 'sendmail' you can enable it with the following lines in your 'mc' (Macro Config) file:
FEATURE(always_add_domain)dnl
FEATURE(allmasquerade)dnl
FEATURE(masquerade_envelope)dnl
MASQUERADE_AS($YOURHOST)dnl
Naturally you probably need other lines in there and you need to run this through the M4 macro preprocessor to generate your /etc/sendmail.cf file. (I do not recommend hand hacking the cf files as this is more error prone and harder to document and review later).

You might not need all of these features --- but I use them.
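For reference, the generation step looks something like this (the location of the sendmail cf tree varies by distribution; /usr/lib/sendmail-cf and the file name myhost.mc are just examples):

```
cd /usr/lib/sendmail-cf
m4 m4/cf.m4 myhost.mc > /etc/sendmail.cf
```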

Note: this doesn't "hide" your internal host names and/or IP addresses in the "Received:" headers --- hiding those is a frequently asked question about security (via obscurity) features. It merely affects the Return-Path: and From: addresses.

The part about "masquerade_envelope" is one I've added more recently. It prevents some potentially alarming headers from appearing in my mail when a receiving or relaying mailhost's sendmail (or other MTA) can't do a proper "double reverse" lookup of my address. (Yes, my DNS and reverse DNS are out of sync --- and no, I haven't fought it out with my ISP nor have I assumed control of my own DNS. Let's not talk about the footwear on the cobbler's kids!).

(?)Oh and just in case the from address is wrong on this email it should be ...

Thanks in advance, Stephen Oberther

(!)The first test I would make in your situation is to pass a message straight to sendmail with a command like:
/usr/lib/sendmail -t -v -oe < $TESTMAIL
... where $TESTMAIL is the name of a file that has a reasonably formatted piece of mail (at least a To: and a Subject: line for a header, a blank line and a few lines of text for the body).
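Such a $TESTMAIL file can be as simple as this (the address is a placeholder):

```
To: someuser@example.com
Subject: sendmail rewrite test

This is a test of local header rewriting.
```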

Point the To: line at one of your other accounts, at a friend, or through some autoresponder (pick one that doesn't remove the headers). Then you can see what sort of rewriting is occurring. It may be that your system's MTA is already properly configured and you can focus on the MUA (mail user agent).


(?)Tools for converting X output to java

From Sheldon E. Newhouse on Tue, 31 Mar 1998 on the [linuxprog] mailing list

Are there any tools available to convert standard C code with X displays to java displays? Basically, we have a long program which is written in C and does low level X output. We would like to port it so that the number crunching part works on a back-end server and users can view output on java clients. The part of the program that does the display is fairly short, but intimately connected with the X libraries. Any ideas or references would be appreciated.

TIA,
-sen

(!)I'm not sure about a "Java X Server" per se. However there has been some work done on execution of remote X applications through a web browser interface.

I have yet to use any of this stuff (I barely use X) --- so I can only rely on hearsay and a bit of web searching.

The first technology to consider is the latest release of X Windows itself. There was an initiative by the X Consortium (*) called "Broadway" --- this eventually became the widely accepted code name (possibly widely excepted as well -- but we won't get into that) for the entire X11R6.3 release.

Since I don't know most of the details it's probably best for me to just refer you to some online articles that discuss it:
Broadway/Xweb FAQ
http://www.broadwayinfo.com/bwfaq.htm
Can X Survive on the Internet? By Brad Weinert
http://www.sigs.com/publications/docs/xspot/9609/xspot9609.d.execbrief.html
X Is Dead, Long Live X
http://www.sigs.com/publications/docs/xspot/9603/xspot9603.d.edit.html

Note that the constraint of this approach is that it seems to require that you actually have an X Server on the clients. This is great if you have just about any sort of Linux/Unix clients --- but may be prohibitive if you intend for Windows (NT, '95, or 3.x) or MacOS clients to access your applications.

You might be able to deploy the MI/X server (a freely available X Server for Windows and MacOS) but I don't know if it supports the X11R6.x "Broadway" features. You can find out more about MI/X at http://tnt.microimages.com/freestuf/mix/ and read MicroImages Inc.'s FAQ at http://tnt.microimages.com/freestuf/mix/mix-faq.htm.

A quick perusal of that suggests that it won't support the Broadway set of extensions (since the FAQ says that it doesn't support LBX -- the "low bandwidth X" which is apparently a key part of Broadway).

I don't know which commercial X Servers for Windows or the Mac would help --- but I needn't spend any more space on that issue since you don't specify that as a requirement. Hummingbird's Exceed (http://www.unipres.com/hbird/exceed/) is a likely candidate.

Another approach might be to provide your clients with VNC --- which was listed in Linux Weekly News (http://www.eklektix.com/lwn/) a few weeks ago. This has nothing to do with Java and almost nothing to do with X Windows. However it does allow you to view X Windows displays from Windows and Mac clients and vice versa --- using an alternative to the X communications protocols.

The potential advantage is that it sounds much easier to install and configure than a Windows or MacOS X server. Take a look at the "Olivetti and Oracle Research Laboratories" (http://www.orl.co.uk/vnc/) for more on that. Another advantage of this over MI/X is that it is open source, available under the GPL --- MI/X is free to use but the sources are not available.

The two approaches that are more directly suggested by your question are:
An Xlib to Java awt cross compiler (or class or library)

... or:
An X Server for the Java virtual machine (a class that implements Xlib).

I suspect that both of these are feasible --- though the performance costs of the latter may not make it palatable and I'm sure that the portability issues in each would be significant.

Despite my search engine efforts I was unable to find any information on any work being done on either approach. However, I'm not an expert in Java and I don't keep up on it much. So, maybe someone else here will help us out.

I did look in several likely places at http://www.developer.com/directories/ (formerly the Gamelan archives). The closest I found is a JXTerm --- allegedly an 'xterm' in Java. This includes telnet and cut and paste features. There are several Java terminal emulators including VT100, VT220, ANSI-BBS, TN3270 and TN5250 applets.

Meanwhile I've been hearing snippets of SCO's (http://www.sco.com/) "Tarantella" for the last year or so. This apparently does act as an X Windows to Java gateway, providing a sort of "web desktop" or "webtop" as their marketing copy refers to it. It appears that you'd have to install a SCO OpenServer system to either run your application or to act as a gateway between your applications server and your clients. (I doubt sincerely that SCO will make Tarantella available for Linux --- and it doesn't seem to be written in Java). I'm really curious as to how this works.

(While I was writing this I installed a new copy of the Netscape RPM from my Red Hat CD -- this is a new system that I just built from parts, it's running S.u.S.E. -- started an X session, started Navigator, pointed my browser at http://tarantella.sco.com/ and tried to access their demo. It gets to some point before showing me anything like X or its "webtop" with some complaint about a Java Security violation in some class or another. Exiting and retrying got me further along. Hint: don't try 'Tarantella' during your first Navigator session. Eventually I was able to get it running ...err... walking ...err... limping along. It might be faster over a T1 or an ethernet -- I happen to be using the 28.8 PPP connection at the moment).

If you're curious, go try the demo yourself. You can also find a set of slides from a presentation by Andy Bovington (?) on:
Controlling X/Motif apps from Java and Javascript.
http://www.sco.com/forum97/tarantella/presentations/bov/

In typical "big company" fashion SCO doesn't tell you how much this product costs. They expect you to fill out a survey and have their sales critters call you.

I suspect that this circumlocution may result in more converts to open source software than all of the other "freedom" arguments we can muster.

Meanwhile, my wife, Heather, found a couple of promising links (she's the real search engine wiz in the family).

Here's the most promising:
Eugene O'Neil's XTC, the X Tool Collection
http://www.cs.umb.edu/~eugene/XTC/

... which appears to be at the early alpha stages but, in his own words:
... is an implementation of the X Window Protocol, written in pure Java. It does not require any native C libraries, such as Xlib: instead, it is intended as a complete replacement for Xlib, written from the ground up to be flexible, object-oriented, and multi-threaded.

(Wow!)

There was also some work done by a Mr. Igor Boukanov at:
Pure Java X client
http://asfys3.fi.uib.no/~boukanov/java.doc/lib.x.html

By far the best technical information I found on X Windows in my search was:
Kenton Lee's: Technical X Window System and Motif WWW Sites
http://www.rahul.net/kenton/xsites.html

Kenton has the best set of links on the subject, and apparently has written articles for 'X Advisor Magazine' and several others.


(?)"Good Times" are Here Again? NOT!

E-mail and Internet Hoaxes Exposed

From steve wornham on Mon, 30 Mar 1998 on the [linuxprog] mailing list

I am not sure if this will help anyone but I recently came across it. (forwarded message below)

(!)I hope that I'll be the only one to respond to this and I hope that no one, on any Linux mailing list, will forward this drivel anywhere!

This appears to be yet another variation of the "Good Times" virus hoax. Hopefully my response will help everyone. Please do NOT forward this message (or any message) to "everyone in your address book." Any mail that you receive that makes this plea should be viewed with extreme suspicion --- they are almost always hoaxes, spams, scams, or Ponzi schemes.

Most are illegal in many jurisdictions (internationally and domestically). Even the cases that aren't illegal are obvious abuses of our shared networking resources (bandwidth).

I won't dignify this particular hoax with an analysis. Suffice it to say that it doesn't specify platform, agent, mechanism, or effect (symptoms or "payload").

For the record it is possible for e-mail to carry some forms of computer virus to some users. Any WinWord .DOC file can contain macro virus code --- and can be attached to any e-mail (via MIME). However, this "virus" is only "data" to the vast majority of Linux users. Even most Windows users won't be affected most of the time --- and all can protect themselves (simply configure your mail user agent to disable any "automatic document viewing" features, and disable the "auto-executing macros" of all your MS Office packages).

Lest you think that MS Windows is the only platform affected by malicious macros that can be embedded in documents --- consider that some versions of the venerable 'vi' and 'emacs' editors for Unix have historically contained similar features (modern implementations either lack them or have them disabled by default).

In any event some of us in the professional virus fighting community (*) consider these "Good Times" messages to be a "social virus" --- one that is transmitted via the gullibility of, and lapses in judgement by, the human recipients. If you have ever forwarded copies of any such warning to anyone else then you have been a carrier of that virus.

Inoculate yourself! Don't perpetuate these hoaxes! When in doubt ask! (Ask one or two trusted associates). Don't forward any message to "everyone in your address book." Most importantly, don't delete unread mail simply because you think it might contain such a virus. (*)

Incidentally, document macro content isn't the only risk of running some of the modern mail user agents. A number of GUI, MIME enhanced, HTML extended MUA's will default to running JavaScript, or fetching and executing byte-compiled Java code. These should be disabled or limited to "trusted" domains and hosts wherever possible. When that's not possible --- use a different mail reader. Another risk involves the automated fetching of HTML images. While there is no known mechanism for this to execute hostile code --- it is possible for the server containing such images to perform traffic analysis (finding out what IP address your mail gets forwarded to, and that sort of thing). This is a subtle risk which is more related to privacy than to viruses.

As a final note regarding the "Good Times" class of hoaxes --- if a virulent, cross-platform, e-mail transmitted, computer virus ever is created --- it is very unlikely that it would always travel with the same subject. If such a virus were created it almost certainly could not be detected by any feature of the message headers (there is no conceivable reason to write such a program with any such constraint).

For those who like to follow links, here's some web info about "Good Times" and similar hoaxes:
anti-Good Times virus page
http://www-students.biola.edu/~dougw/GoodTimes/virus.html
Good Times Virus Hoax FAQ (over 50K)
http://users.aol.com/macfaq/goodtimes.html
http://www.public.usit.net/lesjones/goodtimes.html
CIAC Internet Hoaxes (about 48K)
http://ciac.llnl.gov/ciac/CIACHoaxes.html
Don't Spread that Hoax!
http://www.nonprofit.net/hoax/hoax.html
  • This one covers many other types of Internet and e-mail hoax.
The AFU & Urban Legends Archive
http://www.urbanlegends.com/
  • The alt.folklore.urban-legends newsgroup home page
>>     Hi all, 
>>     This was forwarded to me.  Please feel free to pass this along.
>>     Sherry
>>
>>======== Original Message ======
>>     
>> Please be careful!!
>> If you receive an e-mail titled "WIN A Holiday"  Do Not open it, it 
>> will erase everything on your hard drive.  Forward this letter out 
>> to as many people as you can.  This is a new, very malicious virus 
>> and not many people know about it.  This information was 
>> announced yesterday morning from Microsoft, please share it with 
>> everyone in your address book.  Also, do not open or even look at 
>> any mail that says "Returned or unable to Deliver" this virus will 
>> attach itself to your computer components and render them useless.  
>> AOL has said that this is a very dangerous virus and that there is 
>> NO remedy for it at this time.


The original message referred to in this thread was sent by Cesar Augusto Kant Grossmann.


(?)LinuxGazette Mar 1998: xdm Login doesn't!

From Milton L. Hankins on Mon, 30 Mar 1998

Actually, it sounds like Cesar's .xsession script is exiting before he performs his usual "logout" action.

Cesar, does the user account have a .xsession file in the home directory? If not, create one. The simplest one would contain the line "fvwm" or "xterm".

(!)Milton,

Wouldn't this show up as a problem when he ran 'startx' as well?

As I've said before my practical grasp of X is pretty weak -- but I do understand the concept of a 'session manager.' Most X clients in your start up script are started in the background (with a trailing ampersand in shell script syntax). However one (usually the last item executed by the script) must be started in the "foreground." This client, whether it is a window manager, an 'xterm' or even 'xclock' will be the "session manager." When you exit or kill the session manager the X server takes that as a hint to close down (returning you to a shell prompt if you used 'startx' or to an xdm login screen if you started the session graphically).
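As a concrete illustration, a minimal ~/.xsession following this pattern might look like the following (the particular clients named are just examples):

```shell
#!/bin/sh
# Background clients: each gets a trailing ampersand.
xclock &
xterm &
# The foreground client is the "session manager"; when it
# exits, the X session ends.
exec fvwm
```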

Can you clarify the differences between ~/.xsession and ~/.Xclients (mine are just symlinked together)?

(?)Milton Replied...

From Jim Dennis on Mon, Mar 30, 1998

Wouldn't this show up as a problem when he ran 'startx' as well?

(!)Not necessarily, Jim. It all depends on the system scripts. Traditionally, startx uses "~/.xinitrc" and xdm uses "~/.xsession".

It sounded like he was only having problems as a "normal user" -- that root was OK. You obviously know a lot more about xdm than I do, but I went on a hunch that his xdm setup was fine.

(?)As I've said before my practical grasp of X is pretty weak -- but I do understand the concept of a 'session manager.' Most X clients in your start up script are started in the background (with trailing ampersand in shell script syntax). However one (usually the last item executed by the script) must be started in the "foreground."

(!)Right. (The very fact that you know the difference between an X client and an X server means you know something.) :) That's what I meant by "usual `logout' action." Most people either logout of a special xterm or exit the window manager.

(?)Can you clarify the differences between ~/.xsession and ~/.Xclients (mine are just symlinked together)?

(!)I honestly have no idea. That's pretty strange. Usually, I link .xsession and .xinitrc together. That way, X looks the same whether I use xdm or startx.


Copyright © 1998, James T. Dennis
Published in Linux Gazette Issue 28 May 1998




BigBen: Network Monitor Utility

By Cesare Pizzi


BigBen is a collection of three Perl scripts useful for monitoring a Unix network. Development is not complete, and there are a lot of things to add and improve, but I think the skeleton of the application is working quite well. The program was built on a Linux box, but I think it will also run in other Unix environments with a Perl interpreter.

BigBen is made up of three scripts:
  • LittleBen: the client daemon, which monitors the processes on each system to be checked
  • BigBen: the server daemon, which collects the data sent out by the clients
  • Weber: a daemon that turns the collected data into HTML pages

As you can see, the use of this application is quite simple. All three scripts run as daemons. BigBen and Weber run on the same system, and the LittleBen client can be installed on several systems (all the systems you need to check).

The check logic of Weber is the following:

To avoid problems due to timing between the client and the server (both wait for a while before checking), it is better to start the client first.

Now, we can analyze the scripts in detail.


LittleBen


LittleBen is the client application: it runs as a daemon and is configured through a configuration file (see the LittleBen.conf sample file). In this file you list the processes you want to monitor and the Min and Max values you want for each; if these values go outside the limits you set, the client sends an ERROR or a WARNING to the server. See the README file for a detailed description and an example of this file.


BigBen


This is the server script. Once installed, it listens on a port (default 4455) for the data sent out by the clients. When a packet is received, it saves the data in the proper directory, where Weber will pick it up. See the README file for a detailed description of the available options.


Weber


Weber reads the data saved by BigBen and creates a couple of HTML pages, so you can check the data with your HTML browser. The HTML files it creates reload themselves automatically, so you always see the latest data. Weber and BigBen run on the same system.

Weber needs a configuration file. See the README file for more information about the command-line options and the configuration file. The scripts and README are in a gzipped tar file for download purposes.

*** Please report any bug, enhancement request, comment to the author:
*** Cesare Pizzi
*** cpizzi@bigfoot.com
*** www.geocities.com/SiliconValley/Pines/8305


Copyright © 1998, Cesare Pizzi
Published in Issue 28 of Linux Gazette, May 1998




Building an Audio CD Player, Part 1

By Michael Hamilton


This article outlines my recent experiences writing Jcd, a Java CD player. It is aimed at people who have browsed through an introductory Java or C++ text and feel they know their way around either language. While reading the article, I think it would be a good idea to have a Java textbook on hand to fill in any items I might gloss over.

I have been experimenting with Java in order to evaluate its usefulness as a general purpose language. One of the things I've written is a GUI CD player with a track-title database. I chose to write a CD player because it requires the use of a large part of the language and its associated libraries: graphical user interfaces, threads, file I/O, sockets, text parsing, image manipulation, data entry forms and native C calls to interface to the kernel's CD drivers. Since this is one of my initial attempts at using Java, you shouldn't assume everything you read below is authoritative or definitive--I'm just reporting what worked for me.


Features of Jcd


Jcd has the following features:

The finished system is shown here:


Java Application Programming on Linux


Jcd is a Java application, not a Java applet. That is, it can't be run in the secure sand box of a Web browser. This is because Jcd reads and writes files in your local file system, and because it requires a native-machine-language driver specific to your operating system and processor architecture. In the future it may be possible to make Jcd an applet, as Sun is working on standards for controlled access to local files and for portable access to hardware such as CD audio drives. Until then, Jcd must be run in a Java run-time environment, such as that provided by Sun's Java Development Kit. I developed Jcd using the Linux Java Porting Project's port of the Java Development Kit 1.0.2 and 1.1.1. You can find out how to obtain the JDK for Linux by pointing your browser at http://java.blackdown.org/.

This article will lead you through the code of a cut-down version of Jcd, much as it appeared in the early stages of its development. At the end of the article, we will have a working command-line-driven player that can be built upon to create a GUI player such as Jcd.

Linux supports a set of Sun ioctl commands--device I/O control calls to the kernel--for controlling audio CD operations. The kernel's CD-ROM ioctl interface is defined in the /usr/include/linux/cdrom.h file. This interface provides a set of calls for functions such as play, stop, pause, cd-info and others. The Java interface described below closely parallels the functions the kernel provides.

Listing 1 shows a test rig for testing my Java interface. Ignoring the details for the moment, you can see in lines 26 through 71 that I have a loop reading from cmd_stream. On lines 31 through 65, I check the command read for a keyword. If I match a keyword, I call the appropriate cd_drive operation.

At line 1 I declare that Jcd.java is part of the package Jcd; that is, all classes defined in Jcd.java are part of the package Jcd. This serves to keep the Jcd classes together and grants them mutual access to each other's data and methods, except where the data and methods are explicitly declared private. All classes outside the package can only access the data and methods that are explicitly declared public. The Java run-time environment locates the Jcd package by looking for a Jcd subdirectory in the directories listed in the CLASSPATH environment variable. While developing Jcd, I put my working directory . (dot) in the CLASSPATH and created a dummy Jcd subdirectory by using a symbolic link pointing back to my working directory:

ln -s . Jcd
Later, when I installed the finished Jcd, I put the Jcd package in the /usr/local/lib/jcd/Jcd directory and added that directory to my CLASSPATH.

On lines 8 and 10 of Listing 1, I import the standard Java I/O classes--a wild card is used to get them all--and I import the Jcd.Drive class that calls the kernel interface. When referring to the Drive class I have used the package qualifier Jcd, as well as the class name Drive.

At line 14 I declare the main method. The main method is static, which makes it a class method, so it doesn't belong to any particular Jcd instance; instead, it belongs to the class as a whole. This is the method that will be invoked when I run Jcd by typing:

java Jcd.Jcd
The Java loader looks for a static method called main in the class you tell it to run. One implication of this is that every class you write can have its own test rig, created by including a static main method in its implementation.
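For instance, a small utility class can carry its own test rig in a static main. This is an illustrative sketch only--the class and method names here are hypothetical, not part of Jcd:

```java
// A small class with its own test rig in a static main method.
// Names are illustrative; this is not part of Jcd itself.
class TrackTime {
    // Format a track length given in seconds as "m:ss".
    static String format(int seconds) {
        int m = seconds / 60, s = seconds % 60;
        return m + ":" + (s < 10 ? "0" : "") + s;
    }

    // Running "java TrackTime" exercises this class on its own.
    public static void main(String[] args) {
        System.out.println(format(125));  // prints "2:05"
    }
}
```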

At lines 16 through 18, I declare cd_drive and assign it to a new instance of the Drive object. I pass both the drive device name, /dev/cdrom, and the location of the compiled C shared object module, Jcd_Drive.so, to the object constructor so that the object can initialize itself appropriately.

At line 19 I wrap a DataInputStream object around the System.in standard input object. DataInputStream is a filter that allows me to read a byte stream as strings terminated by newlines. The idea of layering filters over data streams to add new processing functionality is very prominent in the Java I/O classes.
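The same layering idea can be seen in a small self-contained sketch, here using an in-memory byte stream rather than System.in (the class is illustrative, not part of Jcd):

```java
import java.io.*;

// Demonstrates the layered-filter idea: a DataInputStream is wrapped
// around a raw byte stream to read typed data back out of it.
// This is an illustrative sketch, not part of Jcd.
class LayerDemo {
    static int roundTrip(int value) {
        try {
            // Write an int into an in-memory byte stream...
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(value);
            out.flush();
            // ...then layer a DataInputStream over the raw bytes to read it back.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return in.readInt();
        } catch (IOException e) {
            throw new RuntimeException(e.toString());
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(75));  // prints 75
    }
}
```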

The only remaining unexplained pieces of code in Listing 1 are the try-catch statements that surround most of the code. In Java, errors are signalled by throwing exceptions which, if un-caught, cause the program to issue a diagnostic error. These ``Throwables'' are divided into two sub-classes: Errors, major problems that will probably result in a program crash (such as running out of memory); and Exceptions, problems that you are expected to be able to handle inside the program (such as reading past end-of-file). Any action that can result in an Exception has to be handled in one of two ways: the method in which it can occur must either have a try-catch statement that handles the exception, or the method must declare that it can cause the exception, which passes the buck to callers of the method. Because this is enforced by the compiler, it's a very nice mechanism for ensuring that exceptions do not go unconsidered by the programmer.

In Listing 1, the System.in class can throw an IOException, such as end-of-file. The Jcd main() method either has to catch each IOException or pass it on. In this case, my empty catch body will effectively ignore I/O errors. After catching an exception, execution continues from the catch statement. The cd_drive object that the main method uses to control the CD-ROM can also return a DriveException. The main method has to catch these too--I just print the reason for the exception and let the program continue.
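The catch-or-declare rule can be seen in a compact sketch. The exception and method names here are hypothetical, chosen only to echo the CD-player theme:

```java
// Sketch of Java's catch-or-declare rule for checked exceptions.
// EmptyDriveException and the methods here are hypothetical examples,
// not actual Jcd code.
class EmptyDriveException extends Exception {
    EmptyDriveException(String msg) { super(msg); }
}

class CatchDemo {
    // This method declares the exception, passing the buck to its callers.
    static int firstTrack(boolean loaded) throws EmptyDriveException {
        if (!loaded)
            throw new EmptyDriveException("no disc in drive");
        return 1;
    }

    // This caller handles it with try-catch; execution continues
    // from the catch statement after the exception is caught.
    static String describe(boolean loaded) {
        try {
            return "first track: " + firstTrack(loaded);
        } catch (EmptyDriveException e) {
            return "error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(false));  // prints "error: no disc in drive"
    }
}
```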


Jcd Design Details


Now look at Drive.java (Listing 2). This file declares the Java to C interface as a set of body-less native methods on lines 69 through 138. These native methods are implemented in the Jcd_Drive_ix86-Linux.c C module (Listing 4). The native methods in Listing 2 are augmented by some Java methods that add additional operations to make life simpler for the users of the class--for example on lines 139 to 152, there are several variations of the play() method to simplify the most common types of requests.

On lines 13 through 35 of Listing 2 the Drive class defines static class constants for all the instances of the Drive object to share. The keyword final means a value is constant. There seems to be a convention amongst Java programmers for representing constants in uppercase. All the constants in the Drive class are related to the kernel interface. For example, the frames-per-second value defines the address unit used by CD-ROM drives; the lead-out track number defined on line 18 is a dummy track that contains info on the total playing time of the CD.

Lines 35 through 51 define data unique to each Drive object that is created. The C module that carries out the kernel calls will access and update some of this data.

Most of the methods can throw a DriveException. DriveException is defined on line 157, and below it a series of sub-classes define the full range of exceptions that a DriveException may actually represent. The bodies of these exception classes are almost empty because the actual processing is passed on to the super (parent) class to handle, which is ultimately the standard Java library's Exception class. The super-class constructors accept calls with and without an explanation string, so I've defined both. It would be nice if the official Java language supported default values for arguments, so that the excessive repetition of nearly identical constructors could be avoided (one of the features of the Python language that I miss the most).
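The pattern of nearly empty sub-classes delegating to super looks roughly like this. The class names mirror the ones the article mentions, but the bodies are a reconstruction, not the actual Jcd source:

```java
// Sketch of an exception hierarchy in the style described above:
// each subclass body is nearly empty, delegating to its parent,
// which ultimately delegates to java.lang.Exception.
// Reconstructed for illustration, not the actual Jcd source.
class DriveException extends Exception {
    DriveException() { super(); }
    DriveException(String msg) { super(msg); }
}

class DriveEmptyException extends DriveException {
    DriveEmptyException() { super(); }
    DriveEmptyException(String msg) { super(msg); }
}
```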

All of my native methods are declared to be ``synchronized''. Making the methods synchronized gives each method exclusive access to the Drive when it is called. This prevents a multi-threaded application from issuing multiple conflicting (or overlapping) calls to the kernel. Synchronized methods carry more overhead than non-synchronized ones, but in this case we are unlikely to request more than a few CD operations per second, so we needn't worry about the overhead.
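The effect of synchronized can be sketched with a small thread-safety example (illustrative only, not Jcd code): two threads hammer a shared counter, and because the updating method is synchronized no increments are lost.

```java
// Two threads bump a shared counter; because inc() is synchronized,
// only one thread at a time can be inside it, so no updates are lost.
// An illustrative sketch, not Jcd code.
class SyncDemo {
    private int count = 0;

    synchronized void inc() { count++; }

    static int run() {
        final SyncDemo d = new SyncDemo();
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) d.inc();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return d.count;
    }

    public static void main(String[] args) {
        System.out.println(run());  // prints 200000
    }
}
```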

Having defined the interface, I used the javah utility from Sun's Java Development Kit to help me generate the code for the C module. I used javah to generate the C header file, Jcd_Drive.h, and the C stubs file, Jcd_Drive.c.

javah Jcd.Drive
javah -stubs Jcd.Drive

The Jcd_Drive.h file contains data definitions and function prototypes that define the native C interface. The generated header file is a little messy, so a more readable version of it is presented in Listing 3. It contains a define for each of the final static data items in the Drive class. Note that javah has used the package name (Jcd) and class name (Drive) to form the Jcd_Drive prefix for the native data and function names.

The Jcd_Drive.c that javah generates provides code that handles the messy details of taking the data Java passes and making it more palatable before passing it on to the actual C routines. Aside from compiling this file and including its object with my own code, I pretty much ignore its existence. All I have to do is implement the interface defined in Jcd_Drive.h; I don't need to know which part Jcd_Drive.c plays in the process.


Integrating Java with C


For me, the ease with which Java and C can be integrated is one of Java's biggest selling points. I know there's much talk about sticking to pure Java, but I'm interested in using Java as a general purpose language. I'm sure I'll still need to fall back on C both for reasons of performance, and in order to integrate Java into existing systems. If I can write 90% of my systems code in Java and 10% in a well-defined C module, that may still make for good portability. For example, after writing a CD-ROM module in C for Linux, it only took me a few hours to create another C module for SGI IRIX. The Java code in my final player interrogates its environment to find out which operating system and architecture it's running on and then dynamically loads the appropriate native shared object module.

On line 16 of Listing 3, the Jcd_Drive.h file defines the ClassJcd_Drive structure that the C run-time environment and Java run-time environments can use to gain mutual access to data belonging to Drive objects. The raw data structure has to be augmented with some Java-environment bookkeeping by the HandleTo(Jcd_Drive) macro call which creates a new structure called HJcd_Drive on line 26. The C functions that make up the native interface are always passed HJcd_Drive as their first argument. The prototypes for these functions are listed on lines 28 through 45.

Listing 4 details Jcd_Drive_ix86-Linux.c, the Linux Intel version of the C module. I've used a methodical architecture/OS naming convention based on properties I can retrieve from the Java runtime environment. This allows Jcd to select and locate the appropriate native module for each platform at runtime--for the cut down version of Jcd, I've just hard-coded the module (see line 17 of Listing 1).

Most of the code in Listing 4 is concerned with making the kernel ioctl calls. Before discussing these calls, I'll get the Java to C native call side of the equation squared away. Looking at a simple case first: on lines 181 through 189 of Listing 4, the Jcd_Drive_status() C function implements the Jcd native status() method (from Listing 2 line 122). When called, the status() function is passed the HJcd_Drive struct and can access the ClassJcd_Drive it contains by using the unhand() function. It first checks a C file descriptor to see if the drive has previously been opened successfully. If the file descriptor is -1, the drive isn't currently assigned, so the function just returns the last known status (which was stored in the ClassJcd_Drive structure). Otherwise, if the file descriptor is valid, the new_status() function is called to retrieve a new status value into the ClassJcd_Drive structure.

A slightly more complex case is seen on lines 217 to 234, where the Jcd_Drive_trackAddress() function implements the Jcd native trackAddress() method. The trackAddress() method returns the address of a track as the number of 75ths of a second from the start of the CD. The function is passed two parameters: the HJcd_Drive structure that contains the Java object's data and the track number in the form of a long integer. The integer is declared as Java_Int, but as you can see on line 39, this is just my way of getting around the differences between the Kaffe and Sun Java compilers--it appears that native call implementations can vary a bit between compilers, and hopefully this is something that will be standardized. In fact, under JDK 1.1.1, my Java_Int should be defined as int32_t.

The trackAddress function sets up a cdrom_tocentry structure (defined in /usr/include/linux/cdrom.h) for passing to a kernel ioctl call. In Unix/Linux, ioctl calls provide access to a variety of kernel services related to devices. The device an ioctl works on is determined by the file descriptor passed as its first parameter--in Unix, all devices can usually be accessed as special files resident in /dev. The kind of service an ioctl performs is determined by its second parameter; in our case we are doing a CDROMREADTOCENTRY, i.e., CD-ROM read table-of-contents entry. The third parameter to an ioctl is usually the address of some structure specific to that particular ioctl call. In this case the third parameter, the cdrom_tocentry structure, is initialized to contain the track number, and the kernel will copy the result into fields within the same structure.
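The ioctl pattern just described can be sketched in isolation. This is a hedged sketch, not the article's Listing 4: the function and parameter names are mine, and the /dev/cdrom path mentioned in the comment is an assumption.

```c
/* A sketch (not the article's Listing 4) of the ioctl pattern described
 * above: ask the kernel CD-ROM driver for one table-of-contents entry
 * via CDROMREADTOCENTRY. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/cdrom.h>

/* fd is an open descriptor on a CD device such as /dev/cdrom (the path
 * is an assumption). Returns 0 on success, or -1 on failure with errno
 * set by the kernel -- the point at which the real native module would
 * raise a Java exception via SignalError. */
int read_toc_entry(int fd, int track, struct cdrom_tocentry *entry)
{
    memset(entry, 0, sizeof(*entry));
    entry->cdte_track  = track;      /* data in: which track we want */
    entry->cdte_format = CDROM_MSF;  /* ask for minute-second-frame form */
    /* On success the kernel copies the track's address back into
     * entry->cdte_addr.msf before ioctl() returns. */
    return ioctl(fd, CDROMREADTOCENTRY, entry);
}
```

Note how the third ioctl parameter carries data in both directions: we fill in the track number, and the kernel fills in the address.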

If an ioctl call goes wrong, perhaps due to a drive fault or to the drive being empty, the ioctl call returns -1. If this happens, we need to raise a Java exception in the calling Java module. Line 228 accomplishes this by calling SignalError and passing it the text name of the exception as the second parameter--in this case, one of the exceptions declared in Listing 2. The first parameter to SignalError is used to control the environment in which the error handling will occur (I left it to default). The last parameter is any extra text explanation that we may wish to provide--in this case I'm simply translating the C error number to a text string. It's important to note that SignalError sets up an exception that will be processed when the C error routine returns. On returning to the calling Java routine, only the last of any SignalError calls will have any effect, i.e., you can't communicate multiple errors via multiple SignalError calls in one C call.

If the ioctl call succeeds, we take the address the kernel returned in tocentry.cdte_addr.msf and translate it from minute-second frame to an integer number of 75ths of a second. This value is returned as the result of the native method call on line 231.
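The minute-second-frame translation is just arithmetic: a CD frame is 1/75th of a second, so the conversion looks like this (a minimal helper of my own naming, not Jcd's exact code):

```c
/* Convert a CD address in minute-second-frame form to a count of
 * frames (75ths of a second) from the start of the disc. */
int msf_to_frames(int minute, int second, int frame)
{
    return (minute * 60 + second) * 75 + frame;
}
```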

As we have seen above, passing numerical data backward and forward between Java and C is pretty easy. Character strings are almost as easy, but do require conversion. These two functions do the necessary conversions:

Hjava_lang_String *makeJavaString(char *from, int len);
char *javaString2CString(Hjava_lang_String *from, char *to, int max_len);
The function Jcd_Drive_cddbID() on lines 263 to 279 computes the CDDB ID for a CD-ROM and uses makeJavaString() to convert it to a Java string before returning it (CDDB is the database format used by the Xmcd CD player). On lines 64 and 65 the take_player() function uses javaString2CString() to make a C version of the Java string containing the device name of the CD-ROM.


Practical Design Issues


If you look at almost any C routine in Listing 4, you will see that the C code is constantly checking things like whether the drive is open or not. I'm trying to avoid monopolizing the drive. This is especially important on ejecting the drive tray. When the user uses Jcd to eject the tray, I relinquish the drive by closing it and won't access it again until the user issues another Jcd request. This allows the user to use the drive for other purposes without leaving Jcd. It also prevents a problem on my system--if I keep polling the drive status after an eject, the drive will immediately close again. There's quite a bit of code that attempts to tip-toe around issues such as this one.

I also found that with my particular CD-ROM, if I issue an inappropriate pause or resume (for example, pause when the drive isn't playing anything), the kernel driver may become confused, and further ioctl calls to the drive will hang indefinitely. Once this happens the only way to get the drive to respond is to reboot. The pause/resume code on Lines 359 to 386 is careful to check before proceeding.

I also found that some kernel CD-ROM drivers won't respond to a play command while they are already playing. That's why the STOP_PLAY flag is defined on line 34 in Listing 2.

You would think that CDs would include an ID unique to the album's artist and title, and perhaps even artist and track information--well, apparently, this isn't so. As a result, the writers of CD players such as Xmcd and my own Jcd use a hash-key ID computed from the lengths of CDs and the lengths of their tracks. The hash key is used to look up a database of CD artists, titles and track titles. There is a problem with using track lengths to create an ID: for a given artist/title album there may be many pressings (if that's the right word) manufactured in different countries and states, and the different pressings may have slightly different lead-in/lead-out times and time intervals between tracks. The ID is constructed so that all approximate matches can be identified--if there isn't a unique match, the GUI interface of Jcd will present a list of possibilities.
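For the curious, the hash key in question can be sketched concretely. This is the classic CDDB disc-ID formula as published with Xmcd, not necessarily Jcd's exact code; I've simplified the interface so track start times are passed in as whole seconds.

```c
/* Hedged sketch of the CDDB-style disc ID. The classic formula packs
 * 8 bits of checksum, 16 bits of playing time and 8 bits of track
 * count into one 32-bit value. */

/* Sum of the decimal digits of n -- the checksum helper CDDB uses. */
static int cddb_sum(int n)
{
    int ret = 0;
    while (n > 0) {
        ret += n % 10;
        n /= 10;
    }
    return ret;
}

/* starts[0..ntracks-1] are track start times in seconds; leadout is
 * the start of the lead-out area in seconds. */
unsigned int cddb_discid(const int *starts, int ntracks, int leadout)
{
    int i, n = 0;
    int t = leadout - starts[0];   /* total playing time in seconds */

    for (i = 0; i < ntracks; i++)
        n += cddb_sum(starts[i]);

    return ((unsigned int)(n % 0xff) << 24)
         | ((unsigned int)t << 8)
         | (unsigned int)ntracks;
}
```

Because only the low-order bits of the checksum and timing survive, two pressings with slightly different gaps hash to nearby (approximately matching) IDs, which is what makes the "list of possibilities" lookup workable.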


Making the Makefile


The final thing I'd like to discuss is the Makefile that builds this lite version of Jcd. Inter-module/inter-class dependencies in Java tend to hinge on whether a class has changed its interface; just because a source file has been modified, it does not necessarily follow that the interfaces defined within it have changed. If you construct a Makefile using file-based inter-dependencies, you are going to do a lot of needless re-compiles. I don't have a solution to this problem--maybe a new kind of Java-aware build tool is necessary. This aside, the Makefile in Listing 5 does the job. The CFLAGS option -fPIC is very important; it makes the gcc compiler generate position-independent code suitable for loading as a shared library. The LDFLAGS option -shared is obvious enough--it tells the loader to create a shared object. The LDFLAGS option -Wl,-soname,Jcd_Drive passes the -soname option to the linker so that the shared object will be named Jcd_Drive; otherwise, the linker will include the build path in the name, and we may get a mismatch when loading a module called Jcd_Drive. The Makefile also adds a new default rule: a .class file depends on its corresponding .java file. Finally, the Makefile installs the native shared library in a directory structure that supports multiple architectures and operating systems.
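The flags above can be sketched as a Makefile fragment. This is illustrative only, a guess at the shape of Listing 5 rather than the listing itself; the target and file names are assumptions.

```make
# Illustrative fragment -- names are guesses, not the article's Listing 5.
CFLAGS  = -fPIC                          # position-independent code for a .so
LDFLAGS = -shared -Wl,-soname,Jcd_Drive  # name the object itself, not its path

Jcd_Drive.so: Jcd_Drive_ix86-Linux.o
	gcc $(LDFLAGS) -o $@ $^

# The new default rule: a .class file depends on its .java source.
%.class: %.java
	javac $<
```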

That's about all you need to know to create a simple CD player. My next article will examine the Abstract Windowing Toolkit in order to add a GUI and multi-threading in order to add programmed play.


Resources


ftp://sunsite.unc.edu/pub/Linux/apps/sound/cdrom/: The sunsite directory where the latest Jcd can be found. Currently this would be jcd-1.1.tar.gz

http://www.actrix.gen.nz/users/michael/: Patches or news concerning Jcd can be found on my home page.

http://www.blackdown.org/: The Linux Java site.

Java in a Nutshell, David Flanagan, O'Reilly & Associates, 1996. Very nice introduction, with enough detail to build things like Jcd if you team it up with Sun's on-line documentation. By the time you read this article, the second edition will be out--you can use O'Reilly's web page, http://www.ora.com/, to check on its status.

The Java Language Specification, Gosling, Joy, Steele, Addison-Wesley 1996. Good reference on the language and class libraries but doesn't cover the Abstract Windowing Toolkit.

InfoMagic Java CD-ROM, Spring 1996: I used this CD-ROM to gain access to Sun's HTML documentation via my browser. This was my main source of AWT documentation. You can't use the JDK on this CD, as it is out of date--but the documentation was still useful.

The Java Class Libraries, Chan and Lee, Addison-Wesley, 1997. This book covers the AWT, but also repeats much of what I found in the previous two. This is the heaviest book I own--I think I would prefer a lighter AWT only reference. On brief inspection, Java AWT Reference, John Zukowski, O'Reilly & Associates, 1997, looks like a good possibility, and it covers the latest AWT too.

Advanced Programming in the UNIX Environment, W. Richard Stevens. Great general reference on Unix programming and provides a good background for ioctl basics and other stuff.

http://sunsite.unc.edu/~cddb/xmcd/ : The Xmcd and CDDB home page. Ti Kan and Steve Scherf developed a Motif CD player, and its CDDB database format has become a popular standard for free and shareware CD players. They've defined a protocol for remote lookups via TCP sockets.

All listings referred to in this article are available by anonymous download in the file hamilton.tgz.


Copyright © 1998, Michael Hamilton
Published in Issue 28 of Linux Gazette, May 1998






COMDEX/Spring 1998

By Jon "maddog" Hall


COMDEX/Spring 1998 Photo Album


COMDEX in Chicago (April 20-23) was a titanic Linux hit. Vendors around us were astonished by the attention and business we drew to our booths.

The Linux Pavilion had a huge sign overhead (thanks to Carlie Fairchild of Linux Journal and Andy Wahtera, our new ZD/COMDEX representative), and multiple large floor signs guided people entering COMDEX to the Linux International Pavilion. We had a page on the COMDEX Web site, mention in the Show Daily and other marketing ``aids''.

Linux International vendors with booths in the Pavilion were Caldera, S.u.S.E., InfoMagic Inc., Linux Journal and Red Hat Software, Inc., a small number of vendors, but big in heart.

While smaller in attendance than its Las Vegas cousin, COMDEX in Chicago seemed to have a lot more end-user customers than the Las Vegas show--not really surprising when you consider Chicago is a cultural, economic and manufacturing center. While Mr. Bill was still trying to boot Windows 98 and have it stay up, the Linux International Pavilion was singing a sweet song. Some people thought we had set a new world's record for ``longest line at COMDEX''--the line where people waited to pick up a free Linux CD-ROM.

I accompanied Red Hat's ``booth gang'', Anna, Terry and Mike, to visit the Argonne National Laboratory and Western Suburban Chicago Linux Users Group (which thankfully is abbreviated AALUG and has its web site at http://hydra.pns.anl.gov/lug/lug-main.html). The meeting was actually held at the Fermi National Lab, which recently announced that Linux will be officially supported at their laboratory and with their applications. Donnie Barnes flew in from Durham, North Carolina to give a talk on Red Hat 5.0, and to help give out Red Hat ``souvenirs''. I gave a brief talk at the end of Donnie's epic speech.

After the meeting ended, Dr. G. P. Yeh, a physicist in the computing division, invited us on a tour to see a particle-collider detector. Fermi is expanding their collider, and the new one is expected to produce more than 20 times the data of its predecessor. To expand the computing power to analyze and store this data in real time with traditional methods would have been very costly, so now Fermi is building a 1000-node Beowulf system to detect quarks (and other little things). Dr. Yeh told us that without Linux and the concept of Beowulf systems, the costs of supplying computer power for the next generation of collider would be many times what they are now forecasting.

Our sincere thanks to Dan Yocum for setting up the meeting at Fermi and advertising it, and to Dr. Yeh for showing us the collider.

On Wednesday S.u.S.E. gave a talk at the Chicagoland Linux Users Group, and on Thursday I gave a two-hour ``ramble'' to the same group after COMDEX was over. Then, tired and thirsty, most people retired to the Goose Island Brewpub.

The Chicagoland Linux Users Group (http://clug.cc.uic.edu/) helped to staff the Linux International booth, hand out flyers and line up user group meetings. So ``thank you'' to Clyde Reichie, Don Weimann, Simon Epsteyn, William Golembo, Gennagy ``Ugean'' Polishchuk, Long Huynh, Perry Mages, Viktorie Navratilova, Ben Galliart, Richard Hinton, and especially to Dave Blondell, the president, who organized the group and the schedules.

Linux International would like to encourage other Linux vendors to join us in the next Linux Pavilion at COMDEX, whether it be in Las Vegas or Chicago. We are definitely looking forward to the next COMDEX in the windy city. For information on membership or other information about Linux International, visit our web site, http://www.li.org/.


Copyright © 1998, Jon "maddog" Hall
Published in Issue 28 of Linux Gazette, May 1998






Home Networking With Linux

By Glen Journeay


  1. Home Networking Arrives
  2. Picking a Network Solution
  3. Linux Networking in Action
  4. So, What's the Catch?
  5. Just Do It!
  6. The Future


Home Networking Arrives


It seems inevitable that the norm for home PCs will become not just one, but a few. Often we acquire more than one computer at home when we upgrade from our existing system, or we get one for the kids to use, or a spouse brings one home for work. Somehow, we end up with a bunch.

Dealing with the problems that arise with two or more computers is our first exposure to being a network administrator. Let's face it, as soon as you have more than one, you're trying to move or share information between them. The kids want to download a game from the Internet from one PC and install it on another. You brought home a file from work, only to realize that you don't have compatible software at home. You're constantly moving files on disk over the sneaker net to the PC downstairs with the good printer.

The best solution to these problems, a network, is generally staring us in the face at work; we just don't consider it economical or practical for home use. But, like the idea of having more than one TV twenty years ago, the day when home networks and multiple computers in the household are commonplace is rapidly approaching. Even now, those among us with more money (lots more, sometimes) are exploring totally networked and interactive houses. In new houses, RJ-45 jacks for 10BaseT and 100Mbps 100BaseT Ethernet will become as common as the phone jacks they look like.

There are drawbacks to having a network at home. First off, we don't have a whole Information Systems (IS) department at home to support us. Also, the networking hardware and software can be expensive. So the advantages of networking have to outweigh the setup and maintenance costs.


Picking A Network Solution


Let's examine some of the networking solutions appropriate for home networks. It turns out that as the PC industry has matured, the variety of networking options has increased. These range from simple plug-n-go printer-sharing networks all the way to firewall-protected, server-supported intranets. Normally, cost and administration complexity rise along with the power and functionality of the network, and as always, the proper way to choose the network you need is to determine the functions you require. Here's a matrix of normal network functions and protocols across the common home operating systems and two less common networking solutions, Linux and Microsoft NT:

Feature                 Linux  Unix  NT  Win95  Mac  OS/2
Printer services          x     x    x     x     x    x
File server/sharing       x     *    *     *     *    *
Mail server               x     *    *     *     -    -
Domain Name Server        x     x    *     *     *    *
Web Server                x     x    *     *     *    *
Firewall                  x     *    *     *     -    -
Routing                   x     x    x     -     -    -
Gateway                   x     x    x     -     -    -
Internet                  x     x    x     x     x    x
Ethernet                  x     x    x     x     x    x
Token Ring                x     *    *     *     *    *
Arcnet                    x     *    *     *     *    *
Frame Relay               x     *    *     -     -    -
ISDN                      x     *    *     *     -    -
PPP                       x     x    x     x     x    x
SLIP                      x     x    x     x     x    x
TCP/IP                    x     x    x     x     x    x
X.25                      x     *    *     *     *    *
IPX (Novell Netware)      x     x    x     x     *    *
SMB (Windows network)     x     x    x     x     *    *
Appletalk                 x     *    *     *     x    *
NFS                       x     x    *     *     *    *


x    Supported in system as supplied
*    Support available as an extra
-    Support not available
Several home operating systems have been left off the feature-comparison chart; most of them have been superseded by their manufacturers. If your favorite is missing, our apologies, but take it up with the OEM, since even they are urging you to switch. Also, several flavors of Unix have been covered under the single Unix heading, with one Unix variety, Linux, set aside.

Linux, since its inception in 1991, has been different from the other Unixes in several important ways. Linux is a Unix clone written from scratch by Linus Torvalds with assistance from a loosely-knit team of developers on the Internet. Linux is, and always will be, free, with very few restrictions (see the GNU General Public License). Originally based on the Intel 386, it has evolved into a full-fledged, high-performance Unix, now available on more computer architectures than any other single operating system in the world. It should be noted that Linux is not the only freely distributed Unix variety; it just seems to be the best supported one at this time. Good support is available from the developers on the Internet, along with very extensive documentation in the form of HOWTO instructions, FAQs and Unix man pages, all freely available over the Internet. Linux distributions (the operating system plus all the other software needed for a fully functional system) are available for less than $30 on a CD-ROM, or for free when downloaded from FTP sites on the Internet. A Linux network server for home use requires no more than an old 386 to provide excellent performance as a file server, print server, mail server, or network gateway and router. Linux is also very robust: many Linux boxes around the world have not crashed or been rebooted for over a year. I do not think any Windows or Macintosh product can make this claim.

Windows NT, the networking-oriented operating system offered by Microsoft, has been available since 1993. It has all of the above features available, for a price: a system can easily cost over $1000 to obtain almost all of the features listed above. It has good support available, and as it begins to replace Unix as a major operating system on the Internet, it will mature into a powerful operating system available on many different computer architectures. It currently cannot perform all of the networking functions that Unix or Linux offer, but it will soon. Undoubtedly, NT, with the continued support of Microsoft, has a bright future.

Unix, of course, is a well-established networking powerhouse; in fact, Unix is the workhorse of the Internet. All of the protocols and services the Internet was originally based on were developed on Unix. Because of its maturity, Unix has already gone through the growing pains that now plague NT, such as security and crash problems. Despite years of predictions that Unix use will decline, it continues to increase. Until the advent of Linux, there was no inexpensive, powerful Unix available for the home; Unix operating systems with the features listed above generally cost over $2000. And even now, the relative complexity of Unix compared to most other operating systems discourages wide use outside of colleges and large businesses.

Windows 95, the Macintosh OS and OS/2 also provide limited networking out of the box, and with additional software can perform as printer servers, file servers, mail servers, name servers, firewalls and web servers. None of these operating systems was originally designed to support intensive networking services, but with the right software and hardware they can do a good job for small networks. These operating systems can be outfitted to perform almost all of the above functions for about $500, and the basic operating system is generally supplied with the PC. They are also very easy to set up and configure.

Picking a home networking solution at the present time depends largely on your networking requirements and budget. Obviously, most of us cannot afford to spend large amounts of money unless a home business is involved. Luckily, one of the most powerful solutions is also the cheapest. Linux offers all of the power of Unix, and as software installation programs become more sophisticated, it is also becoming easier (almost painless) to install and administer. Indeed, if you have the time, patience, disk space and an Internet link, you can download Linux from several FTP sites located around the world. With all this to offer, you might wonder why Linux is not used more often. Well, it is: Linux is now being used on over 8 million computers around the world, in over 40% of the world's Internet Service Providers (ISPs), by large corporations, and by US government agencies including NASA, which recently put an experiment run by a Linux computer on the Space Shuttle.


Linux Networking in Action


Assuming that you do decide to set up a home network server using Linux, the first step is to find PC hardware that can run the server. This should not present a problem, since Linux supports just about any PC configuration made within the last five years, and as stated above, an older 386 PC can easily support a home network of five or more PCs. Linux runs on any 386/486/586/Pentium-class processor (including AMD and Cyrix), DEC Alpha, PowerPC (MkLinux for Apple), M68xxx (Amiga, Atari), Sun SPARC and MIPS. While Linux can be run on a 386 with 4 MB of RAM and 20 MB of disk, such a system will be very slow; 8 MB of memory and 50 MB of disk space (200 MB is better) is a more realistic minimum for a useful system. If this describes the PC you've been using as a door stop for the last few years, dust it off, because it'll work fine. Many of the more popular Linux distributions, along with manuals, are now available at bookstores.

Don't worry if you have both Macintoshes and IBM PCs to support at home. Linux happily coexists with all of the most popular home operating systems. Linux knows the networking protocols and file systems native to a multitude of different operating systems on a network: MS-DOS, Windows for Workgroups, Win95, Win NT, Mac OS, OS/2, Novell, Amiga, VAX and Unix. Step-by-step instructions for implementing each kind of network support are provided in HOWTO documents available on the Internet.

You will need to decide on the hardware link to use at home. Ethernet is probably the least expensive, and even slow Ethernet (10Mbps) provides performance that exceeds most home requirements. Fast Ethernet (100Mbps) is rapidly becoming the business Ethernet standard and is still reasonably affordable for home use. Ethernet interface cards range in price from $20 for an 8-bit ISA bus 10Mbps card to $100 for a 100Mbps card. 10Base2 seems to be the home user's Ethernet cabling choice, but 10BaseT isn't far behind. With 10Base2, a coaxial cable is simply "daisy chained" between the computers on the network; the cable must never be broken, and 50-ohm terminators are required at both ends of the daisy chain. If you're having a home built, getting a 10BaseT cable network installed is easily done, and by choosing Category 5 cable you can ensure an easy upgrade path to Fast Ethernet; also, no terminators are required. A 10BaseT system of more than two interface cards will require an Ethernet hub, and be warned that Category 5 cable is not cheap ($0.40/foot), so you'll pay more for a 10BaseT installation, but such a system will last longer and is more convenient than 10Base2. Linux supports almost any network interface card, so your choice will probably depend more on the PCs at the other end of the Ethernet cable.

The Linux server could be the gateway to the Internet for all of the rest of the PCs (or whatever) on your home network. This will require a connection to a local or national ISP which can provide an IP address (preferably a static one) for the Linux gateway. The Internet link can be over a modem, ISDN, frame relay or ATM connection. Linux will also provide firewall services so that no one can invade your home network through the Internet connection. Using a process called IP masquerading, Linux will provide Internet access to all of the PCs on your home network with only one valid IP address and fully qualified domain name. This is done by making it appear that all of the TCP/IP traffic coming from your home network is coming from the Linux PC; when traffic comes back for the other machines, the Linux PC acts like the post office and sorts the network traffic back to the proper PC. A Linux machine can easily support two to five PCs surfing the Internet on a 28.8 modem link. A Linux computer can also provide mail server services, allowing you to create as many e-mail addresses as you require at home. All of the above can be done with ONLY ONE NORMAL PPP or SLIP link to an Internet Service Provider; there would be no extra charges for additional e-mail services or subnetworks, since all of these functions are performed at home by your Linux server. Are you tired of having only one PC on the Internet at home, or of paying for multiple Internet accounts? Then Linux is the answer.
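On the 2.0-series kernels current at this writing, the masquerading described above is switched on with a couple of ipfwadm rules along these lines. This is a hedged sketch in the style of the IP Masquerade mini-HOWTO, not a complete setup; the private subnet 192.168.1.0/24 is an assumption, and the kernel must have been built with forwarding and masquerading support.

```shell
# Sketch only: enable IP masquerading for a private home subnet.
ipfwadm -F -p deny                               # forward nothing by default
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0   # masquerade the home net
```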

The Linux PC will provide printer and file server functions. Samba, a free software package, supports the SMB protocol used by Win95 and Windows for Workgroups, and is used by many large companies on their networks. Once configured, it interfaces into the Windows operating system and works flawlessly, giving all of the network users individual and shared disk space and allowing a user to specify and use any printer on the Linux server (or any network printer, for that matter). Here again, as for all of Linux, the software is available for free over the Internet, complete with installation instructions and source code, and it is actively developed and maintained by the original developer. A similar package, Netatalk, provides the same kind of support for the AppleTalk protocol. A tape backup system can be installed in the Linux server to automatically back up your server.
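To give a feel for the Samba side, a minimal smb.conf might look like this. This is a hedged sketch, not a complete configuration; the workgroup name and spool path are illustrative assumptions, while [homes] and [printers] are Samba's own special section names.

```ini
[global]
   workgroup = HOME

; each user gets a private share mapped to their Linux home directory
[homes]
   read only = no

; expose the Linux print queues to the Windows machines
[printers]
   printable = yes
   path = /var/spool/samba
```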

Linux provides all of the network services traditionally associated with Unix. Mail service can be provided using sendmail or smail; any user on the Linux system can then have an e-mail address, and the account can be accessed from the network PCs using an e-mail client that speaks the POP3 protocol, such as Eudora or Pegasus. If your network needs Domain Name Service, the named program can provide it. If you support more than one type of network, or several small networks, the Linux server can act as a gateway to tie all of the subnets together. Kernel routing rules can also be used to let the server act as a firewall and control access to the internal network. NFS, the Network File System, allows computers to mount disk drives on remote machines; NFS is included with any Linux distribution, although most other operating systems require additional software to use it. The other standard applications used on TCP/IP networks are always available: FTP, telnet, remote shells, ping, etc.

A Linux server also provides a state-of-the-art web server and Java development system. Several web servers are available for Linux, with the Apache web server being the most popular; Apache is now the most popular web server on the Internet, running on over 45% of all web servers. The Java Development Kit has been ported from Sun Microsystems and provides a Java compiler for developing Java applications. In fact, support for Java can be compiled into the Linux operating system, allowing the server to run native Java code--a feature still only being discussed for most other operating systems.


So, What's the Catch?


By now most of you are wondering what the catch is with running Linux. There's no real catch. Linux has been developing and maturing at a rate that far exceeds that of such well supported systems as Windows NT. For example, Windows NT just announced support for up to eight CPUs in a multiprocessor system. Linux now supports the Intel SMP multiprocessor specification which provides support for up to twenty CPUs in a single system.

With this power comes complexity of installation and support; in fact, installation and maintenance have been the subject of many recent articles. However, recent Linux distributions have made the installation process much easier and provide tools to simplify system administration. Also helping the situation is that, unlike the relatively new Windows NT, Unix has been around for decades, so there is a larger base of trained system personnel for Unix systems than for Windows NT. The traditional support market was large installations at large companies in a workstation environment, but it is now shifting to support the use of Unix in smaller businesses. Since Unix is such a straightforward operating system to develop software in, many young and eager software developers and hobbyists are turning to Linux as an inexpensive development platform. These people are an excellent source of system administration knowledge, and most of them can be contacted for free advice in news groups on the Internet.

Linux, like other Unixes, has not previously been an operating system commonly used at home. Many of the applications developed for Unix systems are available on Linux; these traditionally came from the scientific workstation area, and their quality reflects this. Unfortunately, Linux suffers from a shortage of applications oriented toward the average computer consumer. So even though Linux makes an excellent server, and is also an excellent workstation running the latest release of the free graphical environment, the X Window System, it cannot run the latest version of Office 97 (although many Windows applications can be run using the WABI Windows emulator available from Caldera, or the Wine Windows emulator). Several software companies (and others) are now developing and selling consumer applications to fill this gap.


Just Do It


Linux is an operating system with minimal initial cost, yet powerful enough to easily handle a home network or small business. Especially nice is that the older PC hardware which is normally retired can be very effectively used as a network server for a small network. Local Linux users groups and computer stores provide excellent Linux support. The support available from the Internet is also excellent. It's always comforting to receive an e-mail from the original developer of some Linux software acknowledging that the bug you've reported has been fixed.


The Future


In many ways, the advent and growth of Linux has gone hand-in-hand with the growth of the Internet, and the work of Linus Torvalds and many hard working developers. Presently, Linux is an extremely capable operating system available at an unbelievable price. Development of the operating system to incorporate the latest hardware and software developments is continuing at a rapid pace. Although the future of the Internet, the Personal Computer, and the Network Computer is unknowable, it would seem that Linux will be part of that future.


Copyright © 1998, Glen Journeay
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


IPmasquerading with Roadrunner or Second Ethernet Card

By Mark Nielsen


This is for Red Hat 5.0 systems. You can probably do a similar thing for other linux systems. It is specifically configured for roadrunner in Columbus, Ohio; if you live somewhere else, you will have to change anything with "columbus" in the configuration to something else. So far, the only thing I see that you have to change is in /etc/resolv.conf, but I believe that gets rewritten every time you start rrdhcpcd.

If you manage to pull this off, you are almost one step away from being able to install a real network on the internet. Think about it: the only difference between what we are doing here and a real network connected to the internet is the fact that your local intranet doesn't have real valid ip addresses. If you had real valid ip addresses and your gateway address stayed the same (it changes every time you log into roadrunner), then you would have a real fixed network connected to the internet. Do this, and you can actually say you have real networking experience. This involves ethernet, DNS, ip forwarding, ip masquerading, ethernet configuration, and a lot of other stuff. Good luck!

I also wish to thank a bunch of people at The Ohio State University for their suggestions. I hope I was able to implement them correctly!

  1. References
  2. Install roadrunner first on a windows95 computer and get the configuration files. You will need them.
  3. Short way, if you have 2 3com 3c509 cards set at irq=10, address=300 and irq=11, address=310. If this works, great; otherwise do all the steps.
  4. Setup your ethernet cards.
  5. Setup your DNS on the server. Just use my examples; I have it set up for 9 computers if you need that many. Also, you must have the DNS rpm installed. Here is a dns caching server from my cheapbytes Redhat 5.0 cdrom.
  6. Setup your clients.
  7. Compile kernel for ip masquerading and ethernet card driver.
  8. Change various configuration files.
  9. Hook up your hub, gateway computer, and roadrunner.
  10. Connect your gateway computer to the internet with roadrunner.
  11. Setting roadrunner up as a service and making the first ethernet card use rrdhcpcd.
  12. Starting and stopping the roadrunner service and rrdhcpcd.
  13. Other things.
  14. Index of files. You should not have blank lines at the beginning of the files!

References

  1. RoadRunner Columbus, OH Infosite. You can get roadrunner stuff from here.
  2. HOWTO -- Compiling the Kernel for IP Masquerade Support
  3. Linux IP Masquerade Resource
  4. Linux IP Masquerading Web Site
  5. RoadRunner help webpage. This is where I got my rrclientd program.
  6. DNS. You almost don't need this if you use the /etc/hosts file on your linux computers; for Windoze95 and other operating systems, you will have to use DNS.
  7. Ethernet. Howto set up your ethernet cards.
  8. Firewall. If you want to setup your firewall, it is trivial with this setup.
  9. Networking-3. How to do networking in general.
  10. Bootprompt. Howto modify what the kernel does at boottime.
  11. Linux kernel
  12. DHCPcd mini howto. I found this useful in answering some questions.

Short way

THIS SHORT SECTION ONLY WORKS IF YOU MANAGE to get both ethernet cards detected with modules. It will probably not work for most people.

This will probably only work with RedHat 5.0, and you must not deviate from these steps. For some reason, the installation of redhat detected both ethernet cards properly, and the kernel has ip forwarding built in; it just needs to be enabled. Thus, installing your own network is just a bunch of file copying and a couple of commands and you are done. Be sure to install roadrunner with Windows95 first to get a configuration file.

1. Install both ethernet cards before you install RedHat 5.0. The two ethernet cards I used were 3com 3c509s: the first had irq=10, address=300 and the second had irq=11, address=310. Also, when you install RedHat 5.0, go ahead and install it for a LAN and have it autoprobe the ethernet cards. I cannot explain it, but when I installed redhat after installing these two ethernet cards, it got them both every time, when before it wouldn't. When it comes close to the end of the RedHat 5.0 installation, it will ask you to select which services you want started on bootup. I turn off sendmail and smb, because it hung on me at boot time. When you install RedHat 5.0, install everything; I did. You also shouldn't have to change /etc/lilo.conf.

2. You don't need to recompile the kernel. Just add this file to yours.
/etc/rc.d/rc.local
Also, copy the krb5.ini file from C:\NETMANAG on your Windows95 machine to
/etc/krb5.conf
Also, make a file called "/etc/rrpasswd" which only has one line on it which is the password for your roadrunner username.

3. Execute the commands
mkdir /etc/dhcpc
unset noglob

4. Copy these files to their exact location
/etc/sysconfig/network
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/rc.d/init.d/roadrunner
/etc/dhcpc/resolv.conf

/etc/named.conf
/var/named/10.0.0
/var/named/mark.local
/var/named/named.local

/etc/HOSTNAME
/etc/hosts

/root/Login2.bat
/root/email.pl
/root/cron2

5. Download rrclientd-1.3, gunzip and untar it, and copy all the files in rrclientd-1.3/bin to /sbin. For example, if you are in rrclientd-1.3/bin, execute "cp * /sbin". I had the binaries when I got mine, so hopefully you won't have to compile them; compiling against the new libraries Red Hat ships has been tricky at times.

You may have to alter the /etc/services file as it says in the README file for rrclientd-1.3.

6. Execute the commands
mv /etc/resolv.conf /etc/resolv.conf_old
ln -s /etc/dhcpc/resolv.conf /etc/resolv.conf
mv /sbin/dhcpcd /sbin/dhcpcd_old
mv /usr/sbin/dhcpcd /usr/sbin/dhcpcd_old
mv /usr/bin/rdate /usr/bin/rdate_old

ln -s /sbin/rdate /usr/bin/rdate
ln -s /sbin/rrdhcpcd /sbin/dhcpcd
ln -s /sbin/rrdhcpcd /usr/sbin/dhcpcd
ln -s /sbin/rrclientd /usr/sbin/rrclientd

cp /root/roadrunner /etc/rc.d/init.d ## adding roadrunner service
chkconfig --add roadrunner

crontab /root/cron2 ### resetting connection in a cron job

## Making it so we can execute the scripts with cron2
chmod 755 /root/Login2.bat /root/email.pl /etc/rc.d/init.d/roadrunner

### We only want root to see the password!
chmod 700 /etc/rrpasswd
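With so much moving and symlinking in step 6, it is easy to end up with a link pointing at the wrong place. Here is a quick sanity check; check_link is my own throwaway helper, not part of any package, and if readlink isn't on your system, "ls -l" shows the same information.

```shell
# check_link PATH TARGET -- report whether PATH is a symlink that
# points exactly at TARGET. Handy for verifying the resolv.conf and
# dhcpcd links created above.
check_link() {
  if [ "$(readlink "$1")" = "$2" ]; then
    echo "$1 ok"
  else
    echo "$1 WRONG"
  fi
}

check_link /etc/resolv.conf /etc/dhcpc/resolv.conf
check_link /sbin/dhcpcd /sbin/rrdhcpcd
```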

7. In /etc/rc.d/init.d/roadrunner, make sure you change the username to your username that you got for roadrunner.

8. Follow the steps in "Hook up your hub, gateway computer, and roadrunner" and, if you have clients, follow the steps in "Setup your clients".

9. Now we need to attach dhcp to your first ethernet card. Follow the instructions in part B of Setting roadrunner up as a service.

10. Reboot your computer and you are done!

11. If you have any problems whatsoever, all I can say is: make sure your timezone is correct and that your clock is not ahead of the current time by even one second, nor behind it by more than 5 minutes. If that doesn't help, use the rest of the instructions I have.
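The time requirement is strict, presumably because of the Kerberos authentication: the clock may not run ahead at all, and may be at most 5 minutes slow. A sketch of the acceptable window (clock_ok is a made-up name; the offset would come from comparing your clock against a time server with rdate):

```shell
# clock_ok OFFSET -- OFFSET is the local clock's error in whole
# seconds (positive = local clock runs ahead of true time). The
# login only works if the clock is not ahead at all and is no more
# than 5 minutes (300 seconds) behind.
clock_ok() {
  if [ "$1" -le 0 ] && [ "$1" -ge -300 ]; then
    echo ok
  else
    echo bad
  fi
}

clock_ok 0      # perfectly synced: ok
clock_ok -299   # a bit slow: still ok
clock_ok 1      # even one second fast: bad
```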


First ethernet card

Install your first ethernet card as normal when you install the operating system. Give it a phony ip address you will not use on your network. For some reason, this ethernet card has to be the one connected to the outside in order to get everything to work right with xwindows forwarding. To be clear, I only had to do this for incoming connections. Going out, you should have no problem with the second ethernet card being the one hooked up to roadrunner; however, I could never telnet in from work to my house and get an xwindows program to work when roadrunner was using the second ethernet card. The second ethernet card will be for the intranet.

Also, set up your second ethernet card NOT to use the same irq and address as your first ethernet card. I traditionally give the first ethernet card a lower irq and address than the second. I often use a DOS program on a DOS computer to set the values for the ethernet cards; you will probably have to do this as well.

Don't do anything with roadrunner or your rrdhcpcd program yet. Also, don't worry about the network configurations yet; we will take care of them later. Don't worry either about the fact that Linux probably won't see the second ethernet card; we will take care of that later too. Just make sure the ethernet cards don't use the same hardware values. Also, if your second ethernet card has a lower irq and address than the first, the computer might think it is the first, so there really is a reason why I give the first ethernet card the lower values. I ain't gonna test if I am wrong. You should just make sure at least one ethernet card is detected.


Setup your DNS server on your server connected to the internet.

If you know what you are doing, you can change the configurations. Because I am silly, I chose the domain "mark.local". If "mark.local" ever becomes an official domain, you will have to change every occurrence of "mark.local" to something else in the files below.
  1. Copy the following files to your server
    1. /etc/named.boot
    2. /etc/resolv.conf
    3. /var/named/mark.local
    4. /var/named/10.0.0
    5. /var/named/named.local
    6. /etc/hosts is a file I would use, but don't need. Just in case your dns server fails, this is handy for a backup.
  2. Leave /var/named/named.ca and named.local the same
  3. Restart named with this command
    /etc/rc.d/init.d/named restart

There are a couple of things you could change for your own personal needs. In /var/named/mark.local, I disabled the localhost definition.
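One detail worth noticing in the zone files: the serial number (1997022700 in my examples) encodes a date as YYYYMMDD plus a two-digit revision, and the usual convention is to bump it every time you edit a zone so named and anything caching from it can tell the data changed. A throwaway sketch for generating today's serial (new_serial is my own name, not a BIND tool):

```shell
# new_serial -- print a date-encoded zone serial in YYYYMMDDNN form,
# using today's date and revision 00. If you edit a zone twice in
# one day, increment the last two digits by hand.
new_serial() {
  echo "$(date +%Y%m%d)00"
}

new_serial
```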

Now at least your client computers can find each other. I assume you know how to set up ip addresses, gateways, and other things for your clients, but I will give some pointers on this anyway, including on setting the clients up to use the DNS server.


Setup your client computers

On the computer that is acting as the DNS server, I put up to eight additional entries in the dns configuration, so you can have up to eight computers using the dns server. I am assuming you know a little bit about ethernet cards. Here is the configuration I used for one of my client computers.
  1. ip address = 10.0.0.21
  2. name address = c1.mark.local
  3. gateway address = 10.0.0.10 <-- second ethernet card on server computer
  4. netmask = 255.255.255.0
  5. As for the file /etc/resolv.conf, use this one for the clients.
  6. I didn't have to fuss with the kernel on the client computers. As far as the clients are concerned, your server is just a normal gateway.
  7. /etc/hosts is a file I would use, but don't need. Just in case your dns server fails, this is handy for a backup.

The only things you should have to change for each additional computer are the ip address and the name address; c2.mark.local and 10.0.0.22 would be used for the next computer. Get the idea?
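The numbering scheme is mechanical, which makes it easy to script. A trivial sketch (client_addr is my own name) that prints the hostname and address pair for client N:

```shell
# client_addr N -- print the hostname and ip address for client
# number N (1-8), following the cN.mark.local / 10.0.0.2N pattern
# used in the zone files.
client_addr() {
  echo "c$1.mark.local 10.0.0.2$1"
}

client_addr 1   # c1.mark.local 10.0.0.21
client_addr 8   # c8.mark.local 10.0.0.28
```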

Also, if you are using pc or mac clients or other stuff, check out the masquerading mini-howto.


Setup masquerading on the server

If you were able to get your module(s) to detect both ethernet cards, then this section doesn't apply. But if you could not get the modules to recognize both ethernet cards, which will be the case for most people, you MUST COMPILE THE DRIVER of the ethernet card into your kernel and also compile in the masquerading bit. I have had problems getting modules to work with 2 ethernet cards of the same type. Somehow, when I installed RedHat 5.0 from scratch, it got both of my ethernet cards, but that was probably an unusual case.

Read the ip masquerading HOWTO and follow its steps on compiling the kernel for masquerading. NOTE -- this is VERY DANGEROUS if you screw it up. Regarding installing the kernel, RedHat did something silly when they configured the /etc/lilo.conf file. Change the line "image=/boot/vmlinuz-2.0.32" to "image=/boot/vmlinuz" and make sure you run "lilo" at some point before you reboot your computer. Do it now to be safe.

1. These are the steps I use to compile the kernel. First, configure it as the HOWTO says
cd /usr/src/linux
make config

2. and when that is done, compile it
make dep
make clean
make zImage

3. and if it worked, compile and install the modules
make modules
make modules_install

4. After you have created your kernel, follow these steps to install it.

cp /usr/src/linux/arch/i386/boot/zImage /boot/vmlinuz_NEW
rm /boot/vmlinuz
ln -s /boot/vmlinuz_NEW /boot/vmlinuz
lilo

That should install your kernel if you compiled it. Here is an example of my /etc/lilo.conf file.

YOUR /etc/lilo.conf WILL NOT BE THE SAME AS MINE. Change /etc/lilo.conf for your specific needs and please read about append in the BOOTPROMPT howto before you use it. You will have to modify this file yourself. Add the append statement like I did for two ethernet cards.
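One cautionary note on step 4 above: it deletes /boot/vmlinuz before relinking it, so if the build failed you are left without a kernel symlink. A guarded version (safe_swap is a hypothetical helper of mine, not a real tool) refuses to touch the link unless the new image actually exists:

```shell
# safe_swap NEWFILE LINK -- point symlink LINK at NEWFILE, but only
# if NEWFILE really exists; otherwise leave LINK alone and fail.
safe_swap() {
  new=$1
  link=$2
  if [ ! -f "$new" ]; then
    echo "refusing: $new does not exist" >&2
    return 1
  fi
  rm -f "$link" && ln -s "$new" "$link"
}

# On the real system this would be:
#   safe_swap /boot/vmlinuz_NEW /boot/vmlinuz && lilo
```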


Change some configuration files

Use these files.
  1. Change /etc/rc.d/rc.local which will start the ip masquerading. Actually, ADD THIS to your rc.local file and do not overwite it.
  2. Change /etc/sysconfig/network, and remember that these values don't mean anything and will get changed once you log into the internet.
  3. Change /etc/sysconfig/network-scripts/ifcfg-eth0 and also change /etc/sysconfig/network-scripts/ifcfg-eth1.
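The rc.local fragment referenced in item 1 boils down to two ipfwadm rules: deny forwarding by default, then masquerade anything whose source address falls in 10.0.0.0/24. The kernel does the real matching; this shell sketch (in_masq_net is my own name) just illustrates which addresses the -S 10.0.0.0/24 rule covers:

```shell
# in_masq_net ADDR -- say what the forwarding rules would do with a
# packet from source ADDR: addresses in 10.0.0.0/24 (first three
# octets 10.0.0) get masqueraded, everything else falls through to
# the default deny policy.
in_masq_net() {
  case $1 in
    10.0.0.*) echo masqueraded ;;
    *)        echo denied ;;
  esac
}

in_masq_net 10.0.0.21    # masqueraded (one of the clients)
in_masq_net 192.168.1.5  # denied
```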

Hook up the network.

  1. Put your gateway server computer between the roadrunner box and the hub.
  2. Reboot the computer.
  3. Hook up all your other computers to the hub.
  4. See if you can ping or connect from a client computer to your gateway computer. If so, good.
  5. See if your internal computers can see each other. You don't need the gateway computer for this; it is just a check that your hub is working. Telnet, ping, ftp, and other commands should work. For example, "ping c1" would ping your c1.mark.local computer. If you used /etc/hosts like I told you to, you don't need the gateway server to resolve the ip address. Or you could just do "ping 10.0.0.21" to do the same thing.
Specifically, you should hook up the first ethernet card to the roadrunner modem thing and the second ethernet card to the hub.

Now we need to get your gateway computer connected to the internet.


Connect your gateway to the internet.

Did you remember to first install roadrunner on a Windows95 computer to get the configuration files? You had better have.

Download the rrclientd-1.3.tar.gz and rrdhcpcd-1.02.tar.gz files. You don't need rrdhcpcd-1.02.tar.gz unless you want to compile it yourself, since a binary of rrdhcpcd is contained in rrclientd-1.3.tar.gz. Here are brief instructions on what to do, but read the README file that comes with rrclientd-1.3.tar.gz; it tells you in better detail what to do next. Use rrdhcpcd instead of dhcpcd; it works better and seems to initiate faster.

  1. Create a /etc/rrpasswd file that contains the password for your account. Run "chmod 700 /etc/rrpasswd" if you want only root to be able to read it.
  2. Link your /etc/resolv.conf file to /etc/dhcpc/resolv.conf with the commands
    mkdir /etc/dhcpc
    cp /etc/resolv.conf /etc/dhcpc/resolv.conf
    rm /etc/resolv.conf
    ln -s /etc/dhcpc/resolv.conf /etc/resolv.conf
  3. Copy a file from your windows95 installation to /etc/krb5.conf
  4. Make changes to your /etc/services file as said in the readme file from rrclientd-1.3.tar.gz.
  5. Copy the binaries you need for rrclientd into /sbin, or at least, that is what I did. The rest of this document will assume you put your binaries in /sbin.
  6. Make sure the time on your computer is not ahead of the current time and not behind by more than 5 minutes. Also, make sure your timezone is correct.
  7. I am going to assume you are using rrdhcpcd. If you don't have a binary of it, you will have to compile it. Execute these commands to make sure you have the correct links to use the new programs you copied to /sbin.
    mv /sbin/dhcpcd /sbin/dhcpcd_old
    mv /usr/sbin/dhcpcd /usr/sbin/dhcpcd_old
    mv /usr/bin/rdate /usr/bin/rdate_old

    ln -s /sbin/rdate /usr/bin/rdate
    ln -s /sbin/rrdhcpcd /sbin/dhcpcd
    ln -s /sbin/rrdhcpcd /usr/sbin/dhcpcd
    ln -s /sbin/rrclientd /usr/sbin/rrclientd

Once you have installed rrclientd-1.3.tar.gz properly, use the file /root/Login.bat to start your login session with "source /root/Login.bat". Remember to change USERNAME in the file to your own username. In my script, I stop and start rrdhcpcd, which is unnecessary; once rrdhcpcd is started, it tries to renew the ip address every 3 hours, so you should never have to stop rrdhcpcd, but I do it anyway.


Setting roadrunner up as a service.

You need to set your ethernet card to use the dhcp protocol and to have roadrunner as an activated service in order for it to start when your computer is turned on. This worked for me. I have read that the dhcpcd program (and rrdhcpcd probably does the same thing) tries to renew the ip address every 3 hours. This is good; it means users don't have to start and stop it.

A. First, install roadrunner as a service.

  1. This webpage is the guide I used.
  2. /root/roadrunner is the file you need. I downloaded it April 7th, 1998. It looks pretty straightforward, so I doubt it will change.
  3. Change your username in the file like it says to.
  4. Copy it to the /etc/rc.d/init.d directory like
    cp /root/roadrunner /etc/rc.d/init.d
  5. Issue the command
    chkconfig --list | grep roadrunner
    and you should see
    roadrunner 0:off 1:off 2:off 3:off 4:off 5:off 6:off
  6. Activate the service by
    chkconfig --add roadrunner
  7. Issue the command
    chkconfig --list | grep roadrunner
    and you should see
    roadrunner 0:off 1:off 2:on 3:on 4:on 5:on 6:off

B. Now use the control panel. Log in as root and use xwindows; "startx" will start xwindows at the prompt if you don't have xdm running. The control-panel should be there. This next step will set the first ethernet card to use dhcp, which we replaced with rrdhcpcd (the computer doesn't know any better).

  1. Click on the "Network Configuration" icon in the control panel.
  2. Click on "Interfaces".
  3. Click on "eth0"
  4. Click on "edit"
  5. OPTIONAL: Click on "Allow user to (de)activate interface".
  6. Choose "dhcp" for Interface configuration protocol.
  7. Click somewhere to save the changes.

Okay, we now have rrdhcpcd running, by setting the first ethernet card to use the dhcp protocol, and we have installed the roadrunner service, which uses rrclientd.

Reboot your computer and see what happens!

You should be connected to the internet when your server boots up as well as all your clients. If you use a web browser, you might have to set it to use the "proxy-server" on port 8080. Programs like telnet, ssh, nslookup, ftp, and ping should work. Actually, ping might work with just rrdhcpcd being activated.

Starting and stopping the roadrunner service and rrdhcpcd.

Well, to shut everything down and bring it back up, you would do
/etc/rc.d/init.d/roadrunner stop
/etc/rc.d/init.d/network stop
/etc/rc.d/init.d/network start
/etc/rc.d/init.d/roadrunner start

But of course, that is a little drastic. rrdhcpcd supposedly tries to renew the ip address every 3 hours, so you should never have to start and stop it. That is good, because it takes a while to initiate and stalls the network.

Why would you want to stop and start the roadrunner service? Well, in theory, rrdhcpcd should get the same ip address 99% of the time if you leave it on all the time. If it doesn't, you are screwed and you will have to restart the roadrunner service. Thus, instead of using my /root/Login.bat script, just put entries into the cron so the roadrunner service is stopped and started at specific times. Use the files /root/Login2.bat, /root/cron2, and /root/email.pl. Make sure you do a "chmod 755 /root/Login2.bat /root/email.pl", and also do a "crontab /root/cron2". Oh, and if you had other stuff cronned as root, merge the cron entries instead, or you are going to blow away your previous cron jobs.
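The reason for that warning is that "crontab FILE" replaces the whole crontab. A safer pattern is to merge: dump the existing table, append only the new lines, and load the result. merge_cron below is my own sketch of the merge step; it relies on grep's -x, -F, and -f flags to drop lines that are already present.

```shell
# merge_cron OLD NEW -- print every entry from OLD, then any line of
# NEW that is not already in OLD, so loading the output with
# "crontab" keeps the old jobs, and re-running the merge is
# harmless.
merge_cron() {
  cat "$1"
  grep -v -x -F -f "$1" "$2"
}

# On the real system:
#   crontab -l > /tmp/cron.old 2>/dev/null || : > /tmp/cron.old
#   merge_cron /tmp/cron.old /root/cron2 > /tmp/cron.new
#   crontab /tmp/cron.new
```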

Also, here is a quote from Joshua Jackson from when I e-mailed him about what problems you might have staying logged in all the time.

If for some reason you lose your IP address (this SHOULD NOT happen under
normal circumstances), your Kerberos tickets and GSS auth info would become
invalid.  If this happens, rrclientd will exit and you will need to log   
back in.

The only reason that you would lose your IP address would either be a
hardware/software malfunction at either the client or server end or a
reset of the dhcp servers at RR.

Joshua Jackson

Other things

Use this section at your own risk. I plan to elaborate more on this material, given that some people have made good suggestions about it. For updates to this webpage, look at http://linux.med.ohio-state.edu/nielsen/rr.html, but even that might change someday. NONE OF THE STUFF in this section is explained well, and modifications are probably in order at some point. This is just what I would do.
  1. If you don't set up the roadrunner service and bind rrdhcpcd to the ethernet card, but you want to stay connected almost 24 hours a day, you could do something like sending yourself e-mail once an hour using a perl script and a cron job on your gateway server; issue the command "crontab cron" to get it started. The /root/cron file uses the /root/Login.bat file, so you need that too, as well as the /root/Kill.bat file to kill it at 1 a.m. (you don't have to, but I do). Do a "chmod 755 /root/email.pl /root/Kill.bat /root/Login.bat". Modify the stuff as you see fit. Also, add the following to your /etc/rc.d/rc.local file, replacing USERNAME with the username you use for roadrunner. This will start everything at boot time if you didn't set it up to do so with the roadrunner service and control panel.
    echo starting rrdhcpcd
    /sbin/rrdhcpcd eth0
    echo sleeping for 5 seconds
    sleep 5
    echo Starting rrclientd
    /sbin/rrclientd  -u USERNAME /etc/rrpasswd dce-server 
    echo finished
    echo sleeping 20 more seconds
    sleep 20
    
  2. If you are interested in what programs you can use: telnet, ssh, ftp, ping, nslookup, and xwindows programs seem to work, and I have heard others do as well. Some of the resources in "References" above mention other things, like irc.
  3. In /etc/inetd.conf, I would comment out ftp, telnet, rsh, pop3, pop2, imap, and gopher for security reasons. Compile and install ssh.
  4. If you are interested in fixing /etc/resolv.conf so that it doesn't change, do a "chmod 444 /etc/dhcpc/resolv.conf" after you configure it. I recommend you only add information like nameservers and domains, like mine is
    domain columbus.rr.com
    search mark.local columbus.rr.com 
    nameserver 10.0.0.10  ### this is our DNS
    nameserver 204.210.252.18 ### this is the roadrunner dns
    nameserver 128.146.1.7  ### ONLY FOR OSU PEOPLE IN COLUMBUS OHIO!
    

Index of files

You should not have blank lines at the beginning of the files!


/etc/named.boot for server
 
;
; a caching only nameserver config
;
directory                              /var/named
cache           .                      named.ca
primary         0.0.127.in-addr.arpa   named.local
primary         mark.local             mark.local
primary         0.0.10.in-addr.arpa    10.0.0

/etc/resolv.conf for server and clients
domain columbus.rr.com
search mark.local columbus.rr.com 
nameserver 10.0.0.10  ### this is our DNS
nameserver 204.210.252.18 ### this is the roadrunner dns
nameserver 128.146.1.7  ### ONLY FOR OSU PEOPLE IN COLUMBUS OHIO!

### You can probably use our dns first if you want. Actually, I would.


/var/named/mark.local for server
 
mark.local.       IN      SOA  main.mark.local. root.main.mark.local.  (
                                      1997022700 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
mark.local.       IN      NS      main.mark.local.
;localhost IN       A       127.0.0.1
main.mark.local.   IN     A       10.0.0.10
c1.mark.local.     IN      A       10.0.0.21
c2.mark.local.     IN     A       10.0.0.22
c3.mark.local.     IN     A       10.0.0.23
c4.mark.local.     IN     A       10.0.0.24
c5.mark.local.     IN     A       10.0.0.25
c6.mark.local.     IN     A       10.0.0.26
c7.mark.local.     IN     A       10.0.0.27
c8.mark.local.     IN     A       10.0.0.28


/var/named/10.0.0 for server
 
0.0.10.in-addr.arpa. IN   SOA  main.mark.local. root.main.mark.local. (
                                      1997022700 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
              IN      NS      main.mark.local.
10.0.0.10.in-addr.arpa.       IN      PTR     main.mark.local.
21.0.0.10.in-addr.arpa.       IN      PTR     c1.mark.local.
22.0.0.10.in-addr.arpa.       IN      PTR     c2.mark.local.
23.0.0.10.in-addr.arpa.       IN      PTR     c3.mark.local.
24.0.0.10.in-addr.arpa.       IN      PTR     c4.mark.local.
25.0.0.10.in-addr.arpa.       IN      PTR     c5.mark.local.
26.0.0.10.in-addr.arpa.       IN      PTR     c6.mark.local.
27.0.0.10.in-addr.arpa.       IN      PTR     c7.mark.local.
28.0.0.10.in-addr.arpa.       IN      PTR     c8.mark.local.


/var/named/named.local for server


@       IN      SOA     localhost. root.localhost.  (
                                      1997022700 ; Serial
                                      28800      ; Refresh
                                      14400      ; Retry
                                      3600000    ; Expire
                                      86400 )    ; Minimum
              IN      NS      localhost.

1       IN      PTR     localhost.


/etc/hosts for server and clients
 
127.0.0.1 localhost     localhost.localdomain
10.0.0.21 c1.mark.local c1
10.0.0.10 main.mark.local       main
10.0.0.22 c2.mark.local c2
10.0.0.23 c3.mark.local c3
10.0.0.24 c4.mark.local c4
10.0.0.25 c5.mark.local c5
10.0.0.26 c6.mark.local c6
10.0.0.27 c7.mark.local c7
10.0.0.28 c8.mark.local c8



/etc/resolv.conf for the client computers
 
search mark.local
nameserver 10.0.0.10


/etc/lilo.conf
 
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
### WARNING!!! THE APPEND STATEMENT IS FOR MY COMPUTER ONLY!!!!!
image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        append="ether=10,0x300,eth0 ether=11,0x310,eth1"
        read-only


/etc/rc.d/rc.local for server
## Add this file to /etc/rc.d/rc.local 

echo "setting up ip masquerade"
/sbin/depmod -a
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_raudio
/sbin/modprobe ip_masq_irc

echo "setting up permissions for 10.0.0.0 domain for masquerading"
ipfwadm -F -p deny
ipfwadm -F -a m -S 10.0.0.0/24 -D 0.0.0.0/0


/etc/sysconfig/network for server
NETWORKING=yes
FORWARD_IPV4=true
HOSTNAME=main.mark.local
DOMAINNAME=mark.local
GATEWAY=
GATEWAYDEV=eth0


/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
IPADDR=10.0.1.10
NETMASK=255.255.255.0
NETWORK=10.0.1.0
BROADCAST=10.0.1.255
ONBOOT=yes


/etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
IPADDR=10.0.0.10
NETMASK=255.255.255.0
NETWORK=10.0.0.0
BROADCAST=10.0.0.255
ONBOOT=yes


/root/Login.bat
## This is a drastic solution to stop and start your roadrunner stuff
## Personally, you should only have to start and stop the roadrunner
## service every once in a while, and you shouldn't mess with the network
date
echo killing rrclientd
/sbin/rrclientd -k
sleep 5
echo killing rrdhcpcd
/sbin/rrdhcpcd -k eth0
echo sleeping 2 seconds
sleep 2
### uncomment the next 5 lines if you wish, you probably don't have to
##echo stopping and starting the network
##/etc/rc.d/init.d/network stop
##echo sleeping 5 seconds
##sleep 5 
##/etc/rc.d/init.d/network start
echo starting rrdhcpcd
/sbin/rrdhcpcd eth0
echo sleeping for 5 seconds
sleep 5
echo Starting rrclientd
/sbin/rrclientd  -u USERNAME /etc/rrpasswd dce-server
echo finished
echo sleeping 20 more seconds
sleep 20


/root/roadrunner for server
#!/bin/sh
#
# roadrunner  This shell script takes care of starting and stopping
#             rrclientd.
#
# chkconfig: 2345 11 30
# description: Logs the system into TWC Road Runner Internet Service
#
# Author: Joshua Jackson  jjackson@neo.lrun.com
#         1/6/98
#
# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0

[ -f /usr/sbin/rrclientd ] || exit 0

RRUSER="Your Username Goes here!"

# See how we were called.
case "$1" in
  start)
        # Start daemons.
        echo -n "Logging into Road Runner: "
        daemon rrclientd -u ${RRUSER} /etc/rrpasswd dce-server
        echo
        ;;
  stop)
        # Stop daemons.
        echo -n "Logging out of Road Runner "
        killproc rrclientd
        echo
        ;;
  status)
        status rrclientd
        ;;
  restart)
        $0 stop
        $0 start
        ;;
  *)
        echo "Usage: roadrunner start|stop|restart"
        exit 1
esac

exit 0


/root/email.pl for server
#!/usr/bin/perl

$EMAIL = "USERNAME\@somewhere.foo.edu";

open(EMAIL,"| /bin/mail -s RR $EMAIL");
$date = `date`;
chop $date;

print EMAIL "DATE AND TIME: $date\n";
print EMAIL "--------------------------------------------------\n";
print EMAIL "test\n";

close(EMAIL);


/root/cron for server
# Let us restart rrdhcpcd and rrclientd at 7 a.m., 2 p.m., and 10 p.m.
# and kill it at 1 a.m.
# and e-mail once an hour 5 minutes after the hour
0 7,14,22 * * *     /root/Login.bat  >> /root/Login.log
0 1 * * *     /root/Kill.bat >> /root/Kill.log
5 * * * * /root/email.pl


/root/Kill.bat for server
date
/sbin/rrclientd -k
sleep 5
/sbin/rrdhcpcd -k eth0
sleep 5


/root/Login2.bat
### We just need to quickly stop and start roadrunner
/etc/rc.d/init.d/roadrunner stop
sleep 5
/etc/rc.d/init.d/roadrunner start


/root/cron2
# Let us restart roadrunner at 7 a.m., 2 p.m., and 10 p.m.
# and e-mail once an hour 5 minutes after the hour
0 7,14,22 * * *     /root/Login2.bat  >> /root/Login2.log
5 * * * * /root/email.pl


/etc/HOSTNAME
main.mark.local


Copyright © 1998, Mark Nielsen
Published in Issue 28 of Linux Gazette, May 1998




"Linux Gazette...making Linux just a little more fun!"


Keep Your Eye On The Prize

By Dave Winer

To those who think that open source is a panacea: think again. I have a bunch of experience here, predating the current euphoria. A few bullet points follow.

Our SDK, first released in 1992, and often updated, was totally open source. Some parts of it gained traction, but most of it was ignored. We did the vast majority of the work on the toolkit, with paid employees.

In January 1995 we released the source to MacBird Runtime, hoping to ignite a move towards real user interfaces on the web. Nothing happened. It was never ported, and no bug fixes were ever supplied.

In 1997, I looked on the web for base64 code that worked with handles, not files. Finding none, I adapted existing implementations and released the source. This one worked, sort of. It was ported to HyperCard. Has anyone else used the code? I don't know.

Why didn't these things work?

First, I have to tell you, I don't know why they didn't work. I can only guess.

I certainly was excited about the idea of releasing the source and having other people do the work for me. Did I underhype these source releases? I did the best I could with what I had at the time.

The Frontier community tends to group where we are active. So if I back off updating something, they focus their attention where we are working. I find this frustrating, but I think it's just human nature at work.

We aren't updating MacBird (or DocServer or a bunch of other things) but no one is clamoring for the source for those things. We are actively updating Frontier and the website framework and workflow facilities, and every day I get an email or two saying that it would all work much better if I just gave them the source, implying that the people would know what to do with it, or would keep up their interest in the code after they got it.

I have mixed experiences with this. I have given away code, sometimes with good results, but really, most of the time the projects get started with some enthusiasm, and wither on the vine. I've learned that if I want something to take root, I have to make a financial commitment to that happening.

Chuck's sig

My friend Chuck Shotton uses a great signature on his email. He says "Shut up and eat your vegetables!" I can't be that direct (I wish I could), but I'd like to tell the people who want to replace my team: you haven't got a clue how hard it is to maintain a source code base like Frontier. If you want me to trust you, get started working on fixing the problems in the code we have released, and then we'll take a look at trusting you with our family jewels.

Further, having invested several million dollars in this source code base, why do you think I should give it to you? If you want to know how this feels to me, imagine a stranger arriving at your front door and demanding food and a warm bed. You let them in. Sure! You can sleep here. It would be stupid to think that it would all come out for the best. Love at first sight? Nahh. I prefer to be romanced.

No other creative or engineering art works this way. Art and money are closely related. Try sitting down with a group of artists and ask them what's on their mind. Very quickly the topic shifts to money. And it can be very hard to get them off that subject.

Think about art yourself. When you look at a piece of art does it cross your mind how much it's worth? If you meet someone who's an artist, do you ask how much money they make from their art? Be honest. Try some experiments. Tell people you're an artist. See how often the conversation turns to how much money you make. What about all the actors who are waiters and taxi drivers? Do you take them seriously before they "make it"? What does making it mean? What's it? Does money have anything to do with it?

The open source message is all about money, by the way, paid versus unpaid programmers, and in that sense it deflects us from where the most good can be done. Open source is a tactic. It's a zig to the commercial industry's zag. It can gain a market presence. A good tactic. But it's not right in every situation, far from it, nor can you make the world revolve around free source. It certainly has a place. But in itself it is not the revolution. I'm sure of this.

Netscape Netscape Netscape

I think Netscape has not done a service to the software world, users and developers, by going so strong on open source. If they want to please their shareholders, it's going to be complex and potentially very political. Like O'Reilly, which ships a mixture of open and closed source, Netscape must ship a mixture too, or the shareholders will throw out current management and replace them with people who are focused on making money. You can be sure of one thing: shareholders are not going to pay for what's right, they pay for what provides a good return on investment.

The focus is on Netscape. They're going to have to do an unprecedented political balancing act as developers feed them new features and bug-fixes, expecting that open source means open access to Netscape's distribution. There are indications, no clear statements, that this is *not* the way it will work. There's no commitment on Netscape's part to ship the work you do, nor would it be reasonable for them to make that commitment. Sometimes it's easier and more cost-effective to do the work yourself than to evaluate all the possible (sometimes conflicting!) submitted implementations.

We go thru this all the time. Developers, paid or not, make mistakes, or see things thru a narrower perspective than you can support. You have to read the code carefully before giving it to your users. I've learned this the hard way. Software has to have goals. And if Navigator fragments into fifteen incompatible browsers, they play right into Microsoft's strength. MSIE will become the market share leader, and content will be coded for their browser, not the various flavors of Navigator.

Think it thru. Even if Netscape hasn't said how it will work, how *must* it work? Netscape has bitten off a huge political task. Do they have the mature wisdom and global perspective needed to pull it off? Even if they were King Solomon, could they do it?

Philosophers

At age 42, I hear much of the philosophy of open source coming from people who are younger. It truly is a generational thing. I've said before, if I were in my early 20s I would probably be part of the open source thing, but I'm not in my early 20s, I'm in my early 40s.

As all middle-aged people seem to believe, I think the younger people have a lot to learn. I look at them and I see myself at their age. I'm sure some part of this is real, but most of it is projection. They aren't me, they are them. They're throwing out some of our lessons and beliefs. That's inevitable, and therefore good.

My advantage is deep experience. Their advantage is lack of experience. I really mean that. When I was young we threw out the ways of mainframes and discovered the power of minis and then micros. The older folk sniffed. "We did that years ago!" they said. But the seeds of their demise were already in the ground and we were the sprouts. The people of my generation, the two Steves at Apple, Bill Gates at Microsoft, and others, really did kick the legs out from under IBM, Sperry, DEC, etc. We know how it turned out.

But Compaq and Lotus had Ben Rosen and Apple had Mike Markkula. They were the adult supervision, the teachers, they gave us the inside scoop on the opposition. They told us how the world worked, and we worked around that. They were our surrogate fathers, caring about our success, enjoying it vicariously. I had my own angel, a man named Bill Jordan. When I was in my 20s and 30s, he was in his 50s and 60s. He taught me a lot. I owe much of my success to Bill.

The best revolutions embrace all that was learned in past revolutions. Keep your eyes open, understand how the system you're trying to undermine really works. Bill Gates never said publicly that he was going to take IBM out of its strategic place in the software business. For all I know, he didn't even intend to do it. Looking at it from my insider perspective, while all this was going on, I know they had doubts about their ability to lead the industry as late as 1990.

That's why I say it isn't about open source, it's about open minds. Drawing lines alienates people from you. Attacking Microsoft verbally causes Windows users to tune out. You can't undermine the incumbents by trying to dictate the terms; you have to do it by invading at night, slipping in the back door unnoticed. Then when the old folks wake up, it's too late.

So, speaking as an old geezer to a bunch of young whippersnappers: let's really cause some trouble. Keep your eyes and ears open, and stop attacking so openly.

A new slogan. It goes right along with the old slogans, Dig We Must, Let's Have Fun, Namaste Y'all.

Keep Your Eye on the Prize.

Know what you want, and get it.

Dave Winer


Copyright © 1998, Dave Winer
Published in Issue 28 of Linux Gazette, May 1998




Linux Fax for Dummies :-)

By Martin Vermeer


If you are like me, you may be on the lookout all the time for nifty little utilities and the like that might make life easier and more pleasant for end users. I came to Linux rather late, in February 1997, and although I know how to program (and have written a few handy little things too, using this great new language tcl/tk), due to time constraints I have felt no strong inclination to join ranks with the hacker community. Heck, I never even compiled my own kernel!



For someone as ancient as me (vintage 1953) it is probably better to concentrate on things that I am good at, without requiring a substantial investment in time. So I have been looking around, learning tcl/tk, and writing little things that make life easier, especially for people who are not very computer literate. What is happening now is that Linux is gaining a technically less sophisticated user base, and we should adapt to this. Part of this adaptation is already taking place: the new, gorgeous-looking desktop environments such as KDE and Gnome are fast becoming standard stuff on the Linux desktop, and more and more software is acquiring a graphical user interface. tcl/tk, perl/tk, gtk, and Java are among the tools that make this possible. tcl/tk especially is ideal for "gluing" existing command-line utilities together into great-looking desktop thingies.

A recommended fax package for Linux is fax/efax/efix by Ed Casas. fax is an ordinary shell script, containing calls to the binary modules efax (taking care of the difficult, low-level faxing stuff) and efix (taking care of some file format conversions needed). They work, but are command-line stuff; not for dummies.

For sending faxes using efax you can use LyX, the graphical word processor running on top of LaTeX. I can really recommend this word processor: especially the new release, 0.12, is great, with lots of new features including on-the-fly localization. See the review by Larry Ayers in the last issue of Linux Gazette, and a picture (I couldn't resist) below.

[lyx picture]

For receiving faxes, it is possible to install efax as a daemon through the boot-up script (see the man page), so that it continually waits for faxes to come in. It is even possible to do this in such a way that it does not get in the way of outgoing traffic, e.g. an Internet connection. Then, when that connection closes, the daemon starts listening to the serial port again.

Any faxes received will be stored in a spool directory, typically (on Red Hat) /var/spool/fax/. You can make xbiff or xmailbox watch for faxes arriving in the spool directory and signal the user. I haven't tried this, though. There are various ways to read messages from the spool directory. Ed Casas' script fax can be used, but it is so unbecoming. I really would love it if there were a graphical client to do this!
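As a crude stopgap (my own hypothetical sketch, not part of the efax package), a tiny shell function run from cron could at least report how many files are waiting in the spool directory; the default path below is the Red Hat location mentioned above:

```shell
#!/bin/sh
# count_faxes: print the number of files sitting in the fax spool
# directory. The directory may be given as an argument; otherwise
# the Red Hat default /var/spool/fax is assumed.
count_faxes() {
    spool=${1:-/var/spool/fax}
    # -1 forces one name per line so that wc -l counts entries
    ls -1 "$spool" 2>/dev/null | wc -l
}
```

One could run this every few minutes from cron and mail yourself when the count changes; a real solution would also distinguish received fax files from anything else living in the spool directory.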

So... I decided to do a search, an extensive one, using Alta Vista. No luck.  Following links, I found a number of  listings of fax and communication software, including Hylafax, which is a fax server application for network use, undoubtedly good, but not what I was looking for. Then I decided, OK, it cannot be too hard to write a thing like this myself. I started coding, and after three hours or so, I had the skeleton of a working graphical fax client running.

I wanted my skeleton script to use the existing utility viewfax (a GNU product) to display fax pages on the screen. The program -- found in the RPM package mgetty-viewfax -- is very fast and very convenient, but with a slightly "emacsish" user interface. Well, what the heck. I read the man page and found a reference to faxview.tcl, a tcl/tk "graphical front end" to viewfax. Precisely what I was trying to write!

[faxview GUI]

I downloaded the faxview-1.0 tarball from the ftp server at ftp://ftp.UL.BaWue.DE/pub/purple/fax and extracted the files faxview (the tcl/tk script) and faxview.1 (the man page) from it. It worked great (see picture)! The author of this software is Ralph Schleicher from Ulm, Germany. So much for reinventing the wheel... this really raises some questions:

If anyone has any useful software to refer me to, found by accident against the slings and arrows of poor posting... let me know! What is your favourite "under-advertised" Linux software?
 

Martin Vermeer
mv@fgi.fi
 


Copyright © 1998, Martin Vermeer
Published in Issue 28 of Linux Gazette, May 1998




Marketing Linux

By Jim Schweizer


I was just reading Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers by Geoffrey Moore and sat back for a few minutes to muse on how Linux could benefit from its strategies.

Mr. Moore suggests that there is a bell curve describing how a new technology is adopted by a market. At the front of the curve is a small section of Innovators, and next to it a slightly larger Early Adopter section. After that there are two large sections at the middle of the curve, the Early Majority and then the Late Majority. Finally, there is a Laggards section.

The hypothesis is that there is a 'chasm' between the way one markets to the Innovators/Early Adopters and a completely different strategy for the Early Majority. It's my belief that Linux now has one foot out over the chasm, and companies that plan to market Linux or Linux-related products should examine their marketing strategies closely.

For Linux to be successful it must adopt a marketing strategy designed to 'cross the chasm' between selling to innovators/early adopters and an early majority of users.

Visionaries and early adopters are already singing the praises of Linux and of the Free Software movement. But there are big differences between them and the people in mainstream corporate environments who make the buying decisions.

The early adopters differ greatly from the early majority in their buying habits - early majority buyers are pragmatists. They want to buy from market leaders with proven track records and from companies that adhere to industry standards. They want a high degree of customer support, etc.

In the words of my brother, Paul, "This is just my opinion but, when I look at Linux from the standpoint of an MIS director, I see a cost nightmare. I have no single source that is accountable for support issues. I see a limited supply of qualified support engineers and technicians. I see a limited supply of drivers for new hardware. And the list goes on...." email Fri. Feb. 28, 1997

Pragmatists may not trust the visionaries in the innovator/early adopter market. They want references from within their own group - references that come from companies already using the product.

If you are in a start-up period, where do the references come from? One possible strategy is the 'Storming the Beach' approach. The visionaries have given you the base of operations (an island off the coast), and you have the goal of market domination (liberation of the continent). What you do is establish yourself firmly in a market niche (The Beach), then throw all your marketing/sales efforts into expanding the market (moving off the beaches and into the countryside).

Where within the purchasing departments of large corporations is Linux's Beach? Is it in the web server loaded with RedHat/Apache quietly counting up hits day after day, week after week without any down time? Is it in the old Slackware box hiding in the corner providing print spooling services without a large maintenance overhead? Is it in the new TurboLinux box running Samba?

"Reduction in scope is key to the chasm crossing strategy." If you want to dominate a market, you must first dominate one section of it. Does being a web server utilize all of Linux's strengths? No way! But it does get Linux noticed.

IMHO the talk about Linux needing an Office Suite is misguided. M$ Word and Excel are solidly entrenched. Linux jumping up and down shouting, "Me too!" is not going to get it noticed.

What Linux needs is a well designed application profile. One thing that Linux can focus on and use as a landing point. People use toasters, coffee makers and refrigerators in the kitchen, but they don't combine them into one product and try to get Housing Inc. to sell it.

This one application needs to be a must-have, something that provides a dramatic competitive advantage and improved productivity in an area already well-understood by corporate America. To find this must-have product companies can use traditional macro- and microlevel market research.

Innovators and early adopters are interested in how all the parts of Linux work. Mainstream customers aren't going to be. For them it's going to be more like buying a Christmas tree - as long as the good part is showing, they're happy.

So, as the troops mass in the ports of our small island, have the Chiefs of Staff found a landing point yet?


Copyright © 1998, Jim Schweizer
Published in Issue 28 of Linux Gazette, May 1998




Product Review: Music Publisher

By Bob van der Poel


If you are a musician, you can only cry about the lack of music programs which run under Linux. Yes, there are many CD players, sound editors, etc. But, when it comes to notation programs your choices are severely limited. My search for notation editors (a program which will produce printed sheet music) has turned up three choices:

  1. The graphical program Rosegarden (http://www.bath.ac.uk/~masjpf/rose.html). This is a very interesting program which tries its best to be all things. It has a notation editor which handles most of the normal editing functions, a MIDI sequencer which will play music from the notation editor as well as record data from a MIDI keyboard, and the ability to import MIDI files and convert them to notation. Sounds wonderful...but, unfortunately, Rosegarden is a work in progress and simply doesn't do everything it is supposed to, or does it awkwardly. I have been unable to get the sequencer to work using my Gravis Ultra soundcard, and I find the notation editor tedious to use since there are no keyboard accelerators for entering note data. In addition there is no easy way to print music. Rosegarden does have an option to export files in MusicTeX, OpusTeX and PMX (a preprocessor for MusiXTeX). I tried some of the combinations, but was not really all that impressed by the output. However, the biggest problem I have with Rosegarden (and a lot of other music editors) is that they work on the music as if it were one long string. This means changes at the start of the music work their way to the end of the chart. For example, if in bar one of a piece you have four quarter notes and you wish to change the first quarter to two eighths, you first change the first quarter to an eighth, then insert an eighth. When the first change is done, everything to the right of the edit point is reformatted, with the result that none of the music is now in the correct measure. Of course, inserting the second eighth fixes this. But if you have several staves of music and you do a few edits, it is really easy to mess up the entire piece.

  2. The various TeX music systems. I must admit that I did not spend a lot of time with any of the variants. I can handle LaTeX for word processing, but the music variants seemed to be pretty complicated to use, all in a state of beta, and none seem to produce what looks like a finished work.

  3. MUP. This, at first look, would probably be the last program to pick. But after a fair bit of testing I have decided to use it. So far, I'm happy with my choice.

Quoting from the user's manual: "The music publisher program called 'Mup' takes a text file describing music as input, and generates PostScript output for printing that music. The input file can be created using your favorite text editor, or generated from any other source, such as another program. The input must be written in a special language designed especially for describing music."

Unlike Rosegarden (and the Windows offerings) MUP does not operate in a WYSIWYG environment. As a matter of fact, the MUP distribution doesn't even have a means of editing music. MUP uses plain text files which look much like source code for a program as its input. Use vi, emacs or whatever your flavor of editor is. Then process the file with MUP to create PostScript, and finally print the PostScript file. If you don't have a PostScript printer you'll need Ghostscript to print things out. And Ghostview is handy for screen previews. MUP uses lines of text to describe a piece of music.

For example, here are the first few bars of Bye Bye Blackbird:
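An input fragment in MUP's language looks roughly like this (my own illustrative sketch; the note values only approximate the tune, and the score parameters are invented):

```
score
	time = 4/4

music

// staff 1, voice 1: an eighth g, an eighth c, then longer notes
1: 8g; 8c+; 4e+; 4d+; 4c+;
bar

// mostly eighth notes, plus two quarter notes
1: 8c+; 8d+; 8e+; 8f+; 4g+; 4e+;
bar
```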

The "1:" at the start of each line is the staff/voice indicator (in this example it refers to staff 1 and, since there is no additional argument, voice 1). Following the staff/voice are the notes for the measure. The first measure has an eighth note g, an eighth note c, etc. The next measure has several eighth notes as well as two quarter notes. At first this might seem a bit difficult to follow, but in no time at all it makes sense.

A MUP score can contain up to 32 staves of music, each with two voices. Each voice can have multiple notes (or chords)...so complex arrangements are quite possible. In addition to the actual staves, you can also include lyrics, musical symbols, etc.

I started to use MUP when I started to play saxophone in a small combo. We all play from fake-book-style music (chords, lyrics and the melody line). But I'm not the greatest sax player in the world and find it pretty hard to transpose from C to B flat while sight reading. So I started to rewrite the C charts into B flat by hand. I find anything which needs a pen to be tedious, which is what led me to try MUP. After a few practice charts, I can enter a page of one-line music with lyrics in about an hour. And since MUP can produce MIDI files as well as doing transpositions, it really works well for what I needed. I can print the music in different keys for everyone in the combo, and I can create a MIDI file in the right key for practicing at home.
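For instance, producing a B-flat part from a concert-pitch chart takes (as best I recall from the MUP manual -- check there for the exact parameter spelling) only a transposition setting in the score context:

```
score
	// shift everything up a major second for a B-flat instrument
	transpose = up major 2
```

MIDI output in the transposed key is then, if memory serves, a matter of running MUP with its -m option instead of producing PostScript; again, see the manual for the exact invocation.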

I have included the actual mup file I created for Bye Bye Blackbird and a printed music image.

Flushed with the success of these simple charts, I decided to try a more complex task. I also play in a fifteen-piece dance band. Most of the music we play is arranged by our leader, but recently some of the members have been doing arrangements as well. So, I decided to give it a try. My first arrangement of the old standard Fever took the better part of two days to complete. It is arranged for 11 voices on 6 staves. We played it the other night and I was pleased--not only was everyone impressed by the appearance of the charts, it didn't sound too bad either. I have printed out the first page of the conductor's score.

If you would like to see some of my other arrangements, I have posted them along with a copy of this review from http://www.kootenay.com/~bvdpoel.

I certainly don't have room in this short review to cover all the features of a complex program like MUP. Just a few of the more useful items I've been using are if/else statements to produce charts for different instruments, file includes to read in my own "boiler plate", and macros to make my input files easier to create, read and revise.
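A sketch of those three features together (the names here are invented and the syntax is from my reading of the MUP manual, so treat it as approximate):

```
// pull in shared "boiler plate" -- title block, page setup, etc.
include "boilerplate"

// a macro for a figure that recurs throughout the chart
define RIFF 8c+; 8d+; 4e+; @

// build the alto sax part with something like "mup -D ALTOSAX chart.mup"
ifdef ALTOSAX
score
	transpose = up major 6
endif

music

1: RIFF 4c+; 4r;
bar
```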

MUP comes complete with a well-written, 99-page user's manual in PostScript (you'll have to print it out), as well as the same information in HTML format. Equally impressive is the customer support available via email. I've sent a number of queries to the authors and have received courteous, timely replies to each and every one.

MUP is not free. You can download a working copy of the program, the source code if you want it, and the manual from http://www.Arkkra.com. The program is a complete working copy--however it prints a "this is an unregistered copy" watermark on all pages of the score. MUP registration is only $29.00, after which you get a license which turns off the marks. This is a pretty low price to pay for such a well thought out program.


Copyright © 1998, Bob van der Poel
Published in Issue 28 of Linux Gazette, May 1998




Book Review: Netscape IFC In a Nutshell

By R. J. Celestino


Netscape quietly released its Internet Foundation Classes (IFC) in the spring of 1996. This free Java class library offers developers a welcome alternative to the lifeless GUI widgets and basic graphics provided in Sun's Abstract Windowing Toolkit (AWT). However, the IFC can be intimidating even for experienced Java developers. To effectively use the IFC, you must learn its event model and its container model, both of which differ from standard Java. Additionally, learning the IFC's classes and API is no small endeavor. Netscape IFC in a Nutshell helps, with good explanations, examples and a handy quick-reference guide.

Netscape IFC in a Nutshell is written by Dan Petrich and David Flanagan. In the Nutshell tradition, it is not too big or intimidating. The cover graphic will surely lead IFC geeks to refer to it as ``the fish book''. It weighs in at around 350 pages, approximately 140 pages of which are the quick reference section. This same reference information is available in HTML format from Netscape. I think it is helpful to complement on-line documentation with a compact hard copy reference, and I'm sure we all agree that making notes in the margin of an HTML page can be a bit difficult.

The book is far more than an API reference. It takes you through the IFC classes in a logical and matter-of-fact manner. Numerous examples cover most of the important aspects of the IFC. The examples are typically small but effective. They are well focused on the topics at hand and help immensely with understanding. The book does not include a CD-ROM, which is too bad as it would make a nice addition to the book. As an alternative, O'Reilly and Associates could have provided space on their web site for on-line examples and running applets. Unfortunately, they didn't do that either, so be prepared to type in each example and code fragment by hand.

The authors begin with a high-level introduction to the IFC. They discuss its capabilities including the sophisticated GUI widgets, persistence mechanism and the visual construction tool Constructor. If you are not sure what the IFC is all about or if it's right for you, this section is very helpful.

The next topic is an introduction to the most basic class in the IFC, the Application Class. As an IFC user, I have noticed that the IFC often departs from traditional Java vernacular. The Application class is one example of many you will come across. Seasoned Java developers typically understand application to mean a stand-alone Java program, and applet to mean a restricted Java program that runs in a browser. Forget all that when using the IFC. Every running IFC program, be it stand-alone or applet, is an application. Don't worry though; it's all explained very simply in this chapter using the power of the venerable Hello World program.

Having straightened out what an application is, the book takes you through a good explanation of the IFC's View classes. View classes in the IFC are akin to the Container classes in standard Java, but with differences in both architecture and capabilities. The authors devote three chapters to a discussion of these classes. They make good use of examples to show you many of the important features, from tiling a bitmap on the background to drawing.

Users interact with GUIs using a mouse and keyboard. In Java, these devices make themselves known by generating events. The mechanism by which these events are made available to your program is known as the event model. The Java standard event model has evolved over its short lifetime. It began as an inheritance-based model, eventually maturing to the more powerful delegation-based model used today. The IFC uses a hybrid model. The chapter on mouse and keyboard events covers the inheritance-based model. The chapter on targets covers the IFC's implementation of delegation-style event handling. The details of using each type of event are discussed in reasonable detail. Numerous examples show how to code your own event processing. However, I felt that the authors should have contrasted the two styles better. In addition, it would be helpful if they offered some guidelines on when to use each model.

The book moves on to cover each widget in considerable detail. Remember that the IFC provides a replacement for every standard widget, so even items as simple as a push-button or text field have a new API. These chapters really drive home the power of these IFC widgets, making the traditional widgets from Sun look absolutely bland.

I found their treatment of scrolling particularly useful. The IFC provides a framework to support scrolling of text and graphics. While very powerful, understanding the details can be daunting. The authors present scrolling clearly and offer good illustrative examples. With the details in this chapter, you will be able to add scrolling to any application with ease.

Another excellent section is the one on layout managers. Layout managers are classes that help View classes decide upon the size and position of the widgets they contain. The IFC includes a powerful new layout manager, the PackLayout. The book covers this complex layout manager in excellent detail. Outside of this book, there is virtually no useful documentation on the true behavior of the pack layout.

Anyone who has ever written an applet that displays images will eventually reach the conclusion ``why don't they handle all of this mess for me?'' Netscape's engineers felt the same way, and they did something about it. In the chapters on images and animation, you will learn how simply your application can read and display image files (both GIFs and JPEGs). Creating animation is a simple matter as well. The book explains how the IFC handles threading, sequencing and double buffering. If you are interested in images and animation, these chapters will get you going quickly and painlessly.

In the Advanced Topics section, the authors cover the details of the TextView class, archives, and the free GUI builder Constructor.

The authors rightly spend a considerable amount of time discussing the TextView class. They show how to use TextView to display HTML documents, handle hyperlinks, open a mini editor and more. With all this versatility from a single component, it is a comfort to have clear explanations and examples to lead you.

The authors next discuss archives. Archives provide object persistence. Every class in the IFC can be archived to disk so that it can outlive the process that created it. The authors detail how to use IFC archives to read and write objects in a number of situations. They also discuss how to archive existing IFC classes and explain how to archive classes that you have created.

Finally, you will learn how to use Netscape Constructor. Unlike the rest of the IFC, Constructor is an application, not a component. Perhaps this is why the chapter is so skimpy. Considering this chapter is in the ``Advanced Topics'' section, I would have expected more detailed information. Nonetheless, the chapter does provide some good information on a few aspects of the elusive Constructor.

The GUIs that are created using the IFC are truly a pleasure to behold. The widgets, in stark contrast to their plain Java brethren, have a polished look and feel. Some of the components, such as TextView, are so powerful that they could be marketed on their own. Learning such a comprehensive class library can be downright scary. Netscape IFC in a Nutshell provides numerous easy to follow examples, detailed explanations and a quick reference guide.

Eventually, the IFC will be subsumed by the Java Foundation Classes (JFC), a joint venture between Sun and Netscape. But if you want beautiful user interfaces today, the IFC is the way to go, and Netscape IFC in a Nutshell is a great way to get you there.


Copyright © 1998, R. J. Celestino
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


The Xfstt True-Type Font Server

By Larry Ayers


Introduction

After using Linux for some time I had developed a less-than-favorable attitude towards TrueType fonts, partly because of their close association with Microsoft products and partly because of the high-quality printed output which Postscript fonts typically yield. I had become accustomed to the poor quality of the X windows screen display when using scaled (rather than bit-mapped) PS fonts, only occasionally finding the un-aliased jagginess of certain font sizes jarring. This is particularly noticeable in Netscape when large fonts are displayed (titles, etc).

It was with some bemusement that I read various announcements of TrueType font-renderers and libraries for Linux in the past year. "Why", I wondered, "are people expending so much energy developing TTF support for Linux when Postscript fonts are supposed to be superior in so many ways?". I supposed these packages were for people who had bought TrueType fonts and wanted them available under Linux.

Last month I was idly scanning messages posted to the XEmacs-beta mailing list. I happened across a passing reference to the use of something called xfstt to provide TrueType fonts for XEmacs. The writer of the message stated that these fonts display well under X windows. This intrigued me, and later I happened to be discussing various Linux matters with Chris Gonnerman, who runs a small computer business in a nearby small town here in northern Missouri. He showed me a Linux machine running a TT font-server, which piqued my interest further.

A few days later I saw an announcement posted on the freshmeat web-site stating that xfstt-0.9.8 had just been released. Something about a new release irrationally induces me to try it out, so I got the package from Sunsite and compiled it.

Getting Xfstt to Work

Xfstt is being developed by Herbert Duerr, and as far as I can tell it seems to be a one-man project. The documentation is rather scanty, but the FAQ file in the distribution provides enough information to get started. Xfstt is a font server similar to X's native xfs. Once the server has been compiled and installed, all that needs to be done is to populate the directory  /usr/ttfonts  with *.ttf files (this directory should have been created for you during the installation process), run  xfstt -sync (which lets the server know about the fonts), then add the following line to your XF86Config file, near the end of the font-path section:

  FontPath "unix/:7100"

I'm guessing that the "path" above is actually the port at which xfstt listens for font-requests from applications.

Once these tasks have been completed, shut down X and execute the backgrounded command  xfstt &, wait a few seconds, then restart X. The easiest way to try it out is to start Netscape, and in the Options->General Preferences->Fonts dialog scroll through your installed fonts and select one with (Ttf) appended to the font-name. Netscape showcases xfstt's capabilities due to the variety of font-sizes in many web-pages. The larger fonts in particular are much improved, without the jagginess they usually exhibit.
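The whole sequence, from installing fonts through restarting X, can be sketched as a few shell commands (the font files and DOS-partition mount point here are examples; substitute your own):

```shell
# Copy some TrueType fonts into the directory xfstt scans
# (this directory should have been created during installation).
cp /mnt/dos/windows/fonts/arial.ttf /usr/ttfonts/
cp /mnt/dos/windows/fonts/times.ttf /usr/ttfonts/

# Let the server rebuild its database of available fonts.
xfstt -sync

# Start the server in the background, give it a moment to open
# its socket, then start X.
xfstt &
sleep 2
startx
```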

A little experimentation is needed to determine which of your applications can make use of these fonts. The Gimp will use them, but it already does a good job smoothing Postscript fonts, and I didn't see any great improvement using TrueType fonts. XEmacs will display scalable fonts, but I'd never used it for long with Postscript fonts due to X-induced scaling and rendering problems. The new TT fonts will be available from the Options menu and the improvement is remarkable.

Another use for this font server is as a supplier of window-manager title and menu fonts. A well-chosen font can really enhance the appearance of a desktop; I've tried this with fvwm2 and icewm, and I'm sure it would work with others. Lines such as these:

MenuStyle  gold darkslateblue bisque3 -ttf-americana-bold-r-*-*-12-*-*-*-*-*-*-* mwm

WindowFont   -ttf-americana-bold-r-*-*-14-*-*-*-*-*-*-*

(for fvwm2) should work. The -ttf- prefix of the font-specifier occupies the usual location of the font foundry name, such as Adobe or Bitstream.

According to the xfstt FAQ the StarOffice suite, the Xcoral editor, and Java can use these fonts, but I haven't tried them. The distribution includes a sample font.properties file for use with Java.

Possible Problems

The xfstt FAQ lists several problems people have had, mostly due to misconfiguration. The only one I've seen is not serious, but deserves mention. Once your XF86Config file has been modified (with the new Fontpath added), the xfstt server will need to be running before X is started. If it's not running, X will fail to start, generating one of those classically cryptic error messages X is so fond of:

  _FontTransSocketUNIXConnect: Can't connect: errno = 111
  _X11TransSocketUNIXConnect: Can't connect: errno = 111

Either xfstt will have to be started from the rc.init scripts (and thus be running constantly) or it can be manually started just before starting an X session. A shell script or alias could also be used to first start xfstt followed by X.
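A minimal wrapper script along those lines might look like the following (the script name, and the two-second pause to let the server open its socket, are my own choices):

```shell
#!/bin/sh
# startx-tt: start xfstt, wait for it to come up, run an X session,
# then kill the font server when the session ends.
xfstt &
XFSTT_PID=$!
sleep 2
startx
kill $XFSTT_PID
```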

Other Implementations

Xfstt isn't the only way to use TT fonts in a Linux X session. Another project consists of a series of patches to the XFree86 source which will enable X's native font-server to provide TT fonts. Confusingly, the name of the project is xfsft. The home page of this effort is a good central site for other information on the web concerning TrueType fonts and Linux. It can be accessed here. A link on the page will take you to the FTP site where the patches can be found. The site also contains links to screen-shots of Netscape displaying TT fonts.

The Freetype project is yet another approach. It isn't an end-user application or server, but a library intended for use by programmers desiring to embed TT support in their applications; the project home-page is here.

Conclusions

According to Herbert Duerr (in the FAQ), TT fonts are particularly suited for display on low-resolution devices such as a computer monitor. Although xfstt doesn't do any anti-aliasing of the fonts (since there's no support for this in X), the fonts are displayed very clearly in all sizes. Unix traditionalists will stick with their tried-and-true fixed-width fonts, but users familiar with the font display properties of the various mswindows OS's might want to give xfstt a try. It sure works well for me!


Last modified: Wed 29 Apr 1998


Copyright © 1998, Larry Ayers
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


Updates to Past Reviews

By Larry Ayers


The Linux software world has been extraordinarily fecund lately. I could write all day every day and still not adequately describe and evaluate the many new software packages released in the past couple of months. But I need to backtrack a bit and bring up-to-date some of the reviews from past issues of the Gazette (otherwise I'll be spending much time answering e-mail concerning broken links, etc.).


Icewm

In LG #21 I wrote a short piece about the first public beta of Marco Macek's window-manager icewm; afterwards I continued to use it from time to time until it began crashing at random intervals, which discouraged me. Lately I'd noticed several new icewm releases mentioned on various Linux web-sites, so I thought I'd try the most recent one (as I write this, the current version is 0.8.16; releases have been coming rapidly, so there will probably be a newer one by the time you read this).

I've probably tried just about every window-manager out there; perhaps I'm becoming a trifle jaded, but the prospect of spending hours, if not days, learning to configure a WM to my liking isn't too appealing. I did just that with fvwm2 a couple of years ago and just don't have the time or inclination to repeat the process. I've tried Enlightenment, Afterstep and WindowMaker, and though I appreciate their features and configurability, I haven't yet devoted the time needed to effectively use them. Icewm is by design not as complex and feature-laden as the above-mentioned managers, therefore developing a pleasing and usable configuration can be done in a fairly short time.

Memory usage is another factor to consider. Most window-managers tend to use about one megabyte of memory, but the various modules (such as fvwm2's pager and Afterstep's numerous add-ons) add significant amounts. Although minimalist managers such as wmx are available, they seem to use nearly the same amount of memory as fvwm2 (in wmx's case, possibly because of the use of the shaped window X-extension). Icewm uses a remarkably small amount of memory (averaging about 600-800 kb. on my system), considering that it does 90% of what the others do.

I asked Marco Macek what his original motivations were when he started coding icewm, and this was his response:

Well, I was using fvwm and while it was quite configurable, there were lots of little things that I could never get right, even if by hacking the source. I wanted the wm that would feel right to users used to CUA-style GUIs (windows, os2, motif). fvwm95 was an improvement (I contributed a few things to it), but since I wanted a more configurable look, I quickly realized that writing it from scratch was the right thing to do. The result seems to be a leaner WM that feels good to use. At least to me (and it seems quite a few other people). For me, the feel is more important than look. People get much more used to feel (keystrokes, behaviour) than look. That is the reason for configurable look but not feel. Changing the look occasionally makes things interesting while the feel should really stay the same.

Icewm isn't difficult or time-consuming to set up. Several pre-defined themes are included, and the configuration variables are split into several files, making it easy to edit, say, just the colors or menus without having to wade through a long config file looking for particular sections. The icons can take some time, as they need to have particular sizes and filenames in order for icewm to be able to make use of them. Any item in the root menu can have a mini-icon displayed next to its menu-entry, with the same icon used as the leftmost titlebar icon. John Bradley's excellent xv graphics program can be used to resize an *.xpm file to 16x16 and 32x32 pixels, which are the two sizes needed. The icon files then need to be renamed to [name]_16x16.xpm and [name]_32x32.xpm and put in the window-manager's icon directory, which defaults to /usr/local/lib/X11/icewm/icons.

The menu configuration file, located in  /usr/local/lib/X11/icewm/menu has entries in this format:

prog Xvile edit xvile

The first word after "prog" is the name as you want it shown in the menu, the second word is the prefix of the xpm icon-file (that is, the part before the underscore), and the third word is the command which actually starts the program. If there are no icon-files named edit_16x16.xpm and edit_32x32.xpm, error messages will be displayed on the console from which X was started; these are harmless, and default icons will be used in the titlebar, though no icon at all will appear next to the corresponding menu-entry.
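Putting a few such lines together, a small menu file might look like this (the programs and icon prefixes beyond the Xvile line are only examples of the pattern):

```
prog Xvile     edit      xvile
prog Netscape  netscape  netscape
prog XTerm     terminal  xterm
```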

If you happen to try editing any of the theme configuration files (where the various frame and title-bar colors are set) you will notice that the colors are specified in hex format (such as "rgb:E0/E0/E0"), which isn't too intuitive. After configuring various X windows programs for a while, you probably will be able to remember several favorite color-names from the rgb.txt color-database file, such as darkslateblue and navajowhite. This isn't mentioned in the icewm docs, but I've found that these easily-remembered color-names can be substituted for the hex names and will work just as well. Just remember to put the names within double quotes.
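In other words, both of these forms should work in a theme file (treat the exact variable name here as illustrative; check the theme files shipped with your icewm version for the real ones):

```
# hex form, as found in the distributed themes:
ColorActiveTitleBar = "rgb:E0/E0/E0"
# a named color from rgb.txt works just as well:
ColorActiveTitleBar = "darkslateblue"
```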

One feature of fvwm which I've grown accustomed to, and which icewm can also do, is displaying certain windows without a titlebar and/or appearing on all desktops. The icewm docs explain these settings. A pager isn't included, but the Afterstep-like "workspaces" icewm provides perform a similar function. The win95-like taskbar, complete with a start-menu, is a help while gaining familiarity with the window-manager, but its functions are available using either the keyboard or the root-menu, and it can be turned off by setting ShowTaskBar=0 in the preferences config file.

Admittedly, this sort of desktop configuration is much easier with KDE beta4, but you pay for the ease of using and configuring KDE; it uses quite a bit of memory and takes an awfully long time to start up. Of all the window-managers I've used, icewm and Chris Cannam's wmx seem to be the quickest to start.

I think that with the release of version 0.8.16 icewm is stable enough for heavy use and deserves wider exposure. Marco Macek is currently adapting icewm to the GNOME desktop, and further enhancements are likely.

Maxwell

Naturally, as soon as LG #27 appeared, the link I had provided for the binary Maxwell word-processor distribution was no longer valid. The Sunsite incoming directory was cleaned up for the first time in many months and the file was moved. Here are two links which hopefully will work for a while:

Still no news on whether Maxwell's source will be made available or not.

NEdit

NEdit version 5.02 was released recently. It's available from the home site. This Motif-based editor has become increasingly popular due to its easy-to-learn interface and intuitive mouse support. Its CUA-style menus and keystrokes are easy to learn for users coming from a windows or mac background, and it's a good choice for people desiring a powerful, syntax-highlighting editor complete with a native macro language. If the prospect of learning Emacs or VI is daunting, NEdit is ideal.

XaoS

Version 3.00 of the fast interactive fractal zoomer XaoS was released recently by its maintainer Jan Hubicka of the Czech Republic. XaoS is now officially part of the GNU project; I'm not sure just what this means beyond receiving Richard Stallman's blessing and new availability from the GNU FTP site.

This version has many new features, though the interface to them is still a series of text-based information panels. An interesting animated tutorial is now included which can be accessed by typing h twice. There are now so many new options and filters that the tutorial is very helpful in gaining an overview of XaoS's powers. Check out the "motion blur" filter, which really makes you feel like you are plunging headlong into the fractal depths. Here's a small screenshot:

motion blur screenshot

Another nice effect is gray-scale embossed fractal zooming, which looks like this:

embossed fractal

XaoS has really come a long way since the first version I tried. Keep up the good work, Jan!

Tcd

A non-beta 2.1 release of Tcd and its GTK version Gtcd is now available from the home site. This has become one of the nicest CD-players available, with theme support using pixmap backgrounds. Several sample themes are included, some of which were contributed by users. The bug which caused the GTK version to crash at the end of a CD has been fixed, and the CDDB support has been enhanced.

XEphem

Version 3.1 of XEphem, Elwood Downey's astronomical program which I wrote about in LG #25, has been released. Noteworthy is the announcement on the XEphem home page of recent successes compiling XEphem with Lesstif rather than Motif. Here are some of the other new features:

  Sky View:

  General:

Lout

Jeffrey H. Kingston has released the source for version 3.12 of Lout, an upstart TeX/LaTeX competitor. New in this release is an option to output PDF documents rather than Postscript. As usual, the GPL-ed source can be obtained from this FTP site.

WordNet

In my review of the WordNet dictionary/thesaurus package last issue I mentioned that it would be useful to be able to compile the source, and that success had eluded me. Christopher Richardson e-mailed me a suggestion which enabled the WordNet files to build here; it's worth trying this if you have installed the package. The change is small, just a couple of lines in the top-level Makefile.

Try commenting out line number 101 (LOCAL_LDFLAGS = -static), then change line 135 from

WNB_LIBS = -ltk4.2 -ltcl7.6 -lX11 -lm -ldl -lsocket -lnsl

to

WNB_LIBS = -ltk4.2 -ltcl7.6 -lX11 -lm -ldl # -lsocket -lnsl

The other changes needed in the Makefile are explained well in the comments. A natively compiled WordNet wish interpreter is only 61 kb., whereas the included statically-linked interpreter is 1.39 mb.

S-lang

John Davis, developer of the S-Lang programming language and a collection of excellent programs which make use of it, has released a new version of the S-Lang library package, along with new versions of the slrn newsreader, the jed emacs-like editor, and the most pager. One of the most interesting changes is the inclusion of exhaustive and readable documentation for the S-Lang language in a variety of formats. If you install the new S-Lang library and header files, any applications which use S-Lang will have to be recompiled.

Bomb

Scott Draves has released a new version of his Bomb interactive visual stimulus package, which I reviewed in LG #18. The svgalib and X programs have been merged into one executable, and it now works in X windows on 8, 16, and 32 bpp displays. Rather than needing separate executables for console use (using svgalib) and X, this new version will detect the current display-type and adapt itself accordingly. Bomb in an X session is no longer limited to 8-bit (256 color) displays; I've been using it in 16-bit X sessions and it works well, though it runs somewhat slower than in a full-screen console display.

An interesting new feature is the addition of Scheme-based scripting. GNU Guile is the Scheme-variant which Bomb needs, and a compiled libguile and other necessary files are included in the archive, along with a script which is supposed to run Bomb with these files loaded rather than with any Guile version which might happen to be installed elsewhere. Several sample Scheme scripts are included as examples. I couldn't get it to work, though it looks to be an interesting development. Perhaps when Guile development stabilizes and a new official release is available (1.2 is the current release), scripting Bomb's behaviour will be possible for Linux users. Perhaps my Guile problems with Bomb are due to my particular set-up; if it worked for you, let me know!

Version 1.18 of Bomb is available from this WWW site.


Last modified: Wed 29 Apr 1998


Copyright © 1998, Larry Ayers
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


Open Source Summit

By Eric Raymond


On April 7th, 1998, a select group of the most influential people in the Open Source community gathered in Palo Alto to meet each other, consider the implications of Netscape's browser source release, and discuss where the Open Source movement is headed (and, especially how it can work with the market rather than against it, for the benefit of both).

The summit was hosted by O'Reilly & Associates, a company that has been symbiotic with the Open Source movement for many years. Linux's own Linus Torvalds attended. The inventors of all three major scripting languages were present: Larry Wall (Perl), John Ousterhout (Tcl) and Guido Van Rossum (Python). Eric Allman (Sendmail) and Paul Vixie (BIND/DNS) were present, representing their own projects and the BSD community. Phil Zimmerman, the author of PGP, was there too, as was John Gilmore, a co-founder of Cygnus and the Electronic Frontier Foundation. Brian Behlendorf spoke for the maintainers of Apache. Jamie Zawinski and Tom Paquin represented Netscape and mozilla.org. For my semi-accidental role in motivating the Netscape source release with ``The Cathedral and the Bazaar'', I also had the honor to be among those invited.

We met from 8:30AM to 5PM, following up with a well-attended press briefing. It was invigorating just to be around the amount of intelligence and accomplishment there, and a bit sobering to realize how absolutely critical their work has become--not just to the hacker culture but to the world expecting the Internet to become the vital communications medium of the next century.

One of the most important purposes of the meeting was simply to permit everyone to meet face to face, shake hands, look in each others' eyes and hear each others' voices. Many of us had never actually met each other before, despite having been in e-mail conversations for many years. Tim O'Reilly felt (correctly, I think) that Net contact has not been quite enough as a community builder; that the opportunities and challenges we face now require an attempt to build more personal trust among the chieftains of the major Open Source tribes.

In that, I think, the meeting was very successful. But it also certainly dealt with substance as well. We discussed different perspectives on the Open source/free software phenomenon and different definitions of it. One of the meeting's important results was a general agreement that, in all the variant definitions, public access to source was the most important and only absolutely critical common element.

We discussed the vexing issue of labels, considering the implications of ``freeware'', ``sourceware'', ``open source'', and ``freed software''. After a vote, we agreed to use ``Open Source'' as our label. The implication of this label is that we intend to convince the corporate world to adopt our way for economic, self-interested, non-ideological reasons. (This is the line of attack I've been pursuing though www.opensource.org and many recent interviews with the national press.)

We talked about business models. Several people in the room are facing questions about how to ride the interface between the market and the hacker culture. Netscape is approaching this from one side; Scriptics (John Ousterhout's Tcl company) and Eric Allman's commercial Sendmail launch are approaching it from the other. No one is certain yet what will work, but we were able to identify common problems and some possible strategies for attacking them.

We talked about development models--the various ways in which projects are organized, the strengths and weaknesses of each model, and what our individual experiences have been. There were no magic insights, but again it seemed helpful to recognize common problems.

We all understood this meeting could be only a beginning. Late in the day we developed a tentative agenda for a larger follow-up conference which O'Reilly may host later in the year. We hope to bring other key people from the Open Source community in on that follow-up--one of the last things Tim asked us to think about was who should have been with us, but was not.

The day ended with a well-attended press briefing at which all of us answered questions from Bay Area and national reporters--some got the message, some didn't. For every one that genuinely wanted to understand the logic of the Open Source approach, there was another who repeated ``lets-you-and-him-fight'' questions about Microsoft. Still, the first burst of publicity about our gathering (it is two days later as I write) has been very positive.

We are entering a very exciting time. In the wake of the Netscape release, the Open Source community has achieved a visibility it never had before. We're making friends in new places and meeting new challenges. The larger world we're now trying to persuade to adopt our way doesn't care about our factional differences; it wants to know what we can do for it that is valuable enough to motivate a major change in the ground rules of the software industry.

To do that persuading, we'll need to pull together as one community more than we have in the past. We--not just the Linux community but the BSD people, the Perl, Python and Tcl hackers, the Internet infrastructure people and the Free Software Foundation--will need to present one face and speak one language and tell one story to that larger world.

That is, ultimately, why this meeting was so important. All of us came away with a better sense of what that story is and how each of the major tribes fits into it. Just the fact that we faced the reporters (and, by extension, the rest of the world) together was a very powerful statement. The summit was a good beginning--one to build on in the coming months.


Copyright © 1998, Eric Raymond
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


"Linux Gazette...making Linux just a little more fun!"


Open Source Summit Trip Report

By Guido van Rossum


Date: Fri, 10 Apr 1998
This last Tuesday was the date for O'Reilly's Freeware Open Source summit and press conference. I promised a few people a trip report, and decided I might as well post it to the Python newsgroup. Warning: this is biased and may occasionally be mistaken where facts are concerned -- read at your own risk! It's also longer than I planned -- we did cover a lot...

Background

Tim O'Reilly realized that freely available software is much more important for the Internet than most people, especially many of the higher-level decision makers and those who inform them (e.g. the press), think. The common perception is that the Internet is mostly built on proprietary software, e.g. Solaris, Netscape, and Microsoft products. But many crucial pieces of software are in fact not proprietary: sendmail delivers 80% of all email, BIND (the Berkeley Internet Name Daemon) does most of the name-to-IP-address translations, Apache is the #1 web server, the most popular encryption software is PGP, the three most commonly used scripting languages are Perl, Tcl and Python, and so on.

All these great pieces of software are freely available in source form! What's going on here? Of course, we all know why this is -- this is a great software development model. But corporate America is slow to discover this. The release of the Mozilla sources by Netscape was the first hint that this may be changing -- and once this was announced, O'Reilly started to receive calls from the press about this mysterious "freeware", and realized there was an opportunity to increase the press's awareness of open source software.

The meeting had two distinct parts: first, the invited software developers discussed the merits and problems of open source software, their plans, and so on. Second, at the end of the day we held a press conference. I'll report on both events separately.

The Summit

Basically, the gathered software developers talked amongst themselves from 9 till 5. It would have lasted all week if it weren't for Tim O'Reilly's talent as a moderator! Most participants were from the Bay Area; I flew in from Washington DC, and John Ousterhout (who normally also lives there) interrupted his vacation in Hawaii for a day.

We started off with a round where everyone was asked to mention their motivation and positive experiences: why did you make source available, what worked well, what do you like about the process. In the end we agreed on two main reasons why the open source development model works so well. Most other reasons can be reduced to a special case of either of these.

One, from the developer's point of view, there's the advantage of *massive peer review* (also formulated as "debugging is parallelizable"). There is no other methodology for software development that yields software that is as reliable and robust as open source.

Two, from the user's point of view, the big advantage of open source is flexibility. Linus Torvalds emphasized this with the following example: he is generally quite happy with Netscape's browser, but he has one wish: to disable animated GIFs, which are used almost exclusively for advertising. Without source, he couldn't do this! (Jamie Zawinski of Netscape called this "scratching itches." :-)

Other advantages of open source software development that were mentioned included low cost technology transfer, and the use of a reference implementation to help develop a standard.

As far as the initial motivation for making source available, a (surprising, to me!) large number of developers said their initial or ulterior motivation was moral/ethical: they believe that it is "the right thing to do" to make source available.

In the next round, we discussed our negative experiences -- what doesn't work, what are your biggest problems, and so on. Apart from some joke entries like "our biggest problem are stupid people", two problems were common.

One, as a package becomes more popular, the developer spends more time on helping users than on developing software. While there are ways to avoid this (e.g., don't answer email), it remains a problem -- without a support organization, you're it! This can be summarized as "you're crushed by your own success."

Two, it's often a problem to get the people who want to contribute to do so in a meaningful manner. The darkest picture was painted by John Ousterhout, who claims that a contributed patch saves him only 50% of the development time compared to writing it himself from scratch. Some agreed; others (like me) rebutted that it's a sliding scale -- yes, for the core of the package, this may be true -- but there are many peripheral items where much contributed code can be accepted "as is" -- even if it doesn't work -- since the massive peer review / debugging once it is released will eventually fix it. Linus has an extreme but clear point of view: the *interfaces* need to be designed carefully by the main developer; the implementations may be buggy. For example, Linus doesn't mind at all if there are some buggy device drivers -- that only affects a small number of people, and only until they get fixed -- while a bad design will haunt you until the end of times. (This matches my own experience, but Linus said it clearer.)

This boils down to a matter of control. It was noted that almost all systems represented have a core that's kept under (relatively) tight control by the main developer, and a well-defined and flexible extension mechanism which is used by most contributors, where control is less important.

Other problems that were mentioned included the current intellectual property laws and the way the legal system is abused to enforce them in strange ways, and unfair "badmouthing" of open source software by competitors trying to peddle proprietary solutions. Also, the distribution model (revenue model) isn't ideal -- you can't buy most freeware packages at your local neighborhood software supermarket, even in Palo Alto. (You can buy Red Hat Linux there, though!) Code bloat was also mentioned (but the Netscape boys pointed out that this is not a unique problem of open source software :-).

After lunch, we discussed what to do about the problems.

We didn't say much more about the control issues, except to note that managing a distributed development team like the contributors to the average open source package is a bit like herding cats. (The first time this came up I heard "hurting cats", which I found a bit strange -- luckily Tim or other O'Reilly people made copious notes on a flip board. :-) The best contribution (for me) came from Eric Raymond and Cygnus' John Gilmore, who noted that it's possible to train your contributors, (e.g. through style guides, coding standards etc.), and that this is actually an effective way to improve the quality of the contributions. One way to go at it is simply saving scraps of "internal documentation" as you are producing them, e.g., in response to email questions from other developers, and in a couple of years, voila, an internals manual!

The rest of the time (and also interspersed throughout the rest of the day) we discussed various business models that may make open source a sustainable activity, rather than a hobby or a questionable skunkworks activity.

It turns out that almost everyone present was involved in an attempt to commercialize their software -- and *everyone* wanted to do so without making the sources proprietary. Everybody's situation is a little different though -- sometimes because of the user base of their software, sometimes because of the competition, sometimes for legal reasons, and sometimes simply because they have different motivation.

For example, John Gilmore told us how Cygnus is successful selling GCC ports to the embedded systems industry -- a small niche market that, before Cygnus came in, was monopolized by a small number of compiler companies who'd charge a million to retarget an existing compiler to a slightly different chip.

Another success story was told by Sameer Parekh of C2Net, who are selling Stronghold, a commercial, secure version of Apache. Because of the patent situation on encryption software, there is no free encryption code that can be used for commercial purposes, so companies in need of a web server with encryption have to pay *some* vendor. Note that C2Net provides their customers with the source for their version of Apache, but only with binaries of their encryption library.

Yet another story was told by Paul Vixie of the Internet Software Consortium, a non-profit that's maintaining BIND. Some big computer vendors paid the ISC a lot of money for Paul to further develop BIND, and didn't mind that Paul would make the sources available for free to others, as long as the work got done.

There were also those for whom it was too early to declare success (or failure): Larry Wall and Linus Torvalds aren't making any money directly off selling copies of Perl and Linux. Others are, though, and of course O'Reilly makes a lot of money on the Perl books -- as do other publishers. Linus has an exciting non-Linux-related job at Transmeta and has no plans to personally commercialize Linux; Larry, however, is working for O'Reilly, and there are some plans to commercialize at least the Windows port (which is done by an outside company with some kind of license agreement from O'Reilly).

John Ousterhout has just made the jump to the commercial world for Tcl/Tk with his new company Scriptics, formed after Sun canceled its plans for producing Tcl/Tk products. John is planning on a mixture of open source and proprietary software: Tcl and Tk themselves will remain open source forever, but Scriptics plans to make money off proprietary tools like a debugger and a source analyzer. One reason to keep Tcl/Tk free is to ensure that nobody has an incentive to "fork off" an incompatible version.

Eric Allman of Sendmail, Inc. told a similar story -- he had first hoped to create a consortium but found all doors closed, so in order to remain in control he quit his job and formed Sendmail, Inc. with Greg Olson. He promises that a free version will remain available, but seems to aim at licensing it to the big computer vendors.

While everybody's story is different, there's one common line: everybody is working on a *sustainable* business model that produces a sufficient revenue stream to pay for developers and a support organization, without giving up the advantages of open source software. As Netscape's freeing of the Mozilla source shows, this idea is even getting some attention amongst traditional proprietary software vendors!

Freeware, Open Source or Sourceware?

We spent some time discussing the terminology of choice. Tim took a straw poll. Free software or freeware got almost no positive votes (the cutesy "freed software" even got many negative votes). The winner was a tie between open source software (favored by Eric Raymond) and sourceware (which has been used by Cygnus).

I've had some reservations about "open source", but I like it better than the too-cute sourceware, and I agree with the perception that freeware has a bad reputation -- and of course, much "freeware" comes without source, while the common factor of the software represented at the summit is the availability of source code.

Eric Raymond has trademarked the term "Open Source" (capitalized) and has a somewhat precise definition of what is or isn't Open Source on his web site (see below). I sometimes worry that this can become a limitation: what if I call my software Open Source, with his approval, and later I change the terms and conditions, or Eric changes his definition -- I could be sued by someone who says I have to stick to the Open Source rules. Eric believes that this won't happen, and besides, says that everyone is free to use "open source" (lowercase) without sticking to his definition. We'll see -- for now, I'm favorable to the concept, but we won't put "Open Source" on the Python web site yet. (Note that we don't use "freeware" either.)

Where next?

We briefly discussed how to approach the press and possible follow-up meetings. The general conclusion seems to be that the time is ripe to try and get the message to the next level of managers in companies that are already using the products of open source software development -- the CIOs who don't even know that their developers are using Perl or Python, and who only listen to their peer CIOs, the Wall Street Journal, and the expensive consulting and market analysis firms that haven't discovered open source software yet either. How are we going to do that? Clearly the press conference is a step in the right direction, and O'Reilly will be following up with the press. Eric Raymond is very active in talking to corporate people. Sarah Daniels of Scriptics has some big ideas for a joint ad campaign (we'll see...).

What I got out of it? Lots -- more clarity about why open source software works so well, and how to make it work even better, as well as motivation to try and find a revenue stream.

The press conference

The press conference started around 5:30 and lasted until 7:30 or 8:00 PM. All the developers sat behind a long table with name tags in front of them, with Tim O'Reilly in the middle. There were about 20-30 reporters; for the first hour we had our pictures taken about twice a second. Tim O'Reilly gave a short introduction (see URLs below) and then let the press go loose. They mostly picked the better-known names, so I didn't get to say much (of course, much attention went to the two guys from Netscape).

As predicted, it was at times difficult to divert the subject away from "how are you taking on Microsoft" or "clearly this can't work". With some journalists, you can give a perfectly clear answer to the question, and all they do is give you a blank stare and ask the same question again with slightly different words. But most of them were really trying to understand the message (and some clearly had already gotten it before they came). Once the formal part of the press conference was over, everyone stuck around and many one-on-one or two-on-two interviews were carried out.

All in all it was an interesting and useful event; see below for the first results. Of course, we'll have to see if we really change the perception of the open source software development model as a fringe freak issue...

References

(These predate the summit.)

Press coverage

(As forwarded to me by O'Reilly's PR team.)

Before the summit, Tom Abate wrote a column in the SF Chronicle, "The Brains Behind Freeware to Meet."

"Open source gurus convene"

Wednesday's San Francisco Chronicle had a full page piece by Tom Abate on the Open Source story, containing interviews with (and pictures of) Larry Wall, Linus Torvalds, Paul Vixie, and Tom Paquin. The on-line version, sans pictures.

Also, NPR ran a long piece on Linux Wednesday evening. There's a RealAudio version.

Judy DeMocker's piece in Meckler's internetnews.com.

John Markoff is planning on running his piece on the summit in next Monday's New York Times.

For even more information on this topic, see WebReview.

--Guido van Rossum (home page)


Copyright © 1998, Guido van Rossum
Published in Issue 28 of Linux Gazette, May 1998





FOR IMMEDIATE RELEASE: April 15, 1998
FOR FURTHER INFORMATION CONTACT
http://www.oreilly.com/, http://www.webreview.com/


OPEN SOURCE PIONEERS MEET IN HISTORIC SUMMIT

Developers of key Internet technologies confirm advantages of open source development process and agree to cooperate in spreading the word

Sebastopol, CA--Heavyweights of the Internet software community met in an historic summit in Palo Alto on April 7 to explore ways of expanding the use and acceptance of open source software development, which relies on wide distribution of program source code to spur innovation and increase software quality. Organized by Tim O'Reilly, CEO of O'Reilly & Associates, the attendees included creators of underlying Internet services such as the Domain Name System and email routing, as well as web servers and browsers, scripting languages, and even whole operating systems.

The meeting's purpose was to facilitate a high-level discussion of the successes and challenges facing the developers. While this type of software has often been called "freeware" or "free software" in the past, the developers agreed that commercial development of the software is part of the picture, and that the terms "open source" or "sourceware" best describe the development method they support.

Open source software, or sourceware, was defined at the summit as "software whose source code is available, so that users can customize or extend it." This is in contrast to most software, whose source code is not available to the public. Sourceware may be available for free or in commercial packages.

Summit attendees also agreed on the most important aspects of open source software:

  1. Flexibility. Because the source code is freely available, any given program may have hundreds or thousands of developers. Each open source community has tremendous flexibility in modifying the program. Developers can modify the software to suit their needs, or the needs of their companies, customers or communities. Stability and consistency for open source software is typically maintained by the creator or a development team who controls the core release of the software. Commercial entities generally can't afford to spend the resources on niche markets, of which there may be thousands. But developers working on their own can easily do so, then make their work available to others for further modification and improvement.
  2. Innovation. The development model encourages tremendous innovation. When developers can see and modify source code, they receive rapid feedback and a constant flow of ideas from other developers. Innovation is also taking place with many companies creating new approaches to business, successfully integrating sourceware and commercial efforts. Many of the companies present at the summit freely distribute source code, and earn revenue through offering services, support, documentation, customization, or additional software products to their customers.
  3. Reliability. With hundreds or thousands of developers testing, inspecting, and fixing bugs for a given program, the quality assurance program for open source software is far more reliable and efficient than any commercial effort can afford to be. Massive, independent peer review, similar to what takes place in the scientific community but on a much larger scale thanks to the Internet, is a major strength.
  4. Faster development time. With so many more testers, development cycles can go much faster than in typical commercial efforts.
The group identified numerous ways that sourceware is already mission-critical throughout industry, academia, and government, dispelling the myth that IT managers won't rely on free or open source software. As Tim O'Reilly pointed out at the press conference following the event, at least two of the open source programs whose developers attended the summit, Bind and Sendmail, form the backbone of the Internet infrastructure that all Internet-connected companies rely on. Languages such as Perl, Tcl and Python are intimately involved in the operation of virtually all major web sites, and Apache is the server of choice for more than half of all web sites.

The attendees agreed that future collaboration would take place in coming months, including workshops on open source business models, project management and source code licensing issues, and coordinated public relations efforts involving open source programs. There are tens of thousands of developers worldwide who were not at the summit, but who are integral to the development of open source software. Followup meetings will focus on bringing together larger groups.

Spreading the word about the importance and value of open source software was seen as vital to the group's efforts. O'Reilly noted, "Until Netscape announced that they would release the source code to Communicator, open source software received little attention in the press. Now everyone wants to know about it. It's important to realize just how successful and widespread open source development is. Much of today's most innovative and important software has been built using this model."

OPEN SOURCE SUMMIT ATTENDEES & AFFILIATIONS

Attendees included:
* Tim O'Reilly, CEO of O'Reilly & Associates, publisher of books on Linux, Perl, Apache, DNS & Bind, sendmail, Tcl, PGP, and other open source software, and presenter of the Perl Conference;
* Linus Torvalds, creator of the Linux operating system, considered by many to be the only real competitor to Microsoft's hold on the desktop;
* Tom Paquin and Jamie Zawinski of mozilla.org, Netscape Communications;
* Larry Wall, creator of the Perl language, which is used even more widely than Java to create active content and manage web sites;
* Brian Behlendorf, one of the founders of the Apache Group, whose Apache web server runs more than 50% of all web sites;
* Sameer Parekh, President of C2Net Software, Inc. and member of the Apache Group;
* Eric Allman, CTO of Sendmail, Inc. and author of sendmail, the mail transport agent which routes over 75% of mail on the Internet today;
* Greg Olson, CEO of Sendmail, Inc.;
* Paul Vixie, maintainer of the Bind program, which manages the Internet's Domain Name System;
* John Ousterhout, CEO of Scriptics Corp. and creator of the popular Tcl scripting language, which is widely used for rapid GUI development, web content generation, and extensible applications;
* Guido van Rossum, creator of the fast-growing Python language;
* Phil Zimmermann, creator of the well-known PGP (Pretty Good Privacy) cryptography program;
* John Gilmore, co-founder of Cygnus Solutions, commercial supporters of open sourceware programming tools like the ubiquitous GNU C compiler; and
* Eric Raymond, independent developer active in the Linux community and author of the influential paper, "The Cathedral and the Bazaar."

RELATED WEB SITES

Apache: http://www.apache.org/
Bind: http://www.isc.org/
C2Net/Stronghold: http://www.c2.net/
Cygnus Solutions: http://www.cygnus.com/
Free Software Foundation: http://www.fsf.org
Linux: http://www.svlug.org/
Mozilla: http://www.mozilla.org/
Netscape: http://www.netscape.com/
Open Source: http://www.opensource.org/
O'Reilly: http://www.oreilly.com/
Perl: http://www.perl.com/ also http://www.perl.org/
PGP: http://www.nai.com/products/security/freeware.asp
Prime Time Freeware: http://www.ptf.com/
Python: http://www.python.org/
Scriptics/Tcl: http://www.scriptics.com/
Sendmail: http://www.sendmail.com/
WebReview: http://www.webreview.com/




"Linux Gazette...making Linux just a little more fun!"


A Tale in Writing

By Martin Vermeer


A tale in writing

I suppose most of you are like me: every evening watching the TV, and on Friday going out and renting the latest DVs from the corner store. Everybody does that; after work, you want some fun and don't want to think too much, and those stories are quite good, actually. Of course they all look the same, after a while; I suppose that's what you get when one company controls all production. But at least it's safe for the kids to watch. You know what to expect for your money.

Now the other day, a couple years ago, a funny thing happened to me. A friend -- I won't tell you his name -- put this thing into my hands. A small rectangular thing; you could put it into your pocket. No power cord; funny. And you could open it, just like that. Inside, sheets and sheets of white paper bound together; and on those sheets, small black marks, thousands of them. You could open the thing -- a book, they call it -- at any point; it is random-access, just like a video disk, if you know what I mean. Not like those old-fashioned video tapes that you have to reel through to get to the point you want.

Now the most fascinating thing about this book, and those black markings: they mean something! Some people actually look at them and get the meaning straightaway, turning page after page, taking in a complete story as if they were watching it from the DV screen. I looked at it with amazement. It was really baffling -- there were just these black signs on the paper, rows and rows of them, letters and words and sentences -- and as dead as doornails. But the moment someone capable of taking them in looked at them, they turned into a living story, with really living people talking back to the person looking at the book! You know, I got all worked up about it when the reality of the thing finally sank in. Some of those words even referred to non-existing things. Boy, this was wild stuff!

I wanted to learn this too. I told my friend, and he said "you want to learn to read? Sure!". That's how I started learning to read books. It wasn't easy, mind you. It was a lot of hard work, and took me many months before I would be able to understand the meaning even of simple text (sorry for the jargon). Several times, I would quit in desperation. But suddenly it started to make sense, and things would miraculously come to life before my mind's eyes -- without any television screen, just me and my mind. A new universe opened itself to me!

Through my friend, I met other people who had gone through the same experience. What struck me was, how friendly and helpful -- and civilized -- they all were. They didn't look down on me for only recently having joined their ranks; no, they helped me, provided me with books to read, and gradually I became versed in the ways of this new culture, and made a habit of reading books all the time. If only I could explain the experience, of real people from near and far, coming to life just from dead marks on paper, no electricity, no display screen involved, no nothing... just the miraculous working of the unaided human mind...

My family was worried about me; they witnessed with growing concern how my previous voracious appetite for digital video cartridges all but disappeared -- those are really, and I mean really, mediocre and devoid of imagination once you get to know books -- and I would withdraw with these weird, archaic-looking rectangular paper objects, spiritualistic stuff that sane people would not have anything to do with... I tried, with little luck, to explain to them what had happened to me.

Now, reading has become a way of life for me; I sometimes withdraw to remote places, with just books as company. On one occasion I climbed a tree to read a book while sitting up there; very uncomfortable, but I just wanted to show myself that it could be done, as books have no power cord, etc. Since then I've found out that this is rather typical behaviour for newbie reading geeks. Or, I go out and meet my reading friends, and discuss at length all the things of common interest. There is no end to it really.

A funny thing about the reading subculture is that you can get a discussion going about the most far-out and irrelevant subjects. I remember a heated debate going on over many evenings on where the page numbers (the sequence numbers added to the pages in a book to more easily refer to them) should be put: bottom right, bottom middle or top right! One would imagine that better uses for one's time could be found... and then there are books containing, in addition to text, pictures. These are a sort of hybrid between "real" books and digital videos. I have been told that they may help to spread the reading art to a broader audience... others, however, especially the veteran reading subculture members, are disgusted by this, saying that it contaminates the true and noble art and is a concession to commercialism -- meaning, of course, Universal Digital Video Inc.

One thing I also learned was that books, or texts, are something you can produce yourself. You can put black marks on paper -- writing, they call it -- until you've got the equivalent of a book made all by you. Then, when people read your book, you spring to life before their eyes, and you can tell them whatever you want -- without even appearing before a camera! Imagine.

Not that producing text is easy! I know, because I tried it. I still do it to keep a record of my experiences, for later (I have since learned that many people do this). But the things I have tried to write for others turned out rather awful. It takes skill and training, lots of training, to produce something worthwhile! That's what experienced writers have told me. They also kindly offered to help me develop my skills. Perhaps someday...

All this is now several years past. You may have noticed, from the above, how helpful people in the reading subculture are towards newcomers; they really go to great lengths to help you, if you are prepared to learn. They have little patience with intellectual laziness. And you know something: I too, quite automatically and self-evidently, adopted the rules of the subculture, and I too find myself instructing newcomers in the noble art and its cultural premises. And I am writing texts that are read by people, about things existing and non-existing, about people living and dead and imaginary, in the comforting knowledge that, by my writing, all this does exist, and all these people do live, in a very real sense. Figure that. It is indeed better to give than to receive!

As a final word, I have referred to the community of reading people -- literates, they call themselves -- as a subculture. Numerically this is true, but I find it unfortunate. They should be the mainstream culture! Think, however, of the effort required to teach the whole population the art of reading! This would be obviously quite unthinkable. Imagine, a fully literate population! Bullshit. So, a subculture it will undoubtedly remain, and I have been fortunate and privileged to have been able to join it. You can too -- you only have to want it hard enough.

Welcome to my world!
 

Martin Vermeer
mv@liisa.pp.fi

Any similarity to real circumstances in the real world is wholly and fully intentional.


Copyright © 1998, Martin Vermeer
Published in Issue 28 of Linux Gazette, May 1998




"Linux Gazette...making Linux just a little more fun!"


Where Nothing Else Will Do

By Chris Gonnerman


Introduction

Long, long ago I was hired for my first paying job, working on a Tandy 16B running XENIX. It had about a meg of RAM (if I remember right), a total of 30 Mb of hard drive (two 15 Mb SCSI-1 external drives), and it actually served three users at the same time!

Some years later I joined the Air Force, and because I knew Unix (and liked it), I was assigned to program and administer a pair of Unix-based Motorola computers. Wow! Four meg of RAM, 60 Mb of hard disk, and 32 serial ports each, running Unix System V Release 3. I thought I was in heaven! (Note, Windows 3.0 was just about to be released...)

Well, to make a long story short, several years later I have my own company, New Century Computers, selling and servicing Intel and compatible PC equipment. Most of these computers run Windows 95, and while I have always liked the interface, I have always missed Unix. The Motorolas I worked with were shut down every six months (whether they needed it or not) and cleaned and serviced. Crashes? Only in the early stages, when we were debugging the vendor's tape drivers.

I don't remember when I first heard of Linux, but after browsing the book section at WaldenBooks I bought "Linux SECRETS" by Naba Barkakati (printed by IDG Books). I installed Linux on a spare 486 and played around with it. Still, my customers want Windows, so I sell them Windows.

But sometimes, Linux is the only choice. Nothing else will do.


Evil DOS Programs

One of my oldest customers is the Juvenile Office in Adair County, Missouri. When the chief Juvenile Officer called me one day and said he needed to run Lotus Notes on his network, I started sweating bullets.

A few years ago (before I started doing business with them), the JO purchased a custom written DOS program called "Juvenile Court Records." The programmer wrote it using the Clarion database manager, and he was not much of a programmer. They ran it on a LANtastic network.

This "evil DOS program" requires 615 Kb of conventional memory; in fact, it only runs right with 620 Kb or more. They managed this under DOS using a QEMM (Quarterdeck Expanded Memory Manager) feature called VIDRAM. VIDRAM remaps the color text screen to the monochrome address range and adds the color text memory region to the conventional memory pool, resulting in 784 Kb of total conventional RAM. After DOS boots and the LANtastic TSRs load, they still had 630 Kb or so.

Anyway, this program will not run under Windows 95. No way, no how. Windows 95 will never allow enough conventional RAM, and VIDRAM won't work there (blows up very nicely). I knew this, and so I began to sweat bullets. What Mike (the chief Juvenile Officer) expected to see was a Windows 95 window with JCR in it on the same screen as a Lotus Notes window.

Why is keeping JCR running so important? First, because all of their records regarding the disposition of juvenile cases for the last five or six years are in it. We can't get the Clarion source code; the programmer has evaporated. We could probably rewrite the program from scratch, but the State of Missouri (Office of State Courts Administrator, or OSCA) has stated that they will be providing a case-management program "in about two years." Spending to rewrite the program would cost more than keeping it running (at this point), and would be wasted money when OSCA delivers their solution. So, they are stuck with this boat-anchor program.

Enter Linux. I had been searching for an answer for most of a month when I remembered DOSEMU. I fired up XDOS on my Linux box and looked at the conventional RAM total.

627Kb free. Hallelujah!

We sold them a new file server, a Pentium 166 with 16 Mb RAM, running Debian Linux. I installed DOSEMU 0.66.7, straight from the CD, and then configured it with Windows 95 DOS. While OpenDOS is supposed to get an extra K free (628 Kb), since Windows 95 comes with a new system, I just used it. After SYSing the disk image with Win95, I basically installed the minimum DOS commands needed to run the JCR program and work with the filesystem. I copied the EXITEMU command to the names LOGOUT and LOGOFF. I also set up "conventional" file sharing via Samba. Then the work began.
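For the curious, the Samba side of that takes only a few lines of smb.conf. Here is a hypothetical sketch, in the syntax of the Samba 1.9.x releases current at the time; the workgroup, share name, and path are invented for illustration, not the actual Juvenile Office setup:

```
; Hypothetical smb.conf fragment for "conventional" file sharing.
; Workgroup, share name, and path are invented for illustration.
[global]
   workgroup = JUVENILE
   security = share

[jcr]
   comment = Juvenile Court Records data files
   path = /home/jcr
   read only = no
```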

First, several DOS machines had to be upgraded to Windows 95, which of course required RAM for most, hard drives for some, and new mainboards for a few. To access the server from these Windows 95 machines, we elected to use NCD PC-Xware. Of all the commercial X Servers for Windows 95, it is the simplest (and cheapest) I have found. It makes "shortcut" icons for connections automatically after you go through the Connection Wizard, and it works well with XDM.

As an aside, telnet would have worked, but getting a usable color telnet is hard. The JCR program is written like many DOS programs, using a broad range of colors. XDOS over PC-Xware also makes all the function keys work.

Now, Mike (and the other JO employees) can see Lotus Notes and the JCR program at the same time, on the same system. No rebooting. The total price tag for PC-Xware was higher than the cost of the equipment, but the customer is happy.


Unexpected ISP Problems

The Paris R-II School District (in Paris, Missouri; I'm not international yet) is another regular customer. In the High School, they have a well-stocked lab of Windows 95 computers with a Windows NT 3.51 server. MORENet, a state contractor, provides them Internet service at a subsidized rate, and the High School and Elementary are connected via fiberoptics. The whole Internet connection setup was a bargain (as these things go) since the school district received a lot of donated labor and discounted materials.

Unfortunately, the Junior High is across town. After looking into leasing a line between the two locations, and considering even radio frequency networking, the Superintendent hit upon a better idea. He approached several ISP's in neighboring towns (Paris had no ISP at that time) and offered them co-location on school district property. One of them, MCM Systems in Moberly, Missouri agreed: In return for unlimited free rent of a closet, the Junior High would receive unlimited free Internet access. MCM Systems installed a dial-up router in the closet of the Band Room and we ran Ethernet cable from there to the Computer Lab. MCM is providing Internet access in the Paris area for the cost of the leased line and equipment only, and they are happy (as far as I can tell).

So, I arrived one day to set up the Computer Lab. After installing all the net cards and connecting everything, I called MCM Systems to get the IP address range we were to use.

The tech I spoke to informed me that we had a range of ten IP addresses available. Oops. In the Junior High they already had about 25 computers to connect to the Internet. We had a problem.

This time it took me only a few minutes to decide on Linux. My mandate was to "make it work," and outside of the costs already bid the district didn't have a lot of money to spend to do it. I selected a computer in the Junior High lab with a Pentium 75 mainboard, transferred a larger (1.6 Gb) hard drive from another computer, and started installing Slackware Linux. I copied a full install set from the Internet to a DOS partition on the 1.6 Gb drive, created and booted from the boot/root set, and got to work setting everything up.

Naturally the whole purpose of this exercise was to use IP Masquerade, and this was my first time using this feature. I installed a second NE2000 compatible, set up a private network address (10.0.0.1) on the "inside" network adapter, and the first assigned Internet address on the "outside" adapter. So far, so good. I tested the installation from a Windows 95 computer using 10.0.0.2 (assigned through the network control panel), and it worked great!
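For readers who haven't set this up: on the 2.0-series kernels of the day, IP Masquerade came down to a couple of ipfwadm rules, roughly like the following sketch. This assumes a kernel compiled with IP forwarding, firewalling, and masquerading support; only the 10.0.0.0 network is from the setup described above:

```
# Forwarding policy: deny by default...
ipfwadm -F -p deny

# ...then masquerade traffic from the inside 10.x network to anywhere,
# so it leaves wearing the server's assigned Internet address.
ipfwadm -F -a m -S 10.0.0.0/255.0.0.0 -D 0.0.0.0/0
```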

The problem started when I installed ISC (Internet Software Consortium) dhcpd and tried to set it up. It configured and compiled fine, and I set the broadcast route as described in the README, but when I tried to start it I got an error message:

 
The standard socket API can only support hosts
with a single network interface. If you must run
dhcpd on a host with multiple interfaces, you must
compile in BPF or NIT support. If neither option
is supported on your system, please let us know.

I did some research, and learned that in this version of dhcpd, any system lacking Berkeley Packet Filter and Network Interface Tap technology (such as Linux) can't run dhcpd with more than one network interface (PPP and SLIP excepted). I examined the program closely, and hacked it to hard-code the interface name I wanted to support. After all, I didn't want to support both interfaces; the outside interface doesn't need DHCP.

Later, in the privacy of my office, I tore apart another copy of dhcpd. Instead of hard-coding the interface name, I added a command to the dhcpd.conf file. I also refined the lease-time format. Here is a sample:

 
dhcpd.conf
server-identifier 10.0.0.10;
interface "eth0";

subnet 10.0.0.0 netmask 255.0.0.0 {
option domain-name "paris.mcmsys.com";
default-lease-time 3 days 12 hours;
max-lease-time 7 days;
option subnet-mask 255.0.0.0;
range 10.0.1.20 10.0.1.250;
}

Previously, the *-lease-time commands took a single number of seconds, as in:

 
default-lease-time 302400;

I think my way is more readable.
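The conversion between the two forms is simple arithmetic. A hypothetical helper (the function name is mine, not part of dhcpd) shows what the new syntax computes:

```shell
# lease_seconds DAYS HOURS -- print the equivalent old-style
# single-number lease time in seconds.
lease_seconds() {
    days=$1
    hours=$2
    echo $(( days * 86400 + hours * 3600 ))
}

lease_seconds 3 12   # "3 days 12 hours" -> 302400
```

So `default-lease-time 3 days 12 hours;` and `default-lease-time 302400;` describe exactly the same lease.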

Later still I took this code to the school and implemented it. It worked fine, so I called the installation done. For the cost of labor and one extra net card, we converted a workstation into a capable server and solved an Internet connection problem.

We are running Apache on that system now, not so much for web services as for its web proxy. Twenty-five students on a single 56Kbps line is kind of slow, but with the proxy the performance is acceptable. It is also used as a file server via Samba.

I submitted my dhcpd patches to ISC, and received an email back from Ted Lemon telling me that my patch didn't do what I thought it did. Hmm. So I studied his explanation and concluded that my patch is not a good all-around solution. However, the BPF/NIT problem with Linux has apparently been fixed somewhere along the line, and ISC's version 2.0 dhcpd (now in final beta) handles it. I highly recommend ISC's dhcpd; it seems solid (even the pre-1.0.0 beta I originally used).

I have also uploaded my patches to Sunsite, as dhcpdpatch-1.0.0.tar.gz, in case someone would like to try the *-lease-time feature.


When Linux is the Best Choice

I love Linux. I am addicted to the features, the power, and the configurability. I especially like that I can work directly with the source for almost any program, fixing bugs and adding features as I need or want to.

The power of Linux is flexibility, stability, and economy... the ability to run a real OS on older or inexpensive hardware.

For my average customer (home users and small businesses with a single computer), Linux is not presently the best choice. These users don't have the depth of understanding to work with Linux as we do. (Frankly, few of them have any real understanding of Windows 95 either). The average user expects to be able to purchase anything off the shelf at Wal-Mart and have it work for them. Someday, maybe, Linux will be a good choice for these users, if we (the entire Linux community) keep supporting and improving it.


Copyright © 1998, Chris Gonnerman
Published in Issue 28 of Linux Gazette, May 1998


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back  Next


Linux Gazette Back Page

Copyright © 1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the Copying License.


Contents:


About This Month's Authors


Larry Ayers

Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP.

R. J. Celestino

Bob Celestino holds an undergraduate degree in Mechanical Engineering and advanced degrees in Electrical and Computer Engineering. He has been a Linux devotee for more than four years. When not recompiling his kernel or pushing Java to its limits, he enjoys spending time with his wife and three kids. He pays the bills by posing as a software engineer at Harris Corp. in sunny Florida.

Jim Dennis

Jim is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/ Peter Norton Group, and McAfee Associates -- as well as positions (field service rep) with smaller VARs. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the second edition of a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim.

Chris Gonnerman

Chris lives and works in the small northeast Missouri town of LaBelle, running New Century Computers. His wife, Tracy, works with him in his business, and he also employs two full-time technicians.

Michael Hamilton

Michael has been working as a freelance Unix C/C++ developer since 1989. More recently he's been working on web applications and Unix server administration. Michael tripped over one of Linus's postings back at the beginning of 1992 and has been hooked ever since.

Glen Journeay

Glen is a mechanical engineer with a background in automated test and real-time control using a variety of computer systems. He's designed, built and maintained automated test systems for Boeing and the U.S. Navy since graduating from the University of California, Davis, in 1983. Glen, his wife and two children live in Poulsbo, where they like to ski, hike, sail, and ride. He's been using a Linux-based home network since 1996.

Mike List

Mike is a father of four teenagers, a musician, and a recently reformed technophobe who has been into computers since April 1996, and Linux since July 1997.

Mark Nielsen

Mark is a Systems Specialist at The Ohio State University managing a Linux users group. His main goal is to try and help the downfall of the evil empire or to help split the evil empire into a company with the operating system and a company with the applications. He loves Perl, Apache, Linux, and PostgreSQL as free and open software for the people by the people to counteract the meaningless purchase of terrible software based on ignorance. With knowledge comes power. Spread the word.

Cesare Pizzi

Cesare started to play with computers when he was 10 years old; his first box was a Commodore VIC20. Over the years he changed systems several times, and a couple of years ago he met Linux. After the first impact, he started to develop software, little kernel patches, and so on. In real life, he works on a support team at HP as a contractor. He likes music, movies, travel and red wine.

Bob van der Poel

Bob started using computers in 1982 when he purchased a Radio Shack Color Computer complete with 32K of memory and a cassette tape recorder for storing programs and data. He has written many programs, and marketed many for the OS9 operating system. He lives with his wife, two cats and Tora (the wonder dog) on a small acreage in S.E. British Columbia, Canada. You can reach him via email: bvdpoel@kootenay.com. If he's not too busy gardening, practicing sax or just having fun he'll probably send a prompt reply.

Eric S. Raymond

Eric is a semi-regular contributor to Linux Journal. You can find more of his writings, including his paper ``The Cathedral and the Bazaar'', at http://www.ccil.org/~esr/.

Guido van Rossum

Guido is the father of the Python programming language and a leader in the Open Source community.

Jim Schweizer

Jim is currently a Consultant in web site administration and design. He is the author of an on-line textbook about Computer and Internet use and is an Instructor of English at several universities in Western Japan. His main hobby is being the Webmaster for the Tokyo Linux Users Group.

Martin Vermeer

Martin is a European citizen born in The Netherlands in 1953 and living with his wife in Helsinki, Finland, since 1981, where he is employed as a research professor at the Finnish Geodetic Institute. His first UNIX experience was in 1984 with OS-9, running on a Dragon MC6809E home computer (64k memory, 720k disk!). He is a relative newcomer to Linux, installing RH 4.0 in February 1997 on his home PC and, encouraged, only a week later on his work PC. Now he runs 5.0 at home, work soon to follow. Special Linux interests: LyX, Pascal (p2c), tcl/tk.


Not Linux


Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites.

I've been slightly ill the past week, and as a result, I have been more than slightly grumpy. My fellow employees are avoiding me like the plague, and I don't blame them. Illness should be declared illegal, so this sort of alienation doesn't go on. :-)

I went to see Titanic for the second time--this time with my husband Riley. We saw it at the local Cinerama--that big curved screen is just awesome for "larger than life" films such as this one. It was fun seeing the February cover of Linux Journal sail by.

I'm planning to set up a page for translations of LG into languages other than English. If you have such a site or know of one, please let me know.

Have fun!


Marjorie L. Richardson
Editor, Linux Gazette, gazette@ssc.com


[ TABLE OF CONTENTS ] [ FRONT PAGE ]  Back


Linux Gazette Issue 28, May 1998, http://www.linuxgazette.com
This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com