
           Linux Gazette... making Linux just a little more fun!
                                      
         Copyright  1996-98 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
                       Welcome to Linux Gazette! (tm)
     _________________________________________________________________
   
                                 Published by:
                                       
                               Linux Journal
     _________________________________________________________________
   
                                 Sponsored by:
                                       
                                   InfoMagic
                                       
                                   S.u.S.E.
                                       
                                    Red Hat
                                       
                                   LinuxMall
                                       
                                Linux Resources
                                       
                                   cyclades
                                       
                                    stalker
                                       
                                  LinuxToday
                                       
   Our sponsors make financial contributions toward the costs of
   publishing Linux Gazette. If you would like to become a sponsor of LG,
   e-mail us at sponsor@ssc.com.
   
   Linux Gazette is a non-commercial, freely available publication and
   will remain that way. Show your support by using the products of our
   sponsors and publisher.
     _________________________________________________________________
   
                             Table of Contents
                           January 1999 Issue #36
     _________________________________________________________________
   
     * The Front Page
     * The MailBag
          + Help Wanted--Article Ideas
          + General Mail
     * More 2 Cent Tips
     * News Bytes
          + News in General
          + Software Announcements
     * The Answer Guy, by James T. Dennis
     * Booting Linux with the NT Loader, by Gustavo Larriera
     * Defining a Linux-based Production System, by Jurgen Defurne
     * EMACSulation, by Eric Marsden
      * Evaluating PostgreSQL for a Production Environment, by Jurgen
        Defurne
     * Introducing Samba, by John Blair
     * Linux Installation Primer, Part 5, by Ron Jenkins
     * Linux on a Shoestring, by Vivek Haldar
     * The Linux User, by Bryan Patrick Coleman
     * New Release Reviews, by Larry Ayers
          + Kernel 2.2's Frame-buffer Option
     * Running Your Own Domain Over a Part Time Dialup, by Joe Merlino
      * Setting Up a PPP/POP Dial-in Server Using Red Hat Linux 5.1, by
        Hassan Ali
     * Touchpad Cures Inflammation, by Bill Bennet
     * Through the Looking Glass: Finding Evidence of Your Cracker, by
       Chris Kuethe
     * USENIX LISA Vendor Exhibit Trip Report, by Paul L. Lussier
     * X Windows versus Windows 95/98/NT: No Contest, by Paul Gregory
       Cooper
     * Announcements by Sun and Troll Tech by Marjorie Richardson
     * The Back Page
          + About This Month's Authors
          + Not Linux
       
   The Graphics Muse will return next month.
     _________________________________________________________________
   
   TWDT 1 (text)
   TWDT 2 (HTML)
   are files containing the entire issue: one in text format, one in
   HTML. They are provided strictly as a way to save the contents as one
   file for later printing in the format of your choice; there is no
   guarantee of working links in the HTML version.
     _________________________________________________________________
   
   Got any great ideas for improvements? Send your comments, criticisms,
   suggestions and ideas.
     _________________________________________________________________
   
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
     _________________________________________________________________
   
     _________________________________________________________________
   
                                The Mailbag!
                                      
                    Write the Gazette at gazette@ssc.com
                                      
                                 Contents:
                                      
     * Help Wanted -- Article Ideas
     * General Mail
     _________________________________________________________________
   
                        Help Wanted -- Article Ideas
     _________________________________________________________________
   
   Date: Wed, 30 Dec 1998 05:04:56 -0800
   From: "Fields, Aubrey", Aubrey.Fields@PSS.Boeing.com
   Subject: I have two ideas for articles.
   
    I am a newcomer to the Linux community. I have two ideas for articles
    that I would read, print, and distribute to the other Linux newbies
    that I know.
   
    1. PPP using minicom. I have read several articles on using PPP,
    pppd, minicom and other dialup and networking issues. Being a newbie,
    however, I would find it very valuable to read "the definitive new
    user's guide to configuring PPP on Linux". I've gotten a lot of
    pointers and some advanced tips, but what I'd like to see is how to
    set up a stand-alone Linux 2.0.x machine (Red Hat v4 in my case) for
    dialing up via PPP using minicom, with DHCP and DNS provided by an
    ISP.
   
    2. Basic XFree86/fvwm95 configuration tricks. For example, how to
    change the word "Start" on the menu button at the bottom of fvwm95 to
    ANYTHING else! I kicked Bill Gates off my PC for a reason! I don't
    find it cute, funny, or reassuring to see the wannabe-Windows 95
    "Start" button on my Linux machine.
    
    Also, how to use icons, and how to get rid of the "virtual" desktop
    so that I can see my entire window without scrolling.
   
   Thank you very much, the Linux Gazette has proven to be a valuable
   resource!
   
   --
   Aubrey
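    
    A hedged pointer on the fvwm95 "Start" question: on stock setups the
    taskbar label comes from the FvwmTaskBar module configuration in the
    fvwm95 rc file (often ~/.fvwm2rc95, though the file name varies by
    distribution). A sketch, to be checked against your local man pages:

```
# In the fvwm95 rc file (e.g. ~/.fvwm2rc95) - the taskbar's
# start-button label and menu are FvwmTaskBar options:
*FvwmTaskBarStartName  Linux
*FvwmTaskBarStartMenu  StartMenu
# And the "virtual" desktop can be shrunk to one screen with:
DeskTopSize 1x1
```

    Restarting fvwm95 after the edit should pick up the change.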
     _________________________________________________________________
   
   Date: Wed, 02 Dec 1998 13:33:11 PST
   From: David Camara, cpdj2@hotmail.com
   Subject: connecting to novell 3.12 servers...
   
   Hi, I'm trying to connect to netware 3.12 servers. I am using the IPX
   module and ncpfs 2.2.0.7-1 (for Debian 2.0). Now, I don't use the
   auto_primary and auto_interface since a number of old posts recommend
   adding the ipx interface manually.
   
   I use:

ipx_interface add -p eth0 802.3 xxxxxxxx

   When I cat /proc/net/ipx_interface:

Network    Node_Address   Primary   Device    Frame_Type
xxxxxxxx   yyyyyyyyyyyy   Yes       eth0      802.3

   When I try to slist, I get:

slist: No server found in ncp_open

   When I try to mount a Novell volume using:

ncpmount -S server_name -U user_name -V sys /mnt/ncp

   I get:

ncpmount: No server found when trying to find server_name

   All this as su root... Any ideas? Thanks!
   
   --
   David
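    
    A possible lead, offered as a guess rather than a diagnosis: NetWare
    3.12 servers default to 802.2 framing (3.11 and earlier used raw
    802.3), and a frame-type mismatch produces exactly this "No server
    found" symptom. Something like the following might be worth trying
    (command syntax per the ipx-utils of the era; check the man page):

```
# Remove the 802.3 binding and re-add the interface with 802.2
# framing, then probe for servers again (as root):
ipx_interface del eth0 802.3
ipx_interface add -p eth0 802.2 xxxxxxxx
slist
```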
     _________________________________________________________________
   
   Date: Tue, 8 Dec 1998 12:16:20 -0500
   From: Blazek, Daniel, blazek@globalserve.net
   Subject: Ethernet
   
    Which Ethernet cards are compatible with Linux and easiest to
    install? Also, does the make/model of the hub matter?
   
   --
   Dan
     _________________________________________________________________
   
   Date: Tue, 15 Dec 1998 12:29:37 +0000
   From: Tomos Llewelyn, tml@aber.ac.uk
   Subject: "Unable to open console..." Why?
   
   Can anyone tell me why I'm getting this message?
   
   Trying to boot a 2.0.36 kernel on a PII350 with an ATI Xpert@Play 8Mb
   AGP card. Should I be tweaking the video mode?
   
   --
   Tom Llewelyn
     _________________________________________________________________
   
   Date: Mon, 14 Dec 1998 12:46:57 -0500
   From: Michael Bright mabright@us.ibm.com
   Subject: Token Ring Errors with SuSE 5.3
   
    Hi, I would seriously appreciate any help you can give. I had the
    evaluation copy of SuSE 5.3 running fine on this machine. I loaded the
    full version of SuSE 5.3 and the Token Ring went south. During the
    install everything went fine, including loading the Token Ring module.
    I have replaced the ibmtr.o module file with one from a working
    machine, with _no_ change in the error. I also checked the
    /etc/conf.modules file to make sure the alias is defined correctly
    ( alias tr0 ibmtr.o ) and the options line is right
    ( options ibmtr io=0xa20 ). At this point I see two options: reload
    the machine with the eval copy and do an upgrade, or recompile the
    kernel and hope for the best.

initialising tr0
general protection: 0000
CPU:    0
EIP:    0010:[]
EFLAGS: 00010212
eax: 00000003   ebx: 0009e658   ecx: fffffff7   edx: 00000000
esi: f000f84d   edi: 00000003   ebp: 00000000   esp: 019b7e0c
ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
Process insmod (pid: 66, process nr: 16, stackpage=019b7000)
Stack: 0009e658 00000000 00000003 019b7e4c 00000008 0010ca1c 00000003
00000000
       019b7e4c 019b7e4c 00000003 00000000 0009e658 0010bae1 00000003
019b7e4c
       001f9b7c fffffff7 00108e00 00000003 00000000 0009e658 ffffff50
00000018
Call Trace: [] [] [] []
[] [] []
       [] [] [] [] []
[] [] []
       [] [] [] [] []
[] [] []
       []
Code: 0f b6 56 2f 83 fa 01 0f 84 9e 07 00 00 83 fa 02 0f 85 a9 07
Aiee, killing interrupt handler

   OS: SuSE 5.3 Hardware: IBM ISA Auto 16/4 Tokenring adapter.
   
   Thanks,
   --
   Michael
     _________________________________________________________________
   
   Date: Wed, 16 Dec 1998 14:33:55 -0600
   From: David Caliguire, djc@sgi.com
   Subject: Driver for Netflex III card on Linux
   
    I noticed a question posed to the Gazette about drivers for NetFlex 3
    cards on Compaq machines running Linux. I have a Compaq with this card
    and would like to know where I could get a Linux driver for it.
   
   Thanks
   --
   Dave
     _________________________________________________________________
   
   Date: Wed, 16 Dec 1998 16:06:42 -0300
   From: Saltiel, Hernan Claudio, hsaltiel@infovia.com.ar
   Subject: Help Wanted!!!
   
   I have a Linux box, with S.u.S.E., and a Lotus Notes server. I want to
   e-mail the status of my workstation to another user that belongs to
   the Notes Network. Does anybody know how to do that, or just the
   concepts to do this?
   
   --
    Hernán Claudio Saltiel
     _________________________________________________________________
   
   Date: Sun, 13 Dec 1998 14:35:20 -0500
   From: John, john@maxom.com
   Subject: Accounting
   
    I am looking for some inexpensive accounting software with inventory
    support that will run on Linux. If you could point me in the right
    direction, I would be very grateful.
   
   Thank You
   --
   John Nelson
     _________________________________________________________________
   
   Date: Thu, 24 Dec 1998 14:47:09 +0200
   From: "tdk001", tdk001@mweb.co.za
   Subject: Linux and UNIX
   
    I am a 2nd-year computer science student. I have looked everywhere
    for the answer and found only basic answers. My question is: what
    exactly is the difference between Linux and UNIX, excluding size and
    speed? I would appreciate it if you could just send me a few of the
    differences.
   
   Thank you
   --
   Frans
     _________________________________________________________________
   
   Date: Tue, 22 Dec 1998 12:33:42 -0000
   From: "James Jackson", james.jackson@3f.co.uk
   Subject: Intellimouse
   
   Does anybody know how to enable the wheel on an Intellimouse under
   Linux? (Red Hat 5.2)
   
   --
   James
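    
    A hedged sketch for Red Hat 5.2's XFree86 3.3.x: the wheel can be
    made to generate button 4/5 events with the IMPS/2 protocol and
    ZAxisMapping, and a helper such as imwheel can then turn those
    events into scrolling for applications that don't handle them
    themselves. Section syntax from the XF86Config of the era; verify
    against your local documentation:

```
# /etc/X11/XF86Config, Pointer section (XFree86 3.3.x syntax):
Section "Pointer"
    Protocol     "IMPS/2"      # Intellimouse PS/2 protocol
    Device       "/dev/mouse"
    ZAxisMapping 4 5           # wheel up/down -> buttons 4 and 5
EndSection
```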
     _________________________________________________________________
   
   Date: Sat, 19 Dec 1998 13:53:33 PST
   From: "Thomas Smith", highminded015@hotmail.com
   Subject: Upgrading Red Hat
   
    I just installed Red Hat 5.0, and I hear about the newer versions out
    there. I want to upgrade, but I don't want to buy a brand new CD or
    download everything and then re-install. I have been to a couple of
    sites and found no real help at any of them, so could you please help
    me out? Thank you.
   
   --
   Thomas
     _________________________________________________________________
   
   Date: Fri, 18 Dec 1998 23:20:12 -0800
   From: Taro Fukunaga, tarozax@earthlink.net
   Subject: How to get CPU info
   
    I am writing a Tcl/Tk program that prints info about the CPU, memory
    usage, processes, and disk usage of a Linux computer. One problem I
    have is in getting info about the CPU. Because the contents (i.e.,
    field names) of /proc/cpuinfo may vary from one machine (perhaps
    kernel build is the right term) to the next, I decided to use the
    program uname. However, this also doesn't work well, and simply lists
    my processor as "unknown". I looked at the source code, and "unknown"
    is the default value for the CPU!
   
   So my question is, is there any way to write a program that can get
   the type of CPU on any Linux computer?
   
   Thank you, anyone.
   
   --
   Taro
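    
    One workaround, sketched rather than guaranteed since the field names
    genuinely differ across architectures, is to try several known
    /proc/cpuinfo field names and take the first match. Here the file's
    contents are simulated with a sample string so the pipeline can be
    shown working; on a real system you would read /proc/cpuinfo itself:

```shell
# Sample standing in for /proc/cpuinfo; real files use different
# field names per architecture ("model name" on i386, "cpu" or
# "cpu model" elsewhere).
sample='processor : 0
model name : Pentium II (Deschutes)
cpu MHz : 350.000000'

# Print the value of the first field whose name looks like a CPU
# model (GNU sed alternation).
cpu=$(printf '%s\n' "$sample" |
      sed -n 's/^\(model name\|cpu model\|cpu\)[^:]*: //p' |
      head -n 1)
echo "$cpu"
```

    The same sed expression pointed at the real file
    (sed -n '...' /proc/cpuinfo) is easy to call from Tcl via exec.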
     _________________________________________________________________
   
   Date: Thu, 31 Dec 1998 21:19:48 -0600
   From: dcramer@midusa.net
   Subject: Does Linux have multimedia support?
   
   I just finished reading Marjorie Richardson's comments about Linux in
   the January '99 issue of Computer Shopper, and I was wondering if
   Linux now has, or will support any of the multimedia formats supported
   by Windows, such as AVI, JPG, WAV, MOV, etc? I have looked into some
   of the basics of the OS, but I have not tried to install it. Thank
   you.
   
   --
   Don Cramer
     _________________________________________________________________
   
   Date: Wed, 30 Dec 1998 14:03:42 -0500
   From: Soraia Paz, spaz@rens.com
   Subject: LILO Problems
   
   I originally had Windows NT on my PC with some room left for Linux. I
   installed Linux and I set up LILO to boot both operating systems. I
   got into Linux fine but when I tried to get into NT it kept on
   crashing. I tried using DOS's fdisk to get rid of Linux but LILO is
   still there. How can I get rid of it?
   
   --
   Soraia
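    
    For the record, the standard cure (assuming LILO was installed into
    the master boot record) is to rewrite the MBR's boot code from a DOS
    or Windows boot floppy; DOS fdisk's undocumented switch does this
    without touching the partition table:

```
REM From a DOS/Win95 boot floppy:
fdisk /mbr
```

    (If the Linux partition were still intact, running lilo -u from
    Linux would restore the saved boot sector instead.)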
     _________________________________________________________________
   
   Date: Wed, 30 Dec 1998 09:42:23 -0600
   From: Bill McConnaughey, mcconnau@biochem.wustl.edu
   Subject: DB9 serial port
   
   I degraded my floppy disk drive, apparently by doing fdformat with
   inappropriate parameters and/or media. In order to back up my work, I
   want to use minicom or seyon to transfer files over the DB-9 serial
   port. I can get the computers to type to each other, but file transfer
    protocols (xmodem and ymodem) don't work. There is no Kermit in my
    installation, and I don't know where to get it. What is the correct
    wiring for a direct connection of the DB-9 com ports on two PCs? How
    can I transfer files?
   
   --
   Bill
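    
    For reference, a sketch of a minimal DB-9 null-modem cable and a
    zmodem transfer (sz/rz come from the rzsz package on most
    distributions; pin numbers are the standard DB-9 assignments, but
    double-check against your serial-port documentation):

```
Minimal 3-wire DB-9 null modem:
  pin 2 (RxD) <-> pin 3 (TxD)
  pin 3 (TxD) <-> pin 2 (RxD)
  pin 5 (GND) <-> pin 5 (GND)
Optional handshake cross-over:
  pin 7 (RTS) <-> pin 8 (CTS)   (both directions)
  pin 4 (DTR) <-> pin 6 (DSR)   (both directions)

Sending side, inside minicom:  sz filename
Receiving side:                rz
```

    With only the three wires, turn hardware flow control off on both
    ends and pick a modest speed such as 38400.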
     _________________________________________________________________
   
   Date: Tue, 29 Dec 1998 10:40:15 -0500 (EST)
   From: ive.db@usa.com
   Subject: HELP
   
    I have a Jamicon 36X CD-ROM drive.
    
    It doesn't work under Linux. I tried to install Linux, but I failed.
    
    Could you please help me with this? I should also mention that my
    CD drive can be jumpered as master, slave or CSEL.
     _________________________________________________________________
   
   Date: Mon, 28 Dec 1998 03:49:21 -0500
   From: "david marcelle", marcelle@avana.net
   Subject: Audio-Only CDRs
   
    Do you have for sale, or do you know where I can purchase, audio-only
    blank CD-Rs (for my Philips CD recorder) for $4.00 each or less?
   
   Thanks
   --
   David
     _________________________________________________________________
   
   Date: Mon, 28 Dec 1998 02:15:26 -0500
   From: "Clayton J. Ramseyer", cyberzard@earthlink.net
   Subject: IP Masquerading and related
   
   I am writing this message to you, because I am new to Linux. (I love
   it by the way) Anyway, I have a small LAN setup at home and would like
   to provide access to the Internet for my other machine.
   
   The HOWTO is a bit confusing when it comes to setting this up.
   
   If someone could write me with a possible offer for help, I'd surely
   appreciate it.
   
   The commands I have are probably correct. Yet the HOWTOs don't mention
   which machine these commands are entered on.
   
   I assume it would be the machine connected to the net.
   
    By the way, I connect with a USR 56K v.90-compatible modem. My
    service provider is Earthlink.
   
   I look forward to your responses.
   
   Thanks,
   --
   CJ
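    
    A hedged sketch of the usual answer: the masquerading commands go on
    the gateway, i.e. the machine with the modem; the clients only need
    their default route pointed at it. Assuming, purely for illustration,
    a LAN numbered 192.168.1.0/24, a gateway address of 192.168.1.1, and
    a 2.0 kernel compiled with forwarding/masquerading support, per the
    IP-Masquerade mini-HOWTO:

```
# On the gateway (the machine that dials out), as root:
ipfwadm -F -p deny                                # forward nothing...
ipfwadm -F -a m -S 192.168.1.0/24 -D 0.0.0.0/0    # ...except the LAN,
                                                  # masqueraded

# On each client, point the default route at the gateway:
route add default gw 192.168.1.1
```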
     _________________________________________________________________
   
   Date: Sat, 2 Jan 1999 23:05:13 +0530
   From: "L.V.Gandhi", lvgandhi@vsnl.com
   Subject: Netscape help
   
    I have installed Netscape Communicator 4.5 for Linux. At first I
    could edit preferences both as root and as a normal user; after
    closing and restarting it, I am unable to do so. I am not sure when
    it started. It may be due to an improper shutdown during a power
    outage, or to Netscape hanging after many windows were open. My
    system is a PII with a 780MB partition for Linux, 64MB of swap space
    and 32MB of RAM. Is there any easy way to remove installed software
    and reinstall it in Linux?
   
   --
   L.V.Gandhi
     _________________________________________________________________
   
   Date: Sat, 2 Jan 1999 23:03:35 +0530
   From: "L.V.Gandhi", lvgandhi@vsnl.com
   Subject: help for microsoft intellimouse
   
    I have installed RH5.0 and upgraded to 5.1. I have a Microsoft
    Intellimouse and a Logitech mouse. When I configure the Intellimouse,
    it is not recognized by Linux or the X server, though it is
    recognized in Win98. The Logitech mouse is recognized in both. Any
    solutions welcome.
   
   --
   L.V.Gandhi
     _________________________________________________________________
   
                                General Mail
     _________________________________________________________________
   
   Date: Tue, 1 Dec 1998 13:39:58 -0500
   From: Brad Gerrard, bradgerrard@x-stream.co.uk
   Subject: The Future Of Artificial Intelligence and Linux
   
    Can you imagine: 'eureka', you've done it, you're going to make
    millions, nay billions; you've created a program that gives a
    computer the seeming ability to think.
    
    There it is, flashing away, 'walking the walk'; behold, it thinks.
    
    Hold on a moment: the operating system, no, the skeleton of this
    thinking machine has crashed.
    
    What say you, shall we change the operating system? Not 'arf we will.
    
    How about something a little more stable; how about an operating
    system that will go for at least a year. Is that too much to ask? One
    might well wonder, were we not acquainted with the genie in the
    bottle: yes, 'Linux'.
    
    Linux is gaining in popularity; that makes it commercial, that means
    money, and money means more thinkers are turning their attention
    towards it as a viable alternative to some of its less exciting
    competition. Linux is a stable operating system, freely available, an
    operating system for all 'born equal' (to borrow a phrase from
    America's founding documents); yes, could this operating system level
    out the playing field?
   
    Artificial intelligence requires a very stable platform, and I
    believe that, given the limitations of present-day hardware, AI
    requires an operating system with a small footprint in order to have
    any chance of achieving new thought, which could possibly be termed
    artificial intelligence in its true sense. Linux is a Unix operating
    system; it can be, and usually is, networked, and this is a plus when
    it comes to composing an AI program.
    
    Its very makeup and variable structure lend themselves to AI.
    
    Yes, I believe that Linux is an operating system with a bright
    future.
   
   --
   Brad
     _________________________________________________________________
   
   Date: Tue, 1 Dec 1998 13:39:58 -0500
   From: "Serge E. Hallyn", hallyn@CS.WM.EDU
   Subject: happy hacking keyboard
   
    wow. $140 for a keyboard because it has fewer keys? I simply don't
    think the arguments in favor make sense - namely, that you don't have
    to reach for any keys - because you should never need to with other
    normal keyboards either. Let's see:
      * Control not being next to A should never be a problem for anyone
        who'd dare call him/herself a hacker, happy or otherwise - if you
        can't figure out how to remap Caps Lock,...
      * Escape should not be a problem, since any self-respecting vi user
        uses Ctrl-[ anyway.
      * Backspace: Ctrl-H (well, OK, Emacs users are out of luck :)
      * Tab, on a really weird keyboard: Ctrl-I, though I seldom do
        that.
        
    $140. Ridiculous.
   
   --
   serge
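    
    Serge's Caps Lock point can be made concrete: under X, the remap is a
    few lines of xmodmap. A sketch of the common swap (the keysym names
    are standard, but layouts vary):

```
! ~/.Xmodmap - turn Caps Lock into an extra Control key
remove Lock = Caps_Lock
keysym Caps_Lock = Control_L
add Control = Control_L
```

    Load it with xmodmap ~/.Xmodmap (e.g. from ~/.xinitrc).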
     _________________________________________________________________
   
   Date: Tue, 01 Dec 1998 12:28:06 -0600
   From: Tim Kelley, tpkelley@winkinc.com
   Subject: Jeremy Dinsel's review of keyboard ...
   
    He did not mention something which many people would be very
    interested in knowing: is it a clicking, spring-action style keyboard
    or a membrane (mushy) style keyboard?
   
   At that price (~$150), I can't believe it's one of those cheap
   membrane things, but one can never be sure. Actually, at that price, I
   can't believe anyone would buy it, but whatever.
   
   --
   Tim
     _________________________________________________________________
   
   Date: Wed, 2 Dec 1998 01:32:06 +1000 (GMT)
   From: Norman Widders winspace@paladincorp.com.au
   Subject: Linux Gazette
   
    I just read David Jao's article in Linux Gazette #35 and enjoyed it.
    He had one fact wrong, though; he mentioned:
    
      Currently, a limitation of the UW IMAP server is that a folder
      cannot contain both messages and subfolders.
      
    This is not a limitation of the UW server; it is a limitation of the
    default UNIX mail file format. There are other mailbox types
    available on the UNIX platform that will allow the UW server to
    support subfolders... see the release notes that ship with UW IMAP
    for more info :)
   
   --
   Norman
     _________________________________________________________________
   
   Date: Tue, 1 Dec 1998 08:24:43 -0500 (EST)
   From: Walt Taninatz, waldo@voicenet.com
   Subject: Re: Linux Gazette #35
   
   Thank you for the reminders and for making such a great magazine. The
   content is always useful, interesting and well written.
   
   Best Regards,
   --
   Walter
     _________________________________________________________________
   
   Date: Thu, 03 Dec 1998 13:52:14 -0800
   From: Jauder Ho, jauderho@transmeta.com
   Subject: Re: IMAP on Linux: A Practical Guide 
   
   I have some comments on the article written by David Jao. There are
   some inaccuracies that I need to correct. We use IMAP here and it is
   indeed excellent technology.
      * NS 4.08 is out. NS 4.5 is actually pretty stable when it comes to
        IMAP-based mail; we have no problems using it.
      * By default, imapd uses the UNIX spool format. This is horrendously
        inefficient, so I am not surprised by it crashing with over 1000
        messages. HOWEVER, if you change the mailbox format to something
        like mbx (modify the Makefile, changing unixproto to mbxproto), it
        can easily handle many more messages and allows concurrent access.
        I have users with 8000 messages in their inbox with no problem.
      * Netscape is now beta-testing Linux versions of their Messaging and
        Directory servers.
      * There is one more way to do IMAP securely, and that is to run
        IMAP under stunnel.
       
    More information about our site-specific setup can be found at
    http://www.carumba.com/imap/
   
   --
   Jauder
     _________________________________________________________________
   
   Date: Wed, 02 Dec 1998 08:46:18 +0100
   From: "Thomas Diehl", th.diehl@dtp-service.com
   Subject: Editor's Choice Awards: Most Desired Port?
   
   This is on your "Editor's Choice Awards", esp. the following from your
   article "Most Desired Port--QuarkXPress":
   
     For layout, we must have an MS Windows 95 machine in order to run
     QuarkXPress... We are more than ready to be rid of this albatross
     and have a total Linux shop. Next, like everyone else, we'd like
     Adobe to port all its products to Linux.
     
   I'm a professional DTPer and a Linux user myself. So I would certainly
   like to see the whole Acrobat suite for Linux as well as good font and
   printing solutions from Adobe. And, of course, I don't have anything
   against porting PM, Frame, PShop, Illustrator, or XPress to the
   penguin platform. No doubt about it.
   
   I find it problematic, however, that hardly anybody in the DTP area
   seems to do justice to the fact that there is a complete suite for our
   kind of work coming up just NOW: Corel promised repeatedly to port
   _all_ their DTP programs to Linux: Ventura, Draw, PhotoPaint as well
   as a lot of helpful apps like WordPerfect and their whole Office
   suite. (See eg www.zdnet.co.uk/news/1998/45/ns-6073.html)
   
   This would be an incredible step forward for Linux -- but somehow
   nobody in DTP seems to care. I wonder why?
   
   Of course, I'm fully aware of the bad reputation Corel software has
   among DTPers (and also how much of this they deserved). But I can
   assure you and everybody from daily, first hand experience that the
   situation has incredibly improved over the last years. Today the Corel
   DTP apps brings a wealth of functionality to the users that, as a
   whole, is unmatched by anything I know in this area.
   
    I'm also aware that this will not be enough to make XPress users
    seriously consider a switch, and that they have perfectly good
    reasons for that attitude. Nevertheless, I would appreciate it VERY
    much if the Corel announcements were at least taken into account when
    talking about this area. If Corel keeps its promise, there will be a
    complete publishing suite for Linux very soon. And I would ask
    everybody to spread the good news, especially those who may be held
    "opinion leaders" by many people out there. I'm sure it would be a
    real loss for everybody if Corel got second thoughts about its plans
    because of an apparent "lack of demand" among professional DTPers.
   
   Just in case you are prepared to look a little more at this I'm
   attaching some more material on the aptness of Corel DTP software.
   
   Kind regards,
   --
   Thomas
   
     We use many of Corel's products including Ventura (for book
     layout). Editor's choice is after all my opinion only, but I do
     know that many magazines besides Linux Journal use QuarkXPress for
     layout. --Editor 
     _________________________________________________________________
   
   Date: Tue, 8 Dec 1998 16:30:53 -0500
   From: "Adams, Ranald", Ranald.Adams@ctny.com
   Subject: Compaq
   
   There's a lot of this sort of thing on Compaq's forum. Please publish
   to interested parties so that they can become subject to the
   appropriate level of ridicule (in a caring, motivationally productive
   kind of way).
   
     Topic: Servers - Banyan-Unix Subject: Linux and Compaq Servers
     From: COMPAQ - Robert G 05/11/98 09:10:13 Compaq now or in the
     future will not be providing Linux drivers. This is because the
     Linux operating system is a public domain OS. There is not a single
     source of ownership to go to when trying to resolve OS issues like
     there is for SCO Unix and other versions of Unix on the market.
     Because there is no single source for the compiled binary code
     required to install and run the OS there is no way to guarantee
     driver compatibility with all the flavors of Linux.
     
     Compaq Engineering has decided that they will not provide or
     release hardware drivers unless they can be fully certified and
     supported. Since Linux does not have a single source manufacture,
     this is not possible with Linux. But you can by all means make a
     formal request in writing to Compaq Engineering concerning your
     need for Linux drivers. The address is:
     
     Compaq Computer Corp.
     Attn. Engineering Dept.
     MS. 050702
     20555 State Hwy. 249
     Houston, TX 77070 b4
     _________________________________________________________________
   
   Date: Sat, 5 Dec 1998 10:53:00 -0800
   From: Mike Wiley, npg@integrityonline.com
   Subject: Corel Ventura would be best DTP port
   
    I agree that Linux needs a DTP program, but the one which should be
    desired is Corel Ventura Publisher, not Quark. CVP version 8 is at
    least one generation ahead of Quark and includes many features which
    we use regularly - features which are completely absent from Quark.
    It is more powerful and easier to use. From my perspective, Quark
    shows all the signs of the product arrogance which arises from having
    a monopoly, or near-monopoly, in a field.
   
   Another point: Corel Corp has made a commitment to Linux. Adobe and
   Quark, to my knowledge have not. Why not support those who support
   you, especially when those who support you have the best product?
   
   Just a couple of thoughts...
   
   Sincerely,
   
   --
   Mike
   
     We support Corel in every way we can, but Quark is more suited for
     our purposes in printing the magazine than is Ventura. Corel's
     NetWinder will be featured on the April Linux Journal cover.
     --Editor 
     _________________________________________________________________
   
   Date: Fri, 11 Dec 1998 14:19:43 -0500
   From: "Nils Lohner", lohner@debian.org
   Subject: Debian Powers 512 Node Cluster into Book of Records
   
    Over 512 computers were assembled for the CLOWN (CLuster Of Working
    Nodes) system that ran on the night of December 5-6. This cluster
    used a modified version of the Debian GNU/Linux distribution (reduced
    in size to a mere 16 MB, with modified boot scripts) to run a
    combination of PVM (Parallel Virtual Machine) and several application
    programs. These programs included povray (a ray-tracing program used
    to calculate frames for a film) and Cactus, a program that solves the
    Einstein equations: ten non-linear coupled hyperbolic-elliptic
    partial differential equations. These are used to describe black
    holes, neutron stars, etc., and are among the most complex in the
    field of mathematical physics.
   
   For more information, please visit the following sites (mostly in
   German):
   
   http://www.ccnacht.de/
   http://www.linux-magazin.de/cluster/
   http://www.heise.de/ix/artikel/1999/01/010/
   http://europium.oc2.uni-duesseldorf.de/cluster/tech.html
   
   --
   Nils
     _________________________________________________________________
   
   Date: Fri, 11 Dec 1998 02:31:14 -0500
   From: Paul Iadonisi, iadonisi@colltech.com
   Subject: Re: USENIX LISA Vendor Exhibit trip report
   
     There were a lot of what I call "Want-Ad" booths to. Collective
     Technologies (formerly Pencom System Administration), Sprint
     Paranet, Fidelity, and several other companies there for sole
     reason of trying to recruit people.
     
   Hmmm. I take exception to this. We (Collective Technologies) have many
   reasons for being at LISA. Like any business, we work to get name
   recognition. We want people to know who we are. But we also seek to
   educate our members (look in the rear of the Attendee List for the
   list by company and you will see how many of us went -- I think we
   have the largest number of attendees) and give back to the System
   Administration community at large. Take a look at the Technical Talks
   and BoFs and you will find four events each sponsored by a Collective
   Technologies member. Five of our members also wrote summaries for SANS
   in the August issue of ;login:.
   
   I hope no one sees this as a marketing message and my intention is not
   to try to sell my company on a Linux mailing list. The point is that
   we do all of this without tootin' our own horn that much. I think
   reducing our booth to a "Want-Ad" type booth is a little unfair. I
   normally wouldn't post a message like this on this list, but couldn't
   let the '...there for sole reason of trying to recruit people...'
   comment pass, especially since we were the first company listed. No
   ill will, I just wanted to clear that up.
   
   --
   Paul Iadonisi
   
     You must be clairvoyant! :-) That article is just being posted in
     this issue. Of course, it's on Paul's web site, but to know to send
     a copy of your letter to me. Wow! --Editor 
     _________________________________________________________________
   
   Date: Fri, 11 Dec 1998 20:09:57 -0500
   From: Kevin Forge, forgeltd@usa.net
   Subject: Quark
   
   Most Desired Port--QuarkXPress
   
   Hate to say it, but "BUY A MAC". Mind you, I don't like the Mac. I
   don't use a Mac. I don't even like the few occasions when I must
   attempt to repair a Mac (often it's cheaper to ditch it than to buy
   parts).
   
   All this considered, even Microsoft uses Quark on a Mac to do its
   manuals and such. As far as I know, a Mac used in this role may never
   crash. Sure, Mac OS isn't Linux quality in terms of stability, but it
   beats NT.
   
   In the meantime, whine for a port... It may never happen, though,
   since even the Windows port is half-hearted, unstable and not quite
   what the printers want (they all use Macs).
   
   --
   Kevin
   
     We started out with a Mac but at that time it wasn't as easy to
     network a Mac with Linux as it now is with Netatalk. So the
     decision was made to go with Windows. It happens. --Editor 
     _________________________________________________________________
   
   Date: Tue, 22 Dec 1998 21:13:56 -0600
   From: Sam, myoldkh@earthlink.net
   Subject: Sponsorship

   You will be very pleased to know that yesterday I made a credit card
   order on the Web for a copy of the Linux OS from one of your sponsors
   - Red Hat Software.
   
   I support quality web sites and their sponsors! (I am also sick and
   tired of MS Windows crashing my computer all of the time - I think
   that Microsoft writes software about the same way that GM builds cars
   - I know cause I drive a Pontiac lemon!)
   
   --
   Sam
     _________________________________________________________________
   
   Date: Thu, 17 Dec 1998 19:46:04 -0600 (EST)
   From: "Michael J. Hammel", mjhammel@graphics-muse.org
   Subject: Logo
   
     From LG Editor:
     I get at least one letter a month asking that we change the quote
     in the logo to be attributed directly to Gandhi rather than a movie
     actor, as well as ones requesting that the graphic be made smaller.
     What do you think? Is it time to make either of these changes?
     
   I'll look at making the image smaller, but it may not be till next
   month. I'm still getting things back together at home.
   
   As to the quote, I'll stick to the attribution until someone provides
   a definitive resource that attributes it to Gandhi. I'm fairly certain
   he would have said it, but I don't want to give him the attribution
   unless I can find some other resource to back it up. After all, I only
   know about it because of a movie.
   
   I have no objection to changing it - I just need some other definitive
   attribution to do so.
   
   --
   Michael
     _________________________________________________________________
   
   Date: Mon, 28 Dec 1998 16:46:13 -0800
   From: Randy Herrick, HERRICK@PACBELL.NET
   Subject: graphics on title page
   
   Great site; just one thing: I think Tux needs to look like, well, the
   real Tux, in real Tux colors. In the beginning there were several
   kinds of birds, from seagulls to penguins, but nowadays most
   everyone has adopted the standard Tux penguin that is sitting down
   (looking happy from eating herring, as Linus Torvalds put it) in the
   black, white and yellow colors. We need to have a standard logo for
   Linux, don't you think? Thanks for your time. :)
   
   --
   Randy
   
     As far as graphics go, I trust Michael's judgment in all
     things--even the way Tux is drawn. --Editor 
     _________________________________________________________________
   
   Date: Sun, 27 Dec 1998 13:38:52 -0600
   From: Lyno Sullivan, lls@freedomain.org
   Subject: MPDN - Minnesota Public Digital Network
   
   I would appreciate your support of the following initiative.
   Specifically, I will need the help of the free software community
   during discussions of item 4 and the excerpt listed below.
   
   The full MPDN announcement (December 27, 1998) may be viewed at:
   http://www.freedomain.org/~lls/free-mn/19981222-mpdn.html
   
   This post constitutes an invitation to join discussions concerning the
   MPDN. Beginning in January, 1999, I will present each goal of the MPDN
   for discussion within the MN-NETGOV listserv. If you are a
   stakeholder in these goals, please join the listserv.
   
   Anyone can join that listserv by sending an email to
   
   mailto:mn-netgov-subscribe@egroups.com
   
   Members may view past messages, calendars, and other group features
   at:
   
   http://www.egroups.com/list/mn-netgov/
   
   ABSTRACT
   
   In preparation for my requesting Legislative hearings in 1999, this
   article explains my vision of the Minnesota Public Digital Network
   (MPDN), which is:
   
   1) to provide every Minnesota citizen with a secure and authenticated
   email address within the mn.us hierarchy,
   
   2) to assure that every citizen can use email to dialogue with the
   elected and the appointed offices of government,
   
   3) to assure that every local community has a high speed digital
   network and a repository for the creative works and letters of the
   Minnesota people, and
   
   4) to collect the free software tools necessary to attain these goals,
   within the Government Information Freedom Toolbox (the GIFT), which
   will be created as a byproduct of Minnesota State government's
   conversion to free software.
   
   EXCERPT
   
   GOAL 1) Effective immediately, freeze (at current levels or lower) all
   spending for non-free, closed-source software. Establish a
   Legislative audit to determine the Total Cost of Operation (TCO) of
   non-free server and desktop software. Establish a cost-reduction
   plan that will result in the elimination of spending on non-free
   software. Collect all the monies identified by the TCO analysis
   into a revolving Software Freedom Fund, to be administered by
   the Office of Technology. Require that all further purchases and
   upgrades of non-free, closed-source server and desktop software
   be approved by the Minnesota Office of Technology's Information
   Policy Council (IPC). The IPC will be charged to develop a statewide
   model of the MPDN. The IPC will be charged to connect every public
   sector worker in Minnesota to the MPDN. Savings within the Software
   Freedom Fund may be spent on writing free software. Revenues of the
   Software Freedom Fund must be spent to endow the creation of free
   software and free content, all of which must be licensed under the
   GNU General Public License (GPL) or a suitable copyleft license.
   
   --
   Lyno Sullivan
     _________________________________________________________________
   
             Published in Linux Gazette Issue 36, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Next 
   
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
      Copyright  1999 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                               More 2 Tips!
                                      
               Send Linux Tips and Tricks to gazette@ssc.com 
     _________________________________________________________________
   
  Contents:
  
     * Forcing fsck on Red Hat 5.1 
     * Personal Listserver 
     * Re: Back Ups 
     * ANSWER: Your Supra Internal Modem Problems 
     * ANSWER: Single Floppy Linux 
     * ANSWER: Re: scsi + ide; boot ide 
     * ANSWER: Numlock at startup 
     * ANSWER: Re: graphics for disabled 
     * ANSWER: BTS: GNU wget for updating web site 
     * ANSWER: Linux Boot-Root 
     * Replies to My Questions in Nov. 98 Linux Gazette 
     _________________________________________________________________
   
  Forcing fsck on Red Hat 5.1
  
   Date: Tue, 08 Dec 1998 18:20:28 -0500
   From: James Dahlgren, jdahlgren@netreach.net
   
   I don't know if this is a 2-cent tip or what, and since it's
   distribution-specific, its applicability is limited, but I still
   thought it was worth sharing.
   
   The shutdown command accepts a -F switch to force an fsck when the
   system is rebooted. This switch just writes a flag file, /forcefsck;
   it is up to the initialization scripts to do something about it. In
   Red Hat 5.1 (I don't know about 5.2) the rc.sysinit script uses a
   different method to force an fsck.
   
   It checks for the existence of /fsckoptions and, if it exists, uses
   its contents as switches when calling fsck. The command "echo -n '-f' >
   /fsckoptions" will create a file, /fsckoptions, with "-f" in it, and
   will force an fsck the next time the system is booted. The rc.sysinit
   script removes the /fsckoptions file after remounting the drive
   read-write, so the fsck won't be forced every time the system is
   booted.
   
   If you want the -F switch from the shutdown command to work, a little
   editing of the /etc/rc.d/rc.sysinit file will do it.
   
   Near the beginning of the rc.sysinit file is the following:

if [ -f /fsckoptions ]; then
        fsckoptions=`cat /fsckoptions`
else
        fsckoptions=''
fi

   This is where it checks for the /fsckoptions file and reads its
   contents into a variable for later use. We add an elif to check for
   the /forcefsck file and set the variable accordingly:

if [ -f /fsckoptions ]; then
        fsckoptions=`cat /fsckoptions`
elif [ -f /forcefsck ]; then
        fsckoptions='-f'
else
        fsckoptions=''
fi

   Now the /forcefsck flag file created by the -F switch to shutdown
   will force an fsck on reboot. Next we need to get rid of the
   /forcefsck file, or it will force the check every time the system is
   started. Further down in the rc.sysinit file, after the disk is
   remounted read-write, is the following line which removes any existing
   /fsckoptions file:

rm -f /etc/mtab~ /fastboot /fsckoptions

   We just add /forcefsck to the list of files to delete:

rm -f /etc/mtab~ /fastboot /fsckoptions /forcefsck

   Now we have two ways to force an fsck: use the -F switch when
   running shutdown, or put specific flags in a /fsckoptions file.
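   As a minimal, runnable sketch of the logic above, here is the same
   if/elif test run against a scratch directory instead of the real
   root, so it can be tried without root privileges or a reboot:

```shell
# Simulate what "shutdown -F" creates and what the patched
# rc.sysinit checks, using a temporary directory in place of /.
root=$(mktemp -d)
touch "$root/forcefsck"              # shutdown -F would create /forcefsck

if [ -f "$root/fsckoptions" ]; then
        fsckoptions=$(cat "$root/fsckoptions")
elif [ -f "$root/forcefsck" ]; then
        fsckoptions='-f'
else
        fsckoptions=''
fi

echo "fsck would be called with: $fsckoptions"
rm -rf "$root"
```

   On the real system, the two triggers are "shutdown -r -F now" and
   "echo -n '-f' > /fsckoptions".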
   
   CAUTION!
   The rc.sysinit file is critical to system startup. A silly typo in it
   can make the system hang when it boots. (I've been there!) Make a
   backup before you edit it, and edit it carefully. If you do botch it,
   you can recover by rebooting and adding the -b switch after the image
   name on the LILO command line. This brings you up in maintenance mode
   without running the rc.sysinit script, with the disk mounted
   read-only.

mount -n -o remount,rw /

   will get you to read-write mode so you can fix the problem.

mount -n -o remount,ro /

   Run this after fixing the problem to prepare the system for
   continuing startup.
   
   Type exit or press Ctrl-D to exit the maintenance shell and continue
   on to the default runlevel.
   
   Hope this is of some use to someone.
   
   --
   Jim
     _________________________________________________________________
   
  Personal Listserver
  
   Date: Mon, 07 Dec 1998 01:59:48 +0100
   From: "Soenke J. Peters", peters@simprovement.com
   
   An often unused feature of sendmail is its "plussed user" feature,
   which makes mail to "user+testlist@localhost" match "user@localhost".
   I will show you how to use this to implement personal mailing lists.
   
   First, you have to set up procmail to act as a filter on your
   incoming mail. This can be done inside sendmail by setting it up as
   your local mailer, or simply via your "~/.forward" file.
   
   Now you should get a mailing list program. I prefer BeroList, because
   it's easy to configure. Compile it (don't forget to adjust the paths!)
   and install it somewhere in your home directory.
   
   Once that's done, you have to tell procmail which mail is to be
   passed to the mailing list program. This is done inside
   "~/.procmailrc", which should contain something like the following
   for every list (in this example, the list is called "testlist" and
   the mail name of the user is "username"):

:0
* ^To:.*username\+testlist
| path/to/the/listprogram testlist
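   If you go the "~/.forward" route mentioned above, the file usually
   contains a single line handing all incoming mail to procmail. A
   sketch (the procmail path and the "username" comment are
   placeholders; adjust them for your system):

```
"|exec /usr/bin/procmail || exit 75 #username"
```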

   The last step is to prepare the configuration files for the mailing
   list. As this is specific to the program you use, I can't cover it
   here.
   
   For a German description, see:
   http://www.simprovement.com/linux/listserver.html
   
   --
   Soenke Jan Peters
     _________________________________________________________________
   
  Re: Back Ups
  
   Date: Tue, 1 Dec 1998 10:07:46 -0500 (EST)
   From: Jim Buchanan, c22jrb@koptsv01.delcoelect.com
   
     From: Anthony Baldwin:
     Disk space is relatively cheap, so why not buy a small drive, say
     500MB, which is used for holding just the root /lib /bin /sbin
     directories. Then set up a job to automatically back this up to
     another drive using "cp -ax" (and possibly pipe it through tar and
     gzip). This way, when the unthinkable happens and you lose
     something vital, all you have to do is boot from floppy, mount the
     two drives and do a copy. This has just saved my bacon while
     installing gnu-libc2.
     
   A good idea as far as it goes, but there is one gotcha. If lightning
   or some other power surge takes out one drive, it might take out the
   on-line backup as well.
   
   I use a very similar method where each night, on each machine, I have
   a cron job back up vital information to another HD in another machine
   on my home network.
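   A minimal, runnable sketch of such a copy job, using temporary
   directories in place of real drives (swap in the directories you
   care about and your backup mount point):

```shell
# Create a stand-in source tree, copy it with cp -ax, then verify.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/bin"
echo '#!/bin/sh' > "$src/bin/tool"

# -a preserves permissions, ownership and links; -x stays on one filesystem
cp -ax "$src/." "$dst/"

diff -r "$src" "$dst" && echo "backup verified"
rm -rf "$src" "$dst"
```

   A cron entry along the lines of "30 2 * * * cp -ax /bin /sbin /lib
   /mnt/backup/" would run such a copy nightly; pipe through tar and
   gzip instead if you want a compressed archive.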
   
   In addition to the nightly backups, I do a weekly backup to removable
   media, which I keep in a separate building (my workshop at the back of
   my lot). That way, if lightning takes out everything on the network, I
   have lost a week's work or less. The separate-building part might be
   paranoia, but I really recommend at least weekly off-line backups.
   
   --
   Jim Buchanan
     _________________________________________________________________
   
    Tips in the following section are answers to questions printed in the Mail
    Bag column of previous issues.
     _________________________________________________________________
   
  ANSWER: Your Supra Internal Modem Problems
  
   Date: Tue, 1 Dec 1998 09:48:10 -0500
   From: "Brower, William", wbrower@indiana.edu
   
     Richard wrote:
     I have a PII (350MHz) running with an AGP ATI 3DRage graphics card
     (which works fine) and a Sound Blaster 16 PnP (which also works
     fine). But, I can't get my internal SupraExpress 56k modem to work.
     
   Your modem sounded familiar from a past search I had done, so I went
   to Red Hat's www site (http://www.redhat.com/) and followed the
   support | hardware link. You will find this reference in the modem
   category:
   
   Modems that require software drivers for compression, error
   correction, high-speed operation, etc.
   PCI Memory Mapped Modems (these do not act like serial ports)
   Internal SupraExpress 56k & also the Internal SupraSonic 56k
   
   It appears that your modem is inherently not compatible with Linux. I
   use an inexpensive clone modem called the E-Tech Bullet, pc336rvp
   model - paid $28 for it and it operates with no problems at all. Good
   luck in finding a compatible modem!
   
   --
   Bill
     _________________________________________________________________
   
  ANSWER: Single Floppy Linux
  
   Date: Tue, 01 Dec 1998 22:05:59 -0800
   From: Ken Leyba, kleyba@pacbell.net
   
   To: roberto.urban@uk.symbol.com
   There are a few choices for a single-floppy Linux (O.K., some are
   more than one floppy). I haven't tried them, but I will be doing a
   Unix presentation next month and plan to demo and hand out single- or
   double-floppy sets for hands-on use.
   
   muLinux (micro linux):
   http://www4.pisoft.it/~andreoli/mulinux.html
   
   tomsrtbt:
   http://www.toms.net/rb/
   
   Linux Router Project:
   http://www.linuxrouter.org/
   
   Trinux:
   http://www.trinux.org/
   
   Good Luck,
   --
   Ken
     _________________________________________________________________
   
  ANSWER: Re: scsi + ide; boot ide
  
   Date: Sun, 29 Nov 1998 07:42:29 -0800 (PST)
   From: Phil Hughes, fyl@ssc.com
   
     The amazing Al Goldstein wrote:
     I have only linux on a scsi disk. I want to add an ide disk and
     want to continue to boot from the scsi which has scsi id=0. Redhat
     installation says this is possible. Is that true? If so how is it
     done?
     
   First, you should be able to tell your BIOS where to boot from. Just
   set it to SCSI first and all should be ok.
   
   If that isn't an option, just configure LILO (/etc/lilo.conf) so that
   it resides on the MBR of the IDE disk (probably /dev/hda) but boots
   Linux from where it lives on the SCSI disk.
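   A sketch of such a lilo.conf; the device names /dev/hda and /dev/sda1
   are assumptions for a typical first-IDE, first-SCSI setup, so
   substitute your actual devices, and remember to rerun /sbin/lilo
   after editing:

```
boot=/dev/hda          # install the boot loader on the IDE disk's MBR
image=/boot/vmlinuz    # kernel lives on the SCSI disk
    label=linux
    root=/dev/sda1     # root filesystem on the SCSI disk
    read-only
```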
   
   --
   Phil
     _________________________________________________________________
   
  ANSWER: Numlock at startup
  
   Date: Thu, 03 Dec 1998 21:51:08 -0800
   From: "D. Cooper Stevenson", coopers@proaxis.com
   
     To: bmtrapp@acsu.buffalo.edu
     
   Here's a bit of code I found while searching the documentation for
   "numlock". It turns NumLock on for all terminals at startup! The
   added code is the final loop (commented "Turn the NumLock key on at
   startup") in the /etc/rc.d/rc file of my Red Hat 5.1 system:

# Is there an rc directory for this new runlevel?
if [ -d /etc/rc.d/rc$runlevel.d ]; then
        # First, run the KILL scripts.
        for i in /etc/rc.d/rc$runlevel.d/K*; do
                # Check if the script is there.
                [ ! -f $i ] && continue

                # Check if the subsystem is already up.
                subsys=${i#/etc/rc.d/rc$runlevel.d/K??}
                [ ! -f /var/lock/subsys/$subsys ] && \
                    [ ! -f /var/lock/subsys/${subsys}.init ] && continue

                # Bring the subsystem down.
                $i stop
        done

        # Now run the START scripts.
        for i in /etc/rc.d/rc$runlevel.d/S*; do
                # Check if the script is there.
                [ ! -f $i ] && continue

                # Check if the subsystem is already up.
                subsys=${i#/etc/rc.d/rc$runlevel.d/S??}
                [ -f /var/lock/subsys/$subsys ] || \
                    [ -f /var/lock/subsys/${subsys}.init ] && continue

                # Bring the subsystem up.
                $i start
        done

        # Turn the NumLock key on at startup
        INITTY=/dev/tty[1-8]
        for tty in $INITTY; do
             setleds -D +num < $tty
        done
fi
     _________________________________________________________________
   
  ANSWER: Re: graphics for disabled
  
   Date: Wed, 16 Dec 1998 00:13:19 GMT
   From: Enrique I.R., esoft@arrakis.es
   
     In a previous message, Pierre LAURIER says: - control of the
     pointer device with the keyboard
     
   You can do it with any window manager. It's an XFree86 feature (v3.2;
   I don't know about older versions). You only have to use the XKB
   extension. You enable it by hitting Control+Shift+NumLock; you should
   hear a beep. Then you use the numeric keypad as follows:

Numbers (cursors) -> Move pointer.
/, *, -           -> Select left, right & middle buttons.
5                 -> Click selected button.
+                 -> Double-click selected button.
0 (Ins)           -> Click & hold selected button.
. (Del)           -> Release held button.

   Read the XFree86 docs to get details.
   
   --
   Enrique I.R.
     _________________________________________________________________
   
  ANSWER: BTS: GNU wget for updating web site
  
   Date: Thu, 24 Dec 1998 03:15:16 -0500
   From: "J. Milgram", milgram@cgpp.com
   
     Re. the question "Updating Web Site" in the Jan 1999 Linux Journal,
     p. 61 ...
     
   Haven't tried the mirror package - might be good, but you can also use
   GNU wget (prep.ai.mit.edu). Below is the script I use to keep the
   University of Maryland LUG's Slackware mirror up-to-date. "Crude but
   effective".

#!/bin/bash
#
#  Update slackware
#
#  JM 7/1998

# usage:   slackware.wget [anything]
# any argument at all skips mirroring, moves right to cleanup.

site=ftp://sunsite.unc.edu
sitedir=pub/Linux/distributions/slackware-3.6; cutdirs=3
localdir=`basename $sitedir`
log=slackware.log
excludes=""
for exclude in bootdsks.12 source slaktest live kernels; do
  [ "$excludes" ] && excludes="${excludes},"
  excludes="${excludes}${sitedir}/${exclude}"
done

# Do the mirroring:

if [ ! "$*" ]; then
 echo -n "Mirroring from $site (see $log) ... "
 wget -w 5 --mirror $site/$sitedir -o $log -nH --cut-dirs=$cutdirs -X"$excludes"
 echo "done."
fi

# Remove old stuff
# (important, but wipes out extra stuff you might have added)

echo "Removing old stuff ..."
for d in `find $localdir -depth -type d`; do
  pushd $d > /dev/null
  for f in *; do
     grep -q "$f" .listing || { rm -rf "$f" && echo $d/$f; }
  done
  popd > /dev/null
done
echo "Done."

   --
   Judah
     _________________________________________________________________
   
  ANSWER: Linux Boot-Root
  
   Date: Mon, 7 Dec 1998 12:57:34 +0100
   From: Ian Carr-de Avelon, ian@emit.pl
   
   This is an answer to one of the letters in the December '98 issue.
   
     Date: Wed, 04 Nov 1998 19:01:02 +0000 From: Roberto Urban,
     roberto.urban@uk.symbol.com Subject: Help Wanted - Installation On
     Single Floppy
     
     My problem seems to be very simple yet I am struggling to solve it.
     I am trying to have a very basic installation of Linux on a single
     1.44MB floppy disk and I cannot find any documents on how to do
     that. My goal is to have just one floppy with the kernel, TCP/IP,
     network driver for 3COM PCMCIA card, Telnet daemon, so I could
     demonstrate our RF products (which have a wireless Ethernet
     interface - 802.11 in case you are interested) with just a laptop
     PC and this floppy. I have found several suggestions on how to
     create a compressed image on a diskette but the problem is how to
     create and install a _working_ system on the same diskette, either
     through a RAM disk or an unused partition. The distribution I am
     currently using is Slackware 3.5.
     
   Making a "boot-root" disk is not too difficult, and there is
   information, and there are examples, available:
   http://metalab.unc.edu/LDP/HOWTO/Bootdisk-HOWTO.html
   http://www.linuxrouter.org/
   
   Maybe the new LDP site should have a link from every page of Linux
   Gazette: http://metalab.unc.edu/LDP/
   
   I build boot-root disks quite regularly, and they have lots of uses.
   E.g.:
    1. Change an old PC into a dial-on-demand router for a net.
    2. Give clients an emergency disk which will ring in to us so we can
       log in and fix things. (Even if the main OS on the machine is not
       Linux.)
    3. Turn any Windows PC on the net into a terminal, or a testbed for
       network hardware.
    4. Clients often bring laptops for installations with no easy way of
       connecting them to the net. A boot-root disk and a PLIP cable
       gives me a simple way to get the laptop to let me telnet to it
       and ftp files across.
       
   Basically it is just a matter of reducing what you are trying to do
   to something which will fit on the floppy and following the HOWTO. If
   you are short of space, you can usually gain a little by using older
   versions.
   
   Having said that, you are up against some additional problems here.
   Laptops are notorious for being only PC-compatible with drivers which
   are only available for Windows. Even here there is some support:
   http://www.cs.utexas.edu/users/kharker/linux-laptop/ but you should
   realise that not all PCMCIA chip sets are supported, and that is
   before you get onto support for the card itself. Obviously, if the
   card is your own product, you have some advantages as far as getting
   access to technical information :-) but in general, if the laptop and
   card manufacturers are unwilling to give information, you can end up
   wasting a lot of time on reverse engineering and sometimes still
   fail.
   
   --
   Ian
     _________________________________________________________________
   
  Replies to My Questions in Nov. 98 Linux Gazette
  
   Date: Tue, 15 Dec 1998 20:23:48 -0800
   From: Sergio Martinez, sergiomart@csi.com
   
   Last month, Ms. Richardson published a short letter I wrote that asked
   some questions about the differences among the terminology of GUIs,
   window managers, desktops, interfaces, and a bit about the differences
   among GNOME, KDE, and Windows. These matters came to mind as I
   switched from Windows 95 to Linux, with its multiple choices of window
   managers.
   
   Several people were kind enough to send long replies. I'm forwarding
   them to you in case you would like to consider using one as an
   article, or editing them into one. I suppose the title could be
   something like "A Vocabulary Primer to GUIs, Window Managers,
   Desktops, Interfaces, and All That".
   
   I'm leaving all this to your judgment. It would be an article for
   newbies, but I found most of the replies very informative for this
   migrant from Windows 95.
   
   --
   Sergio E. Martinez
   
       --------------------------------------------------------------
                                      
   Date: Tue, 1 Dec 1998 13:44:20 -0500
   From: Moore, Tim, Tim.Moore@ThomsonConsulting.com
   
   I don't have time to write a full article, but I can answer your
   questions. Unfortunately, I'm using MS Outlook to do so (I'm at work
   and I have to )-: ) so sorry if this comes out formatted funny in your
   mailer.
   
     Terminology: The differences (if any) among a GUI, a window
     manager, a desktop, and an interface. How do they differ from X
     windows?
     
   In the X world, things tend to be split up into multiple components,
   whereas in other systems, everything is just part of the "OS". Here
   are some definitions:
   
   Interface is a general term which really just means a connection
   between two somewhat independent components -- a bridge. It is often
   used to mean "user interface" which is just the component of a
   computer system which interacts with the user.
   
   GUI is another general term, and stands for graphical user interface.
   It's pretty much just what it sounds like; a user interface that is
   primarily graphical in nature. Mac OS and Windows are both GUIs. In
   fact, pretty much everything intended for desktop machines is these
   days.
   
   On Mac OS and Windows, capabilities for building a graphical interface
   are built into the OS, and you just use those. It's pretty simple that
   way, but not very flexible. Unix and Unix-like OSes don't have these
   built in capabilities -- to use a GUI, you have to have a "windowing
   system." X is one of them -- the only one that sees much use these
   days.
   
   All X provides is a way to make boxes on the screen (windows) and draw
   stuff in them. It doesn't provide a) ways to move windows around,
   resize them, or close them, b) standard controls like buttons and
   menus, c) standards or guidelines for designing user interfaces for
   programs, or for interoperating between programs (e.g., via drag and
   drop or a standard help system).
   
   A window manager is a program which lets you move windows around and
   resize them. It also usually provides a way to shrink a window into an
   icon or a taskbar, and often has some kind of a program launcher. The
   user can use any window manager that he or she wants -- any X
   application is supposed to work with any window manager, but you can
   only run one at a time. That is, you can switch between window
   managers as much as you want, but at most one can be running at a
   time, and all programs on screen are managed by whichever one is
   running (if any).
   
   A widget set is a library of routines that programmers can use to make
   standard controls like buttons and menus (which are called widgets by
   X programmers). The widget set that an application uses is chosen by
   the *programmer* (not the user). Most people have multiple widget sets
   installed, and can run multiple programs using different widget sets
   at the same time.
   
   Finally, there's the desktop environment. This is the newest and most
   nebulous X term. It basically means "the things that the Mac OS and
   Windows GUIs have that X doesn't but should", which generally
   consists of a set of interacting applications with a common look and
   feel, and
   libraries and guidelines for creating new applications that "fit in"
   with the rest of the environment. For example, all KDE applications
   use the same widget set (Qt) and help program, and you can drag and
   drop between them. You can have multiple desktop environments
   installed at the same time, and you can run programs written for a
   different environment than the one you're running without having to
   switch, as long as you have it installed. That is, if you use GNOME,
   but like the KDE word processor KLyX, you can run KLyX without running
   any other KDE programs, but it won't necessarily interoperate well
   with your GNOME programs. You can even run the GNOME core programs and
   the KDE core programs at the same time, though it doesn't really make
   much sense to, as you would just end up with two file managers, two
   panels, etc.
   
     Do all window managers (like GNOME or KDE or FVWM95) run on top of
     X windows?
     
   Yes, though GNOME and KDE aren't window managers (they're desktop
   environments). KDE comes with a window manager (called KWM). GNOME
   doesn't come with a window manager -- you can use whichever one you
   want, though some have been specifically written to interoperate well
   with GNOME programs (Enlightenment being the furthest along). But yes,
   they all require X to be running.
   
     What exactly does it mean for an application to be GNOME or KDE
     aware? What happens if it's not? Can you still run it?
     
   It just means that it was written using the GNOME or KDE libraries.
   This means a few things: 1) programs will probably *not* be both GNOME
   *and* KDE aware, 2) you have to have the GNOME libraries installed to
   run GNOME-aware applications, 3) you can run GNOME applications and
   KDE applications side-by-side, and to answer your question, 4) you can
   always run non-aware applications if you use either environment.
   
     What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
     do?
     
   GTK+ and Qt (which is the name of the product by Troll Tech that KDE
   uses) are both widget sets. That is, they provide buttons, menus,
   scrollbars, and that sort of thing to application developers. Note
   that applications can use GTK+ or Qt without being GNOME or KDE aware,
   but *all* GNOME apps use GTK+ and *all* KDE apps use Qt.
   
     How does the history of Linux (or UNIX) window managers compare to
     that of say, the desktop given to Win98/95 users? How,
     specifically, does Microsoft limit consumers' choices by giving
     them just one kind of desktop, supposedly one designed for ease of
     use?
     
   This is a much more complicated question. In essence, Windows provides
   a built-in windowing system, window manager, widget set, and desktop
   environment, so everybody uses those instead of being able to choose
   the ones they like.
   
     What's happening with Common Desktop Environment? Is it correct
     that it's not widely adopted among Linux users because it's a
     resource hog, or not open source?
     
   Yes. Also, it costs a lot of money. You can get it from Red Hat,
   though.
   
   --
   Tim
   
       --------------------------------------------------------------
                                      
   Date: Wed, 2 Dec 1998 00:34:46 +0100 (AMT)
   From: Hans Nieuwenhuis, niha@ing.hj.se
   
   I read your mail today in the Linux Gazette and decided to answer (or
   try to) your questions.
   
   Here it goes:
   
   X-Windows is designed as a client-server system. The advantage is that
   clients (applications and window managers) can run on a different
   machine from the one your monitor is connected to; the X server runs
   on the machine with the display. A window manager is itself a client:
   it communicates with the server by asking it to create windows. When
   the server has fulfilled the request, the window manager adds a nice
   title bar to the window and lets the application create its interface
   inside it. Basically the window manager stands between the server and
   the application, but that is not strictly necessary. It is possible to
   run an application on an X server without a window manager, but then
   the only things you can do are run that specific application, close
   it, and kill the X server.
   
   A GUI is a Graphical User Interface, which means all of the
   information presented on the screen is shown through windows, menus,
   buttons, etc., just like MS Windows. All of the interaction is
   likewise based on those windows and buttons. The main goal of a GUI is
   to provide a uniform system for presenting windows and gathering
   information. A good example in MS Windows is the Alt+F4 keystroke:
   with it you can close any window on your screen. A window manager can
   be part of this system; this is what happens with KDE and CDE. They
   both feature their own window manager and so can bring this same
   uniformity to your desktop. Basically, what I see as a desktop is the
   set of applications available on a certain system. A uniform GUI can
   also bring features like drag and drop and "point and shoot"
   (associating applications with a certain file type). You ask about
   awareness of GNOME or KDE: it means that a program designed for one of
   those environments is (or should be) able to communicate with other
   programs designed for the same environment. This gives you, for
   example, drag and drop. Some programs indeed cannot run without the
   desktop environment for which they were designed, but some can. For
   example, I use KDE programs, but I do not like KDE's window manager,
   so I use Window Maker, which was not designed for use in the KDE
   environment; as a result I lose some features.
   
   The libraries: GTK+ and Qt (Troll, as you mentioned it) are toolkits.
   What they basically do is draw windows, buttons and menus. They are
   your Lego bricks, with which you build your interface. And yes, if you
   want to run applications designed for a specific environment, say
   GNOME, you need at least the GNOME libraries, like GTK+ and a few
   others.
   
   As I mentioned before, the client-server design of X-Windows gives
   users the flexibility to choose a window manager they like, though
   basically a window manager does the same job as the Win95/98 system.
   Win95/98 limits you to one look and feel (yes, you can change the
   color of your background, but that is about it) while also managing
   windows; it does not give the user the freedom to experiment with
   other looks and feels. Most modern window managers permit you to
   define your own keybindings and such. And if you don't like GNOME you
   can use KDE and vice versa (there are a few others, by the way).
   
   All I know about CDE is that it is based on the Motif toolkit (compare
   GTK+ and Qt), and this toolkit is not free (in the GPL sense) like
   GTK+. I think that is the main reason why it is not used very much on
   Linux. Whether it is a resource hog, I do not know. Personally, the
   main reason why I will not use it is that it looks ugly :-)
   
   Well, that is about it; I hope this information is a bit useful. If
   you have questions, do not hesitate...
   
   --
   Hans Nieuwenhuis
   
       --------------------------------------------------------------
                                      
   Date: Sat, 05 Dec 1998 00:29:34 -0500
   From: sottek, sottek@quiknet.com
   
   I thought I would take the time to send you some information about the
   questions you posted in Linux Gazette. From your questions I can tell
   that even though you are new to Linux you have seen some of the
   fundamental differences in how the interface works. I currently work
   for Intel, where I administer Unix CAD tools, and I have to explain
   these differences to management every day... I think you will
   understand far better than they do :)
   
     1.Terminology: The differences (if any) among a GUI, a window
     manager, a desktop, and an interface. How do they differ from X
     windows?
     
   X windows is a method by which things get drawn on your screen. The X
   server (the part drawing in front of you) has to know how to respond
   to certain commands, like 'draw a green box', 'draw a pixel',
   'allocate memory for this image'... This in itself is NOT what you
   think of as "Windows". All applications (the clients) send these
   commands to your server. This is done through TCP/IP, even if your
   application and your server are both on the machine in front of you.
   This is VERY VERY important. The #1 design flaw in MS Windows is the
   lack of this network layer in the windowing system. Every X
   application (any window... xterm, netscape, xclock) looks at your
   "DISPLAY" environment variable to find out whom it should tell to
   draw itself. If your DISPLAY is set to computer1:0.0 and you are on
   computer2 and you type 'xterm', the xterm will pop up on computer1's
   screen (provided you have permission). This is why on my computer at
   work I have windows open from HP's, RS6000's, Sun's... and Linux
   (when I'm sneaky), and they all work just fine together.
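   The host:display.screen convention described above can be illustrated
   with a short sketch (plain Python, not part of X itself; the
   parse_display helper is an invented name for illustration):

```python
# Illustrative sketch: split an X DISPLAY value of the form
# "host:display.screen" (e.g. "computer1:0.0") into its parts.
# An empty host conventionally means "the local machine".
def parse_display(display):
    host, _, rest = display.partition(":")
    number, _, screen = rest.partition(".")
    return host, int(number), int(screen or 0)

print(parse_display("computer1:0.0"))  # ('computer1', 0, 0)
print(parse_display(":0"))             # ('', 0, 0)
```

   An X client asked to use "computer1:0.0" connects over the network to
   the server on computer1, so the window appears on that machine's
   screen.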
   
     2.Do all window managers (like GNOME or KDE or FVWM95) run on top
     of X windows?
     
   Well, yes. Given the above you should now know that X is the thing
   that draws. Anything that needs to draw has to run "on" X.
   
   BUT, we need to get a better understanding of the window manager,
   because I didn't tell you about that yet. In MS Windows, when a
   program hangs it sits on your screen until you can kill it. There is
   usually no way to move it or minimize it. This is design flaw #2 in
   Windows. Every MS Windows program has to have some code for the title
   bar and the close, maximize, and minimize buttons. This code is in
   shared libs so you don't have to write it yourself, but nevertheless
   it IS there. In X windows the program knows nothing about its title
   bar or the buttons on it. The program just keeps telling X to draw
   whatever it needs. Another program, the window manager, does those
   things (it 'manages windows'). The window manager draws the title bars
   and the buttons. The window manager also 'hides' a window from you
   when it is minimized and replaces it with an icon. The program has NO
   say in the matter. This means that even if a program is totally locked
   up it can be moved, minimized, and killed. (Sometimes not killed,
   unless your window manager is set to send a kill -9.)
   
   That being said, here is the bad news. KDE and GNOME are NOT window
   managers. They do not draw title bars, let you resize windows, or do
   stuff like that. They are programs that do things like provide a
   button bar (which some window managers do too) and tell programs how
   they should look.
   
     3.What exactly does it mean for an application to be GNOME or KDE
     aware? What happens if it's not? Can you still run it?
     
   GNOME-aware applications do what I was just about to mention: they pay
   attention to GNOME when it tells them how to look and act. If GNOME
   says 'you should have a red background', they do it. There will also
   be some advanced things: an app can ask GNOME for a spell checker and
   GNOME can supply it with one (see the CORBA stuff). KDE is the same
   way, minus the CORBA (I think).
   
     4.What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
     do?
     
   This is a hidden layer called widgets. It allows you to say 'draw a
   button' rather than 'draw a box, draw an edge on that box so it looks
   3D, put some text in that box, make sure this box looks for mouse
   clicks, and if a click happens remove that 3D stuff and put it back
   pretty quick'. It would not be a good idea to try to program complex
   things without a widget set.
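   That division of labor can be sketched in a few lines. This toy
   Button class is plain Python, not the GTK+ or Qt API; every name in
   it is invented for illustration. It shows the idea: the widget set
   bundles the drawing and the click-watching behind one object, and the
   application supplies only a label and a callback:

```python
# Toy widget sketch (invented names, not a real toolkit API):
# the widget set hides "draw a box, make it look 3D, watch for
# clicks" behind a single Button object.
class Button:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click
        self.pressed = False

    def draw(self):
        # A real toolkit would paint a 3D box; we just describe it.
        edge = "pressed" if self.pressed else "raised"
        return "[%s] (%s)" % (self.label, edge)

    def handle_click(self):
        # Show the pressed look, run the app's callback, pop back up.
        self.pressed = True
        self.on_click()
        self.pressed = False

clicks = []
b = Button("OK", lambda: clicks.append("clicked"))
b.handle_click()
print(b.draw(), clicks)  # [OK] (raised) ['clicked']
```

   A real widget set does the same thing at scale: the application never
   touches pixels or mouse events, only widgets and callbacks.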
   
     5.How does the history of Linux (or UNIX) window managers compare
     to that of say, the desktop given to Win98/95 users? How,
      specifically, does Microsoft limit consumers' choices by giving
     them just one kind of desktop, supposedly one designed for ease of
     use?
     
   I think you can get this from the other answers. Really, the limits
   are:
    1. You have to run the program on the same machine where you want to
       see it.
    2. You can't choose another window manager if you don't like the way
       Windows works.
    3. No matter how configurable Windows is, if there is just one thing
       you need that it doesn't have built in, there is no way to get
       it. With X you just use a different wm, desktop, widget set,
       whatever.
       
     6.What's happening with Common Desktop Environment? Is it correct
     that it's not widely adopted among Linux users because it's a
     resource hog, or not open source?
     
   CDE was a thing driven by big Unix vendors for their own needs. Things
   that start that way get re-invented to suit everyone's needs; hence
   GNOME and KDE.
   
   Well, when I get going I can sure waste some time. I hope I haven't
   taken up too much of your time with this. I'll leave you with just one
   thing.
   
   I know hundreds of world class programmers, and administrators who are
   gods on BOTH NT and Unix. I know not a single one who prefers NT. Keep
   learning until you agree, I know you will.
   
   --
   SOTTEK
   
       --------------------------------------------------------------
                                      
   Date: Sat, 5 Dec 1998 09:48:43 -0600
   From: Dustin Puryear, dpuryear@usa.net
   
     desktop, and an interface. How do they differ from X windows?
     
   X windows is what sits behind it all. More or less, it controls the
   access to your hardware and provides the basic functionality that is
   needed by the wm. The wm controls windows, and how the user interacts
   with them. A desktop, such as KDE or GNOME, provides more services
   than a wm. For instance, drag 'n drop is a feature of a desktop, not a
   wm.
   
     Do all window managers (like GNOME or KDE or FVWM95) run on top of
     X windows?
     
   Yes.
   
     What exactly does it mean for an application to be GNOME or KDE
     aware? What happens if it's not? Can you still run it?
     
   They use the functions provided by GNOME or KDE, not just X.
   
     What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
     do?
     
   GTK+ and Qt (KDE) provide the basic foundation for the desktops. For
   instance, Qt provides the code to actually create a ListBox (a list of
   items a user can choose from). KDE just uses this code to do its
   thing. Note that Qt can be used for console apps just as well as for X
   apps. I'm not familiar with GTK+, so I can't comment.
   
     What's happening with Common Desktop Environment? Is it correct
     that it's not widely adopted among Linux users because it's a
     resource hog, or not open source?
     
   Well, Red Hat used CDE for a while (I think). However, they could not
   actually fix anything in it, since it was closed source. They have
   since moved to GNOME. However, there are some CDE clones out there.
   
   --
   Dustin
   
       --------------------------------------------------------------
                                      
   Date: Sat, 05 Dec 1998 19:45:34 +0000
   From: "Richard J. Moore", moorer@cs.man.ac.uk
   
   Hope this helps:
   
     1.Terminology: The differences (if any) among a GUI, a window
     manager, a desktop, and an interface. How do they differ from X
     windows?
     
   A GUI (Graphical User Interface) is a general term that refers to the
   basic idea of using a graphical representation to communicate with the
   user (as opposed to a text based interface such as the command line).
   
   A window manager is an idea that is really specific to X windows. In X
   windows, the policy for how windows are arranged and controlled is
   separated from the core system; the window manager is a special
   program that implements this policy. This allows people to choose a
   window manager whose policy suits them, and allows new window managers
   to be created that have different policies. The window manager draws
   window borders, minimise/maximise buttons, etc. You can mix and match
   window managers, but most GUI toolkits for UNIX will provide one as
   standard.
   
   A desktop is a metaphor used by many GUIs; it is basically an attempt
   to make computers fit in with the way people would work in an office.
   The hope is that this will make it easy for people to operate the
   system. The term is also used more generally to refer to a combination
   of window manager, toolkit (the box of parts used by the programmers
   of the system), and other 'standard' applications. If a set of tools
   is referred to as a desktop, it generally means that it will provide
   all of these things, and that they will be designed to work together
   in an integrated fashion. An example would be KDE
   (http://www.kde.org/).
   
   An 'interface' is just an abbreviation for a user interface. This is
   the view that a program presents to the user, and (for a graphical
   user interface) is usually composed of widgets such as menus,
   checkboxes, push buttons etc.
   
   Finally, X windows is the underlying system that actually gets all of
   the widgets etc. onto your screen. It provides routines for drawing
   lines, circles
   etc. and these are used to draw everything you see. X windows is a lot
   more complicated and powerful than this really, but it would take a
   book to explain why. If you want this level of detail then look at the
   O'Reilly X windows programming series.
   
     2.Do all window managers (like GNOME or KDE or FVWM95) run on top
     of X windows?
     
   Yes, though neither Gnome nor KDE is a window manager. Both of these
   are complete desktops and though they provide window managers, there
   is much more to them than just that. The window manager in KDE is
   called kwm.
   
     3.What exactly does it mean for an application to be GNOME or KDE
     aware? What happens if it's not? Can you still run it?
     
   It means the app will talk to the window manager to get support for
   special features of that environment, and that it will use the
   standard look and feel of the desktop. If the app is not compliant
   then it should still work fine, but the special features will be
   unavailable. The other situation is using a compliant app with a
   nonstandard window manager; in this case too the app should work fine
   (but some features may be unavailable). It is possible for window
   managers other than the standard ones to be compliant, for example
   there is now a KDE-Compliant version of the BlackBox WM.
   
     4.What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
     do?
     
   They provide tools such as edit widgets, menus etc. in a form that
   makes them easy to reuse. The library used by KDE (called Qt, see
   http://www.troll.no/qt) is written in a language called C++ and also
   provides tools for programmers such as routines for platform
   independent access to files and directories etc. GTK+ is similar
   though it has narrower scope and is written in C.
   
     5.How does the history of Linux (or UNIX) window managers compare
     to that of say, the desktop given to Win98/95 users?
     
   Badly :-(
   
     How, specifically, does Microsoft limit consumers' choices by
     giving them just one kind of desktop, supposedly one designed for
     ease of use?
     
   They restrict the system to a single view, which may not be the best
   one for the job. Allowing people the choice means they can choose what
   is best for them, even if it is nonstandard. The downside is that if
   everyone uses a different window manager, supporting and managing the
   system becomes difficult. In between these two options is the choice
   made by most UNIX toolkits: provide a standard window manager, but
   allow people to use another if they want.
   
     6.What's happening with Common Desktop Environment? Is it correct
     that it's not widely adopted among Linux users because it's a
     resource hog, or not open source?
     
   CDE is based on Motif, an old C toolkit that is (IMHO) looking rather
   dated. Motif is very slow and, as you say, very resource hungry. In
   the past, Linux versions have often been buggy, though this situation
   may have improved. I found CDE itself to be quite poor; it works fine
   if you spend all your time in a single application (such as emacs),
   but using drag and drop and some of the built-in tools was generally
   problematic. IMHO it is unlikely to take off on Linux because it is
   pricey and of lower quality than the free alternatives.
   
   --
   Rich
     _________________________________________________________________
   
             Published in Linux Gazette Issue 36, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
      This page maintained by the Editor of Linux Gazette, gazette@ssc.com
      Copyright  1999 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                                 News Bytes
                                      
                                 Contents:
                                      
     * News in General
     * Software Announcements
     _________________________________________________________________
   
                              News in General
     _________________________________________________________________
   
  February 1999 Linux Journal
  
   The February issue of Linux Journal will be hitting the newsstands
   January 11. This issue focuses on Cutting Edge Linux with an article
   on wearable computers by Dr. Steve Mann. Also, featured are articles
   on COAS, Csound, VNC, KDE and GNOME. Check out the Table of Contents
   at http://www.linuxjournal.com/issue58/index.html. To subscribe to
   Linux Journal, go to http://www.linuxjournal.com/ljsubsorder.html.
     _________________________________________________________________
   
  Open Source Petition
  
   Date: Sat, 05 Dec 1998 15:20:19 -0500
   A petition has recently been launched asking the General Services
   Administration of the US Government to evaluate Open Source software
   (OSS) alongside commercial software whenever it buys or upgrades
   computers. The goal of the petition, written by Prof. Clay Shirky and
   sponsored by the Open Source Initiative and O'Reilly and Associates, and
   hosted on www.e-thepeople.com, is to point out that OSS has reached a
   level of quality, reliability and support that makes it competitive
   with existing commercial products.
   
   The ultimate hope is to get vendors of Open Source software included
   in contract bids for Federal Government work.
   
   If you are interested in this petition, there are three things you can
   do:
     * Sign it:
        http://www.ethepeople.com/etp/affiliates/national/fullview.cfm?ETPID=0&PETID=74386&ETPDIR=affiliates/national/
     * Post the press release URL on sites or in other forums whose
       members might be interested in such a thing:
       http://www.shirky.com/opensource/petition.html
     * Pass this message on.
       
   For more information:
   Clay Shirky, clay@shirky.com
     _________________________________________________________________
   
  LinuxWorld Conference & Expo - March 1999
  
   IDG World Expo, the world's leading producer of IT-focused conferences
   and expositions, will produce LinuxWorld Conference & Expo, the first
   international exposition addressing the business and technology issues
   of the Linux operating environment.
   
   Addressing the needs of both the Linux business and development
   communities, LinuxWorld Conference and Expo, headed by Charles Greco,
   President of IDG World Expo, features a high-level, technical
   conference program led by industry luminaries offering advice and
   solutions on the industry's fastest growing operating systems
   technology. An exhibit floor highlighting leading service providers,
   solutions integrators, and development organizations -- Pacific
    HiTech, Enhanced Software, Linux Journal, Knock Software, and Oracle
   among others -- will also include customized event areas such as
   Start-up City, Developer Central and Developer Greenhouse, which will
   spotlight the latest developments and emerging companies in the Linux
   arena.
   
   The first LinuxWorld Conference and Expo will be held March 1-4, 1999
   in San Jose, California at the San Jose Convention Center. The target
   audience includes Linux developers, Fortune 1000 business leaders,
   enterprise managers, CIOs, service providers, system administrators,
   software solution providers, computer consultants, and solutions
   integrators.
   
   Dr. Michael Cowpland, President and CEO, Corel Corporation, Mark
   Jarvis, Senior Vice President of World Wide Marketing, Oracle and
   Linus Torvalds, Creator of Linux, the open source operating system,
   will be the featured keynote speakers on Tuesday, March 2. Keynotes
   are open to all registered attendees.
   
   For more information:
   http://www.linuxworldexpo.com/
     _________________________________________________________________
   
  Debian Project Adopts a Constitution
  
   December 14, 1998
   The Debian Project adopted a constitution which can be viewed at
   http://www.debian.org/devel/constitution/. The highlights of the
    constitution include the creation of the Technical Committee, the
    Project Leader position, the Project Secretary position, Leader
    Delegate positions and a voting procedure. The constitution was
   proposed in September 1998, and after a discussion period the vote
   took place in December 1998. It was virtually unanimously in favor
   with 86 valid votes.
   
   The discussion about the constitution began in early 1998 and was
   carried out on the Debian mailing lists. Most of the discussion can be
   found in the archives of the debian-devel mailing list at
   http://www.debian.org/Lists-Ar chives/. Details of the vote can be
   found at http://www.debian.org/vote/19 99/vote_0000.
   
   The constitution describes the organisational structure for formal
   decisionmaking within the Debian Project. As Debian continues to grow,
   this will be a valuable document to ensure that Debian continues to
   evolve and grow with the input and contributions from its membership.
   
   For more information:
   http://www.debian.org/
     _________________________________________________________________
   
  Linux Links
  
   Linux is the cover story of December Network Magazine:
   http://www.networkmagazine.com/
   
   Perl Web site at The Mining Co.: http://perl.miningco.com/
   
   LinuxCAD review: http://pw2.netcom.com/~rwuest/linuxcadreview.html
   
   Comdex and the Linux pavilion: http://marc.merlins.org/linux/comdex98/
   
   Tea Party: http://marc.merlins.org/linux/teaparty/
   
   The Internet an International Public Treasure: A Proposal:
   http://firstmonday.dk/issues/issue3_10/hauben/index.html
   
   Linux and Apple: http://www.techweb.com/wire/story/TWB19981215S0011
   
   "The money's too good":
   http://www.salonmagazine.com/21st/rose/1998/10/23straight.html
     _________________________________________________________________
   
                           Software Announcements
     _________________________________________________________________
   
  Applix Adds Applixware for Linux On Compaq Alpha
  
   Date: Fri, 4 Dec 1998 18:28:28 -0500
   
   WESTBORO, Mass.--Dec. 1, 1998--Applix, Inc. announced today the
   release of Applixware 4.4.1 for Linux running on COMPAQ's Alpha
   processor.
   
   Applixware includes Applix Words, Spreadsheets, Graphics, Presents,
   HTML Author and Applix Data which provides database connectivity to
   Oracle, Informix, Sybase and other Linux databases. Applix Builder, a
   graphical, object oriented development tool with CORBA connectivity is
   also included in the suite. Microsoft Office 97 document interchange
   is provided through an Applix developed set of filters for Word, Excel
   and PowerPoint.
   
   For more information:
   Applix, Inc., Richard Manly, rmanly@applix.com
   http://linux.applixware.com/
     _________________________________________________________________
   
  NetBeans Announces Support for the Java Development Kit 1.2
  
   New York, Java Business Expo, December 8, 1998 - NetBeans today
   announced that its Java(tm) IDE, NetBeans DeveloperX2, supports and
   runs on Sun Microsystems, Inc.'s Java Development Kit (JDK version
   1.2). This latest release of the JDK provides a rich feature set of
   new class libraries and tools, making it easier than ever for
   developers to create portable, distributed, enterprise-class
   applications. Sun's announcement of the availability of the next
   version of the JDK was made today during the Java Business Expo in New
   York. NetBeans Developer X2 2.1 (beta) supports JDK 1.2 and uses it
   internally. It is available to NetBeans' Early Access Program
   participants.
   
   In addition to overall performance improvements, Sun's new version of
   the JDK enhances the NetBeans IDE by offering features such as drag 'n
   drop, Beans enhancements, collections, JDBC 2.0, and Swing 1.1. Among
   other new features, NetBeans DeveloperX2 will utilize the new APIs for
   grouping and manipulating objects of different types and for extending
   server functionality. JDK 1.2 will also strengthen NetBeans users'
   ability to design more user-friendly interfaces, process images,
   address multilingual requirements, use stylized text, and print.
   
   The final release of NetBeans DeveloperX2 2.1 will be available in
   January, 1999. NetBeans Developer will also be available in a
   concurrent version, which will continue to support JDK 1.1.x. NetBeans
   Enterprise, a multi-user edition of the IDE due in Beta version in
   January, 1999, will support JDK 1.2. The full release of this edition
   of the IDE is due in Spring, '99.
   
   For more information:
   http://www.netbeans.com/ 
   Helena Stolka, helena.stolka@netbeans.com
     _________________________________________________________________
   
  Zope Goes Open Source
  
   Date: Sat, 5 Dec 1998 06:19:32 -0500 (EST)
   Just in case you missed this in LWN, http://www.zope.org/ just went
   online. It's a really nice product for developing web sites. The
   company that created it gave a talk at the DCLUG meeting a few months
    back. They are strong Linux supporters; it's their principal
    platform in house.
   
   For more information:
   http://www.zope.org/
     _________________________________________________________________
   
  KDE on Corel's Netwinder
  
   Ottawa, Canada--November 25, 1998--
   Corel Computer and the KDE project today announced a technology
   relationship that will bring the K Desktop Environment (KDE), a
   sophisticated graphical user environment for Linux and UNIX, to future
   desktop versions of the NetWinder family of Linux-based thin-clients
   and thin-servers. A graphical user interface is a necessary element
   for Corel Computer to create a family of highly reliable, easy-to-use,
   easy-to-manage desktop computers. The alliance between Corel Computer
   and KDE, a non-commercial association of Open Source programmers,
   provides NetWinder users a sophisticated front-end to Linux, a stable
   and robust Unix-like operating system.
   
   Corel Computer has shipped a number of NetWinder DM, or development
   machines, to KDE developers who are helping to port the desktop
   environment. Additionally, NetWinder.Org developers, Raffaele Saena
   and John Olson, were responsible for championing development of KDE on
   the NetWinder. Corel Computer plans to announce the availability of
   desktop versions of the NetWinder running KDE beginning in early 1999.
   Early demonstrations of the port, such as the one shown at the Open
   Systems fair in Wiesbaden, Germany, in September, have been
   enthusiastically received by potential customers.
   
   Based on the Open Source model, Corel Computer is devoting internal
   development resources to the improvement of the KDE project including
   rigorous testing of the environment on the NetWinder. As a developing
   partner, Corel Computer will release its work back to the KDE
   development community.
   
   For more information:
   http://www.corelcomputer.com/
    http://www.kde.org/
     _________________________________________________________________
   
  New Perl Module Enables Application Developers to Use XML
  
   Date: Wed, 25 Nov 1998 06:36:08 -0800 (PST)
   Sebastopol, CA--Perl is the language operating behind the scenes of
   most dynamic Web sites. XML (Extensible Markup Language) is emerging
   as a core standard for Web development. Now a new Perl module (or
   extension) known as XML::Parser allows Perl programmers building
   applications to use XML, and provides an efficient, easy way to parse
   (break down and process) XML document parts.
   
   Perl is renowned for its superior text processing capabilities; XML is
   text that contains markup tags and structures. Thus Perl's support for
   XML offers a natural expansion of the capabilities of both.
   
   XML::Parser is built upon a C library, expat, that is very fast and
   robust. Perl, expat and XML::Parser are all Unicode-aware; that is,
   they read encoding declarations and perform necessary conversions into
   Unicode, a system for "the interchange, processing, and display of the
   written texts of the diverse languages of the modern world"
   (http://www.unicode.org/). Thus a single XML document written in Perl
   can now contain Greek, Hebrew, Chinese and Russian in their proper
   scripts. Expat was authored by James Clark, a highly respected leader
   in the SGML/XML community.
   
   For more information:
   http://www.perl.com/
   http://www.oreilly.com/
   http://perl.oreilly.com/
     _________________________________________________________________
   
  QLM for IT Reduces Cost & Guarantees Certainty of Application Development
  
   Newton, Mass., December 9, 1998 - Kalman Saffran Associates, Inc.
   (KSA), a leading developer of state-of-the-art products and complex IT
   systems for data communications, telecommunications, financial, and
   interactive/CATV industries, today announced the availability of its
   new Quantum Leap Methodology (QLM(tm) ) for IT. QLM for IT is an
   innovative process for information technology organizations looking to
   decrease expense and speed application development. Using QLM for IT,
   KSA increases productivity and certainty by pre-empting the mistakes
   that have historically created barriers to IT project success.
   Successful application of QLM for IT allows upper management to
   refocus on strategic planning and IT objectives, and away from budget
   and schedule overruns. At the same time the methodology sharpens an
   organization's focus on assessment, implementation, verification,
   customization and quantification. This approach allows KSA to
   guarantee speedy results and high quality.
   
   The QLM for IT offering is available starting at $20,000. Companies
   interested in QLM for IT analysis and recommendations or learning more
   about KSA's comprehensive training program should call 1.888.597.9284.
   
   For more information:
   kalsaf@email.msn.com
     _________________________________________________________________
   
  Spectra Logic Announces Alex 4.50, Has Linux Support
  
   BOULDER, Colo., Dec. 15, 1998 - Spectra Logic Corp. today announced
   the availability of Version 4.50 of its award winning Alexandria
   Backup and Archival Librarian software. Alexandria 4.50 adds a number
   of significant new features to provide users with greater
   functionality, reliability, and ease-of-use for backup and recovery of
   large distributed databases and data center applications.
   
   Alexandria 4.50 has been ported to Red Hat and Slackware Linux OSes,
   and additional ports are being developed for Linux OSes from SuSE,
   Caldera, and TurboLinux. Alexandria Linux support is available on the
   Red Hat distribution CD or from Spectra Logic's website at
   http://www.spectralogic.com/linux/index.htm.
   
   For more information:
   http://www.spectralogic.com/
     _________________________________________________________________
   
  WebMaker
  
   Date: Thu, 10 Dec 1998 21:22:25 GMT
   WebMaker, an HTML editor for UNIX, version 0.6 is out now (licensed
   under the GPL).
   
   Main features:
     * nice GUI interface;
     * menus, toolbar and dialogs for tag editing - like HomeSite and
       asWedit;
     * HTML 4.0 support;
     * preview for <IMG> tag (see screenshot);
     * color selectors for bgcolor and other color attributes;
     * color syntax highlighting;
     * preview with external browser (Netscape);
      * ability to filter editor content through any external program
        that supports a stdin/stdout interface;
     * KDE integration.
       
   For more information:
   http://www.services.ru/linux/webmaker/
     _________________________________________________________________
   
             Published in Linux Gazette Issue 36, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
      Copyright  1999 Specialized Systems Consultants, Inc.
      
    "The Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                           (?) The Answer Guy (!)
                                      
                   By James T. Dennis, answerguy@ssc.com
          Starshine Technical Services, http://www.starshine.org/
     _________________________________________________________________
   
  Contents:
  
   (!)Greetings From Jim Dennis
   
   (?)Routing and Subnetting 101
          
   (?)No STREAMS Error while Installing Netware for Linux
          
   (?)More than 8 loopfs Mounts?
          
   (?)eql dual line ppp --or--
          EQL Serial Line "Load Balancing" 
          
   (?)who to report gcc bug to? --or--
          Where to Report Bugs and Send Patches 
          
   (?)RedHat Linux (5.1) and Brand X --or--
           How to "get into" a Linux system from a Microsoft client 
          
   (?)Linux File System recommendations --or--
          Where to Put New and Supplemental Packages 
          
   (?)Your book --or--
          Book: Linux Systems Administration 
          
   (?)FTP Site... --or--
          'ls' Doesn't work for FTP Site 
          
   (?)very general process question --or--
          An Anthropologist Asks About the Linux "Process" 
          
   (?)Locating AV Research --or--
          Looking for a Hardware Vendor: In all the Wrong Places 
          
   (?)question for answerguy --or--
          Letting Those Transfers Run Unattended 
          
   (?)where can i find information about LOFS, TFS --or--
          Translucent, Overlay, Loop, and Union Filesystems 
          
   (?)Modem dial out
          
   (?)Linux Gazette --or--
          Mea Culpea
          
   (?)PAM & chroot (fwd) --or--
          'chroot()' Jails or Cardboard Boxes 
          
   (?)The Linux Swap File --or--
          Swap file on a RAM Disk 
          
   (?)RedHat Linux (5.1) and Brand X --or--
           How to "get into" a Linux system from a Microsoft client 
          
   (?)Dynamic IP Address Publishing Hack
          
   (?)Why 40-second delay in sending mail to SMTP server?
          
   (?)how to install two ethernet cards for proxy server for red hat
          linux --or--
          Linux as Router and Proxy Server: HOWTO? 
          
   (?)ey answer guy! answer this! --or--
          PostScript to GIF 
          
   (?)troubleshooting
          
   (?)More on: "Remote Login as root"
          
   (?)Thank You --or--
          Kudos 
          
   (?)Question --or--
          Linux Support for Intel Pentium II Xeon CPU's and Chipsets 
          
   (?)isp --or--
          Linux Friendly ISP's: SF Bay Area 
          
   (?)Hello I need some help --or--
          Eight Character login Name Limit 
          
   (?)Locked Out of His Mailserver
          
   (?)Changing the color depth for your x-server? --or--
          Changing the X Server's Default Color Depth 
          
   (?)Num Lock and X apps --or--
          NumLock and X Problems 
          
   (?)NE2000 "clones" --- not "cloney" enough! --or--
          Expansion on NE-2000 Cards: Some PCI models "okay" 
          
   (?)MySql --or--
          Finding info on MySqL? 
          
   (?)read please very important --or--
          Spying: (AOL Instant Messenger or ICQ): No Joy! 
          
   (?)Tuning monitors for use with X --or--
          Fraser Valley LUG's Monitor DB 
          
   (?)chattr =u and then what? --or--
          ext2fs "Undeletable" Attribute 
          
   (?)How to Install Linux on an RS6000?
          
   (?)Real PS Printing --or--
          Advanced Printer Support: 800x600 dpi + 11x17" Paper 
          
   (?)TAG suggestions
          
   (?)password change --or--
          CGI Driven Password Changes 
          
   (?)ifconfig reports TX errors on v2.1.x kernels
          
   (?)Trident 9685 tv --or--
          Support for Trident Video/Television Adapter 
          
   (?)Looking for info on BIOS setup --or--
          Plug and Pray Problems 
          
   (?)Mount linux drives from win9x/nt? password encryption seems to be a
          problem... --or--
          Sharing/Exporting Linux Directories to Windows '9x/NT 
          
   (?)Mail processing
          
   (?)Printing question --or--
          Extra Formfeed from Windows '95 
          
   (?)Root password --or--
          Can't Login in as Root 
          
   (?)Alternate root-password recovery option --or--
          Alternative Method for Recovering from Root Password Loss 
          
   (?)Journal File Support and Tarantella? --or--
          SCOldies Bragging Rights 
          
   (?)Remote tape access, using local CPU --or--
          Application Direct Access to Remote Tape Drive 
          
   (?)Mounting CD Drives from SoundCard --or--
          Mounting multiple CD's 
          
   (?)Re: leafnode-1.7 -- news server for small sites --or--
          More on Multi-Feed Netnews (leafnode) 
          
   (?)rsh config --or--
          Getting 'rsh' to work 
          
   (?)update on your answer - netware clients --or--
          Linux as a Netware Client 
          
   (?)LILO Default
          
   (?)uninstall help --or--
          Uninstalling Linux 
          
   (?)Compiling kernel --or--
          Making a Kernel Requires 'make' 
          
   (?)memory usage --or--
          Using only 64Mb out of 128Mb Available 
          
   (?)Manipulating Clusters on a Floppy ...
          
   (?)Setting up ircd
          
   (?)Sendmail on private net with UUCP link to Internet
          
   (?)Linux in general --or--
          Complaint Department: 
          
   (?)A Dual Modem configuration... how do I get it to work? --or--
          eql Not Working 
          
   (?)HELP: fetchmail dies after RH 5.2 upgrade --or--
          Upgrade Kills Name Server 
          
   (?)Question (what else?) --or--
          MS Applications Support For Linux 
          
   (?)Linux as a Home Internet Gateway and Server
          
   (?)lilo --or--
          Persistent Boot Sector 
          
   (?)preference=20 --or--
          Secondary MX Records: How and Why 
          
   (?)LPD forks and hangs/Linux --or--
          'lpd' Bug: "restricted service" option; Hangs Printer Daemon 
          
   (?)Dual booting NT or Win9x with Linux (Red Hat 5.2) --or--
          Dual Boot Configurations 
          
   (?)Can you give me a Suggestion?/ --or--
          Microtek Scanner Support: Alejandro's Tale 
          
   (?)Offer to make available Winmodem interface spec --or--
          Modem HOWTO Author Gets Offer RE: WinModems 
          
   (?)I do know i am boring (ma windows fa veramente cagare) --or--
          Condolences to Another Victim of the "LoseModem" Conspiracy 
          
   (?)Kai Makisara: Re: audio-DAT on SCSI streamer? --or--
          More on: Reading Audio Tapes using HP-DAT Drive 
          
   (?)Just a sugestion... --or--
          Best of Answer Guy: A Volunteer? 
          
   (?)more on keybindings --or--
          termcap/terminfo Oddities to Remotely Run SCO App 
          
   (?)Arabic? --or--
          Arabic BiDi Support for Linux 
          
   (?)Updates: Risks and rewards --or--
          Automated Updates 
          
   (?)Liam Greenwood: Your XDM question
          
   (?)rsh on 2.0.34 --or--
          'rsh' as 'root' Denied 
            ____________________________________________________
   
(!) Greetings from Jim Dennis

   Happy New Year everybody. I would say more, but I think I've said
   enough for this month...
            ____________________________________________________
   
(?) Routing and Subnetting 101

   From pashah on Wed, 18 Nov 1998 on the L.U.S.T List
   
   Hullo list, 
   
    what is the way to divide a net into subnets according to bit
    borders? 
   
     (!) This is a very large subject --- and your question isn't
     sufficiently detailed to offer much of a clue as to how much
     background you really need.
     
     However, I'm writing a book on Linux Systems Administration, and I
     have to put some discussion of this somewhere in around chapter 12,
     so I might as well try here.
     
      "Subnetting" is a means of dividing a block of IP addresses into
      separately routable groups. If you are assigned a Class C address
      block (256 addresses, 254 of them usable for hosts) it often makes
      sense to subnet those in some way that's appropriate to your LAN
      layout.
     
     (!) [Paul Anderson] Also known as a /24, IIRC. TTYL!
     
     Paul Anderson - Self-employed Megalomaniac
     Member of the Sarnia Linux User's Group http://www.sar-net.com/slug
     
      For example you might split the block (let's say it's 192.168.200.*)
     into two subnets of 126 hosts each. We might assign half of them to
     an "external" or "perimeter" segment (an ethernet segment that
     contains all of our Internet visible hosts) while we assign the
     other addresses to our "internal" LAN.
     
     (Actually there are better ways to do that --- where we use
     "private net" (RFC1918) addresses on all of our internal LAN's ---
     and masquerading and/or proxying for all Internet access and
     internetwork routing. However, we'll ignore those methods for now).
     
     To do this we use a "netmask" option on the 'ifconfig' commands for
     each of the interfaces on our network. We'll have to put a router
     between our two segments. Conventionally primary routers are
     assigned the first available address on their subnets. So we'd
     assume that we're using a Linux system with two ethernet cards as
     our router. This would use the following commands to configure
     those two addresses:
     
                ifconfig  eth0 192.168.200.1 \
                        netmask 255.255.255.128 \
                        broadcast 192.168.200.127

                ifconfig  eth1 192.168.200.129 \
                        netmask 255.255.255.128 \
                        broadcast 192.168.200.255

     ... note that the 129 address in our original block becomes the
     first address in our upper subnet. We have subnetted into two
     blocks. (None of this makes sense unless you look at these numbers
     in binary).
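
      To make the binary view concrete, here is a small sketch (my own,
      not part of the original article) of what the 255.255.255.128 mask
      does to the last octet of an address; the .142 host is an invented
      example:

```shell
# The /25 netmask 255.255.255.128 leaves bit 7 of the last octet as a
# network bit and bits 0-6 as host bits. The .142 address is made up.
ip=142; mask=128                  # last octets of host address and netmask
net=$(( ip & mask ))              # network number: host bits masked off
bcast=$(( net | (255 ^ mask) ))   # broadcast: host bits all set to 1
echo "192.168.200.$ip is on subnet .$net (broadcast .$bcast)"
```

      Run it with any last octet: values 0 through 127 land on the lower
      subnet, 128 through 255 on the upper one.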
     
     For this to work we'll also have to configure corresponding routes.
     In the 2.0 kernels and earlier it is/was necessary to do this as a
     separate operation. In the 2.1 kernel a route is automatically
     added for each 'ifconfig' command. For our example the routes would
     look like:
     
                route add -net 192.168.200.0 netmask 255.255.255.128 eth0
                route add -net 192.168.200.128 netmask 255.255.255.128 eth1

     ... I'm assuming, in this case, that we also have an ISP that has
     assigned this address block. Actually my examples are using
     addresses from RFC1918, these are reserved for "private" or
     "non-Internet" use --- and would never actually be issued by an
     ISP. However, they'll serve for our purposes. Let's assume that you
     had a simple PPP link to your ISP (or to some external ISDN, xDSL,
     CSU/DSU or other ISP provided device which is your connection point
     to them). They might have assigned one of their addresses to your
     border router, or they might expect that you'll assign your .1
     address to it. Somewhere on their end they'll have a route that
     looks something like:
     
                route add -net 192.168.200.0 gw 192.168.200.1

     This says that your router (.1) is the gateway (gw) for that
     network (192.168.200.*). Note that their netmask for you is
      255.255.255.0 --- theirs differs from your idea of your netmask.
     That's because your router will handle the routing internal to your
     LAN.
     
     It might be the case that you have to assign your .1 address to
     your ppp0 interface, and perhaps your .2 address to eth0. That
     won't affect any of what I've said so far (other than the one digit
     in one of our 'ifconfig' commands). All of our routes are the same.
     
     In any event we'll want a default route to be active on our router
     anytime our connection to the Internet is up. The hosts on either
     of our subnets can all declare our router as their default route.
     Thus all of the hosts on the 192.168.200 subnet (2 through 126) can
     use a command like:
     
                route add default gw 192.168.200.1

     ... while all of the hosts on our upper subnet (192.168.200.128 ---
     129 through 254) would use:
     
                route add default gw 192.168.200.129

     Note that we can't use hosts numbered ...127 and ...255 in this
     example. For each subnet we create we "lose" two IP addresses. One
     is for the "network number" (offset zero from our subnet) and the
     other is for the broadcast address (the last offset from our
     network number for our subnet).
     
     We can have routes to gateways other than our "default." For
     example if I had a more complicated internetwork with a set of
     machines with addresses of the form 172.16.*.* (another RFC1918
     reserved block) I could use a command like:
     
                route add -net 172.16.0.0 gw 192.168.200.5

     ... to declare my local system (....5) as the gateway to that whole
     block of Class B addresses. Locally I don't care how the 172.16.*.*
     addresses are subnetted on their end. I just send all of their
     packets to their routers and those routers figure out the details.
     Of course if our .1/.129 router (from our earlier examples) has
     this route, than all of our other client systems on both
     192.168.200 subnets could just use their default route. This might
      result in an extra hop for the systems on the 192.168.200.0 lower
      network (one to the .1 router, and another from there to the .5
      router). However, it does centralize the administration of our
      local routing tables.
     
     All of the routing that I've been describing is "static" (I've
      been using the 'route' command to establish all of the routes). Another
     option for larger and more complicated networks is to use a dynamic
     routing protocol, such as RIP. To do that, we have to run the
     'routed' or (better) the 'gated' command on each of our routers.
     
     In a typical leaf site (a LAN with only one router, therefore only
     one route in or out) we only run 'routed' or 'gated' on the router.
     All nonlocal traffic has to go to that one router anyway. In many
     cases we want our routers to be "quiet" (to listen to our routes,
     but not advertise any of their own). There are options to the
     'routed' and 'gated' commands to do this. As you get into the
     intricacies of routing in larger environments, and of dynamically
     maintaining routes (like ISP's must do for their customers) you
     enter into some pretty specialized and rarefied territory (and will
     fly past my level of expertise).
     
     Routing on the Internet is currently managed through the BGP4
     protocols, as implemented in 'gated' and various dedicated router
     products like Cisco's IOS.
     
      More about 'gated' can be found at the Merit site:
     
     http://www.gated.merit.edu/~gated
     
     In order to participate in routing on the Internet (to be a first
     tier ISP like UUNet, PSInet, etc) or to be a truly "multi-homed"
     site (to optimally use feeds from multiple ISP's concurrently)
      you'd have to get an AS (autonomous system) number and "peer" with
      your ISP's. Because any mistake on your part can propagate bogus
     routes to your peers --- which can cause considerable disruption
     across the net --- this is all way beyond the typical network
     administrator.
     
      (I'm told that the routing infrastructure has been tightened up
      quite a bit in the last couple of years. Some of the great
      Internet "blackouts" from '96 and '97 were caused by erroneous
      route propagations across the backbone peers. So now most of
      these sites have configured their routers to only accept
      appropriate routes from each peer.)
     
     The subnet I've been describing is a "1-bit" subnet. That is that
     we're only masking off one extra bit from the default for our
     addressing class. In other words, the default mask for a Class C
     network block is 255.255.255.0 --- which is a decimal
      representation of a 32-bit field where the first 24 bits are set
      to "1". Our subnet mask, represented in binary, would have the
      first 25 bits set. The next legal subnet would have the first 26
      bits set
     (which divides a Class C into four subnets of 62 hosts each).
     Beyond that we can subnet to 27 bits (eight subnets of 30 hosts
     each), 28 bits (16 subnets of 14 hosts each), 29 bits (32 subnets
     of 6 each) and even 30 bits (64 subnets of 2 each).
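
      The progression above can be checked mechanically. This loop (my
      own sketch, not from the original article) derives the subnet
      count and usable host count for each mask length:

```shell
# For each subnet mask length, compute how many subnets a Class C
# splits into and how many usable host addresses each subnet holds
# (two addresses per subnet are lost to network and broadcast).
for bits in 25 26 27 28 29 30; do
  subnets=$(( 1 << (bits - 24) ))          # 2^(extra mask bits)
  hosts=$(( (1 << (32 - bits)) - 2 ))      # subnet size minus net/bcast
  echo "/$bits: $subnets subnets of $hosts hosts"
done
```

      The output reproduces the figures quoted in the text, from
      "/25: 2 subnets of 126 hosts" down to "/30: 64 subnets of 2 hosts".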
     
     So far as I know a 31 bit mask is useless. A 32 bit mask defines a
     point-to-point route.
     
     Ultimately all these masks and subnets are used for all routing
     decisions. In a typical host with only one interface the subnet
     mask is used only to distinguish between "local" and "non-local"
     addresses.
     
     For any destination IP address the host "masks off" the trailing
     bits, and then compares the result to the "masked off" versions of
      each local interface address. If the masks match then the
     address is local, and the kernel (or other routing code) looks for
     a MAC (media access control) or lower level (framing) address. If
     one isn't found an ARP (address resolution protocol) transaction is
     performed where the host broadcasts a message to the local LAN to
      ask where it should send a locally destined packet.
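
      That local/non-local comparison can be sketched as a one-line
      test (a hypothetical helper of mine, not kernel code; last octets
      only):

```shell
# A host treats a destination as "local" when destination-AND-mask
# equals interface-address-AND-mask (shown here on last octets only).
is_local() {   # args: dest-octet  interface-octet  mask-octet
  [ $(( $1 & $3 )) -eq $(( $2 & $3 )) ]
}
is_local 140 1 128 && echo local || echo non-local  # .140 seen from .1 under /25
```

      Here .140 masks to .128 while the .1 interface masks to .0, so the
      packet goes to a router rather than out via ARP.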
     
     If you have a bad subnet set on a host one of two things can
     happen. It might be unable to communicate with the hosts on any
     other subnets (it thinks those are local addresses and tries to do
     ARP's to find them --- then it figures they must be down since
      there's no response to the ARP requests). It might also send
     locally destined packets to the router (which should bounce them
     back to the local net --- if the router is properly configured). Of
     course that might only work if the bad subnet mask doesn't
      interfere with the host's ability to get packets to its
     gateway/router. Obviously it's better to have your subnet masks
     properly defined throughout.
     
      If the address isn't local to any interface then the routing code
     searches through its list of routes to look for the "most specific"
     or "best" match. If there is a default route (pointing to a
     gateway) then anything with no other match will get sent to that.
     
     Obviously one of the constraints posed by this classic routing and
     subnetting model is that you can only subnet to a few even sized
      blocks. We can't define one block of 14 or 30 addresses (for our
      perimeter net) and have all of the rest routed to our larger
      internal LAN segment. Actually it is possible, with some
      equipment, to do this. That's called "variable length subnetting"
      or VLSN (sometimes called VLSM, for "variable length subnet
      masks").
     
     RIP and the other old routing protocols (EGP, IGRP, etc) don't
     support VLSN (from what I've read in the Cisco FAQ). However, the
     modern OSPF, BGP4, and EIGRP protocols do. Each routing table entry
      has its own independent mask or "prefix" number.
     
     It appears that Linux can handle VLSN by simply over-riding the
     netmask for a given network when defining static routes. Presumably
     packages like 'gated' can also provide the appropriate arguments
     when updating the kernel's routing table, so long as the route
     exchange protocol can provide it with the requisite extra
     information.
     
      Thus, going back to our example, you might configure your
      192.168.200 network into a block of 30 addresses for the perimeter
      network (on eth0 in our example) and put the rest onto the
      interior net (using eth1). I'm just guessing here --- since I
     haven't actually done this, but I guess that you'd define the
     netmasks in the ifconfig command to be "255.255.255.0" (24 bit),
     while over-riding it in the routes with commands like:
     
                route add -net 192.168.200.0 \
                        netmask 255.255.255.224  eth0
                route add -net 192.168.200.0 \
                        netmask 255.255.255.0   eth1

     At a glance this would appear to be ambiguous. There would seem to
     be two possible routes for some addresses. However, the routing
      rules handle it just fine. One of the masks is longer than the
     other --- and the "most specific" (longest mask) wins.
     
     That's why we can have a host route (one without the "-net" option)
      that over-rides any of our network routes. (Its mask is 32 bits
     long). Note: although I've shown these in order, most specific
     towards least so --- it shouldn't matter what order you add the
     routes in.
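
      The "longest mask wins" rule can be sketched like this (a toy
      table of my own invention; the kernel's real lookup is far more
      elaborate, and only last octets are shown):

```shell
# Pick the most specific route for 192.168.200.10 from a toy table of
# "network mask interface prefix-length" entries (last octets only).
dest=10; best_len=-1; best_if=none
while read net mask iface len; do
  # a route matches when dest-AND-mask equals the route's network;
  # among matches, the longest prefix (largest len) wins
  if [ $(( dest & mask )) -eq "$net" ] && [ "$len" -gt "$best_len" ]; then
    best_len=$len; best_if=$iface
  fi
done <<EOF
0 0 eth1 24
0 224 eth0 27
EOF
echo "192.168.200.$dest routes via $best_if"
```

      Both entries match .10, but the 27-bit route is more specific, so
      eth0 is chosen regardless of the order of the table.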
     
     It's also possible for us to have these two subnets separated from
     one another by intervening networks. I should be able to define a
     gateway to a subnet with a command like:
     
                route add -net 192.168.200.0 \
                        netmask 255.255.255.0 gw 172.17.2.1

     ... where 172.17.2.1 is some host, somewhere, to which I do have a
     valid route.
     
     In any event I did hit Yahoo! to try and confirm that Linux
     supports VLSN's. I found a message from a frustrated network
     manager who had prototyped a whole network, testing it with Linux
     and depending on VLSN support --- and then finding that Solaris 2.5
     didn't support them. (That was in early '97 --- allegedly 2.6 has
     added this support and presumably the new Solaris 7 also supports
     them). I also know that the route commands will actually add
     entries to your routing table (I created some bogus routes on
     another VC while I was writing this). However, I don't have time to
     set up a proper experiment to prove the point. It appears that
     Linux has supported VLSN's for some time.
     
     Throughout this message I've talked about "classes" of addresses.
     These were classic categories into which IPv4 addresses are cast
     which define the default netmasks and addressing blocks for them.
     For example 10.*.*.* is a Class A network. (In fact it is the one
     Class A address block that is reserved for private network use in
     RFC1918 et al). 56.*.*.* is the Class A network assigned to the
      United States Postal Service, and 17.*.*.* is assigned to Apple
      Computer, Inc. However, these classes are being phased out of the
     Internet routing infrastructure through a process called
      "supernetting" or CIDR (Classless Inter-Domain Routing). Support
     for VLSN is a requirement for CIDR. (That's a matter for your ISP
     or your ISP's NAP -- network access point --- to worry about).
     
     In the old days if you got a block of addresses and you changed
      ISP's you'd take your addresses with you. Your new ISP would add
      your block of addresses to his routing tables and propagate this
     route to his peers and so on through the Internet routing chain.
      The problem was that this isn't scalable. The routing tables were
     getting so big that the first tier routers couldn't handle them.
     
      So we started using CIDR. A CIDR block is a large chunk of addresses
     (32 Class C's minimum). These are given to NAP's and ISP's, and a
     single route, for the whole block, is added to the top level
     routers. The ISP then subnets those and handles the routing
     locally. Although addresses are now routed in a "classless" manner
     --- we still talk about the addressing classes in networking
     discussions. It's convenient, though sometimes not technically
     precise.
     
      The main implication of this for most of us is that you don't get
      to "take your addresses with you" if you change ISP's. You can keep
     your domain name, of course. That's completely independent of the
     routing. (Theoretically it's always been possible to have a block
     of addresses with no associated DNS at all. I don't know anyone
     that does that --- but there isn't any rule against it).
     
     I said earlier that the "better" solution to your internal network
     addressing is to use private network addresses (per RFC1918) and
     use IP masquerading, NAT (network address translation) or
     applications level proxies at your borders for all of your client
     Internet access.
     
     In this model you only assign "real" IP addresses to your publicly
     accessible servers.
     
     This is "better" for several reasons. First, you conserve
     addresses. You can have thousands of hosts on your network and they
     can all access the Internet using only one or a few "real" IP
     addresses.
     
     This is particularly handy these days since ISP's (feeling a bit of
     an addressing crunch themselves) often charge premium rates for
     larger subnets. In the "old days" you got a Class C or larger
     address block for any dedicated Internet connection that you
     established. Now you usually get a subnet. For the xDSL line I just
     got into my office/home I got a subnet of 30 addresses
     (255.255.255.224, or 27 bits for the netmask).
     
     So, you can use 192.168.x.* addresses for all/most of your clients
     and reserve your "real" IP's for your router, and your mail, web,
     FTP, DNS, proxy and other servers (including any old-fashioned
     virtual web hosting; newer HTTP 1.1 style web hosting doesn't
     require an extra IP address and IP aliasing but "virtual hosting"
     for most other protocols and services does).
     
      If you're really ambitious you could probably configure a server
     with 'ipportfw' and/or 'ipautofw' (or 'ipmasqadm') to redirect each
     service on this list through a masquerade to its own dedicated
     server(s). I've heard that there's even a "load balancing" patch to
     one of these port forwarders. That would conserve more addresses by
     making one system appear to be running many services --- while
     allowing you to isolate those services on their own systems for
     security or load management reasons.
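
      As a concrete sketch of the masquerading setup described above
      (assuming the then-new 2.2 kernel's 'ipchains' tool; 2.0 kernels
      use the older 'ipfwadm' syntax instead, and the private block
      shown is the article's example network):

```shell
# Masquerade everything leaving the private LAN; internal hosts simply
# use this router as their default gateway.
ipchains -P forward DENY                        # default: forward nothing
ipchains -A forward -s 192.168.200.0/24 -j MASQ # rewrite outbound packets
echo 1 > /proc/sys/net/ipv4/ip_forward          # enable IP forwarding
```

      With a DENY forward policy, only traffic explicitly matched by the
      MASQ rule crosses the router, which is part of the security benefit
      discussed above.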
     
     Another advantage of this model is that you can change ISP's more
     readily. For any network of more than about five IP hosts, address
     renumbering is difficult and expensive. You want to avoid it. Of
     course you can use DHCP to make that easier --- but then you have
      to carry around your DHCP infrastructure, and you can only imagine the
     disruption that this might still cause for your internal servers.
     I've known companies that were very unhappy with their ISP but not
      quite mad enough to shut down their network for a week to renumber
     (large novice userbase, small IS staff, mostly Windows clients ---
     it's a real concern).
     
     Yet another advantage relates to your network security. It is
     easier to enforce your network policies and protect your internal
     systems if you prevent direct routing into your internal LAN. It is
     much easier to ensure that a few machines (your routers, proxy
     servers, and publicly accessible hosts) are secure from known
     attacks (source routing, "ping of death" and various things like
     nestea, boink, land/latierra, etc) than to apply those patches to
     every host on your network. (Indeed in many cases it is not
     possible to apply necessary patches to some of those hosts because
     they are running proprietary, or "closed source" operating systems
     --- and you have to wait for your vendor to make correct patches or
     "service packs" available).
     
     It is folly to think that no new attacks of this sort will be
     discovered. It is also usually futile to have an unenforced policy
     that no insecure services be allowed on internal systems.
     
     So you should use IP masquerading and/or applications proxying for
     most hosts on most networks. Of course you can use "real" IP
     addresses and still "hide" them behind a firewall (any combination
     of packet filters and proxying can be called a 'firewall').
     However, there's no reason (at that point) to do so.
     
     It should be noted that use of masquerading and/or proxying will
     not inherently improve your overall security. These are
     not a panacea. If an attacker can gain sufficient access to any of
     the hosts that do have a valid route into your internal LAN (such
     as the interior routers and/or proxy hosts) or trick any such
     system into routing packets for them (with source routing, for
     example) or embed hostile code into any of the data streams that
     will be executed by any of your systems ... if they can do any of
     that then the firewall will just be a minor nuisance to their other
     mischief.
     
     Indeed using masquerading and proxying is a bit of a nuisance. It's
     an extra step in configuring your systems, and you'll probably
     still occasionally bump into some new or obscure protocol that
     can't be easily proxied or masqueraded. Luckily, as the number of
     sites that must use firewalls increases (the percentage of
     "directly routable clients" decreases) the programmers and groups
     that design these protocols and tools become more aware of the
     problem and less likely to implement them in problematic ways.
     
     One aspect of this that is a bit confusing is that you can put
     multiple subnets and IP address blocks on a single ethernet
     segment.
     
     For example, a few years ago I was the admin of a large site which
     had established permanent connections to three ISP's. They had not
     yet applied for an AS number and were not "peering" with those
     ISP's. So they were assigning addresses to different groups of
     computers from all three ISP's (about eight different Class C
     blocks). However, they used a VLAN architecture internally.
     (That --- and the fact that they were using direct routing to
     clients --- was counter to my recommendations; but I was just a
     lowly "junior" netadmin, so they didn't listen, until much later
     --- after I'd left).
     
     So they had a flat internal topology and some routing problems
     (their senior netadmin didn't know how to trick the Ciscos into
     this using static routes, and we didn't use RIP or anything like
     it internally). I used IP aliases on a Linux box and defined the
     static routes there. Under current versions of Linux you can use IP
     aliases in your route commands:
     
                ifconfig  eth0 192.168.200.1 \
                        netmask 255.255.255.0

                ifconfig  eth0:1 192.168.100.1 \
                        netmask 255.255.255.0

                route add -net 192.168.200.0 eth0
                route add -net 192.168.100.0 eth0:1

     ... here I've configured the 200 net on eth0 and the 100 net on
     eth0:1 (a "sub-interface" or IP alias), and added routes to each.
     
     Under the newer (2.1.x) kernels this works a little differently ---
     you just use the device name without the aliasing suffix in the
     route command. In other words the ifconfig commands would be the
     same, the first route command would be unnecessary (it's added
     automatically) and the second route command would just refer to
     eth0 --- not eth0:1.
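     So under a 2.1.x kernel the same setup would look like this (an
     untested sketch following the description above --- same example
     addresses):

```shell
# 2.1.x-style: same ifconfig commands; the route for the parent
# interface is added automatically, and the aliased net is routed
# via the plain device name rather than eth0:1.
ifconfig eth0 192.168.200.1 netmask 255.255.255.0
ifconfig eth0:1 192.168.100.1 netmask 255.255.255.0
route add -net 192.168.100.0 netmask 255.255.255.0 eth0
```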
     
     This may look a bit odd. (It certainly did to me at the time).
     Your clients on the 100 network are sending their 200-net destined
     packets to this host, which is then resending them over the same
     LAN segments back to destinations on the 200 net, and vice versa.
     I still think it's a stupid way to do it --- but it worked. I
     personally think that VLAN's are a bad idea --- and they seem to
     have been a kludge to deal with overgrown clusters of
     NetBIOS/NetBEUI (MS Windows) boxes that were too braindead to talk
     IP.
     
     One thing I haven't covered in this (extremely long) discussion is
     "proxyarp." This is a technique to allow one system to accept IP
     packets for other systems without changing the subnet masks and/or
     routes for the rest of the segment. It's most often used with PPP
     or SLIP dial-up lines --- though I've seen examples posted to
     newsgroups that were done between ethernet segments.
     
     Basically, the proxyarp host will respond to ARP requests for IP
     addresses that are not assigned to any of its interfaces. The
     proxyarp host needs a valid route to the proxied IP address --- but
     other systems will consider it to be a "local" address (local to
     their LAN segment). Obviously the address to be proxied must be
     valid for one of the subnet masks on the "local side."
     
     I'm sure this is all very confusing. So I'll give a simple example:
     
     I might have a host on 192.168.200 net with its own address of
     192.168.200.13 (eth0). I might also have a system connected to that
     system's ppp0 port --- and that might be configured to use
     192.168.200.44. When any of the systems on my LAN (eth0) have
     packets for 192.168.200.44 (which is local to them according to
     their subnet masks and routing tables) they perform an ARP (or
     search their ARP cache, of course). My system (listening on
     192.168.200.13) responds with its ethernet MAC address. So the
     local hosts and routers send those packets to me. (So far as
     they are concerned that's just another IP alias of mine).
     
     When I (.13) get this packet I find that it is NOT an alias of
     mine, but I have a valid route to it (over my ppp0 interface) so I
     forward it. The .44 system presumably has its ppp0 interface
     configured as the default route and certainly has 192.168.200.0
     routed to its ppp0 --- so any packets to my (.13's) ethernet LAN
     get routed, too. Note that I (the .13 host) don't have to publish
     routes to .44. The routers and other hosts on the 200 LAN don't
     know or care whether I really am .44 --- just that IP packets for
     .44 can be encapsulated in data frames addressed to my ethernet
     card, where I'll deal with them as though it were my address (so
     far as they know).
     
     I realize it's a bit confusing. I've probably over-simplified in a
     few areas and probably gotten some of this completely wrong
     (corrections gratefully accepted). However, that's the basics of
     routing and subnetting.
     
     One of these days I really should read Comer's "Internetworking
     With Tcp/Ip : Principles, Protocols, and Architecture Vol 1" which
     I've heard is essentially the TCP/IP bible. However, I've had
     Christian Huitema's "Routing in the Internet" (a 300 page text book
     on routing) sitting next to my desk for about a year --- and
     Comer's book is much larger and not to hand.
     
     So, in answer to your original question:
     
     You divide a group of systems into subnets by assigning them
     addresses that lie within valid groupings of your address blocks,
     and creating routes to those blocks. Most of this is done with the
     'ifconfig' command's "netmask" option and with appropriate 'route'
     commands (if you're using static routes).
     
     (Any other readers want to tell me how 'routed' and 'gated' get
     their routes? I guess that you still add static routes for your
     local nets and the local daemon picks them up and
     publishes/propagates them via broadcasts and their own router
     discovery mechanisms).
                        ____________________________
   
(?) Subnetting and Routing 101 (continued)

                          Some examples and tables
                                      
   From Pavel Plankov on Fri, 20 Nov 1998 L.U.S.T List 
   
   (?) Thank you, that was very informative, but could you be more
   specific about "masking off" For example I have a 62.200.34 net, how
   can I subnet it? 
   
   ...the only thing I am sure about is that 62.200.34.0/24 - is the C
   subnet. the quote at the bottom sounds rather vague %) 
   
     (!) The subnet I've been describing is a "1-bit" subnet. That is
     that we're only masking off one extra bit from the default for our
     addressing class. In other words, the default mask for a Class C
     network block is 255.255.255.0 --- which is a decimal
     representation of a 32-bit field where the first 24 bits are set
     to "1". Our subnet mask, represented in binary, would have the
     first 25 bits set. The next legal subnet mask would have the first
     26 bits set
     (which divides a Class C into four subnets of 62 hosts each).
     Beyond that we can subnet to 27 bits (eight subnets of 30 hosts
     each), 28 bits (16 subnets of 14 hosts each), 29 bits (32 subnets
     of 6 each) and even 30 bits (64 subnets of 2 each).
     
     Any Class C (or 8 bit network) can be subnetted into the following
     combinations:
     
            1  subnetwork of    254 hosts       (255.255.255.0)/24
            2  subnetworks of   126 hosts each  (255.255.255.128)/25
            4  subnetworks of    62 hosts each  (255.255.255.192)/26
            8  subnetworks of    30 hosts each  (255.255.255.224)/27
           16  subnetworks of    14 hosts each  (255.255.255.240)/28
           32  subnetworks of     6 hosts each  (255.255.255.248)/29
           64  subnetworks of     2 hosts each  (255.255.255.252)/30

     ... or (from what I gather) it can be treated as a set of 254
     separate point-to-point links. (A subnet consisting of only a
     network number and a broadcast address would be absurd --- so we
     don't have "128 nets of 0 hosts each" with a mask ending in .254.)
     
     Notice that I've specified the netmask and the number of network
     bits in the last column of this table.
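     The host counts in that table all come from one formula: two to
     the power of the number of host bits, minus two (the all-zeros
     network number and the all-ones broadcast address are reserved in
     each subnet). A quick shell check (my sketch, not part of the
     original discussion):

```shell
# hosts per subnet = 2^(32 - prefix) - 2
for prefix in 25 26 27 28 29 30; do
    echo "/$prefix: $(( (1 << (32 - prefix)) - 2 )) hosts"
done
```

     ... which prints 126, 62, 30, 14, 6 and 2 --- matching the table.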
     
     So. Let's say I didn't have this table. (I didn't when I started
     this message). So I want to find all of the valid netmasks on an
     eight bit network. I start the 'bc' command (an arbitrary-precision
     calculator and scripting language that's included with most
     versions of Unix and Linux). I issue the
     following commands:
     
                ibase=2
                10000000
                11000000
                11100000
                11110000
                11111000
                11111100

     This sets the input base to 2 (binary), leaving the output base at
     the default (decimal). Then I enter each of these binary numbers
     (note that this is every combination of 8 bits with anywhere from
     one to six leading ones and a corresponding number of trailing
     zeros --- all modern legal netmasks have this property). As each
     of these numbers is entered, 'bc' spits out the decimal
     equivalent:
     
                128 192 224 240 248 252

     ... which matches my table -- these are the valid ways to subnet on
     8 bits. (Actually I memorized those a long time ago --- but
     hopefully this makes it clear where they came from).
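     If you don't have 'bc' handy, plain shell arithmetic gives the
     same list (a sketch of mine: each mask octet is 256 minus two to
     the power of the number of trailing zero bits):

```shell
# For one through six leading one-bits in an octet, the netmask
# octet value is 256 - 2^(8 - leading ones):
ones=1
while [ "$ones" -le 6 ]; do
    echo $(( 256 - (1 << (8 - ones)) ))
    ones=$(( ones + 1 ))
done
```

     ... printing 128, 192, 224, 240, 248 and 252, one per line.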
     
     For "classic" subnetting, you pick any one of these entries. You
     then divide your network into that number of segments (2, 4, 8,
     etc) with up to the corresponding number of hosts per segment
     (126, 62, 30, etc),
     and you use the corresponding netmask in the 'ifconfig' commands
     for all hosts on that network. 'route add -net' commands will
     default to following the chosen netmask.
     
     VLSM (variable length subnet masking) is a little more confusing,
     so we won't cover it at this point.
     
     Given that we've chosen a subnetting paradigm (one line from this
     table) we now have to figure out what the valid network number,
     broadcast addresses, and range of host IP addresses are within each
     subnet.
     
     We could have a table for each of these. This would take too much
     space (actually it's about 128 lines long plus headers, etc). So,
     I'll give an example of the .224 netmask used to create 8 subnets.
     
     For all of these the netmask would be 255.255.255.224 (as listed in
     our previous table). The three prefix octets would be the same in
     all cases (62.200.34 in your example).
     
     Here's our networks:
     
            8  subnetworks of    30 hosts each  (255.255.255.224)

           net#         broadcast       Hosts:  low     high
             0             31                     1      30
            32             63                    33      62
            64             95                    65      94
            96            127                    97     126
           128            159                   129     158
           160            191                   161     190
           192            223                   193     222
           224            255                   225     254

     ... I think I got all those right (I just made up that table). It
     should be fairly obvious that the networks begin every 32 IP's
     between 0 and 256. The rest of the table is constructed by adding
     one to the current network number, subtracting one from the next
     network number, or subtracting one from the broadcast address.
     
     The lowest permitted host number in every subnet is that network's
     number plus one.
     
     The broadcast address for any subnet is the network number of the
     NEXT network minus one.
     
     The highest allowed host address on a subnet is the broadcast
     number minus one.
     
     So, your fourth subnet on this table would be 62.200.34.96/27.
     Your netmask would be 255.255.255.224 (as I said before), and the
     broadcast for this subnet would be 62.200.34.127.
     
     In other words, all of the hosts from 62.200.34.97 through
     62.200.34.126 would use the 62.200.34.127 address for ARP requests
     and other broadcasts. Those from ...161 to ...190 would use the
     .191 address for their broadcasts. They'd be on the ...160 subnet.
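     That arithmetic is just integer division: drop the last octet down
     to the nearest multiple of the subnet size. A couple of lines of
     shell show it for the .96 subnet (the variable names are mine):

```shell
host=97                                # last octet of 62.200.34.97
size=32                                # a /27 spans 32 addresses
net=$(( host / size * size ))          # round down to the block start
bcast=$(( net + size - 1 ))
echo "network=.$net broadcast=.$bcast hosts=.$(( net + 1 ))-.$(( bcast - 1 ))"
```

     ... printing network=.96, broadcast=.127, hosts .97 through .126,
     as in the example above.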
     
     I'll do another one for comparison:
     
           16  subnetworks of    14 hosts each  (255.255.255.240)/28

           net#         broadcast       Hosts:  low     high
             0             15                     1      14
            16             31                    17      30
            32             47                    33      46
            48             63                    49      62
            64             79                    65      78
            80             95                    81      94
            96            111                    97     110
           112            127                   113     126
           128            143                   129     142
           144            159                   145     158
           160            175                   161     174
           176            191                   177     190
           192            207                   193     206
           208            223                   209     222
           224            239                   225     238
           240            255                   241     254

     ... That table is twice as long (obviously) and the numbers in it
     "look weird." However, it should be obvious where these numbers
     came from. Start with zero and keep adding 16 until we get to 256
     to get the first column. Those are the network numbers. (256
     can't be a network number.) To get the second column we add
     fifteen to the network number (or we subtract one from the next
     network's number -- which is the network number on the next
     line). To get the third column we add one to the network number.
     To get the last column we subtract one from the broadcast number
     (the second column).
     
     I'll include one last table because it's shorter than the others:
     
            4  subnetworks of    62 hosts each  (255.255.255.192)/26

           net#         broadcast       Hosts:  low     high
             0             63                     1      62
            64            127                    65     126
           128            191                   129     190
           192            255                   193     254

     ... I really hope this one comes as no surprise.
     
     From here I would hope that you'd be able to generate the larger
     tables of 32 and 64 subnets if you were insane enough to use those.
     (The only organizations I know of that subnet that way are ISP's).
     I could write a perl script to generate subnet tables like these in
     far less time than this message took to write.
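     A shell version of such a generator might look like this (the
     function name subnet_table is mine --- pass the subnet size in
     addresses: 64 for /26, 32 for /27, 16 for /28, and so on):

```shell
subnet_table() {
    size=$1
    net=0
    while [ "$net" -lt 256 ]; do
        # network, broadcast, lowest host, highest host
        printf '%5d %12d %10d %8d\n' \
            "$net" $(( net + size - 1 )) $(( net + 1 )) $(( net + size - 2 ))
        net=$(( net + size ))
    done
}
subnet_table 32          # reproduces the eight-subnet table above
```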
     
     Now, if you wanted to use VLSM to create one small subnet and one
     larger one, I guess you'd pick a block of addresses suitable for
     any of these subnets --- reserving the whole block (from the
     network# through the broadcast) and only assigning those in the
     range (from the low to high numbers). Those would be a subnet.
     You'd construct your route for that subnet, put one of those
     addresses (the low or the high, usually) onto one of your
     interfaces, and point your route (with its netmask override) to
     that interface. You'd put the rest of your network onto another
     interface with a broader route (one with fewer network bits in
     the netmask) to that.
     
     Example:
     
     Let's put a 14 host subnet on our perimeter and hide the rest of
     our hosts behind our router (with packet filters):
     
     We'll arbitrarily choose the first available 14 host subnet (from
     our table above). This should make it easier to remember which
     hosts are "outside" and which ones are available for assignment
     "inside"
     
     So we assign eth1 an address of .14 (the highest available address
     in this block --- I'm assuming that .1 is already in use by
     another router on that subnet), and we give eth0 (the interface
     to our internal network) an address of .17 (the first available
     address that's after our subnet). Then we set that up like so:
     
                  ifconfig eth1 62.200.34.14 \
                     netmask 255.255.255.240 broadcast 62.200.34.15

                   route add -net 62.200.34.0 \
                        netmask 255.255.255.240   eth1

                   ifconfig eth0 62.200.34.17 \
                      netmask 255.255.255.0 broadcast 62.200.34.255

                   route add -net 62.200.34.0 \
                        netmask 255.255.255.0   eth0

     I haven't actually done VLSM. However I think this would work. One
     important consideration about this would be that every internal
     system would have to know about this first route (the one with the
     .240 netmask).
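     The reason this works is longest-prefix matching: when two routes
     cover the same destination, a router uses the one with the more
     specific (longer) netmask. A toy sketch of mine, considering only
     the last octet and only the two routes from the example above:

```shell
# The /28 route (netmask .240) covers last octets .0-.15 via eth1;
# everything else on the /24 falls through to eth0.
pick_route() {
    if [ $(( $1 & 240 )) -eq 0 ]; then
        echo eth1
    else
        echo eth0
    fi
}
pick_route 14            # a perimeter host
pick_route 200           # an internal host
```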
     
     They could have this as a static route, or it could be propagated
     to them via some routing protocol (I'm not sure if RIP can handle
     that --- I think there was a RIPv2 that could --- while RIP would
     have to propagate this as a list of 14 host routes rather than a
     subnet route --- or some silly thing like that).
     
     The other thing that we'd have to be sure of is that we didn't use
     any of these subnet addresses inside of our domain. That includes
     the network number and the broadcast address. By choosing the first
     subnet for my example I cheated. You'd never try to assign the .0
     address anyway. However, if you'd picked a subnet from somewhere in
     the middle of your address range --- everything should work. It
     would just be more confusing.
     
     Notice that I also skipped .16 (which would be the "next" network
     number if we were to use two of these subnets --- while leaving
     the rest on one segment). This should be unnecessary. However, I'd
     avoid
     assigning it an address just in case I need to add the additional
     small subnet later.
     
     Actually if you wanted to use a sophisticated address allocation
     strategy, to minimize the disruption that would be caused by most
     future subnetting strategies you could limit yourself to assigning
     addresses from the following groups:
     
     1-14, 17-30, 33-46, 49-62, 65-78, 81-94, 97-110, 113-126, 129-142,
     145-158, 161-174, 177-190, 193-206, 209-222, 225-238, 241-254
     
     ... or better yet:
     
     2-13, 18-29, 34-45, 50-61, 66-77, 82-93, 98-109, etc
     
     ... so that you're not issuing the possible network numbers,
     broadcast numbers, and first or last addresses in each of your
     possible subnets.
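     Those lists are easy to generate (a sketch; safe_ranges is my
     name for it): for each possible 16-address block, skip the
     would-be network number, broadcast address, and the first and
     last host.

```shell
safe_ranges() {
    size=$1
    net=0
    while [ "$net" -lt 256 ]; do
        # skip net#, net#+1, broadcast-1 and broadcast in each block
        printf '%d-%d ' $(( net + 2 )) $(( net + size - 3 ))
        net=$(( net + size ))
    done
    echo
}
safe_ranges 16
```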
     
     Using this strategy you could start with a flat topology and later
     break it into anywhere from two to sixteen classic subnets or
     split off VLSM subnets (and add/propagate appropriate routes to
     them).
     
     As I've said, this sort of obtuse allocation strategy isn't
     necessary for most of us these days because we can use private net
     (RFC1918) addresses for our internal networks.
     
     However, if you're going to use direct routable addresses in your
     domain --- following this allocation schedule might actually help
     (and can't really hurt if you simply prepare the list ahead of
     time).
     
     It's possible to define some netmasks that aren't on even octet
     boundaries. For example the RFC1918 group of Class B addresses is
     172.16.*.* through 172.31.*.*. That can be described with the
     address/mask 172.16.0.0/12 (which you could then subnet in various
     ways).
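     To see why the /12 mask covers exactly 172.16.*.* through
     172.31.*.*: the mask keeps the first octet and the top four bits
     of the second, so any second octet from 16 (00010000) through 31
     (00011111) masks down to 16. A sketch (the function name is
     mine):

```shell
in_172_16_12() {
    first=${1%%.*}
    rest=${1#*.}
    second=${rest%%.*}
    # 240 is 11110000 --- the second octet's share of a /12 mask
    [ "$first" -eq 172 ] && [ $(( second & 240 )) -eq 16 ]
}
in_172_16_12 172.31.5.9 && echo inside
in_172_16_12 172.32.0.1 || echo outside
```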
     
     Most sane people reduce that ugliness to a "known" problem for
     which we've already described a solution. They treat these as a
     large group of Class C addresses and do all their network design
     based on those. The RFC1918 addresses 192.168.x.* (for x from 0
     to 255) are usually described as 256 contiguous Class C address
     blocks. However, there is nothing preventing us from using this
     as a single 16-bit network (192.168.0.0/16).
     
     The only case where I've used these notations is when I'm writing a
     set of packet filters. I customarily add the following four address
     masks to the source deny lists on perimeter routers:
     
     10.0.0.0/8 127.0.0.0/8 172.16.0.0/12 192.168.0.0/16
     
     These are denied in both directions.
     
     The outbound denials are "anti-leakage." We shouldn't be sending
     any packets onto the Internet which claim to be from these IP
     addresses. They are "non-routable" on the open Internet. So, any
     that "try" to get out are either a mistake (they were supposed to
     go through masquerading or network address translations --- NAT),
     or they are hostile actions possibly by users on our networks or by
     some subverted services or hosts (something's been "tricked" into
     it).
     
     The inbound denials are part of an anti-spoofing screen. No legal
     packet should get to us from any of these addresses (there should
     be no legal route back to any such host over the Internet).
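     The test a packet filter performs against each of those four
     blocks can be sketched as: mask the candidate address down and
     compare it with the network number. This isn't a real filtering
     tool --- just an illustration of the membership test (ip2int and
     is_bogon are my names for it):

```shell
# Convert a dotted quad to a 32-bit integer.
ip2int() (
    IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
)

# True if the address falls inside any of the four denied blocks.
is_bogon() {
    ip=$(ip2int "$1")
    for entry in 10.0.0.0/8 127.0.0.0/8 172.16.0.0/12 192.168.0.0/16; do
        net=$(ip2int "${entry%/*}")
        bits=${entry#*/}
        mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
        if [ $(( ip & mask )) -eq "$net" ]; then
            return 0             # matches a deny rule
        fi
    done
    return 1
}

is_bogon 192.168.4.7  && echo "deny 192.168.4.7"
is_bogon 198.51.100.7 || echo "permit 198.51.100.7"
```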
     
     The 127.* filtering is also interesting. If I actually allowed
     packets through my router that claimed to be from "localhost" I
     might find that some services on some hosts could be exploited
     using it.
     
     I've heard of such packets being referred to as "martians."
     However, I'm not sure if the term is supposed to apply just to
     packets that claim a 127.* source address or to any of these "bad
     boys."
     
     To complete our anti-spoofing we also want to deny any inbound
     packets that claim to be from any of our real IP addresses. Thus
     you'd want to add a rule to deny 62.200.34.0/24. All of the hosts
     which are legitimately assigned any of those IP addresses should
     already be inside your network perimeter --- none should be
     traversing the inbound interface on any of your border routers. I
     might add a rule to block: 214.185.47.32/27 if I was given the
     second 30 host subnet on the 214.185.47.0 network (for example).
     
     Anti-spoofing gives us considerable protection from a variety of
     exploits. It really doesn't leave us "secure" --- IP ADDRESSES AND
     DNS HOSTNAMES ARE NOT AUTHENTICATION CREDENTIALS! However it limits
     the exploits that can be mounted from outside of our network.
     That's why you should ideally have sets of anti-spoofing packet
     filters at your border (between the Internet and your perimeter
     network) and at your interior router (between your internal and
     your perimeter networks).
     
     In some organizations you may also want to have anti-spoofing
     between your internal client networks and your "sanctum" of
     servers.
     
     In addition to the anti-spoofing rules it's a good idea to add a
     couple of rules to limit some known-to-be-bogus destinations (Thus
     far we've only been discussing packet filtering policies based on
     source addresses).
     
     I suggest that any of your local "real" IP addresses that translate
     into network or broadcast numbers for your network topology should
     be forbidden as destinations. These extra rules may seem
     unnecessary --- but there have been "denial of service" exploits
     that used these sorts of addresses to create packet storms and
     disrupt your networks. (A few broadcast packets that get in can
     cause responses from all or most of your active hosts).
     
     So you should at least add $YOURNET.0 and $YOURNET.255 to your
     denied destinations list (where these are the network number and
     broadcast for your block of assigned addresses).
     
     No one outside your domain has any business addressing packets to
     your whole network. If you are subnetted in other ways --- you'd
     face the possibility that some attacker might try sending to
     $YOURSUBNET.31, etc. However, this is probably just not such a big
     problem. If you use IP masquerading and/or proxying for all or most
     of your client hosts (as I recommended in my last post) you won't
     see any of that anyway. Meanwhile, how much do you need to subnet
     your banks of servers? (In most cases, not much.)
     
   (?) Thanx in advance.
   Pavel Piankov 
   
     (!) Gosh I hope that helps. I also hope I haven't bored the rest of
     the list too much with this. I simply don't know of a way to
     describe subnetting and routing more concisely than this. If you
     really understand what I've written in these two messages --- you
     can probably get a job as a junior netadmin.
            ____________________________________________________
   
(?) No STREAMS Error while Installing Netware for Linux

   From Sean McMurray on Tue, 17 Nov 1998 
   
   I'm trying to install Caldera Netware for Linux on Redhat 5.1.
   Following the instructions from
   ftp://ftp.caldea.com/pub/netware/INSTALL.redhat, I get to Step 5 under
   "Downloading the Files." 
   
     (!) Well, I haven't played with this yet, since I don't have any
     Netware client systems around here. (Maybe one of these days I'll
     fire up one of my old XT's to use for clients).
     
   (?) When I type in rpm -i kernel-2_0_35-1nw_i386.rpm, I get the
   following error: 
   
   ln: boot/vmlinuz-2.0.35-1nw-streams: No such file or directory 
   
   Can you tell me why? More importantly, can you tell me how to fix it? 
   
     (!) Well, the Netware for Linux requires a kernel with STREAMS and
     IPX patches built into it.
     
     STREAMS is an alternative to BSD sockets. It's a programming model
     for communications within a Unix or other kernel --- between the
     applications interfaces and the devices. The Linux kernel core team
     has soundly rejected suggestions that Linux adopt a STREAMS
     networking model for its native internal interfaces, and we won't
     go
     into their reasoning here. (I'm inclined to agree with them on this
     issue in any event.)
     
     So, this error suggests that the 'ln' command (creates hard and
     symbolic links) can't find the '/boot/vmlinuz...' files to which it
     refers.
     
     One trick to try is to view the contents of the rpm file using 'mc'
     (Midnight Commander). Just bring up 'mc', select the RPM file with
     your cursor keys and highlight bar, and hit [Enter]. That will
     treat the RPM file as a "virtual directory" and allow you to view
     and manually extract the contents. Look in RPM:/boot for the kernel
     file --- also look for the README files.
     
     I've occasionally manually extracted the files from an RPM and just
     put them in place myself. Then I read through any scripts and
     docs contained therein to see what should have been done by the
     rpm
     system. (Usually this sort of dodge is only necessary when doing
     piecemeal upgrades to the rpm package itself).
     
     There are other times when I have to resort to 'rpm -i --force
     --nodeps ...' to get things to work.
     
     Note that this kernel may not support your hardware configuration
     (that's one reason why many Linux users build custom kernels). So
     you may have to find and install the kernel source patches and
     build your own --- or at least build a set of modules that match
     that version.
     
     Probably your best bet would be to subscribe to the caldera-netware
     mailing list. Look to Liszt to help find specific mailing lists and
     newsgroups:
     
    Liszt: caldera-netware
    http://www.liszt.com/cgi-bin\
        /start.lcgi?list=caldera-netware&server=majordomo@rim.caldera.com
                        ____________________________
   
(?) No STREAMS Error while Installing Netware for Linux

   From Sean McMurray on Wed, 18 Nov 1998 
   
   Jim Dennis wrote: 
   
   >When I type in rpm -i kernel-2_0_35-1nw_i386.rpm, I get the
   >following error:
   >ln: boot/vmlinuz-2.0.35-1nw-streams: No such file or directory
   >Can you tell me why? More importantly, can you tell me how to fix it?
   
   Well, the Netware for Linux requires a kernel with STREAMS and IPX
   patches built into it. 
   
    Shouldn't it be included in Caldera's RPMs then? It seems that the
    first thing their install does is try to build a new kernel. Also,
    does the fact that ncpfs is built in indicate that the STREAMS and
    IPX patches already exist - the IPX patches, anyway?<clipped> 
   
      (!) It is. That's what that error is saying. However it seems
      that the /boot directory isn't there (you might try 'mkdir' to
      create it) and, for some reason, your 'rpm' command isn't or
      can't make it. (If you do have a /boot directory --- maybe
      you've used 'chattr +i' to make it immutable. Maybe you have a
      file named /boot so that a directory can't be made by that name.
      Who knows?).
     
   (?) Midnight Commander won't open the RPMs on my system, but I
   executed rpm -qpl kernel-2_0_35-1nw_i386.rpm > dump.txt to get a
   listing. The /boot files are: /boot/WHATSIN-2.0.35-1nw
   /boot/vmlinuz-2.0.35-1nw 
   
    The only file with the word stream in the title is
    /lib/modules/2.0.35-1nw/misc/streams.o 
   
     (!) ... that would be the STREAMS loadable kernel module. The other
     support and IPX patches are compiled into that kernel, and the FAQ
     tells you how to build a kernel to match the shipping one (close
     enough to load the requisite modules and route/utilize the IPX
     protocols anyway).
     
   (?) There are other times when I have to resort to 'rpm -i --force
   --nodeps ...' to get things to work. 
   
   I tried to rpm -e kernel-2_0_35-1nw_i386.rpm, but rpm says that it
   isn't installed. 
   
     (!) That tries to "erase" (uninstall) that package --- except that
     you have to use the package's name not the package *file's* name.
     kernel-2.0.35-1nw is probably the package name. The filename is
     independent of that, though it is conventionally similar.
     
     You can use the 'rpm -qpi' command to extract information about the
     RPM file including the package name.
     
      In general the -i and -p options to 'rpm' refer to files while
      the others refer to installed "packages."
     
     If you issued the command 'rpm -ql foo-1.2.3-bang' RPM would list
     all of the files that are "owned by" the foo-1.2.3-bang package. If
     you issue the command 'rpm -qpl foo-1.2.3-bang.i386.rpm' then the
      command would list all of the files in that package file. If (by
     some chance) you had a different implementation of the same package
     these two lists might differ.
     
     (That's a minor problem with the RPM system --- there's no central
     naming authority on package naming and versioning so you can have
     differences between, for example, the S.u.S.E. and Red Hat
     packages, with some differences in dependencies --- etc. Actually
     it's a rather major pain in the patootie when you're a S.u.S.E.
     user and you keep getting packages that are contributed to the Red
     Hat site. However, it's still usually easier than building them
      from tarballs, and the "right" answer for me is probably to
      learn enough about building my own RPMs that I can grab the
      source RPM packages and modify them to fit. The "right" answer
      for Red Hat and S.u.S.E. and Caldera is to make their packages
      as compatible with one another as possible --- particularly with
      regards to dependencies and provision identification).
     
   (?) So I tried to rpm -e kernel-2_0_35-1nw_i386.rpm again, but rpm
   says it's already installed 
   
     (!) That sounds wrong. Are you sure you typed exactly that?
     
   (?) I don't know rpm (or Linux) well enough to trust myself not to
   hose my kernel. I guess it's not that big of a deal. I can just
   re-install RH5.1 from scratch. 
   
      (!) After awhile, building and installing new kernels will seem
      as routine as editing an old DOS CONFIG.SYS file (though you
      probably won't do it anywhere near as often).
     
   (?) Probably your best bet would be to subscribe to the
   caldera-netware mailing list. 
   
   I'm subscribed, but impatient. Thank you for your help. 
   
     (!) I'd manually extract the kernel file from that RPM file, put it
     in the /boot/ directory, edit your /etc/lilo.conf file, run the
     /sbin/lilo command and try to reboot. Search through the old back
     issues of LG to read many messages about how LILO works -- or just
     read the HOWTO at:
     
     http://www.ssc.com/linux/LDP/HOWTO/mini/LILO.html
     
     (... and other LDP mirrors all over).
     
     Naturally you'll want to leave an entry for your existing (working)
     kernel so that you can reboot into that if this Caldera supplied
     kernel is inappropriate for your system. You'll also want to
      prepare a boot/root (rescue) diskette. Although one (image) comes
      with each Red Hat distribution, I personally prefer Tom Oehser's
      "tomsrtbt" (a full mini distribution on a single floppy --- with a
     suite of Unix tools sufficient to do most networking and rescue
     operations). You can find that at:
     
     http://www.toms.net/rb
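
      The LILO edit described above amounts to adding one new 'image='
      stanza while keeping the old one. A minimal /etc/lilo.conf sketch
      (the disk device, partition, kernel file names, and labels here
      are assumptions for illustration):

```
# /etc/lilo.conf (sketch -- device and kernel names are hypothetical)
boot=/dev/hda                    # install the loader in the MBR
prompt                           # offer a choice at boot time
timeout=50                       # ...defaulting after 5 seconds

image=/boot/vmlinuz-2.0.35-1nw   # the Caldera/Netware kernel
        label=netware
        root=/dev/hda1
        read-only

image=/boot/vmlinuz              # the existing, known-good kernel
        label=linux
        root=/dev/hda1
        read-only
```

      Remember to re-run /sbin/lilo after every change; LILO records
      block locations at install time, so editing the file alone does
      nothing.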
            ____________________________________________________
   
(?) More than 8 loopfs Mounts?

   From Philippe Thibault on Fri, 20 Nov 1998 
   
   I've set up an image easily enough and mounted it with the iso9660
   file system and assigned it to one of my loop devices. It works
   fine. What I was wondering was, can I add more than the eight loop
   devices in my dev directory, and if so, how? What I'm trying to do
   is share these CD images through SMB services to a group of Win 95
   machines. Is what I'm trying to do feasible, or even possible? 
   
     (!) Good question. You probably need to patch the kernel in
     addition to making the additional block device nodes. So my first
     stab is, look in:
     
     /usr/src/linux/drivers/block/loop.c
     
     There I find a #define around line 50 that looks like:
     
                #define MAX_LOOP 8

     .... (lucky guess, with filename completion to help).
     
     So, the obvious first experiment is to bump that up, recompile,
     make some additional loop* nodes under the /dev/ directory and try
     to use them.
     
     To make the additional nodes just use:
     
                for i in 8 9 10 11 12 13 14 15; do
                        mknod  /dev/loop$i b 7 $i; done

     I don't know if there are any interdependencies between the
     MAX_LOOP limit and any other kernel structures or variables.
     However, it's fairly unlikely (Ted T'so, the author of 'loop.c'
     hopefully would have commented on such a thing). It's easier to do
     the experiment than to fuss over the possibility.
     
     In any event I doubt you'd want to push that value much beyond 16
     or 32 (I don't know what the 'mount' maximums are --- and I don't
     feel like digging those up too). However, doing a test with that
     set to 60 or 100 is still a pretty low-risk and inexpensive affair
     (on a non-production server, or over a weekend when you're sure you
     have a good backup and plenty of time).
     
     So, try that and let us know how it goes. (Ain't open source (tm)
     great!)
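      
      If the experiment works, each image can be attached and mounted
      persistently. Assuming your version of 'mount' supports the
      'loop=' option, a sketch of /etc/fstab entries (the image paths,
      mount points, and loop assignments are all hypothetical):

```
# /etc/fstab additions (sketch): one CD image per loop device
/archive/images/devtools.iso  /cdimg/devtools  iso9660  ro,loop=/dev/loop8  0 0
/archive/images/clipart.iso   /cdimg/clipart   iso9660  ro,loop=/dev/loop9  0 0
```

      The mount points can then be exported to the Win 95 machines as
      ordinary Samba shares.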
     
      Of course you might find that a couple of SCSI controllers and
      about 15 or 30 SCSI CD-ROM drives (mostly in external SCSI cases)
      could be built for about what you'd be spending on the 16 Gig of
      disk space that you're devoting to this. (Especially if you can
      find a cache of old 2X CD drives for sale somewhere).
            ____________________________________________________
   
(?) EQL Serial Line "Load Balancing"

   From Jim Kjorlaug on Mon, 30 Nov 1998 
   
   (?) I live in an area where ISDN services have been promised but not
   delivered. I had read a HOWTO for EQL but can no longer find the
   documentation on this method of ganging two modems together. Can you
   please let me know where I can find the source for this and the
   HOWTO.
   
   Thanks for any help you can offer.
   Jim Kjorlaug 
   
     (!) The README.eql (EQL Driver: Serial IP Load Balancing HOWTO) by
     Simon "Guru Aleph-Null" Janes (simon@ncm.com) doesn't seem to be in
     the LDP HOWTO Index. However it is included with the Linux kernel
     sources under
     
     .../drivers/net/README.eql
     
     ... so that's probably your best bet. Naturally the sources to the
     driver are also included therein. This README doesn't appear to
     have been updated since 1995.
     
     Note that this requires support from your ISP. In other words, to
      use EQL to effectively double your bandwidth, you need support for
     the same version of EQL load balancing at each end of the
     connection. Most ISP's are likely to be somewhat averse to this
     prospect (or to charge extra) since you'll be taking up two of
     their modems while connected over EQL.
     
     Another thing to consider is the difference between latency and
     bandwidth. Bandwidth refers to the amount of data that can be
     transmitted over a communications channel in a given amount of
     time. Latency refers to the propagation delay --- the amount of
     time before the first bits get to one end or the other of the
     channel.
     
     EQL can provide more bandwidth. However modem latency is pretty
     high and nothing can improve that within the constraints of the
     current standards.
            ____________________________________________________
   
(?) Where to Report Bugs and Send Patches

   From Elwood C. Downey on Mon, 30 Nov 1998 
   
   (?) Hello, 
   
   I have found (and believe fixed) a bug in gcc libc, version 2.7.2.3
   related to handling of daylight savings time and timezones. I would
   like to know exactly to whom I should send the report so it gets into
   the correct hands asap. Part of my confusion is gcc vs the new egcs
   (or whatever the new one is). I happen to be running Red Hat 5.1 if
   that matters. 
   
   Thanks,
   Elwood Downey
   President/Chief Programmer
   Clear Sky Institute, Inc. 
   
     (!) One of the "dirty little secrets" of FSF/GNU documentation is
     that "they" have an official bias against 'man' pages. If you look
     at the 'gcc' man pages you'll find that they refer you to the
     "Info" (or "Texinfo" pages) and list the man pages as
     non-authoritative, deprecated, unmaintained etc.
     
     'Info' is a hypertext documentation system which is nothing like
     HTML. The easiest way to access them for this case would be to
     issue the command:
     
     info gcc
     
     There we'll find a node/link labeled "Bugs::" and following that
     will provide us with some guidelines for reporting problems. I'll
     refer you to those pages so that you'll get the full details rather
     than just an @ddress.
     
     Since 2.8.1 is the current version from the Free Software
     Foundation (http://www.gnu.org) you might encounter some resistance
     to accepting patches for 2.7.x at this point. Their maintainers may
     refer you to the more recent version. You might want to try the
     Debian package, which might include patches that update the GNU
     version.
     
     According to the Debian site (http://www.debian.org) the maintainer
     for the Debian GCC package is Galen Hazelwood. You can use 'alien'
      to convert among RPM (Red Hat et al), Debian, SLP (Stampede
      Linux Packages) and Slackware package formats.
     
     Note that egcs is a spinoff of the GCC development.
            ____________________________________________________
   
(?) How to "get into" a Linux system from a Microsoft client

   From WRB on Mon, 30 Nov 1998 
   
   (?) I know you don't like questions concerning Brand X (w95 and nt40),
   however, I am a NEWBEEEEE to RedHat Linux (5.1) and I don't know where
   to go for this answer. Over my internal network, when I try to get
   into the RedHat (5.1) machine using Brand X (nt40 SP4), I get the
   message "\\computer4 is not accessible" "the account is not authorized
   to log in from this station" I don't have a problem with the other
   Brand X product (W95 OSR2.1), it goes right in. I have no problems
   with FTP or TELNET with either of the Brand X machines. Without
   getting tooooo condescending, is this a Brand X problem or is it a
   RedHat (5.1) issue? 
   
   Thanks for your help
   Ron Botzen 
   
     (!) The big problem here is with the phrase "get into."
     
     By this you seem to mean "share files on my Red Hat (Linux) system
      from one of my MS Windows clients" or "make my Linux system a file
     server to my MS Windows clients."
     
      My clue that this is your intent is the syntax: "\\computer4" is
      an SMB UNC (so-called "universal naming convention") designation,
      which is used for file and print services over the SMB (server
      message block) protocols.
     
     Samba is the Unix/Linux package that provides SMB services to your
     MS Windows, OS/2, and similar clients. Also Linux supports an
     'smbfs' module and 'smbmount' command to allow it to act as a
     client in an SMB network.
     
     So, install the Samba package from your RH CD set, and read the
     docs therefrom. For the latest information on Samba go to:
     
     http://samba.anu.edu.au/samba/samba.html
     
     (or one of its mirrors).
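      
      Once Samba is installed, a minimal smb.conf share definition
      might look like the sketch below (the workgroup, share name, and
      path are hypothetical; see the Samba documentation for the many
      other options):

```
# smb.conf (sketch)
[global]
   workgroup = MYGROUP
   ; recent NT service packs send encrypted passwords by default:
   encrypt passwords = yes

[public]
   path = /home/public
   read only = no
```

      Given your NT SP4 error message, the password encryption setting
      is worth checking first; see the Samba documentation on password
      encryption for the details.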
            ____________________________________________________
   
(?) Where to Put New and Supplemental Packages

   From Lew Pitcher on Tue, 01 Dec 1998 
   
   (?) Hello from the Great White North. 
   
   A few months ago, I installed the Slackware 3.3 distribution on a
   second-hand 486 system, and upgraded the kernel to the (then current)
   2.0.35 level. 
   
   I've been slowly accumulating packages (like Smail and iBCS) that I'd
   like to put up on this machine, and have a question about the
   placement of package installs. Given that I've acquired a system-level
   package with source code, where in the file system should I install
   it? 
   
   From inspection, it looks like I've got several alternatives...
   /usr/src looks like the obvious place to start, but /usr/local also
   looks good. Do the Linux FileSystem Standards specify a place to put
   packages? If not, do you have a recommendation in this regard? 
   
      (!) The Linux FHS (Filesystem Hierarchy Standard --- the
      descendant of the FSSTND --- the filesystem standard) does have
      guidelines for system administrators and for distribution
      developers and maintainers.
     
     I would say that the latter groups (those who produce and maintain
     general purpose distributions and packages) should be strongly
     encouraged (nigh on required) to follow these conventions.
     Sysadmins should be encouraged to follow them to the degree that
     makes sense for their site. Home users can do whatever the heck
     they like.
     
     I suggest '/usr/local/' for normal freeware packages that I install
     from tarball and compile myself. For commercial packages that are
     distributed as binaries I recommend '/opt' (which is, in my case, a
     link to '/usr/local/opt').
     
     One of my continuing gripes about Red Hat and Debian is that there
     is no easy way for me to "partition" my packages such that all
     packages installed or updated after the initial OS/program load
     (IPL) default to installation on '/usr/local'. This, and the fact
     that I sometimes have a perfectly legitimate reason for
     concurrently maintaining two or more versions of a given package
     are my main gripes about those package management tools.
     
     The canonical home of the FHS seems to be:
     
   Filesystem Hierarchy Standard
          http://www.pathname.com/fhs
          
   (?) Thanks in advance for the advice. 
   
     (!) You're welcome.
     
   (?) Lew Pitcher
   Joat-in-training
   If everyone has an angle, why are most of them so obtuse? 
   
     (!) Shouldn't that be JOAT (jack of all trades)?
            ____________________________________________________
   
(?) Book: Linux Systems Administration

   From Jim Buchanan on Tue, 01 Dec 1998 
   
   (?) I hope to finish my book real soon now. 
   
   Let us know when it's done. I'll surely order a copy. 
   
     (!) I'll do my best to promote it without getting crass.
     
   (?) Aeleen Frisch's Essential System Administration
   Unix System Administrator's Handbook by Evi Nemeth et al 
   
   Some real competition. I certainly wish you well, such a book would be
   a valuable addition to the many other Linux books available. 
   
     (!) I'm focusing a bit more on "soft skills" like requirements
     analysis, recovery and capacity planning, the view that security
     considerations permeate all aspects of professional systems
     administration, and the design of whole networks rather than
     isolated hosts.
     
     These are elements that seem to be missing from the existing
     literature.
     
   (?) Macmillan Computer Publishing: 
   
   The Macmillan folks are really nice people. They host our local LUG,
   INLUC (Indiana Linux Users Consortium, http://inluc.tctc.com) 
   
     (!) My editor mentioned something along those lines.
     
   (?) If you ever make a trip to the Indiana Macmillan offices, maybe we
   can arrange the date so that you can come to one of our meetings,
   which are usually held on the third Wednesday of the month. 
   
   Jim Buchanan 
   
     (!) If I can afford it I'll do a full tour.
     
     Thanks for your supportive comments. Now all I have to do is get
     the thing done!
            ____________________________________________________
   
(?) 'ls' Doesn't work for FTP Site

   From Reuel Q. Salamatin on Tue, 01 Dec 1998 
   
   (?) Mr. James T. Dennis, 
   
   I am so happy to have learned that you are available to answer Linux
   questions. I have tried emailing persons I found from how-to files
   and documentation about ftp, but as of yet have gotten no answers. 
   
   Here's my problem. Our ftp site doesn't seem to support the ls
   command. 
   
   Usually, upon log-in, or with a browser it should display directory
   listings. Now it worked just like that before. But now, it doesn't. I
   don't actually remember how it came about to be like that. 
   
   I have followed instructions listed on the ftpd man page, about making
   a copy of the ls command on the bin directory of ftp home. I did just
   that but still no directory listing output. I was wondering what else
   could have gone wrong. 
   
   Thank you even now in anticipation of your response. 
   
   Sincerely yours,
   Mr. Roland Reuel Q. Salamatin 
   
     (!) Assuming that you're using one of the traditional FTP servers
     (daemons) such as the BSD derived one, or WU-FTPD (which has been
     the default on most Linux distributions for several years), this
     probably relates to one of three problems. All have to do with the
     'chroot' jail in which anonymous FTP (and the "guestgroups" from
     WU-FTP) operate.
     
     The idea here is that we've tried to minimize the risks to your
     system that are associated with having untrusted parties (anonymous
     and guest FTP users) accessing your directories. So we set up a
      pseudo "root" directory and issue the 'chroot()' system call to
     "lock the process into a directory."
     
      One problem with this approach is that most Unix/Linux programs
      need access to files like '/etc/passwd' and '/etc/group' (to map
      the numeric ownership codes that are stored in the inodes of
      files and directories to the associated names and groups). Also
      most modern programs (dynamically linked ELF binaries) require
      access to '/dev/zero' (a pseudo-device) for fairly obscure
      reasons that amount to "because that's the way they work."
     
     So we need to build a skeletal copy/shadow of the system's
     directory structure to support this. That must contain at least the
     following files:
     
     * 'ls' binary in the [chroot]/usr/bin
     * Fake 'passwd' and 'group' files for [chroot]/etc
     * A copy of (or hard link to) /dev/zero and /dev/null under
       [chroot]/dev/
     * (Possibly) copies of any shared libraries to which your copy of
       'ls' is linked.
       
     (You can compile a statically linked 'ls' or you can use the 'ldd'
     command to get a list of the required shared libraries).
     
     Another option is to replace the BSD or WU ftp daemon with Mike
     Gleason's 'ncftpd', or with ProFTPD which both have built-in static
     'ls' support.
     
     'ncftpd' is not free. It is shareware and can be registered for
     about $200 for a high volume server (more than 50 concurrent users)
      or ~$40 for a smaller server. Mike Gleason continues to support
      and release the best FTP client for free. There is also a free
      "personal use" option (up to 3 concurrent users). You can find
      out more at:
     
     http://www.ncftp.com
     
     Of the FTP daemons that I've tried, 'ncftpd' was the easiest to set
     up and definitely the easiest to configure. It also supports
     "virtual FTP hosting" (where one host appears to be several
     different FTP servers, each with different directory structures and
     separate user lists). My only complaint was that this server
     doesn't seem to like being dynamically loaded from 'inetd' (unlike
     the normal ftp daemons --- but more like 'sendmail' and most web
     servers).
     
      ProFTPD is under the GPL. I don't know the author's name, and it
      may be a whole team that's worked on it.
     
     http://www.proftpd.org
     
     I have yet to try this one. However it looks very ambitious --- and
     might appeal to Apache webmasters in particular. The configuration
     files and directives are intentionally set to match or resemble
     Apache configuration options wherever possible.
     
     From what I've read the original author started working on a
     security audit and patch set to WU-FTPD and gave up. He then wrote
     the whole thing from scratch.
     
     So, I hope that helps. Naturally you could just fuss with the
     existing ftp daemon and "get it to work." Alternatively either of
     these replacements might be much better for your needs --- and
     considerably easier, as well.
     
     If not then there are a few other choices:
     
   BeroFTPD:
          ftp://ftp.aachen.linux.de/pub/BeroFTPD
          This is a WU-FTPD derivative.
          
   Troll Tech FTP Daemon:
          http://www.troll.no/freebies/ftpd.html
          Troll Tech is the publisher of the Qt libraries on which KDE is
          built.
          
   anonftpd
          ftp://koobera.math.uic.edu/www/anonftpd.html
           by D. J. Bernstein (author of qmail) --- very lightweight FTP
          daemon, purely for read-only anonymous access. (Doesn't support
          normal user or "guest" accounts). Main focus is on security and
          low memory footprint.
          
     ... and I'm sure we could find many others.
            ____________________________________________________
   
(?) An Anthropologist Asks About the Linux "Process"

   From donald.braman on Mon, 23 Nov 1998 
   
   (?) I don't know if you cover non-technical questions, but here
   goes... 
   
     (!) Then you haven't read enough of the back issues.
     
      I babble about all sorts of things and have even been known to
     respond to questions that have NOTHING to do with Linux. (Usually
     those responses are less than cordial --- but hey, you can have
     answers that are good, courteous, quick, and/or free (pick any
     three)).
     
   (?) I'm interested in finding a summary of the process by which LINUX
   is maintained and updated. 
   
   Where is Linus in the LINUX community and loose organizational
    structure, and how does he decide what to do with all of the stuff
    he gets? (I always see "Linus just released kernel 2.xxx" messages.) 
   
     (!) Linus "owns" the kernel. He primarily focuses his work on the
     developmental kernels (2.1.x right now --- will probably be 2.3.x
     within a month or so). The stable kernels (2.0 currently) are
     largely maintained by Alan Cox, though they are still sent to Linus
     for final approval and official release.
     
     When Linus decides that the work is complete on the 2.1 series
     he'll declare it to be "2.2" --- then he'll start a 2.3 series (and
     there will be a quick flood of patches posted to that, since we've
     been in "feature freeze" for a couple of months and there are
     people who have been privately working on some new features in
      anticipation of the next development cycle).
     
     I've heard that Linus plans to turn the maintenance of 2.2
     immediately over to Alan and Stephen Tweedie. That will allow him
     to focus on the next version exclusively.
     
     Although there has been some effort to minimize the number of bugs
     that will be in the 2.2 release --- it is almost certain that we'll
     have at least a few 2.2.x releases within the first few months.
     Many of these will account for bugs that only affect a small subset
     of the available hardware configurations (one user in 10,000 or
     less). For the 1.0 series we had about nine releases to the stable
      kernel set. For the 1.2 series we had about 13 or so. In 2.0 we
      have had 36 (the versioning skipped from 1.3 to 2.x due to major
      structural changes in the kernel). Don't just graph that to
      project an estimate --- unless you also scale the graph over the
      time frames involved. Even then you'd find some anomalies --- the
      differences between 1.2 and 2.0 are as great as the version
      numbers suggest.
     
     As for how Linus decides what to incorporate and what to ignore or
     kick back ... that's one of the mysteries to which mere acolytes
      and initiates such as myself are not privy.
     
     Linus is swamped. He gets direct e-mailed patches from countless
      programmers and programming students around the world. (The savvy
      ones actually read the FAQ at http://www.tux.org/lkml before
      trying to contribute to the Linux kernel).
     
     See below for more on that.
     
   (?) What if, no offense intended, Linus died tomorrow? 
   
      (!) This class of events has been discussed (usually in less
      morbid terms --- using the term "retiring" rather than references
      to "expiring").
     
     This would be a great loss to the Linux community.
     
      However, the sources are out there under a license that ensures
      that they will remain freely available and "alive" (able and
      likely to be upgraded, ported to new platforms, and generally
      improved upon).
     
      The great advantage that Linux has had over FreeBSD (and its
      brethren) has been Linus. He focuses on the kernel, and on code
      and quality, and almost completely eschews politics. He lets
      others deal with "user space" issues (libraries, compilers, and
      all of the
     suites of utilities and applications that go into any Linux
     distribution).
     
      We've benefitted immensely from our "benign dictator" model --- we
     accepted Linus as "the Linux kernel God" (we hold none before him
     and we're monotheistic in this regard).
     
     When Linus eventually retires, moves on to other conquests, or
     whatever (may it happen long after my own demise), then the hope
     among the Linux kernel developers is that we'll be able to adopt,
     appoint, agree upon a successor --- a new benign dictator. That
     might be someone like Alan Cox, or Stephen Tweedie, or it might be
      just about anyone whose name appears regularly enough on the
     Linux-kernel mailing list (I don't know enough to say).
     
      Linus has jokingly referred to his daughters as Linus 2.0 and
      3.0 (we could make it a hereditary oligarchy, if they take the
      interest and acquire the proficiency). Check back in with us in
      about 15 years on that.
     
   (?) Further, I'd like to find a place where (tentative) plans for
   future releases are discussed, and even a vague timeline is given. In
   short, is there a project management site/organization that contains a
   summary of (debates about) where LINUX is going and how it's going to
   get there? 
   
     (!) Here's the real fun question. Anyone who's seriously involved
     in Linux kernel development is subscribed to the Linux-kernel
     mailing list hosted by Rutgers University (Read the FAQ listed
     above for exact instructions on how to subscribe, where to find
     archives and how to search through them).
     
     linux-kernel is a very busy mailing list. I've received well over
     nine thousand pieces of e-mail on that list in just the last few
     months. It gets close to a hundred items per day. (The only
     Internet mailing list that I've been on that seemed busier was the
     old cypherpunks list when it was hosted at Toad Hall --- and maybe
     the Firewalls list that was started by Brent Chapman at Great
     Circle Associates).
     
     With that volume of traffic, you can be sure that many busy
     developers (such as Linus) don't get to read everything. (Linus has
     a family life and a full-time job --- mostly in addition to his
     kernel work; although Transmeta apparently does provide him with
     some work time to devote to Linux --- as per his contract with
     them).
     
     Of course, the best way for you to learn about the social dynamics
     of the Linux kernel developers is to immerse yourself in it for
     awhile. Start with some research (read the FAQ, and a month or
     two's worth of the archives), then subscribe to the list and lurk
     (read and don't post) for a month.
     
     If you're doing research on us --- please let us know where we can
      read any papers that you put together. We have one participant
      (esr, or Eric S. Raymond) who has referred to himself as the
      Linux community's "anthropologist" --- but it might be
      interesting to have an alternative set of opinions from a more
      "objective" source.
     
     (Eric has been a hacker since before Linux was developed. He helped
     to compile and publish the "New Hacker's Dictionary" --- which is
     also a pretty good source of background if you want to understand
     the Linux community as a subculture. Take it with a grain of salt,
     of course --- but read it anyway).
     
   (?) Donald Braman
   Yale Anthropology 
            ____________________________________________________
   
(?) Looking for a Hardware Vendor: In all the Wrong Places

   From Scott Tubbesing on Thu, 03 Dec 1998 
   
   (?) Mr. Dennis, 
   
   My name is Scott Tubbesing and I am just starting to support Linux on
   my new job. I read "The Answer Guy" in The Linux Gazette for the first
   time. 
   
   My employer is in the process of purchasing a Linux server. You
   mentioned AV Research as a possible and recommended vendor. I couldn't
   find a WEB page on this company and wonder how to contact them.
   Appreciate your article and your assistance. 
   
   Have a good day. 
   
   Communication is the secret to success...Pass it on. 
   
   Scott Tubbesing 
   
     (!) That's VA Research (initials VAR, as in value-added reseller).
     They're at http://www.varesearch.com
     
     You can find a whole list of other Linux friendly hardware vendors
     at Linux International:
     
     http://www.linux.org/hardware
     
     Hope that helps.
            ____________________________________________________
   
(?) Letting Those Transfers Run Unattended

   From Terry Singleton on Wed, 02 Dec 1998 
   
   (?) While at home, dialed into work with my 56Kb modem, I sometimes
   run across very large interesting looking applications. I often wish
   that there were a way for me to telnet to my Linux box at work and
   start the download. When I got to work the next day the download would
   have hopefully completed. 
   
   Question: Is there a way for me to start my download remotely,
   disconnect from the Linux server and have the server continue to
   download the file(s)?? 
   
     (!) Yes. The most obvious is to use 'screen' - this will let you
      start interactive processes over a dialup or telnet connection
      (or within an xterm, or on a VC), then you can "multiplex" multiple
     interactive programs and you can "detach" the whole session from
     your terminal/connection.
     
     Later, when you reconnect you can re-attach to your 'screen'
     session using the command:
     
     screen -r
     
     ... assuming that you only have one of them going. If you've
     started multiple 'screen' sessions you can select the one to which
     you want to re-attach using additional command switches (read the
     man page for that).
     
      I routinely use 'screen' (I'm using it from a virtual console
      right now). If I leave this session like this and connect from
      my terminal in the living room (to watch a little CNN or "Law &
      Order" as I work) I just use the command:
     
     screen -r -d
     
     ... to simultaneously detach and reattach my screen session --- to
     effectively "yank it over to my terminal."
     
     Another advantage of using 'screen' is that my session is preserved
     if I get disconnected. (There's an "auto-detach" feature). So, you
     can leave the same session saving state in up to ten programs for
     weeks, even months at a time. (I have three copies of xemacs, a
     copy of lynx and a couple of shell prompts to the local and some of
     the other hosts on my net open as I type this).
     
     I do try to force myself to drop out of my screen session at least
     once a month.
     
     If you're using FTP to get these files you can also use the 'ncftp'
     command line features, including a "re-dial" which will keep trying
     to get to that busy FTP site until it gets your files. There's also
     a program called 'lftp' that is a "command line driven, script
     friendly" FTP client.
     
      Another approach would be to use 'expect' and/or Kermit scripts
      which you start at the remote end and run "asynchronously" (in the
      background, by slapping an '&' ampersand on the end of the command,
      or by hitting [Ctrl]+[Z] to "suspend" the job and issuing the 'bg'
      command to restart it as though you'd put the '&' on it to begin
      with).
     
     Note that this "job control" feature (the [Ctrl]+[Z] and 'bg'
     stuff) only works with non-interactive programs. Interactive
     programs are likely to stop with a "waiting on terminal input"
     message. 'screen' and any properly written 'expect' script will
     cope with those because they set up a Unix domain socket as a sort
     of "virtual" terminal to control the interactive software.
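
      A sketch of that background approach (the FTP client, host, and
      paths here are placeholders; check ncftpget(1) for its actual
      batch options):

```shell
#!/bin/sh
# nohup detaches the job from the terminal's hangup signal, and '&'
# puts it in the background, so the transfer survives your logout.
nohup ncftpget ftp.example.com /tmp/downloads '/pub/bigfile.tar.gz' \
    > /tmp/download.log 2>&1 &

# If the job is already running in the foreground instead:
#   [Ctrl]+[Z]    suspend it
#   bg            restart it in the background (as if started with '&')
#   jobs          confirm it is still running before you log out
```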
     
   (?) Regards,
   Terry Singleton
   Canadore College, Network Analyst 
                        ____________________________
   
(?) Letting Those Transfers Run Unattended

   From Terry Singleton on Fri, 25 Dec 1998 
   
   (?) Where do I find screen? I searched my system and www.freshmeat.net
   but could not find the app you mentioned. I am running RedHat 5.1 and
   I believe I installed almost everything. 
   
   thanks. 
   
     (!) That's odd. When I use freshmeat's "Quickfinder" it's the first
     entry that shows up. (Maybe the older version wasn't listed. A new
     version was just released recently --- after you sent me this
     message I think).
     
     Here's the Freshmeat "AppIndex" URL:
     
   ( freshmeat ) - ( details of "screen" )
          http://appindex.freshmeat.net/view/913939067
          
     ... and here's the main web page:
     
   screen - GNU Project - Free Software Foundation (FSF)
          http://www.gnu.org/software/screen
          
     It's also easy to find at the Filewatcher site
     (http://filewatcher.org formerly lfw.linuxhq.com) and at the Linux
     Archive Search (http://las.ml.org).
     
     However, Freshmeat returned the most recent version and the
     canonical web site, while the others showed dozens of links to
     older versions and other packages (with the string 'screen' in
     their names) and no information about the package. So Freshmeat's
     my first choice at this point.
            ____________________________________________________
   
(?) Translucent, Overlay, Loop, and Union Filesystems

   From c17h21no4 on Wed, 02 Dec 1998 
   
   (?) Where can I find information/documentation about the loopback
   filesystem and the translucent filesystem under Linux? From what I
   see on the mailing lists there is support, but the links are old or
   outdated (Ben's link) and I seem not to be finding any info on it. 
   
     (!) According to an old version of the CD-ROM Howto:
     
      Once upon a time there was an IFS (inheriting filesystem). This was
      written by Werner Almesberger for Linux version 0.99p11 and was
      similar in principle to the "translucent fs" from Sun. This was a
      "copy-on-write" system, sometimes referred to as an "overlay" or
      "union" fs.
     
      All of these are different terms for the same concept: you mount
      two (or possibly more) filesystems on the same point. Accessing
      files under these mount points presents files from one of the
      underlying filesystems.
     
      The most common case would be to lay a CD-ROM fs over a normal
      (ext2, minix, xiafs) filesystem. Any files on the "normal"
      (read-write) fs take precedence over any file with a colliding name
      on the CD-ROM. Any attempt to write to a file results in a copy (or
      possibly a "diff" on a log-structured fs). Later access to such
      files will refer to the copy rather than the original.
     
     An early version of the Yggdrasil Plug-n-Play Linux (*)
     distribution supported this (IFS) as an installation method, if I
     recall correctly.
     
     * (the first CD-ROM distribution ever released as far as I know)
       
      As far as I know Werner's IFS hasn't been updated in years and
      there isn't any support for any of these union/translucent etc. fs
      variants in the standard kernel. I did find one pretty obscure set
      of patches that appear to provide "overlay" filesystem support for
      2.0.31 kernels at:
     
   LOFS Patches for Linux:
          http://www.kvack.org/~blah/lofs
          
     ... this has no README files or other documentation so my guess
     about their intent is purely from reading the patches. I think
     "Blah" in this URL refers to Mr. Benjamin LaHaise who apparently
     wrote the following to the Linux-Kernel mailing list in May of
     1997:
     
     > Now is a very good time to tell me if
     > someone else has already got a working lofs :-) 
     
     I wrote one quite some time ago, and finally made patches against
     2.0.30 last week. They're at
     ftp://dot.superaje.com/pub/linux/lofs-2.0.30.diff It's not perfect,
     but it works. (I do have a fancier 2.1.x version, but it'll be a
     while before i get anymore work done on it.) 
     
     This was in response to a Mr. Jon Peatfield's query. (The ftp link
     therein does not work). He mentioned some additional work on his
     'lofs' as late as August of '97 --- quoted in a response by Linus
     regarding some VFS semantics.
     
     I presume this is the "Ben" to which you are referring. I've blind
     copied his last known @ddresses. (Sorry if you get three copies of
     this).
     
      There's a similar concept called a "cachefs" and there are a
      couple of somewhat different concepts called "loop" filesystems.
     
     A Linux "loop" or "loopback" filesystem allows one to mount a
     regular file as a filesystem. This only works if the file is an
     image of a supported filesystem. Thus, if you have a boot diskette
     image you can mount it on /dev/loop0, 'cd' into the mount point and
     view the contents.
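
      A minimal sketch, assuming loop-device support in the kernel and
      root privileges (the image name and filesystem type are examples;
      match them to your own image):

```shell
# Attach the diskette image to a loop device, then mount it:
losetup /dev/loop0 bootdisk.img
mount -t minix /dev/loop0 /mnt
ls /mnt                   # browse the image's contents
umount /mnt
losetup -d /dev/loop0     # detach the loop device when done

# Recent mount(8) versions can do the losetup step implicitly:
#   mount -o loop -t minix bootdisk.img /mnt
```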
     
      I've heard of another interpretation of the phrase "loop back
      filesystem" that involves remounting the same filesystem with
      different options at different mount points. Thus you might mount
      one at /usr with "read-only" options and somewhere else with
      "read-write and no-exec." However, I don't know which versions of
      Unix use this and it doesn't seem to match the Linux implementation
      at all.
     
     It is possible to enable encryption on your loop devices using the
     'losetup' command (see the man page in section 8). However, this is
     more of a proof of concept than a real utility. See my column last
     month for pointers to some real cryptography packages, or look at
     the "privacy protected disk driver" (ppdd) which is one I forgot to
     mention last month.
     
     'cachefs' and 'tmpfs' are filesystems that are supported by
     Solaris.
     
     The CODA project at http://coda.cs.cmu.edu also has some
     interesting replication and caching features.
     
     Obviously when we start talking about specialized filesystems we
     see myriad terminology collisions and ambiguities.
     
     For now I'd say that Linux LOFS/Translucent filesystems are not
     "ready for prime time." However, if you're interested in working on
     the code --- go for it!
            ____________________________________________________
   
(?) Modem dial out

   From Infinite Loop on Wed, 02 Dec 1998 
   
   Hi Jim, 
   
   How are you? I want to write a program that enables my Linux system
   to dial a page to my beeper, this function to be activated upon
   certain events. I am learning C. I came across a system call, ioctl,
   that is supposed to let me control the devices, but I cannot find
   further information on its usage. Or are there other
   programs/functions that you can advise me to work on to achieve the
   result? 
   
   Thanks. 
   
     (!) You might want to try the Linux Gazette "Search" feature. I
     wrote a fairly extensive piece on this back in May.
     
     Using the search phrase "pager software" at
     http://www.linuxgazette.com the following was the fourth hit:
     
   The Answer Guy 28: Email Alpha-Paging software
          http://www.ssc.com/lg/issue28/tag_paging.html
          
     Granted I wasn't able to find it so easily using Yahoo! and Alta
     Vista. When I elaborated on the phrase to include:
     
     pager software linux source code
     
     ... I got a surprise:
     
   Debian Package - hylafax-doc 4.0.2-5
          
    http://cgi.debian.org/www-master/debian.org/Packages/stable/comm/hylafax-client.html
          
          HylaFAX support [sic] the sending and receiving of
          facsimiles, the polled retrieval of facsimiles and
          the send [sic] of alphanumeric pages.
          ^^^^^^^^^^^^^~~~~~~~~~~~~~~~~~~~~~~~

     (emphasis mine).
     
   (?) Regards, Joseph Ang 
   
     (!) I'd get those packages and read through their sources a bit.
                        ____________________________
   
(?) Promptness: It's Just a Lucky Shot

   From Infinite Loop on Fri, 04 Dec 1998 
   
   (?) Hi Jim, 
   
   Thanks for your prompt reply! I'm very surprised to receive your reply
   in just a day! Really, really appreciate that :) 
   
   Best regards, Joseph Ang 
   
     (!) You were just lucky. The question was easy and appealed to me.
     
      Unfortunately there are many questions that I just don't "get to."
      Especially since I'm getting about five times more TAG traffic this
      month than last.
            ____________________________________________________
   
   
(?) 'chroot()' Jails or Cardboard Boxes

   From Clifton Flynt sometime before Wed, 02 Dec 1998 
   
   Hi, You recently stated: 
   
   You can set up inetd.conf to make a simple chroot() call to a jail
   before launching ftpd -- which will automatically use the /etc/passwd
   that's relative to the chroot directory. You can even use shadow
   passwords in the chroot. 
   
   It does take a bit of tweaking -- but it can be done. 
   
   Could you point me to a FAQ or HowTo for this? 
   
   I'm upgrading a 4.2 based firewall system to 5.1, and already tried
   the obvious tricks of copying the /lib/security and /etc/pam.d
   directories to the playground/jail directory. 
   
   Thanks,
   Clif 
   
     (!) I don't know of an FAQ or HOWTO on this. I haven't had time to
     write one myself.
     
     One trick is to use the 'ldd' command extensively to identify
     shared libraries that must be copied into the 'chroot()' jail.
     Another is to use 'strace' to capture system call traces of each
     program (particularly those that fail to run properly in the jail)
     and compare the calls to 'open()' between the version run in the
     jail and the one that works normally within your normal
     environment.
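
      A rough sketch of the 'ldd' trick, using /bin/ls as a stand-in
      for whatever program you're jailing (library paths in ldd's
      output vary from system to system):

```shell
#!/bin/sh
# Copy a binary, and every shared library it needs, into a jail tree.
JAIL=/tmp/jail          # scratch location; a real jail would live elsewhere

mkdir -p $JAIL/bin
cp -p /bin/ls $JAIL/bin/

# ldd prints lines like "libc.so.6 => /lib/libc.so.6 (0x...)" plus the
# dynamic loader itself; grab every absolute path and mirror it.
for lib in $(ldd /bin/ls | awk '{ for (i = 1; i <= NF; i++)
                                      if ($i ~ /^\//) print $i }'); do
    mkdir -p $JAIL$(dirname $lib)
    cp -p $lib $JAIL$(dirname $lib)/
done

ls -lR $JAIL            # inspect what the jail now contains
```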
     
      The brute force method is to simply install a whole distribution
      onto another filesystem. Mount that as the jail and trim out
      everything you don't need.
     
      It should be noted that 'chroot()' jails are not "root safe" under
      normal implementations of Unix and Linux. If an attacker does
      successfully gain 'root' privileges within the jail it is a simple
      matter to "break out."
     
     'securelevel' is a set of features in BSD (Free|Net|Open and
     BSDI/OS) to minimize the persistence of such compromise. These try
     to prevent root from exercising various privileges while the system
     is in "server" or "production" or "secure" mode.
     
     There were some patches for 'securelevel' that were under
     development for Linux. However, Linus rejected them and has
     accepted an alternative that may offer more flexibility, finer
     grained control and still allow for relatively easy "securelevel
     emulation."
     
     These features (what POSIX.1e refers to as "capabilities lists" but
     which are better described as "VMS like privileges") are built in
     the 2.1.x kernels and will almost certainly be part of 2.2. In
     addition to the possibility that these will allow us to "emulate
     'securelevel'" these may also prevent many forms of process
     subversion that lead to 'root' compromise.
     
      Normal 'securelevel' does nothing to prevent the attacker from
      gaining root. It does very little to limit what the attacker can
      do with that privilege during the session in which it is obtained.
     In other words the successful attacker still has control of the
     system. 'securelevel' primarily prevents persistent changes to the
     filesystems (no changing immutable flags to mutable and
     "append-only" files to random access read/write, no remounting
     read-only filesystems in read/write mode, etc). Some other
     securelevel features prevent loading of kernel modules and access
     to /dev/kmem (/proc/kmem for Linux users).
     
     This doesn't address the mechanism by which the attacker gained
     'root' and only places relatively minor limitations on what 'root'
     can do to the state of the system. Those limitations mostly prevent
     sniffing on other processes, hiding the attacker tracks, and
     leaving 'rootkits' laying around.
     
      With the "privs" features the Linux kernel adds more fine-grained
      delegation and limitation semantics. One can provide a process (and
      its descendants) with the ability to open a "privileged" TCP port
      (below the conventional Unix 1024 watermark) and/or with just
      read-only access to all files, without allowing that process to
      write to them, change their ownership, permissions/mode, or
      filesystem-dependent attributes/flags, etc.
     
     Basically these "privileges" split the implications of "SUID root"
     into separately maskable and delegateable items. Instead of one
     "god flag" we have a whole pantheon of them, each with its own
     sphere of influence.
     
      The kernel support for this is just the tip of the iceberg.
      Consequently we probably won't see effective use of this for
      several months after Linux 2.2 ships, and it will be much longer
      until we have "full" support for this security model.
     
     Currently the only way to use these features with 2.1 kernels would
     be to write wrapper programs that set/mask the privilege sets
     (there are "allowed, effective, and inheritable" sets; the
     "inheritable" set is a mask which strips these privs from
     children). These wrapper/launchers could then start processes with
     small lists of required privileges and some (small?) assurance that
     these processes couldn't perform some forms of mischief directly.
     
     To emulate 'securelevel' you'd write wrappers that started 'init'
     and/or 'inetd' and various daemons like 'sendmail' and your web
     server with a set of privileges masked off. These processes and
     their children would be unable to exercise certain sorts of system
     calls (possibly including the equivalent of 'chroot(..)' to
     chdir/chroot out of a jail) and file operations. They would not be
     able to inherit these privileges even from an SUID 'root' program
     --- such programs would only be able to exercise the subset of
     privileges that were inherited and allowed. (*)
     
     * (The attack vector would then have to be via subversion of some
       running process that retained its privileges i.e. via some form of
       interprocess communication rather than by direct execution. If
        'init' was stripped of its "chattr +i" priv then no process on
       the system could make immutable files mutable. Naturally you'd
       construct the wrapper or patches to 'init' such that these
       features would be enabled at specific runlevels or disabled with
       certain boot-time parameters).
       
     Later it will be possible to store these privilege sets as
     attributes of executable files. Thus the 'rsh' and 'rlogin'
     commands would have their "bind to privileged IP port" bit set, and
     all others would be unset. (Note we're not masking off the other
     privs, we're merely not granting them). Thus the reason why these
      two commands are "SUID 'root'" is accounted for, without giving
     these programs a host of other system privileges that are not
     required for their proper operation.
     
     The filesystem support for these features will presumably be added
     in the 2.3 kernel series.
     
     It looks like Linux 2.3 will mostly be about filesystems, "large"
     file support, ACL's, logging/journaling, b-tree directory
     structuring, and other features of that sort.
     
     It's not clear whether these will be rolled into ext2 or whether
     they will be incorporated into a new ext3.
     
     If this whole "privs" security model seems complex and difficult to
     administer and audit, then you're reading me loud and clear.
     
     Determining the precise set of requisite flags for each program and
     process will be a monumental pain. It is unclear how effective
     these efforts will eventually be. VMS has had these sorts of
     features since its inception, and they are similar to features in
     MLS/CMW (multi-level security for compartmented mode workstations)
     versions of Unix (usually billed/sold as the B2 Security Package,
     Option, or Version --- and generally only used by the U.S. military
     or similar organizations).
     
     Personally I would like to see a "true capabilities" subsystem
     implemented. This is a completely different security model that is
     so much unlike Unix, NT, and other identity/ACL based systems that
     you may have to spend a year or two unlearning what you know about
     operating systems design before you "get it." (It took me about two
     --- but I'm unusually stubborn).
     
     I've talked about this security model in this column before. Do a
     keyword search on EROS (extremely reliable OS) and/or KeyKOS to
     find some links about it. Ironically I've never used a system that
     incorporated "capabilities." However, I've grudgingly come to the
     conclusion that they represent a better security model than the
     ones we use in all major software today.
     
     The catch is that programs would have to be significantly retooled
     to work under such a system. There's also been almost no interest
     in this from the programmers that I've talked to. (That would
     suggest that I'm just a ranting crackpot --- since I'm not a
     programmer myself).
     
     In any event, hopefully these "privileges" will make your system
     somewhat more secure and make a chroot() jail more than just a
     cardboard box.
     
      If security is not your primary concern -- if all you want is to
      provide virtual FTP hosting -- just look at ncftpd and/or ProFTPD.
            ____________________________________________________
   
    "The Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                           (?) The Answer Guy (!)
                                      
                   By James T. Dennis, answerguy@ssc.com
          Starshine Technical Services, http://www.starshine.org/
     _________________________________________________________________
   
(?) Swap file on a RAM Disk

   From Mathieu Bouchard on Wed, 02 Dec 1998 
   
   (?) Hi, 
   
   Some have even reported that using 100 or 200K RAM disk with a swap
   file on it will dramatically improve the performance over using all of
   your memory as straight RAM. 
   
   Do you have any rational explication to this? I'm not a kernel expert,
   but it makes no sense -- especially because AFAIK, Linux RAM disks are
   swappable (and lazily-allocated), and mutual containment (in this
   context) makes no sense; 
   
     (!) No. I don't have a rational explication or explanation for
     this.
     
   (?) but in the event that a RAM disk wouldn't be swappable, then,
   swapping from RAM to RAM isn't anything more than a CPU hog and
   unnecessary complexity -- it's a kind of Alice in Wonderland to me. It
   would make sense if some compression was done while swapping, which
   would look like a Macintosh RAMdoubler. But Linux has no such feature
   -- six months ago I asked the Linux guys and they said that they
   didn't like the idea. 
   
   Is it possible that such a report would be gibberish? in which case I
   would like you to get the precise facts and publish them. I think that
   even though it is a detail, the Linux community doesn't deserve to
   have anything done wrong. I'm not [bf]laming, I just want to correct a
   situation. 
   
   matju 
   
     (!) However, I can make a guess. Many of the memory management code
     paths may have to special case the situation where no swap/paging
     space is available. The routines invoked to handle this special
     case may result in a slow down when no swap is available.
     
      You're welcome to search the Linux-Kernel mailing list archives
      yourself. You can also just try it (run some tests with no swap
      space "mounted" and then run them again with just a small swap file
      located on a small RAM disk). I haven't actually tried this
      experiment, so I made an effort to identify my statement as
      hearsay.
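
      For reference, the experiment itself is only a few commands (root
      required; assumes RAM disk support and a /dev/ram0 device node,
      with sizes echoing the report above):

```shell
# Zero-fill a 200K RAM disk and put a swap area on it:
dd if=/dev/zero of=/dev/ram0 bs=1k count=200
mkswap /dev/ram0
swapon /dev/ram0
free                    # the swap total should now include the new space

# ... run the benchmarks, then disable it and re-run them with no swap:
swapoff /dev/ram0
```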
     
      If you'd like to do some research on it --- you could publish your
      results (probably in LG --- perhaps as a 2-cent tip or as a full
      article).
            ____________________________________________________
   
(?) How to "get into" a Linux system from a Microsoft client

   From WRB on Wed, 02 Dec 1998 
   
   (?) Jim, thanks for the samba reference, I'm going to spend some time
   there. 
   
   By the way, it appears I did not give you enough information the first
   time around. When I mentioned "get into" I meant from the NT40
   explorer window, network neighborhood. I can see the Linux machine
   (computer4) shown there, but when I try to "click" on it and log in
   (YYYYYYYY4 with password XXXXXXX), that's when I get the message. This
   happens with NT40 only. 
   
     (!) That is what I'd guessed. You're trying to share files that are
     on the Linux system. Those must be exported/published to you via
     Samba.
     
      By the way, I've blotted out the computer name and password that
      you included in your text. Please don't include any password or
      other private information in postings to any stranger on the net
      --- particularly one like me, who publishes such messages in a
      widely read monthly webazine.
     
   (?) When I do the same thing with W95 explorer window, network
   neighborhood (click on computer4), I go right to the directory I
   assigned to the W95 computer (/home/computer1). I set up a link from
   there (/home/computer1) to / and now I can browse all over the Linux
   machine - using W95. 
   
   I just can't find the right "trick" to do the same thing with NT40. 
   
     (!) You'll want to read the Samba FAQ. This is one of the
     situations they cover therein. Basically there is a difference
     between the way that your Win '9x and NT 4.0 clients are attempting
     to authenticate to the Linux system.
     
   Samba FAQ
          http://us1.samba.org/samba/docs/FAQ
          
     Search on the string "nt4" to jump to the first relevant paragraph.
     However, I'd suggest reading the whole FAQ. Then you'll know more
      about Samba than I currently do. (I hardly ever use it).
     
   (?) Thanks again for your time
   Ron Botzen 
            ____________________________________________________
   
(?) Dynamic IP Address Publishing Hack

   From Ronald Kuetemeier on Sat, 05 Dec 1998 
   
   (?) Here is an expect script that you might find useful for your
   article. It keeps a connection to the internet up and running with a
   dynamically assigned ip address. It updates html file(s) with the
   assigned ip address and ftps it to a well known server on the
   internet. Ronald 
   
> =====================================================================
 #!/usr/bin/expect -f

 #expect script to keep a www server connected to the internet over
 #dynamically assigned ip address
 #Ronald Kuetemeier 11/1/1998 dket@mail.saber.net
 #Replace all xxxx with your values

 #initial ppp server address to see if we are already up
 #change this to your ftp server ip addr.
 set server xxx.xxx.xxx.xxx

 #use of ppp script to make sure ppp is down and can be restarted
 #change this to your local ppp up/down script
 proc logon {} {
  system xxxx stop
  close
  wait
  sleep 10
  system xxxx start
  close
  wait
  sleep 35
  ping 1
 }

 #get ip's from ifconfig
 proc getip {} {
  spawn ifconfig
  expect -re "P-t-P:\[0-9\]+.\[0-9\]+.\[0-9\]+.\[0-9]+" {
   close
   wait
   setip $expect_out(buffer)
  }

 }

 #find local ip and remote server ip address from ifconfig
 proc setip {out} {
  global server
  set ips [string range $out [string first "Point-to-Point" $out]
 [string length $out]]
  regexp  P-t-P:\[0-9\]+.\[0-9\]+.\[0-9\]+.\[0-9]+  $ips server_1
  regexp  addr:\[0-9\]+.\[0-9\]+.\[0-9\]+.\[0-9]+  $ips client_1
  regexp \[0-9\]+.\[0-9\]+.\[0-9\]+.\[0-9\]+ $server_1 server
  regexp \[0-9\]+.\[0-9\]+.\[0-9\]+.\[0-9\]+ $client_1 client
  changeaddr $client
 }

 #ping to see if connection is still up
 proc ping {i} {
  global server
  while {1} {
   if {$i == 6} {
    logon
    getip
    set i 0
   }
   spawn ping -c 1 -n $server
   expect {
    "bytes from" break
    "100% packet loss" close
    ret=-1 close
   }
   wait
   incr i
   puts $i
   sleep 3
  }
   close
   wait
 }

 #change to your local userid and passwd and file transfer
 proc ftp {} {
 #change to your ftp server
  spawn ftp xxx.xxx.xxx
  expect "Name*:"
  send "xxxx\r"
  expect "Password:"
  send "xxxx\r"
  expect "ftp>"
 #change to your ftp server directory,i.e public_html
  send "cd xxxxx\r"
  expect {
 #change file to transfer             [file]
                "2*ftp>" {send "put xxxx.xxxx\r"}
                 "550*ftp>" ftp_error
  }
  expect {
 #change or delete file 2 transfer    [file 2]
                "2*ftp>" {send "put xxxx.xxxx\r"}
                "No such file" ftp_error
  }
  close
  wait
 }

 proc ftp_error {} {
  puts "FTP ERROR\n"
  close
  wait
 }

 # use sed to replace unique name with ip addr in a file
 proc changeaddr {client} {
 #change file names and local dns name
 #                     [DNS]              [in.file]   [out file]
  system sed 's/xxxx.xxxxxxx.xxx/$client/' xxx.xxxx > xxxx.xxxx
  close
  wait
 #change file names and local dns name or delete this
 #                      [DNS]               [in.file]   [out file]
  system sed 's/xxxx.xxxxxx.xxxx/$client/' xxxx.xxxx > xxxx.xxxx
  close
  wait
  ftp
 }


 ping 6

 while {1} {
  puts "Main loop\n"
  ping 1
  sleep 9
 }

     (!) I'll just leave this as is. However, I'd suggest that the
     'pppup' script documented in the 'pppd' man pages would provide
     some of the IP addresses that you are laboriously extracting from
     spawn command outputs using regexes.
     
      Also, it would make a lot of sense to write up an article around
      this script and publish that in LG yourself.
            ____________________________________________________
   
(?) Why 40-second delay in sending mail to SMTP server?

   From Steve Snyder on Sat, 05 Dec 1998 
   
   (?) On my LAN, when my (Win95- and OS/2-based) mail clients retrieve
   mail from my (RedHat v4.2) Linux server, it is all but instant. When
   sending mail to the server, there is a 40 - 45 second delay before the
   sent mail is accepted. Mail retrieval by the clients is done via
   POP3; mail is sent via SMTP. 
   
      (!) Sounds like attempts at reverse DNS and/or 'ident' lookups.
     
   (?) These are the relevant lines from my /var/log/maillog. These lines
   are the result of sending mail from mercury.snyder.net (client running
   OS/2) to solar.snyder.net (server running Linux). Note that the second
   line contains the text "delay=00:00:40". Hmm. 
   
> Dec  2 09:12:05 solar sendmail[21694]: JAA21694:\
>       from=<steve@solar.snyder.net>, size=403, class=0, pri=30403,\
>       nrcpts=1, msgid=<199812021411.JAA21694@solar.snyder.net>,\
>       proto=SMTP, relay=mercury [192.168.0.2]
> Dec  2 09:12:05 solar sendmail[21724]: JAA21694:\
>        to=<steve@solar.snyder.net>, ctladdr=<steve@solar.snyder.net>\
>        (500/500), delay=00:00:40, xdelay=00:00:00,
>       mailer=local, stat=Sent

   I should also note that I don't use DNS on my LAN. Name resolution is
   done via a hosts file on each machine. 
   
   I haven't done any tweaking of (version 8.8.5) sendmail. It is pretty
   much as-is from the RedHat installation. In case it isn't already
   obvious, I'm a newbie at mail configuration issues. 
   
   Any advice on how to eliminate this delay in sending mail from my
   client machines? 
   
   Thank you.
   *** Steve Snyder *** 
   
     (!) Newer versions of 'sendmail' have features that are intended to
     minimize abuse by spammers and miscreants. Some of these involve
      doing double-reverse DNS lookups to check that the forward and
     reverse names are consistent.
     
     Sendmail normally will not use the /etc/hosts file to map host
     names to IP addresses. This is because the standards call for it to
     look up DNS MX records in preference to other types of address
     records.
     
     In other issues I've described how I get around that on my private
     LAN (which also doesn't use DNS for mail routing or internal host
     resolution).
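
      One way to approach it, sketched here with option and file names
      taken from the sendmail 8.8 documentation (verify them against
      your own sendmail.cf before relying on this):

```
# In sendmail.cf --- point sendmail at a service-switch file:
O ServiceSwitchFile=/etc/service.switch

# In /etc/service.switch --- consult the hosts file before (or instead
# of) DNS:
hosts   files dns

# A ~40 second pause is also the classic symptom of an 'ident' probe
# timing out; this sendmail.cf option disables the probe:
O Timeout.ident=0
```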
     
     [ Jim has at several points in the past revealed fragments of our
     main control file, sendmail.cf, so the Linux Gazette search box
     should be able to reveal it pretty easily if you use that filename
     as a keyword. -- Heather ] 
            ____________________________________________________
   
(?) Linux as Router and Proxy Server: HOWTO?

   From kdeshpande on Sat, 05 Dec 1998 
   
   (?) we want to setup linux server as a proxy server with two ethernet
   cards. kindly guide me for installtion process. 
   
   i am ms win 95 user and does not know any thing about unix / linus pl.
   reply on [another mail address]. 
   
     (!) You'll want to start with the "Linux Installation and Getting
     Started." That's an LDP guide that's often included with Linux
     distributions in the /usr/doc directory and is available on-line at
     any mirror of the LDP. That covers the basics of Linux (although it
     is getting a bit long in the tooth).
     
     Configuring a proxy server and router is a fairly advanced process
     and will involve a considerable understanding of Unix and of TCP/IP
     concepts. It sounds like your skills in English may make some of my
      explanation inaccessible to you. Hopefully the guide to routing
      that I've also written for this month's LG (it should appear in
      the January 1999 issue) will help.
     
     There are a couple of other HOW-TO documents written on using Linux
     as a "Firewall" (proxies are often a component in a firewall). Many
     of these have been translated into various languages. You'll want
     to see if there's one in your native language.
     
     Personally I'd suggest that you get a consultant to come in and
     configure it for you. That is likely to be far easier and less of a
     hassle than trying to do it yourself.
     
     Now, with all of those disclaimers out of the way here's a simple
     configuration:
     
                                    _________
                192.168.1.x  -------| proxy |------ the Internet
                                    ^^^^^^^^^

     In order to have a proxy system, you have to have a "multi-homed
     host" (a system with two interfaces in it).
     
     In this case you've specified that you want to have two ethernet
     cards. So, first you install those. Be sure to set their IRQ's and
     I/O base address settings to non-conflicting values. The exact
     process varies greatly from one card to another. With the 3c5x9 and
     3c900 cards you use a program to set them (3C5X9CFG.EXE under
     MS-DOS, or the appropriate utility that was written for Linux --- I
     found a copy at the VAResearch ftp site: ftp.varesearch.com under a
     relatively obvious name).
     
     Let's say that you have one of them set to IRQ 10, I/O 300 and the
     other set to IRQ 11, I/O 330 (make sure that these don't conflict
     with any SCSI, sound or other cards that you have installed).
     Typically you'll also want to disable any "Plug & Play" support on
     your motherboard since these features may change the settings on
     your ethernet card while you boot, causing you no end of
     consternation later.
     
     You'll also have to make sure that the appropriate driver is linked
     into your kernel, or that you've built the appropriate modules.
     
     It is also common for the Linux kernel to require that you provide
     it with a hint that there are multiple ethernet cards to
     initialize. You just provide the kernel with a boot parameter (read
     the 'bootparam(7)' man page and/or the "Boot Parameter HOWTO" for
     details). The HOWTO has an example at:
     
     http://metalab.unc.edu/LDP/HOWTO/BootPrompt-HOWTO-7.html#ss7.1
     
      ... showing the common case using:
      
      ether=0,0,eth1
      
      ... (no spaces --- and don't change the case of any letters).
     
     This option is passed to the kernel by typing it in at the LILO
     boot prompt, or adding an append directive to your /etc/lilo.conf
     like:
     
      append="ether=0,0,eth1"
     
     ...(the double quotes are required).
     
      This option forces the kernel to look for a second ethernet adapter
      (the first ethernet adapter is labelled 'eth0' and will
      normally be detected automatically). The 0,0 forces it to search
     for the IRQ and I/O base addresses automatically. If that's not
     successful, or you want to be conservative, you can just provide
     the information manually.
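      Spelled out manually, that lilo.conf append line might look like
      this (a sketch using the hypothetical IRQ 10/0x300 and IRQ 11/0x330
      settings from the example below; substitute your own values):

```shell
# /etc/lilo.conf fragment: explicit IRQ and I/O base for both cards.
# The parameter format is ether=IRQ,BASE_ADDR,NAME -- the values here
# are only examples, not universal defaults.
append="ether=10,0x300,eth0 ether=11,0x330,eth1"
```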
     
     This is extensively documented in the "Ethernet HOWTO" at:
     
     http://metalab.unc.edu/LDP/HOWTO/Ethernet-HOWTO-10.html#ss10.1
     
      You should see boot-time messages indicating that the ethernet cards
      have been found. You can use the 'dmesg' command to review them
      after the system has finished booting and you've logged in.
     
      The last step in the hardware/driver layer is to issue an 'ifconfig'
      command for each of these interfaces.
     
      Let's say your ISP router (cable modem, ISDN or DSL gizmo,
      whatever) is using address 172.17.100.1 on your ethernet (that's a
      private net address from RFC 1918 --- but let's pretend it were your
      real address).
     
     Let's fill in our diagram a bit more:
     
                              _________      __________
          192.168.1.x  -------| proxy |------| router | -- Internet
                              ^^^^^^^^^      ^^^^^^^^^^
                            eth0     eth1    ^-------- 172.17.100.1
                       192.168.1.1   172.17.100.2

      Here we see a private network (all of 192.168.1.*) and our proxy
      server with two ethernet interfaces: eth0 is on our "inside LAN"
      (taking up the conventional .1 address for a router --- it is the
      router to the outside/perimeter segment), and eth1 is the proxy
      host's interface on the "perimeter" or "exposed" segment (outside
      of our LAN).
     
     There is a small perimeter segment in this case. In many
     organizations it will be populated with web servers, DNS and mail
     servers and other systems that are intended to be publicly visible.
     
     Obviously each of the systems that are shown on this segment (the
     proxy and the router) need their own IP address. I've assigned
     172...2 to the proxy since I said that 172...1 was the border
      router's inside address. The border router would also have some
      sort of link (usually a point-to-point (PPP) link over a modem,
      ISDN, frame relay FRAD, CSU/DSU, DSL ATM or other device ---
      telephony is not my specialty; they hand me a "black box" and I
      plug the wires into the little tabs where they fit).
     
      For our example we don't care what the IP addresses over the PPP
      link are. All we care about is that our ISP gets packets to and
      from the 172...* network or subnet. They have to have routes to us.
     
     This example will work with any subnet mask --- we'll assume that
     we have a whole class C range, from 172.17.100.1 through
     172.17.100.254 for simplicity's sake (read all about subnetting and
     proxyarp for gory details on those scenarios).
     
     So, on our Linux proxy server we use the following commands to
     configure our interfaces:
     
     ifconfig eth0 192.168.1.1 netmask 255.255.255.0
     ifconfig eth1 172.17.100.2 netmask 255.255.255.0
     
     ... we could leave the netmask option off the first command since
     it will default to this mask due to the address class. With most
     modern ISP's we'll have to use some other netmask for the second
     case --- unless we're paying for a whole Class C block. We might
     need to anyway (our ISP might have a Class B address block and be
     subnetting it into Class C chunks). We'll just assume that we need
     it on both of them.
     
     We can optionally specify the broadcast addresses for these ---
     however it shouldn't be necessary if we're following normal
     conventions. It will default to the last valid number in the
     address range (192.168.1.255 for the first case and 172.17.100.255
     in the other).
     
      * (If we'd had a netmask of 255.255.255.240 in the second case then
        our broadcast address would be 172.17.100.15; if our addresses had
        been 172...33 and 172...34 with that netmask our broadcast would
        have been 172...47 --- again these are just examples; the
        explanation is a bit involved.)
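      The arithmetic behind those broadcast numbers can be sketched in a
      few lines of shell (each octet of the broadcast is the address
      OR'd with the complement of the netmask; this is just an
      illustration, not a networking tool):

```shell
#!/bin/bash
# Compute a broadcast address: each octet is (ip OR (255 - mask)).
broadcast() {
    local IFS=.                 # split the dotted quads on "."
    local -a ip=($1) mask=($2)
    echo "$(( ip[0] | 255 - mask[0] )).$(( ip[1] | 255 - mask[1] )).$(( ip[2] | 255 - mask[2] )).$(( ip[3] | 255 - mask[3] ))"
}

broadcast 192.168.1.1  255.255.255.0     # -> 192.168.1.255
broadcast 172.17.100.2 255.255.255.240   # -> 172.17.100.15
```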
       
     So we have IP addresses on each interface. Now we need routes. In
     the newer 2.1.x kernels (and presumably in the 2.2 kernels and
     later) the 'ifconfig' operation automatically results in an
     addition to the routing table. This is more like the way Solaris
     works. Under earlier kernels you have to add routes with commands
     like:
     
     route add -net 192.168.1.0 eth0
     route add -net 172.17.100.0 eth1
     
     ... this defines routes to the two local segments (one on the
     inside, and one on the outside). Again, newer kernels may not
     require this entry.
     
     Now, for our proxy to reach the Internet we'll have to set a
     "default route" like:
     
     route add default gw 172.17.100.1
     
     If we have other networks that must be accessed through our LAN
     (something like a 10.*.*.* network in the back office or for our
     server room) we may also want to add other "static" routes to this
     list. Let's say that 192.168.1.17 was a router between our desktop
     LAN and our 10-net server segment. We'd add a command like:
     
      route add -net 10.0.0.0 gw 192.168.1.17
     
      Notice that we are not forwarding packets between our interior LAN
      and the outside world. If we did, the routers on the Internet would
      not have any valid routes back to us (that's what these 192.168.*.*
      and 10.*.*.* addresses are all about. Read RFC 1918 for details on
      that). 172.16.*.* through 172.31.*.* addresses (16 Class B blocks)
      are also reserved for this use --- but we're "pretending" that
      172.17.100.* is a "real address" for these examples.
     
     So now we need to enable our interior systems to access the outside
     world. We can use IP Masquerading and/or proxying to accomplish
     this. Masquerading is a bit easier than proxying under Linux since
     the support is compiled into most kernels.
     
      Masquerading is a process by which we make a group of systems (our
      internal clients) look like one very busy system (our proxy). We do
      this by re-writing the "source" addresses on each packet as we
      route it --- and by patching the TCP port numbers.
     
     TCP "port" numbers allow a host to determine which process on a
     system is to receive a given packet. This is why two users on one
     system can telnet to another system without there being ambiguity.
     
      Using masquerading, all of the connections that are being handled at
      any given moment essentially look like "processes" or "sockets" on
      the proxy server.
     
     Thus IP masquerading is "network layer proxying."
     
      To do this under Linux 2.0.x and earlier (back to the 1.3.x series)
      we could simply use a command like:
     
     ipfwadm -F -m -a accept -S 192.168.0.0/16 -D 0.0.0.0/0
     
     ... which adds (-a) a masquerading (-m) rule to accept packets from
     any source address matching 192.168.*.* (16 bits of the address are
     the "network part" --- that's equivalent to a netmask of
     255.255.0.0) and whose destination is "anywhere." This rule must be
     added to the "forwarding" (-F) set of packet filters.
     
      The Linux 2.0.x IP packet filtering subsystem (a kernel feature)
      maintains four sets of rules (tables): Accounting, Input,
      Forwarding, and Output
     
     ... we only care about the "forwarding" rule in this case.
     
     With all recent Linux kernels we also have to issue a command like:
     
     echo 1 > /proc/sys/net/ipv4/ip_forward
     
     ... to enable the kernel's forwarding code. These kernels default
     to ignoring packets that aren't destined to them for security
     reasons (this and a TCP/IP "option" called "source routing" have
     been used to trick systems into providing inappropriate access to
     systems --- so it is better for systems to leave these features
      disabled by default). Older versions of Unix and Linux were more
      "promiscuous" --- they would forward any packet that "landed on
      them" so long as they could find a valid route.
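      On the newer 2.1.x/2.2 kernels mentioned earlier, 'ipfwadm' has
      been replaced by 'ipchains'; the equivalent masquerading setup
      should look roughly like this (a sketch --- these must be run as
      root on the proxy host, so treat it as a configuration fragment):

```shell
# ipchains equivalent of the ipfwadm forwarding/masquerade rule above
ipchains -A forward -s 192.168.0.0/16 -d 0.0.0.0/0 -j MASQ

# ... and, as before, enable IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward
```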
     
      Lastly we'd just configure our client systems with IP addresses in
      the range 192.168.1.2 through 192...254 and configure them to use
      192.168.1.1 as their default gateway. Packets will get to the proxy
     from any of these, be re-written to look like they came from some
     socket on the 172...2 interface and forwarded out to the Internet.
     Returning packets will come in on the socket which will provide the
     kernel with an index into a table that stores the 192.168.*.* owner
     of this connection, and the return packet will be re-written and
     forwarded accordingly (back into the internal network).
     
     That's how masquerading works.
     
      Applications layer proxying is actually a bit easier than this. You
      install packages like SOCKS, Delegate, the FWTK (firewall toolkit),
      or a Squid or Apache caching web server onto the proxy system.
     These listen for connections on the inside interface (192.168.1.1).
     Proxy aware software (or users) on the internal system direct their
     connections to the proxy server (on port 1080 for SOCKS and
     Delegate) and then relay the real destination address and service
      to the proxy server. The proxy server, in turn, opens up its own
      connection to the intended server, makes the requests (according to
      the type of service requested), and relays the information back to
      the client.
     
      In addition to this basic relaying, a good proxy server can
      provide caching (so multiple requests for the same static
      resource are handled locally --- saving time and conserving
      bandwidth), additional logging (so big brother can tell who's been
      bad), and can enforce various access control policies (no FTP to
      popular mirror sites in the middle of the day, all users must be
      Kerberos authenticated in order to access the Internet, whatever).
     
     The main disadvantage to applications layer proxying is that the
     proxy clients must be "socksified" or proxy aware. Either that, or
     with some of them (FWTK and optionally DeleGate) the user of a
     normal client (such as FTP) can manually connect to the proxy
     server and use some special command (login sequence) to provide the
     proxy with the information about the real destination and
      user/account info. (Almost needless to say, a compromised proxy
      host is a great place to put password-grabbing trojan horses!)
     
     However, one of the major advantages of the proxy system is that it
     can support strange protocols --- like "active FTP" which involves
     two co-ordinated IP connections, one outbound control connection
      and one inbound data channel. There are other protocols that
      pass connection information "in band" and make masquerading more
      difficult and sometimes unreliable.
     
     It's possible to use both, even concurrently with just one host
     acting in both roles.
     
     So far my favorite applications proxy package is "DeleGate" by
     Yutaka Sato, of the Electrotechnical Laboratory (ETL) in Japan. You
     can find it at:
     
     http://wall.etl.go.jp/delegate
     
     ... it's easy to compile and configure and it's available under a
     very liberal license (very BSD'ish but less wordy).
     
     DeleGate can be used as a SOCKS compatible server (i.e. SOCKSified
     client software will work with DeleGate); and it can be "manually
     operated" as well.
     
     My only complaint about DeleGate is that the English documentation
     can be a bit sparse (and my paltry studies of Japanese are nowhere
     near the task of reading the native docs).
     
     The easiest way to install SOCKS clients on your Linux systems is
     to just grab the RPM's from any Red Hat "contrib" mirror. That's
     also the easiest way to install a SOCKS server.
     
     To configure the clients for use with the SOCKS5 libraries you have
     to create a file, /etc/libsocks5.conf, to contain something like:
     
socks5          -       -       -            -          192.168.1.1
noproxy         -       192.168.1.           -

      ... note that the "noproxy" line ends with a "." to specify that
      this applies to the whole 192.168.1.* address range.
     
      To configure the socks server you need to create a file,
      /etc/socks5.conf, and put in it at least something like:
     
route   192.168.1.      -       eth1
permit  -       -       -       -       -       -

      ... and you might have to change that interface for our example (I
      don't remember, but I think it's "destination addresses and target
      interface").
     
     Naturally the docs on these are abysmal. However, I did eventually
     get this setup working when I last tried it.
            ____________________________________________________
   
(?) PostScript to GIF

   From Jamie Orzechowski on Fri, 05 Jun 1998 
   
   (?) Hi There .. I am trying to convert a .PS to .GIF .... no luck so
   far ... I got the progrma ppmtogif but it WILL NOT compile ... can;t
   get it working at all ... I was wondering if you had the binary to
   ppmtogif (linux redhat) or know where I can get a source distribution
   that will compile ... or any other program that will convert ps to gif
   ... thanks! 
   
     (!) You could use the pstogif perl script by Nikos Drakos of Leeds
     University. It apparently accompanies the LaTeX2HTML package.
     
      I discovered that by simply switching to a shell prompt, typing
      "ps{TAB}{TAB}", and looking at the list of utilities that bash's
      command completion offered me. Then I looked for a man page and
      then just looked at the file itself.
     
     Running the 'rpm -qf' command to see which package included this
     'pstogif' file I found that it came with "l2h-96.1.h-5.rpm" on the
     'Canopus' and with "xemacs-19.15p2-2.rpm" on 'Antares' (A couple of
     my machines here).
     
      There is a dizzying array of pbm, ppm, and pgm conversion filters.
      The three formats seem to be very similar (for "portable bitmap,"
     "portable pixmap," and "portable graymap" respectively). So, like
     you, my first thought would have been to use one of them.
     
     In all honesty I avoid graphics files as much as possible so I
     don't have an easy answer to this.
     
     (Incidentally this is an old message. I'm trying to clear out my
     old drafts folder by the end of the year).
            ____________________________________________________
   
(?) troubleshooting

   From Matthew Easton on Wed, 06 May 1998 
   
   (?) One thing I notice as I try to learn more about Linux, is that
   much of the information I come across is very specific to a particular
   situation or a particular piece of software. I'd like to get away from
   the 'step by step instructions for software x' and construct a "bag of
   tricks" that will allow me to solve problems myself. 
   
   To explain: In my job I troubleshoot Macintosh hardware and software.
   If you had a problem with a Mac I could tell you some things to check
   and several procedures to try-- and even if I was unfamiliar with the
   particular application that was failing you, chances are pretty good
   that things would be functioning in the end. 
   
     (!) That is why we have professional technical support, system
     administration, help desk, repair technicians etc.
     
     The issue is similar for a number of trades and professions.
     
      Even the Mac, for all its vaunted "ease of use" and consistency,
      really requires a significant acculturation to a large number of
     assumptions. I know this from very recent first hand experience
     since I gave my mother her very first computer earlier this year
     --- it was a Mac Performa.
     
   (?) Granted the user interface is greatly simplified under Macintosh
   compared to Linux, but are there any general principles or things to
   look for, or standard procedures for troubleshooting software under
   Linux, or tools? 
   
     (!) The simplicity of Macs and Windows can largely be summed up as:
     
     If you don't see a menu option, button or dialog for it --- you
     probably can't do it.
     
      (I realize this is a bit of an oversimplification --- there are
      whole books of Mac and Windows "tricks" that are slowly gleaned
      over time).
     
      These systems make a reasonable subset of their functionality
      available on their face (through full-screen menu driven user
      interfaces). That whole issue of "icons" and "GUI's" is completely
     a red herring since they really are just menus under all the hype.
     I have a friend who said that the easiest system she'd ever worked
     on was an AS/400 (running OS/400 naturally enough). She described
     (even showed me, once) the interface and it did sound pretty handy.
     
     Unix is usually described as a "toolbox." The analogy is
     reasonable. If I handed you a real box full of hammers,
     screwdrivers, nail guns, pliers, drills, saws, wrenches sockets,
     and similar physical tools it wouldn't help you build or rewire
     your house, fix your car or anything --- until you learned the
     appropriate construction and mechanical trades that use these
     tools.
     
     Similarly we find that some programmers under Unix can be just as
     confused and incapacitated when faced with system or technical
     administrative issues as an auto mechanic might be when faced with
     a plumbing problem. Naturally a plumber or mechanic is more likely
     to successfully take on other "handyperson" repairs than someone
     with no related experience.
     
     Another way of thinking about these OS' is in terms of culture and
     language. Natural language (including idiom) is entwined with many
     cultural assumptions. Unix/Linux conventions can be seen as a
     "language" for expressing demands of your computer (via the shell,
     through myriad configuration files, even in the Motif, KDE,
     OpenLook and other GUI's that we encounter).
     
     The advantage of this "linguistic" point of view is that it
     approaches the level of complexity of a Unix system. When I was an
     electrician I doubt I encountered more than two hundred different
     tools, and probably less than two thousand different components
     (connectors, fittings, brackets, etc). (Thousands of sizes and
     minor differences --- but not different in terms of usage
     semantics).
     
     On this Linux box if I switch to a bash shell prompt and double tap
     on the "Tab" key on a blank line (forcing it to try command
     completion) it warns me that I have over 2300 commands available to
     me. Many of these are full programming languages or environments
     like awk, perl, and emacs (elisp). Similarly I once determined that
     my copy of emacs (or was it xemacs) had about 1500 interactively
     accessible functions built into it. (If I installed the emacs
      'calc' (a large mathematics package) that would probably double.)
     
     So there's quite a bit of depth and breadth available.
     
   (?) For example: How do I deal with a segmentation fault? Or, if an
   application installs broken and reinstalling the RPM package still
   does not work, is there a way to get Linux to tell me what is missing
   or corrupted? And what can I do about a program that (under X windows)
   briefly appears and then dies without error messages? 
   
     (!) In many cases these can be tracked down using 'strace' (the
     system call tracer).
     
     Any segmentation fault is a bug in the program (or corruption in
     its binaries or libraries). Robust programs should handle bad data,
     corrupted configuration files, etc, gracefully.
     
      Packages that fail to operate as expected might be buggy, or they
      might have inadequate documentation. I personally like to see
     programs that have some sort of "diagnostics" or "check option" to
     help me track down problems with them.
     
     ('sendmail' and 'named' are notable culprits in this case).
     
   (?) Thanks for any clarification on these or any other mysteries. . .
   Matt Easton 
   
     (!) That will take an entire book.
     
     (Incidentally I found this message languishing in an old drafts
     folder and decided to finish it up and send it off. I really wanted
     to say much more on this topic --- but I decided to write a book
      instead.)
            ____________________________________________________
   
(?) More on: "Remote Login as root"

   From Eric Freden on Fri, 04 Dec 1998 
   
   (?) Here is a legitimate use for remote login as root: 
   
   My kid plays some svga game on my console and locks the keyboard (for
   instance). I want to telnet in and /sbin/shutdown -r now 
   
      (!) Actually you can often recover from this without a shutdown.
      But it's a trick. Eventually we might have something like KGI/GGI to
      provide more robust SVGAlib support.
     
     (The trick is to start X from your telnet/terminal session. This
     usually does a complete video system reset as a side effect.
     WARNING!: This might hang the system --- so close any running text
     mode apps, save any accessible documents and issue a few calls to
     'sync' before trying it).
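      As a sketch, that recovery sequence from the remote session might
      look like this (the display number and the number of 'sync' calls
      are arbitrary, and the usual warning above applies):

```shell
# From the telnet session: flush buffers, then start (and later exit)
# an X server to force a video reset --- this may hang the box, as noted
sync; sync; sync
startx -- :1
```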
     
   (?) For RedHat 5.0 (and all other RedHat versions I've used) only root
   can do this. Changing to su and executing shutdown won't reboot!
    Perhaps you could find a workaround for this scenario? 
   
   Eric Freden 
   
     (!) If that's your experience then something is wrong with your
     'su' command!
     
      Did you issue 'su -'? (The dash is pretty important --- it forces
      the 'su' command to run your .login/.profile scripts and initialize
      root's environment, shell variables, etc.)
     
      Another approach is to tweak the permissions on 'shutdown'. Here's
      the recommended method:
     
     Create a group such as "shutdown" or use the "wheel" group.
     
     Add your regular user account (and mom's?) to that group.
     
     Issue a command like:
     
     chown root.$GROUP $(which shutdown)
     
      ... to set the file group association appropriately. You could also
      use 'chown' then 'chgrp' separately, of course.
     
      Make it SUID with a command like:
      
      chmod 4550 $(which shutdown)
     
     N.B. I set the execute bit for owner and group but not for
     "other/world"
     
      This allows people in the associated group to issue the 'shutdown'
      command. That command will run with root's privileges. The 0
     permissions for "other" prevent "others" from executing this
     command at all. (Other users have no valid reason to issue a
     shutdown command).
     
     Setting binaries to be SUID always has implications for system
     security. However, it is one of the primary forms of authority
     delegation available in Unix/Linux.
     
     In this case we minimize the risk by limiting the number of
     accounts that can access the command.
     
     This technique is generally useful and should be considered for all
     Unix/Linux SUID commands.
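      Collected into one sequence, the recipe above might look like this
      (run as root; the group name and the user name "someuser" are just
      placeholders):

```shell
# Delegate 'shutdown' to one trusted group (names here are hypothetical)
groupadd shutdown                        # or reuse the existing "wheel" group
usermod -G shutdown someuser             # NB: -G replaces the supplementary group list
chown root.shutdown $(which shutdown)    # owner root, group "shutdown"
chmod 4550 $(which shutdown)             # SUID root; r-x for owner/group, nothing for others
```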
            ____________________________________________________
   
(?) Kudos

   From Gray, Robert C on Fri, 04 Dec 1998 
   
   (?) Answer Guy I have been reading your column in the Linux Gazette
   for four months (I've also gone to the archives and read several past
   articles). I am very new to Linux, I installed Redhat 5.0 in July '98'
   and built a new Kernel on 2.0.34 after that. In any one column that
   you have written I find more information than I can possibly absorb.
   Though I can't give a specific example the information you provide has
   helped me through some small problems and increased my knowledge of
   how Linux works by an unbelievable amount. Despite the fact I
   generally suffer from information overload before I finish your column
   or the Gazette THANK YOU and the Gazette for that information and
   please keep it up. 
   
   Robert Gray
   Novice Linux user. 
   
   "The word bipartisan usually means some larger-than-usual deception is
   being carried out" George Carlin 
   
     (!) Thanks for the kudos and encouragement.
     
     I like the .sig quote.
            ____________________________________________________
   
(?) Linux Support for Intel Pentium II Xeon CPU's and Chipsets

   From mpasadas on Fri, 04 Dec 1998 
   
   (?) Hello! 
   
   I have a little technical question, because revising the information
   about the subject it is not clear at all if the several versions of
   Linux disponible at this moment can run without problems on the very
   new Pentium II Xeon microprocessor of Intel. 
   
   If you have the answer to this question, please send it to me via
   e-mail, at the following address [snipped for privacy] 
   
     (!) VA Research (http://www.varesearch.com), a fairly well-known
     Linux friendly hardware vendor, offers Xeon based systems with
     Linux pre-installed. PenguinComputing
     (http://www.penguincomputing.com) also offers quad and dual Xeon
     based Linux systems and I'm sure other HW vendors do as well.
     
      (Disclaimer: the principals of VA Research and PenguinComputing are
      friends of mine --- though I get no compensation for mentioning
      them. Darn!)
     
      If I recall correctly, VA Research demonstrated a 4-way SMP Xeon
      system at "The Future of Linux" meeting that was jointly sponsored
      and organized by SVLUG (http://www.svlug.org) and Taos Mountain
      (http://www.taos.com) last summer.
     
     That event was reviewed in LG:
     http://www.linuxgazette.com/issue31/roelofs.html
     
     So, I don't know of any problem with this, and a quick
     Yahoo!/AltaVista search didn't reveal any problems either.
            ____________________________________________________
   
(?) Linux Friendly ISP's: SF Bay Area

   From Cdoutri on Fri, 04 Dec 1998 
   
   (?) Hi, 
   
   I'm looking for some Internet Service Providers in San Francisco that
   would have a connection software working under Linux, can you help me
   with that ? 
   
   Many thanks, C. 
   
     (!) Many ISP's are "Linux" and "Unix" friendly. I personally have
     accounts with a2i (http://www.rahul.net) and Idiom
     (http://www.idiom.com run by David Muir Sharnoff).
     
     I also know people at Best (http://www.best.com).
     
     a2i uses Solaris, the other two use FreeBSD. However, all are
     friendly to Linux users.
     
     This is one nice thing about the Silicon Valley and SF Bay areas
     --- they are such strongholds of Unix that most of the local
      businesses and techies speak the same language. I hear that life is
      somewhat harder in other parts of the country. Many ISP's that run
      Unix or Linux on their own servers (over 70% run some form of Unix
      on most of their customer server systems) will refuse to support
      its use by their customers.
     
     The only reasonable response to that is "vote with your feet."
     There are plenty of ISP's out there, pick one that meets your needs
     rather than dictates your software choices.
     
     The whole point to standardized protocols (particularly networking
     protocols) is to allow customers and users choice (FREEDOM) in
     selecting their clients and their servers. That's what the
     client/server paradigm is all about!
     
     The best resource I ever found for comparison shopping of ISP's is
     at:
     
     http://thelist.iworld.com
     
     ... (which I guess used to be run by Boardwatch Magazine, which is
     now owned by Mecklermedia).
     
     Oddly enough a2i Communications (operated by Rahul Dhesi) is not on
     this list.
     
     Hope that helps.
            ____________________________________________________
   
(?) Eight Character login Name Limit

   From CHOSICA on Fri, 04 Dec 1998 
   
   (?) My name is Felix. I am new using linux. I just saw your web pages
   after making a search on altavista. I have set up my mail server on
   Linux 2.0.29 and I am only able to create accounts with a maximun of 8
   character. I was trying to create an account call webmaster@myname.com
   and it does not make it . The server only creates accounts with 8
   character or less than 8 characters. Do you know a way to increase the
   characters, so I can create account with 9 or 10 characters. There
   should be a way I do not know how? If you can help me I would really
   appreciate. Thanks in advance. 
   
   Felix 
   
     (!) This is a common limitation in many versions of Unix. It is
     determined by the libraries (primarily 'libc' the set of libraries
     that are compiled into virtually all Unix programs).
     
     Using glibc 2.x (a.k.a. Linux libc 6) it is possible to create
     longer login names (up to 31 characters). So, you could just
     install a newer copy of Red Hat, Mandrake, Debian or any other
     glibc based Linux distribution.
     
     However, you should consider the issue carefully before using this
     feature. You'll want to ensure that all of your binaries are able
     to cope with the longer login names. Also if there's any chance
     that you'll want or have to share account information across
     multiple versions of Unix it's a bad idea to take this chance. (I
     think that newer versions of Solaris and HP-UX support longer login
     names as well. I don't know about AIX, SCO ODT, or any others).
     
      I'd suggest using the name 'webman' or 'www' for your "webmaster"
      or "web manager" account. You can easily configure your mail
      system to route mail addressed to "webmaster" to 'webman' (just
      use an alias), and you can even configure your 'sendmail' to
      rewrite outgoing mail from 'webman' so that it appears to come
      from "webmaster" (that would be the generics, virtuser, or userdb
      FEATURE()s in your sendmail .mc file).
     
     So, if the only reason you want the long name is for e-mail
     addressing --- just use a short name and let the MTA (mail
     transport agent) do the work.
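      The mechanics are simple. Here's a sketch of the two pieces,
      assuming an /etc/aliases-style alias file and a
      genericstable-capable sendmail (the domain "myname.com" is just
      the querent's placeholder):

```
## /etc/aliases --- route incoming "webmaster" mail to the short
## account (run 'newaliases' after editing):
webmaster:      webman

## In the sendmail .mc file --- enable rewriting of outgoing sender
## addresses via a generics table (you also need to tell sendmail
## which domains are subject to generics rewriting; see your cf/README):
FEATURE(`genericstable', `hash -o /etc/mail/genericstable')dnl

## /etc/mail/genericstable --- rebuild after editing with:
##   makemap hash /etc/mail/genericstable < /etc/mail/genericstable
webman          webmaster@myname.com
```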
            ____________________________________________________
   
(?) Locked Out of His Mailserver

   From Henry A. Lee on Fri, 04 Dec 1998 
   
   (?) I am having trouble logging into my Linux mailserver, as any of my
   users or as ROOT. All passwords are incorrect. I had to bring all my
   users up on WinNT / Exchange box yesterday to get the email rolling
   again. Do you know of ANY way to hack the box? 
   
   I have about 15 hours of mail that I need to get off the box, and
   without being able to login, I can't forward it to the new server. 
   
   I can't log in at the server itself and can't telnet into it, but I
   can FTP SOME files from it and can maybe get some files back to it.
   Looking at the PASSWD and PASSWD- files in a text editor, they seem
   fine. Any suggestions would be immensely appreciated. 
   
   Thanks for your time,
   Henry 
   
     (!) I don't know what's caused your inability to log in. It sounds
     like your /etc/passwd file might have been converted to shadow
     format ('pwconv' or similar utility) while your authenticating
     utilities and services aren't shadow capable. However that is only
      one of several possibilities (the passwd file could be corrupt,
      its permissions could be wrong, you might have missing or corrupt
      PAM modules, etc).
     
     [ I've seen corrupted shadow-passwd files prevent logins before; in
     both cases, there was the wrong number of colons (:) on a line, and
     everyone after that couldn't get in. If you managed to break the
     first line, that would prevent root getting in. -- Heather ] 
     
      As for fixing the problem, or "hacking the box" as you put it: if
      you have physical access to the system it is trivial to "hack
      into" it. Normally this can be done by using [Ctrl]+[Alt]+[Del]
      (the PC "nerve pinch" or "three finger salute") to reboot the
      system (most Linux systems have an entry in their /etc/inittab
      that looks something like:
     
     # what to do when CTRL-ALT-DEL is pressed
     ca::ctrlaltdel:/sbin/shutdown -r -t 4 now
     
     ... which allows the 'init' process (the grandfather of all
     processes) to respond to this console event.
     
     Failing that you can wait for a bit while there is minimal disk
     activity and reset or power cycle the system.
     
      As you reboot, wait until the LILO boot prompt is displayed and
      type in a command like:
     
     linux init=/bin/sh
     
     ... (assuming that you have a boot stanza named "linux" --- hit the
     [Tab] key at that prompt for a list of those).
     
     This passes a parameter to the kernel which forces it to use an
     alternative to the 'init' program (a copy of the shell in this
     case). From there you might need to mount the /usr filesystem
     (assuming that the system follows professional conventions rather
     than common Linux installation defaults). Then you can issue the
     '/usr/bin/passwd' command to set a new root password.
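      Pulled together, the whole rescue sequence looks something like
      this (a sketch only; the stanza name, device layout, and whether
      /usr is a separate filesystem all depend on your installation):

```
# At the LILO prompt:
linux init=/bin/sh

# At the resulting bare shell: the kernel mounts root read-only,
# so remount it read-write, then mount /usr if it's separate:
mount -o remount,rw /
mount /usr

# Set a new root password:
/usr/bin/passwd root

# Flush buffers and remount read-only before hitting reset:
sync
mount -o remount,ro /
```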
     
      If that doesn't solve the problem you can edit the passwd file;
      if necessary remove everything but the entry for root --- but
      don't put any comments or blank lines in this file! (Obviously
      you should save a copy first if you're going to try that).
     
     If that still doesn't work, and if there are no clues in your logs
     (look at /var/log/messages for hints), then you have some other
     troubleshooting to do.
     
      At that point it might be best to just call a consultant for some
      voice support. You don't provide enough information for me to
      explain the next troubleshooting steps without writing a whole
      book (and I'm already working on one).
     
     I can do phone support or you can look for anyone in the
     Consultants HOWTO. (Considering that you have data on this system
     that you don't want to lose, and that it sounds like you don't have
     any backups, I wouldn't suggest too much experimentation and
     learning curve climbing while trying to recover from this
     situation).
     
     If you have another Linux or Unix system anywhere else on your
     network --- one with 'sendmail' properly installed (assuming that
     the affected system was also running 'sendmail') it's possible to
     copy all of the files from /var/spool/mqueue to some arbitrary
     directory on the working system (from the ailing one, obviously).
     Then you can run a command like:
     
     /usr/lib/sendmail -v -q -O QueueDirectory=/tmp/q
     
      ... to tell sendmail to verbosely (-v) make a processing pass
      through the queue (-q) with the option (-O) to override the
      QueueDirectory, setting it to some place like /tmp/q (or wherever
      you ftp'd those df and qf files).
     
      As for the user mail that's already been delivered to "mbox"
      files under /var/spool/mail, you can copy those to another system
      and append them to the corresponding files under /var/spool/mail
      on the new system. To avoid possible corruption you'd want to
      disable the sendmail and popd (etc) processing on the new system
      before trying this.
     
     The easiest way to do that is to shut the system down to single
     user mode after you've copied (ftp'd) all of the mbox files (inbox
     folders) to the system.
     
     Naturally you'll need to create user accounts that correspond to
     each of these users from the old system, and you'll need to ensure
     that the ownership and permissions of each mbox file are set
     properly.
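      Since an mbox is just plain text with each message introduced by
      a "From " line, "appending" really is simple concatenation. A
      self-contained sketch, using temporary files in place of the real
      /var/spool/mail entries (the addresses and dates are made up):

```shell
# Merge an "old" inbox into a "new" one; the mktemp files stand in for
# /var/spool/mail/<user>. On real servers, do this in single-user mode
# with sendmail/popd stopped, and chown the result to the user.
set -e
old=$(mktemp)   # mbox copied (ftp'd) from the ailing server
new=$(mktemp)   # the user's inbox on the new server
printf 'From alice@example Thu Dec  3 10:00:00 1998\n\nold message\n\n' > "$old"
printf 'From alice@example Fri Dec  4 09:00:00 1998\n\nnew message\n\n' > "$new"

cat "$old" >> "$new"      # appending merges the two mailboxes
chmod 600 "$new"          # an inbox should be private to its owner
grep -c '^From ' "$new"   # count of messages in the merged inbox
```

      On the real system the append would be followed by a chown of the
      spool file to its owner, which the sketch skips since it needs
      root.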
     
      There are other ways to do this. However they depend on the
      situation and/or involve some more complicated command lines than
      I'd want you to try without a thorough understanding of how they
      work.
     
     In the 'procmail' man pages there is an example of a script to
     "postprocess" an mbox. It would be possible to use something like
     that to "break apart" each mbox file and resend it to the original
     recipient.
     
     If your users were using MH, 'elm' or 'pine' (or most any
     Unix/Linux mail reading package) they could copy an mbox file to
     any convenient place and either treat it as a folder ('elm -f') or
     "incorporate" it into their MH folders using the 'inc' command.
     These users should either know how to do that, or read the man
     pages for their favorite mail user agent for details.
     
     If you do hire a consultant, look for one that will provide you
     with some good tutorial/mentorship on Linux and consider having him
     or her help you prepare a comprehensive "Recovery Plan and Disaster
     Procedures" package. This will be vital to your company's IS/IT
     regardless of what OS or platform you choose for your future needs.
     
     My phone number can be found on my web pages:
     
   Starshine Technical Services
          http://www.starshine.org
          
     ... I normally don't advertise my consulting services in this
     column, and I don't plan to do so often. However, there are
     situations where the most prudent advice I can give is: "Call
     someone to walk you through this."
     
     As I say, you are encouraged to find a Linux consultant that is
     local to you. Look in the Consultant's HOWTO at:
     
     http://metalab.unc.edu/LDP/HOWTO/Consultants-HOWTO.html
     
     ... You can also find a wealth of help at any Linux Users Group
     (LUG) and there are a couple of "Lists of LUG's" that I've listed
     in previous columns. There's even a Users Group HOWTO at:
     
     http://metalab.unc.edu/LDP/HOWTO/User-Group-HOWTO.html
     
     ... which includes links to the three biggest lists of LUG's.
     
     I wish I could say: "Look for the union label" when considering
     entrusting your system's integrity to a consultant or volunteer.
      However, there is no widely recognized certification for
      sysadmins yet. There isn't even a "better business bureau" of
      sysadmins and/or consultants. As a member of SAGE (the SysAdmin's
      Guild) I'm involved in an ongoing effort to provide some such
      process. However it's a contentious issue, and Unix sysadmins are
      a contentious lot(*). I'll be continuing this work while I'm in
      Boston next week
     at the annual LISA conference.
     
     * (Certainly your chances of getting a competent and experienced
       sysadmin are better if you find someone who went to the effort to
       join SAGE, or at least has cogent reasons for not doing so; and
       they are drastically diminished if you're talking about someone
       who's never heard of USENIX or SAGE).
       
     Good luck.
            ____________________________________________________
   
(?) Changing the X Server's Default Color Depth

   From Peter Waltman on Wed, 02 Dec 1998 
   
   (?) I'm using redhat v.5.1 and have just installed it, so I have not
   made too many modifications yet. The default window manager rh 5.1
   uses is fvwm2. I have been trying to figure out how to configure these
   window managers (fvwm and fvwm2) for some time now, when I realized I
   guess that rh 5.1 is using FvwmM4 to parse the rc files. I've looked
   through those, as well as the FvwmM4 man page to figure out how to
   change the color depth. I think it has to do with the Color PANEL
   setting or the RGB_PIXELS setting, but I'm not sure where or how to
   set it. In the XF86Config file? One of the fvwm2rc.* files provided
   by rh? The FvwmM4 man page says that you can define these settings,
   but I have tried to do this without much success. Any help or links
   to info on how to modify the rh window manager would be GREATLY
   appreciated. 
   
     (!) Window Managers have nothing to do with setting your X server's
     color depth. A window manager is an X client --- it talks to the
      server. By the time any clients are being loaded and issuing X
      protocol requests of the server (to draw windows on your display,
      or receive mouse and keyboard events, for example) it is too late
      to change the color depth.
     
      You are correct regarding M4. Some window managers use 'cpp' or
      'm4' (macro preprocessor utilities) to expand your configuration
      files into their internal configuration language.
     
     I pointed out in my other response that you can change this setting
     in your XF86Config file. In my discussion of modifying the xdm
     Xservers file I forgot to mention that any error can cause your
     system to appear hung. (You might have to log in via telnet or a
     serial terminal to kill the X server if you make a syntactical
     mistake in that file).
     
     As for broader advice on X Windows configuration, read the XFree86
     FAQ (as I listed in my other response) and browse through some
     resources that are devoted to X. Some very extensive link lists are
     at:
     
   Kenton Lee's:
          Technical X Window System and Motif WWW Sites
          http://www.rahul.net/kenton/xsites.html
          
     ... and one of my favorites listed there is:
     
   Brandon Harris':
          X: End of Story
          http://www.gaijin.com/X
                        ____________________________
   
(?) Changing color depth for xdm?

   From Peter Waltman on Wed, 02 Dec 1998 
   
   (?) I just checked out the 2 cent tips, which have a page describing
   how to change and set up multiple X servers for differing color
   depths. The only thing is that this describes how to change the
   startx script, whereas I am using xdm when I boot up. I don't think
   modifying the startx script would have any effect for xdm. Am I
   wrong in this? If not, how/what would I modify to change the color
   depth for xdm? 
   
     (!) Add the following entry to the active "Screen" section of your
     XF86Config file:
     
     DefaultColorDepth XX
     
     ... where XX is the desired depth (8, 16, 24, or 32).
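      In context, the entry goes inside the active Screen section,
      something like this (the driver, device, monitor, and mode names
      here are examples; use whatever your existing XF86Config already
      contains):

```
Section "Screen"
    Driver            "svga"
    Device            "My Video Card"
    Monitor           "My Monitor"
    DefaultColorDepth 16
    Subsection "Display"
        Depth         16
        Modes         "1024x768" "800x600"
    EndSubsection
EndSection
```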
     
     Another way to do this is to edit the
     '/usr/X11R6/lib/X11/xdm/Xservers' file and add the -bpp parameter
     to the :0 (and possibly any :1 and other similar lines) therein.
     
      xdm reads the 'Xservers' file to find the command line with which
      it can invoke an X server. There should be a line something like:
     
                :0 local /usr/X11R6/bin/X :0 vt07 -quiet

     ... in there. You can change that to something like:
     
                :0 local /usr/X11R6/bin/X :0 vt07 -quiet -bpp 16

   (?) again, thank you very much
   Peter Waltman 
   
     (!) That should do the trick. Oddly enough this is not in the FAQ
     at http://www.XFree86.org, though I've copied the maintainer of
     that document since I've seen the question several times.
     
     Hopefully he'll add it. Meanwhile, remember to check in the XFree86
     FAQ for questions about that package.
            ____________________________________________________
   
(?) NumLock and X Problems

   From Alan Shutko on Thu, 26 Nov 1998 
   
     More Expansions and Corrections:
     
   (?) Re Victor J. McCoy message on 11 Oct 1998, here's a possible
   explanation. 
   
   It seems that Victor is using an X button to start up PPP. And the
   button (and lots of other things) don't work when the numlock key is
   on. That's because somewhere along the line (X11R6 I think), the
   handling of numlock changed from a server-handled thing to acting as a
   modifier. 
   
   Many programs which don't handle this new modifier will fail to
   display menus, let buttons work, etc, when numlock is on. Certain key
   bindings won't work. The solution is to turn off numlock. If that
   doesn't work, it's a different problem. 
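    If toggling NumLock by hand gets tiresome, you can detach it from
    the modifier map entirely so clients stop treating it as a
    modifier. This assumes Num_Lock is bound to mod2, which is typical
    but worth checking first:

```
xmodmap -pm                           # list current modifier bindings
xmodmap -e 'remove mod2 = Num_Lock'   # stop NumLock acting as a modifier
```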
   
   Alan Shutko 
   
     (!) I'll let this speak for itself. Maybe the new XFree86 3.3.3
     fixes this.
            ____________________________________________________
   
(?) Expansion on NE-2000 Cards: Some PCI models "okay"

   From Kenneth.Scharf on Thu, 26 Nov 1998 
   
   (?) In general your answer is correct. There is however a breed of
   PCI NE-2000 cards based on the Real-Tek chip that do work fine under
   Linux. I bought two of these cards for less than $15. They came with
   drivers for Windows (3.1, 95), OS/2, even SCO Unix. I tried to get
   these cards to work under Windows 95 and failed! Both an early
   version of Win95 and OSR2B failed to work with these cards. The
   Linux ne2k driver (both the old ISA driver and the new PCI-specific
   driver) work very well with these cards and required no special
   parameters.
   They autodetected just fine. I did have to re-set the bios in my
   computer to perform a fresh pnp cycle in order to get the interrupts
   correct, but after a single re-boot all was well forever. My computer
   is a K6-233 on an Intel TX (triton2) chipset based motherboard (made
   by AZZA). 
   
   I agree that there are much better ethernet cards than these Real-Tek
   ne2000 el-cheapos, but they work fine in my home lan with one linux
   machine and two windows machines. (the windows machines have 3c509 isa
   cards in them). The network is thin-net coax, and is used to share the
   internet connection with the modem on the linux machine. It will also
   provide shared printer service and file backup. 
   
     (!) It sounds like damning with faint praise here. I wish the DEC
     Tulip chipset was staying in production --- since the $29(US)
     Netgear cards using those are rock solid 10/100Mbps PCI adapters.
     They are my favorite and I only have a couple left unopened.
     
      The newer Netgear cards (same model) seem to be "okay" as well
      (actually much better than these Real-Teks that you're talking
      about).
     
     [ What I miss about those marvelous DEC Tulip chips is that the
     drivers just plain work - both in Linux, and in Windows... because
     there is only one MS Windows driver for them! With some other "plug
     and play" cards there are several drivers available, and if you
     pick the wrong one, your net is flaky or worse. But, enough said
     about Brand X for now. -- Heather ] 
            ____________________________________________________
   
(?) Finding info on MySqL?

   From Minh La on Thu, 26 Nov 1998 
   
   (?) Where can I get more info on MySqL? Thanks. 
   
     (!) I guess the publisher is from Sweden:
     
   MySQL by T.c.X. DataKonsultAB
          http://www.tcx.se
            ____________________________________________________
   
(?) Spying: (AOL Instant Messenger or ICQ): No Joy!

   From ONeillDD on Thu, 26 Nov 1998 
   
   (?) hey I really need to know if you could tell me how I could go
   about reading instant messages from and by other people. If you know
   what I mean some people call it spying. This is very important so if
   you please contact me A.S.A.P. 
   
   THANK YOU 
   
      (!) I presume this question was about AOL or ICQ messaging. I
      know nothing about these protocols, though I suspect that a
      straight network sniffer strategically placed on some
      multiple-access medium (ethernet) shared with the person on whom
      you are trying to spy would do the trick.
     
     Sorry to take so long on the response. I've been pretty busy so
     Turkey Day has been my first chance to really clean out my inbox in
     a few months.
            ____________________________________________________
   
(?) Fraser Valley LUG's Monitor DB

   From Ruth Milne on Thu, 26 Nov 1998 
   
     This is more of a "2-cent Tips" entry but here's a reader comment:
     
   (?) The Fraser Valley LUG at http://www.netmaster.ca/LUG has a
   monitor database that accepts a monitor make and model and spits out
   a working descriptor text for X setup. 
   
   Dave Stevens 
            ____________________________________________________
   
(?) ext2fs "Undeletable" Attribute

   From J.S. Moore on Wed, 26 Aug 1998 
   
     (An old AG that I never answered)
     
   (?) Hello.
   The man pages indicate that a file with the u option set is
   undeletable, but they don't say how. 
   
   Any ideas? J.S. Moore
   
     (!) I think this bit was reserved for future use and that it
     would/will require a userspace program (or use of an API by
     programs like 'mc' and other file managers) to actually browse and
     recover "deleted" files.
     
     I think the current feeling in the development community is to
     implement a new filesystem or some new extensions to ext2 that
     would allow full versioning support. However, I don't know the real
     skinny on it.
            ____________________________________________________
   
(?) How to Install Linux on an RS6000?

   From ESPEJEL GOMEZ ERIKA PAOLA on Thu, 26 Nov 1998
   
   (?) My question is: how do I install Linux on a workstation
   (RS6000)? Thank you for your help.
   
     (!) Newer RS/6000's are built around PowerPC CPU's. I've heard of
     some people running LinuxPPC and MkLinux on some RS/6000 systems,
     but I'm not sure that there's enough support (device drivers, etc)
     to make this more than a curiosity.
     
     The place to start looking for answers to these questions would be
     at the LinuxPPC and MkLinux web sites:
     
   LinuxPPC: Linux for PowerPC Systems
          http://www.linuxppc.org
          
   MkLinux: Mach Microkernel with a Linux Server/Personality
          http://www.mklinux.apple.com
          
      ... There's recently been a lot more activity on the Linux-PPC
      mailing lists, so I know that active development is going on. In
     fact they have recently released BootX which is a package for MacOS
     that allows one to boot LinuxPPC without adjusting the OpenFirmware
     settings on your system. This is akin to LOADLIN.EXE for MS-DOS,
     but more important since Apple and the MacOS clone manufacturers
     didn't quite "get it" when it comes to implementing
     OpenFirmware/OpenBoot support. (Many models of PowerMac and their
     clones don't support manual console operation of the OF command
     prompt and many options don't seem to be supported or documented).
     
     When I talked to a couple of IBM researchers at ONE ISPCon a few
     months ago one of them expressed some interest in porting mkLinux
     or LinuxPPC in house, and having some of his team contribute some
     drivers to it. So, this may get some support "from the source" at
     some point.
     
     However, for now, it would be a hacker's project. It's not suitable
     for immediate production deployment from what I've heard.
            ____________________________________________________
   
(?) Advanced Printer Support: 800x600 dpi + 11x17" Paper

   From Karl Raffelsieper on Thu, 26 Nov 1998
   
   >I am running Caldera 1.3 on my small networked P75 with 40MB Ram and
   >Several SCSI drives and Scanner, attached to the Parallel port is my
   >Xerox 4520 PostScript printer. I wish to have the P75 act as a print
    >server to the other PCs (running S.u.S.E. 5.1). This all works
    >fine. Here's the problem.
   
   Is this a real PostScript printer, with a PostScript interpreter and a
   full CPU built into it?
   
    Is your print server passing the raw print jobs to the printer or
    is it passing them through its own 'gs' (ghostscript), aps,
    nenscript, or other filters?
   _______
   
    Yes, this is a genuine Adobe Level 2 PostScript printer with 20
    Megs of RAM built in -- a RISC processor, 24 page/min screamer of a
    network printer (less the network card) -- and you can drop raw PS
    data to it without ghostscript or other filters. (This is Xerox's
    answer to the HP 5si.)
   
    This is where I am having my trouble. The driver installed is a
    generic PostScript driver, and it does not seem to make all the
    printer's capabilities available, even locally on the server. How
    can I make configuration modifications to all workstations so that
    Star Office 5.0 (as an example) is aware of the printer's paper
    sizes? My limited understanding of the PostScript language is that
    all the page definition, font info, formatting, etc. is actually
    written into the document. Thus, so long as the data is sent raw to
    the server, and the server sends it raw to the PS printer, the
    Adobe chips in the printer will do the rest. But I suspect I must
    report the printer's capabilities to the OS some place. I started
    at /etc/printcap but it wasn't obvious to me where to make the
    changes.
   
     (!) What applications are you running? (In other words, what
     applications are generating the PostScript).
     
     If they only use a subset of the PostScript supported by your
     printer then they have to be updated to generate more advanced
     PostScript. If you are dropping/sending raw PostScript to your
     printer then Linux isn't involved at all. It's between your
     applications and your printer.
     
      If you have something like apsfilter (ASCII/text to PostScript)
      or nenscript ("new enscript") listed in your /etc/printcap entry
      to transform text into PostScript, then that's where you'd need
      to make the changes (though that shouldn't affect pages produced
      through Applixware, StarOffice, xfig, TeX/LaTeX/LyX etc., since
      those are producing their own PostScript or their own .dvi or raw
      printer files).
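      For reference, a filter hooks into /etc/printcap through the 'if'
      (input filter) field; a typical entry is sketched below (the
      device, spool directory, and filter path are placeholders for
      whatever your distribution actually installed):

```
# /etc/printcap --- 'if' names the input filter; raw PostScript
# from applications passes through it on the way to the device
lp|xerox|Xerox 4520:\
        :lp=/dev/lp1:\
        :sd=/var/spool/lpd/lp:\
        :if=/usr/lib/apsfilter/filter/aps:\
        :mx#0:sh:
```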
     
     In the cases where TeX/LaTeX and/or LyX are involved the
     applications generate a .dvi file. This can be converted to
     PostScript using the 'dvips' command, or they can be used directly
     by any of the printer specific dvi drivers (called "dviware" by
     TeXnophiles).
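      With dvips the paper size and resolution are command-line
      options, so something like the following should exercise the
      printer's large-paper support (the file names are hypothetical;
      'ledger' is the conventional name for 11x17" paper in dvips
      configurations):

```
dvips -D 600 -t ledger -o report.ps report.dvi   # 600 dpi, 11x17" paper
lpr -Plp report.ps                               # hand it to the queue
```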
     
     That's why I suggested calling Xerox to ask if they have or know of
     dviware for your printer.
            ____________________________________________________
   
(?) TAG suggestions

   From john on Thu, 26 Nov 1998
   
   (?) I think that the way you are laying out TAG right now makes it a
   little hard to navigate. It would almost be better if you ran them all
   together on one big page, a la $.02 tips. The one word descriptions of
   other solutions at the bottom of each page are also pretty tough to
   figure out. How about an onMouseOver window.status() description for
   each, or something to the same effect? Great job, by the way!
   --
   John
   
     (!) Heather (my wife) does all of the markup.
     
     She's spent many hours, for the last several months refining a
     script that does the bulk of the conversion from e-mail (adjusted
     for the quirks of how I format my responses) to HTML.
     
      However, one of the things that we both refuse to do is to rely
      upon non-standard, browser-dependent features, particularly
      JavaScript.
     
     [ Actually, this is not specifically because I have anything
     against javascript, though the abuse of certain features on the
     open web does annoy me considerably; nor because I don't write
     usable javascript code, for there's certainly a world of tested
     scripts at http://www.developer.com/ to go with the old Gamelon
     archives of Java applets; but rather, because I have no interest in
     making the folks with "modern" browsers lose more memory to a
     feature that they probably won't use.
     
      And the very idea of shipping someone 90+ full titles of
      messages, every time they read one of them, is insane. Don't even
      go there. I'm getting off this soapbox before I scorch it. --
      Heather ]
     
     Originally all I wanted was for the URL's that I embed in my text
      to be wrapped with anchors. However, Heather and Marjorie (my
      editors) like to have the TAG messages split and like to offer
      some navigation between them. Heather doesn't like sites that
      only offer "up, next, previous" options in their page footers, so
      she's implemented the scheme that you're describing.
     
      [ Also, at least one querent begged to be able to go to separate
      messages without having to go back up to the index. Others thanked
     us for switching to an indexed format, as it was much easier to
     read the index alone and decide what messages they wanted to read.
     
     As for the "tough little words"... I thought it would be nicer than
     numbers, which is what my script actually generates. The good thing
     is that they can be figured out at all. They are short so that I
     can format the table at the bottom so it doesn't look lame and cost
     more space than the message bodies. As it is, there's so many this
     time, they're staying numbers. They'll probably go back to words
     next month, but I won't say for sure. -- Heather ]
     
      One problem I used to encounter when TAG was "all one big page"
      was with search engines. I'd get a new question that correlated a
     couple of different concepts (IMAP plus Netscape
     Navigator/Communicator) and I'd get all sorts of spurious hits
     pointing to my own previous TAG articles.
     
     So I'm glad that we don't still smash all my articles into one
     page.
     
     [ However, masochists are encouraged to read 'The Whole Damn
     Thing'... the streamed version of the Linux Gazette. And if I see
     more than this one request, I may link 'The Whole Damn Answer Guy'
     (that is, the version I turn in to our Overseer for inclusion to
     TWDT) as an option off the Answer Guy index. But we're certainly
     not going back to the old format. Too many people like it, and I've
     put too much effort into the scripts I use to convert it, to go
     back. -- Heather ]
     
     However, Heather and Marjorie will see this message (along with
      other LG readers). I leave the details of formatting for
      publication entirely up to them. Indeed, when I first started
      answering these questions I didn't even know that they'd be
      published. (I just offered to take on technical questions that
      were misdirected to the editors.) So, I'll focus on providing
      technical answers and
     commentary.
     
     [ I make a sincere effort to keep the resulting message looking as
     close as HTML allows to what the email looks like. When you only
     see it on the web, it could be hard to recall that it was a plain
     slice of mail. I feel it's important to keep that feeling. Real
     people use this software, real people have ordinary problems with
     it, and real people give a shot at answering them.
     
      Which is the last nail in the coffin of using browser-specific
      features... real people aren't going to change browsers just to
     read a webazine, and they're not gonna be happy if it crashes their
     browser because someone went a bit overboard on the HTML.
     
     So, I've kept changes minimal. I did all the graphics you see here,
     but except for color, and the split messages, I feel it's still
     pretty close to the original effort. (The astute reader, or
     especially the reader without color support, will note that I use
     EM and STRONG to support color usage, so the color is gratuitous,
     but does make for more comfortable reading if you have it and
     there's a lot of quoting.) You can look at the older Gazettes if
     you'd like to see what they used to look like... I think they look
     a lot better, but I'm biased ;) Still, if Jim keeps getting
     messages about the formatting that I'm really responsible for, I'm
     gonna have to draw my own speak bubble. I still have the blank
     bubble so it'll be easy. Gimp is cool, when it doesn't crash. Maybe
     some month when the load isn't too high I'll write an article about
     the script and how I did the gifs. -- Heather ]
     
      (Personally, when I'm browsing through a series of related pages
      I prefer to bounce back up to the upper/dispatch page and then
      down to the next. This keeps my current "depth" a bit shorter
      when I want to back out of my browser completely. Since I get
      interrupted and sidetracked frequently while browsing, I like to
      make sure that I'm "done" with each page that's still on the
      "stack" by backing completely out to the "first document".)
            ____________________________________________________
   
(?) CGI Driven Password Changes

   From Terry Singleton on Sat, 05 Dec 1998
   
   (?) Hi there,
   
   We recently installed a Linux box that runs sendmail 8.9.1; we need
   some way for a user to be able to change their own password. Most
   ISPs have an HTML form that allows them to do this.
   
   I know this can be done with CGI and Perl, question is does anyone
   have anything or know of anywhere I can find something that will do
   the trick..
   
   I just bought a perl/cgi so I am working in that direction, we need
   something asap though. I would even pay for something.
   
   Regards,
   Terry Singleton
   Network Analyst
   
     (!) I once wrote a prototype for such a CGI script. It wasn't fancy
     but it used the following basic method:
     
     The form has the following fields:
     
      userid (login name):
      current/old password:
      (repeated):
      new password:
      (repeated):
     
     ... and the script does the following:
     
     * Check the consistency between the current password and the repeat
       (and issue a retry screen if that fails).
      * Start an expect (or Perl comm.pl) script that:
           + telnets to localhost
           + waits for a "login:" prompt
           + sends the userid
           + waits for a "password:" prompt
           + sends the current password
           + waits for one of:
                o a shell prompt (sends the passwd command)
                o the passwd prompt (if the user's shell is set to
                  /usr/bin/passwd).
                o a "login incorrect" message (aborts and returns an
                  HTML error form).
     * if the process gets to .../bin/passwd's prompt:
           + sends the old password
           + waits for the new password prompt
           + sends the new password
           + waits for the repeat prompt
           + sends the new password again
           + waits for the O.K. message
           + returns an HTML success page.
       
     So mostly it's a matter of writing the expect or comm.pl script.
     
     Unfortunately I don't have the real script handy. It looked
     something like:
     
#!/usr/bin/expect -f
## by Jim Dennis (jimd@starshine.org)
## This should check a username/password
## pair by opening a telnet to localhost
## and trying to use that to login
## -- you might have to adjust the last
## expect block to account for your
## system shell prompts, and error messages

## It returns 0 on success and various non-zero
## values for various modes of failure

set timeout 5
log_user 0

gets stdin name
gets stdin pw

spawn "/usr/bin/telnet" "localhost"

expect {

-- "ogin: $"    { send -- "$name\r" }

timeout         { send -- "\r\r" }

eof             { exit 253  }
}


expect {

"ssword: $"     { send -- "$pw\r" }
}

expect {

"ast login: "    { exit 0   }
"(\\\$|%)"       { exit 0   }
"ogin incorrect" { exit 1   }
timeout          { exit 254 }
eof              { exit 253 }
}

     ... so you'd replace the "exit 0" clauses with something like the
     following to have it change the password instead of merely checking
     the password as the example above does.
     
set password [lindex $argv 1]
send "/bin/passwd\r"
expect "password:"
send "$password\r"
expect "password:"
send "$password\r"

      ... this assumes that you got to a shell prompt. If you use the old
      trick of setting the user's login shell to /bin/passwd then you'd
      add another expect clause to the original script to respond to the
      prompt for "Old password" --- which you'd get in lieu of a shell
      prompt.
     
     Obviously in that case you wouldn't be "send"-ing the /bin/passwd
     command to the shell prompt as I've done in the second line of this
     second code example.
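
      For completeness, the CGI side mostly just has to perform the
      consistency check and feed the userid/password pair to the expect
      script's stdin. A minimal sketch in shell (the script path is made
      up, the form-decoding step is omitted, and a real version must
      strip shell and telnet metacharacters before passing anything
      along):

```shell
#!/bin/sh
# Hypothetical CGI-side glue for an expect script like the one above.
# Only the two bits of real logic are shown; input decoding, sanitizing,
# and running under a non-root UID are deliberately left out.

# check_fields VALUE REPEAT -- the "retry screen" consistency test:
# non-empty, and the repeated entry matches.
check_fields() {
    [ -n "$1" ] && [ "$1" = "$2" ]
}

# Feed the userid and password to the expect script on its stdin
# (the path /usr/local/lib/chkpass.exp is an invented example).
run_checker() {
    printf '%s\n%s\n' "$1" "$2" | /usr/local/lib/chkpass.exp
}
```

      check_fields implements the "retry screen" test from the outline
      above; everything security-critical is intentionally not shown.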
     
     There's a package that purports to do this at:
     
   Linux Admin CGI Package Docu (English)
          http://www.daemon.de/doc_en.html
          
     ... so you can try that.
     
     You can also look at the Linux-admin mailing list archives where
     I'm sure I've seen Glynn Clements point people to some utility he
     wrote (I think I've seen this about a dozen times).
     
     A quick trip to the Linux-Admin FAQ
     (http://www.kalug.lug.net/linux-admin-FAQ) led me to a list of list
      archives, which led me to one with search features. Searching on
     "web password change" got me to a message that refers to:
     
     ftp://win.co.nz/web-pwd
     
     ... I'm sure there are others out there.
            ____________________________________________________
   
(?) ifconfig reports TX errors on v2.1.x kernels

   From Peter Bruley on Tue, 15 Dec 1998
   
   (?) Answer Guy:
   
   I have tried various 2.1.x kernels every once in a while to see how
   the new version is coming along. What I have noticed is errors being
   reported by "ifconfig" on the TX side only (both ppp & eth). Do you
   know why?
   
   TX Error
   
     (!) That's a good question. On the ethernet, I'd expect that most
     of them would be due to frame collisions. Basically they'd happen
      whenever any two cards on your segment tried to send data frames at
     close to the same time. On the PPP link I'd expect them to be due
     to line noise.
     
     However, I'm not sure and I don't know why you wouldn't see any RX
     errors. Are you saying that you only see these under the 2.1.xxx
     kernels? I can assure you that some errors are perfectly normal
     (under any kernel). Too many may indicate a flaky card (yours, or
     any other on your network segment), bad cabling (thinnet/coax is
     particularly bad --- also using cat 3 UTP and/or running any sort
      of cable too close to fluorescent light ballasts and other sorts of
     transformers and "noisy" RF generating equipment).
     
     On one of my systems (a 486 router, two 3c509 ISA ethernet cards,
     each on relatively short quiet cat 5 UTP segments, running 2.0.36)
     I have 0 errors in both the TX and RX segments out of about 200,000
     packets routed. This is over an uptime of about 20 days. I picked
      that system's uptime and stats more or less at random (I'm using
      its console as a telnet/terminal to get to my 'screen' session as
     I type this).
     
      On another system (a 386DX33 with one 3c509 adapter, running 2.0.30)
     I see six million packets received and 26 thousand RX errors (no TX
     errors out of about 3 million packets transmitted). That's been up
     for 71 days.
     
      I suppose we could commission a study to see if different ethernet
     cards, kernels and other factors produce wildly different
     statistics. But that sounds too much like a graduate project.
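
      If you want to watch these counters without eyeballing 'ifconfig'
      output, the kernel exposes the same numbers in /proc/net/dev. Here
      is a small sketch that assumes the 2.0/2.1-era column layout (no
      byte counters); adjust the field numbers if your kernel's header
      line differs:

```shell
# Summarize per-interface RX/TX error counts.  Assumes the 2.0-era
# /proc/net/dev layout:
#   iface: rxpkts rxerrs rxdrop rxfifo rxframe txpkts txerrs ...
# Pass a saved snapshot as an argument, or nothing to read the live file.
netdev_errors() {
    awk 'NR > 2 { sub(/:/, " "); printf "%s RXerr=%s TXerr=%s\n", $1, $3, $8 }' \
        "${1:-/proc/net/dev}"
}
```

      Run it before and after a transfer and compare the two outputs to
      see which direction is accumulating errors.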
                        ____________________________
   
(?) 'ifconfig': TX errors

   From Peter Bruley on Fri, 25 Dec 1998
   
   (?) Hi: Jim
   
   Thanks for your reply, sorry I'm slow getting back.
   
   Here is a printout of my "ifconfig" after about 5 min. on the ppp
   connection to my ISP:
   
 lo    Link encap:Local Loopback
       inet addr:127.0.0.1  Bcast:0.0.0.0  Mask:255.0.0.0
       UP LOOPBACK RUNNING  MTU:3924  Metric:1
       RX packets:166 errors:0 dropped:0 overruns:0 frame:0
       TX packets:0 errors:24679 dropped:166 overruns:0 carrier:0 coll:0

 eth0  Link encap:Ethernet  HWaddr 00:40:05:60:71:DD
       inet addr:10.40.150.1  Bcast:10.40.150.255  Mask:255.255.255.0
       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
       RX packets:288 errors:0 dropped:0 overruns:0 frame:0
       TX packets:86 errors:74789 dropped:507 overruns:0 carrier:0 coll:0
       Interrupt:10 Base address:0x7000

 ppp0  Link encap:Point-to-Point Protocol
       inet addr:226.186.100.56  P-t-P:226.186.100.249 Mask:255.255.255.255
       UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:576  Metric:1
       RX packets:156 errors:0 dropped:0 overruns:0 frame:0
       TX packets:0 errors:14836 dropped:135 overruns:0 carrier:0 coll:0

   Here are some of my software versions:
   
Kernel is 2.1.128
libc.so.5 => libc.so.5.4.44
depmod (Linux modutils) 2.1.121
ppp-2.3.5
net-tools 1.432

   Things seem to work properly (all network services) except for some
   utilities that report modem activity, e.g. xmodemlights
   (http://www.netpci.com/~dwtharp/xmodemlights).
   
   Note that my ethernet card is also reporting errors.
   
   Now, assuming that these are real errors: how come when I boot into
   a v2.0.34 kernel all the errors go away (on both ethernet & ppp) and
   my xmodemlights utility works flawlessly?
   
   I have tried the v2.1.(85-131) kernels on approximately 3-4 different
   boxes and I have observed the same problems.
   
   Am I alone on this issue, or do you know of others reporting the
   same problems?
   
   Peter
   
     (!) I don't know if there's any problem here. However, I would
      check the kernel mailing list archives and possibly (after downloading,
     installing and testing the 2.1.132 or later kernel) post a message
     to the kernel developers list to inquire about it.
     
      It might be that the old 2.0.x drivers weren't reporting errors for
     your cards. They may have been buggy. It's also possible that they
     may have been driving your hardware slower, causing fewer errors,
     or fewer detections of errors. Of course it could be bugs in the
     latest drivers which we'd like fixed before we go to 2.2.
     
     So, check with the kernel developers and possibly get onto the
     comp.os.linux.* newsgroups (networking or hardware) with this
     question to poll other users for their results.
     
     [ In the "late breaking news" department, the kernels are starting
     to be called 2.2.pre so now is the time to start trying them out if
     you've been interested but afraid to go for a beta kernel.
     -- Heather ]
            ____________________________________________________
   
(?) Support for Trident Video/Television Adapter

   From Daniel Robidoux on Mon, 14 Dec 1998
   
   (?) Looking for a manual for a Trident 9685 with TV. I'm trying to get
   output to my TV but nothing works. Can you offer any suggestions?
   
     (!) Call Trident?
     
     I answered a question about the Providia 9685 chipset back in issue
     31 --- but that had no mention of a TV tuner.
     
     There is a "video4linux" project that supports at least the BTTV
      (Hauppauge et al) chipsets. I've never used it but you can feel free
     to hit Yahoo! and browse through the 2200 hits that you'll get with
     a search string like:
     
     "video4linux 9685"
     
     ... see what that nets you.
            ____________________________________________________
   
(?) Plug and Pray Problems

   From Tony Grant on Mon, 14 Dec 1998
   
   (?) Hi,
   
   Problem: USR Sportster ISDN TA will not work on AMD K-6/II machine.
   
   Solution: Force D-Link Ethernet card to use IRQ 5 and ioports in the
   0300 - 031f range so as not to interfere with the ioports needed by
   the Sportster.
   
   Question: How do I force the Ethernet card to behave? On bootup my
   kernel (2.0.36) tells me that IRQ / io etc are being set by BIOS. I
   want to set them myself.
   
   TIA for pointers to the correct doc.
   
   Cheers
   Tony Grant
   
     (!) My first guess would be that you're encountering a problem with
      some "ISA Plug and Play" adapters. The first option would be to see
      if there's a setting to disable "plug and pray" on one or both of
      these boards, and manually set them (possibly using an MS-DOS or
      Windows utility).
     
     Failing that you can look for a Linux package called 'isapnptools'
     --- I've never used it --- but it seems to have helped a few people
     out there.
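
      From what I've read of the package, the usual 'isapnptools'
      sequence is: dump the settings each PnP card can accept, edit the
      resulting file to pick the IRQ and I/O ranges you want, then
      program the cards from a boot script. Roughly like this (file
      locations vary by distribution, so check its documentation):

```
pnpdump > /etc/isapnp.conf    # record every resource choice each card offers
vi /etc/isapnp.conf           # uncomment the IRQ 5 / 0x300 entries you want
isapnp /etc/isapnp.conf       # program the cards; rerun from a boot script
```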
     
     I've read many messages from people who've resorted to booting DOS,
     running the configuration utilities from there and then loading
     their Linux kernel via LOADLIN.EXE.
     
     This is more of a workaround than a real solution --- but it seems
     to be effective for most and I don't know of any downside for
     normal operation (just that mild distaste that running MS-DOS to
      configure your hardware every time you boot might leave in your
     mouth). Console yourself with the fact that you rarely have to
     reboot Linux.
                        ____________________________
   
(?) Plug and Pray Problems

   From Tony Grant on Mon, 14 Dec 1998
   
   Jim Dennis wrote:
   
   ..snip
   
   This is more of a workaround than a real solution --- but it seems to
   be effective for most and I don't know of any downside for normal
   operation (just that mild distaste that running MS-DOS to configure
   your hardware every time you boot might leave in your mouth). Console
   yourself with the fact that you rarely have to reboot Linux.
   
     (!) Jim,
     
      Thanks for your prompt reply; if only M$ offered such after-sales
      support =;-)
     
      Loadlin looks like a last-resort solution that I will have to turn
      to. I really didn't want to install W$ or DOS on this machine (it
      is a headless server), so booting is no problem: the machine is up
      all of the time, and only SW upgrades will imply rebooting.
     
     Cheers and thanks again
     
     Tony Grant
            ____________________________________________________
   
(?) Sharing/Exporting Linux Directories to Windows '9x/NT

   From markr on Mon, 14 Dec 1998
   
   (?) I have a 5 system LAN at home with 2 linux, 2 win98, and 1 NT
   machine. From any of the 98/NT machines, I can see the linux boxes in
   Network Neighborhood. However, I am unable to connect to shares on the
   linux boxes. I have logged out of windows and back in as both 'root'
   and 'mark', both valid users on the linux systems, but when I try to
   access a share I'm prompted for a password which, although correct, is
   promptly rejected. I can go from Linux to win, and Linux to Linux, but
   I need to be able to go the other way as well....
   
   Any advice?
   
   thanks,
   Mark Rolen
   
     (!) Have you read through the Samba man pages (smbd(8), nmbd(8),
     smb.conf(5), samba(7), smbstatus(1), etc), and the Samba FAQ and
     web site (http://www.samba.org)?
     
     Start there and make sure that you have 'smbd' and 'nmbd' running
      in the correct order, and that you have a valid 'smb.conf'.
     
     The best place to ask this sort of question is the
     comp.protocols.smb newsgroup. This is where the most avid Samba
     users exchange notes and commiserate over the latest MS CIFS
     machinations.
     
     When you ask them a question, be sure to include the simplest
     version of your smb.conf that you've tried and representative
     samples of any relevant syslog messages from /var/log/messages.
     Read their FAQ thoroughly for more details about the sorts of
     information to include in your support queries.
     
     [ Actually, your conf files are probably fine, since you see the
     share announced, and actually get a dialog back... except that
     you're missing one. Win98 and NT use encrypted passwords (or Win95
     since one of the OSR packs) which a new enough version of SaMBa can
     answer, but it needs to be fed the passwords your win boxes will be
     using. Go into the FAQ and search for 'smbpasswd' and you should
     find the rest of the details. -- Heather ]
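
      Heather's suggestion boils down to two pieces: telling Samba to
      use encrypted passwords, and populating the smbpasswd file. A
      sketch, with parameter and command names taken from Samba 2.0-era
      documentation (check your own version's man pages; the file path
      below is a common default, not universal):

```
[global]
   encrypt passwords = yes
   smb passwd file = /etc/smbpasswd
```

      ... and then, for each Windows user, something like 'smbpasswd -a
      mark' (older releases instead seed the file with the
      mksmbpasswd.sh script from the Samba examples).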
            ____________________________________________________
   
(?) Mail processing

   From Juan Cuervo on Mon, 14 Dec 1998
   
   (?) Hello Answerguy, My name is Juan Cuervo and I was wondering if you
   could help me with this:
   
   I need all the incoming mail for my mail server's users to be sent to
   their mailboxes (as usual), but also to be processed by an external
   program (I mean, not an MTA). So, I need to send a copy of the mail to
   this external program if the user has a file in their home
   directory (called, let's say, ~/.myprog) which indicates that the mail
   messages for that user should be parsed by this external program too.
   Thank you for your help.
   Juan Cuervo
   
     (!) You can create an '/etc/procmailrc' and define 'procmail' as
     your local delivery agent. This is the most straightforward way to
     do this. However, it is pretty dangerous (the called program will
     be called as 'root') and it might result in unacceptable levels of
     overhead (depending on your number of users and their mail
     volumes).
     
     In any event the contents of /etc/procmailrc would look something
     like:
     
:0c
| /root/bin/mailfilter/.myprog

      ... to send a copy of each mail item through a program as you
     described.
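
      To honor the per-user ~/.myprog flag file you describe, a procmail
      recipe can run a test command as a condition before delivering the
      copy. A sketch (the program path is invented; see procmailrc(5)
      for the '* ?' condition syntax):

```
# /etc/procmailrc -- copy to the external program only when the
# recipient has opted in by creating ~/.myprog
:0c
* ? test -f $HOME/.myprog
| /usr/local/bin/myprog
```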
     
     Personally, I don't recommend this, as it sounds like several
     disasters just begging to happen. However, you're welcome to
     experiment with this on a test system for a little bit to learn how
     it works.
     
     Many Linux distributions include 'sendmail' configured to use
     'procmail' as their LDA by default. Look for a group of lines in
     your /etc/sendmail.cf that looks like:
     
Mlocal,         P=/usr/bin/procmail, F=lsDFMAw5:/|@qSPfhn9, S=10/30, R=20/40,
                T=DNS/RFC822/X-Unix,
                A=procmail -Y -a $h -d $u

     ... to see if this is the case. If not, either replace the Mlocal
     clause that's in your /etc/sendmail.cf (yuck!), or add an entry
     like:
     
                MAILER(`procmail')dnl

     ... to your ".mc" (M4 configuration file) and regenerate your .cf
     file with the appropriate m4 command like:
     
                m4 < foo.mc > /etc/sendmail.cf

     Note that sendmail involves quite a bit more than this --- so you
     may want to get more detailed advice before trying this on a
     production mail server. There's a 900 page book from O'Reilly
     that's probably the best reference to 'sendmail' available.
     Arguments that we should also switch to 'qmail' or Wietse Venema's
     'PostFix' (formerly known as 'Vmailer') may not help in your case.
            ____________________________________________________
   
(?) Extra Formfeed from Windows '95

   From Jerry W Youngblood on Mon, 14 Dec 1998
   
   (?) Is there a way to suppress the extra formfeed when I print on my
   Linux HP540 printer from Windows95 on the network? Everything prints
   great; however, there is an extra page that always comes out after the
   print job. How do I suppress this?
   
     (!) I think there is a Win '9x control panel setting that can be
     tweaked to prevent this. However, I do as little with Win '9x as I
     can, so I don't know precisely where in that morass of dialogs and
     menus you might find this setting.
     
     (I suppose another option would be to set a special printcap entry
     for your Win '95 system to use, and have that use one of the
      settings for suppressing formfeeds or use a special filter or
     something).
     
     I should warn that I also do as little printing as I can get away
     with.
            ____________________________________________________
   
(?) Can't Login in as Root

   From David Stebbins on Mon, 14 Dec 1998
   
   (?) Hey Jim, After reading the letters to you and your responses I
   feel kind of silly writing to you with my little problem, but here it
   is. I am a very, very new Red Hat Linux 5.2 (Macmillan) user and after
   installing the OS and establishing a user account for myself I have
   not been able to log in as the root user. I type the same exact
   password that I used when I set the system up (as the root user), but
   cannot get back in (...very frustrating). Perhaps you have a solution
   for me? I was logging in as "root" (w/o the " marks) and then just
   entering my password. What am I doing wrong? Thanks, David
   
     (!) Is this at the console?
     
     If not, it's probably just securetty (read the man page in section
     5).
     
     Can you login as a normal user and then use the 'su' command to
     attain 'root' status?
     
     If not then you probably have lost or forgotten the password or
     corrupted your /etc/passwd file. In those cases you can boot from a
     floppy diskette or boot and issue the 'init=/bin/sh' LILO option
     (as I described last month) to get into the system in single user
     mode without requiring any password (requires console access,
     obviously).
     
      Keep in mind that the passwords are case sensitive. You must
      remember which letters you typed in [Shift]-ed mode and in lower
      case. Also, if you look at your /etc/passwd file you shouldn't see
      any blank lines, comments, or "junk" characters (control
      characters, etc). Read the passwd(5) man page on any working
      system to get the details of proper 'passwd' file formatting ---
      or just copy one from your boot floppy and recreate the accounts
      as necessary.
     
     Note, if you create a new passwd file you may create "orphan" files
     in this process, as your new account names might have mismatches to
     the old numeric UID's and GID's under which these files were
     created. The easiest way to fix that on a small system is to look
     at the numeric UID's of the files (any "orphan" file will show up
     with a numeric owner during an 'ls -l' and you can use the command
      'find / -nouser -ls' to list all of them) --- then using your
      personal knowledge of who these files belong to, set their
      /etc/passwd account entries to match those numeric IDs.
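
      The 'find' pass just described can be wrapped up like this; the
      UID 1002 and user 'mark' in the comment are invented examples, and
      the actual chown step requires root:

```shell
# List files owned by a given numeric UID under a directory tree, so
# they can be re-owned after /etc/passwd is rebuilt.  Example (as root):
#   files_owned_by 1002 / | xargs chown mark
# ('find / -nouser -ls' remains the way to discover the orphan UIDs.)
files_owned_by() {
    find "$2" -user "$1" -print 2>/dev/null
}
```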
     
      Unfortunately the full details of all of this are far too
     complicated to describe in detail at this hour (it's 3:00am my
     time, I just got back from Boston, Massachusetts from the
     USENIX/SAGE LISA Conference).
     
     Once you get your system straightened out, make a backup copy of
     your /etc/passwd and /etc/group files --- just mount a floppy and
     copy them to it. That will make restores much easier in the future
      (even if you have a full backup system in place it's often
      handy to restore these files before trying the rest of your restore
      --- some versions of 'tar' and 'cpio', for example, don't restore
      files under numeric UID's and GID's --- they'll "quash" the
      ownership to root.root for all of them!)
     
     If you really get stuck, call my number (800) 938-4078 to leave
      voice mail. It would make more sense to walk you through the
      recovery than to type up every possible recovery technique in this
     message.
            ____________________________________________________
   
(?) Alternative Method for Recovering from Root Password Loss

   From David C. Winters on Mon, 14 Dec 1998
   
   (?) Just discovered the LG, and your column, today. I sent you a
   message a few minutes ago asking a question; here's a submission.
   
   You finish up your "No Echo During Password Entry" answer in your
   Issue #35 column with a method for recovering from losing root's
   password. I've used another method, using LILO.
   
   During boot, when the "LILO boot:" prompt appears, hitting the <TAB>
   key will give you a list of all of the kernels (by label) that LILO
   knows about. On my system, I'd see
   
> LILO boot:
> 2.0.30                2.0.30-orig
> boot:

   ("2.0.30-orig" is the default Red Hat 2.0.30-3 kernel on 4.2; "2.0.30"
   is the label for the kernel I compiled.)
   
   (?) If I append " single" to a kernel label, e.g., "2.0.30 single",
   it'll boot using that kernel but come up in single-user mode. Just
   running 'passwd' will let you change root's password. You then want
   to run 'exit' to continue bringing yourself back up to your normal
   runlevel (3 on my machine).
   
     (!) I'm well aware of this technique. However, using 'init=/bin/sh'
     will work in cases where 'single' won't.
     
     Some systems have their 'single user' mode entries in /etc/inittab
     set to call an 'sulogin' command --- which requires a root
     password. Ooops!
     
     I glossed over the details due to my own time constraints.
     
   (?) Useful, but a large security hole. Unless you secure it, anyone
   sitting down on console can reboot the machine and come up as root. To
   close this hole off, chmod /etc/lilo.conf to 600 (or 660 if it's
   owned root:root) and add the "restricted" and
   "password=<some_password>" lines, like the following example
   /etc/lilo.conf file:
   
     (!) Quite right.
     
   (?)
 boot=/dev/sda
 map=/boot/map
 install=/boot/boot.b
 prompt
 timeout=50
 restricted
 password=AnswerGuy
 image=/boot/vmlinuz
         label=2.0.30
         root=/dev/sda2
         initrd=/boot/initrd
         read-only
 image=/boot/vmlinuz-2.0.30-orig
         label=orig
         root=/dev/sda2
         initrd=/boot/initrd
         read-only

   Run 'lilo', then reboot. Entering "2.0.30 single" will get you to a
   password prompt. When you enter "AnswerGuy", the LILO password won't
   be echoed to the screen, as is normal for entering passwords, and LILO
   will bring you up as root.
   
   This obviously requires remembering yet another password, but it's
   something to look into because, by default, LILO isn't
   password-protected on the Debian or Red Hat distributions I've used.
   
     (!) Also quite right.
     
      The principal problem with this is that it doesn't prevent the user
     from booting from a floppy (such as a Tom's Root/Boot
     (http://www.toms.net/rb) or even just an MS-DOS diskette with a
     disk/hex editor).
     
     Some PC's have the ability to "lock out" the floppy drive and
     protect the CMOS with a password. That helps. However, it isn't
     much help. Many (possibly most) BIOS/CMOS sets have "backdoors"
     such that their support technicians can help customers "get back
     into" their systems. This is a bad idea --- but seems to be pretty
     common. In addition it's possible to open the system case and
     temporarily remove or short (with a resistor) the battery on the
     motherboard, or remove the clock chip (where the CMOS data,
     including the password, is stored).
     
      So, to achieve any semblance of console security you must at least
     do the following:
     
     * Lock the PC in a cabinet, closet or case (or install one or more
       locking bolts in the case.)
     * Verify that the BIOS has no back door (how?) or replace the BIOS
       with a custom one or one that has been audited and verified by
       some trusted party as having no back doors.
     * Disable floppy and CD-ROM boot.
     * Enable CMOS password protection to prevent changes to the boot and
       other CMOS settings.
       
   (?) Debian: Whatever version was current two years ago; we switched to
   RH. Red Hat: 4.2
   
   D.
   
     (!) Thanks for the prompting.
     
     I personally like the design of the Corel Netwinder (StrongARM/RISC
     based "thin clients" or "network computers" with embedded Red Hat
     Linux and KDE), and the Igel "Etherterm/Ethermulation" (PC based X
     Terminal, thin client, and character mode ethernet terminals, with
     custom embedded Linux --- and Java, Netscape and other optional
     tools on solid state disks).
     
   Corel Computing, a division of Corel Software, Inc:
          http://www.corelcomputer.com
          
   Igel USA:
          http://www.igelusa.com
          
     These systems are specifically designed with no support for
      removable media. This makes them much better suited to deployment
     in hostile user environments (such as libraries, kiosks, Internet
     cafes, public access and college computing labs).
     
      It is unfortunate that these systems are currently a bit more
      expensive than similarly powered PC's. Since they are currently
      produced in relatively small volumes for what are still niche
      markets, they command a higher margin and don't benefit from the
      full economies of scale.
     
     However, that's the main reason I don't own any of these.
     
     (Another advantage to these systems, over and above security, is
     that they offer much less power draw and much quieter operation
     than standard PC's with that incessant fan and disk noise).
            ____________________________________________________
   
(?) SCOldies Bragging Rights

   From David C. Winters on Mon, 14 Dec 1998
   
   (?) In your response to Anthony's second message (re: a coworker
   teasing him about SCO's capabilities), you say:
   
   I figured. About the only things the SCOldies can hold over us right
   now are "journal file support" and "Tarantella."
   
   Abject curiosity makes me ask: What are these two capabilities?
   
   D.
   
     (!) "Journaling Filesystems" and "Logging Filesystems" are those
     which store and utilize transaction logs (journals) of file
     operations until those changes are "committed" (synchronized).
     
     Thus a set of small data structures on your filesystems are
     automatically synchronized (like in a "write-through cache") while
     the rest of the fs benefits from normal write caching.
     
     The net effect is that filesystems can be quickly checked and
      repaired after a catastrophic shutdown. In other words, you don't
     have to wait for hours for 'fsck' to finish fixing your filesystems
     after someone kicks the plug on your server (or the power supply
     fails, etc).
     
     This is likely to be added to Linux by version 2.4 or 3.x. Some
      preliminary work has already been done.
     
     Many versions of Unix (such as SCO, Novell/SCO Unixware, and AIX)
      have their own implementations of these features. In addition there
      is a company called Veritas, which sells journaling filesystem
      products for several Unix platforms *
      (http://www.veritas.com/corporate/index.htm).
     
     You can get some similar effect from Linux, at considerable
     performance cost, by selectively mounting your important
     filesystems with the 'sync' option (mount -o sync ....).
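
      For instance, the 'sync' option can also be set persistently from
      /etc/fstab; the device and mount point below are illustrative, not
      from any particular system:

```
/dev/sda3   /var/spool/mail   ext2   defaults,sync   1 2
```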
     
     "Tarantella" is a unique feature of SCO's. It provides Java/Web
     based access to your Linux desktop. The closest match is the VNC
     package (virtual network computer) from the Olivetti/Oracle (Joint)
     Research Laboratory * (http://www.orl.co.uk/vnc/index.html).
     
     VNC is a network windowing system (like X Windows, but more
     "lightweight") which allows you to connect to your systems remotely
     via an MS Windows (Win '95/'98/NT), MacOS, or Java client. VNC also
      allows you to remotely connect to a Win '9x/NT system from its X
     Windows (or other Win/MacOS/Java) clients.
     
     Actually VNC might be close enough in features that SCO couldn't
     get much mileage out of touting it over Linux with VNC. VNC is
     under the GPL.
            ____________________________________________________
   
(?) Application Direct Access to Remote Tape Drive

   From Yves Larochelle on Mon, 14 Dec 1998
   
   (?) Hello,
   
   I have a simple problem. I run a FORTRAN program that makes calls to a
   C library to read data from a DLT SCSI tape drive. Everything is fine
   when I run from a drive on my host machine.
   
     (!) Oddly enough I was just writing in my book about how the tape
     drives under Unix (and Linux) are available for general purpose
     spooling of applications data sets (like the old mainframe spooling
     and job control model) but are rarely used in this fashion. It's
     amusing to see that someone is doing this.
     
   (?) But I haven't been able to open/read the tape drive on a remote
   host. I have read your previous answers on remote tape access:
   
   http://www.ssc.com/lg/issue31/tag_backup.html
   and http://www.ssc.com/lg/issue29/tag_betterbak.html
   
   but it doesn't solve my problem.
   
   I want to use local memory and CPU, so "rsh" is not an option. In my C
   library I have tried to change:
   
>               fd =  open ("/dev/st0", flag, 0);
> ... to:
>               fd =  open ("remotehost:/dev/st0", flag, 0);
> or even:
>               fd =  open ("operator@remotehost:/dev/st0", flag, 0);

   without success:
   
   Open failed on tape No such file or directory
   
     (!) Yep. That's right. Inspection of the code for 'tar' and 'cpio'
      would reveal that these do use an implicit 'rsh' to the remote
     host, and pipe their data streams over TCP connections thereto.
     
     This ability to access devices remotely is not built into the C
     libraries, it is built by your program through the native network
     mechanisms (or at least via judicious use of the 'system()' library
     call).
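
      That pipe-over-'rsh' arrangement can be sketched in a few lines of
      shell. The hostname, device path, and block size below are
      placeholders, and the RSH variable exists only so the pipe can be
      exercised without a live remote host:

```shell
# Stream a local file to a (remote) tape device the way tar and cpio
# arrange it internally: a local dd feeding a remote dd over an rsh pipe.
RSH=${RSH:-rsh}

remote_tape_write() {
    src=$1 host=$2 dev=$3
    dd if="$src" bs=32k 2>/dev/null | $RSH "$host" "dd of=$dev bs=32k 2>/dev/null"
}
# e.g.:  remote_tape_write datafile.bin remotehost /dev/st0
```

      Reading back is the mirror image: 'rsh remotehost dd if=/dev/st0'
      with stdout redirected into the local file.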
     
   (?) I do have /etc/hosts.equiv (and $HOME/.rhosts) set up so I can
   access my account on the remote host without a password.
   
   I have been told to use "rmt", but how do I do it within the C
   library?
   
     (!) I don't know much about 'rmt' but you can pick up the 'dump'
     package in which it is included and read the man page therefrom.
     I'd pick up the sources to that package so you can read some sample
     source code to understand how 'dump' uses it.
     
     (Obviously if you want to actually cut and paste the source code
     for use in your project, you'll want to read and comply with the
     license --- probably GPL. This may be of no consequence if you
     won't be redistributing your program --- and should be no problem
     if you're willing to release your sources under a compatible
     license. It should also be no problem if you read these sources to
     gain an understanding of the APIs and code your own functions.
     However, read the license for yourself).
     
   (?) Can you help me with this one ???
   
     (!) Since I'm not a programmer --- not much directly. However, as
     I've said, you can study examples of this sort of code in the
     'tar', 'cpio', and 'dump' sources.
     
   (?) If at all possible,
   
     (!) ... (e-mail elided)
     
     I always respond via e-mail whenever I respond at all. The mark-up
     to HTML is done at the end of each month by my lovely assistant
     (and wife), Heather.
            ____________________________________________________
   
(?) Mounting multiple CD's

   From ali on Mon, 14 Dec 1998
   
   (?) HI
   
   I've just recently purchased a copy of Red Hat Linux 5.0 and a new
   CD drive (i.e. I now have 2 CD drives) and I need to know how to
   mount them.
   
   The 2 drives are connected to the IDE on my soundblaster AWE-64 sound
   card and I need to know how to mount the drives from there. (I
   previously had one drive and mounted it using /dev/cdrom but now what
   do I use?)
   
     (!) /dev/cdrom is normally a symbolic link (sometimes a hard link)
     to some other device node such as /dev/hdc (first device on the
     second IDE channel) or /dev/scd0 (first CD device on the first SCSI
     channel).
     
   (?) The two drives are:
   1) Samsung SCR-2030
   2) HP CD-Writer Plus 8100
   
     (!) These (and the fact that you refer to your Sound card) sound
     like SCSI devices. You'd simply find out which of these your
     /dev/cdrom is linked to (by mounting it as normal or inspecting the
     'ls -il' output of your /dev/ directory).
     
     [ And, you could tell which one it was since its light will flash
     when you mount the disc. -- Heather ]
     
     For the other you'd use a command like:
     
     mount -t iso9660 -o ro /dev/scd1 /mnt/cdrom1
     
     ... where the -t specifies the filesystem type (ISO 9660 is the
     standard for normal CD's), -o is a set of options (read-only in
     this case) and the next two parameters are a device name (second
     SCSI CD drive), and an arbitrary mount point (usually an empty
     directory under the /mnt tree --- or an empty directory in any
     other convenient location).
     
     You could name that mountpoint anything that your filesystem will
     allow (just about anything). I use /mnt/cd* or /mnt/cdrom* as
     prefixes to these names for obvious reasons.
     
     If this new drive is on an IDE interface it's likely that you'd use
     something like /dev/hdc or /dev/hdd for the device name.
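
     To make both drives mountable with a short command, entries in
     /etc/fstab can help. This is a sketch; the device names are
     assumptions --- check what /dev/cdrom actually links to and
     substitute /dev/hdc, /dev/hdd, /dev/scd1, etc. as appropriate:

```
/dev/scd0   /mnt/cdrom    iso9660   ro,noauto,user   0 0
/dev/scd1   /mnt/cdrom1   iso9660   ro,noauto,user   0 0
```

     With the 'user' option in place, non-root users can then mount a
     disc by typing just 'mount /mnt/cdrom1'.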
     
   (?) Any help will be appreciated.
   
   Thanks Ali
            ____________________________________________________
   
(?) More on Multi-Feed Netnews (leafnode)

     Some additional comments by one of the authors/maintainers of
     leafnode(?).
     
   From The Answer Guy on Mon, 14 Dec 1998
   
   (?) Jim,
   I have a question in the "Answer Guy's" inbox regarding multi-feed
   leaf netnews. I don't remember which package my querent was using
   (leafnode, suck?, ???) but his question regarded whether it's possible
   using any of these packages to download news from multiple sites and
   upload/feed them back selectively.
   
     (!) It is possible with suck/INN, of course. (Everything is
     possible with this combination, I suppose :-) .
     
     It is also possible with leafnode, starting from 1.7.
     
     The config file should look like this:
     
server = news.a.org
supplement = news.b.com
expire = 14                     # expire messages after that many days
create_all_links = no           # optional, saves disk space

     Leafnode should be able to figure out everything else on its own.
     
     If you need usernames and passwords for servers, it becomes a bit
     more complicated, but leafnode can handle this as well:
     
server = news.a.org
username = user_at_a
password = password_at_a
supplement = news.b.com
username = user_at_b
password = password_at_b
expire = 14
create_all_links = no

     HTH, - --Cornelius.
     
     /* Cornelius Krasel, U Wuerzburg, Dept. of Pharmacology, Versbacher
     Str. 9 */
            ____________________________________________________
   
    "The Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                           (?) The Answer Guy (!)
                                      
                   By James T. Dennis, answerguy@ssc.com
          Starshine Technical Services, http://www.starshine.org/
     _________________________________________________________________
   
(?) Getting 'rsh' to work

   From Anthony Howe on Mon, 14 Dec 1998
   
   (?) Oh hum. I'm having trouble with getting rsh to work between two
   machines for a specific task. I've read the rsh, tcpd, and hosts.allow
   man pages and I still can't get it to work.
   
    1. the same user "joe" with the same password exists on both "client"
       and "server" machines.
    2. server's hosts.deny:
       ALL:ALL
    3. server's hosts.allow:
       in.rshd:1.2.3.4
    4. server's inetd.conf:
       "shell" line uncommented
    5. I have an A record for:
       client     A     1.2.3.4
    6. and I have a PTR record for:
       4.3.2.1.in-addr.arpa     PTR     client
       
   Now every time I try and do something as simple as:
   
joe@client$ rsh server '/bin/ls /home/joe'

   I get "Permission denied". The logs on neither the client nor the
   server give any reason for the "Permission denied".
   
   Maybe I'm just over-tired, but I can't figure out what I'm overlooking.
   Can anyone please tell me what I'm missing?
   
     (!) What is the precise line in your /etc/inetd.conf?
     
     Some versions of in.rshd and in.rlogind have options that make
     the daemon ignore .rhosts files (-l), allow 'superuser' access
     (-h), syslog all access attempts (-L), and perform "double reverse
     lookups" (-a).
     
     It looks like your forward and reverse records are alright
     (assuming that the client's /etc/resolv.conf is pointing at a name
     server that recognizes the authority for the zones you're using).
     
     Note: If you are going through IP Masquerading at some point (some
     sort of proxy/firewall package) then there's also the remote chance
     that your source port is being remapped to some unprivileged
     (>1024) port as the packets are re-written by your masquerading/NAT
     router.
     
     I did complain to the Linux/GNU maintainers of the rshd/rlogind
     package about the fact that their syslog messages don't provide
     more detailed errors on denial. However, I'm not enough of a coder
     to supply patches.
     
     To test this without TCP Wrappers at all try commenting out the
     line that looks something like:
     
shell   stream  tcp     nowait  root    /usr/sbin/tcpd          in.rshd -a

     ... and replacing it with something like:
     
shell   stream  tcp     nowait  root    /usr/sbin/in.rshd       in.rshd -L

     (note: we just changed the daemon path from tcpd to in.rshd).
            ____________________________________________________
   
(!) Linux as a Netware Client

   From Jedd on Sat, 05 Dec 1998
   
   (?) Howdi,
   
   In the December issue of LG, you answered someone's query regarding
   accessing their netware servers from Linux, by pointing him at Caldera
   or the ncpfs package.
   
   The caldera system is quite fine, albeit based on Redhat, and between
   the two companies they seem to have not only ignored the old FSSTND,
   but positively danced on its grave. <hug Debian> Trying to get the
   Caldera Netware client working under Debian, btw, was a right pain
   (I've still not done it), so it may not be feasible. Looking at their
   archives, it appears that even getting it to work under pure Redhat is
   a bit, uhm, 'challenging'.
   
   However, for your info, the ncpfs package does support NDS (Netware 4
   & 5) connections - and has done for the last two minor releases. I'm
   still experiencing some problems with this feature - when trying to
   concurrently authenticate to two servers in the same tree - but I
   hope/suspect that's me doing something funny.
   
   Cheers,
   Jedd.
   
     (!) Some more feedback that I'll just present on its own merits.
            ____________________________________________________
   
(?) LILO Default

   From Steven W.Cline on Mon, 07 Dec 1998
   
   (?) Answerguy,
   
   I've been searching for this but can't find it. I would like to change
   the default OS that lilo loads. Right now it is Linux. How can I
   change the default to DOS?
   
   This is because I am the only one using Linux and the rest of the
   family uses DOS.
   
   Steven W.Cline
   San Bruno, Ca.
   
     (!) The default is the first "stanza" (boot image) listed in the
     /etc/lilo.conf.
     
     So, just edit that file and move the block for your Windows stanza
     to place it after any global directives and before any other OSes
     that you have listed.
     
     Alternatively you can just use the default= directive to specify
     the label of the image that you want to use.
     
     (Hint: searching the lilo.conf(5) man page for the term "default"
     leads us to the answer within a few shots.)
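
     As a sketch (the labels, kernel path and device names here are
     assumptions --- keep your own stanzas as they are and just add
     the one directive):

```
# /etc/lilo.conf -- global options first, then one stanza per OS
prompt
timeout=50
default=dos            # boot the stanza labelled "dos" by default
image=/boot/vmlinuz
        label=linux
        root=/dev/hda2
        read-only
other=/dev/hda1
        label=dos
```

     Remember to re-run /sbin/lilo afterward so the new default is
     written into the boot map.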
     
     [ I should only comment on layout stuff, but here, I just gotta.
     
     Once upon a time when antares was a multi-boot machine, I had it
     set up so that CTRL-ALT-DEL would always reboot you into "the other
     OS". For a while it was so handy, I'd even forgotten how we did
     it... so I wasn't able to tell Jim! But here's a trick that should
     work:
     
     Add a line to /etc/inittab that reads:
     
     ms::boot:/sbin/lilo -R dos
     
     (assuming you've named your stanzas linux and dos). The 'ms' is a
     gratuitous identifier; it could really be anything, as long as the
     other inittab lines have a different first value. The '-R' stores a
     LILO command choice for only one session, so on the next reboot
     (from DOS, which isn't saying anything special to LILO) you'll go
     back to the other OS... unless sometime during your linux session,
     you run another lilo -R command that mentions a different command
     line to default to. However, you leave the lilo default to linux
     this way. I suppose you could use this to run /sbin/lilo -R linux
     so that reboots from Linux will tend to stay in Linux, with the
     default set as Jim described to dos so that power-on, and reboots
     in DOS, will tend to stay in DOS.
     
     I don't know if there's a LILO control program for DOS these days,
     but with LOADLIN and a copy of the kernel stored in DOS-accessible
     space, you could even create a script that would let you add
     "Linux" to your DOS or Windows menu system. If you prefer to go
     that way, you could even uninstall LILO and put back a plain DOS
     master boot record, so it would never ask anymore. Or, you can set
     LILO to delay forever, so you can always choose which OS ... though
     this loses the benefit of being able to ignore the system while it
     boots. -- Heather ]
            ____________________________________________________
   
(?) Uninstalling Linux

   From Tom Monaghan on Fri, 18 Dec 1998
   
   (?) I cannot find any info on the best way to uninstall Red Hat Linux
   5.2.
   
   I must reinstall DOS as linux does not support my video driver (yet).
   Any help would be appreciated. Thanks.
   
     (!) When you installed Linux, you probably created a set of
     partitions on one of your hard disks. You can just go into the
     Linux 'fdisk' (using your installation diskette or CD) and delete
     all of your Linux partitions (including swap and "native" (ext2)).
     
     Once you've done that, DOS/Windows should be "willing" to create
     new partitions in the unallocated portions of disk space that
     you've created by deleting your Linux partitions.
     
     If the whole disk was devoted to Linux and you want to trick MSDOS
     into believing that this whole drive is "fresh from the factory"
     you can use the following trick:
     
     WARNING! THIS WILL WIPE OUT ALL DATA ON YOUR DISK!
     
   Boot into Linux (on a rescue diskette or into the working copy that
          you have installed)
          
   Login as root.
          
   Issue a command like the following:
          dd if=/dev/zero count=1 of=/dev/hda
          
     ... NOTE: The "of" parameter should point at the device node for
     your disk. If you are doing this to the first or only IDE drive on your
     system (the most likely case) you'd use /dev/hda as I've shown. If
     you're doing this to the first SCSI drive it would be /dev/sda, if
     you were doing it to a second IDE or SCSI drive that would be
     /dev/hdb or /dev/sdb respectively, and so on.
     
     To get some idea of which drives and partitions you have Linux
     installed on you could use the command:
     
     fdisk -l | less
     
     ... to look at the partitions on all drives that Linux can see.
     Note that you'll see partitions like /dev/hda1 and /dev/hda5, etc.
     These are partitions on the first IDE drive (/dev/hda).
     
     When we zero out the first sector of the drive, operating systems
     will consider the whole drive to be blank and will install just as
     you would on a brand new hard drive. (Technically under MS-DOS you
     could just wipe out the two bytes at the end of the first sector
     --- which is a signature value that MS-DOS FDISK.COM (or FDISK.EXE)
     uses to detect a partition table or MBR.) Naturally you could also
     delete the partitions (as described earlier) and then boot from a
     DOS floppy and issue the command:
     
     FDISK /MBR
     
     ... this will work on MS-DOS 5.0 and later. Otherwise use the 'dd'
     method from Linux.
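
     The effect of that 'dd' can be tried safely on a scratch file
     first. In this sketch an image file stands in for the disk
     (conv=notrunc keeps the rest of the file intact, just as the rest
     of a real disk would be untouched):

```shell
# Build a 4-sector scratch "disk" full of non-zero bytes.
dd if=/dev/urandom of=/tmp/disk.img bs=512 count=4 2>/dev/null
# Zero only the first sector -- the same shape as the real command
# 'dd if=/dev/zero count=1 of=/dev/hda', aimed at a file instead.
dd if=/dev/zero of=/tmp/disk.img bs=512 count=1 conv=notrunc 2>/dev/null
# Count the non-zero bytes left in the first sector: prints 0.
head -c 512 /tmp/disk.img | tr -d '\0' | wc -c
```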
     
     Incidentally I rather doubt that Linux doesn't support your video
     card.
     
     It is probably more formally correct to say that XFree86 doesn't
     support your video card. You don't need to run a GUI to use most of
     Linux (I rarely go into X Windows).
     
     [ They just released XFree86 3.3.3 recently, so maybe you should
     check again with a fresh X package from http://www.xfree86.org/ and
     see if it has your card in it now. -- Heather ]
     
     As I point out in other messages there are a couple of alternatives
     to XFree86 including some freely distributed binary X servers
     (source code unavailable) which can be found at the Red Hat contrib
     site and other mirrors and software archives, and there are a
     couple of commercial X Windows System packages for Linux (from Xi
     Graphics: http://www.xig.com, and MetroLink:
     http://www.metrolink.com).
                        ____________________________
   
(?) Uninstalling Linux

   From Tom Monaghan on Fri, 18 Dec 1998
   
   Thanks. Since I was in a hurry, I just ran the install and deleted all
   my linux partitions via Disk Druid (coincidentally the same tack you
   suggest) and booted out of the install. So now I am back to DOS :(
   
   I have RH 5.2 at home, so deleting linux here at work does not end my
   experimentation with this OS. The thing I am stuck on at home is
   getting my modem to connect to my ISP. This is so freaking frustrating
   I had to step away for a day or so...Will continue to bang on linux
   until I get it right. It's funny, I have a decent amount of UNIX
   experience under my belt (I am ashamedly a Software Developer), but
   when it comes to configging stuff I am a moron!
   
   Super L/User
            ____________________________________________________
   
(?) Making a Kernel Requires 'make'

   From Mohd. Faizal Nordin on Fri, 18 Dec 1998
   
   (?) Hai everybody....
   
   I am having a problem compiling my kernel in order to set up my ppp
   and diald. When I try to compile the kernel for RedHat 5.1 I get
   the error message: " make : bash command not found ".
   
   Can someone pls help me on this problem.
   
   Cheers...
   fiber...
   
     (!) It sounds like you need to install 'make' (the package that
     interprets "makefiles" and traverses the sources resolving
     dependencies by compiling, linking and otherwise manipulating the
     sources, object files, etc). (Either you need to install the
     package or make sure that it and your other development tools are
     properly on your PATH).
     
     You also need gcc (the GNU C compiler) and the 'binutils' package
     (which contains 'ar' 'ld' and the assemblers and other tools that
     are needed to build most C programs).
     
     It seems odd that you need to recompile your kernel for PPP
     support. Most distributions ship with that built in or with a
     modular kernel and a selection of pre-compiled modules.
            ____________________________________________________
   
(?) Using only 64Mb out of 128Mb Available

   From Terry Singleton on Thu, 17 Dec 1998
   
   (?) When I run the admin tool "top" it appears as if my system is only
   using 64MB of memory.
   
 11:00am  up 4 days, 23:39,  2 users,  load average: 0.07, 0.03, 0.00
 40 processes: 39 sleeping, 1 running, 0 zombie, 0 stopped
 CPU states:  0.3% user,  0.1% system,  0.0% nice, 99.6% idle
 Mem:   64168K av,  57420K used,   6748K free,  19820K shrd,  19816K buff
 Swap: 104384K av,     24K used, 104360K free                 23932K cached

   The results show 64168K av which indicates 64168K of available memory
   yet our system has 128MB RAM? I have the same results on 2 other Linux
   servers with more than 64MB RAM.
   
   I am running RedHat 5.1, is there anything special I have to do to
   tell the system I have more than 64MB, recompile the kernel..?
   
     (!) This is a classic FAQ. The BIOS standards for memory query (Int
     12h?) don't support the return of more than 64Mb of RAM. There are
     a number of different mechanisms for doing this on different
     chipsets, and some were "dangerous" (in the sense that they might
     hang some systems with a different API/BIOS). So, Linux didn't
     support automatic detection of more than 64Mb on most systems until
     very recently (2.0.36?).
     
     You've always been able to over-ride this with a kernel parameter.
     As you may know from my earlier articles or from the LILO man pages
     you can pass parameters to the Linux kernel using an append=
     directive in your /etc/lilo.conf file (and subsequently running
     /sbin/lilo, of course) or by manually appending the parameters on
     the command line at the LILO prompt (or on the LOADLIN.EXE command
     line).
     
     To do this with lilo.conf you add lines of the form:
     
     append="mem=128M"
     
     ... to each of the Linux stanzas to which you want this to apply.
     (I'd leave one of them without it for the first try so you have a
     working configuration into which you can boot in case there's a
     problem with your system. I've heard of some cases where users had
     to reduce their memory configuration by 1Mb for odd reasons).
     
     With the newer 2.0.36 and 2.1.x kernels you shouldn't need to do
     this (they have new autodetection code that should handle all of
     the common chipsets).
     
     One trick for programmers --- if you want to ensure that your code
     or distribution will run in limited memory constraints you can do a
     quick test using a smaller mem= parameter to force the kernel to
     run in less space than it normally would.
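
     For instance, a tester's /etc/lilo.conf might carry an extra
     stanza next to the normal one (the paths and label here are only
     an example):

```
image=/boot/vmlinuz
        label=lowmem
        root=/dev/sda1
        append="mem=16M"       # pretend the machine has only 16Mb
        read-only
```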
     
     WARNING: If you forget the trailing 'M' the kernel load will fail
     when it tries to allocate the specified amount of RAM in bytes.
     (Don't do that!).
     
     In any event you might want to check out some of the FAQ's on Linux
     since I'm sure this is in a couple of them.
                        ____________________________
   
(?) Using only 64Mb out of 128Mb Available

   From Terry Singleton on Fri, 18 Dec 1998
   
   (?) Thanks Jim. I added the line as you suggested; however, it did
   not seem to take. Am I supposed to put it under the boot image
   section itself? Memory is still 64000av.
   
     (!) Sorry, I should have been more detailed. You need to add this
     append= directive to each of the stanzas to which it applies. (You
     could have a couple of stanzas that referred to reduced memory
     configuration if you were a software developer, tester or reviewer
     so that you could test a package's behaviour under varying memory
     constraints).
     
   (?) This is what I have:
   
 boot=/dev/sda
 map=/boot/map
 install=/boot/boot.b
 prompt
 timeout=50
 append="mem=128M"
 image=/boot/vmlinuz-2.0.34-0.6
         label=linux
         root=/dev/sda1
         initrd=/boot/initrd-2.0.34-0.6.img
         read-only

   should it be:
   
 boot=/dev/sda
 map=/boot/map
 install=/boot/boot.b
 prompt
 timeout=50
 image=/boot/vmlinuz-2.0.34-0.6
         label=linux
         root=/dev/sda1
         initrd=/boot/initrd-2.0.34-0.6.img
         read-only
         append="mem=128M"

     (!) Yes.
     
     (Also remember to re-run /sbin/lilo to read this config file and
     build the new boot blocks and maps therefrom).
     
     Incidentally it would have been quicker and reasonably safe (in
     this case) to just try the experiment. It should have worked and
     you'd have had your answer much sooner.
     
     I can understand a degree of hesitation about experimenting with
     the boot blocks and partition tables (a data structure that's
     stored in the same block as the MBR first stage boot block).
     Obviously a mistake means that you can't boot at all.
     
     However, it's wise to have a backup and a working rescue floppy and
     to practice using them before you make any changes to your
     /etc/lilo.conf.
            ____________________________________________________
   
(?) Manipulating Clusters on a Floppy ...

   From Padma Kumar on Thu, 17 Dec 1998
   
   Sir,
   
   I basically want to write an application that needs to mark a
   particular predefined cluster as bad, and also to dynamically
   change the value contained in that cluster. Is there any way to
   write some data into a cluster, mark that cluster as bad, later
   mark the bad cluster as usable again, update the data in the
   cluster, and then mark it as bad once more?
   
   I would be grateful if you could help me out with this task or tell
   me where I can find some information regarding this.
   
   Thanking you for your consideration.
   With Regards
   Padma Kumar
   
     (!) This is a rather dubious request.
     
     You'd have to write your own custom programs to do this (for each
     filesystem type that you wanted to support --- since different
     filesystems have different mechanisms for marking clusters as bad).
     
     I've heard of MS-DOS virus writers, and some copy protection
     schemes, that used similar techniques to covertly write keying
     information on people's systems back when software copy-protection
     seemed feasible. The demise of this technique has two major
     dimensions:
     
     There were chronic technical problems caused to legitimate users
     (thus decreasing customer satisfaction while increasing support
     costs). (Problems resulting from restoration of user programs and
     data after a hardware failure or upgrade are one example). A
     moderately skilled cracker could easily reverse engineer and bypass
     these measures (often by "NOP-ing" out the portions of code that
     performed the objectionable hackery).
     
     Many users/customers simply rejected the whole adversarial stance of
     software companies employing these techniques. We still see tacit
     acceptance of "dongles" (hardware keys, typically attached to
     parallel or serial ports which are queried by a program to enable
     its operation, typically with some sort of challenge response
     protocol). However, those are only used for a small number of high
     end packages.
     
     To write your own code, just look at the examples in the programs:
     badblocks, mke2fs, and e2fsck. These all manipulate the badblocks
     lists on Linux' ext2 filesystems.
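
     The mechanism can even be exercised without a real device or root
     access: mke2fs accepts a badblocks-style list (one decimal block
     number per line) with -l, and dumpe2fs -b reads the recorded list
     back. A sketch on a scratch image file (the block numbers are
     arbitrary):

```shell
# A 512-block (1024 bytes each) scratch "floppy" image.
dd if=/dev/zero of=/tmp/fs.img bs=1024 count=512 2>/dev/null
# A badblocks-format list: one block number per line.
printf '100\n200\n' > /tmp/bad.list
# Build an ext2 filesystem that records those blocks as bad.
mke2fs -F -q -b 1024 -l /tmp/bad.list /tmp/fs.img
# Read the bad-block list back out of the filesystem.
dumpe2fs -b /tmp/fs.img 2>/dev/null
```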
     
     Naturally you can look at the sources for similar programs for each
     other fs which you intend to support. Note that most of these
     programs are under the GPL. While studying them and writing your
     own code is probably a fair use, if you intend to "cut and paste"
     code from them you must read and respect their licenses (which
     would be in conflict with any copy-protection applications which
     you might have in mind).
     
     I realize I'm reading a lot into your question. I don't know of any
     other rational uses for bogus "bad blocks."
                        ____________________________
   
(?) Manipulating clusters on a floppy ...

   From Padma Kumar on Sun, 20 Dec 1998
   
   (?) Sir,
   
   Thanks for spending time for answering my question ...
   
   Basically I'm trying to write the code in Delphi (Assembly code in
   Delphi) for Windows 95 stand-alone PC.
   
   I hope this clarifies your doubt in short.
   
     (!) It clarifies things just fine. This is the Linux Gazette. I
     answer questions that relate to Linux.
     
     If you have programming questions that related to Win '95 and/or
     Delphi --- please go to Yahoo! and look for a Win '95 or Delphi
     "Answer Guy" (or other support forum).
     
     You paid money for that commercial software --- with the stated or
     implied benefit of technical support. It's really a pity that the
     companies that sold you these packages aren't delivering on that
     promise.
     
     As I said, you can look at the Linux sources for many examples of
     manipulating many types of filesystems (including MS-DOS/Win '95)
     -- those examples are mostly in C.
     
   (?) Thanking you once more for your consideration
   Expecting a reply soon
   
     (!) Have you ever read any of my other answers? How did you find my
     address without getting any indication of the focus of my work? Is
     it just that using all that vendorware has left you desperately
     seeking support from anyone you can find?
            ____________________________________________________
   
(?) Setting up ircd

   From paul maney on Thu, 17 Dec 1998
   
   (?) hi there i cant set up ircd on redhat Linux i haveing big pog what
   can i do to get it to work a.s.p pls from paul marney
   
   you can get me on [phone omitted] or [email omitted]
   
     (!) I'm afraid this question is below the literacy threshold.
     
     I think this translates to:
     
     I can't set up 'ircd' on my Red Hat Linux system. I'm having "big
     problems" (with it). What can I do to get it to work? Please reply
     ASAP.
     
     First, I've never set up an IRC server. I've only used IRC as a
     client on a few occasions (less than ten in the last ten years). I
     presume that I could set one up if I tried (and I know some of the
     people who created Undernet -- one of the larger IRC networks, so
     I'm sure I could get help if I needed it).
     
     However, your question shows a remarkable lack of motivation. You
     don't provide any information about what version of Red Hat you are
     using, where you got your IRC (Internet Relay Chat) daemon
     package (I've never seen one included with Red Hat Linux CD's ---
     though admittedly I wasn't looking for it). More importantly you
     don't give any indication of what you've tried or what sort of
     problems you've encountered.
     
     Based on your message it doesn't appear that you've even read
     whatever man pages, README files, and other documentation came with
     your ircd package.
     
     Glancing around I found an ircd21002.tgz in the Slackware contrib
     directory on ftp.cdrom.com (Walnut Creek, manufacturers of many
     fine collections of free software --- including the Slackware Linux
     distribution and FreeBSD).
     
     Grabbing that I find a set of documents and an example ircd.conf
     file that give some hints as to how you'd use this particular
     version of IRC. It turns out that this one is the "UnderNet"
     version of ircd (I'd heard that they'd written their own, but
     coming across it here was just by chance).
     
     You could look at the UnderNet web site (http://www.undernet.org)
     but that doesn't seem to lead to any technical documentation of the
     sort that you'd need to set up your own server or join one of the
     relay networks. You'd have to know to look at
     http://www.undernet.org/documents (since there don't seem to be any
     links from the index page down to this page). However that doesn't
     include anything that you need either.
     
     My next stop would be Yahoo!:
     
   Computers and Internet:Internet:Chat:IRC:Networks
          http://dir.yahoo.com/Computers_and_Internet/Internet/Chat/IRC/
          
     ... and I'll leave it to you to search through those links.
     
     Naturally you could also hunt through the IRC channels and ask the
     regulars in those that are appropriate. I suspect that some of the
     people who install, configure and maintain these servers are also
     IRC users. I think there's also a couple of news groups that are
     appropriate (search your local lists of newsgroups for the string
     "irc").
     
     If you ask for help from any other parties, I'd suggest putting
     some careful thought into crafting your questions --- most people
     won't spend nearly the time that I just have on answering a question
     that's presented as badly as this.
            ____________________________________________________
   
(?) Sendmail on private net with UUCP link to Internet

   From rkblum on Tue, 15 Dec 1998
   
   Hello Answer Guy!
   
   Thanks for all of your excellent advice. I really enjoy your columns.
   In your December issue, you had an answer for RoLillack for using
   Sendmail on a local private network. You mentioned that your network
   is connected to the Internet via a UUCP hub for mail purposes. I would
   like to follow-up on that comment.
   
   I do volunteer work at a local K-6 school and we were looking for a
   similar mail solution. Your answer got the wheels rolling and we think
   we have a good, inexpensive e-mail solution for the school. The only
   piece that we are missing is the sendmail.cf file for the UUCP hub. We
   have not been able to find a good example of how to configure the hub
   to route all outbound mail to the ISP UUCP host, as well as not do DNS
   lookups for our clients running Eudora. Unfortunately, we have not
   been able to find the SendMail book in our local bookstores. We would
   appreciate any help you could give us in this direction.
   
     (!) I don't know how you'd convince Eudora and other mail user
     agents not to do DNS queries for MX records. I use a trick with
     sendmail (specifying an IP address of the form '[192.168.1.x]' ---
     note the square brackets --- in my nullclient.mc file).
     
     In my case I have an "all Linux" network. The rare occasions when I
     try to run some MS or Apple based OS around generally don't involve
     setting them up with access to the Internet and certainly don't
     involve my trying to read my mail on them.
     
     You might be able to do the same, or you might have to create a DNS
     server that "claims" to be authoritative for the root domain (then
     one called ".").
     
     I've heard of people setting up these sorts of disconnected DNS
     zone but I don't have an example handy. I'd suggest grabbing the
     DNS HOWTO and searching through the archives of the Linux-admin
     list for some suggestions on that.
     
     Incidentally I hear there are some pretty good Linux Users' Groups
     in Indiana. Sadly I note that there is no SAGE (SysAdmin's Guild)
     chapter for your area. USENIX/SAGE is hoping to greatly expand the
     number of SAGE local chapters around the world and across the
     country in the near future. All it takes are a few professional
     system administrators to get together (SAGE is OS neutral, though
     the membership shows a decided preference for Unix-like systems).
     
     As for my particular setup, here's the M4 config file from one of
     my clients:
     
divert(0)dnl
VERSIONID(`@(#)clientproto.mc   8.7 (Berkeley) 3/23/96')

OSTYPE(linux)
FEATURE(nullclient, `[192.168.1.3]')

     ... that's all you need. You can then use m4 to generate a
      /etc/sendmail.cf file from this (as I've described in past columns).
     Newer versions of sendmail provide a 'makefile' to make this
     generation step even easier.
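      For the record, the generation step alluded to here usually looks
      something like the following --- the directory layout is an
      assumption (sendmail source trees keep the macro files under cf/
      and m4/, matching the include() line in the hub's .mc file below;
      binary packages may install them elsewhere, such as
      /usr/lib/sendmail-cf):

```
# From the cf/cf subdirectory of a sendmail source or macro tree:
m4 ../m4/cf.m4 clientproto.mc > clientproto.cf

# ... then install the result as the live configuration:
cp clientproto.cf /etc/sendmail.cf
```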
     
     The effect of this .mc file is to forward all mail to my mail hub
     (which is the mail store for my LAN and is the gateway to the rest
     of the world).
     
     On my client workstations I retrieve mail using 'fetchmail' (via
     POP-3). Thus if I mail 'star' (my wife) the mail gets sent to
     'antares' (the hub) even though she has an account on the local
     host. This means that she, my father, and others with accounts on
     my workstation, don't need to maintain .forward files on 'canopus'
     or any of the other workstations around the house. All of their
     mail (and mine for that matter) gets sent to antares.
     
     My mail gateway's .mc file looks like:
     
divert(-1)
divert(0)dnl
include(`../m4/cf.m4')dnl
VERSIONID(`$Id: antares.mc,v 1.3 1998/03/17 02:22:55 root Exp root $ by James T. Dennis, Starshine.org $Date: 1998/03/17 02:22:55 $')
OSTYPE(`linux')

FEATURE(`allmasquerade')dnl
FEATURE(`masquerade_envelope')dnl
FEATURE(`always_add_domain')dnl
FEATURE(`nodns')dnl
FEATURE(`nocanonify')dnl
FEATURE(`local_procmail')dnl
FEATURE(`uucpdomain')dnl

MAILER(`smtp')dnl
MAILER(`uucp')dnl
MAILER(`procmail')dnl

MASQUERADE_AS(`starshine.org')dnl

undefine(`BITNET_RELAY')dnl
define(`confDEF_USER_ID',"8:12")dnl
define(`SMART_HOST', `uucp-dom:XXXX')dnl

     On this last line I have the name of my UUCP provider listed in
     place of those X's. By defining a mailer and host pair for my
     SMART_HOST I force 'sendmail' to deliver all of my non-local mail
     to my UUCP provider through the "uucp-dom" mailer. "uucp-dom" is a
     mailer that delivers mail via uucp even though it uses "domain
     style" (DNS) address syntax.
     
     This last file is probably a bit more elaborate than you actually
     need --- and it's simplified a bit for this example.
     
      (I actually use the "mailertable" FEATURE to trick the system into
      delivering mail that appears to be addressed to one of my LAN hosts
      to a virtual hosted mail server that's really maintained by my
      ISP).
     
   (?) Thanks again for all of your great answers!
   
   Rich Blum
   Trader's Point Christian Schools
   Indianapolis, Indiana
   
     (!) I'm glad I could help. You are right, UUCP is still a good way
     to get e-mail and netnews without getting a full Internet
     connection and without having the connection used by web browsing
     or other protocols which you might prefer not to run into your
     site. (Conversely it's also a great way to preserve your PPP
     bandwidth to interactive uses while your mail and/or news gets
     spooled quietly away for other times).
                        ____________________________
   
(?) Re(2): Sendmail on private net with UUCP link to Internet

   From rkblum on Wed, 16 Dec 1998
   
   Jim -
   
   Thanks for your quick response and accurate answers! The sendmail.cf
   sample you sent was exactly what we needed. I think that I
   unnecessarily muddied the waters with my Eudora question. It turned
   out that it was not a DNS problem with Eudora, it was my mistake of
   not having the IP addresses in the ip_allow. The Eudora clients work
   fine now. I have asked our local bookstore to order the SendMail book
   for me - I think I need it!
   
   Thanks again for your help - keep up the good work!
   
   Rich Blum
            ____________________________________________________
   
(?) Complaint Department:

   From T Elliot on Sun, 20 Dec 1998
   
   (?) Having worked with Unix (1983-1989) and (gasp) MS-DOS (from DOS
   2.11) and Windows (from Win 3.0 to NT Server 4.0 - I once installed
   the NT5 beta, but decided it was too risky) and occasionally been
   tempted into trashing my spare PC to install Linux, one of the biggest
   problems I find with Linux is the lack of coherent tools and user
   interfaces.
   
   If I install a package under Windows, I get a shortcut to the
   program(s) via a menu or window (program group).
   
     (!) So don't use it.
     
   (?) Do the same under Linux and then I have to write down the main
   program name or remember it (after examining the files to be installed
   so I can figure out what the actual command is) - sure it will
   probably be installed in the path, but I'm getting old and the memory
   is failing.
   
     (!) Yep. I know. Tough isn't it?
     
   (?) My spare PC is currently running RedHat 5.2 and this afternoon I
   downloaded Code Crusader et al and therein lies the tale... NOT
   ccrusader, NOT codecrusader or variations thereof, but "jcc" - no
   additions to the "start menu" if using Fvwm2 or any other window
   manager, in fact, no indication that the system had new software
   except that the disk free space had decreased.
   
   Until this type of thing is resolved, Linux will only gain the
   support of the lab-coats or the enthusiasts.
   
     (!) ... and professionally administered sites where a sysadmin or
     delegate evaluates packages before installing them --- figures out
     what is going where and deploys them according to their needs.
     
     This is a rather boring message. I'm not the Linux complaint
     department. You can send your suggestions to Red Hat Inc.,
     S.u.S.E., Caldera, and many others.
     
     Incidentally, S.u.S.E. does have some scripts that maintain your
     system default window manager menus when you install new packages.
     
     As for the implied suggestion --- I know that some people at Red
     Hat are working on something like this. However, since there is no
     central authority over Linux development and there are no "code and
     interface police" to enforce your notion of "how things should be
     done" --- there are practical limits to what can be accomplished.
     
     For those that care, the usual technique I use when installing
     RPM's is to list and/or browse the contents of the package before I
     install it. You can list them with a command like:
     
     rpm -qpl <package.file.name>
     
      ... and you can narrow that down to just the docs using:
     
     rpm -qpd <package.file.name>
     
      You can browse through an RPM file interactively using Midnight
     Commander ('mc'). Just highlight the file using mc's "Norton
     Commander inspired" interface and hit [Enter]. This will traverse
     down into the RPM file as though it were a directory tree --- and
     you can browse through and view the file contents to your heart's
     content.
     
     When you use mc's [F3] key to view a file, it can interpret several
     types of files. Thus you can view the man pages from inside of an
     RPM file without installing anything.
     
     Since many of the most useful programs available under Linux and
     other forms of Unix are designed as filters, or intended to be run
     as services (possibly as dynamically launched 'inetd' based
     daemons) or cron jobs --- or are otherwise non-interactive --- it
     often doesn't make sense to add menu options for them.
     
      However, I've suggested to Red Hat and S.u.S.E. that RPM
     maintainers and builders be encouraged to add entries for programs
     that constitute "user interfaces" (for character mode and/or X
     Windows --- and for any other interfaces that might arise in the
     future --- such as Berlin). One of Red Hat's senior people
     disagreed with the whole notion, though that may be more a
     deficiency in my presentation than anything.
     
   (?) PS. My main PC runs NT server 4.0 sp4 with sql server, iis, etc,
   etc. I use it for software development using DevStudio (c++) and even
   though I have to reboot the &^^% thing every time I touch something in
   its config, I'd rather that than guessing at what I've installed and
   what the command line is.
   
     (!) Great. More power to you. So, do a Yahoo! search to see if you
     can find the "complaintguy" somewhere. Let me know if you find him
     (or her) and I'll bounce mail like this to the appropriate venue.
     
     The problem here is that you seem to have confused me with some
     Linux advocate. I use Linux and I prefer it to other systems that
     I've used (although FreeBSD is a very close second).
     
     I've espoused the opinion, on several occasions in this column,
     that the selection of any tools (software or otherwise) should be
     done through a process of requirements analysis. Some requirements
     can be met with a number of solutions. So, after we've found a
     basic list of possible solutions that meet the requirements we can
     narrow down that list by measuring them against our constraints and
     make final selections (if choices still remain) based on
     preferences.
     
     The time is rapidly approaching when you can run a complete KDE or
     GNOME system and never see a command line. Developers of KDE,
     GNOME, and eventually GNUStep applications will be free to
     integrate their interfaces in the ways that are appropriate to each
     of those systems.
     
     The KDE developers have already shown an amazing predilection for
      generating KDE interfaces to existing programs. One nice thing
     about Linux and Unix is that it's relatively easy to design an
     application in a client/server model --- and to provide multiple
     front ends (clients) which each provide unique forms of access to
     the same application functions. This is just good programming
     design.
     
     Another nice thing is that we can concurrently run programs from
     many GUI's under the same desktop. Thus I can run a GNOME
     application under KDE and vice versa. Indeed using VNC and XNest I
     can run whole X sessions within a window under one of my X
     sessions.
     
     Of course, people who just stick with the front ends will be
      constrained from accessing many of those powerful filters and tools
     that I described earlier. It's unlikely that front ends will be
     built for all of them.
     
     However, most people only use a few applications, anyway.
     
   (?) PPS. The main gripe is - USER TOOLS and EASE OF CONFIGURATION.
   
     (!) So find someone to gripe to. I'm here to answer questions.
     
     (P.S. the various "advocacy" newsgroups are perfect for this sort
     of message).
                        ____________________________
   
(?) More on "Complaint Department"

   From T Elliot on Fri, 25 Dec 1998
   
   (?) Thank you for your comments and suggestions. I appreciate that I
   have probably wasted your time, but you have answered most of my
   questions (including to whom to gripe).
   
     (!) If I was worried about "wasting my time" I wouldn't have signed
     up for this.
     
     However, one of the few rights I reserve for myself in this column
     is the right to be a curmudgeon.
            ____________________________________________________
   
(?) eql Not Working

   From Brett on Thu, 24 Dec 1998
   
   (?) I have had absolutely no luck with eql getting my two USR 56k
   modems working in sync. I can get both of them connected, but only one
   uses bandwidth... and if I disconnect #1 then #2 takes over the
   bandwidth job... I am just wondering if I can get this working and
   somewhere that you could point me to get it working... any reply would
   be much appreciated...
   
     (!) If you read the eql docs carefully I'm pretty sure it points
     out that you must establish both connections to a
     server/router/terminal server that supports this mode of operation.
     Essentially you must be connected to two modems on a single other
     system running something that is like and compatible with eql.
     
      If your ISP isn't specifically working with you on this --- then
      you won't be able to get it working. So call your ISP and explain
      your needs to them. According to the README.eql file, the Linux eql
      driver is compatible with the load balancing (round robin routing)
      on some Livingston (CommOS?) router/terminal servers.
     
     I suggest a careful re-reading of
     /usr/src/linux/drivers/net/README.eql
     
     ... and perhaps a follow up of that FTP link to see if there are
      any updates or additional notes available on that site. There are
     a couple of e-mail reports from users appended to this file ---
     perhaps one of them can help more. I've never used the eql drivers
     since speed was never my problem with online access --- it's just
     the latency and dial time delays that used to drive me crazy.
            ____________________________________________________
   
(?) Upgrade Kills Name Server

   From Anonymous on Fri, 25 Dec 1998
   
   (?) I just upgraded to Red Hat 5.2 and set up everything as I had it
   before and now I get the following:
   
fetchmail: POP3 connection to mail.nashville.com failed: temporary name server
error

   Netscape can't recognize mail.nashville.com either. I am having to
   send this from Windows email.
   
   My /etc/hosts file looks the same as it did before. What other files do
   I need to check and/or post?
   
   Thanks!!
   
     (!) I'm just going to guess that the upgrade renamed some of your
      files (probably your DNS zone files, possibly even your
      /etc/resolv.conf) to add the 'rpmorig' extension.
     
     So, search for rpmorig files and look for the files that were put
     in place of them. You'll have to manually resolve the differences.
     (Use the 'diff' program).
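      A minimal sketch of that search-and-compare loop (the function
      name is my own invention; this is nothing more than 'find' and
      'diff' strung together):

```shell
# List every *.rpmorig file under a directory and show how it differs
# from the file that replaced it during the upgrade.
diff_rpmorig() {
    find "$1" -name '*.rpmorig' | while read -r old; do
        new="${old%.rpmorig}"
        echo "=== $new"
        diff -u "$old" "$new" || true   # diff exits non-zero on differences
    done
}

# Typical use after an upgrade:
# diff_rpmorig /etc
```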
     
      I've complained to them before about their penchant for moving your
      files out of the way when they do upgrades. Their concern is that
      the old configuration files may be incompatible with the new ones.
      I've said that the disruption caused by users doing an upgrade when
      they never realized or tracked which files they changed and
      "configured" tends to outweigh the chances that a new package
      upgrade will completely fail when presented with an older format of
      its own configuration file.
     
      One problem to consider is that your old version of Linux may have
      been running BIND 4.9.x or earlier (I'm guessing that your system
      is providing its own DNS services). The new version (5.2) might be
      installing BIND 8.1.2. These do have incompatible file formats for
      the base configuration file --- however the name has changed too.
      The old one used named.boot. The new version uses /etc/named.conf.
     There is a utility with the package to convert a named.boot file to
     named.conf format. Actually the new format is much easier to set
     up.
     
     Anyway it is almost certain that you need to configure your 'named'
     (BIND).
     
     Unix mail doesn't normally refer to your /etc/hosts file since that
     can't convey information about MX records and preferences. SMTP
     mail routing is done via MX records --- not by host address (A)
      records. So it doesn't matter what your /etc/hosts file looks like
      for this.
            ____________________________________________________
   
(?) MS Applications Support For Linux

   From Marty Bluestein on Fri, 25 Dec 1998
   
   (?) Is there a mechanism that enables MS apps to run under Linux? Is
   anyone working on an autoloader for Linux?
   
     (!) There are a few projects. The most prominent is WINE
     (http://www.winehq.com). The goal of WINE is a complete
     re-implementation of the Windows API's to achieve full binary
     compatibility under any x86 Unix with X Windows (Linux is the
     predominant platform but any other modern x86 Unix should be a
     reasonable platform for WINE).
     
      Another is Bochs (which has recently moved its web pages to
     http://www.bochs.com). Bochs is a package which emulates an x86 CPU
     and PC chipset (similar to Connectix' "Virtual PC"). It runs on any
     platform that can compile its C sources. I've heard that it works
      reasonably well but is too slow for production use (for running Win
     95 or 98 on a PC). Considering that you're using a PC to emulate a
     full PC CPU and chipset this is not a surprising limitation.
     
     For older MS Windows applications (3.1 and earlier) you might try
     WABI --- a commercial Windows Applications Binary Interface which
     is available for Linux from Caldera (http://www.caldera.com). This
      is no longer being updated and is unlikely to ever support Windows
      '95 or
     later applications.
     
      For DOS (non-Windows) you can run a copy of MS-DOS, DR-DOS, FreeDOS,
     or just about any other "real mode" x86 OS under the Linux
     'dosemu'. (Just search for it in Yahoo! using "+linux +dosemu").
     
     [ Its home page is hosted by SuSE ... http://www.suse.com/dosemu/
     ... I use it to run dBase stuff and it works pretty well at this
     point. -- Heather ]
     
     The DOS support is pretty good these days, though I don't use any
     MS-DOS applications any more so I don't have much first hand
     experience with it. The WABI support was pretty fast (it felt
     faster running typical Windows 3.x programs under Linux than it did
      under native MS-DOS on similar hardware --- probably due to Linux's
      more efficient filesystem and memory management).
     
      When thinking about the limitations of Linux support for Win '9x
      and NT applications (Win32) it is helpful to keep in mind that
      these limitations are almost certainly a key design goal at
      Microsoft. Although Linux was not on their "radar" during the
      design of Windows '95 and NT --- OS/2 certainly was.
     
     Enmeshing the interfaces at various levels to make applications
     difficult or impossible to support under competing operating
     systems is one of the key strategies that Microsoft employs. The
     current DoJ case against them is only a tip of the backlash that
     consumers are now directing to this monopoly. The fact that Linux
      installations tripled in the last year --- and that many
     organizations are now considering Linux for their desktop
     applications platform is ample evidence of that.
     
      * (personally I think it's still a bit premature to be touting
      Linux as a typical worker's desktop system --- though the
      introduction
     of Corel's WordPerfect for Linux, and the release of an updated
     Wingz Professional for the platform do certainly bode well for the
     coming year. I've heard that Applixware 4.4.1 is also greatly
     improved and the next version of StarOffice 5.x should stabilize
     and mature that suite. Meanwhile GNOME, KDE, LyX, and GNUStep are
     plodding along towards "prime time" availability).
     
      So the fact that there is only limited support for MS apps under
     Linux is a testimony to the skills of Microsoft's programmers. We
     can surmise that preventing these applications from running on
     non-Microsoft operating systems was given higher priority than
     robustness, security, stability, integrity, or performance.
     
     Probably the only features that were given priority over "trap the
     user" were those that would enable magazine writers, and corporate
     purchasing agents to "review" the products and feel that they had
     evaluated them with about 15 minutes to an hour of actual work time
     exposure. This forces the application programmer to put all sorts
     of "features" onto menus, button bars, toolbars, icon ribbons, and
     otherwise clutter the screens and windows. This is an endemic
     problem in commercial software --- it's written to get reviews and
     make sales, not to satisfy long term users.
     
     Of course an alternative to direct MS Applications support is
     support for their document formats. However this is another of
     those key "customer trapping" opportunities. They do everything
     short of strong (non-exportable) encryption to lock your data into
     .DOC, .XLS, and .PPT formats. The latest Linux applications suites
     and word processors are making some headway in this --- and I can
     often extract contents from Word '97 files without too much fuss.
     Earlier versions of Word are pretty well supported at this point.
     
     You can bet that the next version of Office will egregiously break
      format compatibility. MS can't allow its customers any freedom of
     choice or portability of documents to "other" platforms. That's
     much too dangerous to their upgrade revenue scheme.
     
     I've talked about MS Windows support and the evils of proprietary
     document formats before. I personally think that the only rational
     remedy for Microsoft's monopolistic practices would be for the DoJ
      to impose a rule that MS produce freely available (open source)
      "reference implementations" of standard C source code to perform a
      reasonable suite of conversions and manipulations on all
      "documents" produced by their applications (including .EXE and .DLL
      "documents" produced by their programming "applications"). Under
      this plan any upgrade to any MS product that failed compatibility
      test suites with their freely available reference implementation
      (converters, tools and filters) would result in an immediate
      injunction on distribution until the reference implementation was
      updated and vetted as compatible.
     
     (Note that I didn't say that MS has to release any of the sources
     to any of their products. Only that they must release some
     reference implementation that is compatible with the file formats,
     and freely usable in competing products --- free and commercial.
     Their contention is that their products enjoy superior market share
     as a result of superior interface and integration with one another
     --- this would give them a unique opportunity to prove that).
     
     I have no idea what you mean by an "autoloader for Linux."
     
   (?) Thanks.
   Marty Bluestein
                        ____________________________
   
(?) Automount/autoloader

   From Marty Bluestein on Fri, 25 Dec 1998
   
   (?) OK. Guess I should have fully read your message before I
   responded. By the term "autoloader" I mean a self installing function
   - you stick in the CD and Linux (or some other OS) sets itself up. I
   wasn't aware that MS was already loading their user's work (.DOC,.
   XLS, etc.) with gotchas. I wonder if the DoJ is aware of and pursuing
   this?
   
   Marty
   
     (!) There are several packages that will automatically mount CD's
     (and floppies, NFS directories etc) for Linux. This is referred to
     generically (under Unix) as "volume management" or "automounting"
     (the latter term is more often used with regards to network file
     systems while the former is exclusively used for local media).
     
     Under Solaris there is a daemon called 'vold' that manages CD's.
     
     Under Linux you can use the 'amd' (automount daemon) or an old
     program called "Supermount" (Stephen Tweedie, if I recall
     correctly). Under newer Linux kernels you can look for a module
     called "autofs".
     
     I haven't played with these much so I can't give real advice on
     using them. However, you now have some key words to search on. If
     you get one of them working in a way that seems like it would meet
     a typical requirements scenario --- write it up as a mini-HOWTO and
      solicit people to contribute sample configurations and descriptions
      for other common usage scenarios (or at least write up an
      "unmaintained" mini-HOWTO and encourage the readers to adopt and
      maintain it).
                        ____________________________
   
(?) More on: MS Apps Support

   From Marty Bluestein on Fri, 25 Dec 1998
   
   (?) Although my ire against Gates, et al would like to see a good
   platform running his apps that will probably be a moving target.
   Better, I think, to develop a good set of apps that can work on the
   docs that MS apps produce. MSs response would have to be to encumber a
   user's work with junk to make it incompatible with any other apps. The
   result of that could very well be disaster for MS. Could you imagine
   having your work suddenly become incomprehensible because of the cute
   little things your app put in it?
   
   Marty
   
     (!) I don't have to imagine this scenario. I've seen it happen many
     times.
                        ____________________________
   
(?) MS Applications Support For Linux

   From Marty Bluestein on Fri, 25 Dec 1998
   
   You are right on. My appreciation of MS coincides with yours. I wish I
   had the time and the money to pursue that emulation of 95 and NT. Even
   better would be a good, competitive set of apps. Corel's latest
   release for Linux may indicate some movement in that direction. TNX
   for your response. Happy Xmas.
   Marty Bluestein
                        ____________________________
   
(?) More on: MS Apps Support

   From Marty Bluestein on Sat, 26 Dec 1998
   
   (?) I've just installed Redhat. It is "auto loading". I now have a
   problem which Redhat and I must resolve. I'll write it up and post it
   when it's corrected. To wit: WIN95 now crawls along as if it had a
   bigger bag of sand on its back. Re MS: I'd rather see MS broken up
   into two separate companies. One doing APPS and the other doing OS.
   TNX for responses. HAPPY XMAS, MERRY CHANUKAH, SWINGING KWANZA and
   JOYFUL RAMADAN.
   
      (!) I can't help with the Win '95 problem. It's probably confused
      about WINS (Windows Internet Name Service) or some other networking
      issue.
     
     Re: Breaking up MS. Historically this has done NO GOOD with other
     monopolies. Go read a decent historical account and business
     analysis on JP Morgan (and wash that down with some Noam Chomsky).
     I'd recommend a book for you --- but I'd have to refer to my father
      to find one. My knowledge is definitely second-hand on this --- but
     I've discussed it with a couple people whose background in the
     fields of finance and history I respect.
     
     Breaking them up is a fundamentally flawed approach. The
     controlling interests -- the OWNERS will still be the same. The
     resulting companies would clearly have mutual interests,
     complementary product lines, and interlocking boards of directors.
     
     Unfortunately this approach would "appease" the masses and actually
     work in Bill G's favor (as it did with JP Morgan). It will allow
     the DoJ to appear competent and be touted as a "tough on
     (corporate) crime" victory. So, it's the most likely outcome.
     
      It's also just about the worst way to deal with the problem (it's
      even worse than sitting back and doing nothing) since it sets
      another bad precedent.
            ____________________________________________________
   
(?) Linux as a Home Internet Gateway and Server

   From Nilesh M. on Thu, 24 Dec 1998
   
   (?) Hi,
   
   I just have some questions about setting up linux to run as a server
   for my home computer and to share an internet connection and also to
   setup as a server for the internet.
   
     (!) O.K. That is three different roles:
     
    1. Network Server (which services)
    2. Internet Gateway (proxy and/or masquerading)
    3. Internet Host/Server (which services)
       
     It is possible for Linux to concurrently handle all three roles ---
      though having all of your "eggs in one basket" may not be a good
     idea with regards to security and risk assessment.
     
     Traditionally your concerns would also have encompassed the
      capacity planning --- but a typical modern PC with a 200MHz-plus
      Pentium processor, 32Mb to 256Mb of RAM and 4Gb to 12Gb of disk
      space has quite a bit of capacity compared to the Unix hosts of
      even 5 to 10
     years ago.
     
   (?) Do you know if I can setup a linux box with one 10mbs ethernet for
   a modem and a 100mbs ethernet for a network in my house? Where do I
   start and how would I do it.
   
     (!) I presume you're referring to a 10Mbps ethernet for a cable
     modem, ISDN router (like the Trancell/WebRamp, or the Ascend
     Pipeline series), or a DSL router. These usually provide 10Mbps
     interfaces and act as routers to the cable, ISDN or xDSL services
     to which you're subscribed.
     
     It's certainly possible for you to install two or three ethernet
     cards into a Linux system. Any decent modern 100Mbps ethernet card
     will also automatically handle 10Mbps if you plug them into such a
     LAN. So you'd just put two of these cards into your system, plug
     one into your router and the other into your highspeed hub.
     
     You often have to add the following line to your /etc/lilo.conf to
      get the kernel to recognize the second ethernet card:
     
     append="ether=0,0,eth1"
     
     ... the 0,0, is a hint to autoprobe for the IRQ and I/O base
     address for this driver. Alternatively you might have to specify
     the particulars for your cards with a line like:
     
     append="ether=10,0x300,eth0 ether=11,0x280,eth1"
     
     ... instead. This line must be present in each of the Linux
     "stanzas" (groups of lines which refer to different Linux kernels
     with their corresponding root filesystem pointers and other
     settings).
     
     Of course you must run the /sbin/lilo command to read any changes
     in your /etc/lilo.conf file and "compile" them into a new set of
     boot blocks and maps.
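      In context, a complete stanza with the append= line might look like
      this (the label and device names are illustrative, not taken from
      the reader's system):

```
# /etc/lilo.conf --- one Linux stanza; the append= line goes inside it
image=/boot/vmlinuz
        label=linux
        root=/dev/hda1
        read-only
        append="ether=0,0,eth1"
```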
     
     If you have a normal modem connected to the system --- it's
     possible to use that as well. You can use PPP (the pppd program) to
     establish Internet connection over normal phone lines. There are
     also internal ISDN, T1 "FRADs" (frame relay access devices) and
     CSU/DSU (or Codecs --- coder decoder units) that can be installed
     into your PC and controlled by Linux drivers.
     
      I've seen references to ipppd being used to control some sorts of
      internal ISDN cards. I think most of the others have drivers that
      make them 'look like' a modem or ethernet device to Linux.
     
   (?) I just want to buy two 100mbs ethernet cards to hook up to each
   other... so I don't think I'd need a hub do I? I only want two
   computers hooked up to this makeshift network.
   
     (!) You either need a hub, or you need a "crossover" ethernet patch
     cord. A normal cat 5 ethernet patch cord isn't wired correctly to
     directly connect two ethernet cards.
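      (For reference, a 10/100BaseT crossover cord simply swaps the
      transmit pair on one connector into the receive pair on the other
      --- this is standard twisted-pair wiring, not anything specific to
      this reader's setup:)

```
pin 1 (TX+)  <-->  pin 3 (RX+)
pin 2 (TX-)  <-->  pin 6 (RX-)
```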
     
   (?) Any help would be appreciated, especially something like a link to
   a document which would give me a step by step setup.
   
     (!) I don't have such a link. As you may have realized there are a
     couple of hundred HOWTO documents on Linux and many of them relate
     to configuring various services.
     
     Let's go back to our list of different roles:
     
     Network Server (which services)
     Internet Gateway (proxy and/or masquerading)
     Internet Host/Server (which services)
     
     Starting at the top. You have a small network that is not normally
     connected to the Internet (there isn't a permanent dedicated
     Internet connection). So, you probably want to use "private net"
     addresses for your own systems. These are IP addresses that are
     reserved --- they'll never be issued to any host on the Internet
     (so you won't create any localized routing ambiguities by using
     them on your systems).
     
     There are three sets of these numbers (reserved by RFC 1918):

     192.168.*.*                       256 class C nets
     172.16.*.* through 172.31.*.*     16 class B nets
     10.*.*.*                           1 class A net
     
     ... I use 192.168.42.* for my systems at home.
     
     ... These addresses can also be used behind firewalls and Internet
     gateways. The classic difference between a router and a gateway is
     that a router just routes packets between networks (operating at
     the "network" layer of the ISO OSI reference model) while a
     gateway does translation between protocols (operating at the
     application or other upper layers of the reference model).
     
     In the case of Linux we can configure our one Linux system to act
     as local server and as an Internet gateway. Our gateway can operate
     through "proxying" (using SOCKS or other applications layer
     utilities to relay connections between our private network and the
     rest of the world), or through IP masquerading (using network
     address translation code built into the kernel to rewrite packets
     as they are forwarded --- sort of a network layer transparent
     proxying method).
     
     However, we're getting ahead of ourselves.
     
     First we need to setup our Linux LAN server. So we install Linux
     and configure its internal ethernet card with an IP address like
     192.168.5.1. This should have a route that points to our internal
     network, something like:
     
     route add -net 192.168.5.0 eth0
     
     ... to tell the kernel that all of the 192.168.5.* hosts will be on
     the eth0 segment.
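
     Assuming your distribution's boot scripts haven't already done
     this for you, the whole sequence (using the 2.0-era net-tools
     syntax; the address and device name are just examples) might look
     like:

```
# Bring up the internal ethernet interface with a private-net address
/sbin/ifconfig eth0 192.168.5.1 netmask 255.255.255.0 up
# Tell the kernel that 192.168.5.* lives on that segment
/sbin/route add -net 192.168.5.0 netmask 255.255.255.0 dev eth0
# Verify the result:
/sbin/route -n
```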
     
     Now, what services do you want to make accessible to your other
     systems?
     
     By default a Linux installation makes a common set of services
     (telnet, NFS, FTP, rsh, rlogin, sendmail/SMTP, web, samba/SMB, POP
     and IMAP etc) available to any system which can reach you. Most of
     these are accessible via the "internet service dispatcher" called
     'inetd'. The list of these services is in the /etc/inetd.conf file.
     Some other services, such as mail transport and relaying
     (sendmail), and web (Apache httpd) are started in "standalone" mode
     -- that is they are started by /etc/rc.d/*/S* scripts. NFS is a
     special service which involves several different daemons --- the
     portmapper and mountd in particular. That's because NFS is an "RPC"
     based service.
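
     A minimal way to trim that default set is to comment out the
     unwanted lines in /etc/inetd.conf and send inetd a HUP signal.
     Here's a sketch that rehearses the edit on a scratch copy first
     (the sample lines are typical, not copied from any particular
     distribution):

```shell
# Make a scratch copy standing in for /etc/inetd.conf
cat > /tmp/inetd.conf.sample <<'EOF'
ftp     stream  tcp     nowait  root    /usr/sbin/tcpd  in.ftpd -l -a
telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd
shell   stream  tcp     nowait  root    /usr/sbin/tcpd  in.rshd
EOF

# Comment out the services you don't want to offer (here, rsh):
sed 's/^shell/#shell/' /tmp/inetd.conf.sample > /tmp/inetd.conf.new
grep '^#shell' /tmp/inetd.conf.new

# On the real file you'd then signal inetd to re-read its configuration:
#     kill -HUP `cat /var/run/inetd.pid`
```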
     
     The fact that any system that can route packets to you can request
     any service your system offers, and the fact that most Unix and
     Linux systems offer a full suite of services "right out of the box"
     has classically been a major security problem. Any bug in any
     service's daemon could result in a full system compromise which
     could be exploited from anywhere in the world. This is what led to
     the creation of TCP Wrappers (which is installed in all major Linux
     distributions by default --- but is configured to be completely
     permissive by default). It is also why we have "firewalls" and
     "packet filters."
     
     It's tempting to think that you'll be too obscure for anyone to
     break into. However, these days there are many crackers and 'script
     kiddies' who spend an inordinate amount of time "portscanning" ---
     looking for systems that are vulnerable --- taking them over and
     using them for further portscanning, sniffing, password cracking,
     spamming, warez distribution and other activities.
     
     I recently had a DSL line installed. So, I'm now connected to the
     Internet full time. I've had it in for less than a month and there
     are no DNS records that point to my IP addresses yet. I've already
     had at least three scans for a common set of IMAP bugs and one for a
     'mountd' bug. So, I can guarantee you that you aren't too obscure
     to worry about.
     
     You are also at risk when you use dial-up PPP over ISDN or POTS
     (plain old telephone service) lines. The probabilities are still
     reasonably on your side when you do this. However, it's worth
     configuring your system to prevent these problems.
     
     So, you'll want to edit two files as follows:
     
                /etc/hosts.allow
                ALL:LOCAL

                /etc/hosts.deny
                ALL:ALL

     ... that's the absolute minimum you should consider. This
     configuration means that the tcpd program (TCP Wrappers) will allow
     access to "local" systems (those with no "dots" in their host
     names, relative to your domain), and will deny access to all
     services by all other parties.
     
     For this to work properly you'll have to make sure that all of your
     local hosts are given proper entries in your /etc/hosts file and/or
     that you've properly set up your own DNS servers with forward and
     reverse zones. You'll also want to make sure that your
     /etc/host.conf (libc5) and/or /etc/nsswitch.conf (glibc2, aka
     libc6) are configured to give precedence to your hosts files.
     
     My host.conf file looks like:
     
                # /etc/host.conf
                order hosts bind
                multi on

     and my /etc/nsswitch.conf looks like:
     
                passwd: db files nis
                shadow: db files nis
                group:  db files nis

                hosts:          files dns
                networks:       files dns

                services:       db files
                protocols:      db files
                rpc:            db files
                ethers:         db files
                netmasks:       files
                netgroup:       files
                bootparams:     files

                automount:      files
                aliases:        files

     glibc2 has hooks to allow extensible lookup for each of these
     features through modular service libraries. Thus we'll soon be
     seeing options to put 'LDAP' in this services switch file --- so
     that hosts, user and group info, etc could be served by an nss_ldap
     module which would talk to some LDAP server. We could see some user
     and group information served by "Hesiod" records (over DNS or
     secure DNS protocols) using some sort of nss_hesiod module. We
     might even see NDS (Novell/Netware directory services) served via
     an nss_nds module.
     
     But I'm straying from the point.
     
     Once you've done this, you should be able to provide normal
     services to your LAN. Precisely how you set up your client system
     depends on what OS they run and which services you want to access.
     
     For example, if you want to share files over NFS with your Linux or
     other Unix clients, you'd edit the /etc/exports file on your Linux
     server to specify which directory trees should be accessible to
     which client systems.
     
     Here's an exports file from one of my systems:
     
# /       *.starshine.org(ro,insecure,no_root_squash)
# /       192.168.5.*(ro,insecure,no_root_squash)
/etc/   (noaccess)
/root/  (noaccess)
/mnt/cdrom 192.168.5.*(insecure,ro,no_root_squash)

     ... note I've marked two directories as "noaccess" which I use when
     I'm exporting my root directory to my LAN. I do this to prevent any
     system in the rest of my network from being able to read my
     configuration and passwd/shadow files. I only export my root
     directory in read-only mode, and I only do that occasionally and
     temporarily (which is why these are commented out at the moment). My
     CDROM I leave available since I'm just not worried about anyone in
     the house reading data off of any CD's I have around.
     
     Keep in mind that NFS stands for "no flippin' security" --- anyone
     in control of any system on your network can pose as any non-root
     user and access any NFS share "as" that user (so far as all
     filesystem security permissions are concerned). NFS was designed for
     a time when sites only had a few host systems and all of those were
     connected and tightly controlled in locked rooms. NFS was never
     intended for use in modern environments where people can carry a
     Linux, FreeBSD, or even Solaris x86 system into your office under
     one arm (installed on a laptop) and connect it to the nearest
     ethernet jack (now scattered throughout every corner of modern
     offices --- I've seen them in the reception areas of some sites).
     
     To do filesharing for your Windows boxes you'd configure Samba by
     editing /etc/smb.conf. To act as a fileserver for your MacOS
     systems you'd install and configure 'netatalk'. To emulate a
     Netware fileserver you'd install Mars_nwe, and/or buy a copy of the
     Netware Server for Linux from Caldera (http://www.caldera.com).
     
     There are ways to configure your system as a printer server for any
     of these constituencies as well.
     
     Beyond file and print services we move to the "commodity internet
     services" like FTP, telnet, and HTTP (WWW). There's generally no
     special configuration necessary for these (if you've installed any
     of the general purpose Linux distributions).
     
     If you create an FTP account in your /etc/passwd file then
     anonymous FTP will be allowed to access a limited subdirectory of
     files. If you rename this account to "noftp" or to "ftpx" or to
     anything other than "ftp", and/or if you remove the account
     entirely, then your system will not allow anonymous FTP at all. If you allow
     anonymous FTP you can simply put any file that you want made public
     into the ~ftp/pub directory --- and make sure that they are
     readable. By default the FTP services are run through tcpd so they
     will respect your hosts.allow/hosts.deny settings.
     
     If you're going to set up a "real" FTP site for public mirroring or
     professional "extranet" applications you'd want to use ncftpd,
     proftpd, or beroftpd instead of the now aging WU-ftpd or the old
     BSD FTP daemon (in.ftpd). These alternative FTP daemons have their
     own configuration files and can support virtual hosting and other
     features. In some of them you can create "virtual users" ---
     accounts that are only valid for FTP access to specific FTP
     subtrees and/or virtually hosted services --- accounts that cannot
     be used to access any other service on the system.
     
     Web services are controlled with their own configuration files.
     There are a couple of whole books just on the configuration of
     Apache servers. By default they let anyone view any web pages that
     you put into the 'magic' directories (/home/httpd/docs or something
     like that).
     
     It's possible to limit access to specific directories according to
     the IP addresses (or reverse DNS names) of the clients. As with TCP
     Wrappers this should not be considered a form of
     "authentication" --- but it can be used to distinguish between
     "local" and "non-local" systems IF YOU HAVE ANTI-SPOOFING PACKET
     FILTERS in place (a part of any good firewall).
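
     With Apache 1.x such an address-based restriction looks something
     like this (the directory path and network are examples, adjust for
     your own layout):

```
<Directory /home/httpd/docs/private>
    order deny,allow
    deny from all
    allow from 192.168.5.
</Directory>
```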
     
     telnet, rlogin, rsh, and other forms of interactive shell access
     are generally pretty easy to set up. Like many Unix/Linux services
     it is harder to disable or to limit access to these services than
     it is to allow them.
     
     Under Red Hat Linux access to these and other "authenticating"
     services can be controlled by editing PAM configuration files under
     /etc/pam.d/
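
     For example, adding a pam_listfile line near the top of
     /etc/pam.d/rlogin would restrict that service to users named in a
     file (the file path /etc/rlogin.users is just an assumption for
     illustration):

```
auth  required  /lib/security/pam_listfile.so \
      onerr=fail item=user sense=allow file=/etc/rlogin.users
```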
     
     So, the short answer to the question "How do I set up Linux as a
     server?" is you install it, setup its address and routing, then you
     install and configure the services that you want to provide.
     
     Now, when we want to use Linux as a gateway to the Internet (or
     any other network --- to connect your home network to your office or
     to a friend's network) you first resolve the addressing and routing
     issues (set up your second interface and add the appropriate
     routes). Then you use IP masquerading or proxy services (SOCKS) to
     allow your systems (using the non-routable "private net" addresses)
     to access services on the Internet.
     
     To use IP masquerading with the old ipfwadm code (as present in the
     standard 2.0.x kernels) you just issue a command like:
     
     ipfwadm -F -a accept -m -D 0.0.0.0/0 -S 192.168.5.0/24
     
     ... which adds (-a) a rule to the forwarding (-F) table to "accept"
     for "masquerading" (-m) any packets that are "destined for" (-D)
     anywhere (0.0.0.0/0) and are from source IP addresses (-S) that
     match the pattern 192.168.5.0/24 (an address mask that specifies
     the first 24 bits, or three octets as the "network portion" of the
     address --- and therefore covers that whole class C network).
     
     You should definitely use a modular kernel and almost certainly
     should have 'kerneld' loaded when you use this masquerading
     technique. That's because there are several common protocols
     (especially FTP) which require special handling for masquerading
     (in the case of FTP there's a data connection that comes back from
     the server to the client, while the control connection goes in the
     usual direction, from the client to the server).
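
     With a modular 2.0.x kernel those protocol helpers load as kernel
     modules; a typical sequence (the module names are the ones shipped
     with the 2.0 masquerading code) would be:

```
# Load the FTP masquerading helper (and any others you need)
/sbin/modprobe ip_masq_ftp
/sbin/modprobe ip_masq_irc
# With kerneld running, these can also be loaded on demand
```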
     
     For this reason I actually prefer applications proxying. To use
     that you go to the "contrib" directory at any Red Hat site and
     download the SOCKS server and client packages. You install the
     server on your Linux gateway then you install the clients on any of
     your Linux clients.
     
     On the SOCKS gateway you create a file: /etc/socks5.conf with
     something like this for its contents:
     
                route   192.168.5.     -       eth0
                permit  -       -       -       -       -       -

     ... there are many options that you can use to limit access to the
     socks gateway --- but this is the simplest working example.
     
     On the Linux clients you create a file named /etc/libsocks5.conf
     with an entry in it that looks something like:
     
        socks5     -    -       -       -       192.168.5.2
        noproxy    -    192.168.5.      -

     ... where the ".2" address is the one on which I was running this
     SOCKS server.
     
     For the non-Linux clients you have various different configuration
     methods. Most Windows TCP/IP utility suites (other than
     Microsoft's) support SOCKS proxies. There are replacement
     WINSOCK.DLL's that support this proxying protocol transparently for
     most/all other Windows services. The MacOS applications also seem
     to support SOCKS pretty widely.
     
     There are a few alternatives to NEC's SOCKS servers. I've found
     "DeleGate" to be a pretty good one (search for it on Freshmeat).
     DeleGate has the advantage that you can use it as a "manually
     traversed" proxy as well as a "SOCKS" compatible one. The SOCKS
     proxying protocol allows the client software to communicate with
     the proxy server to relay information about the request to it, so
     that it can, in turn, relay that to a process that runs on the
     external servers. This is called "traversal."
     
     Non-SOCKS proxies have to have some other traversal mechanism. Many
     of them are "manually traversed" --- I telnet or ftp to the TIS
     FWTK proxies (for example) and I log in as
     "myname@myrealtarget.org." --- in other words I encode additional
     account and destination information into the prompts where I'd
     normally just put my account name.
     
     DeleGate allows you to use this manual traversal mechanism when you
     are stuck with a non-SOCKSified client.
     
     I've also seen reference to another SOCKS server package called
     "Dante" --- that's also listed at Freshmeat
     (http://www.freshmeat.net).
     
     There are also a few other types of proxies for special services.
     For example the Apache web server, and the CERN web server and a
     few others can be used as "caching web proxies." Squid can proxy
     and cache for web and FTP.
     
     Some services, such as mail and DNS are inherently "proxy" capable
     by design. I can't adequately cover DNS or e-mail services in this
     message. There are full-sized books on each of these.
     
     So that's the very basics of using Linux as a gateway between a
     private LAN and the Internet. If you get a set of "real" IP
     addresses, and you insist on using these to allow "DRIP" (directly
     routed IP) into your LAN you don't have to do any of this IP
     masquerading or proxying --- but you should do some packet
     filtering to protect your client systems and servers.
     
     Good packet filtering is difficult. I alluded to one of the problems
     when I pointed out that FTP involves two different connections ---
     an outgoing control connection and an incoming data connection.
     There's also a "PASV" or "passive" mode which can help with that
     --- but it still involves two connections. This wreaks havoc with
     simple packet filtering plans since we can't just blindly deny
     "incoming" connection requests (based on the states of the "SYN"
     and "ACK" flags in the TCP packet headers. One of the "advantages"
     (or complications) of "stateful inspection" is that it tracks these
     constituent connections (and the TCP sequencing of all connections)
     to ensure consistency.
     
     A decent set of packet filters will involve much more code than the
     set of proxying and masquerading examples I've shown here. I
     personally don't like DRIP configurations. I think they represent
     too much risk for typical home and small business networks.
     However, here's a sample:
     
# Flush the packet filtering tables
/root/bin/flushfw

# Set default policy to deny

/sbin/ipfwadm -I -p deny
/sbin/ipfwadm -F -p deny
/sbin/ipfwadm -O -p deny


# Some anti-martian rules -- and log them
        ## eth1 is outside interface

/sbin/ipfwadm -I -o -W eth1 -a deny -S 192.168.0.0/16
/sbin/ipfwadm -I -o -W eth1 -a deny -S 172.16.0.0/12
/sbin/ipfwadm -I -o -W eth1 -a deny -S 10.0.0.0/8
/sbin/ipfwadm -I -o -W eth1 -a deny -S 127.0.0.0/8

# Some anti-leakage rules -- with logging
        ## eth1 is outside interface

/sbin/ipfwadm -O -o -W eth1 -a deny -S 192.168.0.0/16
/sbin/ipfwadm -O -o -W eth1 -a deny -S 172.16.0.0/12
/sbin/ipfwadm -O -o -W eth1 -a deny -S 10.0.0.0/8
/sbin/ipfwadm -O -o -W eth1 -a deny -S 127.0.0.0/8

        ## these are taken from RFC1918 --- plus
        ## the 127.* which is reserved for loopback interfaces


# An anti-spoofing rule -- with logging
/sbin/ipfwadm -I -o -W eth1 -a deny -S 222.250.185.16/28

# No talking to our fw machine directly
        ## (all packets are destined for forwarding to elsewhere)

/sbin/ipfwadm -I -o -a deny -D 222.250.185.14/32
/sbin/ipfwadm -I -o -a deny -D 222.250.185.30/32


# An anti-broadcast Rules
        ## (block broadcasts)
/sbin/ipfwadm -F -o -a deny -D 222.250.185.15/32
/sbin/ipfwadm -F -o -a deny -D 222.250.185.31/32

# Allow DNS
        ## only from the servers listed in my caching server's
        ## /etc/resolv.conf

/sbin/ipfwadm -F -a acc -D 222.250.185.18/32 -P udp  -S 192.155.183.72/32
/sbin/ipfwadm -F -a acc -D 222.250.185.18/32 -P udp  -S 192.174.82.4/32
/sbin/ipfwadm -F -a acc -D 222.250.185.18/32 -P udp  -S 192.174.82.12/32

# anti-reserved ports rules
        ##  block incoming access to all services
/sbin/ipfwadm -F -o -a deny -D 222.250.185.16/28 1:1026 -P tcp
/sbin/ipfwadm -F -o -a deny -D 222.250.185.16/28 1:1026 -P udp


# Diode
        ## (block incoming SYN/-ACK connection requests)
        ## breaks FTP
/sbin/ipfwadm -F -o -a deny -D 222.250.185.16/28 -y

## /sbin/ipfwadm -F -o -i acc \
##      -S 0.0.0.0/0 20 -D 222.250.185.16/28 1026:65535 -y
##      simplistic FTP allow grr!


# Allow client side access:
        ## (allow packets that are part of existing connections)
/sbin/ipfwadm -F -o -a acc -D 222.250.185.16/28 -k

     There are bugs in that filter set. Reading the comments you'll see
     where I know of a rule that handles most FTP --- but opens risks to
     any services that run on ports above 1024 --- like X Windows
     (6000+) etc. This would simply require the attacker to have control
     of their system (be root on their own Linux or other Unix system
     --- not too tough) and for them to create packets that appeared to
     come from their TCP port 20 (the FTP data port). That's also
     trivial for anyone with a copy of 'spak' (send packet).
     
     So, I have this rule commented out and I don't show a set of rules
     to allow localhost systems to connect to a proxy FTP system.
     
     Note that these addresses are bogus. They don't point to anything
     that I know of.
     
     The only parts of this set of filters that I feel confident about
     are the parts where I deny access for incoming spoofed packets (the
     ones that claim to be from my own addresses or from non-routable or
     "martian" addresses like localhost). I also have rules to prevent
     my system from "leaking" any stray private net and/or martian
     packets out into the Internet. This is a courtesy --- and it has
     the practical benefit that I'm much less likely to "leak" any
     confidential data that I'm sharing between "private net" systems on
     my LAN --- even if I screw up my routing tables and try to send
     them out.
     
     I've read a bit about ipfilter (Darren Reed's IP Filter package ---
     which is the de facto standard on FreeBSD and other BSD systems and
     which can be compiled and run on Linux). This seems to offer some
     "stateful" features that might allow one to more safely allow
     non-passive FTP. However, I don't know the details.
     
     The 2.2 kernels will include revamped kernel packet filtering which
     will be controlled by the 'ipchains' command. This is also
     available as a set of unofficial patches to the 2.0 series of
     kernels. This doesn't seem to offer any "stateful inspection"
     features but it does have a number of enhancements over the existing
     ipfwadm controlled tables.
     
     Your last question was about configuring Linux as an Internet
     server (presumably for public web pages, FTP or other common
     Internet services).
     
     As you might have gathered by now, that is the same as providing
     these services to your own LAN. Under Linux (and other forms of
     Unix) any service defaults to world-wide availability (which is why
     we have firewalls).
     
     I've spent some time describing how Linux and other Unix systems
     need to be specially configured in order to limit access to
     services to specific networks. Otherwise someone in Brazil can
     print a document on your printer as easily as you can.
     
     To be an Internet server all you have to do is have a static IP
     address (or regularly update your address record at
     http://www.ml.org). Once people know how to route requests to your
     server --- assuming you haven't taken steps to block those requests
     --- Linux will serve them.
     
     Most of the challenges in setting up networks relate to addressing,
     routing, naming and security. Most of us still use "static" routing
     for our own networks --- just manually assigning IP addresses when
     we first deploy our new systems. Most of us with dial-in PPP get
     dynamic IP addresses from our ISP's. Some sites now use DHCP to
     provide dynamic addresses to desktop systems (servers still need
     consistent addresses --- and using DHCP for those just introduces
     additional opportunities for failure).
     
     For routing, subnetting, and LAN segmentation issues --- read my
     posting on routing from last month (I think Heather is publishing
     it this month). That's about 30 pages long!
     
     (The one thing I glossed over in that was "proxyarp" on ethernet.
     It's covered in another message this month so glance at it if you'd
     like to learn more.)
     
     I hope I've imparted some hint on the importance of considering
     your systems security. Even if you have nothing of value on your
     systems --- if the thought of some cracker vandalizing your files
     for kicks is of no concern to you --- it is irresponsible to
     connect a poorly secured system to the Internet (since your
     compromised system may be used to harass other networks).
     
   (?) I would like to write a FAQ about this after I'm done... hopefully
   I can help others after a bit of experimenting myself.
   
     (!) While the offer is appreciated --- it would be more of a book
     than an FAQ. However, I would like to see some "Case Studies" ---
     descriptions of typical SOHO (small office, home office),
     departmental, and enterprise Linux (and heterogeneous)
     installations.
     
     These would include network maps, "sanitized" examples of the
     addresses, routing tables and configuration files for all services
     that are deployed in the network, on all of the clients and servers
     present. Company, domain and other names, and IP addresses would be
     "anonymized" to discourage any abuse and minimize any risk
     represented by exposure. (E-mail addresses of the contributors
     could be "blind" aliased through my domain or hotmail, or
     whatever).
     
     The important thing here is to define the precise mixture of
     services that you intend to provide and the list of users and
     groups to which you intend to provide them. This is a process that
     I've harped on before -- requirements analysis.
     
     You need to know who you are serving and what services they need.
     
   (?) Thanks
   Nilesh.
            ____________________________________________________
   
(?) Persistent Boot Sector

   From Hummingbird Designs on Thu, 24 Dec 1998
   
   (?) Hi,
   
   I installed Linux on my PC at work and had everything working with
   System commander, I have to use NT for some apps we use at work.
   
   anyway, I was trying to get the nic card working so I tried using the
   setup tool to install the kernel from the Cdrom that is used to
   install linux off a network. Now everytime I turn on the machine it
   gives me the screen as if I had installed a bootdsk like when you
   first install Linux. I have done EVERYTHING I know of to get that out
   of there . . .I used a zerofill utility that goes over each and every
   sector of every track and fills it with 0's including the MBR. and
   that damn message still comes up every time I boot. . . I was thinking
   of removing my Hard drive and seeing if it flashed my BIOS or
   something cause according to Quantum (it's a Quantum drive) their
   utility is almost like a low level format.
   
     (!) "the setup tool..." (what setup tool?) "the screen as if I had
     installed a bootdsk[sic]" (what screen?) "EVERYTHING" (what is
     "everything?"). "zerofill utility" (what utility?) "that damn
     message" (what damn message?).
     
     You do seem to be a bit sketchy on the specifics so I'll have to
     guess.
     
     You had (some distribution of) Linux installed on your system in a
     dual boot configuration with NT. You were using System Commander as
     your primary boot manager. Presumably you installed LILO (the Linux
     loader) into the "logical boot record" (the "superblock") of one of
     your Linux filesystems (presumably the root fs). While trying to
     configure or troubleshoot some problem with a network card (NIC)
     you used some sort of "setup" utility which somehow configured your
     system to bypass System Commander's boot record (presumably by
     overwriting it with a copy of LILO). You've tried some ways to
     restore your System Commander installation, and/or to build a new
     MBR, and those have been unsuccessful.
     
     O.K. Given that guess work I have a hypothesis. You may have run
     something like 'FDISK /MBR' from your NT boot disk. This may have
     enabled the active partition in your MBR. The DOS MBR code would
     load the logical boot record of the active partition. If your Linux
     partition (with its copy of LILO in the superblock) just happened
     to be the active partition at the time --- you might see that copy
     of LILO (one of two that had been installed on your disk, one on
     the MBR and the other in the LBR/superblock) as the first screen on
     boot up.
     
     (I'm not sure this scenario accounts for all of your symptoms since
     this is all based on guesswork).
     
     I have no idea what your "zero fill" utility is doing --- but it
     almost certainly is not zero'ing out track zero of your hard drive
     (including the MBR). That would render the system unbootable and
     would destroy the primary copy of your partition table (the last 66
     bytes or so of the MBR). The Linux/Unix command to do this is very
     simple:
     
     dd if=/dev/zero of=/dev/hda bs=512 count=63
     
     ... where /dev/hda is the first IDE drive, 512 is the bytes per
     sector and count is the number of sectors in a typical track. DON'T
     DO THIS! (If you insist on doing this, first double check which
     device you want to use, the first IDE is /dev/hda and the first
     SCSI is /dev/sda, then check the number of sectors per track ---
     which should be listed in your CMOS setup for an IDE drive and
     would be listed in your vendor documentation and possibly by your
     SCSI adapter diagnostics firmware).
     
     You could save a copy of your MBR and partition table using dd with
     a command like:
     
     dd if=/dev/hda bs=512 count=1 of=/root/mbr.bin
     
     ... which you can use in scripts to compare and replace your MBR in
     future mishaps.
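
     The same save-and-compare pattern can be rehearsed harmlessly on an
     ordinary file before you point it at /dev/hda (everything below is
     a stand-in; no real device is touched):

```shell
# Create a 4-sector "disk" and put a recognizable pattern in sector zero
dd if=/dev/zero of=/tmp/fakedisk bs=512 count=4 2>/dev/null
printf 'BOOTCODE' | dd of=/tmp/fakedisk conv=notrunc 2>/dev/null

# Save "sector zero" the same way you'd save the real MBR
dd if=/tmp/fakedisk of=/tmp/mbr.bin bs=512 count=1 2>/dev/null

# Later: re-read the sector and compare it with the saved copy
dd if=/tmp/fakedisk of=/tmp/mbr.now bs=512 count=1 2>/dev/null
cmp /tmp/mbr.now /tmp/mbr.bin && echo "MBR unchanged"
```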
     
     It's possible that System Commander's boot loader is still in the
     MBR --- but that it's been configured to skip its opening
     menu/selection prompting and boot directly off of your Linux
     partition.
     
     Of course it's also possible that Linux has completely taken over
     your system; that it's run amok and overwritten every partition
     and drive on the system. In my experience that would only happen if
     you (or someone) told it to do this. I've never seen Linux touch
     any part of a hard drive unless it was "told" to do so. (Unlike
     MS-DOS, OS/2, and Windows which periodically trash the MBR when
     they hang --- apparently scribbling register or random memory
     contents over track zero, sector zero when those zeros just happen
     to be in the register during the dying spasms of those
     systems).
     
     There is virtually no chance that Linux touched your flash BIOS ---
     so this is not a bug in your firmware. I'd say that this "zerofill"
     utility is highly suspect. Obviously Linux users just use the 'dd'
     command for this sort of thing.
     
     As for how to fix your problem: you could try re-installing System
     Commander. I've never used it --- but it seems that it can find
     most types of partitions during installation --- so it should be
     able to find your NT and Linux filesystems and install a new copy
     of its boot loader code to start either of these systems. It is
     commercial software --- so it SHOULD come with some technical
     support. Perhaps they can walk you through the re-installation.
     
     Keep in mind that LILO can still be installed on your MBR, your
     superblock, or both, so it might still show up after you have
     System Commander or NT's boot manager installed. It should then
     only come up after you've selected an option from your primary boot
     loader. This can be a bit confusing --- so you can reconfigure LILO
     to bypass any prompts or delays when you're calling it in this
     fashion.
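
     A minimal /etc/lilo.conf for that silent, secondary role might look
     like this (the device names and kernel path are assumptions ---
     adjust them for your own layout, and re-run /sbin/lilo after
     editing to write the new boot sector):

```
boot=/dev/hda2       # install to the partition's boot record, not the MBR
delay=0              # no prompt or pause; boot the default image at once
image=/vmlinuz
    label=linux
    root=/dev/hda2
    read-only
```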
     
     Keep in mind that you can also find, download, and install
     LOADLIN.EXE into a DOS directory somewhere on your system. You can
     use that instead of LILO (it's a DOS program that loads Linux
     kernels). I've heard a rumor that there is an NT native console
     application (an NT .EXE that you'd run under a CMD.EXE shell) to
     load Linux. I've never seen it.
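
     For reference, a typical LOADLIN invocation looks something like
     the following (the paths and root partition are illustrative); you
     run it from plain DOS, or from a CONFIG.SYS boot menu entry or a
     batch file:

```
C:\LINUX\LOADLIN.EXE C:\LINUX\VMLINUZ root=/dev/hda2 ro
```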
     
     If you end up having to re-install Linux and NT (probably
     unnecessary --- but it might be the easiest way) you can configure
     Linux to boot from floppy and never touch the boot records on your
     hard disk. It's also possible to configure Linux to use some other
     hard disk on your system --- and not have it touch your primary
     drive at all.
     
     Read through back issues of this column and go through the various
     multi-boot HOWTO's and mini-HOWTO's at the LDP site
     (http://metalab.unc.edu/LDP) and its mirrors. There are many
     options.
     
   (?) C's from home and a nic card I know to install linux over the
   network and see if that gets rif of it.
   
     (!) I don't get this at all. How would you expect installing Linux
     (over a network or otherwise) to get rid of a Linux boot loader?
     
   (?) any help would be appreciated
   Brian Korsund
            ____________________________________________________
   
(?) Secondary MX Records: How and Why

   From Craig Capodilupo on Thu, 24 Dec 1998
   
   (?) Some domains have multiple MX records. Sometimes the MX record of
   lower preference, say 20, is an off-site domain. Does this off-site
   server have to be configured to hold mail until the primary exchanger
   is back online?
   
   I am going to use my UNIX server as a secondary mail exchanger but I
   am not sure if it has to be configured.
   
     (!) In the good old days there were no special tricks to providing
     secondary MX for your friends. They would just add your mail server
     to their DNS records, listing you as a "less preferred" mail
     exchanger (an MX record with a higher value than any of yours).
     Mail would be relayed automatically.
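
     In zone file terms the arrangement is just two MX records with
     different preference values (the names here are placeholders); the
     lower number is tried first:

```
example.com.    IN  MX  10  mail.example.com.    ; preferred exchanger
example.com.    IN  MX  20  relay.friend.net.    ; secondary, tried when 10 is down
```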
     
     This was in the days of "promiscuous mail relaying" --- it was
     easier to just let anyone relay mail through anyone else. However,
     just as venereal disease contributed to the demise of the "free
     love" promiscuity of the '60's --- the blight of spam has spelled
     the end of open e-mail relaying in our decade.
     
     The problem was that spammers would dump their e-mail on any open
     relay --- one piece of mail that might be addressed to thousands of
     hapless recipients (and with the return addresses forged on top of
     that).
     
     When you install 'sendmail' version 8.9.x or later, the open
     relaying to which earlier versions defaulted is now closed. You'll
     have to create a relay map (default location:
     /etc/mail/relay-domains) to enable relaying for your sites.
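
     The relay map itself is just a list, one domain or host per line,
     of the sites you'll agree to relay for (these names are made up):

```
# /etc/mail/relay-domains
example.com
friend-we-secondary.net
```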
     
     There are some questions that relate to this in the 'sendmail' FAQ
     at:
     
     http://www.sendmail.org/faq/section3.html#3.27
     
     ... although you could disable this feature and allow promiscuous
     relaying --- I'd not suggest this.
     
     You'd eventually get hit by a spammer and then you'll probably end
     up on Paul Vixie's "Real-time Blackhole List" (the RBL) or on
     "DorkSlayer's" ORBS (open relay blocking system). There are many
     sites these days that subscribe to these free DNS lookup services
     in their "check_relay" macros --- and deny any mail access
     whatsoever from any site listed on either of these.
     
     However, that should be all there is to it. Normally your mail
     would just get tossed into the queue at your MX secondary's site
     where it will languish until your site is back up (or less busy, or
     whatever). In other words whatever connectivity problem the
     original sender's site had in getting to your primary MX host will
     probably go away within a few hours --- and your secondary MX will
     relay your mail during its normal queue runs. The original sender
     will get delay notifications (4 hours, 4 days, etc) according to
     the settings in your secondary's configuration files.
     
     Some people use these features in their firewall configuration ---
     they place a higher MX host outside their main network (on the
     exposed network segment) --- and all outside mail has to hit it
     first (since they can never connect to the preferred hosts inside
     due to the packet filters). The packet filters then allow that
     exposed host (and only that exposed host) to transfer mail into
     the domain. Thus the potential attacker can't attempt to directly
     exploit bugs in the internal SMTP daemon (especially if the
     "exposed" host is behind an anti-spoofing screen, and has "source
     routing" disabled, which all Linux systems default to).
     
     A more elegant approach is to use "split DNS" --- so that the
     external/exposed MX host appears (to the outside world) to be the
     preferred mail destination while the real preferred system (to your
     internal systems, and to your exposed host itself) is sequestered
     on your internal network using non-routable "private net"
     addresses. The advantage to this is that your potential attackers
     don't have any information about your internal structure --- and
     they can't route packets to your internal hosts at all (those don't
     have "real" IP addresses). Thus the outside attacker has to resort
     to high wizardry to get packets to your hosts, before any exploits
     can even be attempted.
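
     Sketched as zone data, the split might look like this (all names
     and addresses are illustrative; with the BIND of this era you'd
     serve the two versions from separate name servers, one facing each
     side):

```
; zone as served to the outside world:
example.com.   IN  MX  10  gateway.example.com.
gateway        IN  A       192.0.2.10        ; the exposed relay

; zone as served internally (and to the gateway itself):
example.com.   IN  MX  10  mailhub.example.com.
mailhub        IN  A       10.1.1.5          ; RFC 1918 private address
```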
     
     (I should note that any attacks that can be carried through the
     mail contents will still get delivered to you. The bugs this
     protects you from are those in the TCP connection handling of the
     daemons --- not in the parsing of headers and message contents).
     
     I've heard of some sites that maintain separate queues for their
     relay neighbors. I don't know exactly how that works --- but it's
     similar to the way that ISP's maintain queues for their SMTP
     customers. Basically they create a rule (probably an entry in their
     mailertable) that calls the relay mailer with an extra parameter.
     Thus all the queue items end up in special, separate directories.
     Then the SMTP ETRN command can be used (by customers) to force a
     queue delivery (something like: 'sendmail -q -O
     QueueDirectory=/var/spool/mqueue.customerX') when the customer's
     connection comes up.
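
     An ETRN exchange is plain SMTP, so a session might look roughly
     like this (host names invented for illustration; the exact reply
     text varies by sendmail version):

```
$ telnet mail.isp.example 25
220 mail.isp.example ESMTP Sendmail 8.9.1
EHLO customer.example
250 mail.isp.example Hello customer.example
ETRN customer.example
250 Queuing for node customer.example started
QUIT
221 mail.isp.example closing connection
```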
     
     Then there are sites that deliver all mail to a given site into a
     single mail spool (mbox) file. Hopefully they are adding the
     "X-envelope-To:" headers as they do this. Then their clients use
     'fetchmail' to grab these messages, split them back out and
     dispatch them according to the delivery policies at the
     disconnected site.
     
     Personally I still prefer UUCP for handling mail to disconnected
     sites. However, it is getting increasingly difficult for new users
     to find people who understand UUCP. (Oddly one study showed that
     the use of UUCP hasn't decreased at all -- it's grown at a slow,
     steady couple of percent all along. However, compared to the
     explosive growth of the Internet it has seemed to disappear
     completely. I think UUCP is still a very good option
     for emerging countries and for anyone that isn't maintaining
     dedicated connections to the Internet --- though I'd say that a bit
     of work should be done on simple configuration tools and examples.
     It's easy enough to use UUCP as a transport for DNS/Internet
     "domain" style addresses. So we don't need to ever return to the
     bad old days of "bang paths").
     
   (?) TIA,
   Craig
            ____________________________________________________
   
(?) 'lpd' Bug: "restricted service" option; Hangs Printer Daemon

   From Michael Martinez on Thu, 24 Dec 1998
   
   (?) The lpd that RedHat linux supplies has a problem. If you send it a
   print job across the network, and you do not have an account on the
   print server, lpd forks a child, creates an entry for you in the queue,
   then hangs because it can't find your user id. Do you know a remedy
   for this?
   
   Michael Martinez
   System Administrator, C.S. Dept, New Mexico Tech
   
     (!) I think I read about this in the security mailing lists
     recently. It seems to be related to the "restricted service" (rs)
     option in your /etc/printcap.
     
     One option would be to remove the rs option from the printcap and
     use packet filtering and hosts_access (TCP_Wrappers) to restrict
     access to your print server(s).
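
     Assuming your lpd honors hosts_access (as that suggestion
     presumes), the wrapper files might read something like this (the
     subnet is an example):

```
# /etc/hosts.allow --- permit printing only from the local subnet
lpd : 192.168.1.

# /etc/hosts.deny --- refuse everyone else
lpd : ALL
```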
     
     Then look for updates to the package itself.
     
     The first thing to do is to report this to Red Hat Inc. after
     checking their web site for any updates to this package. First
     find the package name using 'rpm -qf /usr/sbin/lpd'. This will tell
     you which RPM package included the lpd command.
     
     Then connect to ftp://updates.redhat.com (or one of its mirror
     sites). I don't see one there yet. If you aren't already using the
     most current Red Hat version (5.2 at this point) then check for
     that package in the RPMS directory for the latest. Red Hat Inc
     normally embeds the version in the package and file names.
     
     My S.u.S.E. system (which uses the RPM format but ships a different
     suite of RPM files) reports lprold-3.0.1-14 as the package name
     that owns '/usr/sbin/lpd' --- so I'd look for a S.u.S.E. RPM that
     was later than that.
     
     Failing that look for a Debian package (an update) and try using
     "alien" to convert that into an RPM. Look up the Debian maintainer
     for that package at the http://www.debian.org web site.
     
     If that doesn't work, look for a canonical "home" site for the
     package (lpr/lpd is a classic BSD subsystem --- so looking at the
     FreeBSD, NetBSD, and/or OpenBSD sites for a later version of the
     "tarball" (sources in .tar format) might work). Look in the man
     pages and run 'strings' on the lpd binary --- and look through
     other docs (use rpm -ql <packagename> for a list of all files in
     that package) to see if an author or maintainer for the base
     package is listed. Then you can look at that maintainer's web site
     or FTP server, and/or possibly e-mail them.
     
     (The BSD sites are http://www.freebsd.org, http://www.netbsd.org,
     http://www.openbsd.org, in case you needed them.)
     
     If you have a competent programmer on hand (I am not a competent
     programmer) you could have them look through the sources and apply
     a fix. Then you'd e-mail the diffs to your patches to the
     maintainer of the package (possibly copying Red Hat Inc as well).
     If you also looked at the Debian site for an update you can copy
     their maintainer on your fix as well.
     
     They may not accept your patches --- but they will certainly
     appreciate the effort and it may help them focus on the right part
     of the code.
     
     This is how Linux got where it is today. (I've sent patches in on
     'sendmail', 'md5sum' and 'tripwire' in the past --- and I'm not a
     programmer. So anyone who does feel competent in the art should not
     be intimidated by the notion, and won't have to spend nearly as
     long poring over the sources as I did for my pathetic little
     suggestions).
     
     I'd like to suggest one modest "New Year's Resolution" to every
     Linux user:
     
     Find one bug or typo. Fix it.
     
     ... hunt through the man pages, docs, sources, etc of a few of your
     favorite packages. Find one thing that's wrong or missing, correct
     it (or find someone to do it with you) and submit the patch to the
     appropriate parties.
     
     Last year was the first year Linux was taken "seriously." Let's
     make this the year that we prove that the "open source" (TM)
     process is maintainable and yields truly superior and mature
     results.
                        ____________________________
   
(?) LPD forks and hangs/Linux

   From Michael Martinez on Sat, 26 Dec 1998
   
   Thanks a bunch for your great, documented help. Just so you know, RH
   5.2 ships with this problem. So, I'll check out the other resources
   you gave me. I've considered writing a patch for it - I might just do
   it!
   
   Merry Christmas,
   
   Michael Martinez
   System Administrator, C.S. Dept, New Mexico Tech
            ____________________________________________________
   
(?) Dual Boot Configurations

   From Justin Jenkins on Thu, 24 Dec 1998
   
   (?) I'm ashamed to ask this but I don't know how! I got a copy at
   work and installed it on an old 166. I would like to install it on my
   450. Can you help me?? -thanks Justin
   
     (!) Have you tried the HOWTO?
     
     http://metalab.unc.edu/LDP/HOWTO/mini/Linux+DOS+Win95+OS2.html
            ____________________________________________________
   
(?) Microtek Scanner Support: Alejandro's Tale

   From AmericanPride88 on Sat, 26 Dec 1998
   
   (?) Hello Alejandro, I'm pretty annoyed that when I went to download
   the file at Microtek Ibelieve it was this?/ epp 264 exe.
   
   They didn't have it available. They should at least include some info
   about their hardware on a CD so that my Hardware Wizard can set up
   the right driver for the Damn Scanner. Someone suggested I try a HP
   scanner. I believe I will return the C3 tomorrow. It was my Christmas
   present; it's been nothing but a disappointment to me. Thank you...
   
   If you can suggest a compatible Driver or anything else please do.
   Thanks again!
   
   Sincerely, Rebecca
   
     (!) Who is Alejandro?
     
     Rebecca, it looks like you have sent this message astray. First I'm
     not Alejandro, I'm Jim Dennis. You sent this to "answerguy@ssc.com"
     which is the access point to the Linux Gazette "Answer Guy" column
     --- providing free technical support (and a bit of spleen venting
     and curmudgeonly commentary) for users of Linux.
     
     Your message doesn't relate to Linux as far as I can see. We don't
     often use ".exe" files and Linux doesn't need a "Hardware Wizard"
     to find and use any of its devices (except for the guy at the
     keyboard, perhaps).
     
     The primary resource for supporting scanners under Linux would be
     SANE (Scanner Access Now Easy) at: http://www.mostang.com/sane
            ____________________________________________________
   
(?) Modem HOWTO Author Gets Offer RE: WinModems

   From bf347 on Sat, 26 Dec 1998
   
   (?) I'm the author of the new Modem-HOWTO. Someone sent me email
   saying that he could put me in touch with someone who might release
   the info needed to write a driver for Lucent LT Winmodems. Is anyone
   interested? Is the first line of this message truncated? It is on my
   dumb terminal. I'm writing this from a BBS that allows no editing of
   this message.
   
   Dave Lawyer
   bf347@lafn.org
   
     (!) Ironically I was just answering another correspondent who got
     burned by one of these things.
     
     As I said there, I'll never purchase any internal modem.
     
     However, I'll post this message to my editor (it should appear in
     the February LG if at all). Maybe someone else will be interested.
     
     [ February when you got it in December? I sometimes defer by one
     month, but not short messages like this one. Get more coffee. BTW
     the other querent has a Lucent WinModem, so I'm sure there's at
     least one interested reader. Contrary to usual, I've left his
     address available in case any readers want to make contact to take
     on the task. -- Heather ]
     
     You could also send a message to the SVLUG (Silicon Valley Linux
     Users Group) list --- since you appear to be in Los Angeles. I
     realize that L.A. doesn't have as large and active a LUG as the SF
     Bay and Silicon Valley areas --- though I've heard that you're
     working on it.
     
     Look at http://www.svlug.org and http://www.balug.org for a couple
     of Linux users groups up in Northern Cal that might have some
     interested programmers. Also be sure to shop it around at your
     local UG's and in the newsgroups and mailing lists. I'm sure that
     someone will pick off the job --- if the people at Lucent, or
     wherever, aren't too onerous.
     
     The message came through fine.
            ____________________________________________________
   
(?) Condolences to Another Victim of the "LoseModem" Conspiracy

   From simone on Sat, 26 Dec 1998
   
   (?) I think I already know the answer...
   
   Well, I have a Lucent 56K modem (it was a mistake; I didn't know
   winmodems existed, I was so happy with my old 14400)
   
   It doesn't work with Linux or Solaris (so sad)
   
   Do you have any suggestions?
   
     (!) Return the modem. Complain bitterly to the retailer and get
     their assurances that they will stop ripping off their customers
     with these pieces of garbage posing as computer peripherals.
     
     Then get an external modem. So far I've never heard of an external
     "winmodem" --- and I've heard that it's not feasible to design one.
     Sticking with external modems has always been a good idea. They are
     more expensive but they've always been better products. Generally
     they are more reliable (probably less power fluctuation and
     electronic crosstalk from the other components inside the PC). It's
     also safer and better for the rest of the PC (less crosstalk with
     the internal modem, less heat in the case, virtually no chance that
     a modem that gets zapped by a phone line power surge will destroy
     your CPU or memory chips in the computer, etc). Finally it's much
     more convenient for most users (you have status lights; you can
     move the modem to other systems easily and replace it with an old
     "test" modem easily).
     
     I personally will not ever buy an internal modem. Not ever!
     
   (?) thanks for reading
   sorry for bad English
   waiting for an answer
   
     (!) No problem. I still have some other messages in Spanish,
     Portuguese, and maybe some in Italian that I haven't answered yet. I
     haven't had the time to cut and paste them into Babelfish for the
     "rough" translation.
     
   (?) Un cordiale saluto Montanari
            ____________________________________________________
   
(?) Reading Audio Tapes using HP-DAT Drive

   From Thomas Kruse on Fri, 25 Dec 1998
   
   (?) Hi! I wonder, if you can help me with the following issue: I
   bought a brand new SCSI HP-DAT streamer. In the manual it is described
   to treat audio-dat tapes as if they were read-only. I tried to fetch
   the data from the tape, but I always get i/o errors. (I tried "cat
   /dev/st0" "dd if=/dev/st0...") Do I need special software or is it
   impossible to "read" audio tapes with Linux? (I heard rumors, that
   this is possible with special Win95 software)
   
   Regards, Thomas
   
     (!) That's an excellent question. I have absolutely no idea. I
     guess you could look at the Linux st driver sources and see if they
     need to be changed. I guess you might even write to the author or
     maintainer of the st driver to ask for advice.
     
     Looking under /usr/src/linux I find, in .../drivers/scsi/st.c that
     the 2.0.36 sources list Kai Makisara as the author. I've blind
     copied his addresses on this response.
     
     Kai, thanks for the work on the 'st' driver. What would prevent one
     from reading audio tapes using /dev/stX under Linux?
     
     I'm sorry if you're getting two copies of this; I wasn't sure which
     address from the st.c file to use.
     
     (Note: this message is in response to a Linux Gazette "Answer Guy"
     question. I'll be happy to post any response --- which may end up
     preventing future questions on this topic. If this is buried in an
     FAQ, HOWTO, or man page somewhere, please point us at it and
     forgive us for not finding it).
                        ____________________________
   
(?) More on: Reading Audio Tapes using HP-DAT Drive

   From Kai Makisara on Sat, 26 Dec 1998
   
   (?) There are (at least) two issues when using audio DAT tapes in a
   computer DAT drive:
   
    1. You may or may not be successful in using audio media to record
       digital data. The tape cartridge does not contain the MRS (Media
       Recognition System) identification data that most of the digital
       tapes nowadays have. The drive uses this data to determine the
       tape length, etc. By default, the HP drives I have seen treat any
       non-MRS tapes read-only. You can change this with a switch. I
       assume this is what the HP manual means but not what you are
       interested in.
    2. You may be able to read audio data using a computer DAT. This
       depends on the firmware of the DAT drive. As far as I know, most
       computer DAT drives are unable to read audio data. There have been
       some drives from Silicon Graphics that were able to read audio
       data. As far as I know, they were ordinary Archive DATs with
       special firmware. You needed special SCSI commands to read audio
       data (I don't know the commands).
       
   The Linux SCSI tape driver does not currently have any support for
   reading audio data.
   
   Kai
   
     (!) Thanks Kai.
     
     I presume this is a result of the music industry's lobbying. The
     big record companies (Sony, Columbia, et al) have been interfering
     with the digital electronics industry for years in a misguided
     effort to discourage bootlegging.
     
     Oh well. We're already at the stage where some people are providing
     free writing --- the beginnings of an "open content" movement. This
     will probably encompass music and literature much as the "open
     source (TM)" movement has made an impact on software.
     
     I don't object to spending money on a good book or a decent CD. I'd
     just like to see more of it go to the artist and I'd like some
     assurance that corporate politics and big business aren't exerting
     undue control over the contents. However, I'll leave it at that
     before this becomes overly political (and overtly subversive).
            ____________________________________________________
   
(?) Best of Answer Guy: A Volunteer?

   From EvilEd on Fri, 25 Dec 1998
   
   (?) Hi,
   
   I've been reading "Answer Guy" for a while now. I have to say, it's
   really informative and cool. Wouldn't it be great to have an archive
   or a separate page for "Best of Answer Guy 1998"?
   
   Thanks and more power, EVILed
   
     (!) Wow! That sounds like a great New Year's project. Would you
     like to do it? Would anyone like to do it?
     
     If y'all put together your favorite 50 or 100 questions, answers
     and rants from me --- I'll annotate them with updated links and
     comments. So you'll get a retrospective on how things have changed
     since I wrote some of those messages. (In many cases I've learned
     more after the fact --- often from reader comments and
     corrections. In other cases things have just changed since I wrote
     my responses).
     
     However, I can't do this myself. I wrote all that stuff and the
     thought of reading back through all of it is mind numbing. So, if
     someone wants to volunteer on this --- let me know.
            ____________________________________________________
   
(?) termcap/terminfo Oddities to Remotely Run SCO App

   From Eric Freden on Fri, 25 Dec 1998
   
   (?) Dear Answer Guy,
   
   I managed to solve my keybinding problem without your help (thanks
   anyway). Here is a synopsis: I needed to run proprietary software (in
   Cobol, no less) on a PPro running SCO Unixware via telnet from a PII
   running RedHat 5.0. The SCO box had a limited termcap file, none of
   which matched type linux (Linux console) or xterm (for Linux X).
   Changing TERM in Linux does not alter function key bindings! The only
   way I could change keybindings was to mess with /usr/lib/kbd/keytables
   which changes bindings at boot or /usr/X11/lib/X11/xkb which alters
   bindings upon startx. Both of these methods are global in nature and
   will "break" existing applications like emacs. I read the man pages on
   xterm where there is an option to change to Sun style function keys
   bindings (which was not SCO compatible either).
   
   Then I noticed that xterm and nxterm (i.e. color xterm) bind F1--F4
   differently!?!?! By sheer luck, SCO has a termcap entry called coxterm
   that is compatible with nxterm keybindings. There is no termcap or
   terminfo entry for nxterm in Linux. Why not? For that matter, I see no
   effect in function keymappings after changing existing termcap
   entries, compiling with tic, and rebooting. Why not?
   
   Eric Freden
   
     (!) Last I heard Eric S. Raymond was still maintaining the termcap
     file. He's also listed in the author's section in the 'terminfo'
     man page. So perhaps he'd be the best person to address these
     issues?
     
     (I've copied him on this. Hi Eric! Missed you at LISA. Hope to see
     you at LinuxWorld Expo next March).
            ____________________________________________________
   
(?) Arabic BiDi Support for Linux

   From Anas Nashif on Fri, 25 Dec 1998
   
   (?) Hi, I was wondering if it's possible in one way or another to
   use Arabic on Linux. Is there anything being done in this field? And
   how difficult is it to implement?
   
   thanks,
   anas
   
   Anas Nashif       Universitaet Mannheim
   
     (!) Sorry it took me so long to answer this question. I finally did
     get around to doing a Yahoo! search on "+Linux +Arabic" and found
     this reference:
     
   Linux in Turkey
          http://www.slashdot.org/articles/98/09/05/1624256.shtml
          
     ... the references to Arabic ensued from the thread discussion
     after the main article. So far as I know there is no direct BiDi
     (bidirectional text) support for Linux yet. Some applications such
     as emacs/xemacs with MULE (multi language extensions) do provide
     some support for this. However I don't know much about the details.
     
     Happy Ramadan, and good luck.
            ____________________________________________________
   
(?) Automated Updates

   From Josh on Fri, 25 Dec 1998
   
   (?) A quick suggestion for the updates question.
   
   A student at Georgia Tech has written two excellent scripts, autorpm
   and logwatch. Autorpm will automatically keep your system up to date
   with the current Red Hat updates. Autorpm can be found at
   ftp.kaybee.org. It saves a lot of work on the system admin's part.
   
   He was going to add them to the LSM, but I'm not sure if he has yet.
   
   Josh
   
     (!) There's also an 'autoup.sh' script for Debian systems.
     
     I'd suggest that these systems be used with considerable
     trepidation (if at all). However, they do make sense for some
     cases. For example I'm pretty sure you can configure these to watch
     some internal server.
     
     So, as the sysadmin for a medium to large installation you could
     manually grab and test updates --- or set up a "sacrificial" system
     to automatically grab them. Then, when you've vetted the updates
     you can post the RPM or .deb files to your internal server where
     your client systems will pick them up.
     
     There's also a package called 'cfengine' by Mark Burgess which can
     help with various configuration details that might need to be tuned
     after any sort of automated update or software/configuration file
     distribution. (The old fashioned Unix way to automate updates to
     client systems is to use 'rdist' --- preferably over 'ssh' for
     better security).
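
     A minimal rdist Distfile for that old-fashioned push might read
     like this (the host and path names are made up):

```
# push the vetted update directory out to each client
HOSTS = ( client1 client2 client3 )
FILES = ( /usr/local/updates )

${FILES} -> ${HOSTS}
        install ;
        notify root ;
```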
     
     'cfengine' is the "awk of configuration management." Basically a
     'cfengine' script is a series of class descriptions, assertions and
     corrective actions. So you can express policies like: All Red Hat
     Linux systems running 2.0.30 kernel in this DNS subdomain and in
     this NIS netgroup, on any Tuesday (a series of class
     specifications) should have less than 100Mb of log files under
     /var/log (an assertion) and should have more that 40Mb of free
     space thereunder (another assertion) OR we should rotate the logs,
     removing the really old ones and compressing the other non-current
     ones (a corrective action).
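
     I won't try to render that whole policy here, but a small fragment
     gives the flavor (the class names and numbers are illustrative,
     and the exact syntax should be checked against the cfengine
     reference manual for your version):

```
control:
   actionsequence = ( tidy )

tidy:
   # on Red Hat boxes, on Tuesdays, prune old rotated logs
   redhat.Tuesday::
      /var/log pattern=*.gz age=30 recurse=inf
```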
     
     'cfengine' is an interesting project. I'd like to see the security
     features beefed up considerably and I'd like to see it undergo a
     comprehensive security audit (by the OpenBSD and/or Linux SecAudit
     teams).
     
     Naturally 'cfengine' is one of those tools with which you can shoot
     off your foot, at about the HIP! So you should be very careful when
     you first start playing with it.
     
     More info on that package can be found at its canonical home page:
     http://www.iu.hioslo.no/cfengine
     
     Kirk Bauer (autorpm's author) doesn't seem to maintain a web page
     touting its features. So you'll have to grab the file via FTP.
     
     There's also a package called 'rpmwatch' which is listed at:
     http://www.iaehv.nl/users/grimaldo/info/scripts
     
     More info on autoup.sh can be found in the Debian FAQ:
     
     http://www.debian.org/doc/FAQ/debian-faq-10.html
     
     ... or directly at these sites:
     
     http://www.taz.net.au/autoup http://csanders.vicnet.net.au/autoup
            ____________________________________________________
   
(!) Liam Greenwood: Your XDM question

   From Liam Greenwood on Fri, 25 Dec 1998
   
     Here's a suggestion that I'll just pass along.
     
   (!) To run XDM without having XDM start an X server on your local
   host:
   
telinit 3  # go to runlevel 3 (no xdm)

   edit the file /etc/X11/xdm/Xservers and comment out the line which
   looks like this:
   
:0 local /usr/X11R6/bin/X

   ...by putting a # at the start.
   
telinit 5   # go to runlevel 5 to start xdm

   (on a Red Hat system... others may have the config file in another
   place).
   
   Cheers, Liam
   _______
   
   Don't tell my Mother I'm a programmer... ...she thinks I'm a piano
   player in a brothel.
            ____________________________________________________
   
(?) 'rsh' as 'root' Denied

   From Walt Smith on Thu, 24 Dec 1998
   
   (?) hi,
   
   I can run a program using rsh as 'user' on the same pc, i.e. rsh
   pcname ls (or thereabouts). It won't run as 'root'.
   
   There is one file that is supposed to be used as a config if running
   as root. It makes no difference. Do I need to recompile rsh with a
   particular option?
   
     (!) You probably won't need to recompile it.
     
     The most common version of 'in.rshd' that's included with Linux
     will allow you to invoke it with the -h option (added to the
     appropriate line in the target system's /etc/inetd.conf file) to
     over-ride this restriction. If you're using Red Hat with PAM then
     you'll have to consider reconfiguring the appropriate file under
     /etc/pam.d/ to remove the option that prevents root access therein
     (I don't have that configuration file handy since I'm not using PAM
     on any of my boxes at home, at this point).
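     For reference, the line in /etc/inetd.conf would look something
     like this (the tcpd wrapper and exact path are typical, but may
     differ on your distribution):

```
shell  stream  tcp  nowait  root  /usr/sbin/tcpd  in.rshd -h
```

     Remember to send inetd a HUP signal after editing the file so that
     it rereads its configuration.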
     
     All of this is in the man pages (in.rshd for the daemon).
     
     I'll go on record to recommend that you ban 'rsh' and 'rlogin' from
     your networks completely --- using 'ssh' instead. Later, when we
     have ubiquitous deployment of IPSec (transport layer security for
     TCP/IP) and Secure DNS (the ability to digitally sign and
     authenticate hostname/IP records) it may be acceptable to
     re-introduce these protocols.... maybe.
     
   (?) regards,
   
   Walt...in Baltimore respond to XXXXXXX@bcplXXXXXX
   
     (!) Did you ever program in BCPL?
     _________________________________________________________________
   
                     Copyright  1999, James T. Dennis
            Published in The Linux Gazette Issue 36 January 1999
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                      Booting Linux with the NT Loader
                                      
                            By Gustavo Larriera
     _________________________________________________________________
   
   These days, technical professionals like you and me must often deal
   with the following scenario: making Linux and NT peacefully coexist
   on the same machine. Many HOWTOs have been written about how to
   configure LILO (the Linux Loader) to do the task, and it's good
   advice to give them a look. Unfortunately, the classic documentation
   has few references to the NT Loader. Yes, I know that for some people
   there's some kind of religious war between Linux and NT out there :-)
   But from the point of view of an IT professional, the main objective
   is to have the job well done.
   
   In many real-life situations we must tackle an installation where it
   is not desirable to alter the NT boot process. Maybe it is your
   boss's machine and he/she prefers to keep on booting the same way
   forever ;-) In this article I will focus on how to configure the NT
   Loader so as to boot Linux (and continue booting NT as well!).
   
   I hope these tips will help Linux users to successfully boot Linux
   through the NT Loader the easiest way. The procedure I will explain
   works for NT Server 4 and NT Workstation 4 running on Intel-compatible
   PCs.
   
The Scenario

   After a long conversation you have convinced your boss to put Linux
   on her computer. She is a happy NT user; she loves Word and Excel and
   such. She is also a clever person and has decided to give Linux a
   try. So she wants to have Linux installed. Just a moment: she prefers
   to keep booting with her familiar loading menu, from which she can
   choose to boot NT or DOS. Her wishes are your wishes, so you decide
   not to use LILO to dual-boot her computer.
   
The MBR considered useful

   The most important thing you must always remember is that many
   software products sit on your one precious hard disk's Master Boot
   Record (MBR). NT does so without asking, and LILO optionally does so
   if you want it to. The machine's BIOS executes the MBR code, which in
   turn loads the boot sector of the active partition to initiate your
   preferred OS.
   
   When NT is installed, the MBR is modified to load a program called
   NTLDR from the active partition's root directory. The original boot
   sector is saved in a small file called BOOTSECT.DOS. After an NT
   installation, be careful never to overwrite the MBR, because NT will
   no longer boot. To fix this problem, an NT user needs the NT
   Emergency Repair Disk (ERD).
   
   With those things in mind, note that you must be careful to configure
   LILO *not* to install itself on the MBR. Instead you will need to
   install LILO on the root partition of Linux. That's safe for NT, and
   Linux can live without the MBR.
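   A minimal /etc/lilo.conf for this setup might look like the following
   sketch (the device names and kernel path are assumptions; adjust them
   for your own disk layout):

```
boot=/dev/hda3        # install the boot sector on the Linux root partition, NOT /dev/hda (the MBR)
root=/dev/hda3        # partition holding the root filesystem
image=/boot/vmlinuz   # kernel image to boot
    label=linux
```

   Remember to rerun /sbin/lilo after every change so the boot sector is
   actually rewritten.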
   
NT loading process

   Once the NTLDR program launches, the NT user sees the "OS Loader
   V4.xx" message. Then NTLDR shifts the processor into 386 mode and
   starts a very simple file system. After that, it reads the file
   BOOT.INI to find out whether there are other operating systems, and
   prompts the user with a menu. A typical BOOT.INI looks like this:
   
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINNT

[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="NT V4 is here"
multi(0)disk(0)rdisk(0)partition(2)\WINNT="NT V4 VGAMODE" /basevideo /sos
C:\="DOS is here"

   The BOOT.INI file has two sections. The "boot loader" section
   specifies how long, in seconds, the menu stays on screen, and the
   default menu choice. The "operating systems" section specifies the
   different OSs the user can choose from. We can read that the machine
   boots NT (either in normal mode or in VGA diagnostic mode) and can
   also boot DOS. We can deduce from this example that DOS boots from
   partition C: (the first partition on the first disk) and NT boots
   from the second partition. Typical installations have a C: partition
   formatted with DOS's FAT file system and NT on another partition
   formatted with its own NTFS (NT File System).
   
   If the user chooses to load NT, another program, NTDETECT.COM, runs
   to check the existing hardware. If everything is okay, the NT kernel
   is loaded, and that's all we need to know.
   
   Let's examine what happens if the user decides to choose an OS other
   than NT. In this situation, NTLDR needs to know which boot sector is
   required to load the non-NT OS. The appropriate boot sector image
   must exist in a small 512-byte file. For instance, to load DOS,
   NTLDR searches for a boot sector image file called BOOTSECT.DOS. This
   image was created by the NT installation.
   
   So, what if I want to load Linux? It's quite simple: all we need is a
   boot sector image file; let's name it BOOTSECT.LIN (later we'll see
   how to obtain this file). You must put BOOTSECT.LIN on C: and edit
   BOOT.INI, so that the "operating systems" section looks something
   like this:
   
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="NT V4 is here"
multi(0)disk(0)rdisk(0)partition(2)\WINNT="NT V4 VGAMODE" /basevideo /sos
C:\="DOS is here"
C:\BOOTSECT.LIN="Now Linux is here"

   BOOT.INI can be edited with any plain ASCII text editor. Normally
   this file has the system, hidden and read-only attributes set, so you
   must change them using the 'attrib' DOS command or, within NT, from
   the file's property dialog box.
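   From a DOS prompt, clearing and restoring the attributes looks
   roughly like this (a sketch; check 'attrib /?' for the exact syntax
   on your system):

```
REM clear the system, hidden and read-only bits before editing
attrib -s -h -r C:\BOOT.INI
REM restore them after editing
attrib +s +h +r C:\BOOT.INI
```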
   
The Linux side of the story

   Now let's concentrate on the Linux shore. We need to install Linux,
   configure LILO and create the BOOTSECT.LIN file.
   
   The first step is to have Linux installed. We all know how to do
   that: choose appropriate partitions for the Linux system, swap and
   users' data, run the installation program, etc. A piece of cake; the
   first step is completed in less than 45 minutes.
   
   Then we must configure LILO. We also know how to do that, but be
   careful *not* to install LILO on the MBR (unless you hate NT too much
   :-)). When configuring LILO, choose to install it on your Linux root
   partition. If you don't know how to configure LILO, spend a few
   minutes reading the HOWTOs or use one of the handy setup programs
   most modern Linux distributions have. My installation is S.u.S.E., so
   I use 'yast' (Yet Another Setup Tool).
   
   Once LILO is configured (let's assume the Linux root partition is
   /dev/hda3), we must use 'dd' to create the boot record image. Log in
   as root and do the following:
   
# dd if=/dev/hda3 bs=512 count=1 of=/dosc/bootsect.lin

   This assumes you have already mounted the FAT C: partition as /dosc.
   In case you cannot access that partition, for instance if it's
   formatted with NTFS, just write BOOTSECT.LIN to a DOS-formatted
   diskette or to some partition NT can read from. If you put
   BOOTSECT.LIN in a place other than C:\ remember to modify the
   BOOT.INI file accordingly.
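   The 'dd' step can be tried safely first on a scratch file standing in
   for the partition device (a minimal sketch; fake_partition.img and
   its size are made up for illustration, and on the real system the
   input would be /dev/hda3):

```shell
# Create a scratch file standing in for the partition device.
dd if=/dev/zero of=fake_partition.img bs=512 count=4 2>/dev/null
# The same invocation as for /dev/hda3: copy exactly one 512-byte sector.
dd if=fake_partition.img of=bootsect.lin bs=512 count=1 2>/dev/null
# bootsect.lin is now exactly 512 bytes, the size NTLDR expects.
wc -c bootsect.lin
```

   NTLDR only accepts a 512-byte boot sector image, so it is worth
   checking the size of the file 'dd' produced before copying it to C:.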
   
   Now your boss can choose Linux from her NT Loader's menu. NTLDR will
   load BOOTSECT.LIN and she'll see the LILO prompt. Then she'll plunge
   into her new Linux box. Finally, if you configured LILO to load both
   Linux and the DOS on C:, then at the LILO prompt your boss can also
   reload from the active C: partition, back into the NT Loader. The
   procedure described may be repeated if you wish to boot several
   Linuxes; you must just create an appropriate boot sector image file
   for each of your Linuxes.
     _________________________________________________________________
   
                     Copyright  1999, Gustavo Larriera
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                  Defining a Linux-based Production System
                                      
                             By Jurgen Defurne
     _________________________________________________________________
   
Introduction

   In a previous article ("Thoughts About Linux", LG, October) I
   touched upon several topics about using Linux in business, not only
   for networking and communication, but also for real production
   systems. There are several conditions which need to be fulfilled, but
   it should be possible to define a basic database system which can be
   rapidly deployed and has everything that is needed in such a system.
   Over the past two months I have been increasing my knowledge of Linux
   and the tools available on it. There are several points which need
   further elaboration, but I have a fairly good idea of what is needed.
   
Goal

   The goal is to have a look at the parts which are needed to implement
   a reliable production database system, together with the tools needed
   to provide for (rapid) (in-house) development, but at a considerably
   lower cost than is needed with traditional platforms. It must be
   reliable, in the sense that all necessary procedures for minimum
   downtime are available, with the understanding that a little downtime
   can be tolerated.
   I do need to place a remark here. I have worked on several projects
   where people tried to save time by asking for rapid development, or
   tried to save money by reusing parts which lay around, or by
   converting systems. What happened in all cases was that both money
   and time were lost, the main reason being a failure to understand all
   aspects of the problem.
   This is a mistake I don't want to make. I think that I now have
   enough experience to show a way to achieve the goal defined above:
   describing a Linux-based production platform which has lower
   deployment and exploitation costs.
   
Basic recommendations

   These are general guidelines. The first part of creating and
   exploiting a successful production system is constraining the number
   of tools that are needed on the platform. This leads to the second
   part of success: understanding and knowing your tools. Experience is
   still the most valuable tool, but depending on the number and
   complexity of the tools, much time can be wasted trying to get to
   know the tools that are at hand. Good handbooks with clear examples
   and thorough cross-references are a great help, as are courses on the
   subjects that matter.
   
Hardware

   At the moment I won't go very deep into hardware. The base platform
   should be a standard PC with an IDE hard drive on a PCI controller
   which is fast back-to-back capable. I tested the basic data rate of a
   Compaq Deskpro system (166 MHz, Triton II chipset) and got a raw
   data speed (unbuffered, using write()) of 2.5 MB (megabytes)/s. I
   suppose that for a small entry platform this is fairly reasonable.
   Further tests should be developed to test the loading of the machine
   under production circumstances.
   The most important part, however, is that all machines running Linux
   with production data should be equipped with a UPS. This is because
   the e2fs file system (like most Un*x filesystems) is very vulnerable
   to an unexpected system shutdown. For this reason, a tape drive is
   also indispensable, with good backup and restore procedures which
   must be run from day 0.
   
Production tools

   Our main engine is the database management system. For production
   purposes, the following features must be available:
     * Fast query capability
     * Batch job entry
     * Printing
     * Telecommunication
     * Transaction monitoring
     * Journaling
     * User interfacing
       
  Fast query capability
  
   This feature is especially necessary for interactive applications.
   Your clients shouldn't have to wait half a minute before the
   requested operation completes. This capability can be enhanced by
   buffering, faster CPUs, faster disk drives, faster bus systems and
   RAID.
   
  Batch job entry
  
   This is a very valuable production tool. There are many jobs which
   depend on the daily production changes but which need much processing
   time afterwards. These are mostly run at night, always at the same
   points in time: daily, weekly, monthly or yearly.
   
  Printing
  
   Printing is a very important action in every production system and
   should be looked after from the design phase onward. This is because
   many companies have several documents that are official. Not only
   should printing on laser or inkjet printers be supported, but also
   printing with heavy-duty printing equipment for invoices, multi-sheet
   paper, etc.
   
  Telecommunication
  
   Telecommunication is not only about the Internet. There are still
   many systems out there that work with direct lines between them. The
   main reason is that this gives the people who are responsible for the
   services a much greater degree of control over access and
   implementation. In addition to TCP/IP, e-mail and fax, support for
   X.25 should also be an option in this area.
   People should also have control over the messages and/or faxes they
   send. A queue of messages should be available, where everybody can
   see all messages globally (destination, time, etc.) and where they
   have access to their own messages.
   
  Transaction monitoring
  
   With transaction monitoring, I mean the ability to rollback pending
   updates on database tables. This feature is especially needed when one
   modification concerns several tables. These modifications must all be
   committed at the same time, or be rolled back into the previous state.
   
  Journaling
  
   This capability is needed to repair files and filesystems which got
   corrupted due to a system failure. After restarting the system, a
   special program is used to undo all changes which couldn't be
   committed. In this sense, journaling stands very close to transaction
   monitoring.
   
  User interfacing
  
   This is a tricky part, because it is part development and part
   production. On the production side, the interface system should give
   users rapid access to their applications and also partition all
   applications between departments. Most production systems I have seen
   do this with menus. There are several reasons. The main reason is
   that most production systems still work with character-based
   applications. There are many GUIs out there, but production systems
   will still be solely character-based (except for graphics and
   printing, but I consider these niche markets), even on a GUI. The
   second reason is that a production system usually has lots and lots
   of large and small programs. You just can't iconify them all and put
   them in folders. Then you would only have a graphical menu, with all
   the icons adding more confusion than clarity.
   
  What's available?
  
   Note: When I name or specify products, I will only mention those with
   which I am already familiar. I presume that each of you will have
   your own choices. They serve as basic examples, and do not imply
   any preference on my side.
   
   The only database system on Linux I personally know at the moment is
   PostgreSQL. It supports SQL and transaction monitoring. Is it fast? I
   don't know. One should have a backup of a complete production
   database, which can then be used to test against the real production
   machine, with interactive, background and batch jobs running as they
   do in the real world.
   
   For batch jobs, crond should always be started. In addition to this,
   the at and batch commands can be used to have a more structured
   implementation of business processes.
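   For instance, a nightly and a monthly batch job could be scheduled
   with crontab entries like these (the script paths and times are
   hypothetical):

```
# min hour dom mon dow  command
30    2    *   *   *    /usr/local/bin/nightly-batch.sh   # every night at 02:30
0     3    1   *   *    /usr/local/bin/monthly-close.sh   # first of the month at 03:00
```

   'at' covers one-off jobs, and 'batch' runs a job as soon as the
   system load permits, which complements cron's fixed schedule.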
   
   For printing, I know (and use) the standard Linux tools lpd,
   Ghostscript and teTeX. There might be a catch, however. In some
   places you need to merge documents with data. The main reason for
   this is that a word-processing package offers more control over the
   format and contents of the document than printing the document with a
   simple reporting application. At my current workplace, a migration to
   HP is under way. The solution there is WordPerfect. In the past, I
   have used this solution under DOS, to automatically start WP and
   produce a merged document. Is this possible with StarOffice too?
   Are there other print solutions which offer more interactive control
   over the printing process than lpd? Users should have easier access
   to their print jobs and the printing process.
   
   Telecommunication is a real strong point of Linux; I won't enumerate
   all the options. Even if Linux doesn't support X.25, it is still
   possible to use direct dial-up lines with SLIP or PPP.
   
   Journaling is the weakest point of Linux. I have worked with the
   following filesystems : FAT, HPFS, NTFS, e2fs, the Novell filesystem
   and the filesystem of the WANG VS minicomputer system. With all these
   systems, I have had power-failures or crashes, but the only
   file-system that gives trouble after this is e2fs. In all other cases,
   a filesystem check repairs all damage and the computer continues. On
   WANG VS, index-sequential files are available. When a crash occurs,
   the physical integrity of an indexed file can be compromised. To
   defend against this, there are two solutions. The first is
   reorganizing the file. This is copying the file on a record-by-record
   basis. This rebuilds the complete file and its indices, and inserts or
   deletes which were not committed are rejected. The second option is
   using the built-in transaction system. A file can be flagged as
   belonging to a database. Every modification to these files is logged
   until the transaction is completely committed. After a failure has
   occurred, the files can be restored to their original states using
   the existing journals. This is a matter of minutes.
   I think that the only filesystem on PC which offers comparable
   functionality is that of Novell.
   The e2fs file system check works, but it does not offer enough
   explanation. When there is a really bad crash, the filesystem is just
   a mess.
   
Development tools

   I will describe here the kind of tools that I needed when I was
   maintaining a production database in a previous job. The main theme
   here is that programmers in a production environment should be
   productive. This means that they should be offered a complete package,
   with all tools and documentation necessary to start immediately (or in
   a relatively short time). This means that for every package there
   should be a short, practical tutorial available.
   I will divide this section into two parts, the first being necessary
   tools, the second being optional tools. Also necessary for
   development is a methodology. This methodology should be uniform
   across all delivered tools. The easiest way to do this is through an
   integrated development environment.
   
Compulsory development tools

   Which tools are the minimum needed to start coding without much
   hassle? I found these tools to be invaluable on several platforms:
     * An integrated development environment
     * A powerful editor
     * An interactive screen development package
     * A data dictionary
     * A high-level language with DBMS preprocessor support
     * A scripting language
       
  Integrated development environment
  
   Your IDE should give access to all your tools in an easy and
   consistent manner. It should be highly customisable, but be delivered
   with a configuration which gives direct access to all installed
   tools.
   
  Editor
  
   If you have a really good editor, it can act as an integrated
   development environment. Features which enhance productivity are
   powerful search-and-replace capabilities and macro features (even a
   simple record-only macro feature is better than none). Syntax
   colouring is nice, but one can easily live without it. Syntax
   completion can be nice, but you have to learn to live with it.
   Besides, the editor cannot know which parts of a statement you don't
   need, so ultimately you will have more clutter in your source, or you
   will waste your time erasing unnecessary source code.
   
  Screen development
  
   This is an area where big savings can be made. For powerful screen
   development you need the following parts in the development package:
    1. Standard screens which are drawn upon information in the data
       dictionary
    2. Easy passing of variables between screens and applications
    3. A standard way of activating a screen in an application
       
   The savings occur in several places. If you create a new screen, you
   should immediately get a default screen with all fields from the
   requested table. After this, only some repositioning and formatting
   to local business standards needs to be done. I worked with two such
   systems, FoxPro and WANG PACE, and the savings are tremendous in all
   parts of the software cycle (prototyping, implementation and
   maintenance).
   
  Data dictionary
  
   A data dictionary is a powerful tool, from which much information
   can be extracted for all parts of the development process. The screen
   builder and the HL preprocessor should be able to extract their
   information from the data dictionary. The ability to define field-
   and record-checking functions in the data dictionary instead of the
   application program eliminates the need to propagate changes in this
   code through all applications. With the proper reports, one should
   also be able to look at the structure of the database from different
   angles.
   
  High level language with DBMS preprocessor support
  
   You can't do complete development without a high-level language. There
   are always functions needed which can't be implemented through the
   database system. To make development easier, it should be possible to
   embed database control statements in the source program. The compiler
   should be invoked automatically.
   
  Scripting language
  
   A scripting language is very useful in several aspects. Preparing
   batch jobs is part of it. I also found out that a business system
   consists of several reusable pieces, which can be easily strung
   together using a scripting language. Also, the overall steering and
   maintenance of the system can be greatly simplified.
   
Optional development tools

   These are tools that were available on several platforms, which can
   come in handy, but aren't necessarily needed to deliver
   production-environment applications. I found out that these are
   little used.
     * Interactive query system
     * Report editor
       
  Interactive query system
  
   This is often designed to be used by people who are not programmers.
   Experience has taught me, however, that people in a business who are
   not programmers don't have the time to learn these tools. It is a
   useful tool for a programmer to test queries and views, but it isn't
   really useful as a production tool. Only in some cases, for real
   quick-and-dirty work, is it worth using.
   
  Report editor
  
   This tool is even more overestimated. I shared thoughts about this
   with other programmers, and our conclusion was: bosses always ask for
   reports which are much more complicated than a simple report editor
   can handle. It would be far better to use a programming language
   specifically designed for reporting (anyone know of such a thing?
   Any experiences with Perl for extraction and reporting?).
   
  What's available?
  
   Note: I will direct my attention only at the compulsory development
   tools. The rest of the environment will be centered around the
   features of PostgreSQL.
   
   As an integrated development environment, EMACS is probably the
   first which comes to mind. It even integrates with my second subject,
   a powerful editor. Is it at all possible to draw a line between the
   two? Is EMACS a powerful editor which serves as a development
   environment, or is it a development environment which is tightly
   integrated with its editor?
   
   The data dictionary, the screen development package and the DBMS
   preprocessor are more tightly bound than the other parts of the
   package. The screen editor and the DBMS preprocessor should get their
   information from the data dictionary, and the DBMS HL statements
   should also provide for interaction with screens. It should be
   possible to develop screens both for the X Window System and for
   character-based terminals.
   In the field of high-level languages there are several options, but
   a business-oriented language is still missing. Yes, I am talking
   COBOL here, although an xBase dialect is also great for such
   applications. I have programmed for eight years in several languages,
   only the last two years in COBOL, and it IS better for expressing
   business programs than C/C++. If anyone asked me now to write
   business programs in C/C++, I think the first thing I would do would
   be to write a preprocessor so that I could express my programs in a
   COBOL-like syntax.
   I don't know how Ada fares for business programs, but a combination
   of GNAT, with a provision to copy embedded SQL statements to the
   translated C source and then run it through the SQL preprocessor,
   might work.
   I have only had a small look at Perl, and I know absolutely nothing
   of Tcl and Python, but while interactive languages are fine for
   interactive programs, you should also bear in mind that some programs
   must process much data, and that therefore access to a native code
   compiler is essential.
   There is another point on which only COBOL is good: financial
   mathematics. This is due to its use of packed decimal numbers up to
   18 digits long, where the decimal point can be in any place. You
   should have compiler support for that too. On the x86 platform this
   capability exists in the numerical coprocessor, which is capable of
   loading and storing 18-digit packed decimal numbers. Computations are
   carried out in the internal 80-bit floating point format of the
   coprocessor.
   
   When you have a Linux system, the first scripting language you run
   into is probably that of the bash shell. This should be sufficient
   for most purposes, although my experience with scripting languages is
   that they benefit greatly from statements for simple interaction
   (prompting and simple field entry).
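   As a tiny sketch of what such interaction statements could look like
   in the shell, here is a hypothetical 'ask' helper (the function name
   and the default values are my own invention, not part of any standard
   tool):

```shell
# ask VAR PROMPT DEFAULT -- show a prompt, read one line,
# and fall back to DEFAULT when the user just presses Enter.
ask() {
  printf '%s [%s]: ' "$2" "$3" >&2
  read _reply
  eval "$1=\"\${_reply:-$3}\""
}

# Non-interactive demonstration: an empty input line selects the default.
ask month "Report month" 01 < /dev/null
echo "Running the monthly batch for month $month"
```

   With a handful of helpers like this, a batch-preparation script can
   prompt an operator for the few parameters that vary from run to run.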
   
What should be delivered?

   As I said before, this list doesn't represent any endorsement from
   me of any of these products or programs. This list should be expanded
   with all products which fit into one of these categories, so all
   hints are welcome.
   Another weak point in some areas of Linux is documentation. For a
   production environment, the Linux documentation project is probably a
   must, preprinted from the Postscript sources. For the commercial
   products, good documentation is also not a problem. For other parts of
   Linux tools, the books from O'Reilly & Associates are very valuable.
   HOWTO's are NOT suited for a production environment, but since they
   are more about implementation, they are suitable for the people who
   put the system together. The catch is this : when a system is
   delivered, all necessary documentation should be prepared and
   delivered too. I worked with several on-line documentation systems,
   but when in a production environment, nothing beats printed
   documentation.
   
   
   Production system

     DBMS (fast query/update, transaction processing, journaling):
                            PostgreSQL, mySQL, mSQL, Adabas,
                            c-tree Plus/Faircom Server, ...
     Communication:         ppp, slip, efax, ...
     Batch job entry:       crond, at, batch
     Printing:              lpd
     User interfacing:      ?

   Development system

     IDE:                   EMACS
     Editor:                EMACS, vi
     Screen development:    depends on DBMS
     Data dictionary:       depends on DBMS
     Application language:  C, C++, Cobol?, Perl, Tcl(/Tk), Python, Java
     Scripting language:    bash

Summary

   I am still trying to drag Linux into business. If you want to do
   business using Linux, you should be able to deliver a complete system
   to the customer. In this article I outlined the components of such a
   system and some weaknesses which should be overcome. As a result, I
   created a table enumerating the needed components for such a system.
   This table is by no means finished. I welcome all references to
   programs and products to update this table. It should be possible to
   publish an update once a month. What I should also do is extend the
   table with references to available documentation.
   Another part which needs more attention is developing tests to assess
   the power of the database system, i.e. what can be expected in terms
   of throughput and response under several load scenarios.
   
     _________________________________________________________________
   
                      Copyright  1999, Jurgen Defurne
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                                EMACSulation
                                      
                              By Eric Marsden
     _________________________________________________________________
   
     This column is devoted to getting more out of Emacs, text editor
     extraordinaire. Each issue I plan to present an Emacs extension
     which can improve your productivity, make the sun shine more
     brightly and the grass greener.
     _________________________________________________________________
   
     Why is the word abbreviate so long?
     
                                  Time saving
                                       
   You've probably noticed that Emacs goes to a fair bit of trouble to
   save you typing. The minibuffer offers a history mechanism which
   allows you to recall and edit previous commands, and many minibuffer
   entry prompts try to complete whatever you're typing when you hit TAB.
   This behaviour was the inspiration for the readline and history
   libraries, which are used in several shells and commandline
   interpreters.
   
   This column is dedicated to another of these keystroke-saving features
   in Emacs: the abbreviation facility. Do you get sick of typing in
   repetitive phrases such as your company's name, or your phone number?
   Abbreviations are here to save your fingers. For example, you could
   ask Emacs to expand LAAS to Laboratoire d'Analyse et d'Architecture
   des Systèmes. The expansion happens once you type a
   non-word-constituent character after the abbreviation (a space, for
   instance, though the exact definition of a word separation depends on
   the mode you are using).
   
   This is the Emacs abbrev mechanism. You can either use a minor mode
   called abbrev-mode, which will cause abbrevs to expand automatically
   (you enable the minor-mode by saying M-x abbrev-mode), or you can
   expand them on demand by saying C-x a e with the cursor positioned
   after the abbreviation. Your abbreviations can be saved to a file when
   you quit Emacs and reloaded automatically when you launch it:
   

    ;; if there is an abbrev file, read it in
    (if (file-exists-p abbrev-file-name)
       (read-abbrev-file))

Defining an abbrev

   To create an abbrev definition, type the abbreviation (LAAS in the
   example above) in a buffer, say C-x a i g, then enter the text you
   would like it to expand to in the minibuffer. This slightly arcane
   sequence creates a global abbrev, which will apply in all modes. Try
   it out by entering the abbreviation and saying C-x a e (e for expand).
   Emacs also allows you to create abbreviations which will be active
   only in a specific mode by saying C-x a i l instead (in a buffer which
   is already in the appropriate mode). M-x list-abbrevs displays a list
   of all currently defined abbrevs.
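   If you prefer keeping your abbrevs in your ~/.emacs rather than
   defining them interactively, the global table can also be filled from
   Lisp. A minimal sketch (the LAAS expansion is the example from above;
   the second entry is just a placeholder):

```
;; populate the global abbrev table from ~/.emacs
;; (each entry is NAME, EXPANSION, optional HOOK and use COUNT)
(define-abbrev-table 'global-abbrev-table
  '(("LAAS" "Laboratoire d'Analyse et d'Architecture des Systèmes" nil 0)
    ("lgz"  "Linux Gazette" nil 0)))
```

   As with interactively defined abbrevs, these expand when abbrev-mode
   is active, or on demand with C-x a e.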
   
Mail abbrevs

   Since the dawn of time, Unix mail programs have used the ~/.mailrc
   file to allow users to create their own email aliases. The
   mail-abbrevs mechanism reads in the contents of this file and defines
   abbreviations which will be expanded in the To: and Cc: fields of any
   email you compose in Emacs. Here is an example of the ~/.mailrc alias
   syntax:
   
    alias dsssl        dssslist@mulberrytech.com
    alias cohorts      rabah jarboui almeida behnia
    alias bond         "James Bond <bond@guerilla.net>"

   There are other more sophisticated addressbook systems around, such as
   Jamie Zawinski's BBDB, but they won't allow you to share aliases with
   other mailers. You can have mail-abbrev minor mode activated whenever
   you compose an email in Emacs using the following line in your
   ~/.emacs:
   

    ;; unnecessary if you use XEmacs
    (add-hook 'message-setup-hook 'mail-abbrevs-setup)

Dynamic abbrevs

   The standard abbreviation facility requires you explicitly to register
   your abbrevs, which is fine for things you type every week, but is a
   hassle for expressions which only occur in one document. Emacs also
   supports dynamic abbrevs, which try to guess the word you are
   currently typing from the surrounding text. This is very useful for
   programming in languages which encourage VeryLongVariableNames: you
   only need type the variable name once, after which it suffices to type
   the first few letters followed by M-/, and Emacs will try to complete
   the variable name.
   
   To be very precise, dabbrev searches for the least distant word of
   which the word under the cursor is a prefix, starting by examining
   words in the current buffer before the cursor position, then words
   after the cursor, and finally in all the other buffers in your Emacs.
   If there are several possible expansions (i.e. the text you have typed
   isn't a unique prefix), pressing M-/ cycles through the successive
   possibilities. Saying SPC M-/ lets you complete phrases which contain
   several words.
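   The guessing behaviour of dabbrev can itself be tuned through a few
   variables from dabbrev.el; for instance, to make the search
   case-sensitive (a sketch, not a recommendation):

```
;; don't offer "Buffer" as a completion for "buffer"
(setq dabbrev-case-fold-search nil)
```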
   
   Diehard vi users might be interested to read the tribulations of a
   user who tried to implement a limited version of dabbrevs in vi.
   
Completion

   The Completion package, by Jim Salem, is similar in function to
   dynamic abbrevs, but uses a different keybinding (M-RET) and a subtly
   different algorithm. Rather than searching for a completion which is
   close in the buffer, it starts by searching through words which you
   have typed in recently (falling back to searching open buffers if this
   fails). The history of recently used words is saved automatically when
   you quit Emacs. To enable completion (you can use it instead of, or as
   well as, dabbrevs), put the following in your ~/.emacs:
   

    (require 'completion)
    (initialize-completions)

Hippie Expand

   Filename completion in the minibuffer is a truly wonderful keystroke
   saver, and you might find yourself wishing you could use it when
   entering a filename in a regular buffer. Wish no longer: this is one
   of the features offered by the fabulous hippie-expand package.
   
   Hippie-expand, by Anders Holst, is a singing and dancing abbrev
   mechanism, which is capable of many different types of dynamic
   abbrevs. It can expand according to:
     * file name: if you type /usr/X then hit the expansion key, it will
       expand to /usr/X11R6/;
     * exact line match: searches for a line in the buffer which has the
       current line as a prefix;
     * the contents of the current buffer, and other buffers on failure,
       just like dabbrev;
     * the contents of the kill-ring (which is where Emacs stores text
       that you have killed, or ``cut'' in MacOS terminology, in a
       circular buffer). Rather than typing M-y to cycle through
       positions in the kill-ring, you can hippie-expand on the first
       word in the killed text.
       
   Hippie-expand is not active by default, so you need to bind it to a
   key. Here's what I use:
   

    (define-key global-map (read-kbd-macro "M-RET") 'hippie-expand)

   Go forth and save keystrokes.
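   The order in which hippie-expand tries these different expansion
   types is controlled by the variable hippie-expand-try-functions-list.
   The following is just one possible ordering, putting filename
   completion first:

```
;; try filenames first, then whole lines, then dabbrev-style
;; expansion in this and other buffers, then the kill-ring
(setq hippie-expand-try-functions-list
      '(try-complete-file-name
        try-expand-line
        try-expand-dabbrev
        try-expand-dabbrev-all-buffers
        try-expand-whole-kill))
```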
   
Feedback

   Glenn Barry sent me a comment on the EMACSulation on
   gnuclient/gnuserv:
   
     Just read and enjoyed your article on gnuserv/gnuclient in the
     Linux Gazette.
     
     But you forgot the use of gnuserv/gnuclient that makes it
     incredibly useful; one can access their full running emacs session
     by logging-in via a tty remotely (rlogin/telnet) and running
     "gnuclient -nw" ... makes working from home a breeze (even over low
     speed (28.8) links).
     
     Note you do have to rlogin to the system running the emacs
     w/gnuserv, as the gnuclient -nw does not work over the net (at
     least that's what the man page says). It took me awhile to figure
     this out so it would be nice to make sure folks know about this
     great capability.
     
   The -nw switch asks Emacs to start up in console mode, which makes it
   much more usable over a slow connection than using a remote display
   with X11. Note that XEmacs is able to use ANSI colors on the console
   or in an xterm, while GNU Emacs currently can't do color but does
   offer a text-mode menubar.
   
   Glenn also gave an illustration of the power of ffap: he has
   customized it to recognize Sun bug numbers under the cursor and
   dispatch a dynamically generated URL to a web front end for their bug
   tracking system.
   
Next time ...

   Next month I'll look at skeleton insertion and templating mechanisms
   in Emacs. Don't hesitate to contact me at <emarsden@mail.dotcom.fr>
   with comments, corrections or suggestions (what's your favorite
   couldn't-do-without Emacs extension package?). C-u 1000 M-x
   hail-emacs!
   
   PS: Emacs isn't in any way limited to Linux, since implementations
   exist for many other operating systems (and some systems which only
   halfway operate). However, as one of the leading bits of free
   software, one of the most powerful, complex and customizable, I feel
   it has its place in the Linux Gazette.
     _________________________________________________________________
   
   EMACSulation #1, February 1998
   EMACSulation #2, March 1998
   EMACSulation #3, April 1998
   EMACSulation #4, June 1998
   EMACSulation #5, August 1998
     _________________________________________________________________
   
                       Copyright © 1999, Eric Marsden
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
             Evaluating postgreSQL for a Production Environment
                                      
                             By Jurgen Defurne
     _________________________________________________________________
   
                                 Introduction
                                       
   With the advent of relatively powerful, free and/or cheap software,
   I wanted to see whether I could create a production environment
   around Linux and a DBMS. In the past I have worked with several DBMS
   products in several environments. My goal was to evaluate the
   Linux/postgreSQL combination against the environments I am familiar
   with, picking out of them the aspects which I think are important in
   a production environment.
   
                         Past production environments
                                       
   I have worked in three stages of production environment: the
   unconnected PC, the connected PC (file/print server networking), and
   the multi-user/multi-tasking environment (minicomputer). I will
   introduce the several tools in order of increasing complexity.
   
   Some terms need to be explicitly defined, because xBase terminology
   is sometimes confusing. The term 'database' here means the collection
   of several related tables which are needed to store organised data.
   The term 'table' is used for one collection of identical data, a set.
   This distinction matters because in the original xBase languages,
   'database' was used to mean 'table'.
   
FoxBase

   Effectively a much faster clone of dBase III, FoxBase contained the
   minimum necessary to define the tables of a database, plus a
   programming language which contained everything necessary to write
   applications very quickly.
   
   A database consisted of several tables and their indexes. The
   association between tables and their indexes had to be made
   explicitly, using commands.
   
   The programming language is completely procedural. It contains
   statements to create menus, open and close tables, filter tables
   (querying), insert, update and delete records, and view records
   through screens and a browse statement. Defining all these things in
   a program is quite straightforward. Records are manipulated as
   program variables. All data is stored in ASCII format.
   
   One special feature which originated in dBase is 'macros'. These
   macros are text strings which can be compiled and interpreted at
   run-time. This was a necessary feature, because most statements took
   string arguments without quotes, e.g. OPEN MY_TABLE. If you wanted to
   write a statement with a parameter, you could not directly refer to a
   variable: trying to execute OPEN ARG_TABLE, the program would search
   for the file 'ARG_TABL'. To circumvent this problem you need to code
   the following:
   
       ARG_TABLE = "MY_TABLE"
       OPEN &ARG_TABLE
   
Clipper

   Clipper originated as a dBase compiler, but soon added powerful
   extensions to the language. I have worked with both the Summer '87
   and the 5.0 versions. At the database level, little changed from
   FoxBase, but at the user interface and programming levels, several
   advanced features offered faster development turn-around times and
   advanced user interfacing. The macro feature was still available, but
   Clipper expanded it through code blocks. In the original
   implementation, a macro needed to be evaluated every time it was
   used; in cases where macros were used to filter data, this amounted
   to a waste of computing time. The introduction of the code block made
   it possible to compile a macro just once and then use the compiled
   version.
   
   Other features were the introduction of some object-oriented classes
   for user interfacing, a powerful multi-dimensional array type,
   declarations for static and local variables, and a plethora of
   functions to manipulate arrays and tables. The flip side of all this
   was that learning to use the language effectively took more time.
   I have two books about Clipper 5.0 and they are quite large.
   
FoxPro 2.0

   FoxPro was the successor of FoxBase. It added GUI features to the
   text interface, making it possible to work with overlapping windows.
   FoxPro 2.0 also added embedded SQL statements. It was only a subset,
   with SELECT, INSERT, UPDATE and DELETE, but this already offered a
   substantial advantage over the standard query statements. It also
   offered better integration between tables and their indexes, and one
   of the most powerful query optimizers ever developed. The package
   also provided some development tools, of which the most important
   were the screen development and source documentation tools.
   
   Clipper and FoxPro also made it possible to program for networks and
   thus to build multi-user database systems.
   
WANG PACE

   WANG PACE is a fully integrated DBMS development system which runs on
   the WANG VS minicomputers. It offers an extended data dictionary with
   field- and record-level validation, HL-language triggers and
   view-definitions. All defined objects carry a version count: when an
   object is modified but dependent programs are not recompiled, a
   runtime error is generated because the compiled versions no longer
   match the data dictionary versions. It also offers a powerful screen
   editor, a report editor and a query-by-example system. Access from
   COBOL, COBOL85 or RPGII is available through a pre-processor which
   compiles embedded statements into API calls.
   
                         Summary of important features
                                       
   Looking back at these systems, what were the important features that
   made programming more productive? This reference must be made against
   postgreSQL and the available libraries for interfacing with the
   back-end. It must also be made from the point of view of the
   production programmer, who must be able to write applications without
   being bothered by irrelevant details.
     * Field names translate to native variable names
       Defining a field name for a table makes it available under the
       same name to the program which can then use it as an ordinary,
       native variable.
       
     * Uniform memory allocation system
       The xBase systems have a dynamic memory allocation scheme which
       is completely handled by the runtime library. COBOL is fully
       statically allocated. In neither case does the programmer need to
       be concerned with tracking allocated memory.
       
     * Direct update through the current record
       The manipulated record is available to update the table through
       one or another UPDATE statement.
       
     * Database statements have the same format as the application
       language
       When programming in xBase, the statements to extract and
       manipulate data from the database tables formed an integral part
       of the procedural language.
       In COBOL, the statements were embedded and processed by a
       preprocessor. The syntax of the available statements was made to
       resemble COBOL syntax, with its strong and weak points.
       
     * Simple definition and usage of screens
       In xBase, there are simple yet powerful statements available for
       defining screens. Screens are called through only one or two
       statements.
       In WANG PACE, screens can only be defined through the screen
       editor. There are three statements available: one to use menus,
       one to process a single record in a cursor, and an iterative
       version to process all records in a cursor. Most screen
       processing is handled through the run-time libraries.
       
                 Features available when installing postgreSQL
                                       
   The first four features can be obtained using the ecpg preprocessor.
   It makes it possible to use native program variables; you don't have
   to worry about memory allocation, because the run-time library takes
   care of it; and updates can also take place using the selected
   program variables.
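   As a sketch of what this looks like in practice (the customer table
   and its id and name fields are invented for this example, and the
   source must be run through the ecpg preprocessor before the C
   compiler sees it):

```
exec sql begin declare section;
    int  cust_id;
    char cust_name[41];
exec sql end declare section;

exec sql declare cur cursor for
    select id, name from customer;
exec sql open cur;
exec sql whenever not found do break;
for (;;) {
    /* each fetch fills the native C variables directly */
    exec sql fetch cur into :cust_id, :cust_name;
    printf("%d %s\n", cust_id, cust_name);
}
exec sql close cur;
```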
   
   What is missing is a special form of include statement. At present
   you need to know which fields are in a table in order to use an
   'exec sql declare' statement. It would be better if there were
   something like 'exec sql copy fields from <tablename>'; if the table
   definition then changed, recompiling the program would adjust it to
   the new definitions.
   
   The pgaccess program (under the X Window System) provides access to
   the data dictionary in a more elegant manner.
   
                                    Summary
                                       
   I started out to write a critique of postgreSQL because of the terse
   documentation delivered in the package, which made it rather hard to
   find and use all the components that provide additional functions.
   
   I started by describing my experiences on other platforms to get an
   insight into what a production environment should deliver to the
   programmer. Then I looked closely at the delivered documentation and,
   to my surprise, all the components that I needed were in fact in the
   package.
   
   The following critique still stands, however. The documentation of
   the package is too fragmented, and most parts of it are centered on
   technical aspects which do not concern the production programmer.
   This is understandable: the documentation is written by the same
   people who implement the software. I know from my own experience that
   writing a user manual is very hard, and that it is easy to get lost
   in the technical details of the implementation that you know about.
   
   The following parts of postgreSQL are important for the production
   programmer, and their documentation should be better integrated.
     * The psql processor
       This is a nice tool to define all necessary objects in a database,
       to get acquainted with SQL and to test ideas and verify joins and
       queries.
     * The ecpg preprocessor
       This is the main production tool to write applications which use
       database manipulation statements. This capacity should probably be
       extended to other languages too. Since all bindings from the
       selected cursor are made to program variables, records can be
       processed without the hassle of converting them from and to ASCII,
       and updates can be made through the 'exec sql update' statement.
     * The pgaccess package
       The pgaccess package provides access to all parts of the database
       and offers the ability to design screens and reports. It is still
       in a development phase. I hope it will be extended in the future,
       because the basic idea is excellent and the first implementations
        are worthwhile.
       
   The libpq library is of no real value to the production programmer.
   It should mainly be seen as a tool for implementing integrated
   environments and database access languages. It could, for example,
   be used to create an xBase-like environment (for those who wish to
   use one).
   
                               Further research
                                       
   In the following weeks (months) I hope to set up a complete database
   system over a network, with one server and several types of clients
   (workstation, terminal, remote computer) connected through several
   interfaces (Ethernet, serial connections). I will investigate several
   platforms for application development. I intend to have a closer look
   at the tools provided in the postgreSQL package (developing a simple
   database for my comic book collection), but I will also look at the
   possibilities that Java offers as a development platform, with JDBC,
   screen design and application programming.
   
   One last note: for the moment I am concentrating on tools that I
   don't need to pay for, because I need the money for my hardware
   platforms and for my house. This does not mean that I am a die-hard
   'software should be gratis' advocate. A production environment favors
   paying for software, because one then knows one has a complete tool
   with support and warranty (horror stories about bad support
   notwithstanding).
     _________________________________________________________________
   
                      Copyright © 1999, Jurgen Defurne
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
   A Linux Journal Review: This article appeared first in the July 1998
   issue of Linux Journal.
     _________________________________________________________________
   
                             Introducing Samba
                                      
                               By John Blair
     _________________________________________________________________
   
   The whole point of networking is to allow computers to easily share
   information. Sharing information with other Linux boxes, or any UNIX
   host, is easy--tools such as FTP and NFS are readily available and
   frequently set up easily ``out of the box''. Unfortunately, even the
   most die-hard Linux fanatic has to admit the operating system most of
   the PCs in the world are running is one of the various types of
   Windows. Unless you use your Linux box in a particularly isolated
   environment, you will almost certainly need to exchange information
   with machines running Windows. Assuming you're not planning on moving
   all of your files using floppy disks, the tool you need is Samba.
   
   Samba is a suite of programs that gives your Linux box the ability to
   speak SMB (Server Message Block). SMB is the protocol used to
   implement file sharing and printer services between computers running
   OS/2, Windows NT, Windows 95 and Windows for Workgroups. The protocol
   is analogous to a combination of NFS (Network File System), lpd (the
   standard UNIX printer server) and a distributed authentication
   framework such as NIS or Kerberos. If you are familiar with Netatalk,
   Samba does for Windows what Netatalk does for the Macintosh. While
   running the Samba server programs, your Linux box appears in the
   ``Network Neighborhood'' as if it were just another Windows machine.
   Users of Windows machines can ``log into'' your Linux server and,
   depending on the rights they are granted, copy files to and from parts
   of the UNIX file system, submit print jobs and even send you WinPopup
   messages. If you use your Linux box in an environment that consists
   almost completely of Windows NT and Windows 95 machines, Samba is an
   invaluable tool.
   
                                      
        Figure 1. The Network Neighborhood, Showing the Samba Server
                                      
   Samba also has the ability to do things that normally require the
   Windows NT Server to act as a WINS server and process ``network
   logons'' from Windows 95 machines. A PAM module derived from Samba
   code allows you to authenticate UNIX logins using a Windows NT Server.
   A current Samba project seeks to reverse engineer the proprietary
   Windows NT domain-controller protocol and re-implement it as a
   component of Samba. This code, while still very experimental, can
   already successfully process a logon request from a Windows NT
   Workstation computer. It shouldn't be long before it will act as a
   full-fledged Primary Domain Controller (PDC), storing user account
   information and establishing trust relationships with other NT
   domains. Best of all, Samba is freely available under the GNU General
   Public License, just as Linux is. In many environments the Windows NT
   Server
   is required only to provide file services, printer spools and access
   control to a collection of Windows 95 machines. The combination of
   Linux and Samba provides a powerful low-cost alternative to the
   typical Microsoft solution.
   
  Windows Networking
  
   Understanding how Samba does its job is easier if you know a little
   about how Windows networking works. Windows clients use file and
   printer resources on a server by transmitting ``Server Message Block''
   over a NetBIOS session. NetBIOS was originally developed by IBM to
   define a networking interface for software running on MS-DOS or
   PC-DOS. It defines a set of networking services and the software
   interface for accessing those services, but does not specify the
   actual protocol used to move bits on the network.
   
   Three major flavors of NetBIOS have emerged since it was first
   implemented, each differing in the transport protocol used. The
   original implementation was referred to as NetBEUI (NetBIOS Extended
   User Interface), which is a low-overhead transport protocol designed
   for single segment networks. NetBIOS over IPX, the protocol used by
   Novell, is also popular. Samba uses NetBIOS over TCP/IP, which has
   multiple advantages.
   
   TCP/IP is already implemented on every operating system worth its
   salt, so it has been relatively easy to port Samba to virtually every
   flavor of UNIX, as well as OS/2, VMS, AmigaOS, Apple's Rhapsody (which
   is really NextSTEP) and (amazingly) mainframe operating systems like
   CMS. Samba is also used in embedded systems, such as stand-alone
   printer servers and Whistle's InterJet Internet appliance. Using
   TCP/IP also means that Samba fits in nicely on large TCP/IP networks,
   such as the Internet. Recognizing these advantages, Microsoft has
   renamed the combination of SMB and NetBIOS over TCP/IP the Common
   Internet Filesystem (CIFS). Microsoft is currently working to have
   CIFS accepted as an Internet standard for file transfer.
   
                                      
  Figure 2. SMB's Network View compared to OSI Networking Reference Model
                                      
  Samba's Components
  
   A Samba server actually consists of two server programs: smbd and
   nmbd. smbd is the core of Samba. It establishes sessions,
   authenticates clients and provides access to the file system and
   printers. nmbd implements the ``network browser''. Its role is to
   advertise the services that the Samba server has to offer. nmbd causes
   the Samba server to appear in the ``Network Neighborhood'' of Windows
   NT and Windows 95 machines and allows users to browse the list of
   available resources. It would be possible to run a Samba server
   without nmbd, but users would need to know ahead of time the NetBIOS
   name of the server and the resource on it they wish to access. nmbd
   implements the Microsoft network browser protocol, which means it
   participates in browser elections (sometimes called ``browser wars''),
   and can act as a master or back-up browser. nmbd can also function as
   a WINS (Windows Internet Name Service) server, which is necessary if
   your network spans more than one TCP/IP subnet.
   
   Samba also includes a collection of other tools. smbclient is an SMB
   client with a shell-based user interface, similar to FTP, that allows
   you to copy files to and from other SMB servers, as well as allowing
   you to access SMB printer resources and send WinPopup messages. For
   users of Linux, there is also an SMB file system that allows you to
   attach a directory shared from a Windows machine into your Linux file
   system. smbtar is a shell script that uses smbclient to store a
   remote Windows file share to, or restore one from, a standard UNIX
   tar file.
   
   The testparm command, which parses and describes the contents of your
   smb.conf file, is particularly useful since it provides an easy way to
   detect configuration mistakes. Other commands are used to administer
   Samba's encrypted password file, configure alternate character sets
   for international use and diagnose problems.
   
  Configuring Samba
  
   As usual, the best way to explain what a program can do is to show
   some examples. For two reasons, these examples assume that you already
   have Samba installed. First, explaining how to build and install Samba
   would be enough material for an article of its own. Second, since
   Samba is available as Red Hat and Debian packages shortly after each
   new stable release is announced, installation under Linux is a snap.
   Further, most ``base'' installations of popular distributions already
   automatically install Samba.
   
   Before Samba version 1.9.18 it was necessary to compile Samba yourself
   if you wished to use encrypted password authentication. This was true
   because Samba used a DES library to implement encryption, making it
   technically classified as a munition by the U.S. government. Binary
   versions of Samba with encrypted password support could not be legally
   exported from the United States, which led mirror sites to avoid
   distributing pre-compiled copies of Samba with encryption enabled.
   Starting with version 1.9.18, Samba uses a modified DES algorithm not
   subject to export restrictions. Now the only reason to build Samba
   yourself is if you like to test the latest alpha releases or you wish
   to build Samba with non-standard features.
   
   Since SMB is a large and complex protocol, configuring Samba can be
   daunting. Over 170 different configuration options can appear in the
   smb.conf file, Samba's configuration file. In spite of this, have no
   fear. Like nearly all aspects of UNIX, it is pretty easy to get a
   simple configuration up and running. You can then refine this
   configuration over time as you learn the function of each parameter.
   Last, the latest version of Samba, when this article was written in
   late January, was 1.9.18p1. It is possible that the behavior of some
   of these options will have changed by the time this is printed. As
   usual, the documentation included with the Samba distribution
   (especially the README file) is the definitive source of information.
   
   The smb.conf file is stored by the Red Hat and Debian distributions in
   the /etc directory. If you have built Samba yourself and haven't
   modified any of the installation paths, it is probably stored in
   /usr/local/samba/lib/smb.conf. All of the programs in the Samba suite
   read this one file, which is structured like a Windows *.INI file, for
   configuration information. Each section in the file begins with a
   name surrounded by square brackets: either the name of a service or
   one of the special sections [global], [homes] or [printers].
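
   Because smb.conf follows the *.INI layout, any INI parser can be
   used to sanity-check a configuration outside of Samba itself. The
   sketch below is a present-day illustration using Python's
   configparser module on a trimmed-down example; it ignores smb.conf
   features such as include directives and % macro expansion, which
   Samba's own tools handle.

```python
import configparser

# A trimmed-down smb.conf in the usual INI layout (hypothetical values).
SMB_CONF = """\
[global]
        netbios name = FRODO
        workgroup = UAB-TUCC

[homes]
        comment = Home Directory
        browseable = no
        read only = no
"""

cfg = configparser.ConfigParser()
cfg.read_string(SMB_CONF)

print(cfg.sections())              # the service and special sections
print(cfg["global"]["workgroup"])  # UAB-TUCC
print(cfg["homes"]["read only"])   # no
```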
   
   Each configuration parameter is either a global parameter, which means
   it controls something that affects the entire server, or a service
   parameter, which means it controls something specific to each service.
   The [global] section is used to set all the global configuration
   options, as well as the default service settings. The [homes] section
   is a special service section dynamically mapped to each user's home
   directory. The [printers] section provides an easy way to share every
   printer defined in the system's printcap file.
   
  A Simple Configuration
  
   The following smb.conf file describes a simple and useful Samba
   configuration that makes every user's home directory on my Linux box
   available over the network.
   
[global]
        netbios name = FRODO
        workgroup = UAB-TUCC
        server string = John Blair's Linux Box
        security = user
        printing = lprng

[homes]
        comment = Home Directory
        browseable = no
        read only = no

   The settings in the [global] section set the name of the host, the
   workgroup of the host and the string that appears next to the host in
   the browse list. The security parameter tells Samba to use ``user
   level'' security. SMB has two modes of security: share, which
   associates passwords with specific resources, and user, which assigns
   access rights to specific users. There isn't enough space here to
   describe the subtleties of the two modes, but in nearly every case you
   will want to use user-level security.
   
   The printing parameter describes the local printing system type, which
   tells Samba exactly how to submit print jobs, display the print queue,
   delete print jobs and other operations. If your printing system is one
   that Samba doesn't already know how to use, you can specify the
   commands to invoke for each print operation.
   
   Since no encryption mode is specified, Samba will default to using
   plaintext password authentication to verify every connection using the
   standard UNIX password utilities. Remember, if your Linux
   distribution uses PAM, the PAM configuration must be modified to
   allow Samba to authenticate against the password database. The Red Hat
   package handles this automatically. Obviously, in many situations,
   using plaintext authentication is foolish. Configuring Samba to
   support encrypted passwords is outside the scope of this article, but
   is not difficult. See the file ENCRYPTION.txt in the /docs directory
   of the Samba distribution for details.
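
   For reference, the change amounts to a pair of [global] parameters.
   The password file path shown here is only the common default for a
   self-built Samba; check ENCRYPTION.txt for the one appropriate to
   your installation.

```
[global]
        encrypt passwords = yes
        smb passwd file = /usr/local/samba/private/smbpasswd
```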
   
   The settings in the [homes] section control the behavior of each
   user's home directory share. The comment parameter sets the string
   that appears next to the resource in the browse list. The browseable
   parameter controls whether or not a service will appear in the browse
   list. Something non-intuitive about the [homes] section is that
   setting browseable = no still means that a user's home directory will
   appear as a directory with its name set to the authenticated user's
   username. For example, with browseable = no, when I browse this Samba
   server I will see a share called jdblair. If browseable = yes, both a
   share called homes and jdblair would appear in the browse list.
   Setting read only = no means that users should be able to write to
   their home directory if they are properly authenticated. They would
   not, however, be able to write to their home directory if the UNIX
   access rights on their home directory prevented them from doing so.
   Setting read only = yes would mean that the user would not be able to
   write to their home directory regardless of the actual UNIX
   permissions.
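
   The interaction between the share setting and the filesystem can be
   summarized in a few lines. This is only an illustrative model of the
   rule described above, not Samba code:

```python
def client_can_write(share_read_only, unix_writable):
    """A client may write only if the share is not read-only AND the
    UNIX permissions allow it; read only = no never overrides the
    filesystem."""
    return (not share_read_only) and unix_writable

print(client_can_write(False, True))   # True:  read only = no, perms allow
print(client_can_write(False, False))  # False: perms forbid the write
print(client_can_write(True, True))    # False: read only = yes wins
```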
   
   The following configuration section would grant access to every
   printer that appears in the printcap file to any user who can log
   into the Samba server. Note that guest ok = yes normally doesn't
   grant access to every user when the server is using user-level
   security. Every print service must define printable = yes. You will
   have to set up the printer driver on the client machine, though the
   printer name and printer driver parameters can be used to automate
   the process of setting up the printer client on Windows 95 and
   Windows NT clients.
   
[printers]
        browseable = no
        guest ok = yes
        printable = yes

   This last configuration snippet adds a share called public that
   grants read-only access to the anonymous ftp directory.
   
[public]
        comment = Public FTP Directory
        path = /home/ftp/pub
        browseable = yes
        read only = yes
        guest ok = yes

                                  [INLINE]
                                      
      Figure 3. Appearance of Samba Configuration in Windows Explorer
                                      
   Be aware that this description doesn't explain some subtle issues,
   such as the difference between user and share level security and other
   authentication issues. It also barely scratches the surface of what
   Samba can do. On the other hand, it's a good example of how easy it
   can be to create a simple but working smb.conf file.
   
  Conclusions
  
   Samba is the tool of choice for bridging the gap between UNIX and
   Windows systems. This article discussed using Samba on Linux in
   particular, but it is also an excellent tool for providing access to
   more traditional UNIX systems like Sun and RS/6000 servers. Further,
   Samba exemplifies the best features of free software, especially when
   compared to commercial offerings. Samba is powerful, well supported
   and under continuous active improvement by the Samba Team.
   
  Resources
  
   The Samba home page, at http://samba.anu.edu.au/samba/, is the
   definitive source for news and information about Samba. The
   documentation distributed with Samba is relatively unorganized, but
   covers every aspect of server configuration. If you have questions
   about Samba, first consult the FAQ, then try the Samba Mailing List.
   The location of both can be found on the Samba home page.
   
   The book Samba: Integrating UNIX and Windows, by John Blair and
   published by SSC, covers all aspects of installation, configuration
   and maintenance of a Samba server.
     _________________________________________________________________
   
                        Copyright  1999, John Blair
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
     _________________________________________________________________
   
                     Linux Installation Primer, Part 5
                                      
                               By Ron Jenkins
     _________________________________________________________________
   
   Copyright  1998 by Ron Jenkins. This work is provided on an "as is"
   basis. The author provides no warranty whatsoever, either express or
   implied, regarding the work, including warranties with respect to its
   merchantability or fitness for any particular purpose.
   The author welcomes corrections and suggestions. He can be reached by
   electronic mail at rjenkins@qni.com, or at his personal homepage:
   http://www.qni.com/~rjenkins/.
   
   Corrections, as well as updated versions of all of the author's works
   may be found at the URL listed above.
   
   NOTE: As you can see, I am moving to a new ISP. Please bear with me as
   I get everything in working order. The e-mail address is functional;
   the web site will be operational hopefully around mid December or
   early January.
   
   SPECIAL NOTE: Due to the quantity of correspondence I receive, if you
   are submitting a question or request for problem resolution, please
   see my homepage listed above for suggestions on information to
   provide.
   
   I only test my columns on the operating systems specified. I don't
   have access to a MAC, I don't use Windows 95, and have no plans to use
   Windows 98. If someone would care to provide equivalent instructions
   for any of the above operating systems, I will be happy to include
   them in my documents.
   
   ADDENDUM TO LAST MONTH'S COLUMN:
   
   I neglected to mention that you should consider purchasing some cable
   ties, coaxial clips, or other devices to dress your cabling properly.
   
   These should be available at your local computer store, or a large
   selection can be found on page # 163 of the Radio Shack 1999 Catalog.
   (Space limitations preclude listing them all.)
   
   This will allow you to bundle the cable or cables neatly together,
   attach them firmly to the baseboard or whatever area in which you are
   installing them, and make troubleshooting and maintenance of cabling
   problems much easier.
   
   Finally, consider marking each end of your cables in some way so you
   know which ends go together. There are a variety of ways to do this,
   including simply writing on the cable itself with a sharpie or white
   pen, noting the location or machine it is intended for, or my
   favorite, using color coded tape wrapped at each end.
   
   Also, each connection on a 10BASE2 coaxial bus network will require a
   BNC "tee" connector. This should be included with your network card.
   If not, go to Radio Shack and get some (PN# 278-112.) They are cheaper
   than buying them at a computer store. Finally, don't forget the
   termination devices. You will need two. These are available either at
   your local computer store, or at Radio Shack (PN# 278-270.)
   
   Part Five: Deploying a home network
   
   This month we will utilize the home networking plan we prepared last
   month, and bring it to fruition.
   
   This is going to involve several steps, and I will present them in the
   order I recommend, but ultimately it will be up to you to choose your
   deployment method.
   
   Additionally, I will offer step by step instructions on the
   configuration of the networking components and protocols. This will
   give you the basic functionality upon which you will add to as this
   series continues.
   
   The goal of this installment will be to get the networking hardware
   and software installed, provide basic connectivity, and simple name
   resolution and file sharing services.
   
   The more advanced services, such as sendmail, DNS, routing, ftp, web,
   print services, and gateway service will be covered in the next
   installment.
   
   As with each installment of this series, there will be some operations
   required by each distribution that may or may not be different in
   another. I will diverge from the generalized information when
   necessary, as always.
   
   In this installment, I will cover the following topics:
   
     * Pre-installation planning.
     * Preparing the cabling.
     * Preparing the file server.
     * Preparing the workstations.
     * Installing the cabling.
     * Installing the hardware.
     * Installing the software.
     * Configuration of the file server.
     * Configuration of the work stations.
     * Testing the installation.
     * Troubleshooting the installation.
     * References.
     * Resources for further information.
     * Preview of next month's installment.
     * About the Author.
       
   Assumptions that apply to the following installation instructions:
   
   To keep this installment to a manageable size, as well as maintaining
   acceptable level of simplicity, the following things will be assumed.
   
   We will be installing a three node network, consisting of a file
   server, one Windows NT client, and one Linux client. Physically, all
   three machines are on a single table. The Linux client is at the
   extreme left, the Linux fileserver is in the center, and the NT client
   is at the extreme right.
   
   In the 10BASE2 (coaxial or bus) configuration, the cabling will be run
   along the rear edge of the table and fastened by clips available for
   this purpose either from a computer store or at Radio Shack as
   previously mentioned.
   
   In the 10BASET or star configuration, the hub will be placed alongside
   the file server, and the cabling will emanate from the hub to the
   various machines. The three cables will be bundled together with cable
   ties, forming one larger diameter group of cables that can be treated
   as a single cable. This will be attached to the back of the table
   using clips as described above.
   
   The NICs I will use are NE2000 ISA bus combo cards, with both a BNC
   and a RJ-45 interface. The cards will be Plug and Play cards which
   require you to use a utility diskette under DOS, provided with the
   card, to configure it. This utility diskette also contains the NT
   drivers for the card.
   
   I use FREE DOS, available at http://sunsite.unc.edu to create the DOS
   boot disk. You may or may not have to create your own DOS boot disk,
   depending on what kind of NIC you have.
   
   Two of our NE2000 NICs will be set to the following:
   
   IO = 0x320 (320), IRQ = 10
   
   The third one will be configured by NT.
   
   These are by far the most common cards people usually start out with.
   If you are using something different, the instructions should be
   similar. Just make sure you can turn off the Plug and Play feature
   (bug?) for the Linux machines, if necessary. This usually only applies
   to ISA NICs, as kernels >= 2.0.34 usually do a pretty good job of
   snagging PCI NICs.
   
   This should provide the information required for most any size
   network; the steps will just need to be duplicated for the extra
   clients and/or servers.
   
   I will use the terms UNIX and Linux somewhat interchangeably, except
   where I am explicitly referring to something unique to a particular
   flavor of UNIX, in which case I will note the difference.
   
   If you will be integrating Novell or MAC clients, you're on your own.
   I have not touched Novell since 3.1, and I don't have access to a MAC
   machine. The AppleTalk and IPX HOW-TOs may be of some assistance to
   you.
   
   Further, it will be assumed you are using "reserved" IP addresses for
   your home network. We will use the Class C reserved network
   192.168.1.0. The netmask for our network will thus be 255.255.255.0.
   We will give the file server the IP 192.168.1.2, and the hostname
   fileserver01. The Linux client's IP will be 192.168.1.3, with the
   hostname linux01. Finally, the NT client's IP will be 192.168.1.4,
   with a hostname of nt01. I am keeping the 192.168.1.1 address and the
   hostname gateway01 for the gateway machine we will build next month.
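
   The arithmetic behind this plan can be checked with a few lines of
   Python. The ipaddress module is a modern convenience, not something
   available on a 1998 system; it simply confirms the netmask and
   address-membership claims above.

```python
import ipaddress

# The reserved Class C network (per RFC 1918) used for the home LAN.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.netmask)  # 255.255.255.0

hosts = {
    "gateway01": "192.168.1.1",
    "fileserver01": "192.168.1.2",
    "linux01": "192.168.1.3",
    "nt01": "192.168.1.4",
}
for name, addr in hosts.items():
    # Every planned address must fall inside the reserved network.
    assert ipaddress.ip_address(addr) in net
    print(name, addr)
```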
   
   The domain name of this network will be home.net.
   
   The NT domain (not to be confused with the actual domain) name will be
   HOME.
   
   The NT client will access the file services using SAMBA, and the Linux
   client will access file services using the native Network File System
   (NFS.)
   
   Name resolution will be accomplished using common hosts and
   resolv.conf files, and a little trick for the NT box.
   
   When finished you should be able to ping all machines both by IP
   address, and hostname.
   
   Additionally, you should be able to access the disk storage on the
   file server from either client, with both read and write access.
   
   Pre-installation planning:
   
   Review of the network plan: Look over the network plan ONE LAST TIME.
   Make sure you have acquired all the necessary hardware, software, and
   cabling, as well as a hub or termination devices, if required.
   
   Preparing the common files: Since we will not be using DNS for name
   resolution at this point, we will rely primarily on three files for
   the UNIX machines, and one file for the NT box.
   
   Unique to the UNIX machines will be:
   
   /etc/host.conf
   
   /etc/resolv.conf
   
   These two files will be propagated throughout the Linux portion of the
   network, along with the hosts file described below.
   
   The first file, host.conf, simply tells the Linux box what means to
   use to resolve hostnames to IP addresses, and the order in which it
   should use them.
   
   There are basically two methods used for name resolution: the hosts
   file (see below for more information,) which we will use in this
   installation, and a DNS server, usually another UNIX box running a
   program called the Berkeley Internet Name Daemon (BIND.)
   
   First, cd to /etc, then open the host.conf file, or create it if
   necessary, and edit it to contain the line:
   
   order hosts,bind
   
   Then close the file. This simply tells the Linux box to first check
   its hosts file to find another machine on the network before trying
   anything else.
   
   Next, open the resolv.conf file, or create it if necessary, and edit
   it to contain the lines:
   
   domain home.net
   
   search home.net
   
   After you are finished, close the file. This tells the Linux box its
   domain name, and to search this domain first before implementing any
   external name resolution.
   
   The purpose of this is to keep your local network name resolution on
   the local network. This will become important later when we hook these
   machines up to the Internet through a gateway machine.
   
   Common to both the NT and UNIX machines will be:
   
   A hosts file, which is simply a listing of all the machines on a local
   area network, which translates IP addresses to hostnames.
   
   Open the hosts file with your favorite editor, again creating it if
   necessary, and create entries for the loopback adapter, also known as
   the localhost, and each machine on your network. This file will be
   copied to each machine, thus allowing both the UNIX boxes and the NT
   machine to find each other by hostname.
   
   Entries in the hosts file are created using the following syntax:
   
   IP address Fully Qualified Domain Name (FQDN) hostname
   
   For example, for the machine bear.foobar.net, with an IP of
   206.113.102.193, the proper entry would be:
   
   206.113.102.193 bear.foobar.net bear
   
   A SHORT NOTE ON THE LOOPBACK ADAPTER: this interface, also known as
   the localhost, MUST be the first entry in any hosts file.
   
   So, to create the hosts file we will be using across our entire
   network, edit it to contain the following lines:
   
   127.0.0.1 localhost
   
   192.168.1.1 gateway01.home.net gateway01
   
   192.168.1.2 fileserver01.home.net fileserver01
   
   192.168.1.3 linux01.home.net linux01
   
   192.168.1.4 nt01.home.net nt01
   
   On the UNIX machines, this file also lives in the /etc directory,
   while on the NT machine it lives in the /winnt/system32/drivers/etc
   directory.
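
   Since the same hosts file is copied to every machine, it can help to
   generate it once and copy it around. The short script below is just
   a modern illustration of the format, mirroring the example network
   above; it is not a tool this installation requires.

```python
machines = [
    ("192.168.1.1", "gateway01"),
    ("192.168.1.2", "fileserver01"),
    ("192.168.1.3", "linux01"),
    ("192.168.1.4", "nt01"),
]

# The loopback entry MUST come first in any hosts file.
lines = ["127.0.0.1 localhost"]
for ip, host in machines:
    # IP address, fully qualified domain name, then short hostname.
    lines.append(f"{ip} {host}.home.net {host}")

hosts_file = "\n".join(lines) + "\n"
print(hosts_file)
```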
   
   Now that we have prepared our common files, we can move to actual
   deployment preparations.
   
   Logistics and downtime: While this is not as great a concern on a home
   network as it is on a commercial LAN, it is still important to
   consider the impact the network installation will have on your
   machines, as well as what if any interruption of productivity might
   occur.
   
   You have two major options, and one minor one, in this regard:
   
     * The blitz method: This method entails setting aside a period of
       time when all machines may be downed, cabling, hardware and
       software installed and configured all in one contiguous session.
       Best for very small networks, and non commercial networks.
       
     * The phased method: This method involves a more conservative,
       cautious approach. Individual machines or sections of the network
       are downed one at a time, configured, tested, and brought back up
       before moving to the next section. This minimizes the interruption
       of productivity, and loss of computer services. Best for larger
       networks or most any commercial network.
     * A combination of the two: In any installation other than an
       extremely small network such as our example here, some combination
       of the two methods is usually the most practical approach. For
       instance, you may choose to blitz all the server machines, so that
       file services and other services will be immediately available to
       the client machines. After the blitz of the servers, you may then
       choose to slowly integrate the client machines by department, by
       priority of use, or most commonly, the suits first, then everyone
       else. As an aside, if you are deploying a network in a commercial
       environment, never underestimate the effect of office politics in
       your planning. While the executives may not technically need to be
       on the network as soon as your programmers or designers, remember
       to keep the man with the checkbook happy, and you will find your
       next upgrade much easier to justify.
       
   Preparing the cabling:
   
   10BASE2: Double check that you have sufficient coaxial cable, in the
   proper lengths, to interconnect all the machines on your bus.
   Remember, the cable strings from machine to machine, so I recommend
   physically laying out the cable between each machine to make sure you
   have enough, and the proper lengths. Finally, be sure you have the
   proper clips and ties to dress the cables neatly.
   
   10BASET: Depending on whether you bought the cables already made up,
   or made them yourself, the same general rules stated above will also
   apply here. Placement and layout of the cabling will be largely
   determined by your placement of the hub. Try to place the hub in such
   a way as to assure the shortest average length from the hub to each
   machine. As mentioned above, make sure you have sufficient materials
   to neatly run, anchor, and wrap your cabling.
   
   Preparing the file server:
   
   Memory issues: A good rule of thumb for any computer, and especially
   servers, is the more RAM the better. Since this is a home network,
   this is not as big an issue, but still important.
   
   Disk storage issues: If you can afford it, get SCSI drives. They work
   better and last longer. If you are on a budget, EIDE or UDMA drives
   will do in a pinch, but be aware they will not stand up as well under
   heavy, constant use.
   
   Backup device issues: I use a SCSI DAT drive, and have always had good
   results with it. Whatever you choose, MAKE SURE IT IS SUPPORTED BY
   Linux BEFORE YOU BUY IT! And back up anything on any of the machines
   you will be working on that you cannot afford to lose!
   
   Power interruption and loss of data: You should consider at least
   protecting your fileserver with an Uninterruptable Power Supply (UPS.)
   I can recommend APC and Tripp-Lite products here. Why? Because they
   put their money where their mouth is on the warranty provided. Try
   to get one with two protected outlets, and jacks for your phone line.
   This will come in handy later when we do the gateway. Surges don't
   just come over the power lines. Ideally all your machines should have
   one, but try to make sure you get one for the file server.
   
   Preparing the client workstations:
   
   Linux box: not really much to do here, as most everything you need
   should already be installed. All your networking software should
   already be there. The only possible exception to this is if you have a
   RedHat machine, and you chose dialup workstation during installation.
   In this case, you may or may not have to install additional packages.
   Check your documentation.
   
   NT box: Here you will need to have your CD-ROM handy, as the
   networking software is probably not on your machine unless you
   explicitly requested it during the installation process. The software
   I am talking about here is separate and distinct from what is required
   for Dial Up Networking (DUN.)
   
   Surge protectors: If you cannot afford a UPS for each machine, at
   least put a quality surge protector on the two clients. Avoid the
   temptation to buy a bargain one. APC and Tripp-Lite are ones I can
   recommend for the same reasons as stated above. If either of these
   machines has any peripherals connected to it such as printers, modems,
   scanners, etc. make sure these are protected as well.
   
   Installing the cabling:
   
   10BASE2: This is a fairly straightforward process. Simply lay the
   cable along the back of the table (or whatever your machines are on,)
   where you plan to install them. Do not anchor the cables at this time.
   
   10BASET: Once you have determined where your hub will be located, lay
   out the cable from the hub to each machine. Do not bundle or anchor
   the cables at this time.
   
   Installing the hardware:
   
   Network Interface Cards: This is fairly straightforward. Power off
   your machine. Remove the case cover and find an empty expansion slot
   appropriate for your type of card. Make sure it is firmly seated, and
   that you replace the screw that holds it in place.
   
   If the card is an ISA card, and is going into one of the Linux boxes,
   be sure to disable the Plug and Play feature and make note of the IO
   address and IRQ the card is using. There is usually some sort of setup
   program to help you with this. Write these values down as you will
   need them later.
   
   A QUICK NOTE ON IO ADDRESSES AND IRQ's: Some cards may require you to
   manually set the IO and IRQ values using jumpers on the card. Use care
   here. If you choose an IO address or IRQ already in use by another
   device, all sorts of nasty things can happen. Here are some good ones
   to try that generally work:
   
   IO Address:
   
   0x300 (300)
   
   0x310 (310)
   
   0x320 (320)
   
   IRQ:
   
   10, 11, or 12.
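
   One way to see which IRQs and I/O ports are already claimed on a
   running Linux box is to look at the files the kernel exports under
   /proc (present since the 2.0 kernels). A conflict-free choice is
   simply a value that does not appear in either listing. Shown here as
   a small Python sketch; cat works just as well.

```python
# The kernel lists claimed IRQs and I/O port ranges under /proc.
for path in ("/proc/interrupts", "/proc/ioports"):
    print("----", path)
    with open(path) as f:
        print(f.read())
```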
   
   If the card is a PCI card, have a go at auto detection first, then
   failing that, use the DOS setup program if required. Here at most, you
   may have to specify the IO address, which usually looks something
   similar to 0x6xxx.
   
   In any case, once the card is set, be sure to write the pertinent
   information down. You will need it later on the Linux boxes, and you
   may or may not need it on the NT box.
   
   10BASE2:
   
     * Connectors and termination: Connect the tee to the NIC. On the
       Linux client and the NT client, place a termination device on one
       side of the tee. Then using the cables you laid out earlier,
       connect the machines together.
       
   10BASET:
   
     * Installing the hub: This should consist of nothing more than
       setting it down on the table and plugging it in. In other
       situations, you may or may not find it advantageous to mount it on
       the wall, or even under the table.
     * Connecting to the hub: Using the cables you laid out before,
       connect one end of each cable to the appropriate NIC. Now you may
       bundle, but not anchor, the cabling, starting at each client on
       either end, and working toward the fileserver in the middle.
       
   Installing the software:
   
   Required software:
   
   Common:
   
   The /etc/hosts file: as specified above.
   
   The /etc/host.conf file: as specified above.
   
   The /etc/resolv.conf file: as specified above.
   
   Specific to the file server:
   
   If necessary, copy the above common files to the appropriate
   directories.
   
   SAMBA: This may or may not already be present on your system. If not,
   use pkgtool on a Slackware box to install it, and glint or the command
   rpm -ivh <name of samba.rpm> to install it on a RedHat box. Once you
   have verified it is installed, configure it as follows:
   
     * In the /etc directory, there should be an smb.conf-sample file.
       You may copy this to a new file called smb.conf, or create your
       own from scratch. I recommend using the sample one at first.
     * Edit the line workgroup = WORKGROUP and change it to workgroup =
       HOME (the NT domain name we chose above).
     * Next, look for a line similar to the following: hosts allow =
       xxx.xxx.x. Where xxx.xxx.x. represent the first three octets of
       your network address, 192.168.1. in the example. Additionally, be
       sure to allow the loopback interface. So the correct entry would
       be: hosts allow = 192.168.1. 127.
     * Finally, look for the line: remote announce = xxx.xx.x.xxx and
       change it to 192.168.1.255 for our example network.
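
   The trailing dots in that hosts allow line are significant: each
   entry is an address prefix, so 192.168.1. matches any host on the
   LAN and 127. matches loopback. The sketch below is a simplified
   model of the prefix matching only; real Samba also accepts
   hostnames, netmask pairs, and EXCEPT clauses.

```python
def host_allowed(ip, patterns):
    # An entry ending in '.' is an address prefix; otherwise it must
    # match the address exactly. (Simplified model only.)
    for pat in patterns:
        if pat.endswith("."):
            if ip.startswith(pat):
                return True
        elif ip == pat:
            return True
    return False

allow = ["192.168.1.", "127."]
print(host_allowed("192.168.1.4", allow))  # True:  the NT client
print(host_allowed("127.0.0.1", allow))    # True:  loopback
print(host_allowed("10.0.0.9", allow))     # False: an off-LAN host
```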
       
   NFS services: This should already be installed on your Linux boxes.
   
   A possible exception is RedHat, again if the NFS server and client
   options were not selected during installation. If necessary, install
   them. Once you have verified the software is installed on your system,
   configure as follows:
   
   The /etc/exports file: This is fairly simple. There is much more to
   NFS than what I will present here, but briefly, an entry in the
   exports file uses the following syntax:
   
   /name/of/directory/to/export who.can.access(type,of,access)
   
   So as an example, to export the home directory with read and write
   permissions to anyone in home.net, the correct entry would be:
   
   /home *.home.net(rw,no_root_squash)
   
   (Note that there is no space between the host pattern and the
   parenthesized options; a space would apply the options to all hosts.)
   
   Specific to the NT client: Copy the hosts file ONLY to the specified
   location. Insert your NT CD-ROM and choose
   start/settings/controlpanel/network. Depending on whether you have
   been using this machine for DUN, you may or may not have some of the
   software already installed. If not just follow the prompts, with the
   following objectives:
   
   Install ONLY the TCP/IP protocol.
   
   When the time comes to install your Network Adapter (NIC), you can try
   to let it auto-detect first, then failing that, choose Have Disk and
   use the diskette supplied with your NIC.
   
   You can safely accept the defaults at this point. If prompted for
   information such as hostname, IP address, or netmask, refer to the
   stated configuration above.
   
   You may be prompted to reboot several times. Do so.
   
   Specific to the Linux client: Copy the common files to the appropriate
   directories.
   
   The only exception would be if you desired to make directories on the
   Linux client available to the NT client. If this is the case, simply
   repeat the SAMBA instructions for the file server above on the Linux
   client as well.
   
   Configuration of the file server:
   
   Basic networking: The first step on the UNIX boxes is to get the NIC
   recognized. On a Slackware machine, this is done by editing
   /etc/rc.d/rc.modules and uncommenting the line that will load the
   kernel module necessary for your particular NIC, and possibly passing
   the IO address and/or the IRQ to help Linux find the card. Scroll down
   to the Network Device Support section, and look for the line:
   
   #/sbin/modprobe ne io=0x320 #NE2000 at 0x320
   
   Uncomment the line by deleting the pound sign. Depending on what
   release of Slackware you are using, you may or may not have to specify
   the IRQ as well. This should not be necessary if you are using release
   3.5 or higher.
   
   Next, you will want to configure your networking software. Use the
   netconfig utility for this. Follow the prompts, with the following in
   mind:
   
   When asked if you will be using only loopback, answer no.
   
   Leave the default gateway blank.
   
   Leave the nameserver stuff blank.
   
   In RedHat, you can use the linuxconf utility in either text mode or
   under X. I have had a few bad experiences with the X version, so I
   recommend using the text mode version.
   
   At the command prompt, type linuxconf <RETURN>
   
   You will be presented with a dialog box.
   
   Choose Config/Networking/Client tasks/Basic host information.
   
   First, set your hostname to fileserver01.home.net, then tab to Quit to
   return to the previous screen. Choose Adaptor 1, and use the spacebar
   to select the following parameters:
   
     * Enabled
       
     * Config mode Manual
       
   Next, enter the proper hostname, domain, IP, netmask, device number,
   kernel module, IO, and IRQ for the machine. In our case, the proper
   data is:
   
   fileserver01.home.net
   
   fileserver01
   
   192.168.1.2
   
   255.255.255.0
   
   eth0
   
   ne
   
   0x320
   
   10
   
   If at any point, you are prompted for a default gateway, leave it
   blank for now.
   
   After you have entered this information, choose quit, accept, quit,
   quit, quit, until you are asked to activate your changes.
   
   If you want, you can use linuxconf to add your user accounts now, or
   do it manually later.
   
   Reboot.
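   Under the hood, linuxconf records the adapter settings above in
   /etc/sysconfig/network-scripts/ifcfg-eth0. A sketch of what that file
   might contain for our machine (field names per RedHat; this is
   illustrative, not something you need to type in by hand):
   
```
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.2
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
```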
   
   Configuration of the workstations:
   
   Configuration of the NT client: choose
   start/settings/controlpanel/network.
   
   Select the Identification tab. Make sure your Workgroup is set to
   HOME.
   
   Select the Protocols tab. Highlight TCP/IP. Click on Properties.
   
   Select the IP Address tab, and make sure Specify an IP address is
   selected, and that the IP and netmask are correct. Additionally, make
   sure the Default Gateway is blank.
   
   Select the DNS tab. Enter your hostname (nt01) and domain (home.net)
   in the appropriate boxes.
   
   Select the WINS Address tab. Make sure the WINS server boxes are
   blank, and uncheck the Enable DNS for Windows Resolution and Enable
   LMHOSTS Lookup boxes if necessary.
   
   Select OK. When prompted that one of the adapters has an empty WINS
   something or other, select yes to continue. Select close. You will be
   prompted to reboot.
   
   Configuration of the Linux client: the network configuration will be
   the same as the fileserver instructions.
   
   Testing the installation:
   
   If any of these testing procedures fail, go to the troubleshooting
   section for suggestions on how to correct the problem.
   
   Testing for physical connectivity: To test physical connectivity, ping
   one of the other hosts on the network. You should see some return
   information and statistics. Press Ctrl+C to exit.
   
   Testing the loopback adapter: To test the loopback adapter, simply
   ping 127.0.0.1.
   
   Testing the NIC: To test the NIC, simply ping the IP address of the
   NIC.
   
   Using ifconfig and ipconfig:
   
   In Linux and NT, there are utilities provided to assist you in
   assessing the condition of your networking setup and hardware. They
   are called ifconfig and ipconfig, respectively.
   
   On a Linux box, at the command prompt, ifconfig <RETURN> should yield
   two entries: one for the loopback adapter, called lo, and one for your
   NIC, called eth0.
   
   On an NT box, the command ipconfig should yield one entry, describing
   your Ethernet adapter.
   
   Testing name resolution: To test name resolution, simply ping by
   hostname, such as fileserver01, nt01, linux01, etc.
   
   Testing file services
   
     * Linux NFS: To test the NFS services, cd to /mnt and create a
       directory called test. Then try to mount the remote directory to
       it. For instance, in our example above, we are exporting /home on
       the fileserver machine, so let's mount it under test: mount -t nfs
       fileserver01:/home /mnt/test <RETURN>. If all went well, you
       should now be able to access the remote directory from the Linux
       client.
     * NT SAMBA: Double-click on Network Neighborhood. Under Home, you
       should see both your NT client and the Linux machine,
       fileserver01. Double click on the entry for fileserver01. If your
       user account has been created on the Linux box, you should be able
       to enter your username and password when prompted, and be taken
       directly to your home directory on the Linux box.
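   If the NFS test succeeds and you would like the mount to come back
   after every reboot, an entry can go in /etc/fstab on the client; a
   minimal sketch, assuming the same names used above:
   
```
# device             mountpoint  type  options   dump  pass
fileserver01:/home   /mnt/test   nfs   defaults  0     0
```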
       
   Troubleshooting the installation:
   
   Troubleshooting physical connectivity problems
   
     * ping 127.0.0.1. If this fails, you have an improper networking
       configuration. Go back and recheck all your settings and required
       files. If this works, try to ping the IP address of the NIC
       installed in the machine. If this does not work, make sure the
       card is being recognized by Linux. If the NIC has more than one
       interface, e.g. an RJ-45 (10BASET) and a BNC (10BASE2), make sure you
       have the correct one activated. If all goes well up until this
       point, try to ping another machine on the network by IP address.
       If this fails, see the next section on cable integrity.
       
   Cable integrity
   
     * If you cannot ping any other machine on the network, and you have
       tried all of the above, here are some tips for isolating cabling
       problems.
     * 10BASE2: move the termination of the bus to the next machine in
       line. Try to ping it. If this fails, try another cable. Repeat the
       ping test. If it still fails, suspect a termination problem. See
       below.
     * 10BASET: check that the RJ-45 connector is firmly seated in the
       NIC and the hub. If the cable is good, there should be an LED lit
       above the port on the hub into which the cable is inserted. If the
       LED is not lit, try another cable.
       
   Termination integrity
   
     * This only applies to 10BASE2 or bus networks. Terminators are
       usually a pass or fail type of deal. Either they work or they
       don't. First try another cable, and check to see if you are
       getting link lights on the NIC. Finally, double check combo cards
       to make sure the BNC interface is active.
       
   Troubleshooting name resolution problems:
   
     * First, try to ping by IP address. If this fails, check cabling,
       termination, and NIC recognition at boot time. On the UNIX boxes,
       ifconfig <RETURN> should show the loopback interface and eth0. If
       the NIC is not recognized, make sure Plug and Play is turned off,
       and you have passed the correct IO and IRQ parameters to the
       kernel. On an NT box, ipconfig <RETURN> should yield similar
       results. If not, check your network configuration.
       /start/programs/administrative tools/nt diagnostics may be of some
       help here.
     * If the ping by IP is successful, try to ping by hostname. If this
       fails, check your hosts file and make sure it matches the one
       above. If this is a UNIX box, check your hosts.conf and
       resolv.conf files and make sure they match the examples. If this
       is a NT box, make sure you placed the hosts file in the proper
       directory as specified above.
       
   Troubleshooting NFS problems
   
     * If you cannot mount a remote drive, check the /etc/exports file on
       the machine that physically contains the directory you are trying
       to mount. Make sure the desired directory is being exported
       correctly.
     * If you can mount the remote directory, but cannot read and/or
       write, go back to the exports file and check the permissions.
       
   Troubleshooting SAMBA problems
   
     * If the Linux box does not show up in the Network Neighborhood,
       make sure that both the NT box and the /etc/smb.conf file on the
       Linux box are using the HOME workgroup.
     * If the Linux box shows up, but you cannot access the shares, see
       if you are running Service Pack 3. If so, read the SAMBA docs for
       the required registry change that will need to be made to the NT
       machine.
     * Finally, make sure the username/password combination you are
       trying to use exists on the UNIX box as well as the NT box.
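   When chasing SAMBA problems from the Linux side, two utilities that
   ship with SAMBA can save some guesswork. A sample session (substitute
   your own username; this is a sketch, not a required step):
   
```shell
# Check /etc/smb.conf for syntax errors and dump the resulting settings
testparm /etc/smb.conf

# Ask the server for its share list, much as an NT client would
smbclient -L fileserver01 -U yourusername
```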
       
   References:
   
   Previous columns:
   
   Linux Installation Primer parts three and four
   
   Other:
   
   Ethernet HOW-TO
   
   Net-3 HOW-TO
   
   Network Administrators Guide
   
   Mastering Windows NT Server 4 (3rd Edition)
   
   Resources for further information:
   
   The Linux Documentation Project
   
   http://www.patoche.org/LTT/
   
   http://www.ugu.com/
   
   http://www.stokely.com/unix.sysadm.resources/
   
   alt.unix.wizards
   
   comp.security.unix
   
   comp.unix.admin
   
   alt.os.linux.slackware
   
   comp.os.linux.networking
   
   comp.os.linux.hardware
   
   linux.redhat.misc
   
   Coming in Part Six: the long awaited Internet Gateway!
     _________________________________________________________________
   
               Previous ``Linux Installation Primer'' Columns
                                      
   Linux Installation Primer #1, September 1998
   Linux Installation Primer #2, October 1998
   Linux Installation Primer #3, November 1998
   Linux Installation Primer #4, December 1998
     _________________________________________________________________
   
                       Copyright  1999, Ron Jenkins
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                           Linux on a Shoestring
                                      
                              By Vivek Haldar
     _________________________________________________________________
   
   This article first appeared in the September 1998 issue of PC Quest,
   India's leading infotech magazine.
     _________________________________________________________________
   
                             Table of Contents
                                      
     * INTRODUCTION
     * SAVE RAM!
          + RECOMPILE THE KERNEL
          + STOP SOME SERVICES!
          + HOW TO REMOVE SERVICES FROM A RUNLEVEL 
          + WHICH SERVICES TO KEEP, AND WHICH TO REMOVE 
          + SERVICES YOU MIGHT WANT TO KEEP
     * SAVE DISK SPACE
          + HOW TO REMOVE A PACKAGE 
          + WHICH PACKAGES DO I REMOVE?
     * WINDING UP
     _________________________________________________________________
   
INTRODUCTION

   With every operating system out there screaming "Give me more!" - more
   disk space, more RAM, more MHz - it's comforting to know that there is
   one savior out there for those of us not endowed with the latest
   sizzling hardware. Yes, I am talking about Linux.
   
   Though Linux shines as a network operating system, and is often
   projected as one, the fact is that it makes a great single user OS as
   well - something that one could use on a non-networked home PC.
   
   And in that case, there are a number of ways in which you could tweak
   your system to get more punch out of it - even on machines as
   antiquated as 486s, and with as little RAM as 8MB.
   
   Now please remember that you need to be logged in as root to do all
   the following things. Our attack will be two pronged - to minimize
   usage of RAM, and to save disk space.
   
SAVE RAM!

  RECOMPILE THE KERNEL
  
   The kernel that is installed out of the box does the job, but it's a
   catch-all kernel, with almost everything compiled into it. Which means
   that it's bigger than it has to be for you. If you compile your own
   kernel from the kernel sources, it could be up to 100 KB smaller than
   the default vmlinuz kernel. Besides, it's very helpful to know how to
   compile the kernel. It's quite simple, actually. You first configure
   it, that is, you say what you want in your kernel. And then you
   compile it.
   
   Linux has reached that advanced stage in its evolution where even the
   kernel configuration can be done graphically. The kernel sources
   usually reside in /usr/src/linux. To get the graphical configuration
   running, do "make menuconfig" (for text-based menus) or "make
   xconfig" (for graphical setup in X). You'll be presented with a long
   list of configurable options, and before deciding, it is advisable to
   see the sagely help note which goes along with each. The notes always
   give sound advice, and you should follow it. By doing so, you'll land
   up with exactly the things that you need compiled into your kernel,
   and nothing else. I would also suggest reading the README file in the
   source directory. Once you've configured everything, quit X if you're
   running it. This is so that you can do the compilation in text mode,
   without a heavy X running, and with more available RAM.
   
   Do "make dep; make zImage", go have coffee, and come back after some
   time. Once that is done, the README explains in no uncertain terms
   what to do with your new kernel, and I would only be reproducing it if
   I told you.
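   Putting the steps above together, the whole session looks something
   like this (run as root from the kernel source tree; if the build
   complains that the kernel image is too big, "make bzImage" is the
   usual fallback):
   
```shell
cd /usr/src/linux
make menuconfig    # or: make xconfig, if you are still in X
make dep           # rebuild dependencies after reconfiguring
make zImage        # compile the kernel itself; go have coffee
```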
   
  STOP SOME SERVICES!
  
   When a normal Linux system is running, there are a number of
   background jobs constantly running on it, each for a specific purpose
   - these are called daemons. For example, sendmail, the mail daemon, is
   the process which takes care of all the sending and routing of mail. A
   number of such daemons are started at bootup. And to group together
   sets of daemons that you might want to start for specific purposes,
   you have runlevels, which are simply groupings of services to start
   and stop. For example, on a normal Linux system runlevel 1, which is
   single user mode, will obviously need a lot fewer services to be
   running than runlevel 3, the full fledged multi user mode.
   
   Linux, by default, boots into runlevel 3. Now it turns out that of the
   myriad services started in that runlevel, some of them a simple non
   networked home PC could do without. For example, you obviously
   wouldn't want to waste precious RAM by running sendmail on such a
   machine. Yeah, it can be fun to send mail back and forth between
   root@localhost, and someuser@localhost, but that wears off pretty
   fast.
   
  HOW TO REMOVE SERVICES FROM A RUNLEVEL
  
   With RedHat, it's all very simple. Administration is definitely one of
   the areas in which RedHat scores over other distributions. After
   logging in as root, start X, and from an xterm, start "tksysv". This
   is the graphical runlevel editor.
   
   You'll see six columns, one for each runlevel. Now we'll only be
   fiddling with runlevel 3, the one which Linux normally boots into.
   Each column will have two halves, the top one for services to start at
   bootup, and the bottom one for services to stop at shutdown. All you
   have to do to remove a particular service is to select it, and press
   Del. That's it. Just remember to save your changes before quitting.
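   Under the hood, tksysv is just manipulating symbolic links in
   /etc/rc.d/rc3.d, so the same thing can be done by hand. A sketch,
   assuming sendmail's start link is named S80sendmail on your system
   (the numbers vary between releases):
   
```shell
# Links beginning with S start a service in runlevel 3;
# links beginning with K stop one
ls /etc/rc.d/rc3.d

# Keep sendmail from starting: turn its S link into a K link
mv /etc/rc.d/rc3.d/S80sendmail /etc/rc.d/rc3.d/K80sendmail
```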
   
  WHICH SERVICES TO KEEP, AND WHICH TO REMOVE
  
   Actually, it's much simpler to tell you which ones to keep. Remember,
   all this tweaking is only in runlevel 3. Now the bare essentials are :
     * kerneld - nothing will work without this!
     * syslog - you must have this around for the kernel to log
       messages. The logs are helpful for seeing what was going on with
       your system in case something goes wrong (actually, nothing ever
       goes wrong with Linux!).
     * keytable - you need this if you want to be able to use your
       keyboard!
     * rc.local - this is where some trivial nitty gritties happen, after
       all the other services have been started.
       
   You simply need to have the above four services. Without them, as some
   say, "not everything will work."
   
  SERVICES YOU MIGHT WANT TO KEEP
  
   Then there are the fence sitters - non critical services which you
   might want to keep, if you need them, or if you fancy them.
     * crond - this runs a number of trivial jobs periodically, the most
       important of which is making sure that your log files don't get
       too large. You can run it if you're paranoid.
     * atd - this daemon is required if you want to run "at" jobs, i.e.,
       jobs which begin execution at a time specified by you. People
       working on large, multi-user systems which are up 24 hours a day
       use this to run heavy computational jobs at night, when loads on
       the system are lighter. But on a simple home machine, I don't see
       much use for it. After all, you're the only one using it!
     * gpm - this allows you to use the mouse in text mode. Useful
       sometimes if you work in text mode, and a complete waste if you
       work in X.
       
SAVE DISK SPACE

   Actually, there's not much you can do here except remove unwanted
   packages. RedHat Linux has a superb, easy to use, and comprehensive
   package management system which can keep track of almost every
   non-user file on your disk. Everything installed on your system is
   part of some package, and packages can be uninstalled.
   
  HOW TO REMOVE A PACKAGE
  
   Just run "glint", the graphical interface to the RedHat package
   management system, from a command line while in X, and you will get a
   graphical interface to all the packages installed on your system.
   packages are classified, and show up in a directory browser like
   window. To remove a package, just select it and click on the
   "uninstall" button on the right side.
   
  WHICH PACKAGES DO I REMOVE?
  
   Beware though, there are some critical packages which shouldn't be
   uninstalled. In glint, it's generally advisable to not touch the
   "base" and "library" packages unless you know exactly what you are
   doing.
   
   For others, see their description (click the "query" button). If you
   haven't used that package in a long time, or don't foresee using it,
   it's generally safe to remove it. In case removing a package affects
   any other package, glint will tell you. It's all quite safe. If you do
   end up needing the package, you can always reinstall it from the CD.
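   glint is a front end to the rpm command, so everything above can also
   be done from the command line; for example (packagename is a
   placeholder for a real package name):
   
```shell
rpm -qa               # list every installed package
rpm -qi packagename   # show a package's description, like glint's query
rpm -e packagename    # uninstall it; rpm refuses if other packages
                      # depend on it
```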
   
WINDING UP

   These were only a few suggestions that you could try out. The more
   comfortable you get with Linux, and the more you explore, the more
   ideas you'll get to tweak your system to get the most out of it.
   
   Linux is an OS which is more forgiving to experimentation than most
   others. So think, and try it out!
     _________________________________________________________________
   
                       Copyright  1999, Vivek Haldar
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   
                               The Linux User
                                      
                          By Bryan Patrick Coleman
     _________________________________________________________________
   
   Who uses Linux? The answer has changed as Linux has evolved.
   Originally, no one but ultra-hackers and the core developers of the OS
   used it. As more functionality was added, more and more less
   technically oriented people began to use Linux.
   
   Now the question is how far Linux will go toward being an OS for the
   end user. The healthiest response for continued growth would be: as
   far as it can go. What, you say, would you turn Linux into a
   next-generation Windows? No, but there is more to it than that. The
   new focus should be to become an effective end-user product while
   keeping the hackable quality of Linux. That means when developing
   open source software you are developing for everyone, from the
   ultimate power user/hacker to the less-than-average user who may
   never have used a computer before. Yes, some people have still never
   used computers, even in this day and age.
   
   What does this mean for development? First and foremost, make
   everything you possibly can configurable; not just different builds
   for different needs, but truly extensible interfaces, using Guile or
   Python for example. But there also need to be defaults, so that after
   your application is installed a user can simply start your program
   and it looks polished. As long as your source code is available, the
   hard-core hacker is happy. But for hacker would-be's, it is very
   important that the source code is internally documented.
   
   But wait, we can go a step beyond simply creating fully configurable
   applications that are extensible and come with default settings. How
   about "smart" applications? Maybe you have installed application A on
   your system, and application B comes along from the same people who
   brought you A. You would love to have it, so you install it, and lo
   and behold, all of the little tweaks that you have made to A are
   already configured for application B. Since A and B are smart
   applications, they have communicated, and B now knows what you like.
   Of course, not everyone likes their applications deciding what they
   like, so all smart applications should be lobotomizable.
   
   Now for the real fire. How about all this, plus the application is
   ready for immediate distributed computing; not only distributed, but
   PVM-aware, so if you connect to a Beowulf cluster your application is
   ready to do some supercomputing. Groups can be formed across the web,
   i.e., a ready-made intranet. Security is, of course, built in, so your
   company or organization can just set up its own key and away it goes.
   
   Why stop at just X, or the console, or even Linux? If your application
   is completely system-aware, then no matter where you are or what
   computer you're using, you just have to start up your application and
   it does the rest, going so far as to figure out how you like your
   application set up and whether you're going to be doing distributed
   work.
   
   In short, the new wave of computing will be all things for all people.
   This new approach needs a new name, I think; I prefer "liquid" or
   "fluid" UI, or interfacing framework. Some might think of Java. Java,
   however, is slow, and in the end it is only one library. What I have
   in mind would be more a set of wrapper classes, one for each library
   used, plus one wrapper that would handle all of the calls to the
   widget sets and do all of the AI work. This double-wrapper approach
   would cut a lot of the time and effort of emulating multiple classes.
     _________________________________________________________________
   
                  Copyright  1999, Bryan Patrick Coleman
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   
                      Kernel 2.2's Frame-buffer Option
                                      
                               By Larry Ayers
     _________________________________________________________________
   
                                Introduction
                                      
   The long-awaited new stable version 2.2 of the Linux kernel is on the
   verge of release as the new year begins; a great variety of new
   features and drivers will be included. One of the new features is the
   option to use a new device type for the virtual consoles, the frame
   buffer device (/dev/fb[0-7]). This new device type isn't really
   essential for the many Linux users running Intel machines, but those
   people using Alpha, Sparc, PPC, or any of the other platforms
   supported by Linux should benefit, as the idea behind the frame buffer
   console is to provide a hardware-independent console device.
   
   Geert Uytterhoeven, a Belgian programmer and one of the primary frame
   buffer developers, wrote a succinct introduction to one of the frame
   buffer documents included with the kernel source:
   
     The frame buffer device provides an abstraction for the graphics
     hardware. It represents the frame buffer of some video hardware and
     allows application software to access the graphics hardware through
     a well-defined interface, so the software doesn't need to know
     anything about the low-level (hardware register) stuff.
     
   Users of Intel machines already have methods available to vary the
   size of the text console screen, such as the video=ask Lilo parameter
   and the SVGATextMode utility. I've used SVGATextMode for quite some
   time to set the console text to a 116x34 resolution. It works fine,
   but while configuring recent 2.2 beta kernels with menuconfig I
   couldn't help but be intrigued by the choices offered in the Console
   Drivers sub-screen. My video card these days is a Matrox Mystique, and
   when I saw a new frame buffer driver for Matrox cards and another
   option for Mystique support I just had to give it a try.
   
                             Installation Tips
                                      
   The first time I tried a kernel with Matrox frame buffer support I
   could see that the card was detected (as the boot messages scrolled
   by) and the penguin logo's appearance at the upper right corner of the
   screen seemed to indicate that at least part of this compiled-in
   feature was working, but the console was the same old 80x25 default.
   Back to the documentation, where I learned that a utility called fbset
   would be helpful. This small program (written by Geert Uytterhoeven
   and Roman Zippel) is used to change or query the current frame buffer
   mode. Even more important, the installation of fbset creates the
   special device files /dev/fb[0-7] which are needed for frame buffer
   functionality. The fbset archive can be found at this FTP site.
   
   Another document found in the fb subdirectory of the kernel source's
   Documentation directory is called matroxfb.txt. Written by Petr
   Vandrovec, the Czech developer responsible for the Matrox frame buffer
   drivers, this document is a great help in setting up workable frame
   buffer modes. Another, more generic document called vesafb.txt can be
   consulted when setting up modes for other VESA-2.0 compliant video
   cards.
   
   There are two ways to tell the kernel which frame buffer mode to use.
   While experimenting, setting the mode specification at the Lilo prompt
   is a quick way to try a mode out. Let's say that your main dependable
   kernel is the first one in the /etc/lilo.conf file, and the frame
   buffer kernel is the second and is named bzImage-2.2. Your computer
   boots, the LILO prompt appears, and you press the shift key. Here is
   an example of a mode being set:
   
   LILO bzImage-2.2 video=matrox:vesa:0x188
   
   If the mode is acceptable, the console screen will switch to the new
   mode (in this case, 960x720) soon after the boot messages begin to
   scroll by. The relevant boot messages will look something like this:
   
matroxfb: Matrox Mystique (PCI) detected
matroxfb: 960x720x8bpp (virtual: 960x4364)
matroxfb: framebuffer at 0xE0000000, mapped to 0xc4807000, size 4194304
Console: switching to colour frame buffer device 120x45
fb0: MATROX VGA frame buffer device

   If you like the mode, a variation of the above Lilo command can be
   inserted directly into the /etc/lilo.conf file; the line should look
   something like this:
   
   append="video=matrox:vesa:392"
   
   The quotes are essential, and notice that the hex number 0x188 has
   been converted to its decimal equivalent 392, since Lilo can't
   understand hex numbers in the lilo.conf file.
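   If you would rather not convert hex mode numbers by hand, the shell
   can do it for you:
   
```shell
# printf understands C-style hex constants, so this prints the decimal
# equivalent of mode 0x188
printf '%d\n' 0x188
# prints 392
```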
   
   Once the frame buffer kernel is booted the fbset utility can be used
   to switch to other modes. Mode specifications can be derived from X
   modes, but not wanting to spend hours fooling around with this I took
   the easy way out. Before I edited the lilo.conf file so that the mode
   would be set automatically when booting, I tried several different hex
   numbers at the Lilo prompt. After booting each one I ran fbset without
   any arguments. When run this way fbset outputs to the screen the
   current mode specs in a format usable in the (initially nonexistent)
   config file /etc/fb.modes. Here's a sample of the output:
   
mode "name"
    # D: 56.542 MHz, H: 45.598 kHz, V: 59.998 Hz
    geometry 960 720 960 4364 8
    timings 17686 144 24 28 8 112 4
endmode

   Several of these mode specs can be pasted into a new /etc/fb.modes
   file, substituting different mnemonic names for the "name" in the
   pasted output. One useful mode to include is a basic, non-color text
   mode, either by trying one of the text modes described in the
   documentation or simply by running fbset -depth 0. SVGAlib console
   graphics programs won't run properly in frame buffer consoles with
   higher color-depths. Once a fb.modes file has been created the frame
   buffer mode can be changed by running the command fbset name, where
   "name" is one of the mode names in the file.
   
                            Frame Buffers and X
                                      
   Naturally, the big question many readers will have is "Will X Windows
   run when started from a frame buffer console?" The answer is "it
   depends". Some combinations of X servers and video cards are known to
   have problems, especially when switching back and forth from X to a
   virtual console. This can be a problem with SVGATextMode as well. The
   XFree86 3.3.3 SVGA server I've been using with my Matrox card has
   worked well with the frame-buffer consoles. Your mileage may vary.
   
   There is a special server available in source form; it's called
   XF68_FBDev and it's included in the XFree86 3.2 (and later) sources.
   Binaries aren't available, and the server is unaccelerated and would
   mainly be of interest to those running Linux on non-standard hardware
   such as PPC.
   
                                 Conclusion
                                      
   The majority of Linux users probably won't be using the frame buffer
   kernel options any time soon. It has advantages with some hardware,
   but it takes time to figure out and use effectively, and the benefits
   are nice for console users but won't be of much use to those who spend
   most of their time in X Windows. I think that the reason it will be a
   part of the next stable kernel release is that frame buffer devices
   aren't Intel-specific, as is much of the current console code. It's
   likely that the much-anticipated release of XFree86 4.0 (possibly this
   year) will include more frame buffer compatibility in its server
   modules, such as seems to exist now in the SVGA server.
     _________________________________________________________________
   
   Last modified: Sun 3 Jan 1999
     _________________________________________________________________
   
                       Copyright  1999, Larry Ayers
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
              Running Your Own Domain Over a Part Time Dialup
                                      
                               By Joe Merlino
     _________________________________________________________________
   
   You love your Linux box. You love the power. You love the flexibility.
   You love the freedom. You really love the utter non-microsoftness of
   it. But deep down inside, you know there's something missing. A deep
   longing sits within you, crying out to be assuaged.
   
   Your friends with full-time ethernet connections have it. Your really
   rich friend with a T-1 to the house has it. They can log into their
   Linux boxen any time they want. They have their own domain names. You
   probably even have an account on one of their machines. But look what
   you're stuck with. Sure your modem's pretty fast, but it only dials up
   when you tell it to, and you can't tell it when you're not logged in.
   Even if you set it up to run as a cron job, you wouldn't know where to
   telnet to because your ISP gives you a different IP number every time
   you dial in.
   
   How do you get remote access?
   
   Fear not. There Is A Way. For the price of a dialup PPP account, you
   can have that precious remote access. And if you're willing to pay the
   freight to InterNIC (you can scrape up seventy bucks, can't you?) you
   can even have your own domain. Here's how:
     _________________________________________________________________
   
    STEP 1 - SETTING UP PPP
    
   The general setup of PPP connections on Linux is well documented
   elsewhere, so I won't go into it, except to say that you need to have
   PPP set up to run non-interactively from a command line. Graphical
   programs to activate PPP such as EZPPP or Red Hat's netcfg won't work.
   This is because you're going to create a script to be run as a cron
   job, and that script needs to be able to call your PPP-connecting
   script.
   
   For the purposes of this article, my PPP-connecting script is called
   /etc/ppp/ppp-on, and the script that ends the PPP connection is called
   /etc/ppp/ppp-off. You should be able to find examples of these sorts
   of scripts on the web.
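   For readers who want a concrete starting point, here is a hypothetical
   sketch of what such a ppp-on script can look like. The phone number,
   serial port, and speed below are placeholders, not a working
   configuration; the snippet writes the example out as ppp-on.example so
   you can review and adapt it before installing it as /etc/ppp/ppp-on:

```shell
#!/bin/sh
# Write out a minimal example ppp-on script. Everything in the heredoc
# is a placeholder sketch: adjust the device, speed, and phone number
# for your own modem and ISP.
cat > ppp-on.example <<'EOF'
#!/bin/sh
# Dial the ISP and start pppd non-interactively.
exec /usr/sbin/pppd /dev/ttyS1 115200 \
    connect '/usr/sbin/chat -v "" ATDT5551234 CONNECT ""' \
    defaultroute noipdefault
EOF
chmod +x ppp-on.example
echo "wrote ppp-on.example"
```

   The key point is that everything happens from the command line with no
   prompting, which is what lets cron drive it later on.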
     _________________________________________________________________
   
    STEP 2 - DYNAMIC DNS SERVICE
    
   You probably have Domain Name Service (DNS) through your ISP, but your
   ISP doesn't keep track of your particular connection because it
   changes every time you dial in. Your ISP does this because it has more
   users than it does IP numbers. This makes sense when you consider that
   most of the people who use the service only connect for a short time -
   a couple of hours at most. You can probably get a full-time connection
   and a static IP number from your ISP, but such things are typically
   pretty expensive.
   
   The thing is, you don't really need a static IP number to have a
   constant domain name. As long as the Domain Name Server where your
   domain name lives knows what your IP number is *at any given time*,
   you can get to your machine. And the DNS server where your domain name
   lives doesn't have to be the same one that belongs to your ISP.
   
   I use a service provided by a company called Dyndns (www.dyndns.com).
   Dyndns will, for a fee, maintain your domain name in its database. The
   domain name you get can either be a subdomain of theirs (i.e.
   yourdomain.dyndns.com), which is cheaper, or you can have your own
   unique domain name (i.e. yourdomain.com), which is somewhat more
   expensive. If you want a unique domain name, first you have to
   register your domain name with InterNIC (www.internic.net). Dyndns
   will do this for you, for a fee, but it's so easy to do, you might as
   well save yourself the money and do it yourself. When you register
   with InterNIC, you have to supply the IP numbers of a primary and
   secondary DNS server. These numbers are available on Dyndns's web
   page. Once all of this goes through (read: is paid for), you're good
   to go.
   
   The next thing you do is download a client program from Dyndns's
   website. They have a couple of different clients you can choose from
   (one in C, and one in Perl), and it might take some experimenting to
   figure out which one is better for you (even then, I had to have a
   friend of mine hack the Perl client a little to make it work).
   
   When you are logged into your ISP, you run the client program. The
   client program gets your current IP number from the output of the
   'ifconfig' command, and reports it to Dyndns's DNS server. Your domain
   name is now pointed at your machine.
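   The IP-extraction step is easy to see in miniature. The snippet below
   is a rough sketch, not code from either Dyndns client: it parses the
   "inet addr:" field the way such a client might, using a canned sample
   line in place of the real output of '/sbin/ifconfig ppp0':

```shell
#!/bin/sh
# Sketch: pull the local IP address out of an ifconfig-style line.
# The sample stands in for `/sbin/ifconfig ppp0` output.
sample="          inet addr:192.168.5.17  P-t-P:10.0.0.1  Mask:255.255.255.255"
IP=`echo "$sample" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p'`
echo "Current IP: $IP"
```

   The real clients then report that address to Dyndns's server instead
   of just printing it.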
   
   [Note: Nothing I've said in this section should be considered an
   endorsement of or advertisement for Dyndns. I've used their service as
   an example because it's the service I use, and it's what I'm familiar
   with.]
     _________________________________________________________________
   
    STEP 3 - AUTOMATING THE CONNECTION
    
   You've got the domain name, you've got the DNS service, and you've got
   the client program working. Now you need a way to make the computer
   log itself onto your ISP without your actually being there to do it.
   Ah, the wonders of Linux! This is taken care of with a simple shell
   script. Here's what I use:

#!/bin/bash

#  This is a script that attempts to log into a remote dialup and
#  establish a PPP connection. If it is successful, it runs 'ntpdate'
#  (network clock set), NamedControl.pl (a perl script to update
#  the dynamic DNS), and fetchmail for all accounts. If it fails, it
#  makes two more attempts, and then exits.

#  This script is released under the GNU General Public Licence. No
#  warranty whatsoever is expressed or implied.

#  Original version was written by Joe Merlino <joe@negia.net>, November,
#  1997.

#  If you have an idea for an improvement to this script, please let me
#  know.

#  set iteration counter at 1
i=1
while [ $i -le 3 ]
  do

    #  This part tests for the availability of the modem. If the modem
    #  is available, it runs /etc/ppp/ppp-on. If not, it reports and
    #  exits. (Note: the test is not wrapped in a ( ... ) subshell;
    #  an "exit" inside a subshell would only leave the subshell, not
    #  the script.)

    if test -e /var/lock/LCK..modem
      then
        echo modem not available
        exit 0
      else
        /etc/ppp/ppp-on
        sleep 45
    fi

    #  This part tests for the modem lock file, and if it exists, runs
    #  the various programs needed to update the system from the network.
    #  If the lock file is not found, it reports that and tries again.

    if test -e /var/lock/LCK..modem
      then
        /etc/ppp/netpack  #  invoke 'netpack' script
        echo done
      else
        echo no connection
    fi
    sleep 60

    #  This part again tests for the lock file, and if it finds it, sets
    #  the iteration counter to 3, so that the increment below pushes it
    #  past 3 and the loop exits. If the lock file is not found, the
    #  increment moves the counter on to the next attempt.

    if test -e /var/lock/LCK..modem
      then
        i=3
    fi
    i=`expr $i + 1`
    echo $i
  done

   You'll notice that this script calls another script, 'netpack'. I've
   done that because I have a set of things I like to do when my machine
   logs itself in. At the very least, 'netpack' should include your
   dynamic DNS client script. I would also recommend that it include
   whatever you use to download your email (e.g. 'fetchmail' or
   'popclient' or whatever). It would also be possible to replace the
   line that calls 'netpack' with a series of lines that call the various
   programs, but I like the modular design because I can edit 'netpack'
   on its own.
   
   I put both this script (which I named 'auto-up'), and 'netpack' in
   /etc/ppp/.
   
   Once you've got all that set up, try running it manually to make sure
   it works. (Don't forget to give yourself execute permission.) Once
   you've established that it works, set it up as a cron job (using the
   'crontab -e' command) to run whenever you want to have remote access
   to your Linux box. Also, set up /etc/ppp/ppp-off to run when you want
   your access to end.
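   As an illustration (the times here are placeholders, not a
   recommendation), crontab entries that bring the link up at 10:00 PM
   and take it down at 11:00 PM every night would look something like
   this:

```
# min  hour  day  month  weekday  command
0      22    *    *      *        /etc/ppp/auto-up
0      23    *    *      *        /etc/ppp/ppp-off
```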
   
   [Note: Some ISPs have a limit on the amount of time you can be
   connected without doing anything. This is to keep people from logging
   in and simply leaving their computers connected indefinitely. You
   should be aware of your ISP's policy with regard to this.]
   
   And there it is. You now have remote access to your machine at
   specified times. Now you can start pining for a full-time connection.
   
   Addendum: Between the time I wrote this article, and the time that
   this issue of Linux Gazette was posted, DynDNS added a web-based
   update system to its already existing methods. This means that you can
   update DynDNS manually, with your browser.
   
   It also gives us the opportunity to write another Perl client. This
   one can be much more compact, and should work "out of the box" with
   only one small hack required for your account information.
   
   If you want to use it, simply copy the text between the ---CUT---
   lines to a file, give yourself execute permission, and use it in place
   of the other client program.


---------------CUT------------------

#!/usr/bin/perl

#
# Client script for HTTP update of DynDNS's Dynamic Domain service.
# Written by Joe Merlino  12/31/98
# Licence: GNU GPL
#

use IO::Socket;

# Replace the values below with your information as indicated

$host = "master.dyndns.com";
$myhost = "myhost";             #replace with your hostname
$myname = "postmaster";
$mypass = "mypass";             #replace with your password


# This part opens a connection to DynDNS's web server.
$remote = IO::Socket::INET->new(
        Proto => "tcp",
        PeerAddr => "$host",
        PeerPort => "http(80)"
        )
        or die "couldn't open $host";

# This part sends an HTTP request containing your information.
print $remote "GET /dyndns/cgi/DynDNSWeb.cgi?name=$myname&passwd=$mypass&domain=$myhost&IP=AUTO HTTP/1.0\n\n";


#This part extracts and prints DynDNS's response.
#This part extracts and prints DynDNS's response.
while ($hrm = <$remote>) {

        if ($hrm =~ /UPDATE/) {
                $message = $hrm;
        }

        if ($hrm =~ /THERE/) {
                $message = $hrm;
        }
}

print "DynDNS: $message";

close $remote;
---------------CUT------------------
     _________________________________________________________________
   
                       Copyright  1999, Joe Merlino
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
        Setting Up a PPP/POP Dial-in Server Using Red Hat Linux 5.1
                                      
                               By Hassan Ali
     _________________________________________________________________
   
   DISCLAIMER:
   
   This worked for me. Your mileage may vary!
   
   OBJECTIVES:
   To install PPP and POP/IMAP services on a Red Hat Linux 5.1 server for
   dial-in users.
   
   TOOLS:
   Red Hat Linux 5.1 CDs
   
   ASSUMPTIONS:
   You have a PC with basic installation of Red Hat Linux 5.1 with a
   Linux kernel that supports IP forwarding.
     _________________________________________________________________
   
   STEP 1: Install "mgetty" (if not yet installed) from Red Hat 5.1 CD #1
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1. Login as "root", insert Red Hat 5.1 CD #1 in the CD-ROM drive and
       mount it using the command:

     # mount -t iso9660 /dev/hdb /mnt/cdrom
   (It is assumed that your CD-ROM drive is device /dev/hdb; if not,
       change it accordingly.)
    2. Get to the RPMS directory:

     # cd /mnt/cdrom/RedHat/RPMS
    3. Install "mgetty" rpm files:

     # rpm -Uvh mgetty*
   This will install mgetty and all its cousins, but who cares!! If you
       hate extended family, have your way and replace "mgetty*" with
       "mgetty-1.1.14-2.i386.rpm".
    4. At the end of /etc/mgetty+sendfax/mgetty.config file, add the
       following set of three lines for each serial port connected to a
       modem for dial-in users. Here is an example for /dev/ttyS1 and
       /dev/ttyC15:

     # For US Robotics Sportster 28.8 with speaker off
     port ttyS1
     init-chat "" ATZ OK AT&F1M0E1Q0S0=0 OK
     answer-chat "" ATA CONNECT \c \r

     # For Practical Peripheral 14.4 with fax disabled and prolonged
     # carrier wait time (90 sec)
     port ttyC15
     init-chat "" ATZ OK AT&F1M0E1Q0S0=0S7=90+FCLASS=0 OK
     answer-chat "" ATA CONNECT \c \r
   Notes:
         1. AT&F1 sets hardware flow-control mode on many modems. For
            other modems use appropriate initializations in the init-chat
            line.
         2. Just in case you wonder why I took as an example a ttyC15
            port; well, you may have such a port if you have a multiport
            serial card. If you need one, I recommend Cyclades cards.
    5. In /etc/mgetty+sendfax/login.config file, search for the line that
       starts with /AutoPPP/. Make sure that it is not commented (i.e.
       there is no "#" at the beginning of the line), and edit it to be:

     /AutoPPP/  -       a_ppp   /etc/ppp/ppplogin
   If you wish to have users' login names (rather than "a_ppp") to appear
       in the /var/run/utmp and /var/log/wtmp log files, then the above
       line should be:

     /AutoPPP/  -       -       /etc/ppp/ppplogin
    6. In /etc/inittab file, search for the section that runs "getty"
       processes and add at the end of that section one line of the
       following form for each modem port. Example here is given for
       ttyS1 and ttyC15.

     7:2345:respawn:/sbin/mgetty -x 3 ttyS1
     8:2345:respawn:/sbin/mgetty -x 3 ttyC15
   [The first field (7, 8) is an arbitrary ID (in fact I have seen in
       some cases "s1", "s2", etc., used instead). Just give a different
       ID to each port. And why not go in order!? Me wonders!]
    7. Connect the modems to the serial ports, switch them ON and then
       initialize "mgetty" with the command:

     # init q
   NOTE: If you spawn "mgetty" on a serial port with no modem connected
       to it, or the modem is not switched ON, you'll get lots of error
       messages in "/var/log/messages" or/and in the other mgetty
       ("/var/log/log_mg.ttyXX") log files. In fact those error messages
       may continuously pop up on your screen. Quite annoying, eh? To
       avoid this annoyance, each serial port that has no modem connected
       to it should have its corresponding lines commented out in
       /etc/inittab and in /etc/mgetty+sendfax/mgetty.config files.
     _________________________________________________________________
   
   STEP 2: Install PPP (if not installed) from Red Hat 5.1 CD #1
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1. If the Red Hat CD #1 is properly mounted (see STEP 1.1), to
       install PPP type the following command:

 # rpm -Uvh /mnt/cdrom/RedHat/RPMS/ppp*
    2. Edit /etc/ppp/options files to read as follows:

     -detach
     crtscts
     netmask 255.255.255.0
     asyncmap 0
     modem
     proxyarp
   NOTES:
         1. Use appropriate netmask for your network. It doesn't have to
             be 255.255.255.0; in fact, in my case it was 255.255.255.224.
         2. Read man pages for "pppd" to understand those options.
    3. Edit /etc/ppp/ppplogin file (create it if it doesn't exist) to
       read as follows:

     mesg n
     tty -echo
     /usr/sbin/pppd silent auth -chap +pap login
   Make the file executable using command:

     # chmod +x /etc/ppp/ppplogin
   NOTE: We're going to use PAP authentication BUT using the ordinary
       /etc/passwd password file. That's what "+pap login" means.
    4. For each serial port connected to a modem, create a corresponding
       /etc/ppp/options.ttyXX file, where "XX" is "S1" for ttyS1 port,
       "S2" for ttyS2 port, "C15" for ttyC15, etc. In one such file put
       the following line:

     myhost:ppp01
   where "myhost" is the hostname of the PPP server - change it
       accordingly to the actual hostname of your Linux box. If you're
       more forgetful than you can REMEMBER to admit, remind yourself of
       the hostname of your server using the "hostname" command.

     # hostname
   The word "ppp01" used above is just an arbitrarily chosen name for the
       virtual host associated with one of the PPP dial-in lines and its
       corresponding IP address as defined in /etc/hosts file (to be
       discussed later). In another /etc/ppp/options.ttyXX file, you may
       wish to type in the following line:

     myhost:ppp02
   That is, here you define a different PPP hostname, "ppp02". Use a
       different hostname for each serial port. You can choose any names
       that your lil' old heart desires! They don't have to be ppp01,
       ppp02, ppp03, etc. They can be "junkie", "newbie", "noname",
       whatever!
    5. Edit /etc/ppp/pap-secrets file and add one line as shown below for
       each IP address that is to be dynamically assigned to PPP dial-in
       users. This, of course, assumes that you have a pool of IP
       addresses that you can assign to your dial-in clients:

     # Secrets for authentication using PAP
     # client   server          secret          IP addresses
     *          *               ""              10.0.0.3
     *          *               ""              10.0.0.4
   This says: no PAP secrets (passwords) set for any client from anywhere
       in the world with the shown IP address. We don't need to use PAP
       secrets if we will be using /etc/passwd instead. If you are REALLY
       not paranoid, you can have just one following line that will serve
       all the IP addresses (yours and your neighbour's!):

     # Secrets for authentication using PAP
     # client   server          secret          IP addresses
     *          *               ""              *
    6. Make /usr/sbin/pppd program setuid "root" by using command:

     # chmod u+s /usr/sbin/pppd
    7. Edit /etc/hosts file to assign IP addresses to all PPP hostnames
       you used in STEP 2.4. Use the pool of IP addresses used in STEP
       2.5:

     10.0.0.3   ppp01   ppp01.mydomain.com
     10.0.0.4   ppp02   ppp02.mydomain.com
   NOTE: Replace "mydomain.com" with the actual domain name of your PPP
       server. Just in case you're confused, I assume your PPP server is
       "myhost.mydomain.com".
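   If you have many ports, the options.ttyXX files from STEP 2.4 can be
   generated with a short loop. This is just a sketch: "myhost" and the
   port list are placeholders, and it writes options.PORT.example files
   in the current directory rather than straight into /etc/ppp, so you
   can inspect them first:

```shell
#!/bin/sh
# Write one example options file per dial-in port, each naming a
# distinct virtual PPP host (ppp01, ppp02, ...).
i=1
for port in ttyS1 ttyC15; do
    printf 'myhost:ppp%02d\n' $i > options.$port.example
    i=`expr $i + 1`
done
cat options.ttyS1.example
```

   Once you are happy with the contents, copy each file into place as
   /etc/ppp/options.ttyXX.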
     _________________________________________________________________
   
   STEP 3: Install POP/IMAP servers (if not installed) from Red Hat 5.1
   CD #1
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   ~~~~
    1. With the Red Hat CD #1 properly mounted, issue the following
       command to install POP and IMAP:

     # rpm -Uvh /mnt/cdrom/RedHat/RPMS/imap*
    2. Check /etc/inetd.conf file to see if "pop-2", "pop-3", and "imap"
        service lines are all uncommented. If not, uncomment them (i.e.
       remove the leading "#"). If you only want to support POP3 clients,
       just uncomment the "pop-3" line. If POP2 and POP3 files are not in
       the "imap*" RPM file, try to see if you have "ipop*" RPM file and
       use it instead.
    3. Activate the new services by using command:

     # kill -HUP `cat /var/run/inetd.pid`
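   To double-check STEP 3.2, a small script (my own sketch, not part of
   the RPM; pass an alternate file as the first argument if you want to
   test it against something other than /etc/inetd.conf) can grep for the
   uncommented service lines:

```shell
#!/bin/sh
# Report whether the pop-2, pop-3, and imap lines in inetd.conf are
# active (i.e. present and not commented out with a leading "#").
conf=${1:-/etc/inetd.conf}
for svc in pop-2 pop-3 imap; do
    if grep -q "^$svc" "$conf" 2>/dev/null; then
        echo "$svc: enabled"
    else
        echo "$svc: NOT enabled"
    fi
done
```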
     _________________________________________________________________
   
   STEP 4: Enable IP forwarding
   ~~~~~~~~~~~~~~~~~~~~~~~~~~~
    1. If you use the already compiled Linux kernel that comes with Red
       Hat 5.1, it does normally have support for IP forwarding. If you
       compile your own Linux kernel, you have to enable "IP:
       forwarding/gatewaying" networking option during compilation. For
       RFC compliance, the default bootup process does not enable IP
       forwarding. Enable IP forwarding by setting it to "yes" in
       /etc/sysconfig/network file, like so:

     FORWARD_IPV4=yes
    2. Activate IP forwarding by using command:

     # echo "1" > /proc/net/ip_forward
   or by rebooting the system.
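   You can confirm the current state at any time by reading the flag
   back from the same /proc file, for example:

```shell
#!/bin/sh
# Print the kernel's IP forwarding state: "1" in the /proc file means
# forwarding is on; "0" (or a missing file) means it is off.
if [ "`cat /proc/net/ip_forward 2>/dev/null`" = "1" ]; then
    echo "IP forwarding is ON"
else
    echo "IP forwarding is OFF"
fi
```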
     _________________________________________________________________
   
   STEP 5: Test the server
   ~~~~~~~~~~~~~~~~~~~~~~~
    1. First create users (if not ready). You can give them
       "/home/username" home directory and "/bin/bash" login shell if you
       want them to have both "PPP" and shell access. Give them
       "/home/username" home directory and "/etc/ppp/ppplogin" login
       program if you want them to have PPP access but not shell access.
       It's better to use "usercfg" tool to set-up new users. Typical
       /etc/passwd file entries may be as follows:

     jodoe:tdgsHjBn/hkg.:509:509:John Doe:/home/jodoe:/bin/bash
     jadoe:t8j/MonJd9kxy:510:510:Jane Doe:/home/jadoe:/etc/ppp/ppplogin
   In this example, John Doe will have both PPP and shell access, while
        Jane Doe will only have PPP access. If you have just started to
        wonder how John Doe may have PPP access, the answer lies with the
        /AutoPPP/ configuration in "mgetty" - it does the magic. Any user
        who dials in and talks PPP will be handed the /etc/ppp/ppplogin
        program by mgetty.
        So, if John Doe dials in using the Windows 95 dial-up adaptor,
        which is set up to make a PPP connection, mgetty will give John
        Doe PPP access. If he dials in with any other communication
        software, e.g. HyperTerminal (with no PPP negotiation), he will
        be given the
       normal login shell. This will never happen for Jane Doe. She will
       always be welcome by the "/etc/ppp/ppplogin" program.
       In fact "mgetty" allows you to use the same modem lines for
       various protocols. For example, your UUCP clients (if you have
       any) may use the same modem lines as your PPP clients! Of course,
       you have to give your UUCP clients "/var/spool/uucppublic" home
       directory and "/usr/sbin/uucico" login program.
    2. Assuming you have a web server (Apache) already setup (it's a
       piece-a-cake to setup Apache), use a web browser, and a POP e-mail
       client (e.g Eudora) on a remote PC connected to a modem and a
       phone line. If it is a Windows 95/98 PC, setup the Dial-up Adaptor
       appropriately by specifying the IP address of the PPP server as
       the Gateway, use correct DNS IP address, and specify that the
       server will assign an IP address automatically. In the POP client
       (e.g Eudora), set SMTP and POP host as the IP address of the
       PPP/POP server.
       Now dial-up the server and wait for connection. Test out web
       browsing, and POP mail sending and receiving. If it doesn't
       work... something is wrong somewhere ;-)
     _________________________________________________________________
   
   REFERENCES:
   
   1. PPP-HOWTO
   2. NET-3-HOWTO
   3. "Using Linux", Bill Ball, published by Que (around US$30 - highly
      recommended)
   4. mgetty documentation
     _________________________________________________________________
   
                      Copyright  1999, Hassan O. Ali
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                        Touchpad Cures Inflammation
                                      
                               By Bill Bennet
     _________________________________________________________________
   
   Here are some reasons to go to a little touchpad for Linux:
    1. It will exercise your whole hand digit by digit.
    2. When you take the pressure off of your "clicker finger" your
       chronic carpal tunnel and joint soreness will go away in a few
       days.
    3. When you work without soreness, you will enjoy your Linux box even
       more than you do now.
    4. You can really truly do magic hand gestures to make your machine
       go.
    5. It works better in Linux than in the monopoly system!
       
            So, what make and model are we talking about? It is the Elite
   800 dpi EZ-Pointe serial touchpad by PC Concepts. I got mine at
   Computer Avenue (www.computeravenue.com). Developed for the
   Windows-Intel monopoly system, it comes with a diskette that holds the
   "drivers" for Microsoft's DOS and their 16-bit and 32-bit window
   managers.
   
             The DOS setup of the pad is simply a matter of putting the
   diskette in the floppy drive and copying the "drivers" over to your
   machine. You get the usual triple set of instructions: one for DOS,
   one for 16-bit gui DOS and one very "clickety" set of instructions for
   32-bit gui DOS 7.0.
   
             After about twenty minutes of fiddling and adjusting, you
   are back to square one: you have installed a serial pointing device.
   Yes, you can enter in some key bindings coupled with clicks of the
   primary and secondary switches. You can set up hotspots accessible
   from a set of keys and clicks of switches. When the play-time is done
   (about twenty minutes or so - depends how playful you feel), you have
   to reboot your machine to record the settings. Ok, no problem.
   
             Now you can use the pad. You will find that all of the
   fiddling and customizing was a waste of time, since you will just be
   doing same-old, same-old with a new pointing device. When it comes to
   RSI, you can get yourself a better mouse or you can get yourself a
   totally new pointing device.
   
             The time comes when you want to see this thing work in
   Linux. Then you realize that your DOS fiddlings with "drivers" will be
   impossible in Linux because all of the "drivers" are written for the
   Windows-Intel monopoly system. It makes me think.
   
             Where does the typical hardware problem start? Right! It
   starts with standard manufacturing procedure: you follow the market.
   So the typical hardware problem in Linux is about hardware that will
   work only when a set of mystical registers is set up; those settings
   which can only be set with DOS software. We all know that the majority
   of PC owners are forced to get DOS software when they buy their
   machine. All we need is hardware that works on its own interface (like
   your BIOS and CMOS at startup) and hardware that will accept signals
   from any software, as long as the signals are correct.
   
             The non-standard hardware that uses non-standard settings is
   often (too often) kept from us by manufacturers who force the signing
   of a NDA (non-disclosure agreement) in order to protect their secrets.
   Ask yourself: is it a secret because it is simple and elegant? My
   answer is that these companies are afraid of a certain big, bad wolf
   company that steals innovation; this same thief claims to be the
   leading innovator! It is no wonder, then, that certain hardware is not
   yet open to Open Source.
   
             We also need for the manufacturers to hear that Linuxians
   will purchase from "Linux friendly" companies first. First and
   foremost, the consumer can really influence the computer industry by
   supporting the protocols that are open and free for all users. So do
   not buy from a company that seeks to own the protocols and do buy from
   companies that adhere to the protocols as they have been established.
   
             Enter the serial port protocol for the serial mouse. A mouse
   is a mouse. Evidence for the DOJ: a "regular" mouse is a Microsoft
   mouse. If any of you folks think that there is no need to curb
   monopolies and their anti-competitive, locked in, exclusive contracts,
   remember this: a "regular" mouse used to be an Apple mouse. Apple had
   the home computer mouse first and they played fair. It is my
   contention that Apple needed to play a bit more hard-nosed. Just look
   at who "owns" (influences) them now.
   
             The defacto standard for a "regular" serial mouse is based
   on its ubiquitous placement as an accessory for the monopoly system
   PC. Besides, we users like pointing devices. For the sake of clarity,
   you even call a "regular" mouse or the touchpad a Microsoft mouse when
   you install Linux.
   
             Well, it is time to leave the pad plugged in and reboot the
   machine to Linux. Good. Let us see if it works without all of these
   DOS driver fiddlings. You wait. You hope. You curse the monopoly. Then
   it happens.
   
             gpm -t ms is running. You brush a digit across the pad. It's
   alive! Now for startx.
   
             The pad will work as a regular mouse in Linux without any of
   those annoying "drivers" because the Linux mouse config is ready for
   any serial mouse. No drivers. No fiddling. And the left and right
   buttons work just fine. In fact, it seems to me that the motion is
   smoother.
   
                        Give me a bit of skin anyday
                                      
             The standard procedure for operating a mouse is odd to watch
   if you look at it like a non-computer-familiar person. The operator
   holds the hand in readiness on top of the mouse. Whether you are a
   "micro-wrist-twitch" artist or a "full-shoulder-pusher" or a
   "swing-punch-twister" it all looks the same: your finger rests on the
   clicker and moves in one axis, making a tiny movement over and over.
   The term "clickfest" was coined as a derisive remark by some person
   with an aching "mouse wrist" and a sore "clicker finger", I'll
   bet.
   
             Enter the pad, man. Brush a finger, any finger across the
   smooth touch pad. Your cursor will follow. Skin is in. Try a knuckle.
   Any skin covered body part will do. Now do a little light tap on a
   menu button. It responds. Do a light double tap. This light double tap
   is now your new "clickety-click". You do have switches, and they make
   drag and drop a little easier. I prefer to do the light stroke thing
   at this time; it just is way too cool and human, if you know what I
   mean.
   
             Then you try some fine pointer movement, such as in
   xpainting or GIMP-ing. Wow! The finest single pixel motion is waiting
   for you with a touchpad. It is done with a "fingerprint rollover" of a
   fingertip; just like you get when they throw you in a holding cell at
   your local ticket giving outlet. You get good traction and positive
   one-to-one feedback from the pointer with none of that annoying
   mouse-ball slippage. The finishing touch is the drag and drop, where
   you can move to your target, take your digit away from the pad surface
   (the cursor stays put), move it to one edge of the pad, touch down and
   tap twice to light up the target, and drag your targeted item to its
   destination. It is just a pleasure to work with a touchpad.
   
             So that is it. No HOWTO is needed for this seriously fun way
   to point and click on your screen. Best of all, there is no fiddling
   with no damn "drivers".
   
             When your carpal tunnel soreness goes away you may once
   again be carefree and easy-going at your monitor. Do you suppose that
   flame wars are due to pain from mousing in addition to the pain in the
   usual place? Adios from this desktop.
     _________________________________________________________________
   
gpm note

             The gpm invocation for a two-button touchpad is gpm -t bare. It
   also works with gpm -t ms if you want or need three-button emulation.
     _________________________________________________________________
   
3 button emulation note

             As a three-button mouse emulator, the pad is very nice
   because the middle pair of left/right switches are about 1 millimeter
   apart and can easily be pressed together with one finger. The point is
   that you do not have to make any adjustments from your "regular" mouse
   setup, which is in /etc/X11/XF86Config in the pointer section. Just
   plug it in and make it go.
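   For reference, this is roughly what an XFree86 3.x pointer section
   looks like; the device path and protocol below are only illustrative,
   since they depend on how your pad is wired up:

```
Section "Pointer"
    Protocol    "PS/2"
    Device      "/dev/mouse"
    Emulate3Buttons
EndSection
```

   If you would rather have X do the three-button emulation instead of
   the pad's own switch pair, Emulate3Buttons is the relevant line.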
     _________________________________________________________________
   
rodent protocol

   To see the various rodent protocols, type "man mouse" to see the fine
   documentation.
     _________________________________________________________________
   
I was set up!

   XF86Setup is the graphical setter upper for your mouse and X Windows.
   
   Xconfigurator is the console/xterm setter upper for this same job.
   
   xf86config is the text-based setter upper -- pick a binary, any binary
     _________________________________________________________________
   
Reference reading:

   XFree86 HOWTO -- required reading for Linuxians -- see secret video
   timings
   
   3-Button-Mouse HOWTO -- you might have fun with this -- prep for
   surgery
   
   Loadlin+Win95 mini-HOWTO -- to beat the "DOS only" hardware trick
   
   "Loadlin.exe Installer", Linux Gazette issue #34, November, 1998 --
   step by step
     _________________________________________________________________
   
          made with Emacs 20.2.1 on an i486 with GNU/Linux 2.0.32
                                      
    The word damn is used to emphasize an adamant position and is in no
                way meant as an affront to sincere readers.
     _________________________________________________________________
   
                       Copyright  1999, Bill Bennet
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
        Through the Looking Glass: Finding Evidence of Your Cracker
                                      
                              By Chris Kuethe
     _________________________________________________________________
   
   You've subscribed to Bugtraq and The Happy Hacker list, bought
   yourself a copy of The Happy Hacker, and read The Cuckoo's Egg a few
   times. It's been a very merry Christmas, with the arrival of a cable
   modem and a load of cash for you, so you run out and go shopping to
   start your own hacker lab. A week later, you notice that one of your
   machines is being an especially slow slug and you've got no disk
   space. Guess what - you got cracked, and now it's time to clean up the
   mess. The only way to be sure you get it right is to restore from a
   clean backup - usually install media and canonical source - but let's
   see what the "h4x0r" left for us to study.
   
   In late October of this year, we experienced a rash of attacks on some
   workstations here at the University of Alberta's Department of
   Mathematical Sciences. Many of our faculty machines run RedHat 5.1
   (there's a good platform to learn how to try to secure...) since it's
   cheap and easy to install. Workstations are often dual-boot with
   Windows 95, but we'll be phasing that out as we get Citrix WinFrame
   installed. This paper is an analysis of the compromise of one
   professor's machine.
   
   One fine day I was informed that we'd just had another break-in, and
   it was time for me to show my bosses some magic. But like a skilled
   cardshark who's forced to use an unmarked deck, my advantage of being
   at the console had been tainted. Our cracker had used a decent rootkit
   and almost covered her tracks.
   
   In general, a rootkit is a collection of utilities a cracker will
   install in order to keep her root access. Things like versions of ps,
   ls, passwd, sh, and other fairly essential utilities will be replaced
   with versions containing back doors. In this way, the cracker can
   control how much evidence she leaves behind. ls gets replaced so that
   the cracker's files don't show up, and ps is modified so that her
   processes are not displayed either. Commonly a cracker will leave a
   sniffer and a backdoor hidden somewhere on your machine. Packet
   sniffers - programs that record network traffic and can be
   configured to filter for login names and passwords - are not part of a
   rootkit per se, but they are nearly as loved by hackers as a buggered
   copy of ls. Who wouldn't want to try to intercept other legitimate
   users' passwords?
   
   In nearly all cases, you can trust the copy of ls on the cracked box
   to lie like a rug. Don't bet on finding any suspicious files with it,
   and don't trust the filesizes or dates it reports; there's a reason
   why a rootkit binary is generally bigger than the real one, but we'll
   get there in a moment. In order to find anything interesting, you'll
   have to use find. Find is a clever version of 'ls -RalF | grep | grep
   | ... | grep'. It has a powerful matching syntax to allow precise
   specification of where to look and what to look for. I wasn't being
   picky - anything whose name began with a dot was worth looking at. The
   command:

find / -name ".*" -ls
   
   Sandwiched in the middle of a ton of useless temporary files and the
   usual '.thingrc' files (settings like MS-DOS's .ini) we found
   '/etc/rc.d/init.d/...'. Yes, with 3 dots. One dot by itself isn't
   suspicious, nor are two. Play around with DOS for about two seconds
   and you'll see why: '.' means "this directory" and '..' means "one
   directory up." They exist in every directory and are necessary for the
   proper operation of the file system. But '...' ? That has no special
   reason to exist.
   
   Well, it was getting late, and I was fried after a day of class and my
   contacts were drying up, so I listed /etc/rc.d/init.d/ to check for
   this object. Nada. Just the usual SysV / RH5.1 init files. To see who
   was lying, I changed directory to /tmp/foo, then echoed the current
   date into a file called '...' and tried ls on it. '...' was not found.
   I'd found the first rootkit binary: a copy of ls written to not show
   the name '...' . I will admit that find is another target to be
   compromised; in this case it was still clean and gave me some useful
   information.
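   That check is easy to replay on any box. A sketch, using a throwaway
   directory (mktemp is assumed to be available):

```shell
# work in a scratch directory so nothing real is disturbed
cd "$(mktemp -d)"

# create the suspicious name the cracker used: a file called '...'
date > '...'

# a clean ls shows it among the dotfiles; a rootkit ls silently hides it
ls -A

# an honest find reports it too, with full details
find . -name '...' -ls
```

   If ls stays silent while find speaks up, you have just caught your
   first rootkit binary.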
   
   Now that we knew that '...' was not part of a canonical distribution,
   I moved into it and had a look. There were only two files:
   linsniffer and tcp.log. I viewed tcp.log with more and made a list of
   the staff who would get some unhappy news. Ps didn't show the sniffer
   running, but ps should not be trusted in this case, so I had to check
   another way.
   
   We were running in tcsh, an enhanced C-syntax shell which supports
   asynchronous (background) job execution. I typed './linsniffer &' which
   told tcsh to run the program called linsniffer in this directory, and
   background it. Tcsh said that was job #1, with process ID 2640. Time
   for another ps - and no linsniffer. Well, that wasn't too shocking.
   Either ps was hacked or linsniffer changed its name to something else.
   The kicker: 'ps 2640' reported that there were no processes available.
   Good enough. Ps got cracked. This was the second rootkit binary. Kill
   the currently running sniffer.
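   The same sanity check works with any harmless process. On a modern ps
   the BSD-style 'ps 2640' is spelled 'ps -p 2640', but the idea is
   identical:

```shell
# background a harmless job, just as the shell did for linsniffer
sleep 60 &
pid=$!

# a clean ps can report on the process by PID; a trojaned ps comes up empty
ps -p "$pid"

# tidy up
kill "$pid"
```

   If the shell just handed you a PID and ps claims no such process
   exists, ps is lying.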
   
   Now we check the obvious: /etc/passwd. There were no strange entries
   and all the logins worked. That is, the passwords were unchanged. In
   fact the only weird thing was that the file had been modified earlier
   in the day. An invocation of last showed us that 'bomb' had logged in
   for a short time around 0235h. That time would prove to be
   significant. Ain't nobody here but us chickens, and none of us is
   called bomb...
   
   I went and got my crack-detection disk - a locked floppy with binaries
   I trust - and mounted the RedHat CD. I used my clean ls and found that
   the real ls was about 28K, while the rootkit one was over 130K! Would
   anyone like to explain to me what all those extra bytes are supposed
   to be? The 'file' program has our answer: ELF 32-bit LSB executable,
   Intel 80386, version 1, dynamically linked, not stripped. Aha! So when
   she compiled it, our scriptkiddie forgot to strip the file. That means
   that gcc left all its debugging info in the file. Indeed, stripping
   the program brings it down to 36K, which is about right for the
   extra functionality (hiding certain files) that was added.
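   Both checks are quick to run from a trusted toolset. The output
   wording of file varies between versions, and /bin/ls here is just a
   convenient example binary:

```shell
# file reports the binary format and whether the symbols were stripped
file /bin/ls

# raw byte counts make a padded rootkit binary stand out immediately
wc -c < /bin/ls
```

   Running strip on a copy (never on the evidence itself) and comparing
   sizes before and after tells you how much of the bloat was symbol
   table.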
   
   Remember how I mentioned that the increased filesize is important?
   This is where we find out why. First, new "features" have been added.
   Second, the binaries have verbose symbol tables, to aid debugging
   without having to include full debug code. And third, many
   scriptkiddies like to compile things with debugging enabled, thinking
   that it'll give them more debug-mode backdoors. Certainly our 'kiddie
   was naive enough to think so. Her copy of ls had a full symbol table
   and was compiled from /home/users/c/chlorine/fileutils-3.13/ls.c -
   which is useful info. We can fetch canonical distributions
   and compare those against what's installed to get another clue into
   what she may have damaged.
   
   I naively headed for the log files, which were, of course, nearly as
   pure as the driven snow. In fact the only evidence of a crack they
   held was a four day gap. Still, I did find out something useful: this
   box seemed to have TCP wrappers installed. OK, those must have failed
   somehow, since she got into our system. On RH51, the wrapped daemons
   live in /usr/sbin/in.*, so what's this in.sockd doing in /sbin? Being
   naughty, that's what. I ran in.sockd through strings, and found
   some very interesting strings indeed. I quote: You are being logged ,
   FUCK OFF , /bin/sh , Password: , backon . I doubt that this is part of
   an official RedHat release.
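   strings just pulls runs of printable characters out of a binary. If
   you don't trust the copy on the cracked box, its effect can be
   approximated with tr and grep from clean media; the blob below is a
   made-up stand-in for in.sockd:

```shell
# build a stand-in "binary" with some text buried in it
printf 'ELF\0\1\2junk\0Password: \0/bin/sh\0backon\0' > /tmp/fake.bin

# poor man's strings: turn non-printable bytes into newlines, then
# keep only runs of four or more printable characters
tr -c '[:print:]' '\n' < /tmp/fake.bin | grep -E '.{4,}'
```

   Any binary whose innards include Password: and /bin/sh, yet claims to
   be a stock network daemon, deserves a very hard stare.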
   
   I quickly checked the other TCP wrappers, and found that RedHat's
   in.rshd is 11K, and the one on the HD was 200K. OK, 2 bogus wrappers.
   It seems that, looking at the file dates, this cracked wrapper came
   out the day after RH51 was released. Spooky, huh?
   
   I noticed that these binaries, though dynamically linked, used
   libc5, not libc6 which we have. Sure, libc5 exists, but nothing, and I
   mean nothing at all, uses it. Pure backward-compatibility code.
   Checking the other suspect binaries showed that they too used libc5.
   That's where strings and grep (or a pager) get used.
   
   Now I'm getting bored of looking by hand, so let's narrow our search a
   little using find. Try everything in October of this year... I doubt
   our cracker was the patient sort - look at her mistakes so far - so
   she probably didn't get on before the beginning of the month. I don't
   claim to be a master of the find syntax, so I did this:

find / -xdev -ls | grep "Oct" | grep -v "19[89][0-7]" > octfiles.txt

   In English: start from the root, don't cross onto other filesystems,
   and print out all the file names. Pass this through a grep which filters
   everything except for "Oct" and then another grep to filter out years
   that I don't care about. Sure, the 80's produced some good music
   (Depeche Mode) and good code (UN*X / BSD) but this is not the time to
   study history.
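   The pipeline is easy to rehearse on a scratch tree; GNU touch -d is
   used here to fake the file dates:

```shell
# build a tiny tree: one file dated October 1998, one dated September
d=$(mktemp -d)
touch -d '1998-10-15 12:00' "$d/octfile"
touch -d '1998-09-01 12:00' "$d/sepfile"

# the same pipeline as above: only the October entry survives both greps
find "$d" -xdev -ls | grep "Oct" | grep -v "19[89][0-7]"
```

   The second grep throws away 1980-1987 and 1990-1997, leaving only
   October entries from 1988, 1989, and 1998 onward.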
   
   One of the files reported by the find was /sbin/in.sockd.
   Interestingly enough, ps said that there was one unnamed process with
   a low (76) process id owned by uid=0, gid=26904. That group is unknown
   on campus here - whose is it? And how did this file get run so early
   so as to get that low a PID? In.sockd has that uid/gid pair... funky.
   It has to get called from the init scripts since this process appears
   on startup, with a consistently low PID. Grepping the rc.sysinit file
   for in.sockd, the last 2 lines of the file are this:

#Start Socket Deamon
exec in.sockd

   Yeah, sure... That's not part of the normal install. And Deamon is
   spelled wrong. Should a spellchecker be included as a crack
   detector? Well, RedHat isn't famous for poor docs and tons of typos,
   but it is possible to add words to a dictionary. So our cracker tried
   to install a backdoor and tried to disguise it by stuffing it in with
   some related programs. This adds credibility to my theory that our
   cracker has so far confined her skills to net searches for premade
   exploits.
   
   The second daemon that was contaminated was rshd. About 10 times as
   big as the standard copy, it can't be up to anything but trouble. What
   does rsh mean here? RemoteSHell or RootShell? Your guess is as good as
   mine.
   
   So far what we've found are compromised versions of ls, ps, rshd,
   in.sockd, and the party's just beginning. I suggest that once you're
   finished reading this, you do a web search for rootkit and see how
   many you can scrounge up and defeat. You have to know what to look for
   in order to be able to remove it.
   
   While the log files had been all but wiped clean, the console still
   had some errors printed on it, quite a few after 0235h. One of these
   was a refusal to serve root access to / via nfs at 0246h. That
   coincided perfectly with the last access time to the NFS manpage. So
   our scriptkiddie found something neat, and she tried to mount this
   computer via NFS, but she didn't set it up properly. All crackers, I'd
   say, make mistakes. If they did everything perfectly we'd never notice
   them and there would be no problems. But it's the problems that arise
   from their flaws that cause us any amount of grief. So read your
   manuals. The more thoroughly you know your system, the more likely you
   are to notice abnormalities.
   
   One of the useful things (for stopping a cracker) about NFS is that if
   the server goes down, all the NFS clients with directories still
   mounted will hang. You'll have to power-cycle the machine to get it
   back. Hmmm. This presents an interesting tool opportunity: write a
   script to detect an NFS hack, and if a remote machine gets in,
   ifconfig that interface off. Granted, that presents a possible
   denial-of-service if authorized users get cut off. But it's useful if
   you don't want your workstation getting compromised.
   
   At this point I gave up. I learned what I'd set out to do - how to
   find the crap left behind by a cracker. Since the owner of this system
   had all her files on (removed) removable media there was no danger of
   them being in any way compromised. The ~janedoe directory was mounted
   off a Jaz disk which she took home at night, so I just dropped the CD
   into her drive and reinstalled. This is why you always keep user files
   on a separate partition, why you always keep backups and why it's a
   good plan to write down where to get the sources for things you
   downloaded, if you can't keep the original archives.
   
   Now that we've accumulated enough evidence and we're merely spirited
   sluggers pulverizing an equine cadaver, it's time to consider the
   appropriate response. Similar to Meinel's you-can-get-punched and
   you-can-go-to-jail warnings in The Happy Hacker, I would suggest that
   a vicious retaliatory hack is not appropriate. In Canada, the RCMP
   does actually have their collective head out of the sand. I am not a
   lawyer, so don't do anything based on these words except find a lawyer
   of your own. With that out of the way, suffice it to say that we're
   big on property protection here. Aside from finding a lawyer of your
   own, my advice here is for you to call the national police, whoever
   they are. People like the RCMP, FBI, BKA, MI-5 and KGB probably don't
   mind a friendly phone call, especially if you're calling to see how
   you can become a better law-abiding citizen. Chances are, you'll get
   some really good tips, or at least some handy references. And of
   course you'll know someone who'll help you prosecute.
   
   My communication with RCMP's Commercial Crimes unit (that includes
   theft of computing and/or network services) can be summarized as
   follows: E-mail has no expectation of privacy. You wish email was a
   secret, but wake up and realize that it's riskier than a postcard. As
   systems administrator, you can do anything you want with your computer
   - since it's your responsibility either because you own it or because
   you are its appointed custodian - so long as you warn the users first.
   So I can monitor each and every byte all of my users send or receive,
   since they've been warned verbally, electronically and in writing, of
   my intent to do so. My browse of the FBI's website shows similar
   things. But that was only browsing. Don't run afoul of provincial or
   state laws regulating the interception of electronic communication
   either.
   
   NOTE: While I have attempted to make this reconstruction of events as
   accurate as possible, there's always a chance I might have misread a
   log entry, or have misinterpreted something. Further, this article is
   solely my opinion, and should not be read as the official position of
   my employer.
   
   Appendix A: Programs you want in a crack-detection kit
     * find, ps, ls, cp, rm, mv
     * gdb, nm, strings, file, strip
     * (GNU)tar, gzip, grep
     * less / more
     * vi / pico
     * tcsh / bash / csh / sh
     * mount
       
   For security reasons these should all be statically linked.
   
   Appendix B: References
   
   WinFrame:
   http://www.citrix.com/
   
   RedHat 5.1:
   http://www.redhat.com/
   
   Security advisories and exploits:
   http://www.rootshell.com/
   http://www.netspace.org/lsv-archive/bugtraq.html
   
   About the filesystem:
   McKusick, M.K., Joy, W.N., Leffler, S.J., Fabry, R.S., "A Fast File
   System for UNIX", Unix System Manager's Manual, Computer Systems
   Research Group, Berkeley. SMM-14, April 1986
   
   LEA and Computer Crime:
   http://www.rcmp-grc.gc.ca/html/cpu-cri.htm
   http://www.fbi.gov/programs/compcrim.htm
     _________________________________________________________________
   
                       Copyright  1999, Chris Kuethe
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                   USENIX LISA Vendor Exhibit Trip Report
                                      
                             By Paul L. Lussier
     _________________________________________________________________
   
   Thu, 10 Dec 1998
   I went into Boston yesterday, 09 December, for the Vendor Exhibit at
   the LISA conference. My most immediate and overwhelming feeling was
   one of major disappointment in myself for not having pushed on my
   management to send me to this conference :( It looks like it's a
   really great conference. Alas, I'm trying to be positive and look at
   it from the point of view of "Why waste a trip to your own backyard
   that might be better spent traveling elsewhere :)"
   
   Anyway, the vendor exhibit was fantastic, though my guess is it's only
   really good for those of us who do hard-core sysadmin'ing for a
   living. The average Linux enthusiast might have been bored, since it
   really is nothing more than a lot of vendors hawking their wares
   (Though *everyone* would have enjoyed all the free stuff :)
   
   Ironically, the one major vendor who was conspicuously absent was Sun.
   All the other vendors were there, Network Appliance, Auspex, Compaq
   (never did see maddog though), IBM, SGI (at least I saw a booth with
   an INDY in it).
   
   There were a lot of what I call "Want-Ad" booths too: Collective
   Technologies (formerly Pencom System Administration), Sprint Paranet,
   Fidelity, and several other companies there for the sole reason of
   trying to recruit people.
   
   The Open Source contingent was there in full force with booths for
   RedHat, OpenBSD, the Free Software Foundation, etc. There were several
   booths from various software companies, most of whom I've heard of,
   and even several I haven't.
   
   I spent a lot of time talking to various companies for things directly
   related to my needs here at work, and got in some personal geek talk
   re: Linux as well.
   
   I stopped by the RedHat booth, and was kind of disappointed. They just
   didn't seem excited to be there. Maybe it was because I keep too
   up-to-date on them and they had *absolutely nothing* new to tell me
   that I didn't already know. I got the distinct impression they were
   tired of being there. It very well could have been that they wanted to
   be talking to those who aren't yet converted to Linux, but instead
   kept getting inundated with the RH fan club :) I don't think they've
   adjusted to being on top of the world yet. I heard someone come by and
   say, "Hey, we're planning on another 10 RH Linux servers in the next
   month or so!" The RH response was an un-enthusiastic "Oh, that's cool." As
   if they had heard the same thing all day long, and really didn't want
   to hear it anymore. I don't think they knew how to deal with their
   success. It could also have been that this particular guy was one of
   the RH developers, not a PR/Marketing person.
   
   I spoke with a guy at the OpenBSD booth; I think it was Theo de Raadt
   himself. I mentioned I had tried to get the latest release from
   amazon.com last week, which, according to the OpenBSD site, is selling
   it. Yet amazon doesn't have any mention of 2.4, only 2.3. He got
   really upset at that, mentioning that *they* had sent him an e-mail
   the same day as the 2.4 release announcement stating they had already
   gotten 170 requests for it. His response was "Well then fix your
   web page. You just lost $1700US. They all bought it off the OpenBSD
   site!" So, needless to say, I'll be getting 2.4 directly from them :)
   
   There were three software booths I stopped at that really intrigued me.
   First there was Aurora Software from Pelham, NH (I think). Their
   product is called SARCheck. It's for Solaris, and it's a front end
   reporting mechanism for ps and SAR. Supposedly it assists in
   performance monitoring and tuning by taking the output of ps and sar,
   translating it into English, and then making recommendations on what
   to change, why, and how. I think the software is $150 per system, not
   per CPU (this means that I can use it on my 14-processor Sun E4500, and
   only pay $150). This sounds really good, and I'm hoping to be able to
   play with it real soon.
   
   The next company was Shpink Software (yes, really!:) . Their product
   is the Network Shell (nsh). This looks *really, really, really cool*.
   In short, it's a client/server system where you can 'cd' to a UNC path
   on another machine. This differs greatly from NFS in that nsh has the
   ability to *execute commands* on the remote system. For example, say I
   have 3 systems, a Linux box, an NT box, and a Solaris box. From my
   Linux system I can:

        linux> tar cvf //solaris/foo.tar //nt/users

   or:
        linux> cd //solaris/etc
        linux> vi passwd

   Basically, nsh removes the need for rlogin/telnet sessions to a system
   and provides for heavily encrypted sessions, user/machine ACLs, and
   many other niceties. The price is incredibly reasonable at $150 per
   seat. The advised way of using nsh is to set up a limited number of
   machines as "administration" hosts, and run the server daemon wherever
   else you need to. Nsh comes with Perl modules to allow access
   from perl programs, and works on all major versions of Unix/Linux,
   with the nsh daemon available for W95/NT.
   
   Now, for the last, but one of the neatest! Spiderplant. This is an
   environmental monitoring gizmo that can connect to the serial port of
   any system. In short, you can designate any system as an environment
   monitoring station and connect this little black box to your serial
   port. It costs $100 for "The little black box" and 1 probe, $15 extra
   for each additional probe. The software is Open Source so you can hack
   it to your heart's content :) Here are the vital stats:

              Temperature Range:
                    -55/+125 C in 0.1 C.
              Accuracy:
                    0.5 C.
              Sensors:
                    15 (or more) per device, 16 devices per serial line.
              Data Connection:
                    RS-232, 1200 baud, 8,N,1.
                    DB9 or DB25 connector to computer.
              Size:
                    Main unit measures 3.5" x 2.25" x 1".
                    Comes with 14-foot serial cable, 10-foot probe cable.
              Certification:
                    Complies with FCC rules part 15
                    (Class B, for home or office use, US and Canada).

   Here are the URLs for the products mentioned:
        http://www.openbsd.org/
        http://www.sarcheck.com/
        http://www.shpink.com/
        http://www.spiderplant.com/

   Hopefully someone will provide a trip report of the rest of the LISA
   conference for those of us unfortunate enough to have missed it.
   
   --
   Paul
     _________________________________________________________________
   
                       Copyright  1999, Paul Lussier
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
               X Windows versus Windows 95/98/NT: No contest
                                      
                           By Paul Gregory Cooper
     _________________________________________________________________
   
   In the December issue of the Linux Journal, Sergio Martinez wrote in
   asking for a (quick) article about the differences between X and
   Windows 95/98/NT (w95) -- see below. This is my attempt to answer his
   questions - I remember asking similar things when I started using UNIX
   4 years ago. [More answers can be found in the 2 Cent Tips Column.
   --Editor] I've tried to aim this article at the 'Linux newbie', and as
   I am not an X hacker, and have never been a w95 hacker, there may well be
   inaccuracies, but I have tried to capture the ideas and spirit of X
   (and w95, such as it has any). I would be pleased to hear from Xperts
   and newbies alike.
   
   Sergio has asked questions relating to GNOME and KDE and for the most
   part I treat them as equivalent (in the same way I'm treating all
   window managers as equivalent). I should state now that I prefer using
   GNOME over KDE, irrespective of the ongoing KDE / Open Source debate,
   hence I have more experience in GNOME than in KDE. This too may lead
   to inaccuracies.
   
   Mail criticisms to pgc@maths.warwick.ac.uk.
     _________________________________________________________________
   
   This is Sergio's mail:
   
   I'm just writing in with an idea for a quick article. I've been using
   the GNOME desktop. I'm a relative Linux newbie though, and I think
   that many of your less experienced readers could probably benefit from
   a short article about window managers. These are some things I
   currently don't quite understand:
   
   1.Terminology: The differences (if any) among a GUI, a window manager,
   a desktop, and an interface. How do they differ from X windows?
   
   2.Do all window managers (like GNOME or KDE or FVWM95) run on top of X
   windows?
   
   3.What exactly does it mean for an application to be GNOME or KDE
   aware? What happens if it's not? Can you still run it?
   
   4.What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
   do?
   
   5.How does the history of Linux (or UNIX) window managers compare to
   that of say, the desktop given to Win98/95 users? How, specifically,
   does Microsoft limit consumer's choices by giving them just one kind
   of desktop, supposedly one designed for ease of use?
   
   6.What's happening with Common Desktop Environment? Is it correct that
   it's not widely adopted among Linux users because it's a resource hog,
   or not open source?
   
   These are some questions that might make an enlightening, short
   article. Thank you for your consideration.
   
   -- Sergio E. Martinez
     _________________________________________________________________
   
   Before I try to answer each point I'll try to give a quick intro into
   X-windows.
   
   Think of X as just another program. When you type startx what happens
   is that X starts (shock!), in the background, and runs the .xinitrc
   file. The .xinitrc tells X what programs to start once X itself has
   started - more on this later. (Some systems use the .Xclients file
   instead of .xinitrc - I'll just say .xinitrc.)
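   A minimal .xinitrc might look like the sketch below; the clients and
   window manager are only examples. The important detail is that the
   window manager is exec'd last, so the whole X session ends when it
   exits:

```
xterm -geometry 80x24+0+0 &
xclock &
exec fvwm2
```

   Everything before the exec line is started in the background; the
   final program becomes the session itself.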
   
   So X is just another program but what does it do? Roughly speaking X
   takes control of the screen from the command line and provides the
   system with the ability to create windows and communicate with them.
   Basically that's ALL it does - decorating, moving, resizing, focus,
   etc, (i.e. managing the windows X provides) is left to the window
   manager.
   
   What's clever about X is that it uses a client/server model and is
   network transparent. This is a bunch of jargon - what does it mean?
   
   When you type startx you are starting the X-server, when you run an
   'X-application', e.g. netscape, this is a client to which the X-server
   gives a window. Similarly xterm is an X-app which puts the command
   line in a window.
   
   Network transparency doesn't mean much if you're not networked so
   let's suppose the computer you started X on is called fourier and is
   networked. Now a program on any computer on the network can ask the
   X-server on fourier to start a window for it (on fourier), for
   instance from fourier you could telnet to cauchy (another computer on
   your network) and run netscape and have the netscape window appear on
   your screen (connected to fourier).
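   Under the hood this is just the DISPLAY environment variable (fourier
   and cauchy are the hostnames from the example above); an X client
   reads DISPLAY, or a -display option, to decide which X-server to
   contact:

```shell
# on cauchy, after telnetting over from fourier:
# point clients at the first display of fourier's X-server
DISPLAY=fourier:0.0
export DISPLAY

# any X client started now draws on fourier's screen, e.g.
#   netscape &
echo "$DISPLAY"
```

   The X-server must also be willing to accept the connection; access
   control (xhost and friends) is a story of its own.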
   
   In fact it works the other way round too - an X-server can have many
   screens (grouped into what X calls displays) connected to it at once -
   all different - and those screens can be at the other end of a network.
   This goes back to (one of) the original design purpose(s) of X which
   was for X-terminals, i.e. things that looked like a computer but were
   no more than a screen, some memory, a bios, and a network card
   connected to one (or many) UNIX mainframe(s). See this page for
   details of how to turn old 386/486's into xterminals.
   
   1.Terminology: The differences (if any) among a GUI, a window manager,
   a desktop, and an interface. How do they differ from X windows?
   
   Ok, so we have more jargon - I hope I get this right ;-). An interface
   is the way in which a piece of software interacts with the user. UNIX
   commands use a command-line interface (CLI) whereas X-applications use
   a graphical user interface (GUI). However, different applications tend
   to take different approaches to the GUI; for instance, when you select
   a menu, does one click bring the menu up (e.g. netscape) or do you
   have to hold the mouse button down (e.g. ghostview)? What GNOME, KDE,
   and W95 try to provide is a consistent GUI across all applications, or
   at least across the common parts of applications, e.g. menus, file
   selection, window controls, scrollbars, etc. See the GUI hall of
   fame/shame for examples of good/bad GUI design (in a Windows
   environment).
   
   As was mentioned above, a window manager takes over where X leaves off
   - that is, controlling the windows X provides. Window managers usually
   give you a lot more than just the ability to move, resize, or iconify
   windows. Many also provide virtual desktops, taskbars, themes, app
   managers, etc. See Window managers for X for a list of most, if not
   all, wm's.
   
   Desktop has (as far as I can tell) two usages. We use 'the desktop' to
   refer to the background part of the screen. GNOME, KDE, W95, and MacOS
   all 'provide a desktop', meaning the background is more than just a
   canvas for a nice picture - it acts like any other directory in the
   system. Technically all this means is that you can place files on it.
   However, these may be data (like a letter to gran) or programs (e.g.
   netscape, emacs, etc.). Usually this 'background as directory'
   philosophy is coupled with a graphical file manager, so that when you
   (double) click on a file, either it runs (if it's a program) or a
   suitable program is started to read the data in the file. In this
   context 'desktop' can also include a GUI, so that when people say that
   all Linux/UNIX is missing is a 'desktop', what they mean is a
   consistent design for the common parts of programs, a graphical file
   manager, and the ability to leave files littered on the desktop ;-)
   
   2.Do all window managers (like GNOME or KDE or FVWM95) run on top of X
   windows?
   
   I like to think of the window manager (fvwm95, Window Maker, etc.) and
   the desktop (GNOME or KDE) as running in conjunction with X - but this
   is just semantics. The window manager and/or desktop is started (in
   the .xinitrc file) after X has started.
   
   The traditional (i.e. pre-KDE/GNOME) setup of .xinitrc (after some
   environment settings) is to start some xterms and a window manager, so
   the last lines of .xinitrc might look like:

xterm &
xterm &
fvwm95

   The window manager is the last thing started by .xinitrc and when the
   wm exits, .xinitrc finishes and then X terminates.
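   Putting the pieces together, a complete traditional .xinitrc might
   look like the sketch below. This is a plausible example rather than
   anyone's actual file: clients started with & run in the background,
   while the window manager runs in the foreground so that .xinitrc (and
   hence X) ends when it exits.

```shell
#!/bin/sh
# Hypothetical traditional .xinitrc (pre-GNOME/KDE style).

# Environment settings first:
xrdb -merge "$HOME/.Xresources"   # load X resources (fonts, colours)
xsetroot -solid steelblue         # plain background colour

# Clients go in the background with &:
xterm &
xterm &
xclock &

# The window manager runs in the foreground; when it exits, .xinitrc
# finishes and the X session ends. exec replaces the shell with the
# wm, saving one process.
exec fvwm95
```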
   
   If you were using GNOME, the last few lines of .xinitrc would instead
   be:

fvwm95 &
gnome-session

   And for KDE it would be:
startkde

   As before, GNOME (KDE) is the last thing started by .xinitrc, and so
   when you log out of GNOME (KDE), gnome-session (startkde) terminates,
   .xinitrc finishes, and then X terminates.
   
   In both these examples the xterms are left out, because GNOME and KDE
   provide session management, which means any applications left running
   when the session ends are restarted the next time you start up.
   Windows has some session management too.
   
   See the next answer as to why the window manager is started for GNOME
   but not for KDE.
   
   3.What exactly does it mean for an application to be GNOME or KDE
   aware? What happens if it's not? Can you still run it?
   
   AFAIK an application is a GNOME (KDE) application if it conforms to
   the GNOME (KDE) GUI guidelines/specifications and uses the Gtk+ (Qt)
   libraries. All this means is that GNOME apps use Gtk+ to build menus,
   buttons, scroll bars, file selectors, etc., and they do so in a
   consistent way (as defined by the GNOME team), e.g. all menus are left
   justified, all apps have a File menu as the left-most menu, etc. The
   same goes for KDE, except they use the Qt library from Troll Tech (and
   possibly a different set of design guidelines).
   
   Any GNOME app will run provided you have Gtk+ (and the other GNOME
   libraries) installed, and similarly any KDE app will run so long as
   you have Qt (and the other KDE libraries) installed - you do not have
   to be running GNOME/KDE to use a GNOME/KDE application. The only other
   additional thing GNOME/KDE apps may have is drag-and-drop awareness,
   e.g. in GNOME you can drag a JPG from a GMC (file manager) window into
   an ElectricEyes (graphics viewer) window and ElectricEyes will display
   that file. You can do similar things in KDE.
   
   GNOME and KDE have different attitudes to window managers. KDE prefers
   to work with its own window manager, kwm, while GNOME is 'window
   manager agnostic' - well, those are the 'party lines'. You can get
   other wm's to work with KDE (so I'm told), and GNOME should work with
   any wm but prefers one that is ICCCM compliant and 'GNOME aware'. I'm
   not sure exactly what this entails, but I know the only totally
   compliant wm is Enlightenment DR0.15 (which is only available through
   CVS at the moment), followed by icewm, with blackbox and windowmaker a
   little way behind. I think the KDE team are working towards making KDE
   less dependent on kwm and defining what a KDE wm should be.
   
   4.What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
   do?
   
   Whoops - I think I answered this above. Gtk+ and Qt are toolkits for
   building menus, buttons, scrollbars, dialog boxes, and loads more.
   
   5.How does the history of Linux (or UNIX) window managers compare to
   that of say, the desktop given to Win98/95 users? How, specifically,
   does Microsoft limit consumer's choices by giving them just one kind
   of desktop, supposedly one designed for ease of use?
   
   I'm not sure I understand this question, let alone know how to answer
   it, so instead I'll answer what I think you might be asking, which is:
   what's the difference between UNIX + X and W95/98/NT?
   
   The first thing to point out is the component nature of the UNIX
   approach to a GUI/desktop. First we have the OS itself, in our case
   Linux; on top of that we have the window system, X; in conjunction
   with that we have the window manager, fvwm (for example); and in
   conjunction with these two we have the desktop/GUI, either GNOME or
   KDE. This follows the general UNIX philosophy of building small tools
   that interact with each other in well defined ways. It may seem
   shambolic, but it is a strength: any one of the pieces can be swapped
   out, which gives the user lots of choice (perhaps too much), and also
   allows for technological improvements. For instance, X is just one
   windowing system and may not last forever (gasp!). There are others,
   e.g. the Hungry Programmers' Y.
   
   This also gives the user the choice of which window manager or desktop
   to use, or in fact whether to use windows and desktops at all - it may
   seem strange but some people prefer the command line, and others use X
   and a window manager but don't like GNOME or KDE.
   
   Windows 95/98/NT, on the other hand, is a different kettle of fish.
   Here the OS, GUI, window manager, and desktop aren't clearly separated
   (as in UNIX) but are all rolled into one. Thus you have whatever
   choice Microsoft happens to give you, i.e. Windows themes.
   
   For Microsoft this is an advantage: it stops people butting in and
   rewriting parts of their OS, which could potentially lose them money.
   For instance, they realized that with the old Windows 2/3.1 you could
   simply replace MS-DOS with another compatible DOS, such as DR-DOS from
   Caldera. In an ongoing court case Caldera alleges that MS added code
   to Windows to make it seem like there was a bug in DR-DOS. With 9*/NT
   being all rolled into one, there is no need to resort to such tactics.
   
   IMO the W95 desktop is inferior because the user is limited to one
   design, whereas on a Linux system there is a wm + desktop to suit just
   about everybody (including those who want neither a wm nor a desktop).
   
   6.What's happening with Common Desktop Environment? Is it correct that
   it's not widely adopted among Linux users because it's a resource hog,
   or not open source?
   
   It's not widely adopted because it is commercial, not open source, a
   resource hog, has security problems (Red Hat stopped selling it for
   this reason), and is IMHO outdated.
     _________________________________________________________________
   
                   Copyright  1999, Paul Gregory Cooper
            Published in Issue 36 of Linux Gazette, January 1999
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
                          Linux Gazette Back Page
                                      
           Copyright  1999 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
                              Copying License.
     _________________________________________________________________
   
  Contents:
  
     * About This Month's Authors
     * Not Linux
     _________________________________________________________________
   
                         About This Month's Authors
     _________________________________________________________________
   
    Hassan Ali
    
   Hassan holds a Ph.D. in numerical techniques applied to
   electromagnetics from the University of Ottawa, Canada. He presently
   works with Nortel Networks in Ottawa as a specialist in software tools
   used to predict signal integrity and electromagnetic compatibility
   (EMC) on printed circuit boards. Having been introduced to Linux by a
   friend about two years ago, he has never stopped having fun with it.
   Hassan loves to write about whatever little he knows, for others to
   learn from or to correct him.
   
    Larry Ayers
    
   Larry lives on a small farm in northern Missouri, where he is
   currently engaged in building a timber-frame house for his family. He
   operates a portable band-saw mill, does general woodworking, plays the
   fiddle and searches for rare prairie plants, as well as growing
   shiitake mushrooms. He is also struggling with configuring a Usenet
   news server for his local ISP.
   
    Bill Bennet
    
   Bill, the ComputerHelperGuy, lives in Selkirk, Manitoba, Canada; the
   "Catfish Capital of North America", if not the world. He is on the
   Internet at www.chguy.net. He tells us "I have been a PC user since
   1983 when I got my start as a Radio Shack manager. After five years in
   the trenches, I went into business for myself. Now happily divorced
   from reality, I live next to my Linux box and sell and support GPL
   distributions of all major Linux flavours. I was a beta tester for the
   PC version of Playmaker Football and I play `pentium-required' games
   on the i486. I want to help Linux become a great success in the gaming
   world, since that will be how Linux will take over the desktop from
   DOS." It is hard to believe that his five years of university was only
   good for fostering creative writing skills.
   
    John Blair
    
   John currently works as a software engineer at Cobalt Microserver.
   When he's not hacking Cobalt's cute blue Qube, he's hanging out with
   his wife Rachel and newborn son Ethan. John is also the author of
   Samba: Integrating UNIX and Windows, published by SSC.
   
    Bryan Patrick Coleman
    
   Bryan attends the University of North Carolina at Greensboro, where he
   is pursuing a B.S. in both Computer Science and Anthropology. He has
   been involved with Linux since 1994, and helped found the Triad Linux
   Users Group located in central North Carolina. His future
   plans are for a PhD in computer science and a career where he can use
   Linux.
   
    Paul Cooper
    
   Paul is a Ph.D. student at the Mathematics Institute, Warwick
   University. To help finance his studies he also works on the
   department's computer support team, mostly writing documentation. His
   main interest outside of Maths and Linux is American Football, in
   particular playing for the university team, the Warwick Wolves.
   
    Jurgen Defurne
    
   Jurgen is an analyst/programmer at a financial company (Y2K and Euro).
   He became interested in microprocessors 18 years ago, when he first
   saw the TRS-80 in the Tandy (Radio Shack) catalog. He read all he
   could find about microprocessors, which was then mostly confined to
   the 8080/8088/Z80. The only thing he could do back then was write
   programs in assembler, without even having a computer. When he was 18,
   he gathered enough money to buy his first computer, the Sinclair ZX
   Spectrum. He studied electronics and learned programming mostly on his
   own. He has worked with several languages (C, C++, xBase/Clipper,
   COBOL, FORTH) and several different systems in different areas:
   programming of test equipment, single- and multi-user databases in
   quality control and customer support, and PLCs in an aluminium
   foundry/milling factory.
   
    Jim Dennis
    
   Jim is the proprietor of Starshine Technical Services. His
   professional experience includes work in the technical support,
   quality assurance, and information services (MIS) departments of
   software companies like Quarterdeck, Symantec/Peter Norton Group, and
   McAfee Associates, as well as positions (field service rep) with
   smaller VARs. He's been using Linux since version 0.99p10 and is an
   active participant on an ever-changing list of mailing lists and
   newsgroups. He's just started collaborating on the second edition of a
   book on Unix systems administration. Jim is an avid science fiction
   fan, and was married at the World Science Fiction Convention in
   Anaheim.
   
    Vivek Haldar
    
   Vivek is a third year BTech student at the Indian Institute of
   Technology, and has been using Linux for the past two years, both at
   home and college.
   
    Ron Jenkins
    
   Ron has over 20 years experience in RF design, satellite systems, and
   UNIX/NT administration. He currently resides in Central Missouri where
   he will be spending the next 6 to 8 months recovering from knee
   surgery and looking for some telecommuting work. Ron is married and
   has two stepchildren.
   
    Gustavo Larriera
    
   Gustavo teaches database courses at Universitario Autonomo del Sur
   (Montevideo, Uruguay). He is also the webmaster of the only official
   mirror site of Linux Gazette in his country
   [http://www.silab.ei.edu.uy/lg/]. He is an average Linux user and also
   a Microsoft Certified Professional in NT. He hopes that is not
   considered a great disadvantage :-)
   
    Joe Merlino
    
   Joe Merlino is a library assistant at Georgia Tech. He lives with his
   wife in Athens, Georgia. Consequently, he spends a lot of time in the
   car, where he thinks up projects to try on his linux box.
     _________________________________________________________________
   
                                 Not Linux
     _________________________________________________________________
   
   Thanks to all our authors, not just the ones above, but also those who
   wrote giving us their tips and tricks and making suggestions. Thanks
   also to our new mirror sites.
   
   With the Holidays, I took a week vacation to go back to Houston and
   visit family and friends. I had a wonderful time. My grandchildren are
   smarter and more beautiful each time I see them. I have a new picture
   of Sarah and Rebecca on my home page.
   
   My best friend, Benegene, had arranged a get-together with three
   friends from our university (Baylor) days. We had a lot of fun
   catching up and talking about old times. It was amazing how different
   the memories were that stood out in each of our minds. Only goes to
   prove that what one person finds remarkable may be quite "ho hum" to
   the next. Ah well, it was a good evening and one I hope will be
   repeated.
   
   Have fun!
     _________________________________________________________________
   
   Marjorie L. Richardson
   Editor, Linux Gazette, gazette@ssc.com
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back 
     _________________________________________________________________
   
   Linux Gazette Issue 36, January 1999, http://www.linuxgazette.com
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
   
   
