
           Linux Gazette... making Linux just a little more fun!
                                      
         Copyright (c) 1996-98 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
                       Welcome to Linux Gazette! (tm)
     _________________________________________________________________
   
                                 Published by:
                                       
                               Linux Journal
     _________________________________________________________________
   
                                 Sponsored by:
                                       
                                   InfoMagic
                                       
                                   S.u.S.E.
                                       
                                    Red Hat
                                       
                                   LinuxMall
                                       
                                Linux Resources
                                       
                                    Mozilla
                                       
                                   cyclades
                                       
   Our sponsors make financial contributions toward the costs of
   publishing Linux Gazette. If you would like to become a sponsor of LG,
   e-mail us at sponsor@ssc.com.
   
   Linux Gazette is a non-commercial, freely available publication and
   will remain that way. Show your support by using the products of our
   sponsors and publisher.
     _________________________________________________________________
   
                             Table of Contents
                           October 1998 Issue #33
     _________________________________________________________________
   
     * The Front Page
     * The MailBag
          + Help Wanted
          + General Mail
     * More 2 Cent Tips
     * News Bytes
          + News in General
          + Software Announcements
     * The Answer Guy, by James T. Dennis
     * CHAOS Part 2: Readying System Software, by Alex Vrenios
     * Creating a Linux Certification and Training Program, by Dan York
     * DialMon: The Linux/Windows diald Monitor, by Mike Richardson
     * The Fifth International Linux Congress, by John Kacur
     * Fun with Client/Server Computing, by Damir Naden
     * Gnat and Linux: C++ and Java Under Fire, by Ken O. Burtch
      * Graphics Muse, by Michael J. Hammel
      * Heroes and Friends--Linux Comes of Age, by Jim Schweizer
      * Linux Installation Primer: X Configuration, by Ron Jenkins
      * New Release Reviews, by Larry Ayers
           + DICT and Word Inspector
           + Pysol: Python-Powered Solitaire
           + Another Typing Tutor
      * Mechanical CAD for Linux, by Damir Naden
      * The Proper Image for Linux, by Randolph Bentson
      * Serializing Web Application Requests, by Colin C. Wilson
      * Thoughts about Linux, by Jurgen Defurne
      * Using the Xbase DBMS in a Linux Environment, by Gary Kunkel
      * Book Review: Website Automation Toolkit, by Andrew Johnson
      * The Back Page
           + About This Month's Authors
           + Not Linux
     _________________________________________________________________
   
   TWDT 1 (text)
   TWDT 2 (HTML)
   are files containing the entire issue: one in text format, one in
   HTML. They are provided strictly as a way to save the contents as one
   file for later printing in the format of your choice; there is no
   guarantee of working links in the HTML version.
     _________________________________________________________________
   
   Got any great ideas for improvements? Send your comments, criticisms,
   suggestions and ideas.
      _________________________________________________________________
   
                                The Mailbag!
                                      
                    Write the Gazette at gazette@ssc.com
                                      
                                 Contents:
                                      
     * Help Wanted -- Article Ideas
     * General Mail
     _________________________________________________________________
   
                        Help Wanted -- Article Ideas
     _________________________________________________________________
   
   Date: Tue, 08 Sep 1998 11:02:29 +0000
   From: Kyrre Aalerud, kyrrea@student.matnat.uio.no
   Subject: Minilinux fails to load X11
   
   I am out of ideas...
    I am trying to get Mini-Linux to load the accompanying X11, but I
    get an error about some directory or file that does not exist, and an
    "Unexpected signal 13" error... What am I forgetting? Is there
    anything special I have to load to get the D.. thing to work?
    
    PS: I can't find any CD-ROM devices either... (I have looked in /etc
    and everywhere else.)
   
   h.e.l.p.....
   
   Kyrre
     _________________________________________________________________
   
   Date: Sun, 06 Sep 1998 23:29:09 -0400
   From: Nathaniel Smith, slow@negia.net
    Subject: Lost newbie
   
    I find it hard to believe that everyone thinks all people know how
    to operate Linux perfectly, as if we were born with this information.
    That must be the case, because I cannot find a site on the web that
    teaches you how to operate Linux (and I am desperate to find one). I
    have run into twelve people using Windows 95 and 98 who would like to
    try Linux but can't find out how to operate it. There is a real good
    deal at Best Buy on Red Hat Linux, so I bought it and a new Western
    Digital hard drive to put it on. While trying to find somewhere that
    teaches Linux, I came upon an article that says you can have Linux
    and Windows on the same computer while learning Linux, and after
    learning you can delete Windows. Sooooooo how about giving us
    articles on how to utilize this great OS, and help the hundreds of
    us poor lost souls who are desperate? Thank you. Nathaniel, alias
    poor lost desperate newbie
     _________________________________________________________________
   
   Date: Thu, 03 Sep 1998 15:04:43 -0600
   From: Hugh Shane, hughs@tetonvalley.net
   Subject: Booting from LS120 disk drives
   
   I know this information is out there somewhere, but I'd like to hear
   from anyone who has successfully gotten an x86 Linux machine to boot
   from an LS120 disk drive.
   
   Hugh
     _________________________________________________________________
   
   Date: Wed, 02 Sep 1998 23:01:25 +0800
   From: Lye On Siong, oslye@pacific.net.sg
   Subject: some qn
   
   Just like to ask a few questions.
   
    My CD-ROM is on /dev/hdd. When I want to mount it, it tells me that
    it's not a block device. (Previously, it was running fine; I don't
    know what happened.)
   
   How can my Linux kernel support PPP? How do I recompile my kernel to
   make it work?
   
   Johnny
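
      (Two hedged pointers, offered as a sketch rather than a definitive
      answer: if mount complains that /dev/hdd is not a block device, the
      device node itself may have been damaged; and PPP support is a
      kernel build option. The commands below assume a 2.0.x kernel
      source tree under /usr/src/linux and must be run as root. --Editor)

```shell
# /dev/hdd (slave drive on the second IDE channel) should be a block
# device with major number 22, minor number 64:
ls -l /dev/hdd
# If the node is missing or wrong, recreate it:
#   mknod /dev/hdd b 22 64

# To add PPP support, enable "PPP (point-to-point)" under "Network
# device support" when configuring the kernel, then rebuild:
cd /usr/src/linux
make menuconfig        # set PPP to "y" (or "m" for a module)
make dep clean zImage
make modules modules_install
# finally point lilo at the new kernel image and rerun /sbin/lilo
```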
     _________________________________________________________________
   
   Date: Thu, 10 Sep 1998 02:03:46 +0530 (IST)
   From: M Anand, manand@bhaskara.ee.iisc.ernet.in
   Subject: proxy
   
   How do I set the proxy server for lynx and irc in Red-Hat Linux
   5.1/SuSE Linux 5.1?
   
   Anand
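
      (One approach, as a sketch: lynx honors the standard proxy
      environment variables, so no lynx-specific configuration is needed.
      The proxy host and port below are placeholders for your own site's
      server; whether a given IRC client honors these variables varies,
      so check its documentation. --Editor)

```shell
# Set and export the proxy variables before starting lynx (sh syntax);
# the host and port are illustrative placeholders:
http_proxy="http://proxy.example.com:8080/"
ftp_proxy="http://proxy.example.com:8080/"
export http_proxy ftp_proxy
# lynx will now fetch pages through the proxy, e.g.:
#   lynx http://www.ssc.com/lg/
```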
     _________________________________________________________________
   
   Date: Thu, 17 Sep 1998 01:25:44 PDT
   From: parmentier remy, parmentier_remy@hotmail.com
   Subject: Help : Modem + HP
   
    I am close to committing suicide!
    I have already spent hours trying to get my Supra336 PnP internal
    modem and my HP DeskJet 720C working under Linux!
    The result is always the same: no communication with the modem and no
    page printed on the HP printer!
    Could someone help me? I am close to giving up!
    Thank you for answering. (I use the Red Hat 5.1 distribution.)
     _________________________________________________________________
   
   Date: Tue, 15 Sep 1998 13:35:01 -0400
   From: Taylor Sutherland, taylors@boone.net
   Subject: Canon BJC-250 question
   
    I have a Canon BJC-250 color printer. I have heard many people say
    that the BJC-600 printer driver will let me print in color, but I
    have not heard anyone say where I can get such a driver. I have
    looked everywhere but cannot find it. Can you help me?
   
   Thank you.
   Taylor Sutherland
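
      (One common route, offered as a sketch: the BJC-600 driver people
      mention is a Ghostscript output device, shipped with Ghostscript
      itself rather than as a separate download, so there is nothing
      extra to fetch. The file names below are illustrative. --Editor)

```shell
# Check whether your Ghostscript was built with the bjc600 device:
gs -h | grep bjc600

# Convert a PostScript file to BJC output (file names illustrative):
gs -sDEVICE=bjc600 -r360 -sOutputFile=out.bjc -dNOPAUSE -dBATCH input.ps
# then send out.bjc to the printer, e.g.:
#   lpr out.bjc
```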
     _________________________________________________________________
   
   Date: Tue, 25 Aug 1998 18:39:03 -0600 (CST)
   From: Dion Rowney, rowney@enterprise.usask.ca
   Subject: Article Suggestion
   
    I just had a nasty problem this morning. I had recompiled my kernel
    the night before and forgot to tell LILO where it was. In the morning
    I found the machine hung at the "Loading linux..." prompt. My idea
    would be an article on getting around this problem: perhaps a little
    about how LILO knows where the boot kernel is, and how to recover
    easily from this mistake (a good idea, since as usual I chose the
    difficult way).
    
    Just an idea, because I felt like a tool; I had no idea how it could
    be fixed, aside from reinstalling or upgrading using the boot
    install disks.
   
   Thanks,
   Dion Rowney
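
      (A sketch of the usual safety net, with illustrative paths: LILO
      records the kernel's physical disk location in the boot sector, so
      /sbin/lilo must be rerun after every kernel build, and keeping the
      old kernel as a second boot entry gives you a way back. --Editor)

```shell
# Fragment of /etc/lilo.conf (paths illustrative):
#
#   image=/boot/vmlinuz           # freshly built kernel
#       label=linux
#   image=/boot/vmlinuz.old      # known-good kernel, kept as fallback
#       label=old
#
# After editing lilo.conf or installing a new kernel, always rerun
# (as root):
/sbin/lilo
# At the "LILO boot:" prompt, type "old" to boot the fallback kernel.
```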
     _________________________________________________________________
   
   Date: Tue, 22 Sep 1998 11:51:10 +0200
   From: Jan Jansta, ftx@rainside.sk
   Subject: Problem mounting vfat filesystem ...
   
    I have a persistent problem mounting any vfat/DOS file system with
    write permissions for all users on my Linux machine. I'm using Red
    Hat 5.1, kernel version 2.0.34.
    
    I've already tried:
 
mount -t vfat -o mode=0777 /dev/hdb1 /dos
 
    I've also tried to change the permissions on /dos with:
 
chmod 777 /dos
 
    That didn't work either.
    
    Does someone know what's going wrong?
   
   thanx
   Jan
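
      (A guess at the culprit, offered as a sketch: the vfat driver has
      no mode= option; world write access is normally granted with the
      umask, uid and gid mount options instead, and chmod on the mount
      point has no effect because vfat permissions are synthesized from
      the mount options. The device and mount point below are taken from
      the letter; adjust for your system and run as root. --Editor)

```shell
# Mount with a fully permissive umask so all users can read and write:
mount -t vfat -o umask=000 /dev/hdb1 /dos

# Or make it permanent with an /etc/fstab entry:
#   /dev/hdb1   /dos   vfat   umask=000   0 0
```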
     _________________________________________________________________
   
                                General Mail
     _________________________________________________________________
   
   Last month I printed a letter from Hugo van der Kooij in which he
   asked me to quit using the word "Damn" in the Table of Contents of
   Linux Gazette. I said I would put it to a vote. Well, I received quite
   a bit of mail on this issue, and the vote was essentially 6 to 1 in
   favor of keeping this word.
   
   That said, I intend to renege on my statement that I would abide by
   the vote. Much of the mail I received is not printable, and some of it
    is quite entertaining. The best, most well-thought-out answer I
   received is printed directly below, and this letter alone convinced me
   that I should accede to Hugo's request. From now on I intend to call
   that section containing the entire issue TWDT -- this is the best
   compromise I could think of. We all know what TWDT stands for, it will
   just not be printed there. Newcomers may be a bit confused but they'll
   survive.
   
   Enough said. This is my final decision, so please don't write asking
   me to change my mind. As many reminded me, we have more important
   things to spend our time considering, such as helping others to learn
   and love Linux as we do.
   
   Marjorie Richardson, Overseer, Editor and now Ruler of Linux Gazette
   :-)
   gazette@ssc.com
     _________________________________________________________________
   
   Date: Wed, 2 Sep 1998 15:12:55 +0800
   From: Mark Harrison, markh@usai.asiainfo.com
   Subject: Drop the "Damn"
   
   Given his e-mail address, there is a reasonable chance that Hugo van
   der Kooij may be a member of the Dutch Reformed Church, probably one
   of the most strict Protestant denominations.
   
   They are generally quite excellent people (most of the Dutch nationals
   imprisoned by the Nazis for sheltering Jews were in this denomination,
   following their [correct] interpretation of the Bible.). They are also
   very strict in observing proper behavior, such as no swearing.
   
   I don't advocate a wholesale removal of the various naughty words from
   the culture (The title of Audie Murphy's famous book summed up his
   experiences perfectly), but for this case, I see no harm in dropping
   the offending word.
   
   Mark Harrison, Beijing, China
     _________________________________________________________________
   
   Date: Wed, 09 Sep 1998 14:03:04 +0200
   From: Sean Mota, smota@polar.es
   Subject: links between identical sections
   
   Now and then I've found myself reading an article in an issue of the
   gazette and thinking of a past article that I read in a previous
   issue, both belonging to the same section (normally the Graphics
    Muse). Since I would like to read that past article again and I never
    remember in which issue it was published, I have to go to the main
    page, select an issue and view the table of contents, and finally
    click on the section I'm interested in. It would be much quicker if
   "last"/"next" buttons between articles of different issues but
   belonging to the same section were available. That way, if I were
   reading the Graphics Muse's article of this month and he mentions
   something about OpenGL, I might remember there was an article on this
   subject (OpenGL) a couple of past issues; then, with the aid of the
   "last" button, I would start reviewing past articles of the Graphics
   Muse until I found the one I was interested in.
   
   Maybe this is a bit complicated to implement, but I think it would
   certainly be a great improvement. Another application would be: a
   quicker way to find an article belonging to a certain section whose
   subject is not listed in the table of contents. The search engine of
   the gazette is only available online.
   
    Thanks for the marvelous job you're doing with the Gazette.
   
   Sean Mota
   
     (This is a good suggestion and one I have gotten before. It is
     actually on my list of things to do. I'll try to find time for it
     sooner rather than later. --Editor)
     _________________________________________________________________
   
   Date: Tue, 8 Sep 1998 23:43:49 -0400
   From: "Michael Longval", mlongval@interlinx.qc.ca
   Subject: Linux installation not easy.
   
    As a computer user and technology observer for the past 20 years, I
    fear the domination of the tech sector by one very large corporation,
    aka Microsoft. We are, alas, left at the mercy of a company known not
    for the quality of its products, but rather for the intensity of its
    marketing.
    
    Windows 98 works OK for me, but I'm frustrated by its instability.
   
    I have installed Red Hat 5.0 on my IBM ThinkPad 380, but can't get
    the X Windows part up and running; I'm left with just the shell
    prompt.
    
    I have looked at the manuals and checked the newsgroups and web
    sites, but still can't get the X Windows parts up and running. I'm
    not a tech dummy; I've played with complicated systems before and
    understand C, Rexx, Pascal, Delphi, and others.
   
   However I'm still stranded. So I still use Windows 98...
   
   The day I can easily boot up Linux to a STANDARD GUI DESKTOP is the
   day I'll start thinking about switching. Unfortunately for me that day
   has not arrived yet.
   
   Michael J. Longval M.D.
     _________________________________________________________________
   
   Date: Tue, 8 Sep 1998 23:33:36 -0400
   From: "Chris Bruner", cbruner@ionline.net
   Subject: support problems
   
   I purchased the Red Hat brand of Linux chiefly because of the 90 day
    installation support. In a nutshell: at first I was told some very
    basic things which I had already tried; then, when I asked whether an
    alternative was a viable solution (recompiling the kernel with PnP
    built in), I was told that my problem was no longer covered under the
    installation support. I still don't have sound, and as for my other
    open tickets, only one other was responded to (after weeks) and I
    haven't heard back on the rest. So I'm not on the Internet yet, I
    still have no sound, and
   I'll never recommend Red Hat to anyone because of their support.
   
   Chris Bruner
     _________________________________________________________________
   
   Date: Tue, 01 Sep 1998 21:08:59 +0000
   From: Trey, abelew@wesleyan.edu
   Subject: Linux Desktop
   
   I was flipping through the recent Linux Gazette and noted the article
   about Linux on the desktop. I thought perhaps I should chime in as I
   have had a purely Linux system sitting upon my desk now for well over
   a year and would not have it any other way.
   
   Ashton Trey Belew
     _________________________________________________________________
   
   Date: Tue, 15 Sep 1998 16:01:11 +0100
   From: Peter Houppermans, envelope@pobox.com
   Subject: Linux acceptance
   
    I've seen quite a number of letters stating that to improve Linux
    acceptance it should have an easier to use GUI and so on.
   
   I'm not sure I'd agree entirely with this. The point where Linux is
   making inroads is not in the desktop arena. I'll most likely attract
   lots of flames for this, but Microsoft has done a reasonable job in
   making their desktop products useful, and easy to use. How many people
   need the manual with Word or Excel ?
   
   Sure, it crashes frequently for some people, but for a large number of
   users it doesn't matter because they shut down the machine at the end
   of the day, conveniently saving slow memory leaks from exposure. And I
   have a W95 system that tends to get rebooted every two weeks, just to
   clear it up. No need to do it more often. So that community has zero
   interest in an alternative, other than for cost saving reasons. To
   convince those people you'll have to give them something that is
    nearly as easy to use, at a lower cost--and that includes staff costs
    for setting it up. What is needed here is a way of actually
    restricting the richness of the X Windows interface, so users don't
    get the chance to shoot themselves in both feet, which would reduce
    support needs. I'm sure it is possible, but there has been no
    concerted effort towards this idea. KDE, GNOME and Enlightenment are
    extremely impressive efforts, but they enrich the setup rather than
    lock it down for Johnny EndUser, who just wants to run his word
    processor. Give them a command
   line and they'll panic ;-(...
   
   Where Linux *IS* making a difference is in the server arena. If a
   desktop crashes it affects one (1) user, if a server crashes it takes
   everyone down who's connected. Instantly, the impact on productivity
   is amplified. What creates reluctance to accept Linux as an
   alternative is the lack of people to shout at if it goes wrong. Also,
   there are only now a few companies that offer a Service Level
   Agreement on support for Linux, and lack of support is a very nervous
   thing if you run mission critical applications. Yes, I agree with many
   that the main issue is not support, but not having a need for it, but
   one has to deal with disaster recovery as well, and overall system
    management. Only now has CA brought out some management modules for
    Linux (to make Linux systems visible in Unicenter TNG). And I'm not
   aware of any HP OpenView MIBs for Linux (if there are I'd be very
   happy to hear of them and I'd like to see both of these packages
   themselves run on Linux).
   
   Any company that wants to use Linux wholesale will want to manage it,
   and until hard commercial tools are there this won't happen unless
   through the back door.
   
   I would be very happy to see an alternative to NT, if only just for
    keeping MS on their toes. Linux is well on its way to becoming that
    alternative, but I'm not sure it is entirely there yet. Support from
   SUN, Oracle, CA and Netscape makes a difference, but it takes more
   than that to change a corporate strategy. Case studies where Linux is
   shown to be a viable Enterprise OS with the associated cost savings,
   improved reliability, manageability and all that goes with being a
   grown up OS will do more to convince the board than any other
   well-meant effort.
   
   Just an observation....
   
   For the record:
   
   I myself use Red Hat Linux 5.1 on most of my home systems (except the
   one W95 box) and on my Toshiba 480CDT (HOWTO web page appearing
   shortly), and I've used virtually every version of Windows and DOS
   since DoubleDOS appeared, and all versions of OS/2 since v2. I've been
   a Linux user for about 6 years, having had no previous exposure to
   Real Operating Systems <g>. So I'm not an expert, but I'm not entirely
   clueless either ;-).
   
   Regards, Peter
     _________________________________________________________________
   
   Date: Wed, 23 Sep 1998 15:31:40 +0200
   From: Ian Carr-de Avelon, ian@emit.pl
   Subject: GUI and novices
   
   This is my response to the letters by James Mitchell (Sep 98) and
    Antony Chesser (Aug 98). Well designed GUIs speed up the learning
    process because the user can see what is possible. The user may have
    no idea what the icon of scissors will do, or even recognize that
    they are scissors, but if there is a button, you learn very quickly
    that you can click on it with the mouse, so let's give it a try.
    That simple piece of knowledge, that buttons can be pressed, will get
    you quite a way in a GUI. Knowing that you could use "<esc>d5" in vi
    will not take you nearly as far. Not only novices benefit; it is also
   a major help to users who work with a program only occasionally.
   Finding the button which does "that" is easier than remembering a
   sequence of keys. Microsoft have added standardization. You click on
   the little x button and the program stops. A command line program
   could require you to type: end, quit, exit, bye... etc. Even with a
   foreign language version of Windows you can normally manage a few
   things, just because the layout is standard. I run a local ISP so I
   have used Linux daily for over 2 years, almost exclusively in command
   line mode. I understand its strengths but I can still recognize the
   problems which other users would have. Possibly that is because I
   visit clients to help them with their problems, or maybe it is because
   I worked as a teacher and later as a designer of educational material.
    At any rate, I can see that Linux is not yet a real option for most
    users, and anyone who cannot see this should offer a few hours of
    their time to support new users; the revelation would come quite
    soon.
   
   This is a truth which I find quite painful to take, because there is
    nothing about the Linux OS which makes it so. The installation does
    not have to end with a # prompt, and Linux has not just one but
    several GUIs available, any of which could be used in a consistent
    way by well designed programs. Although Microsoft have done more work
    in that
    respect, they are as far from being the best there could ever be as
    their OS is in other ways. Many people who really want to see Linux
   being more widely adopted feel that this does not matter. Linux is
   being adopted for server applications and they hope that that will be
   enough to get people to make the effort to learn how to use it. My
    feeling is that most users choose NT because it looks like 95, which
    they have on their workstations. Linux needs to be selectable for
    basic office tasks before it will be widely accepted. Maybe Linux
    Gazette should run a competition for the best GPL suite for novice
    users:
      * Window Manager
      * email program
      * Browser
      * Editor
      * Spread Sheet
        
    A small novice package which could be included in most distributions
    and start up at boot time, or alternatively with a standard command
    like "desktop", would make it much easier to say to clients whose
    Win95 has died again, "Why don't you let me install Linux for you?"
    Yours, Ian
     _________________________________________________________________
   
   Date: Wed, 23 Sep 1998 14:24:23 +0200
   From: Stefan Zandburg, szandb@cis.HZeeland.nl
   Subject: text browsers
   
    I have just read some of the Linux Gazette. It contains quite a bunch
    of useful information, but on many pages some of that information is
    difficult or impossible for me to read.
   
    The reason is that <B> bold text </B> is hardly visible in the
    browser I use (lynx 2.7.2 beta; alternatively, an even older
    version). The machine that acts as a terminal to the Novell server
    only has a monochrome screen. As you may have concluded from my
    mentioning the server, it is beyond my abilities as a user to install
    a graphical browser. I wish to read the Linux Gazette though, and
    cannot do that on my home computer because I do not have an Internet
    connection there.
    
    If you'd use other tags, like the italic tags <I>..</I> or the font
    tags <FONT SIZE=+1>...</FONT>, people like me would be able to read
    your Gazette. The browser ignores unknown tags, but it does support
    the bold tags and displays them awkwardly.
    
    Here at our institute nearly 5000 students use the same browser to
    regularly visit the web. Although we would all prefer a graphical
    browser, that is not likely to happen within a reasonable time.
    Using the other tags would, however, be only a small effort for you.
   
   Stefan Zandburg
   
     I sympathize with you, but bold and italic are used for two
     different purposes. If I always used italics, the difference in
     emphasis would not be apparent. There is also the problem that most
     articles come to me already tagged and I don't have the time to
     change them. I will think about this though and see what I can come
     up with. I mainly use bold for the subject lines of letters. That I
     can change easily. Consider it done. --Editor 
     _________________________________________________________________
   
   Date: Mon, 21 Sep 1998 14:34:18 EDT
   From: Bobnhlinux@aol.com
   Subject: Linux is the #1 OS on the Internet
   
   Many of you may have seen these results, but I hadn't seen anything on
   any of these lists, so here it is:
   
   Based on surveys of 810,000 European Internet servers, the Linux
   Operating System is the most used OS on the Internet. Three different
   categories were polled, web servers, FTP servers, and news servers.
   Not only was Linux number one in each category, but there wasn't even
   a consistent number two. Linux's market share went from 25.7% for news
   servers, to 26.9% for web servers, to 33.7% for FTP servers. In order
   to get a number two position in web servers and FTP servers, Windows
   95/98 was lumped together with Windows NT. They aren't the same
   system. For news servers, Solaris came in second.
   
   To get to the survey details, go to:
   http://www.hzo.cubenet.de/ioscount/
     _________________________________________________________________
   
   Date: Fri, 25 Sep 1998 08:48:10 -0500 (CDT)
   From: eanna@kc.net
   Subject: WilberWorks
   
    I ordered the GIMP CD from WilberWorks quite some time ago and have
    heard nothing. E-mails have been ignored; I am getting ready to
    actually call them. I wonder if others have had trouble with them? At
    their web site their FAQ includes several questions from people
    wondering where their CDs are--but those are fairly old, so either
    people wised up (except me) or they improved.
   
   Thanks--
   Jim Clark eanna@kc.net
     _________________________________________________________________
   
   Date: Mon, 21 Sep 1998 22:06:58 -0700
   From: Ken Linder, KLinder2@nos.com
   Subject: YMGP (Yet More Good Press)
   
    More mainstream press! And in a rather high-brow weekly CEO/CIO type
    periodical. The September 21st, 1998 issue of Computer World has it
    on page 34 in their "Computer World Quick Study" column. Very well
    done, IMPO. It also references Red Hat and Linux Journal.
    
    With it in this paper, hopefully the CIOs and CEOs will start talking
    with their technical people, trying to find out more about this OS.
    Normally when I see the CEO heading towards me, I try to find
    somewhere to hide, but if he wants to ask about Linux, hey... I can
    talk to him as long as he likes!
   
   Later...
   Ken
     _________________________________________________________________
   
   Date: Tue, 29 Sep 1998 13:31:01 -0400
   From: David Nelson, nelson@er.doe.gov
   Subject: In Praise of Wabi
   
   With Wabi selling for $45 or less, I wanted to share my satisfaction
   with this product in case anyone else is interested. I have been
   running WIN 3.1 and Wabi on top of Linux for about five months with
   very good results. It lets me use several Win 3.1 (16 bit)
   applications, primarily Quicken 4 and MS Office 4.2, that previously
   forced rebooting into DOS. I am running a 200 MHz Pentium with 32M of
   memory. No problems with memory (about 13MB to run Quicken, WIN 3.1,
   and Wabi) and only a small speed hit (20-30%) on calculation intensive
   operations. I use the printer, floppy, and modem under Wabi, but no
   sound, as advertised. Wabi has limited printer drivers, but if your
   Linux is set up to print Postscript, using Ghostscript drivers for
   your printer, it will work fine. My Powerpoint viewgraphs, including
   art, look identical under Wabi, printing to Postscript and under
   Win95, printing directly to PCL. The Windows clipboard works as
   expected, and in addition I can cut and paste between Windows and X
   Window applications.
   
   Wabi accesses my application and data files in the DOS/Win95
   partition, so I could convert transparently from DOS over to Wabi -- a
   nice trick for Wabi to look through Linux back to the DOS file system.
   Though I haven't tried it, I expect I could see files on my other
   networked computers using SAMBA. My total extra disk space is 12MB for
   Wabi, and 24MB for WIN 3.1 files. You need a copy of WIN 3.1, WIN
   3.11, or WIN for Workgroups in addition to Wabi. WIN95 won't work. As
   a bonus, you can run Windows applications remotely using an
   X-terminal, such as another Linux box. This is like Citrix Winframe,
   but a heck of a lot cheaper.
   
   Is it a perfect fit? Not quite. I have a formatting problem printing
   checks from Quicken on my ancient FX80 dot matrix printer, and there
   are a few quirks such as a disappearing cursor and "bleed through"
   from background windows in Quicken. But I consider these minor
   nuisances that don't reduce utility. Sure, I can't use 32 bit Win
   apps, and some might say that Quicken 4 and MS Office 4.2 are ancient.
   But I have Quicken 96, 97, and 98 as well as Office 97 sitting on my
   shelf. I tried them and for my needs there was no more useful
   functionality, just more bloat and glitz. You make your own decision;
   I found $45 a good deal.
   
   David B. Nelson
     _________________________________________________________________
   
             Published in Linux Gazette Issue 33, October 1998
     _________________________________________________________________
   
   
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
     _________________________________________________________________
   
                              More 2 Cent Tips!
                                      
               Send Linux Tips and Tricks to gazette@ssc.com 
     _________________________________________________________________
   
  Contents:
  
     * Newbie Help Redux (1)
     * Re: Help Wanted : newbie (2)
     * Clearing the Screen (1)
     * Re: simultaneous versions of kernels
     * Question about your Linux Gazette post
     * COBOL Compilers for Linux
     * Resetting the term (2)
     * Re: Help Wanted : newbie (3)
     * 2c tip -- more fun with pipes
     * 2 cents tip: Un-tar as you download
     * Re: Help Wanted: Looking for an Xwin Server software that runs
       under Win95/NT
     * Re: Help wanted for a (Cheap) COBOL compiler for Linux
     * Re: Clearing the Screen (3)
     * Unix Tip
     * rc.local Tip
     * Yet another method of resetting scrambled terminal after dumping
       binary data.
     * Rick's quick and dirty screen-saver
     * MS Word & Netscape
     * Pulling Files from NT
     * Re: The wisdom of US West...
     * RE: Clearing the Screen (4)
     * Re: Keeping track of your config files
     _________________________________________________________________
   
  Newbie Help Redux (1)
  
   Date: Tue, 01 Sep 1998 10:50:21 -0500
   From: Mike Hammel, mjhammel@fastlane.net
   
   Quick answers to get you started:
   
   1. I have grown fat and lazy with Win 98 and find myself looking for
   "Display Properties" and such. I'm very familiar with C and such and
   am not afraid of hacking scripts or the like, but my problem is thus:
   Where is a (succinct) list of what gets run when, from where, and why.
   I'd love to tweak everything if only I could find it.
   
   A. Take a look at /etc/rc or possibly /etc/rcX.d, where X is 1,2,3,
   etc. I don't have RH5.1 but I think it uses the System V init system,
   so these directories should exist. If so, this is where you find the
   scripts that get run at boot time. For more details, you should look
   into the "init" tool. I suspect this is covered in depth in some of
   the newer Linux system management texts. It's not hard to understand,
   really. There are different run levels, and scripts for specific run
   levels get run at start up to get things going and again at shutdown
   to bring them down again.
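   As a sketch of the convention described above (the directory and
   script names below are made up for illustration, not taken from any
   particular distribution):

```shell
# Sketch of the SysV rc directory convention: at entry to a run
# level, init runs the S* scripts with "start"; at exit, the K*
# scripts are run with "stop". Illustrated here on a throwaway
# directory rather than a real /etc/rcX.d.
demo=$(mktemp -d)
touch "$demo/S10network" "$demo/S85httpd" "$demo/K20nfs"
for script in "$demo"/S*; do
    echo "would run: $script start"
done
rm -rf "$demo"
```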
   
   2. I have something called an "Ensoniq Audio PCI" sound card with
   "legacy emulation" I don't even know how to begin to get this thing
   working. What are the first steps in enabling hardware?
   
   A. Commercial solution: http://www.4front-tech.com. This is a
   commercial sound driver but don't fret - it's only $20 and works like
   a champ right out of the box. I have it and have had zero problems.
   I've suggested it to a few other folks and they all seemed to like it
   too. There is a non-commercial version of this same set of drivers
   available for Linux too. But I punted on it when I heard about the
   commercial driver.
   
   3. Where do I get information on mounting drives?
   
   A. mount -t fat32 /mount_pt_dir or possibly mount -t vfat
   /mount_pt_dir. I don't use MS on my box so can't remember which one of
   these works with FAT32 partitions but I'm fairly sure one of them
   does. In any case, other folks are likely to respond with more
   detailed answers on this one.
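   For reference, the usual form is "mount -t vfat <device>
   <mount_point>"; as a concrete sketch (the device name and mount
   point below are only examples, not from the letter), an /etc/fstab
   entry for a FAT32 partition might look like:

```
# /etc/fstab entry (example device; adjust to your own partition)
/dev/hda1   /mnt/dos   vfat   defaults   0 0
```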
   
   4. I think my printer works (at least text does), but how do I print
   things (man pages)?
   
   A. xman will print the pages as postscript but you need to set up a
   print spooler using Ghostscript. A print spooler is just a logical
   printer name that accepts print requests, processes them with some
   filters and then feeds them to the printer of choice. Ghostscript will
   translate Postscript input into the printer command language for your
   printer. I keep forgetting where the Ghostscript FAQ (including
   download info) is at, but you can find it by searching on Yahoo.
   That's what I always do.
   
   The hard way to set up printers is to learn about configuring
   /etc/printcap. However, my RH4.2 system has a fairly decent printer
   configuration utility so I suspect 5.1 has an even better one. The bad
   news is I can't remember the program's name (it's in my fvwm2rc at
   home and I never type it by hand). Check the documentation that came
   with the CD. I know it's mentioned in there.
   
   Best of luck.
   
   Michael J. Hammel
     _________________________________________________________________
   
  Re: Help Wanted : newbie (2)
  
   Date: Tue, 01 Sep 1998 07:37:43 +0200
   From: "Anthony E. Greene", agreene@pobox.com
   
   From: Dennis Lambert, opk@worldnet.att.net
   I have grown fat and lazy with Win 98 and find myself looking for
   "Display Properties" and such. KDE (http://www.kde.org/) is supposed
   to be a more integrated desktop environment, and Gnome
   (http://www.gnome.org/) is coming along. I'm very familiar with C and
   such and am not afraid of hacking scripts or the like, but my problem
   is thus: Where is a (succinct) list of what gets run when, from where,
   and why. I'd love to tweak everything if only I could find it.
   
   Linux is a complex OS, so the list isn't succinct. There's a
   description of the boot process in the System Administrator's Guide.
   If you're new to Linux, I'd recommend you give the SAG a good browse.
   There's *lots* of useful information there. You should have an HTML
   copy installed in /usr/doc/LDP/sag.
   
   The Network Administrator's Guide (/usr/doc/LDP/nag) is also good to
   have, but the HOWTO's are better if you just need "cookbook" style
   docs. The HOWTO's are in /usr/doc/HOWTO. You should fire up Midnight
   Commander (mc) from the command line and take a look around /usr/doc.
   
   I have something called an "Ensoniq Audio PCI" sound card with "legacy
   emulation" I don't even know how to begin to get this thing working.
   What are the first steps in enabling hardware?
   
   There is a PCI-HOWTO and a Sound-HOWTO.
   
   Where do I get information on mounting drives (FAT 32 especially)
   
   In the Config-HOWTO or the archives of the Red Hat mailing lists.
   
   I think my printer works (at least text does), but how do I print
   things (man pages)
   
   Text and postscript are easy. Fortunately most things are convertible
   to postscript. In this case use:

        man -t CommandOrSubject | lpr

   This is covered in the man page.
   
   If you haven't joined any of the Red Hat mailing lists, you might
   consider doing so. Be warned though; they tend to be busy lists
   (http://www.redhat.com/support/).
   
   Welcome to Linux...
   
   Tony
     _________________________________________________________________
   
  Clearing the Screen (1)
  
   Date: Sat, 05 Sep 1998 11:56:53 -0700
   From: Anthony Christopher, anthonyc@blarg.net
   
   I have seen a lot of hints for restoring a trashed screen or window,
   but none of them mention the reset and clear commands. Are these
   commands deprecated, do they have unwanted side effects, or are they
   ineffective in certain situations?
   
   When I have cat'ed an executable, I usually just type: reset <ENTER>
   and let the garbage scroll off the screen.
   
   If, for some reason, I find the garbage characters annoying, I follow
   this command by typing: clear <ENTER>
   
   Anthony Christopher
     _________________________________________________________________
   
  Re: simultaneous versions of kernels
  
   Date: Fri, 4 Sep 1998 22:01:22 +0200
   From: Henner Eisen, eis@baty.hanse.de
   
   Just my 0.02 Euro:
   
   Most of the installation problems are caused by interaction with the
   linux distribution's default installation method. You can easily work
   around this by simply not installing your compiled kernel. Lilo and
   insmod support loading directly from the compilation directory.
   
   Just unpack your kernel in an arbitrary directory, say
   /home/kernel/linux-test, apply any patches and compile: make
   [x|menu|old]config; make dep; make zImage modules. But do neither make
   install nor make modules_install.
   
   Then add something like this to your /etc/lilo.conf:

# Linux bootable partition config begins
# test new (not installed) kernel just compiled in directory
# /home/kernel/linux-test.
image = /home/kernel/linux-test/arch/i386/boot/zImage
root = /dev/hda3
label = test
append= " MODPATH=/home/kernel/linux-test/modules/ "
# Linux bootable partition config ends
#

   and run lilo whenever you have recompiled your kernel image.
   
   When booting, choose "test" from the lilo prompt. The kernel will pass
   MODPATH to the environment of init, and any startup scripts that
   insmod kernel modules will fetch them automatically from the kernel
   compilation tree.
   
   (If you additionally want to insmod some modules by hand from a root
   shell, MODPATH might be unset. But scripts can still extract that
   information from /proc/cmdline).
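   Such a script might look like this minimal sketch (it parses a
   sample string here; a real script would substitute
   cmdline=$(cat /proc/cmdline)):

```shell
# Extract a MODPATH=... token from a kernel command line.
# A sample string stands in for /proc/cmdline so the logic can
# be shown anywhere; the path matches the lilo.conf example above.
cmdline='ro root=/dev/hda3 MODPATH=/home/kernel/linux-test/modules/'
modpath=$(echo "$cmdline" | tr ' ' '\n' | sed -n 's/^MODPATH=//p')
echo "MODPATH is: $modpath"
```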
   
   Henner
     _________________________________________________________________
   
  Question about your Linux Gazette post
  
   Date: Fri, 4 Sep 1998 10:14:47 -0600 (MDT)
   From: "Michael J. Hammel", mjhammel@fastlane.net
   
   In a previous message, mjsendzi@engmail.uwaterloo.ca says: is there an
   url for this program?
   
   No, not that I know of. A couple of people have asked this. It's part
   of the core set of files in my Red Hat 4.2 distribution. Units has
   been around so long, and is available on so many different Unix
   platforms, that I suspect most distributions have a copy of it
   somewhere. On my RH4.2 it's under /usr/bin.

mjhammel(ttyp2)$ type units
units is /usr/bin/units

mjhammel(ttyp0)$ units
501 units, 41 prefixes

You have: 3 miles
You want: kilometers
        * 4.828032
        / 0.20712373

   Michael J. Hammel
     _________________________________________________________________
   
  COBOL Compilers for Linux
  
   Date: Thu, 03 Sep 1998 22:54:19 -0500
   From: cbbrowne@hex.net
   
   Concerning the following, recently posted in Linux Gazette:
   
   I have a friend who is doing a refresher course in Cobol in a Unix
   environment. I have suggested that she run Linux, and pick up a cheap
   / shareware copy of a Cobol compiler for Linux from somewhere. Knowing
   absolutely nothing about either Linux or Cobol, am I dreaming, or is
   there a realistic alternative to the compilers I have seen retailing
   for ~$1,500 US? I'd really appreciate any help/advice anyone can
   offer.
   
   There are several possible COBOL options in the Linux realm; for
   details see:
   http://www.hex.net/~cbbrowne/languages07.html
   
   There's not anything yet that could be considered 100% viable outside
   of (rather expensive) commercial options; obviously these sorts of
   things don't happen without there being a population of people who are
   interested enough to be willing to invest the time necessary to
   implement something.
   
   cbbrowne@hex.net
     _________________________________________________________________
   
  Resetting the term (2)
  
   Date: Thu, 03 Sep 1998 16:44:25 -0700
   From: david, david@kalifornia.com
   
   You posted a program to reset your console should the text become
   garbled. I thought I would mention that most distributions, Slackware
   notably, come with such a program that does this and more.
   
   reset will clear your tty, restore sane tty settings, and perform
   general tty cleanups. You should find this little utility just about
   anywhere :)
   
   David
     _________________________________________________________________
   
  Re: Help Wanted : newbie (3)
  
   Date: Wed, 2 Sep 1998 22:46:15 +0200 (CEST)
   From: rsmith@xs4all.nl
   
   In answer to your questions in the September issue of the Linux
   Gazette:
   I recently purchased Red Hat 5.1 and got it running. Evidently I was
   lucky in that I have a fairly full FAT 32 Win 98 drive and kind of
   stumbled through the defrag / fips / boot to CD / repartition / full
   install with LILO process. Everything worked, but I'm a little
   nonplussed. A few topics I'd absolutely love to get feedback on...
   Turns out I have a lousy WinModem. I can see the feedback now, (Run it
   over with your car)
   
   Yep. Buy a *real* modem.
   
   I have grown fat and lazy with Win 98 and find myself looking for
   "Display Properties" and such. I'm very familiar with C and such and
   am not afraid of hacking scripts or the like, but my problem is thus:
   Where is a (succinct) list of what gets run when, from where, and why.
   I'd love to tweak everything if only I could find it.
   
   Daemons, boot time initialization: see the man page for `init'.
   There'll be an assortment of scripts in /etc/rc.d or /etc/init.d and
   /etc/rcX.d (where X = 0 to 6) that do your system's boot-time setup.
   
   For X, especially XFree86, you can fiddle with the XF86Config file,
   which should reside somewhere in /etc. Or if you have an X server
   running you can use `xvidtune'. The programs and window-manager
   started by the X server are usually in a file called xinitrc or
   xsession.
   
   I have something called an "Ensoniq Audio PCI" sound card with "legacy
   emulation" I don't even know how to begin to get this thing working.
   What are the first steps in enabling hardware?
   
   You'll probably need to compile a new kernel. The sound driver that
   comes with the kernel supports this card. Install your distribution's
   kernel source package, cd to /usr/src/linux and read the README.
   
   Where do I get information on mounting drives (FAT 32 especially)
   
   Read the manual for `mount' and `umount'. Make sure you have a kernel
   with (V)FAT support compiled in.
   
   I think my printer works (at least text does), but how do I print
   things (man pages)
   
   Use the lpr program. It is a print spooler. You might want to fiddle
   with /etc/printcap to enable your printer to print PostScript (via
   GhostScript).
   
   I'm not an idiot, not even a "dummy", but what is a good book to
   answer the basic questions? I have "Linux in a Nutshell" and it has a
   very good command reference and a few other things, but doesn't help
   in tweaking things.
   
   I haven't read many books on Linux, just *lots* of manpages and
   HOWTO's (in /usr/doc/HOWTO). Ask around in linux newsgroups.
   
   I don't really expect anyone to answer all of these concerns, but any
   little help would be greatly appreciated.
   
   Hope this helps... :-)
   
   Roland
     _________________________________________________________________
   
  2c tip -- more fun with pipes
  
   Date: Wed, 2 Sep 1998 11:59:49 -0400
   From: Larry Clapp, lclapp@iname.com
   
   After reading the "Un-tar as you download" 2-cent tip from
   scgmille@indiana.edu in issue 32, I thought you might like this, too.
   
   Say you have a program with a large initial startup time. After that,
   the program reads a line from a file, processes it, reads the next
   line, processes it, etc, until EOF. You would like to process a single
   line of data without suffering through the initial startup each time.
   Try this:

    mkfifo input_fifo
    rm input_file
    touch input_file
    tail -f input_file >> input_fifo &
    long_program input_fifo &

   When you want to feed it some data, say

    echo data1 data2 data3 >> input_file

   The tail will wake up, read the line, output it to the fifo (aka
   "named pipe"), the program will wake up, read the data from the pipe,
   process it, and go back to sleep.
   
   (You only have to do the mkfifo once; after that, it sticks around. On
   some systems (e.g. my Sun at work, where I came up with this), instead
   of "mkfifo filename", use "mknod filename p".)
   
   To shut things down, kill the tail. The program will get an EOF
   condition, and shut down normally.
   
   Of course, a better solution might be to rewrite the program to read
   from stdin, and then say

    tail -f input_file | long_program -

   but you can't always do that. Also, neither of these ideas will work
   if the program reads the whole file, and then processes each line from
   an internal list.
   
   -- Larry Clapp
     _________________________________________________________________
   
  2 cents tip: Un-tar as you download
  
   Date: Wed, 02 Sep 1998 03:46:20 -0700
   From: Ben Collver, collver@dnc.net
   
   Regarding:

 tail -f --bytes=1m file-being-downloaded.tar.gz | tar -zxv
 tail -f --bytes=1m file.tar.bz2 | bunzip2 - | tar -xv

   I've noticed that sometimes tail -f does not work reliably. An
   alternative if you have lynx is:

 lynx -source http://www.url.dum/file.tar.gz | tee file.tar.gz | tar zxm
 lynx -source ftp://ftp.url.dum/file.tar.bz2 | tee file.tar.bz2 | bunzip2 - | tar xm

   Ben
     _________________________________________________________________
   
  Re: Help Wanted: Looking for an Xwin Server software that runs under Win95/NT
  
   Date: Wed, 02 Sep 1998 11:31:08 +0100 (IST)
   From: Caolan McNamara, Caolan.McNamara@ul.ie
   
   From: Mark Inder, mark@tts.co.nz
   We use a Red Hat 4.2 machine in our office as a communications server.
   This is running well with the facility of telnet connections for
   maintenance, diald for PPP dial up - internet and email, and uucp for
   incoming mail. I would like to run an X server on my Windows PC to be
   able to use X client software on the Linux PC over the local Ethernet.
   Does anyone know of a shareware or freeware version which is
   available?
   
   Try the list at http://www.rahul.net/kenton/xsites.html#XMicrosoft
   
   This one, for example, is free:
   http://www.microimages.com/www/html/freestuf/mix/
   
   Caolan
     _________________________________________________________________
   
  Re: Help wanted for a (Cheap) COBOL compiler for Linux
  
   Date: Wed, 02 Sep 1998 11:27:20 +0100 (IST)
   From: Caolan McNamara, Caolan.McNamara@ul.ie
   
   From: Andrew Gates, andrewga@fcf.co.nz
   I have a friend who is doing a refresher course in Cobol in a Unix
   environment. I have suggested that she run Linux, and pick up a cheap
   / shareware copy of a Cobol compiler for Linux from somewhere. Knowing
   absolutely nothing about either Linux or Cobol, am I dreaming, or is
   there a realistic alternative to the compilers I have seen retailing
   for $1,500 US? I'd really appreciate any help/advice anyone can offer.
   
   I haven't ever used Cobol, but at
   http://www.deskware.com/cobol/cobol.htm, there's a Cobol for Linux
   under development for download (for free I believe). Might be good to
   check it out, and to find out if it's of any use yet.
   
   Caolan
     _________________________________________________________________
   
  Re: Clearing the Screen (3)
  
   Date: Tue, 01 Sep 1998 19:00:31 -0700
   From: "Mark J. Ramos", mjramos@sprintparanet.com
   
   In the September issue you described some C code that can clear the
   screen when it gets screwed up from binary dumps to the terminal.
   There is a much easier way, and all it requires is the keyboard ;)
   Simply type "echo control-v escape-c" and hit enter. The
   "control-v" allows you to type in the "escape-c" literally.
   
   This has worked much better for me than some other methods such as
   "reset" which comes with your favorite Linux distribution but like a
   compiler it isn't always there. This key sequence is *always*
   available on an ANSI terminal.
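   The same sequence can also be sent from a script, without the
   interactive control-v trick; ESC-c is the ANSI/VT100 "reset to
   initial state" (RIS) sequence:

```shell
# Send ESC c (full terminal reset) non-interactively.
# printf interprets \033 (octal for ESC) portably, unlike a
# plain echo on some shells.
printf '\033c'
```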
   
   Mark Ramos
     _________________________________________________________________
   
  Unix Tip
  
   Date: Tue, 1 Sep 1998 20:01:31 -0400
   From: Ian C. Blenk, eicblenke@Neurotic.Intermedia.Com
   
   As an addendum to Allan Peda's Tip in Linux Gazette issue 32, here is
   a quick tip that applies to most DEC emulators (vtXXX):

        echo ^V^O

   That's echo, control-V, control-O. The control-V portion escapes the
   control-O (terminal reset) from your shell. The echo just puts the
   control-O right back to your terminal emulator/dumb terminal (works
   great on true DEC terms too! :)
   
   This works for most Unix flavors. No code. Easy to remember.
   
   Ian Blenke
     _________________________________________________________________
   
  rc.local Tip
  
    Date: Tue, 1 Sep 1998 14:24:07 -0700 (PDT)
    From: Creede Lambard, fearless@moosylvania.net
   
   I've been reading the Linux Gazette for a couple of months now and I
   think it's great, especially the tips.
   
   Here's one for you to consider that was inspired by Dennis Lambert's
   "Help Wanted" letter in issue #32. I hope it doesn't duplicate
   something you've already published.
   
   To those of us used to the warm, fuzzy DOS world of CONFIG.SYS and
   AUTOEXEC.BAT, the complexities of the /etc/rc.d startup hierarchy can
   be nothing short of intimidating. Well, I decided to make it a little
   less so. I started by putting these lines at the top of
   /etc/rc.d/rc.local:

echo "==============================================="
echo " "
echo "Now running rc.local"
echo " "
echo "==============================================="

   Now, when I start up Linux I can tell just when my local configuration
   starts to run, and if I'm having problems I can see whether they
   happen before or after rc.local starts. You can learn other things,
   too -- I learned that rc.sysinit gets run on startup and shutdown!
   
   Unfortunately, especially if you have a fast system, you can miss
   error messages as they scroll by and dmesg doesn't always echo the
   information you need to solve a problem. I was seeing error messages
   in rc.local, but I couldn't tell what they were because they went by
   too fast. So, I wrote a Perl one-liner:

perl -e 'print "Press ENTER to continue: "; $x = <STDIN>;'

   This prints a prompt, then waits for you to press ENTER before it
   continues. (Yes, there's probably an easier way to do this with bash
   or some utility, but I already know Perl and I'm still learning bash.
   [grin]) By putting this at the bottom of rc.sysinit I made the boot-up
   sequence stop so I could see the error message, and of course once I
   saw it I knew exactly how to fix it. I comment out the line unless I
   need it, of course -- if everything is working right I want Linux to
   take me straight to the login prompt!
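   For the record, the easier shell way the author suspects exists is
   just a prompt plus read (a sketch; it drops into rc.sysinit the same
   way as the Perl line):

```shell
# Shell equivalent of the Perl pause one-liner: print a prompt,
# then block until ENTER is pressed.
printf 'Press ENTER to continue: '
read dummy
```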
   
   Here's hoping this helps someone.
   
   Creede Lambard
     _________________________________________________________________
   
  Yet another method of resetting scrambled terminal after dumping binary data.
  
   Date: Mon, 14 Sep 1998 03:55:54 +0000
   From: Sang Kang, sang@mocha.dyn.ml.org
   
   Perhaps this is the simplest solution:

        echo -e '\017'

   That's it.
   
   Sang Woo Kang
     _________________________________________________________________
   
  Rick's quick and dirty screen-saver
  
   Date: Wed, 16 Sep 1998 09:10:04 -0400
   From: "R. Smith", riter311@gte.net
   
   Here's a shell script which cycles through jpgs:

#!/bin/sh

# showjpg Rick's quick and dirty screen saver.

# Run from an xterm. Control-C should get you out. Or run in
# background with '&' and use kill.

# forever
while [ 1 ]; do
# The path to your jpgs
  for file in /usr/local/images/jpg/*.jpg
  do
     xsetbg $file
     sleep 20
  done
done

   xsetbg is from the xloadimage package. It's the same as:

xloadimage -onroot -quiet

   Sleep is in seconds. Use convert from the ImageMagick package to
   convert .gif or .bmp to .jpg.
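   A batch version of that conversion can be sketched as below; the
   echo makes it a dry run, so the loop can be shown without
   ImageMagick installed (remove the echo to actually convert; the
   filenames are hypothetical):

```shell
# Dry-run sketch: turn each .gif into a .jpg with ImageMagick's
# convert. "echo" only prints the commands; delete it to run them.
# ${f%.gif} strips the .gif suffix from the filename.
for f in photo1.gif photo2.gif; do
    echo convert "$f" "${f%.gif}.jpg"
done
```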
   
   Rick
     _________________________________________________________________
   
  MS Word & Netscape
  
   Date: Tue, 15 Sep 1998 07:58:56 -0400
   From: Vladislav Malyshkin, mal@mail1.nai.net
   
   I wish to contribute 2 cents story.
   
   One-click view of MSWord files in Netscape.
   
    It is a sad fact that some people use MSWord to exchange
    documents. When one gets such a file in the mail on Linux, (s)he can
    use MSWordView, but this requires:

 Save file
 Convert from .doc to .html
 Start Netscape to view it

    This 2 cents tip is about how to reconfigure Netscape in order to view
   MSWord documents in one click.
   
   To do this:
     * Download and install MSWordView from
       http://www.csn.ul.ie/~caolan/docs/MSWordView.html. Usually it
       takes just ./configure ; make ; make install
     * Edit file .mailcap in your home directory (create it if it does
       not exist). Add one line into this file:

application/msword; ns="%s"\; nf="${ns}".html\; mswordview "${ns}" >"${nf}"\;
 netscape -remote 'openURL(file:'"${nf}"')' \; sleep 2 \; rm "${nf}"

   Vladislav
     _________________________________________________________________
   
  Pulling Files from NT
  
   Date: Mon, 14 Sep 1998 23:29:10 +0000
   From: Michael Burns, michaelburns@earthlink.net
   
   Nothing groundbreaking here but, being a newbie to Linux and Samba I
   was having a difficult time getting Samba set up and needed to get
   some large files from an NT server to a Linux machine. I do not have
   any NFS programs for NT but do have a Web/FTP server running on NT so
   my temporary but quick solution was to put the files I needed into my
   NT server's FTP directory and download them from there.
   
   Michael Burns
     _________________________________________________________________
   
  Re: The wisdom of US West...
  
   Date: Thu, 17 Sep 1998 19:30:16 -0600 (MDT)
   From: "Michael J. Hammel", mjhammel@fastlane.net
   
   Michael J. Hammel wrote: I haven't checked, but doesn't IPv6 have 6
   dot-values? And are they larger than 8 bit values? Just curious. I
   haven't heard much about IPv6 in awhile and wondered how we haven't
   run out of IP space yet without it.
   
   From: Jay Kominek, jay.kominek@colorado.edu
   IPv6 addresses take the form of
   'FEDC:BA98:7654:3210:FEDC:BA98:7654:3210' 8 16-bit hexadecimal chunks.
   All kinds of fun. Luckily, if you have a string of zeros in your
   address, you can do something like 1080::8:800:200C:417A
   
   To save yourself some typing.
   
   I hope I'm not running some place's DNS when IPv6 becomes popularized.
   
    Relevant RFCs:
     * 1883 Internet Protocol, Version 6 (IPv6) Specification. S. Deering
       & R. Hinden. December 1995. (Format: TXT=82089 bytes) (Status:
       PROPOSED STANDARD)
     * 1884 IP Version 6 Addressing Architecture. R. Hinden & S. Deering,
       Editors. December 1995. (Format: TXT=37860 bytes) (Obsoleted by
       RFC2373) (Status: PROPOSED STANDARD)
     * 1886 DNS Extensions to support IP version 6. S. Thomson & C.
       Huitema. December 1995. (Format: TXT=6424 bytes) (Status: PROPOSED
       STANDARD)
     _________________________________________________________________
   
  RE: Clearing the Screen (4)
  
   Date: Wed, 23 Sep 1998 08:44:10 -0600
   From: Robert Ferney, rferney@spillman.com
   
   From: Allan Peda, allan@interport.net
   A few days ago a classmate "accidentally" cat'ed a file to the screen.
    He asked me what he could do to reset his confused vt100, as
   clear wasn't sufficient.
   
    reset works very well for this. The command reset will effectively
    reset the screen by sending it the proper escape sequence. Since
    reset looks up the escape sequence in the terminfo database, it
    works on just about any terminal. If this fails, sometimes a

$ stty sane

   will do the trick.
     _________________________________________________________________
   
  Re: Keeping track of your config files
  
   Date: Mon, 21 Sep 1998 22:30:58 +0200
   From: Andreas
   
   Your idea for keeping track of those files by linking them to a
   central directory is good.
   
   Another idea I am using frequently is keeping track of the
   modifications by either employing SCCS or RCS (or whatever derived
   utility available).
   
   Combining both ideas means for SCCS based systems: Use e.g.

    $ cd /
    $ sccs -d/root/SCCS create etc/inittab

    If you share a lot of these files across several systems, but there
    are some files that may differ, you will probably want to type

     $ sccs -d/root/SCCS -p`hostname` create etc/lilo.conf

    which results in the following tree:

/root
|-/SCCS
|    |-etc
|    |     |-s.inittab
|    |     |-apollon
|    |     |     |-s.lilo.conf
|    |     |-jupiter
|    |     |     |-s.lilo.conf
    ...

    For daily use I recommend keeping all the files 'checked-out', i.e.
    'sccs edit' always after 'sccs create' and otherwise 'sccs deledit'.
    The above commands can also be abbreviated with aliases.
    
    For admins who use RCS I recommend 'cvs', but that means a bit more
    work....
   
   Andreas
     _________________________________________________________________
   
     _________________________________________________________________
   
                                 News Bytes
                                      
                                 Contents:
                                      
     * News in General
     * Software Announcements
     _________________________________________________________________
   
                              News in General
     _________________________________________________________________
   
  November Linux Journal
  
   The November issue of Linux Journal will be hitting the newsstands
   October 11. The focus of this issue is Web Programming and we have
   articles on FastCGI, HTMLgen, XML, SGML and Python, as well as an
   interview with Guido van Rossum, the creator of Python. Check out the
   Table of Contents at http://www.linuxjournal.com/issue55/index.html.
   To subscribe to Linux Journal, go to
   http://www.linuxjournal.com/ljsubsorder.html.
     _________________________________________________________________
   
   [LINK]
   
  Links2Go Key Resource Award
  
   Date: Wed, 22 Jul 1998 18:38:48 -0400
   Congratulations! Your page: http://www.linuxgazette.com/ has been
   selected to receive a Links2Go Key Resource award in the Linux topic.
   
   The Links2Go Key Resource award is both exclusive and objective. Fewer
   than one page in one thousand will ever be selected for inclusion.
   Further, unlike most awards that rely on the subjective opinion of
   "experts," many of whom have only looked at tens or hundreds of
   thousands of pages in bestowing their awards, the Links2Go Key
   Resource award is completely objective and is based on an analysis of
   millions of web pages. During the course of our analysis, we identify
   which links are most representative of each of the thousands of topics
   in Links2Go, based on how actual page authors, like yourself, index
   and organize links on their pages.
   
   For more information:
   Links2Go Awards, awards@links2go.com
     _________________________________________________________________
   
  X11.ORG goes public
  
   Date: Thu, 10 Sep 1998 00:31:27 -0400 (EDT)
   One of the main purposes of X11.ORG is to provide the X community with
   up-to-date information regarding "anything and everything X". By
   making this information easily available, you don't have to work quite
   as hard to keep up with the fast-moving pace of X developments. As it
   was imagined in the development process, we will attempt to be a
   slashdot.org of sorts, for the X community, focusing on those topics
   directly or closely related to X. X11.org plans to cover the setup and
   configuration information for the majority of WindowManagers, Desktop
   Environments (eg. CDE, GNOME, KDE), and X Servers.
   
   http://www.X11.org/
   
   For more information:
   Voltaire, voltaire@shell.flinet.com
     _________________________________________________________________
   
  7th Python Conference
  
   Date: Wed, 16 Sep 1998 17:18:14 -0400 (EDT)
   Call for Participation and Advance Program, 7th International Python
   Conference:
   http://www.foretec.com/python/workshops/1998-11/
   
   South Shore Harbour Resort
   Houston, Texas
   November 10-13, 1998
   Sponsored by CNRI and the PSA
   
   The Python Conference brings together a broad range of users, vendors,
   researchers, and developers from the Python community. The conference
   is the premier opportunity to meet other Python programmers, share
   information, and learn about the latest happenings -- including an
   update on the future of Python from its creator, Guido van Rossum.
   
   The program also includes a day of tutorials, two days of papers and
   invited talks, and Developers' Day. The conference program has been
   expanded this year to include a session for demos and posters to
   highlight work that is more interesting to see and interact with.
   
   For registration information, visit:
   http://www.foretec.com/python/workshops/1998-11/registration.html
   
   INVITED SPEAKERS
   
   Eric Raymond, "Homesteading the Noosphere." Custom, ego, and property
   in the open source community.
   
   David Beazley, "Commodity Supercomputing with Python." Python on
   supercomputing systems, and its role in the 1998 Gordon Bell Prize
   Competition, where a Python-driven application achieved 10 Gflops
   sustained performance on a Linux cluster.
   
   Jim Hugunin, "JPython." Recent and coming events in the happy
   integration of Python and Java.
   
    Guido van Rossum, "Python -- the next seven years." Recent and coming
    events in the development of the Python language.
   
   For more information:
   Jeremy Hylton, jeremy@cnri.reston.va.us
     _________________________________________________________________
   
  LISA '98, Systems Administration Conference
  
   Date: Mon, 14 Sep 1998 16:04:13 -0800
   The Immediately Practical is the Emphasis at Largest Conference
   Exclusively for System Administrators
   
   LISA '98, the 12th Systems Administration Conference, is co-sponsored
   by SAGE, the premier professional society for system administrators,
   and the USENIX Association. It will take place in Boston at the
   Marriott Copley Place Hotel, December 6-11, 1998. The largest
   conference exclusively for system administrators, LISA is expected to
   attract over 2000 attendees.
   
   Full Technical Program: http://www.usenix.org/events/lisa98/
   
   For more information:
   http://www.usenix.org/ Dona Ternai, dona@usenix.org
     _________________________________________________________________
   
  Linux Links
  
   The Linux Software Encyclopedia:
   http://stommel.tamu.edu/~baum/linuxlist/linuxlist/linuxlist.html
   
   COBOL Center: http://www.infogoal.com/cbd/cbdhome.htm
   
   Deskware COBOL: http://www.deskware.com/cobol/cobol.htm
   
   Collection of Free Resources:
   http://members.tripod.com/~net_tools/index.html
   
   Linux Preview (Spanish): http://linux.ncc.org.ve
   
   Crystal Space 3D Engine: http://crystal.linuxgames.com
   
   GNOME FAQ: http://www.mindspring.com./~tlewis/gnome/faq/v1.0/FAQ.html
   
   Linux Links: http://www.linuxlinks.com/
   
   DOSEMU.ORG: http://www.dosemu.org/
   
   Spanish Linux Index:
   http://www.croftj.net/~barreiro/public/indice.html
   
   Linux soundapps Webpage:
   http://www.bright.net/~dlphilp/linux_soundapps.html
   
   SciTech Display Doctor for Linux:
   http://www.scitechsoft.com/sdd_linux.html
     _________________________________________________________________
   
  K-12 and Linux
  
   Date: Tue, 8 Sep 1998 08:10:42 GMT
   A mailing list has been formed where people with Linux expertise can
   support K-12 people who are trying to use Linux in schools. To join,
   send e-mail to majordomo@lrw.net and in the body of the letter, enter:
   subscribe lxk12
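    With the standard mail(1) client, for example, the request can be sent
    in one line (mail(1) is just one possible mailer; any program that can
    send a plain-text body to majordomo@lrw.net will do):

```shell
# Build the one-line body majordomo expects and show it; the
# commented-out line is what actually sends it (uncomment to subscribe).
body="subscribe lxk12"
echo "$body"
# echo "$body" | mail majordomo@lrw.net
```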
   
   For more information: Randy Wright, rw26@nospam.lrw.net
     _________________________________________________________________
   
  Red Hat Hands Applixware back to Applix, Inc.
  
   Date: Thu, 17 Sep 1998 12:03:00 GMT
   September 14, 1998--In order to focus exclusively on developing and
   marketing the Open Source Red Hat Linux operating system, Red Hat
   Software, Inc. and Applix Inc. today announced that Applix Inc will
   have all future responsibility for the Applixware Office Suite,
   including Sales, Marketing, Product Support, and Quality Assurance.
   
   Applixware products previously purchased directly from Red Hat
   Software will still receive the full technical assistance and support
   of Red Hat Software.
   
   The announcement of the new relationship coincides with the release of
   Applixware 4.4.1 for Linux. This update of Applixware features all the
   standard components of the Applixware Office Suite, as well as Applix
   Data, a new module offering point and click access to information
   stored in relational databases, and Applix Builder, Applix's
   object-oriented, visual, rapid application development tool.
   
   The Applixware 4.4.1 Office Suite is available directly from Applix,
   Inc. for $99. For those wishing to upgrade to Applixware 4.4.1, Applix
    is offering a $79 upgrade.
   
   For more information: http://www.applix.com/
     _________________________________________________________________
   
  Intel, Netscape invest in Linux
  
   Date: Tue, 29 Sep 1998 11:43:45 -0700
   Red Hat Software has announced that Intel, Netscape and two VC firms
   are taking equity positions in the company which will enable it to
   create the Enterprise Computing Division. This division will ready
   Linux for enterprise-wide applications, enabling Linux, the most open,
   robust and carefully scrutinized operating system in the world, to
   tackle the likes of Windows NT.
   
   For more information:
   Full Press Release
     _________________________________________________________________
   
  Red Hat News Flash
  
   Date: Mon, 28 Sep 1998 09:30:03 -0700 (PDT)
   It has recently come to the attention of Red Hat Software that there
   are significant security holes in CDE. All users are affected, both
   those who purchased CDE Client and those who purchased CDE Developer
   that runs on Red Hat Linux 4.0 up to 5.1.
   
   For more information:
   Full Press Release
     _________________________________________________________________
   
  Canadian National Installfest a Success
  
   Date: Sun, 27 Sep 1998 19:58:06 PDT
    The Installfest referred to in last month's News Bytes came off an
    outstanding success! Details at http://www.linux.ca/installfest.html.
    A world-wide installfest in the offing?
   
   For more information:
   Dave Stevens, davestevens@hotmail.com
     _________________________________________________________________
   
                           Software Announcements
     _________________________________________________________________
   
  Linux/Personal Productivity Tools
  
   LOS ALTOS HILLS, CA (Sept. 8, 1998) -- Personal Productivity Tools,
   Inc. (PPT) today announced that version 3.0 of its EtherPage (tm)
   client/server-to-pager messaging system is now running under Linux,
   the UNIX clone operating system.
   
   EtherPage delivers messages rapidly and efficiently from computer
   networks to wireless devices, including alphanumeric and 2-way pagers
   and digital cellular phones. In addition to Linux, EtherPage now runs
   under a broad range of operating systems including Windows NT and
   UNIX.
   
   For more information:
   Personal Productivity Tools, Inc., http://www.ppt.com/
     _________________________________________________________________
   
  LinkScan 5.0 - Breakthroughs in Performance, Scalability & Workflow
  
   San Jose, CA, Sept. 10, 1998. Electronic Software Publishing
   Corporation (Elsop) released LinkScan 5.0 today. Major improvements
   have been made to LinkScan 5.0 to make it serve the needs of
   workgroups throughout the enterprise and facilitate the workflow
   between content managers, developers and systems administrators. These
   improvements are the result of radical design changes that make
   version 5.0 essentially a new product compared to earlier versions.
   This effort was energized by the needs of organizations with very
   large intranet websites and public websites.
   
   LinkScan operates on all Unix Servers (including AIX, BSD, Digital
   Unix, HP/UX, IRIX, Linux, and SunOS/Solaris flavors) and Windows NT
   5.0 servers with Perl 5. Free fully functional evaluation copies of
   LinkScan 5.0 may be downloaded (less than 300 Kbytes) from the
   company's website at: http://www.elsop.com/
   
   For more information:
   Kenneth R. Churilla, ken@elsop.com
   Electronic Software Publishing Corporation
     _________________________________________________________________
   
  NetBeans Releases Last Beta Version of Java IDE, Free Download Continues
  
   Prague, Czech Republic, September 14, 1998 - NetBeans, Inc. today
   announced the release of the Beta 3 version of NetBeans Developer 2.0.
   It is the last beta prior to the full release, which is due near the
   beginning of Q4. Beta 3 is available for free download from the
   NetBeans web site, http://www.netbeans.com.
   
    NetBeans IDE is a full-featured Java IDE based completely on
    Swing/JFC. NetBeans is both written in Java and generates Java code.
    It is an object-oriented, visual programming environment based on
    JavaBeans components, without relying on any third-party components.
    The IDE is easily extensible, and it runs on any platform that
    supports JDK 1.1.x, including Win95/98/NT, Apple Mac, Linux, OS/2,
    Solaris, HP-UX, Irix, and others. Since the June release of Beta 1,
    over 18,000 new registered users have downloaded the tool.
   
   For more information:
   NetBeans, Inc., http://www.netbeans.com, info@netbeans.com
   Product Overview, http://www.netbeans.com/overview.html
     _________________________________________________________________
   
  NetBeans Bundles Cloudscape with Leading Programming Environment
  
   Oakland, CA and Prague, Czech Republic, September 21, 1998-NetBeans,
   Inc. and Cloudscape(TM) Inc. announced today that NetBeans, Inc. will
   bundle Cloudscape's embeddable Java-based object relational database
   with upcoming releases of the NetBeans IDE. Founded on the principle
   of Java innovation, NetBeans is the first company to offer an all-Java
   IDE based on Swing/JFC. Cloudscape offers the industry's first
   embeddable Java database, designed to be invisibly embedded within
   applications as a local data manager.
   
   The Cloudscape database will be bundled with NetBeans Developer 2.0,
   allowing users of NetBeans Developer 2.0 to create Java applications
   that integrate a fully functional, yet lightweight object-relational
   database manager. The integrated product is expected to be available
   in November 1998. Cloudscape ships the only 100% Pure Java(TM) SQL
   database manager designed to be invisibly embedded within applications
   as a local data manager.
   
    For more information:
    NetBeans, Inc., http://www.netbeans.com, info@netbeans.com,
    or call 011 4202 8300 7322.
    Cloudscape, Inc., http://www.cloudscape.com/, info@cloudscape.com
     _________________________________________________________________
   
  Prolifics to be launched for Linux!
  
   Mon, 21 Sep 1998 00:34:30 +0200
   Based upon market interest and customer feedback, Prolifics has
   decided to offer a version of Prolifics on Linux. Linux offers the
   development community a strong platform choice at very modest prices.
   We feel that Prolifics, based on industry standards such as COM and
   Java, can offer this community a unique, powerful and flexible tool
   for building cross-platform database applications. Application Servers
   for the Web will be provided to process business logic on the Linux
   servers and deploy the presentation layer on a thin client Web
   Browser. These applications can be deployed for character-based, GUI
   and Web environments.
   
   The Linux platform will first be made available with Prolifics 4
   Standard. Prolifics 4 Standard is our upcoming 2-tier product release
   due out 4Q 1998. Look for a customer letter telling you all about it
   and more this week or next.
   
   For more information:
   Prolifics, Devi Gupta, devi@prolifics.com
     _________________________________________________________________
   
  IGEL
  
   Palmer, PA - September 7, 1998 - IGEL LLC today announced the
   availability of Etherminal J, a Thin Client desktop device. The first
   variant has been exhibited at Thinergy '98, the first global
   conference on thin-client/server computing held in Orlando, Sept. 1-3,
   1998.
   
   Etherminal J, based on IGEL's Flash Linux Technology, is the only thin
   client device incorporating Netscape Communicator Version 4.05, and a
   complete set of UNIX connectivity tools, locally in its own Flash
   Memory. Storing and running these software modules locally keeps
    network bandwidth requirements at a minimum. IGEL's Flash Linux is a
    compressed, UNIX-compatible operating system that runs from flash
    memory. It is a POSIX-conformant, multi-threading, multi-user
    operating system. Based on the popular Linux kernel, it offers the
    largest number of available device drivers and applications, and it
    supports the Internet and Java. IGEL tailored this OS to support
    defined thin-client hardware, and developed a flash memory driver
    technology to compress the OS, all accompanying emulators, an X11R6 X
    server, thin clients for multi-user Windows NT, and Netscape
    Communicator into 12MB of "Disk-on-Chip" flash memory. IGEL's BIOS
    extensions allow this compressed Flash Linux to be booted directly.
    At run time, needed OS parts, emulators, thin clients, and Netscape
    Communicator are decompressed on demand.
   
   For more information:
   IGEL*USA, http://www.igelusa.com/
   H. Knobloch, hans@igelusa.com
     _________________________________________________________________
   
  Linux Office Suite 99 from SuSE
  
   OAKLAND, Calif.--(BUSINESS WIRE)--Sept. 24, 1998--S.u.S.E., Inc. today
   announced the release of Linux Office Suite 99 -- a comprehensive
   software package that combines the latest in Linux technology with
   some of the most powerful, user-friendly applications on the market.
   
   S.u.S.E.'s Linux Office Suite 99 includes a spreadsheet, word
   processor, presentation graphics, database, fax program, and many
   other critical business applications.
   
   Linux Office Suite 99 comes with the latest version of Applixware
   4.4.1, which includes Applix Words, Spreadsheets, Graphics, Presents,
   and HTML Author, as well as Applix Data and Applix Builder.
   Applixware's latest release delivers a new filtering framework that
   has been optimized for document interchange with Microsoft Office 97.
   
   In addition, Linux Office Suite 99 integrates Applixware with the
   powerful ADABAS D 10.0 database system, enabling users to import data
   from the ADABAS D database into Applix Spreadsheets. Linux Office
   Suite 99 also contains the KDE and GNOME graphical desktops, S.u.S.E.
   fax, the personal edition of the backup utility ARKEIA 4.0, the
   popular GIMP graphics program, and many other features.
   
   Linux Office Suite is compatible with S.u.S.E., Red Hat, Caldera, and
   other popular versions of Linux. Users who need to install Linux for
   the first time can do so quickly and easily with the base system of
   S.u.S.E. Linux 5.3 that is included with the Office Suite.
   
   For more information:
   S.u.S.E., http://www.suse.com/
     _________________________________________________________________
   
  Xtoolwait 1.2
  
   Date: Tue, 8 Sep 1998 07:54:58 GMT
    One and a half years have gone by without a single bug report, so it's
    time to release a new version of Xtoolwait.
   
   This utility notably decreases the startup time of your X sessions,
   provided that you start a number of X clients automatically during the
   X session startup. Most people, for instance, start X clients like
   xterm, xclock, xconsole and xosview from their .xinitrc,
   .openwin-init, .xtoolplaces or .xsession file.
   
    These X clients are started simultaneously (in the background), which
    puts a high load on the X server and the OS.
   
   Xtoolwait solves this problem by starting one X client in the
   background, waiting until it has mapped a window and then exiting.
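    A minimal .xinitrc sketch using xtoolwait (the clients and window
    manager below are illustrative, not prescribed by the package):

```shell
# Each client is launched through xtoolwait, which returns as soon
# as the client maps its first window, so startup is serialized
# instead of forking everything at once.
xtoolwait xterm
xtoolwait xclock
xtoolwait xconsole
xtoolwait xosview
# Finally, run the window manager in the foreground.
exec fvwm
```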
   
   Download Xtoolwait from this page
   http://www.hacom.nl/~richard/software/xtoolwait.html
   
   For more information:
   Richard Huveneers, richard@hekkihek.hacom.nl
     _________________________________________________________________
   
  Fileman V1.1 - X-window based File Manager
  
   Date: Tue, 8 Sep 1998 08:32:09 GMT
    FileMan, an X Window based file manager offering a large number of
    features along with great configurability and flexibility, is now
    available as version 1.1.
    
    Version 1.1 offers improved performance and many bug fixes over
    earlier releases.
    
    Some features are not yet fully complete, but FileMan is already very
    usable and contains enough features to manage a Linux environment.
   
   For more information:
   http://www.bongo.demon.co.uk/page6.html
   Simon Edwards, FileMan Developer, filem@bongo.demon.co.uk
     _________________________________________________________________
   
  ppdd - encrypted filesystem - kernel patch and support progs.
  
   Date: Tue, 8 Sep 1998 08:38:12 GMT
   ppdd is an advanced encrypted file system for i386 Linux only.
   
    ppdd is used in a similar way to the loop device and offers simplicity
    and speed plus full-strength encryption (128 bit). The design takes
    into consideration the fact that data on disk has a long lifetime and
    that an attacker may have the matching plaintext for much of the
    ciphertext. A combination of master/working pass phrases offers
    enhanced security for backup copies. Current status is BETA, and
    comments on the implementation and underlying cryptography are most
    welcome.
   
    It consists of a kernel patch plus support programs and is intended
    for users with enough knowledge to compile the kernel, set up LILO,
    partition disks, etc. It is not yet for absolute beginners or
    "non-technical" users.
   
   Available from: http://pweb.de.uu.net/flexsys.mtk
   
   Package is ppdd-0.4.tgz, PGP signature is also available from same
   URL.
   
   For more information:
   Allan Latham, alatham@flexsys-group.com
     _________________________________________________________________
   
  bzip2-0.9.0, program and library for data compression
  
   Date: Tue, 8 Sep 1998 08:47:31 GMT
   bzip2-0.9.0 is a high-quality, portable, open-source, lossless data
   compressor, based on the Burrows-Wheeler transform.
   
   Source code, binaries and further details, are available from
   http://www.muraroa.demon.co.uk
   
   and also from the mirror site
   http://www.digistar.com/bzip2/index.html
   
   bzip2-0.9.0 is fully compatible with the previous version,
   bzip2-0.1pl2. In particular, the .bz2 file format is unchanged.
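    Basic command-line usage, assuming the bzip2 binary is on your PATH,
    looks like this:

```shell
# Create a small file, compress it (bzip2 replaces demo.txt with
# demo.txt.bz2), then decompress to stdout and clean up.
printf 'hello, bzip2\n' > demo.txt
bzip2 demo.txt
bzip2 -d -c demo.txt.bz2
rm -f demo.txt demo.txt.bz2
```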
   
   For more information:
    Julian Seward, Julian_Seward@muraroa.demon.co.uk
      _________________________________________________________________
    
   Xterminal 0.4 - Object Oriented User Interface
   
   Date: Tue, 8 Sep 1998 08:45:51 GMT
   Xterminal is an Object Oriented User Interface with a client-server
   architecture. The main purpose is a friendly interface for the UNIX
   operating systems. It is designed to be used to build text-based
   applications in C++.
   
    It consists of a complete object-oriented library including multiple,
    resizeable, overlapping windows, pull-down menus, dialog boxes,
    buttons, scroll bars, input lines, check boxes, radio buttons, etc.
    Mouse support, advanced object management, event handling, and
    communication between objects are provided too, together with a
    complete programmer's manual.
   
   Xterminal is available for download from:
   ftp://sunsite.unc.edu/pub/Linux/libs/ui/Xterminal-0.4.tar.gz
   http://www.angelfire.com/sc/Xterminal/download.html
   
    For more information:
   http://www.angelfire.com/sc/Xterminal
   Dragos Acostachioaie, dragos@iname.com
     _________________________________________________________________
   
  connect v1.0alpha - tool to ease the sharing of a PPP link
  
   Date: Tue, 8 Sep 1998 08:50:39 GMT
    Here is the first ALPHA release of the connect package (v1.0a).
    
    The connect package is a client-server program designed to ease the
    sharing of a PPP link to the Internet over a small network.
    
    By running a tiny daemon (connectd) that launches the PPP link when
    asked to and keeps it up as long as needed, connect lets you control
    your link.
   
   As you can talk to the daemon with a command-line or a Java applet,
   access is easy from a unix host or a browser running on a Win95/NT
   workstation.
   
   connect can be freely downloaded from its home page, see
   http://www.caesium.fr/freeware/connect/index.html
   
   For more information:
   Nicolas Chauvat, nico@caesium.fr
     _________________________________________________________________
   
  PalmPython 0.5.2 - PalmPilot sync/database library for Python
  
   Date: Thu, 10 Sep 1998 10:22:50 GMT
   I am pleased to announce version 0.5.2 of PalmPython, a conduit
   programming kit which enables desktop applications to access
   PalmPilots and their data. PalmPython is available at the following
   URL:
   
   http://www.io.com/~rob/cq/palmpython/
   
   PalmPython requires the pilot-link library, which can be found at
   ftp://ryeham.ee.ryerson.ca/pub/PalmOS/
   
   For more information: Rob Tillotson, robt@debian.org
     _________________________________________________________________
   
  C++ library wxWindows/Gtk 1.93 and GUI builder
  
    Date: Thu, 10 Sep 1998 09:49:21 GMT
    A new version of the GTK+ port of the cross-platform library
    wxWindows has been released.
   
    To our knowledge, wxWindows is the only cross-platform library
    available for creating native Windows and Unix/GTK+ applications.
    Although it is not its primary goal, wxWindows should help make the
    transition from Windows to Linux much smoother, not least for small
    companies.
   
   Apart from being platform independent, wxWindows is arguably the most
   complete free class library around offering features from database
   connectivity to configuration management to internationalization to a
   multiple document interface and support for printing using Postscript
   on Unix. We also provide detailed documentation and a set of sample
   apps.
   
   http://wesley.informatik.uni-freiburg.de/~wxxt/
   
   The main wxWindows site:
   http://web.ukonline.co.uk/julian.smart/wxwin/
   
    wxWindows is free and has been an open source project since long
    before that term was trademarked.
   
   For more information:
   Robert Roebling, roebling@sun2.ruf.uni-freiburg.de
     _________________________________________________________________
   
  hm-3.0 - multiplatform curses-based filemanager
  
   Date: Fri, 11 Sep 1998 08:50:49 GMT
    hm 3.0 is a multiplatform, curses-based file manager, developed,
    adjusted and matured over 3 years by and for Unix system managers.
    Its look is versatile, from ls-like to ls -ail. All the basics take
    one keystroke: cd, cat, chgrp, chmod, chown, cp, diff, file, ln, man,
    mkdir, mv, od, rm, sh, sum, tail -f, vi, wc. A help facility is built
    in (no man page needed).
   
   http://sunsite.unc.edu/pub/Linux/utils/file/managers/hm-3.0.tar.gz
   
   For more information:
   Hans de Hartog, dehartog@csi.com
     _________________________________________________________________
   
  mswordview 0.4.0 released
  
   Date: Fri, 11 Sep 1998 12:43:27 GMT
    Yes, the best thing since sliced bread: the ongoing Office 98 Word
    format to HTML conversion project has notched up another few
    victories.
    
    Changes since the last announced version are basically:
      * many, many bug fixes
      * improved lists
      * vastly improved header and footer support
      * section support
      * page numbering styles support
      * improved handling of hyperlink fields
      * preliminary support for graphics! Given a GIF/JPG/PNG inserted
        via the Insert->Picture->From File mechanism, mswordview can now
        successfully output the graphic, though this feature is very
        alpha and based upon more than a little guesswork.
   
   http://www.csn.ul.ie/~caolan/docs/MSWordView.html
   http://www.gnu.org/~caolan/docs/MSWordView.html
   
   For more information: Caolan McNamara, Caolan.McNamara@ul.ie
     _________________________________________________________________
   
  acua 2.11 - modem pool administration utility
  
   Date: Tue, 15 Sep 1998 14:28:51 GMT
   ACUA is designed to facilitate the administration of Linux systems
   hosting modem pools. ACUA's high-level goals are:
     * to automate the enforcement of access restrictions
     * to automate (as much as possible) user administration tasks
     * to provide accounting information
     * to collect and provide useful statistics
       
   http://acua.gist.net.au/
   
   For more information:
   Adam McKee, amckee@iname.com
     _________________________________________________________________
   
  InfoPrism v0.0.3 - A General Document Processing System
  
   Date: Tue, 15 Sep 1998 14:25:51 GMT
   
   InfoPrism is a general document processing system that translates SGML
   source files to different output formats like HTML, Texinfo, LaTeX and
   plain text.
   
    In addition to plain old SGML documents, InfoPrism handles so-called
    SGML scripts as well. These are Tcl scripts using additional commands
    for document creation. The commands:
      * are counterparts of SGML elements (e.g. `ul', `pre');
      * are shortcuts for multiple SGML elements (e.g. `liwul');
      * simulate SGML facilities (e.g. `include').
   
   Examples can be found in the `sgml' subdirectory of the distribution.
   
   http://www.han.de/~racke/InfoPrism/
   
   For more information:
   Stefan Hornburg, racke@gundel.han.de
     _________________________________________________________________
   
  Fixkeys 0.1 - Mini-HOWTO on home/end/del/backspace keys
  
   Date: Tue, 15 Sep 1998 14:35:58 GMT
    Fixkeys is a mini-HOWTO on getting the Home/End/Delete/Backspace keys
    to behave the way you want under Linux. It comes with prepared
    configuration files and describes not only what to do to get your
    keys working, but also why.
   
   http://electron.et.tudelft.nl/~jdegoede/fixkeys.html
   
   For more information:
   Hans de Goede, j.w.r.degoede@et.tudelft.nl
     _________________________________________________________________
   
  Linux PC-Emulator DOSEMU, new stable release: dosemu-0.98.1
  
   Date: Tue, 15 Sep 1998 14:51:27 GMT
   The DOSEMU team is proud to announce DOSEMU 0.98.1, the PC Emulator
   for x86 based *nix. Please remember to consider this as ALPHA
   software.
   
   DOSEMU is a PC Emulator application that allows Linux to run a DOS
   operating system in a virtual x86 machine. This allows you to run many
   DOS applications.
   
   The DOSEMU PC Emulator can be downloaded from the following FTP sites:
   
   ftp://ftp.dosemu.org/dosemu/
   ftp://tsx-11.mit.edu/pub/linux/ALPHA/dosemu/
   
   The binary distribution is statically linked against libc-5.4.46 and
   libX* from XFree-3.3.2.3. It should run on all current Linux
   distributions.
   
   For more information:
    The DOSEMU Development Team, linux-msdos@vger.rutgers.edu
   http://www.dosemu.org/
     _________________________________________________________________
   
  ROADS 2.00 - a free Perl based Yahoo-like system
  
   Date: Mon, 21 Sep 1998 10:31:31 GMT
   ROADS version 2.00 is a free Yahoo-style system written in Perl. It is
   a collection of tools which can be used in building on-line
   catalogues.
   
   ftp://ftp.roads.lut.ac.uk/pub/ROADS/roads-v2.00.tar.Z
   
   For more information:
   Martin Hamilton, martin@net.lut.ac.uk
     _________________________________________________________________
   
  Loadmeter 1.20 - Linux/Solaris system stats utility
  
   Date: Mon, 21 Sep 1998 10:33:20 GMT
    Loadmeter is a useful little system monitoring utility I've hacked up
    to keep track of various system stats. It monitors load average,
    uptime, disk usage, and memory usage.
   
   http://www.zip.com.au/~bb/linux/
   
   For more information:
   Ben Buxton, bb@zip.com.au
     _________________________________________________________________
   
  Gtk-- 0.9.15 - C++ wrapper for gtk
  
   Date: Mon, 21 Sep 1998 11:08:08 GMT
   Version 0.9.15 of Gtk-- is now available.
   
   http://www.iki.fi/terop/gtk/
   
   Gtk-- is a C++ wrapper for GTK, the Gimp ToolKit. GTK is a library for
   creating graphical user interfaces. Gtk-- is distributed under GNU
   LGPL.
   
    Gtk-- provides a C++ abstraction of the GTK library. The C++
    interface is kept very similar to GTK's own, so documentation and
    knowledge of GTK can be reused when creating GUI applications with
    Gtk--, while still enjoying the advantages the C++ language offers.
   
   Gtk's homepage: http://www.gtk.org/ Gnome homepage:
   http://www.gnome.org/
   
    Note: GNOME and gtk 1.1 widget support require the newest versions
    from the GNOME CVS server.
   
   For more information:
   Tero Pulkkinen, terop@assari.cc.tut.fi
     _________________________________________________________________
   
  klp-0.2 - a print queue manager for KDE
  
   Date: Mon, 21 Sep 1998 11:22:12 GMT
   It's here -- klp - a line printer queue manager for KDE -- Second
   (alpha) release 0.2.
   
    klp is a GUI-based replacement/wrapper for lpr/lpq/lprm (or their
    equivalents for other types of print servers). It manages printers'
    print queues. klp is intended for use with the K Desktop Environment,
    http://www.kde.org/.
    
    You can print by dragging and dropping documents from KDE's file
    manager onto it. You can view the queue and remove items from it.
   
   klp can dock itself in the panel, still showing the printer status.
   The docked icon also allows printing by drag&drop.
   
   http://rulhmpc49.LeidenUniv.nl/~klp
   
   For more information:
   Frans van Dorsselaer, dorssel@MolPhys.LeidenUniv.nl
     _________________________________________________________________
   
  TkDesk 1.1 released
  
   Date: Mon, 21 Sep 1998 12:27:48 GMT
   TkDesk is a graphical desktop and file manager for several types of
   UNIX (such as Linux) and the X Window System. It offers a very rich
   set of file operations and services, and gives the user the ability to
   configure most aspects of TkDesk in a powerful way. The reason for
   this is the use of Tcl/Tk as the configuration and (for the biggest
   part of TkDesk) implementation language.
   
   http://people.mainz.netsurf.de/~bolik/tkdesk/
   
   For more information:
   Christian Bolik, Christian.Bolik@mainz.netsurf.de
     _________________________________________________________________
   
  XFCE 2.1.0 - Window/Backdrop Manager and Toolbar for X released
  
   Date: Mon, 21 Sep 1998 12:37:49 GMT
    XFce is now a set of applications for X11, including a powerful
    window manager compatible with MWM(tm), OpenLook(tm), GNOME and KDE
    hints, a toolbar, a backdrop manager and a (NEW!) system sound
    manager. Unlike so many other X applications, XFce is very easy to
    use and to configure, thanks to its all-mouse-driven menus. It
    features pulldown menus with color icons, 3D widgets, etc.
    
    http://xfce.penguincomputing.com/
    http://www.linux-kheops.com/pub/xfce/
    http://tsikora.tiac.net/xfce
   
   Anonymous ftp sites :
   
   ftp://antarctica.penguincomputing.com/pub/xfce
   ftp://ftp.linux-kheops.com/pub/xfce-2.0.4
   ftp://tsikora.tiac.net
   
    XFce is a toolbar and a kind of desktop environment (XFce stands for
    XForms Cool Environment). With XFce, there is no need to learn any
    definition language or type any configuration file; XFce does it
    itself. Just use the mouse to define your preferences. XFce provides
    an elegant and easy way to start all your X Window applications,
    using nice color icons, popup menus, etc.
   
   For more information:
   Olivier Fourdan, fourdan@csi.com
     _________________________________________________________________
   
  COMMERCIAL: Better Counter for Linux
  
   Date: Mon, 21 Sep 1998 10:31:00 GMT
   Better Counter - one of the leading CGI scripts for counting web pages
   - is now also available for Linux (on Intel hardware). Better Counter
   is the world's first counter that combines the following features:
   
   - Counts all pages of your site (depending on your service level)
   - Counts the click-through of your external links
   - Usability and clarity of the data presentation by using a Java
   Applet
   - Complete hits analysis within a freely customizable page structure
   
   Better Counter is also available as a FREE service.
   
   http://www.better-counter.com/
   
   For more information:
   Stefan Ruettinger, Stefan_Ruettinger@rocketmail.com
   http://www.better-homepage.com/
     _________________________________________________________________
   
             Published in Linux Gazette Issue 33, October 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
      Copyright  1998 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
                           (?) The Answer Guy (!)
                                      
                   By James T. Dennis, answerguy@ssc.com
          Starshine Technical Services, http://www.starshine.org/
     _________________________________________________________________
   
  Contents:
  
   (!)Greetings From Jim Dennis
   
   (?)AutoCAD for Linux? Not Yet. Sorry.
          
   (?)fd0 --or--
          Floppy/mount Problems: Disk Spins, Lights are on, No one's
          Home?
          
   (?)SCSI drive installation --or--
          Partition your HD before you try to use it.
          
   (?)Suppressing cc: lines in Emacs' Mail replies
          
   (?)chroot, twist, and other rescue-boot fun --or--
          "Virtual Hosting" inetd based services using TCP Wrappers
          
   (?)Linux/Samba as a Primary Domain Controller
          
   (?)ip masquerading --or--
          IP and Sendmail Masquerading over a Cablemodem
          
   (?)tty help --or--
          Pseudo tty Becomes Unusable
          
   (?)connect script failed --or--
          O.K. It's not a Winmodem
          
   (?)[linuxprog] more shuffling experiments --or--
          Shuffling Lines in a File
          
   (?)Conditional Execution Based on Host Availability
          
   (?)Desqview --or--
          Buying DESQview and/or DESQview/X
          
   (?)Thanks for the pointer to uuencode sources.
          
   (?)Download a Catch 22? --or--
          Chicken and Egg (Catch-22) for Linux Download/Install
          
   (?)Important typo in Anti-Windows emulator rant --or--
          Will the "Real" freshmeat Please Get Bookmarked?
                        ____________________________
   
  (!)Greetings From Jim Dennis
  
    Back to School Special
    
   Well, it's been another great month for Linux. We hear that Intel and
   Netscape are investing in Red Hat Inc. and Intel is joining Linux
   International.
   
   So, everything is looking rosy for our favorite platform.
   
   What could be better?
   
   Well, I read an interesting editorial in ``;login:'', the USENIX
   (http://www.usenix.org/) Association's monthly magazine. It is by
   Jordan Hubbard, one of the founders of the FreeBSD project --- and an
   employee at Walnut Creek.
   
   He talks about the tendency of the freenix "clans" to fragment and
   duplicate development effort over relatively petty differences in
   licensing and --- more often as a result of the slithings and bites of
   "the snakes of Unrestrained Ego and Not Invented Here."
   
   This fragmentation has been crippling the overall Unix marketplace for
   twenty years. The odd thing is that there is both a Unix "community"
   and a "marketplace." The members of the community tend to form "clans"
   which may bicker but mostly feel that they share common goals.
   We'll argue incessantly over the advantages of a BSD'ish vs. a GPL
   license, or the superiority of 'vi' over 'emacs' or vice versa (I'm a
   heretic on that battle --- I use xemacs in "viper" -- vi emulation
   mode).
   
   The Unix community has a long history of producing free software ---
   one that predates Linux, FreeBSD, X Windows, and even the Free
   Software Foundation itself. The FSF's GNU project was the first
   organized and formal effort to produce a fully usable system of tools
   that was unencumbered by corporate copyright (some argue that the
   "encumberances" of the GPL are even too much --- but that's back to
   the perennial clan feud; so let's skip it).
   
   We may believe that Linux is the culmination of that effort. I hope
   it's not.
   
   Jordan goes on to explain the FreeBSD attitude to software vendors
   that are expressing a renewed interest in the UNIX market and why he
   (and his associates) tell them "to port to Linux first (or at all)."
   
   The FreeBSD support for running Linux binaries is apparently pretty
   solid (my use of FreeBSD has only required native binaries). It's
   possible that FreeBSD could be "fully Linux compatible" right down to
   compliance with the "Linux Standards Base." (It's likely to be easier
   for FreeBSD to achieve compliance than it will be for the various
   non-x86 Linux ports).
   
   Jordan also goes on to speculate:
   
     `` Say, for example, that someone fairly prominent in the Linux
     community popped up and told various users that they might want to
     give FreeBSD a whirl, just to check out what it has to offer
     lately. ''
     
   Well, I'm probably not "fairly prominent" enough to fulfill Jordan's
   wish here. However, I've been saying that for years, here and in other
   fora. I think some of the SVLUG members are sick of hearing me suggest
   it.
   
   My co-author (on the Linux book that we're writing) is a FreeBSD user.
   Some of my best friends favor NetBSD. My wife has been recently
   working for an outfit that uses FreeBSD for most of their desktop
   systems (only occasional spots of Linux) and Solaris for their
   servers. (The FreeBSD support for Japanese is apparently very good ---
   and it seems to be *much* more popular than Linux in Japan.)
   
   I've used FreeBSD and still recommend it as an FTP server. I tend to
   stick with Linux for two reasons. The first is laziness: I've gotten
   much more used to Linux' quirks than FreeBSD's. The second is
   availability: it's easy to pick up new CD's for Linux --- they're
   everywhere --- while I have to hunt around a bit for FreeBSD CD's.
   
   However, I'm going to be trying a copy of 3.0 when it ships (I guess
   that will be near the end of this month). I'd suggest that all serious
   Linux students and enthusiasts try one of the BSD's --- FreeBSD for
   x86's; NetBSD for just about anything else; OpenBSD if you're putting
   an "exposed" system and allowing shell access to it.
   
   Meanwhile I'll also suggest that you look at other operating systems
   entirely. Linux, FreeBSD, NetBSD, OpenBSD, Solaris .... they're all
   Unix. When you get beyond DOS/Windows/NT and MacOS all you see is
   UNIX.
   
   However there's quite a bit more out there. You just have to dig for
   them. Here's one place where you can start:
   
   http://www.starshine.org/OS/
   
   I wrote that page a long time ago --- but most of the links still seem
   to be alive (O.K. Sven moved --- so I had to fix one link).
   
   Two notes of interest:
   
     Amoeba is now "free"
     
     Amoeba is a distributed OS (think Beowulf clusters with lots of OS
     level support for clustering, process migration etc). It was written
     as a research project by Andrew S. Tanenbaum of Vrije University
     (the author of Minix, and the text book from which Linus learned
     some of what he knows about OS design). There was a legendary
     "flamewar" (actually just a public debate) on the alt.os.minix
     newsgroup about the merits of monolithic kernels (Linux and the
     traditional Unix implementations) vs. "microkernels" (Minix, MACH,
     the GNU HURD, NeXTStep, and many others).
     
     To learn more about Amoeba:
     
     http://www.cs.vu.nl/pub/amoeba/
     
     The EROS project (Extremely Reliable OS) has apparently finally
     been completed (for its initial release). I've mentioned this
     project in my earlier columns --- it is a microkernel OS which
     implements a "pure capabilities" security and authority model. This
     is so unlike the identity and access control lists models we see in
     Unix, NT, Netware, VMS and other multi-user OS that it took me
     about a year to "unlearn" enough to get some idea of what they were
     talking about.
     
     EROS is not a free system. However, there are provisions for free
     personal use and research.
     
     You can read more about EROS at:
     
     http://www.cis.upenn.edu/~eros/
     
     (The FAQ's explanation of capabilities and its comparison to ACL's
     and identity based authority models is *much* better than anything
     that I found back when I first looked at this project a couple of
     years ago).
     
   So, before you sing the praises of Linux to another potential convert
   --- consider your basis for comparison. If you've only used
   DOS/Windows/NT and Linux --- you'll want to go back to school.
            ____________________________________________________
   
  (?) AutoCAD for Linux? Not Yet. Sorry.
  
   From david stankus on 24 Sep 1998
   
   Hi, I was talking with Terry and he told me you may know of a way to
   use an AutoCAD14 compatible on the Linux OS platform? I'm an AutoCAD
   driver for pay and am about to build a machine and I'll need an OS for
   said machine. Do you think Linux is a good way to go? Thanks Dave 
   
     (!) Last I heard there was no support for Linux from Autodesk.
     Although they originally developed on Unix, Autodesk has shifted
     almost completely to Windows in recent years --- and they've been
     cutting their margins and trying to make it on volume. The prices
     for their Unix versions were always much higher than the Windows
     versions --- so their perception of the market interest levels is
     probably a matter of "self-fulfilling" prophecy. (Naturally the
     market will appear to have greater demand for the version that
     costs one quarter the price).
     
     So you probably won't get AutoCAD running directly. I also wouldn't
     try to run it under one of the Windows emulators that's available
     for Linux --- those are generally too slow and unstable for
     productive use on major applications. They are most suited to the
     occasional case where you need to get into Word or Excel to extract
     some data from a proprietary document.
     
     Of course I could be wrong --- you should definitely call Autodesk
     and ask them. We've recently had Informix, Oracle, Sybase, Corel,
     IBM and other major companies announce product plans (and actually
     release products) for Linux. So, Autodesk might be jumping on this
     bandwagon to blow their own horn any time. Calls by real users who
     are really interested in making an immediate purchase are bound to
     help. I've copied their webmaster on this message so that he or she
     (or they) can forward this along to the appropriate parties. (I
     did search their web site at http://www.autodesk.com for Unix and
     Linux --- and there didn't seem to be any support for any PC based
     Unix -- though there was mention of AIX, HP-UX, and Sun [sic] ---
     that would presumably be Solaris/SPARC).
     
     If that doesn't work you could try some of the native Linux CAD
     packages. There are a couple of these out there --- one is called
     "Microstation" from Bentley systems. It is available only in a
     "student version" and they won't sell a "fully support" edition for
     commercial/professional use at this time. There also one called
     VariCAD and another called Varkon. Actually there's a whole list of
     related products at:
     
     http://www.linuxapps.com/cgi-bin/group.cgi?cad3d
     
     ... LinuxApps.com is an extensive site that lists a good cross
     section of the available Linux software (mostly commercial software
     in this case).
     
     Two other favorite sites for Linux applications are:
     
   Christopher B. Browne's home pages:
          http://www.hex.net/~cbbrowne
          
     Christopher is very active on the comp.os.linux.* newsgroups ---
     where he is often a voice of cool reason amidst the flames. His
     Linux pages cover DBMS (databases) more extensively than any
     others I've found.
     
     ... and:
     
   Linas Vepstas
          http://www.linas.org
          
     Linas Vepstas should not be confused with Linus Torvalds. However,
     Linas does maintain a nice trim set of web pages devoted to "Linux
     Enterprise Computing." I particularly like Linas' commentary and
     annotations, including the occasional wisecrack. This is not "just
     another bookmarks" page.
     
     These might not work like AutoCAD at all and I don't think they
     support the same document formats nor the "AutoLISP"
     programming/macro'ing language. However they might suit you.
     
     Ultimately if your most important requirement is AutoCAD --- then
     you're probably stuck with Windows until Autodesk figures it out.
     Until then you could toss Linux up on a cheap little PC in the
     closet --- run an ethernet cable to it and access all your Linux
     applications remotely (via telnet and/or VNC or X Windows). If you
     use 'screen' and VNC it's possible to leave jobs running on the
     Linux box "detached" from your Windows box, so that the frequent
     reboots required by Windows won't disturb your other work. (My
     boxes at the house usually stay up for months at a time. I only
     occasionally reboot any of them --- usually to add hardware or
     install a new kernel.)
     
     Your "closet" server can be as modest as a 386 with as little as
     16Mb of RAM and a 100Mb hard drive. (Actually it's possible to boot
     from a single diskette and do limited work in 8Mb of RAM or less
     --- but 16Mb and a hard drive is still a good idea).
            ____________________________________________________
   
  (?) Chicken and Egg (Catch-22) for Linux Download/Install
  
   From Richard Santora on 14 Sep 1998 
   
   Question. Can you download Linux applications onto a floppy disk
   formatted under dos and then install to Linux? 
   
     (!) You can put tar, rpm or other types of packages on a DOS floppy
     (MS-DOS filesystem) and use that to transport any (sufficiently
     small) application.
     
     You'd just mount the floppy with a command like:
     
     mount -t msdos /dev/fd0 /mnt/a
     
     ... and access the files under the /mnt/a (or whatever mount point
     you chose). You could then extract the members of a .tar.gz file
     with a command like:
     
     cd /usr/local/from/floppy &&
     tar xzf /mnt/a/mynew.tgz
     
     ... or you could use your favorite packaging commands to work with
     rpm and deb files.
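
     Since the whole transfer is just ordinary file shuffling, you can
     rehearse it without a floppy at all. In the sketch below a scratch
     directory under /tmp stands in for the mounted floppy --- all of the
     paths and file names are invented for the example:

```shell
#!/bin/sh
# Dry run of the floppy "sneakernet": pack a file into a .tar.gz, drop
# it in a directory standing in for /mnt/a, then unpack it at the
# destination -- the same tar xzf step described above.
set -e
mkdir -p /tmp/fakefloppy /tmp/dest
echo "hello from DOS-land" > /tmp/srcfile
tar czf /tmp/fakefloppy/mynew.tgz -C /tmp srcfile
cd /tmp/dest && tar xzf /tmp/fakefloppy/mynew.tgz
cat /tmp/dest/srcfile          # prints: hello from DOS-land
```

     With a real floppy the only differences are the mount step and the
     /mnt/a path; the tar invocation is identical.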
     
   (?) Background. I am an inexperienced Linux Red Hat 5.0 home user. I
   selected the "everything" software installation choice from the
   installation CD onto my Dell XPS 60 with 40 Mb of Ram. I am using
   System Commander to use this pc to run dos based operating systems as
   well as Linux. I have internet access through Windows 3.1 and Windows
   95. I am having difficulty getting a connection under Linux to my ISP,
   the Microsoft Network. (I have been able to get the modem to dial out
   using the Modem Tool and the Network Configurator in the X window
   Control Panel.) I would like to download one of the freeware PPP
   programs and also Netscape Navigator. When I download the PPP program
   using Windows 95, the file format extension will remain ".tar" or
   ".rpm"; however, the Linux OS will not mount the floppy. I can get to
   the directory using "mdir" but I can not seem to get the program to
   install. Is there a work around? 
   
     (!) If you can't get the floppy (or your hard drive) to mount under
     Linux then you're probably missing some module or kernel driver
     (your kernel might not have the MS-DOS/FAT --- or VFAT, etc. ---
     support enabled).
     
     If you can see it under Linux with 'mdir' (from the 'mtools'
     package) then you can also copy it to one of your Linux native
     directories (such as /tmp) using the 'mcopy' command.
     
     Read the 'mtools' man pages for details.
            ____________________________________________________
   
  (?) Another (Lose)-Modem
  
   From Barbara Ercolano on 20 Sep 1998 
   
   Hi James, I saw your "Answer Guy" page and I thought that maybe if you
   spare a few minutes you might help me with solving my connection
   problem. I have recently installed redhat linux on my PC and i am now
   trying to set up an internet connection. I have the chatscript, the
   ppp-on and the ppp-off scripts; the thing is that when i try to run
   ppp-on nothing happens.
   
   The syslog file says: 
   
....kernel: PPP Dynamic channel allocation code copyright 1995 Caldera, Inc.
....kernel: PPP line discipline registered
....kernel: registered device ppp0
....pppd[243]: pppd 2.2.0 started by root, uid 0
....chat[244]: timeout set to 5 seconds

     (!) This is where the chat script sets a timeout.
     
 (?) ....chat[244]: sent (ATZ^M)
....chat[244]: alarm

     (!) This is where the timeout occurs.
     
 (?) ....pppd[243]: Connect script failed
....chat[244]: Failed
....pppd[243]: Exit.
....kernel: PPP: ppp line discipline successfully unregistered

     (!) Just from this I know that your ATZ is getting no response.
     That suggests that there is not a Hayes compatible modem on the
     other end of the connection. Either you're pointing this at the
     wrong device (it's going to your serial mouse) --- or you have a
     WINMODEM!
     
     'winmodems' are NOT Hayes compatible devices. They are little
     chunks of cheap hardware that can be used with proprietary (MS
     Windows only) drivers to emulate a modem --- at a measurable cost in
     your system's CPU cycles.
     
   (?) this is my chatscript (/etc/ppp/chatscript) 
   
TIMEOUT 5
"" ATZ
OK ATDT08450798888
ABORT 'NO CARRIER'
ABORT BUSY
ABORT 'NO DIALTONE'
ABORT WAITING
TIMEOUT 45
CONNECT ""
"ogin:" uk,ppp,myusername
"ssword:" password

     (!) Good, you sanitized it. It's not good to send usernames and
     passwords to public discussion fora.
     
   (?) this is my /usr/sbin/ppp-on script: 
   
#!/bin/sh
#
# ppp-on - Set up a PPP link
#

CFG_DIR=/etc/ppp
LOCKDIR=/var/lock

DEVICE=cua1

MYIP=0.0.0.0

if [ -f $LOCKDIR/LCK..$DEVICE ]; then
   echo "PPP device is locked"
   exit 1
fi

/usr/sbin/pppd -d /dev/$DEVICE 38400 connect "/usr/sbin/chat -v -f $CFG_DIR/chatscript" defaultroute $MYIP: && exit 0

echo "PPP call failed"
exit 1

   this is my /usr/sbin/ppp-off script 
   
#!/bin/sh
#
# ppp-off - Take down a PPP link
#

if [ "$1" = "" ]; then
   DEVICE=ppp0
else
   DEVICE=$1
fi


if [ -r /var/run/$DEVICE.pid ]
then
   kill -INT `cat /var/run/$DEVICE.pid`

   if [ ! "$?" = "0" ]; then
      rm -f /var/run/$DEVICE.pid
      echo "ERROR: Removed stale pid file"
      exit 1
   fi
   echo "PPP link $DEVIVE terminated"
   exit 0
fi

echo "ERROR: PPP link is not active on $DEVICE"
exit 1

     (!) This is all much too elaborate. I'd just use a command like:
     
     pppd file /etc/ppp/myisp.options
     
     ... and let it contain all the other options specific to this ISP.
     
     pppd will read the global options file (/etc/ppp/options) which in
     most cases should just contain the "lock" directive.
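
     For instance, a hypothetical /etc/ppp/myisp.options might hold
     something like this (the device, speed, and chat script path are
     examples --- substitute your own; pppd treats '#' as starting a
     comment in its options files):

```
# /etc/ppp/myisp.options -- everything specific to this one ISP
/dev/ttyS1                     # serial port the modem is on (COM2)
38400                          # port speed
connect "/usr/sbin/chat -v -f /etc/ppp/chatscript"
defaultroute                   # route everything through this link
noipdefault                    # let the ISP assign our IP address
```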
     
   (?) this is my /etc/ppp/options file: 
   
0.0.0.0:
/dev/cua1

     (!) The cua* devices are deprecated. Use ttyS* instead.
     
 (?) lock
crtscts
defaultroute
asyncmap 0
mtu 296
mru 296

   this is my etc/resolv.conf 
   
search netcomuk.co.uk
nameserver 194.42.224.130 194.42.224.131

     (!) This is irrelevant to getting the modem to dial (chat). Also it
     is interesting that you sanitized your login name and password but
     left in this identifier.
     
     Oddly enough you can use just about any nameserver on the Internet
     --- not just the one that your ISP provides. I've occasionally used
     the nameserver from one of my former employers when setting up a
     new machine at a customer site --- just long enough to have DNS to
     'dig' up the more appropriate and closer nameservers (which should
     all have names or CNAMES of the form: ns*.foo.org in my
     not-so-humble-opinion).
     
   (?) This is all i can think of... mmhhh. I am not sure this is
   relevant but i tried to run minicom as well and that didn't work
   either, I mean it seems to be getting stuck... anuway... i hope you
   can help me... 
   
     (!) If you can't get a boring old terminal emulation package like
     'minicom' or 'ckermit' talking to your modem --- then it is quite
     relevant to your problems running SLIP, PPP, fax, or anything else
     to that modem. The simplest thing you can do to a modem is send it
     an ATZ and get an OK response. If you can't do that --- the modem
     (or your serial port, or your way of talking to the serial port)
     isn't working.
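
     To see the shape of that exchange without any hardware, here is a
     toy version: a background shell process plays the part of a healthy
     modem, with named pipes standing in for the serial line (the file
     names are invented for the example):

```shell
#!/bin/sh
# Fake "modem": read one command from a pipe; if it was ATZ, answer OK
# -- the same request/response that chat's '"" ATZ' / 'OK ...' lines
# are waiting for.
rm -f /tmp/modem.in /tmp/modem.out
mkfifo /tmp/modem.in /tmp/modem.out
( read cmd < /tmp/modem.in
  [ "$cmd" = "ATZ" ] && echo "OK" > /tmp/modem.out ) &
echo "ATZ" > /tmp/modem.in     # send the reset command
cat /tmp/modem.out             # prints: OK
wait
```

     A real modem that fails this ATZ/OK exchange in minicom will fail
     it for pppd too --- which is exactly the point of the test.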
     
   (?) Thanks a lot for your time
   love
   Barbara 
   
     (!) No problem. Please, chuck that winmodem and get a real, Hayes
     compatible modem.
                        ____________________________
   
  (?) O.K. It's not a Winmodem
  
   From Barbara Ercolano on 20 Sep 1998 
   
   Hi James, thanks for your email... I am not sure whether i have a
   winmodem .... my modem's a Hayes Accura 336 External Fax Modem...
   
     (!) By their nature winmodems must be internal. Since you have an
     external modem (and a Hayes (TM) brand one at that) we can rule
     that out as the culprit.
     
     This leads us to the next possibility. I mentioned that it might be
     a problem between the OS and your serial hardware.
     
     If you are using the correct /dev/ttyS* node --- then the next
     most likely problem is an interrupt conflict.
     
     Is this a (PnP) "Plug and Pray" system? (Reboot and get into the
     CMOS setup program to look for those features). If so, try
     disabling that and setting all of your COM and printer ports to
     manually selected, non-conflicting ranges.
     
     One of the bugaboos about Linux and most other Unix variants is
     that they tend not to tolerate IRQ conflicts the way that MS-DOS
     and Win 95 might. (This tolerance in DOS and Windows probably leads
     to some of the intermittent hangs that you see with those OS's.)
     So, you should not set your COM2 and COM3 ports on the same IRQ.
     
     First, read the Linux Serial HOWTO. It will go into excruciating
     detail about the topic. Next play with commands like 'statserial'
     and 'setserial' and look at the /proc/interrupts and /proc/ioports
     pseudo-files. Also the boot-up messages might help.
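
     A quick survey along those lines might look like this ('setserial'
     may or may not be installed, and lives in different directories on
     different distributions, so treat that last line as best-effort):

```shell
#!/bin/sh
# Survey the kernel's view of interrupts and I/O ports. A live serial
# port shows up in /proc/interrupts once something has opened it.
cat /proc/interrupts
# Serial ports conventionally sit at 0x3f8 (COM1), 0x2f8 (COM2),
# 0x3e8 (COM3) and 0x2e8 (COM4):
cat /proc/ioports
# Per-device port/IRQ assignments, if setserial is available:
setserial -g /dev/ttyS0 /dev/ttyS1 2>/dev/null || true
```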
     
   (?) Also I think i have got the right port ttyS1 (cua1) for COM2... I
   have tried running minicom, and the init string appeared with my
   cursor at the end of it, so i pressed enter and nothing happened
   after that (I should have got OK, shouldn't I?) I tried to enter my
   username and password (even though no login prompt appeared), and
   again nothing really happened. I saw the modem blink but that's
   about it, so i exited minicom without resetting and looked at the
   syslog file... it said something about the line not being 8-bit
   clean and that bit 7 was set to zero.... all this has
     
     (!) I'm glad you looked in the syslog --- I don't think I
     remembered to suggest that in my earlier response.
     
     This could be a cabling or IRQ problem. Make sure that the modem
     cable is a good one. I used to see problems with cheap cables that
     didn't have all of the handshaking lines connected and things like
     that.
     
   (?) absolutely no meaning to me whatsoever... I thought maybe you'd
   find it more illuminating.
   Thanks a lot for your time
   Cheers,
   Barbara
   
     (!) Yes, I was wrong to assume that it was a winmodem (I've been
     getting too many of those recently) but it looks like I'm still on
     the right track. There is some problem with Linux's ability to talk
     to the device --- in this case it's either having trouble talking
     to the serial port --- or the cable isn't relaying that to the
     modem. Or, it is still possible that you just have the wrong ttyS*
     port. Try the others, ttyS0 through ttyS3 for good measure. (If
     your modem is working on one of those --- skip that one).
                        ____________________________
   
  (?) Yet More on the Serial Port (it's not a WinModem) thing...
  
   From Barbara Ercolano on 21 Sep 1998 
   
   Hi ... it's me again , still tryin'... I've just done 
   
   cat /proc/interrupts
   
   and this is what i've got: 
   
 0:  646864   timer
 1:    2933   keyboard
 2:       0   cascade
 4:    2457 + serial
 8:       1 + rtc
13:       0   math error
14:   71407 + ide0
   
   now the question is, shouldn't i get two lines saying serial if my
   modem was correctly installed??? The 4: 2457 + serial line is the
   mouse, isn't it?
   
     (!) Yes. You probably should have another line there. But what
     about the rest of the suggestions in the Serial-HOWTO. Did you read
     through that?
     
     It used to say something about doing a 'dmesg' command or viewing
     syslog's /var/log/messages shortly after a reboot --- with an
     example of the sorts of lines you should see from the kernel.
     
     The dmesg command is to "display" (actually *re-display*) messages
     that were generated during the boot sequence. All those messages
     that tell you what your kernel "found."
     
     If this port works under DOS, Windows, et al, then you might use
     the "MSD.EXE" (Microsoft Diagnostics) package to tell you where DOS
     is finding the port. You can also use the "procinfo" command (from
     Linux) to get handy one page summaries of some system diagnostics
     and performance stats (including how many interrupts have been
     received and handled by the kernel on each IRQ).
     
     It may be that your serial port is set at a reasonable
     (non-conflicting) IRQ --- but that it's at one that the kernel
     doesn't probe by default.
     
     To fix that you'd use the 'setserial' command to associate a given
     /dev/ttyS* device with an IRQ and set other characteristics on the
     line. It's also possible, though less likely, that you might have
     to use the stty command to set yet other characteristics of the tty
     lines.
     
   (?) Maybe this is where my problem is... what do you think? And if
   this is the problem , what do i need to do? 
   
     (!) Try reading that HOWTO. It's a bit long --- but I'd just end up
     retyping most of it at this point anyway. Also read the man pages
     for 'setserial' and 'stty' and play with them a little bit.
     
     Since you seem to have a serial mouse --- try putting the mouse on
     that other serial port, and changing your start scripts
     (/etc/rc.d/$whatever) to have gpm and X use that.
     
     Actually on most Linux systems there's a symlink under /dev/ from
     "mouse" -> ttyS1 or -> psaux or whatever, and another from "modem"
     -> ttyS* (or to the deprecated cua* "callout" ports). So, when you
     move a mouse or modem to a different serial port, you usually only
     have to change those symlinks accordingly (just 'rm' the symlink
     and create a new one, or use the 'ln -sf $device mouse' command).
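
     The symlink shuffle itself looks like this --- rehearsed here under
     /tmp with dummy files so that it is harmless to run (on a real
     system you would operate on /dev/modem and the real ttyS* nodes):

```shell
#!/bin/sh
# Pretend /tmp/fakedev is /dev: create dummy ports, then repoint the
# "modem" symlink from one to the other, just as you would in /dev.
set -e
mkdir -p /tmp/fakedev
touch /tmp/fakedev/ttyS0 /tmp/fakedev/ttyS1
ln -sf ttyS1 /tmp/fakedev/modem    # modem on COM2
readlink /tmp/fakedev/modem        # prints: ttyS1
ln -sf ttyS0 /tmp/fakedev/modem    # moved to COM1
readlink /tmp/fakedev/modem        # prints: ttyS0
```

     Note that 'ln -sf' replaces the symlink itself rather than the
     file it points at, which is what makes this a one-step change.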
     
   (?) cheers
   Barbara 
   
     (!) I hope we get closer this time. Do you have a local users group
     or other local guru to tap into for some in-person, hands-on
     expertise?
            ____________________________________________________
   
  (?) Buying DESQview and/or DESQview/X
  
   From Larry Herzog Jr. on 19 Sep 1998 
   
   Do you have any idea where someone can buy the final releases of both
   Desqview386 and Desqview/X??
   
     (!) Larry,
     
     Unfortunately I don't. If Quarterdeck doesn't offer it directly
     (try calling and pestering for it via their voice line) then I have
     no idea where you could get it.
     
     I presume you ask because you found a reference to DESQview on my
     web pages. The fact is that I gave up on DV (and MS-DOS in general)
     about five years ago --- when I switched to Linux full time.
     
     Linux will run on just about any hardware that could support
     DESQview/386 --- and its DOSemu package is just about as good as
     DV ever got. XFree86, the X Windows system supported by Linux (and
     the other freenix varieties) is much more stable and modern than
     DESQview/X ever was (although I did like dwm --- their quick little
     window manager --- and "appman", the applications manager).
     
     I think it's a pity that Quarterdeck has done so poorly. However, I
     must say I saw it coming. That's one of the reasons I left their
     employ when I did (long before they gutted the whole department I
     had been in).
     
     I think that the best things that Quarterdeck could do now are:
     
     * Release DESQview, QEMM386, the DESQview API programming kits etc
       all under the GPL. [I think one of the other Open Source(tm)
       licenses would work fine, too; for example, the NPL style with
       Quarterdeck filled in as originator. -- Heather]
     * Encourage Caldera (current owners of DR-DOS --- and a major
       distributor of Linux) to incorporate these into their DR-DOS
       package (which is now targeted toward embedded x86 systems)
     * Release the DV/X sources, dwm and the related utilities.
     * Start writing Linux and freenix applications --- and adding some
       professional polish and consumer touches to various freeware
       projects and sell collections of these add-ons.
     * Offer paid Linux telephone support. (Quarterdeck had the most
       effective and efficient tech support department that I've ever
       seen or worked with --- with the most expeditious and sensible
       call escalation methodology. If they haven't obliterated that
       from their corporate memory --- they could rock! I, the Linux
       Gazette "Answer Guy", would call them in a heartbeat if they
       were offering commercial support.)
       
     But, alas and alack, it is not likely to be. Sorry I can't help you
     more than that.
            ____________________________________________________
   
  (?) Suppressing cc: lines in Emacs' Mail replies
  
   From Ning on 23 Sep 1998 
   
   Hi Jim, 
   
   I found your email address from the Linux Gazette web site. Hope it's
   ok to ask you a question. I use emacs to read and reply to email.
   Could you please tell me how to set up the .emacs file such that the
   CC line(s) are automatically removed when replying to an email?
   
   Many thanks,
   Ning 
   
     (!) That would depend on which mail reader you're using under
     'emacs'.
     
     I use mh-e -- the emacs front end to the Rand MH mail handling
     system. When I hit "r" for "reply" it asks "Reply to whom:" (my
     choices are "all" or <enter>/(none))
     
      If I choose "all" or "cc" then mh-e will add the cc: lines to my
     headers. Otherwise, if I just hit enter it will only include the
     address(es) listed on the From: line.
     
      If you use RMAIL or VM or Gnus your answer will be different. There
      are several mail readers for emacs --- and you'll want to read the
      help and 'info' pages for the one you're using to find out how to
      customize it. Sometimes you have to resort to reading the elisp
      sources, particularly the comments, in order to understand an emacs
      package. This is particularly handy if you intend to do any
      customization of your own .emacs configuration file, since that
      is also written in elisp.
     
     In VM and Gnus you can use "r" to reply ("R" to reply with the
     original quoted) and "f"/"F" to "follow" (do a "wide reply"). Even
     if you pick the lower case options you can yank in (quote) the
     original message. The capitalized forms just save you an extra
     couple of keystrokes. Gnus can be used as a mail reader as well as
     a newsreader --- and allows you to see your mail folders in the
     same sort of "threaded" mode as you might be used to from
     newsreaders.
     
      Gnus will allow you to view mail and news that are stored in just
      about any format. I use it to view some of my MH folders
      (particularly on the rare occasions when I can get into the Linux
      Kernel mailing list digests).
     
      VM allows you to "view" your standard "mbox" mail folders --- which
      are the same sort as you'd get from using /usr/ucb/mail (mailx),
      'elm' and/or 'pine'.
     
     RMAIL is the oldest and least featureful of the emacs mailreaders.
     It stores your messages in a single folder in the "Babyl" format.
     I've never used it and the info pages don't reveal any obvious
     difference between replying to "just the sender" and to the whole
     group of recipients (what 'elm' users think of as "r" vs "group" or
     "g" replies).
     
      The reason I use MH folders is that it allows me to use
      glimpseindex and get meaningful results when I search for multiple
      keywords in proximity to one another. For instance, earlier this
      evening I wanted to find any copy of the "comp.unix.admin" FAQ that
      I might have mailed myself. Using the command glimpse "admin;faq" I
      was able to zero in on the specific item in my "ref" (reference)
      folder in one shot. (I let the command run for a couple of minutes
      in the background and continued my writing --- so I don't know how
      long the search took.)
     
     When I used 'elm' a search like that wouldn't have helped much ---
     after finding the right folder I'd still have to find the message
     and cut and paste that portion of the file out to what I was
     working on.
     
     Another feature that's important to me is that I can have multiple
     drafts in progress. I have a whole folder for drafts, and once a
     draft is started it doesn't get "lost" just because I have to set
     it aside and handle more pressing issues, or go look up something.
     
      Naturally you can use Supercite or other "citation/quoting"
      packages with any of the emacs mailreaders to manage exactly how
      your attributions look to them. I've tried Supercite and don't much
      like it. There are also a couple of emacs PGP interfaces that are
      designed to link to your news and mailreaders, and the "tools for
      MIME" (tm) to help compose, view, and extract those pesky MIME
      attachments. Of course you also have 'ispell' available within a
      keystroke or two. (I have mine bound to [F3],$ to check the word at
      point and [F3],% to check the whole buffer --- however this is
      usually not terribly handy for my writing since I tend to have so
      many abbreviations, filenames, and non-words in my work.)
     
      One nice thing about using a mailreader under emacs is that I also
      have easy access to the emacs "calendar" ([F3],C in my
      configuration). From there I can add an entry to my "diary" using
      "i,d" which I can check (using [F3],D in my case).
     
      So, I get mail inviting me to lunch on the tenth of next month and
      I hit a couple of keystrokes (usually [F9] to switch to the message
      buffer, a couple of 'vi' keystrokes to "Yank" the message or a
      couple of paragraphs into a kill buffer, [F3],C to bring up the
      calendar, a couple of keystrokes to navigate to the tenth of next
      month, "id" to "insert a date/diary entry" and "p" (another 'vi'
      key) to paste that note into place).
     
     Now I just try to remember to check my diary folder at least a
     couple times a day. I usually put two entries in for each date. One
     is a one-liner that says: "tomorrow" and the other gives the time
     and details. It might refer me to the "todo" folder, where I'll
     find the original message.
     
     Similarly I use my mh/aliases folder (e-mail address book) as a
     telephone and postal address book as well. I do this just using
     comments (start comment lines with a semicolon --- just as you
     might in a sendmail /etc/aliases file).
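
      For illustration, a hypothetical fragment of such an aliases file
      (the name, number, and address here are invented) might look like:

```
; Jane Doe --- work: +1 555 0100, home: +1 555 0199
; 123 Anywhere St., Sometown
jane: jane@example.com
```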
     
      Before I give people the impression that I'm some sort of emacs
      fanatic I should point out that I detest the default emacs
      keybindings (which I think were devised by sado-masochists on bad
      drugs). I use 'viper' mode as the default for most buffers, and I
      have a fairly long list of custom bindings to save my sanity for
      the things that old 'vi' was just never meant to do (like splitting
      the screen between two buffers and launching "shell-prompt" buffers
      and other editor "packages" like "dired" (file/directory management
      buffers)).
     
     I rarely use "dired" (I prefer 'mc' --- midnight commander) and
     almost never use "gnuscape gnavigator" --- WM Perry's w3 mode. It's
     an impressive bit of work --- but I like lynx for text mode --- and
     Netscape's Navigator is better if I have to go into X anyway.
     
      There are a number of "helper" modes that seem to be more of a
      hindrance than a help to me (like the AucTeX, LaTeX, TeX, and
      html-helper modes). They all seem to take a radically different
      approach to structured text editing than I'm willing to embrace.
      Also I don't like emacs' abbreviations mode -- since I like to have
      abbreviations that include punctuation and it considers all of
      those to be word boundaries and won't let me use them easily. (The
      old 'vi' abbreviations feature was very unassuming --- you gave it
      a list of characters to watch for and a list to expand those into
      --- it just did.)
     
     I'm told that most of the things I do in emacs are now possible in
     'vim' --- and I use 'vim' frequently to do quick edits. I don't use
     'emacs' (actually xemacs) as 'root' -- so all configuration and
     /etc/ files are maintained in whatever version of 'vi' happens to
      be lying around. That's almost always 'vim' these days. However, I
      don't know any of the 'vim' improvements --- they aren't "portable"
      to other vi's or to emacs, so it would be a loss to invest any time
      in learning them, at this point. I use xemacs because it supports a
     mixture of "applications" and utilities (modes and packages in its
     own terminology) that I can use from any old text mode login.
     
     As an "OS within an OS" xemacs is a bit of a pain. Installing a new
     package, like the 'calc' scientific calculator mode (think HP 48
     calculators with all sorts of algebraic expression processing
     analsysis and features to export to GNUplot), and BBDB (the "big
     brother database" --- a sort of "Rolodex" (tm) utility, is
     difficult. It's easy if you just want to wedge it into the same
     directories with the other elisp code --- but I like to put new
     packages that I install into /usr/local or /usr/local/opt (which is
     symlinked from /opt) --- so I can tell what I put there from what
     my distribution installed. That takes extra work.
     
     Anyway -- I'll finish my rant by appending my latest .emacs file.
     Actually my .emacs only reads:
     
(load (expand-file-name "~/.elisp/init.el"))

     ... and my ~/.elisp/init.el is where all the action is:
     
;; Jim Dennis' .elisp/init.el file
(setq inhibit-startup-message 't )
(setq load-path (cons (expand-file-name "~/.elisp") load-path ))
(column-number-mode 1)
(line-number-mode 1)
(setq display-time-day-and-date 't)
(display-time)
(setq version-control 't)
(indented-text-mode)
(setq fill-column-default 72)
(setq fill-column 72)
(setq fill-prefix "  ")
(auto-fill-mode)
(setq viper-mode t)
(require 'viper)

;; Custom Functions:

(defun insert-output-from-shell-command (commandstr)
"Insert output from a shell command at point"
(interactive "*sInsert From Command:")
(shell-command commandstr 1))

(defvar my-mh-folder-keys-done nil

"Non-`nil' when one-time mh-e settings made.")

(defun my-mh-folder-keys ()
"Hook to add my bindings to mh-Folder Mode."
(if (not my-mh-folder-keys-done) ; only need to bind the keys once
(progn
(define-key mh-folder-mode-map "a" 'visit-mh-aliases)
(define-key mh-folder-mode-map "b" 'mh-redistribute)
(define-key mh-folder-mode-map "T" (mh-put-msg-in-seq nil "t"))
(define-key mh-folder-mode-map "j" 'mh-next-undeleted-msg)
(define-key mh-folder-mode-map "k" 'mh-previous-undeleted-msg)
(setq my-mh-folder-keys-done 1)
)))

(defun my-mh-letter-keys ()
"Hook to add my bindings to mh-Letter Mode."
(progn
(define-key mh-letter-mode-map '[f4] 'mh-yank-cur-msg)
(define-key mh-letter-mode-map '[f5] 'mh-insert-signature)
(define-key mh-letter-mode-map '[f10] 'mh-send-letter)
(setq fill-column 68)
(setq fill-prefix "     ")
(auto-fill-mode)
))

(add-hook 'mh-folder-mode-hook 'my-mh-folder-keys)
(add-hook 'mh-letter-mode-hook 'my-mh-letter-keys)

( defun paragraph-fill-justify-forward ()
"Fill and justify paragraph at point and move forward"
(interactive "*")
(fill-paragraph ())
(forward-paragraph))

( defun save-and-kill ()
"Save and kill current buffer"
(interactive)
(save-buffer)
(kill-buffer (current-buffer)))

;; Some stuff for mh-e:
(setq mh-progs "/usr/bin/mh/")
(setq mh-lib "/usr/lib/mh")

;; Something for Gnus (to save outgoing stuff)
(setq gnus-select-method '(nntp "news"))
;; (setq gnus-secondary-select-methods '(nnmh "~/mh"))
;; (setq message-default-headers "Fcc: ~/mh/gnus.mbox\n")
;; (setq message-default-mail-headers "Fcc: ~/mh/gnus.mbox\n")
;; (setq message-default-news-headers "Fcc: ~/mh/gnus.mbox\n")
;; (setq gnus-author-copy "|/usr/lib/mh/rcvstore +gnus.out")

(defun my-gnus-summary-keys()
"Hook to add my bindings to Gnus Summary Mode."
(progn

(define-key gnus-summary-mode-map  '[f4]

(progn (gnus-summary-tick-article)(gnus-cache-enter-article))

)))

;; Start gnuserv -- so gnuattach, gnudoit, and gnuclient will work:
;; (server-start)
(gnuserv-start)

;; Quick access to my aliases file from my mh-e folder view

( defun visit-mh-aliases ()
"Visit my MH aliases file"
(interactive "")
(switch-to-buffer (find-file-noselect "~/mh/aliases")))

;; For Tools for MIME: MH version
(load-library "tm-mh-e")

;; For Supercite
;;(autoload 'sc-cite-original     "supercite" "Supercite 3.1" t)
;;(autoload 'sc-submit-bug-report "supercite" "Supercite 3.1" t)
;;(add-hook 'mail-citation-hook 'sc-cite-original)

;; For XEmacs color/terminal support:

(when (eq (device-class) 'color)
(set-face-background 'default      "black")     ; frame background
(set-face-foreground 'default      "cyan")      ; normal text
(set-face-background 'zmacs-region "cyan")        ; When selecting w/mouse
(set-face-foreground 'zmacs-region "blue")
(set-face-font       'default      "*courier-bold-r*120-100-100*")
(set-face-background 'highlight    "blue")       ; ie when selecting buffers
(set-face-foreground 'highlight    "green")
(set-face-background 'modeline     "blue")       ; Line at bottom of buffer
(set-face-foreground 'modeline     "white")
(set-face-font       'modeline     "*bold-r-normal*140-100-100*")
(set-face-background 'isearch      "cyan")     ; When highlighting while
(set-face-foreground 'isearch      "black")
(setq x-pointer-foreground-color   "black")      ; Adds to bg color,

(setq x-pointer-background-color   "blue")       ; This is color you really

)

(defun my-quick-buffer-switch ()
"Quick Switch to previous buffer"
(interactive "")
(switch-to-other-buffer 1))

(custom-set-faces)
(setq minibuffer-max-depth nil)

(custom-set-variables
'(user-mail-address "jimd@starshine.org" t)
'(query-user-mail-address nil)
)

;; ... and I'll learn to make real use of abbreviations -- eventually
(abbrev-mode 1 )
(setq abbrev-file-name (expand-file-name "~/.elisp/abbreviations"))
(quietly-read-abbrev-file)

;; My personal key binding for non-vi'ish stuff:
(global-set-key '[f3 ?0] 'delete-window)
(global-set-key '[f3 ?1] 'delete-other-windows)
(global-set-key '[f3 ?2] 'split-window-vertically)
(global-set-key '[f3 ?4] 'split-window-horizontally)
(global-set-key '[f3 ?!] 'insert-output-from-shell-command)
(global-set-key '[f3 ?$] 'ispell-word)
(global-set-key '[f3 ?%] 'ispell-buffer)
(global-set-key '[f3 ?b] 'switch-to-buffer)
(global-set-key '[f3 ?B] 'buffer-menu)
(global-set-key '[f3 ?c] 'shell)
(global-set-key '[f3 ?C] 'calendar)
(global-set-key '[f3 ?d] 'dired)
(global-set-key '[f3 ?D] 'diary)
(global-set-key '[f3 ?f] 'find-file)
(global-set-key '[f3 ?F] 'find-file-at-point)
(global-set-key '[f3 ?m] 'mh-rmail)
(global-set-key '[f3 ?n] 'gnus-no-server)
(global-set-key '[f3 ?k] 'kill-buffer)
(global-set-key '[f3 ?r] 'insert-file)
(global-set-key '[f3 ?o] 'other-window)
(global-set-key '[f3 ?s] 'save-buffer)
(global-set-key '[f3 ?S] 'save-some-buffers)
(global-set-key '[f3 ?w] 'w3-follow-url-at-point)
(global-set-key '[f3 ?x] 'execute-extended-command)
(global-set-key '[f3 f1] 'manual-entry)
(global-set-key '[f3 f7] 'auto-fill-mode)
(global-set-key '[f3 space] 'set-mark-command)
(global-set-key '[f7] (quote paragraph-fill-justify-forward))
(global-set-key '[f8] (quote my-quick-buffer-switch))
(global-set-key '[f9] (quote other-window))
(global-set-key '[f10] (quote save-and-kill))
(global-set-key '[f11] (quote kill-this-buffer))
(global-set-key '[f12] (quote keyboard-quit))
;; end: JimD's .elisp/init.el

     There is undoubtedly some cruft in there that will make real
     emacs/elisp gurus gnash their teeth in disgust. I don't pretend to
     know anything about lisp programming (other than that it has an
     inordinate propensity for parentheses). I mostly use two key,
     unshifted, key sequences that are prefixed with [F3] so that I
     rarely have to use the 'viper' mode's [Ctrl]+z (switch to emacs
     mode) or the viper command mode "\" command (escape next keystroke
     to emacs mode).
     
     There are more things I'll do eventually. That's one of the reasons
     I adopted Linux and xemacs in the first place --- the tools have
     enough depth that I can always learn more about them. They don't
     limit me.
            ____________________________________________________
   
  (?) Floppy/mount Problems: Disk Spins, Lights are on, No one's Home?
  
   From Jonathan on 24 Sep 1998
   
   I have built a custom system just for Linux, but the only problem I
   have is that when I try to mount a floppy, the light just comes on and
   the disk just spins. The motherboard is a new Tyan that is full of PCI
   PnP. Any ideas would be greatly appreciated. TIA 
   
   -=Jonathan=- 
   
     (!) What is the exact mount command that you are attempting? If you
     are relying on an entry in your /etc/fstab to provide the
     filesystem type and options, what does that line look like?
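
      (For reference, a typical /etc/fstab line for a DOS floppy --- the
      device name and mount point here are just common defaults, not
      necessarily yours --- looks something like:

```
/dev/fd0   /mnt/floppy   msdos   noauto,user   0 0
```

      ... where 'noauto' keeps it from being mounted at boot and 'user'
      lets non-root users mount it.)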
     
     How is this diskette formatted? What if you use the mtools commands
     on a DOS formatted floppy?
     
      Do any associated messages appear in your syslog
      (/var/log/messages)? Are you sure that you have floppy support
      compiled into this kernel? (Perhaps you have to load a module?)
     
      If it really is a PnP issue you could look for the PnP tools for
      Linux (these are userspace tools, mostly without kernel patches ---
      though I'm pretty ignorant on the details). I generally recommend
      disabling any BIOS PnP ("Plug and Pray") features when installing
      Linux --- particularly on a dedicated Linux server where you don't
      have to accommodate some other OS.
     
     If all else fails, boot up with a copy of DOS (DR-DOS even) and
     access the floppy that way --- or try a tomsrtbt (Tom Oehser's
     Root/Boot distribution on a floppy -- the best Linux rescue
     diskette I've found). Naturally these have to use the floppy --- so
     it should be pretty obvious if there's some hardware failure or
     incompatibility.
            ____________________________________________________
   
  (?) Conditional Execution Based on Host Availability
  
   From Vladimir Kukuruzovic on the Linux Users Support Team mailing list
   on 20 Sep 1998 
   
    Hi, regarding your Answer Guy message 
   
   Conditional Execution Based on Host Availability
   
   From the L.U.S.T Mailing List on 07 Aug 1998
   
#!/path/to/perl
$ping = `ping -c 1 10.10.10.10`;
exec ("program") if $ping =~ /100\% packet loss/;

   What's wrong with a simple:
   
   ping -c 1 $target && $do_something $target || $complain
   
   ... where you fill $do_something and $complain with commands that you
   actually want to run on success or failure of the 'ping'.
   
   That's what shell "conditional execution operators" (&& and ||) are
   for after all.
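
    (A quick way to see those operators at work, with 'true' and 'false'
    standing in for a successful and a failed 'ping':

```shell
# 'cmd1 && cmd2 || cmd3' runs cmd2 only if cmd1 exits 0, and cmd3 if
# whichever command ran last failed. Here true/false stand in for
# ping's exit status:
true  && echo "host is up" || echo "host is down"
false && echo "host is up" || echo "host is down"
```

    Note the usual caveat: if the "success" command itself fails, the
    '||' branch runs too --- for anything fancier, use a real
    if/then/else.)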
   
    Your program does not work well with the current release of
    net-tools and ipv6 support.
    You should rewrite it this way: 
   
    ping -c 1 -q $target 2> /dev/null | fgrep "1 packets received" \
        > /dev/null && $do_something $target || $complain 
   
     (!) This doesn't look right to me. My example simply sends a ping
     packet and tests the return value. It's possible that this host
     might not be reachable by some ping's (ICMP's) --- that there might
     be some lossage. However, I was just giving the simple case of a
     "well-connected" system on the local LAN.
     
     I should not have to use 'grep' and parse the output from the ping
     command. It should return an error level that reflects the results.
     
      If it doesn't do that in some new release --- I'll hack it back in
      myself. (Ideally it might offer an option to specify a threshold
      lossage percentage --- on which it returns an error. But adding a
      command line option to 'ping' for this might be "gilding the lily"
      --- and adding anything to it (since it is, by nature, an SUID
      program) is an unpleasant prospect.)
     
   (?) kind regards,
   Vladimir 
   
   p.s. the original program would say that everything is ok when $target
   is in DNS, but is not reachable (no route to host) 
   
     By that I presume you're referring to the fragment of perl code.
     Mine did not seem to do this (since I tested it with several
     degenerate cases).
            ____________________________________________________
   
  (?) IP and Sendmail Masquerading over a Cablemodem
  
   From Marty Leisner on 22 Sep 1998 
   
   I read your column in the May LG. (I'm behind on my reading :-) ) 
   
   I recently (last month) got a cable modem and hooked up a masquerading
   firewall... 
   
   On the firewall machine, I have the rule: 
   
ipfwadm -F -p deny
ipfwadm -F -a m -S 192.168.0.0/24 -D 0.0.0.0/0

    I got this off the IP-masquerade HOWTO... 
   
    I'm not sure if it's the same as the rule:
   ipfwadm -F -a accept -m -S 192.168.1.0/24 -D any 
   
      (!) Mine is similar. All 254 of the 192.168.1.* through the
      192.168.254.* class C address blocks are reserved for "private net"
      addressing (use behind proxying firewalls, masquerading/NAT
      (network address translation) routers, and on disconnected LANs).
     
     I've heard conflicting reports about using 192.168.0.* and
     192.168.255.* (the first and the last of this range). So I don't
      recommend it. If you needed a very large network of "private net"
      (RFC 1918 --- aka RFC 1597) addresses you could also use 172.16.*.*
      through 172.31.*.* --- that's sixteen adjacent class B networks, or
      you could use 10.*.*.* --- a full class A.
     
   (?) Also, your sendmail .mc: 
   
--          FEATURE(always_add_domain)dnl
FEATURE(allmasquerade)dnl
FEATURE(always_add_domain)dnl
FEATURE(masquerade_envelope)dnl
MASQUERADE_AS($YOURHOST)dnl

   adds always_add_domain twice... 
   
     (!) That's just a typo.
     
   (?) Is $YOURHOST defined someplace? (I just went through the work of
   configuring sendmail a few weeks ago.) 
   
      (!) I used $YOURHOST as a marker for my readers to fill in with
      their sendmail name. Mine is "starshine.org" --- yours is a
      subdomain off of "rr.com". I expected people to clue into that,
      though I probably should have explicitly pointed it out.
     
   (?) The Feynman problem solving Algorithm 
   
    1. Write down the problem
    2. Think real hard
    3. Write down the answer
       
   --- Murray Gell-Mann in the NY Times 
   
     (!) He forgot to show his work in step two!
            ____________________________________________________
   
  (?) Linux/Samba as a Primary Domain Controller
  
   From Prophet on 22 Sep 1998 
   
    I looked over your answer to another gentleman's question about the
    PDC for Linux. My question is very similar. Can you tell me how to
    configure Samba to be the Primary Domain Controller? I have two
    other clients on my network, an NT server (stand-alone) and a Win95
    client. I want both of these machines to log in to Samba. But this is
    not possible until I get a PDC established. I understand that NT can
    handle the job well, but that isn't any fun. If you could help I
    would appreciate it. 
   
   =Prophet= 
   
      (!) I think you should have read my answer more carefully. I said
      that the Samba team is working on supporting NT domain controller
      services through Samba --- and I think I said that it would
      probably be available before NT 5.x was released.
     
     However, I hope I didn't imply that this is already available as
     production quality code. Last time I talked to Jeremy Allison (one
     of the core members of the Samba team) he said that they had some
     beta level code out there. I just noticed a note on Freshmeat
     (http://www.freshmeat.net) that Samba 2.0 alpha version #6 has just
     shipped. So that would be a good place to start looking.
     
      The Samba home pages are at: http://samba.anu.edu.au/samba. It's a
     good idea to remember that Samba is not a Linux specific project.
     Although many of the developers and users are running Linux, many
     others are running various BSD flavors and other forms of Unix.
     
     Your question is probably a pretty common one. There is a Samba NT
     Domain FAQ at:
     http://samba.gorski.net/samba/ntdom_faq/samba_ntdom_faq.html
     
     ... and yours is the first question listed.
     
     As with any Open Source (TM) project, if this isn't moving fast
     enough to meet your needs, consider contributing some time,
     programming skill or other real support to the effort.
            ____________________________________________________
   
  (?) Partition your HD before you try to use it.
  
   From Adam Ray on 23 Sep 1998 
   
     (!) What's this about non-partitioned? You have to partition the
     drive before you can use it as your root.
     
   (?) Yep! 
   
    I have an Adaptec 1505 SCSI card (no BIOS) and a Seagate 1-gig SCSI
    HDD. I want to install Linux to boot from a floppy, and then use the
    SCSI drive as the root. But when I put in the rescue disk and at the
    boot: prompt type "rescue aha152x=0x340,12,7,1" it finds the card
    then finds the drive, but it comes up with an error that the kernel
    can't load at something like "10:" --- I'm not sure if that is the
    exact number, but I'm not at that machine right now. I was wondering
    if you could give me, or know where there is, a blow-by-blow
    installation tutorial for non-partitioned SCSI drives. 
   
     (!) If you read the Linux Installation and Getting Started (LIGS)
     Guide from the LDP --- the Linux Documentation Project --- you'll
     find a fairly extensive discussion of 'fdisk' and 'Lilo'. LDP is at
     http://sunsite.unc.edu/LDP and many mirror sites.
     
     There are also man pages on 'fdisk' and Lilo --- and there is a
     pretty good Lilo guide (usually included as a .dvi or .ps
     PostScript file to provide the diagrams and illustrations).
     
      I realize that you won't be using Lilo in the usual way to load
      this copy of Linux (since a boot sector installed on your SCSI hard
      drive will never be reached by your BIOS's boot up sequence).
      However, reading the docs about the way it's "usually" done can
      help you understand the exception cases in any event.
     
      Another problem I see in this case is that you're trying to
      "rescue" a "new installation." That doesn't work. You use a
      "rescue" diskette to fix a damaged or misconfigured existing
      installation. To install a new system use an "installation"
      diskette. Most of the friendly installation programs out there
      these days (Red Hat, S.u.S.E. etc.) will not handle your situation
      particularly well. They should install just fine --- but they may
      not offer the option to "boot from diskette."
     
     So, use their installation to get to the point where it wants to
     run Lilo --- and let it do that even (no harm in it, even though
     you don't have a BIOS that will call on it). Then use the rescue
     diskette to boot into the running system and read the BootDisk
     HOWTO for advice on creating a custom boot diskette.
     
     You could also use Tom's Root/Boot (tomsrtbt at
     http://www.toms.net/rb) as the basis for your custom boot disk. It
     is the easiest single diskette distribution to customize (of the
     ones that I've tried).
     
   (?) please E-mail me 
   
   Thanks,
   Adam 
            ____________________________________________________
   
  (?) Shuffling Lines in a File
  
   From David Stanaway on the Linux Programmers Support Team mailing list
   on 20 Sep 1998 
   
   Now I'm trying to shuffle the order of the lines in a text file
   without reading in the whole file... Does anyone have any advice,
   code, etc on this? If I can read in the whole file, this is simple,
   but I might want to shuffle a file several megs long.
   
   What do you mean by shuffle?
   
      (!) I think he means something like: randomly or arbitrarily
      reorder the lines of the file without reading the whole thing into
      RAM/core.
     
     I think the approach I'd take is to lock the file from access by
     whatever programs and/or processes are intended to read the data
     out of it.
     
      Then I'd "index" the file --- search through it finding all of the
      line boundary offsets and their lengths. I'd then use a standard
      shuffling technique on that index file. The problem with
      "shuffling" a normal text file on line boundaries is the variable
      record lengths. So we create a table of offsets and lengths to
      those --- and all of the offset/length pairs are of a fixed size.
      
      So I could use the index file and "shuffle" it with the following
      pseudo-code:
     
    open index file

    while read index file entry (readbuf)
        pick a random place to put it
        load the "place to put it" entry (writebuf)
        swap these entries in read and write buf
        write both buffers
          
      If the intent is to shuffle the files by some other criteria
      (arbitrary vs. random) then you'd modify the above algorithm
      accordingly. If the criteria for resequencing has to do with the
      data in the files (i.e. you're "sorting" the file) you'd have a bit
      more work ahead of you.
      
      ... actually I'd optimize this a bit by reading x entries into a
      buffer, looping through that, and maintaining a few write bufs
      into random locations in the file. For example I might load 100
      entries in the read buffer and up to ten unique randomly selected
      write buffers. For each of the 100 read buffer entries I'd randomly
      select among the open write buffers (1 to 10) and randomly select a
      place in that buffer to put it. At the end of the for loop I'd
      write everything back out, read the next read buf, select more
      write bufs, and so on until the end of the file.
     
     Every entry in the index file will have been exchanged with some
     random entry at least once --- and the average will be two. There
     is a small chance that a given entry would be swapped out of and
     back into the same location (which is usually a good feature of a
     shuffling algorithm).
     
      Then I'd open the original text file and the shuffled index file
      and I'd walk through the shuffled file sequentially reading
      offset/length pairs and using them to seek into the text file and
      copy to a new file. After each seek I'd do one sanity check ---
      there should be a newline there; and as I was copying I'd do
      another --- there should be no newlines between my offset and the
      end of my length. I'd abend with an error message if either sanity
      check failed, or if any seek failed (the original file was
      shortened while I was shuffling).
     
     Finally I'd mv the new file back into place.
     
      This algorithm assumes that you have files with variable length
      records delimited by newlines. It also assumes that you are not
      disk space constrained (that you have at least enough room to make
      one full copy of the file to be shuffled, plus enough for an index
      file). Oddly enough the index file could, in some degenerate
      circumstances, be several times the size of the original file.
      (That happens if all of the lines in the old file were only zero or
      one characters long and your offsets and lengths are 32 bits each.)
     Note that I chose to use a file for the index rather than RAM. If
     I'm guaranteed that the file will have a "reasonable" number of
     lines I can build that in memory --- thus simplifying the code
     somewhat. I chose the method that I describe so that you could as
     easily shuffle multi-gigabyte files as multi-megabyte.
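
      A rough shell sketch of that index-and-seek approach (GNU 'shuf'
      stands in here for the buffered swapping described above, and all
      the file names are made up):

```shell
# Build an "offset length" index of each line, shuffle the *index*
# (fixed-size records), then seek back into the original file to emit
# the lines in their new order.
printf 'alpha\nbravo\ncharlie\ndelta\n' > demo.txt   # sample input

# 1. Index: byte offset and length of every line.
awk '{ printf "%d %d\n", off, length($0); off += length($0) + 1 }' \
    off=0 demo.txt > demo.idx

# 2. Shuffle the small index instead of the large data file.
shuf demo.idx > demo.idx.rnd

# 3. Walk the shuffled index, seeking into demo.txt for each record.
while read off len; do
    dd if=demo.txt bs=1 skip="$off" count="$len" 2>/dev/null
    echo
done < demo.idx.rnd > demo.shuffled
```

      (bs=1 makes dd painfully slow on big files --- a real
      implementation would do the seeks in C or perl --- but the
      structure matches the algorithm: only the fixed-size index is ever
      shuffled.)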
     
      The whole program could probably run in less than 100K and work
      on any size file that was supported by your OS.
      
      You could also look at the sources for the GNU 'sort' utility. It
      handles arbitrarily large inputs (using sequences of temp files
      which are then merged together).
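
      (That run-and-merge idea can be sketched in a few lines of shell
      --- 'split' and 'sort -m' stand in for sort's internal machinery,
      and the file names are invented:

```shell
# External merge sort: break the input into runs that fit in memory,
# sort each run, then merge the sorted runs --- roughly what GNU sort
# does internally for oversized inputs.
printf '3\n1\n4\n1\n5\n9\n2\n6\n' > big.txt   # stand-in for a huge file
split -l 4 big.txt run.                       # 1. cut into fixed-size runs
for f in run.*; do sort -o "$f" "$f"; done    # 2. sort each run in memory
sort -m run.* > big.sorted                    # 3. merge the sorted runs
```

      ... the merge step only ever holds one line per run in memory.)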
     
   (?) If you open a file for reading, the only space it takes up is the
   read buffer, so if you read a line at a time, the memory usage depends
   on how you are shuffling. 
   
    If you wanted to reverse the file, you could just be writing the
    lines you read to another file. 
   
   [deletia] 
   
   Then you may like to read the source file from the tail first. I don't
   know how to do this in C, or C++, but it is possible in Java. 
   
     (!) There is a program called tac ("cat" backwards) which does
     exactly this. I'm sure it's written in C and the sources can be
     found at any good GNU or BSD software archive.
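
      For example (assuming the GNU textutils version of tac is
      installed):

```shell
# tac reverses the order of lines, last line first:
printf 'one\ntwo\nthree\n' | tac   # prints: three, two, one
```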
     
   (?) You really need to say more about what you mean by <Shuffle>
   David Stanaway 
   
     (!) I think the term is sufficiently unambiguous.
     
     Shuffle: to resequence. to place a group of objects into some
     arbitrary or random order.
     
     The problem at hand is a classic CS homework assignment. It has
     quite a bit to do with the variable length nature of the objects to
     be sorted. We can't do this with "in place" editing (arbitrary
      seeks and writes into the original file) because the record we're
     trying to move might overwrite two or more record fragments at its
     destination.
     
      When you are editing a file (the whole thing being in memory)
      the editor's buffer handling deals with this issue --- look at
      the sources to 'vi' or some other smaller, simpler editor and
      find out how they "delete a line" in terms of their internal
      data structures. These techniques don't work well for files on
      disk, since you might end up re-writing everything from the
      current offset to the end of the file for each replacement.
     
      If the lines are of a fixed length it is much easier: we can
      skip the indexing step and we can, if we wish, shuffle the file
      "in place" --- without the copying. Naturally we'll still want to lock
     the file (or move it to someplace where other processes and
     programs won't be giving us concurrency fits).
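      For the fixed-length case the in-place version is short enough
      to show, too. Again, this is an untested sketch of mine rather
      than a finished tool; `reclen` includes the record's trailing
      newline:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* In-place Fisher-Yates shuffle of a file of fixed-length records.
   The number of records is filesize / reclen; each swap reads two
   records and writes them back in each other's slots.  Returns 0
   on success, -1 on error. */
int shuffle_fixed(const char *path, long reclen)
{
    FILE *fp;
    long nrecs, i, j;
    char *a, *b;

    if ((fp = fopen(path, "r+")) == NULL)
        return -1;
    fseek(fp, 0, SEEK_END);
    nrecs = ftell(fp) / reclen;

    if ((a = malloc(reclen)) == NULL || (b = malloc(reclen)) == NULL) {
        free(a); fclose(fp); return -1;
    }
    srand((unsigned)time(NULL));
    for (i = nrecs - 1; i > 0; i--) {
        j = rand() % (i + 1);
        /* read both records ... */
        fseek(fp, i * reclen, SEEK_SET);
        fread(a, 1, reclen, fp);
        fseek(fp, j * reclen, SEEK_SET);
        fread(b, 1, reclen, fp);
        /* ... and write them back swapped */
        fseek(fp, i * reclen, SEEK_SET);
        fwrite(b, 1, reclen, fp);
        fseek(fp, j * reclen, SEEK_SET);
        fwrite(a, 1, reclen, fp);
    }
    free(a);
    free(b);
    return fclose(fp);
}
```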
            ____________________________________________________
   
  (?) Dear answer guy..
  
   From Josh Assing on 15 Sep 1998 
   
   Thank you very much!
   Cheers
   -josh 
   
   I am a woeful windoze database programmer that must interface with the
   almighty unix environment... I am in search of source code (c is best)
   for uudecode/uuencode.
   
     (!) Any decent Linux CD will come with source code (mostly in C) to
     all of the GNU software. You'll also find it on any good Linux FTP
     repository --- such as ftp://sunsite.unc.edu and
     ftp://tsx-11.mit.edu.
     
     Another good place to look for these sorts of things is at the
     master repository of GNU software:
     
     ftp://prep.ai.mit.edu
     
     ... or at its principal mirror: ftp://ftp.gnu.org
     
     ... where it should be part of the "sharutils" package.
     
     Also I think you should be able to find the sources at the FreeBSD,
     NetBSD, and OpenBSD sites:
     
     * http://www.freebsd.org
      * http://www.netbsd.org
      * http://www.openbsd.org
       
     ... respectively.
     
     In general the best places to find any Linux software (most of it
     is available in source form) are:
     
     http://www.freshmeat.net
     
     and:
     
     http://lfw.linuxhq.com
     
     Freshmeat is nice for keeping up on new and recent package
     releases. It is updated daily and there are usually about a dozen
     new packages or versions available every day. Today is light ---
      there are only nine items --- there were thirty-one on the two
      previous days.
     
      It gives a brief (one paragraph) description of each package and
      usually three links, to "Download" it or view its "HomePage" or
      "Appindex Record."
     
     LFW (Linux FTP Watcher) is a forms based search engine that indexes
     the top twenty or so Linux FTP sites.
     
     The problem with requests to help find the source code is that many
     of the most basic packages (the ones that have been part of most
     Unix implementations forever) are bundled together in a few "base"
     packages (like sharutils for uuencode/uudecode).
     
      Although I don't know where most of them are, I think the
      sources for commands like 'cp' and 'ls' are in fileutils, and
      for commands like 'cut' and 'tail' are in textutils.
     
     So, unfortunately, it can be a bit difficult to find the source to
     a given package. Yggdrasil and some traditional Unix flavors used
     to offer a "whence" command to point to the sources for any
     command. However, the current crop of distributions doesn't seem to
     offer this handy feature.
     
     On RPM based distributions you could use a variation of the RPM
     command to find out which package included a given file like so:
     
     rpm -qf /usr/bin/uuencode
     
     ... which reports sharutils-4.2-5 on my S.u.S.E. 5.3 system.
     Different distributions package these differently. However, given
     that you could then look on your CD's or on the FTP sites for a
     "sharutils-4.2-5.SRPM.rpm" or a "sharutils-4.2-5.spm" (these being
     different naming conventions for representing "source" RPM's).
     
     You can read my back issues or look to http://www.rpm.org to learn
     more about the RPM package management system --- and a few searches
      should net you considerable comparison and debate about its merits
     and faults relative to the "tarball" (Slackware pkgadd) and Debian
     packaging systems and formats.
     
   (?) I was directed to www.ssc.com; and then to you... Hopefully; you
   can be of assistance..
   Thanks :)
   Cheers
   -josh 
            ____________________________________________________
   
   (?) Pseudo tty Becomes Unusable
  
   From Scott R. Every on 21 Sep 1998
   
   i have a system which has been running for a while(actually a number
   of systems) after a bit the ttyp0 port is no longer available when
   telnetting in. it doesn't list anywhere as being used, but it doesn't
   work! 
   
   can you offer any suggestions? 
   
   thanx
   s 
   
      (!) Try the 'lsof' command. That should find out which process
      is using it.
     
      The /dev/ttyp* devices are for "pseudo" tty's --- these are
      used by rlogind, telnetd, xterms, screen and many other
      programs. There are usually many of these pseudo tty's on a
      system.
     
      Normally a daemon that uses a pseudo tty searches through the
      list and uses the first one that it can open. There is another
      approach, used by some other forms of Unix --- and supported in
      recent kernels --- whereby the daemon makes a request of a sort
      of "dispatcher" device, which then provides it with the number
      of the next available pty/ttyp device. This is referred to as
      "Unix98 PTY support" in the Linux kernel --- and I've heard it
      referred to as "ptmx" (pseudo-tty multiplexing, or something
      like that). In the case of
     the Linux implementation the pty's can be dynamically generated
     under the "pts" virtual filesystem (which is a bit like the /proc
     filesystem in that it doesn't exist on a "disk" anywhere --- it
     simply provides a filesystem abstraction of the system's in memory
     data structures). Linux 2.2 will also probably support a "/devfs"
     --- another virtual filesystem which makes all of the entries under
     /dev into dynamic entries.
     
     Of course, none of that applies to your situation. That's just the
     "vaporware report" on the future of the Linux kernel.
     
     If there really is no process that still owns the ttyp0 in your
     case then it might be a bug in your kernel. I'd check the
     permissions of the device node to see if they are changing (or to
     see if there is something that's just blowing the device node
     away), then I'd look through the "Change Logs" for the recent
     2.0.3x kernels. It might be that you are bumping into one of the
     bugs that Alan Cox and crew have been fixing. If you aren't running
     a 2.0.35 or 2.0.36 kernel --- consider trying it to see if that
     solves the problem.
     
     To be honest I haven't seen a good description of the whole
     pty*/ttyp* mess or a decent explanation of what problems the Unix
     '98 ptmx design is supposed to solve. I've heard that pty's and
     ttyp's are paired off in "master/slave" pairs that have something
     to do with providing different device nodes for control (ioctl()?)
     and communications over the channel. If any of our readership knows
     of a good treatise on the topic, please pass me a pointer or mail
     me a copy.
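      That said, the "dispatcher" half is easy enough to poke at from
      C on a system with Unix98 PTY support. This is my own untested
      sketch using the glibc calls (posix_openpt() and friends) that
      wrap /dev/ptmx:

```c
#define _XOPEN_SOURCE 600  /* for posix_openpt() and friends */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

/* Ask the /dev/ptmx "dispatcher" for the next free pty pair.
   Returns the master's fd and copies the paired slave's name
   (/dev/pts/N) into `name`; returns -1 on any error. */
int get_pty(char *name, size_t len)
{
    int master;
    char *slave;

    if ((master = posix_openpt(O_RDWR | O_NOCTTY)) == -1)
        return -1;
    /* fix up ownership/mode of the slave side, then unlock it */
    if (grantpt(master) == -1 || unlockpt(master) == -1) {
        close(master);
        return -1;
    }
    /* ask which slave device this master is paired with */
    if ((slave = ptsname(master)) == NULL) {
        close(master);
        return -1;
    }
    snprintf(name, len, "%s", slave);
    return master;
}
```

      The master/slave pairing mentioned above is exactly what this
      exposes: the daemon keeps the master fd and hands the slave
      device to the user's shell.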
            ____________________________________________________
   
  (?) Will the "Real" freshmeat Please Get Bookmarked?
  
   From Richard C on 14 Sep 1998 
   
   You referenced http://freshmeat.org in this article, when I assume you
   meant http://freshmeat.net... Freshmeat.org does point to
   freshmeat.net, but you can't rely on a newbie to find it, can you? 
   
   -) Keep up the good work
   Cheers
   Richard Cohen 
   
     (!) I use these two addresses interchangeably. As you say the .org
     URL requires an extra click to get to the site --- but that's not
     much of a consideration for me and sometimes I want to visit
     "RootShell.org" (also listed at the freshmeat.org site; but not
     linked from freshmeat.net).
            ____________________________________________________
   
  (?) "Virtual Hosting" inetd based services using TCP Wrappers
  
   From Nick Moffitt on 23 Sep 1998 
   
   Hullo thar!
   
   You mentioned that you might mail me some example conf files to show
   me how you did all those nifty things we talked about on Saturday. I'm
   actually working on setting up a chrooted system for public use here
   at Penguin, so any examples would be keen (and no, I haven't searched
   through the answer guy archives yet).
   
     (!) [Question stems from a discussion over beer and pizza at one of
     the local user's groups events in my area. It relates to using TCP
     Wrappers to launch different services or different variations of a
     given service depending on the destination address of the incoming
     request. Normally TCP Wrappers, all those funny looking
     "/usr/bin/tcpd" references in your /etc/inetd.conf file, is used to
     limit which hosts can connect to a service by matching against the
     source address]
     
     Here's a couple of trivial examples (I don't have a copy of
     'chrootuid' handy on this box, but you can find it at
     cs.purdue.edu's "COAST" security tools archive).
     
# hosts.allow   This file describes the names of the hosts which are
#               allowed to use the local INET services, as decided
#               by the '/usr/sbin/tcpd' server.
# $Revision: 1.2 $ by $Author: root $ on $Date: 1998/02/08 09:35:55 $
#
in.ftpd: 127.0.0.1: ALLOW
in.ftpd@192.168.1.127: jimd@192.168.1.2: ALLOW
in.ftpd: ALL: DENY
in.telnetd@192.168.1.127: ALL: twist /bin/echo "Not Available\: Go Away!"
in.ftpd: 192.168.1.: ALLOW
ALL: 127.0.0.1
ALL: 192.168.1.

      These are order dependent. The first rule that matches will be
      the one that tcpd uses --- so the ALL: rules at the bottom are
      significant. If I put them first they'd over-ride the more
      specific ones --- whereas here, they don't.
     
     In this case my "normal" IP address on eth0 is 192.168.1.3
     (canopus.starshine.org). For playing with tcpd I add an eth0:1
     alias (ifconfig eth0:1 192.168.1.127). That would work as easily if
     it was a second interface --- ethernet, PPP or whatever.
     
     Now, if I telnet localhost or telnet to canopus, everything works
     fine. But if I telnet to the ...127 address it tells me to go away.
     The hosts_options and the hosts_access(5) man pages list a number
     of replacement operators like %a for the source IP address of the
     request and %d for the "daemon" name (argv[0] of the process).
     These parameters can be used in the shell commands.
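      For instance (an untested sketch of mine; the logger path is an
      assumption), an /etc/hosts.deny entry that records every
      refused connection could use those expansions:

```
# log each refusal; %d expands to the daemon name and
# %a to the client's source address
ALL: ALL: spawn (/usr/bin/logger tcpd refused %d from %a) &
```

      Putting the spawn in hosts.deny, where a match already means
      "deny," keeps the rule simple.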
     
      Note that the "twist" option is completely different from the
      "spawn" option. "spawn" seems to imply "ALLOW" and spawns a
      process that is run in addition to the service. This process is
      spawned with its standard file descriptors all set to /dev/null
      --- so it doesn't interact with the user at all.
     
      The twist option runs an alternative to the requested service.
      Thus, if you request my web server I might "twist" that into a
      simple 'echo' or 'cat' command that will spit out an HTTP
      redirect, like so:
     
     www@192.168.64.127: ALL: twist /bin/cat /root/web.redirect
     
      I don't know of a way to call for both a twist and a spawn --
     but you can write a script (or better, a small C wrapper) to run
     the desired "spawn" commands in the background (with outputs
     directed to /dev/null, of course).
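      The core of such a wrapper is just a detached fork. Here's an
      untested sketch of mine (not from any released tool): the
      helper backgrounds the "spawn" half with all three descriptors
      on /dev/null; a real wrapper's main() would call it with your
      logging command and then execl() the real daemon --- the
      "twist" half --- with both paths being placeholders for
      whatever your inetd.conf really points at.

```c
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* Run `cmd` in the background with stdin/stdout/stderr pointed at
   /dev/null --- the same arrangement tcpd gives "spawn" commands.
   Returns 0 once the child is launched, -1 if fork() fails. */
int run_detached(const char *cmd)
{
    pid_t pid = fork();

    if (pid == -1)
        return -1;
    if (pid == 0) {
        /* child: detach the stdio descriptors, then hand off to sh */
        int nul = open("/dev/null", O_RDWR);
        if (nul != -1) {
            dup2(nul, 0);
            dup2(nul, 1);
            dup2(nul, 2);
            if (nul > 2)
                close(nul);
        }
        execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
        _exit(127); /* exec failed */
    }
    return 0;
}
```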
     
     Naturally, of course, you'll want to follow proper coding practices
     for "hostile" environments when you're writing something that will
     be "exposed" to the Internet.
     
      Matt Bishop, at UC Davis, has some excellent papers on this
      topic, and presents his own, more robust, implementations of
      the system() and popen() library calls --- which are called
      msystem() and mpopen() in his library.
     
     Matt's site is at: ftp://nob.cs.ucdavis.edu/pub/sec-tools (I think
     there's a web site there, too).
     _________________________________________________________________
   
                     Copyright  1998, James T. Dennis
              Published in Linux Gazette Issue 33 October 1998
     _________________________________________________________________
   
   [ Table Of Contents ] [ Front Page ] [ Previous Section ] [ Next
   Section ]
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                   CHAOS Part 2: Readying System Software
                                      
                              By Alex Vrenios
     _________________________________________________________________
   
    Introduction
    
    My first article, "CHAOS: CHeap Array of Obsolete Systems" (see
    Linux Gazette, Issue 30, July 1998), describes a somewhat bizarre
    set of circumstances that led to my building a network of aging
    PCs running Red Hat Linux. A number of readers contacted me after
    reading it, asking how it was going and whether there would be a
    follow-up article --- this is it!
   
   A few PCs, an Operating System, and networking hardware form the
   largest part of the infrastructure necessary for the kind of software
   systems that I want to design and work with, but systems cannot run on
   basics alone. A little administration, a few shell scripts, and a
   couple of utility programs will bring it all together into what I want
   it to be: a distributed system.
   
   Distributed algorithms often consist of several identical copies of a
   single program, each running on a different computer in the network. I
   can write and debug a single copy on my big '486 machine, named
   "omission," but that's just the first step. Debugging the final
   product, running on seven machines simultaneously, requires me to
   develop a way to remotely start a process on each machine, to see how
   well that process is running, and to kill them all, if necessary,
   centralizing their trace file data so I can figure out what went
   wrong.
   
   This article describes what I added to my system to make this all
   happen.
     _________________________________________________________________
   
    System Administration
    
   I have worked on many Unix networks in the past. I thought nothing of
   using the remote shell command, "rsh," to switch to some other machine
   in the network, to get access to its local data. I thought nothing of
   it, that is, until I wanted to work like that on my own network.
   
    From omission, there are three ways I can think of to switch over to
   one of the '386 machines. I can use the telnet command, which puts up
   a login prompt, asking me for a userid and password. I can "rlogin" to
   another machine, which asks me for a userid, but not a password, if
   the system files are properly set up. Finally, there is "rsh" which
   lets me go about my business without so much as a userid if all the
   system files are just so; getting them just so, I find, is a black
   art.
   
   I knew that my userid's home directory, /home/alex, needed a ".rhosts"
   file with my userid: a single line with "alex" in it. I knew too, that
   the /etc/hosts.equiv file played a part, but I wasn't sure exactly
   how, so I started reading, and asking a lot of questions. Most
   references to these system files, it seemed, were more interested in
   telling me how to keep others out instead of welcoming them in!
   
   I am not above a brute force approach to solving problems. I'll bet
   that a smart sysadmin reading this article might be appalled by my
   methods, but they worked for me and sometimes that's enough of a
   reward.
   
   My domain name, as you may recall from the first article, is
   "chaos.org" and my seven '386s are named after the seven deadly sins.
   User alex has a home directory on omission, which is nfs mounted on
    each of the seven other machines. My /home/alex/.rhosts and each
    /etc/hosts.equiv file contain exactly the same eight lines, as
    follows:


omission.chaos.org alex
greed.chaos.org alex
lust.chaos.org alex
anger.chaos.org alex
pride.chaos.org alex
gluttony.chaos.org alex
envy.chaos.org alex
sloth.chaos.org alex

   I am not sure where I got my initial ideas about how this all worked,
   but what's listed above works on my systems and again, that's enough
   for me for now.
   
   I wanted to have at least some reasonable time-of-day clock
   synchronization, so I added a "clock reset" command to the boot
   process. The following lines were added to each remote machine's
   rc.local file:


# reset date and time from server
date `rsh omission "date +%m%d%H%M"`

   I boot omission first and wait for it to come up before starting
   others because it contains the /home directory that each of the other
    machines must mount. When each of the other machines boots, it
    sets its time-of-day to that of omission, accurate to the minute.
     _________________________________________________________________
   
    System File Distribution
    
    There is only one copy of the /home/alex/.rhosts file, but every system
   has its own copy of /etc/hosts.equiv. Maintaining a set of eight
   identical copies of anything is not a pleasant task, especially when
   you are making subtle changes, trying to get them all to work in your
   favor.
   
   One way to handle this is to copy the file to a diskette and load it
   onto every machine, but that's too much of a pain. The sophisticates
   might have a separate partition for such files, local to their main
   server, and remotely mounted everywhere else. Since I am both the
   system administrator and the user community, I overlapped things a
   bit.
   
   I created a /home/alex/root subdirectory, owned by root, and copied
   each of these volatile system files into it. That way I could make
   changes in only one file and distribute it more easily than from a
    floppy. I copied /etc/hosts to that area, along with additions to
    large system files like rc.local, and all the shell scripts that
    the root user on each machine might use. I'll discuss these next.
     _________________________________________________________________
   
    System Shell Scripts and Utility Programs:
    
   I might want to reset the time-of-day clock manually, so I used the
   same clock set command (above) in a shell script named "settime":


#!/bin/csh -f
#
#   settime - resets date and time from server
#
date `rsh omission "date +%m%d%H%M"`

   I might be monitoring some long running tests and, being the nervous
   type, I might want to watch the overall system performance. Here is my
   "ruptime" (which stands for remote uptime) script:


#!/bin/csh -f
#
#   ruptime - remote uptime displays system performance
#
cat /etc/hosts \
 | grep -v localhost \
 | awk '{ print $3": ";system("rsh "$3" uptime") }'

   This displays the loading on each of my machines and I use this as a
   high level indication of overall system performance. The word loading,
   by the way, means the number of processes on the operating system's
   ready queue, waiting for the cpu. (The cpu is usually busy running the
   active task. The three numbers uptime displays are the 1, 5, and 15
   minute loading averages - see the uptime man page for more
    information.) If I see what might be a problem (all zeros, for
    example), I can follow up with other commands that give me more
    specific information.
   
   The "ps" command presents process status for every process in the
   system. The addition of a "grep" for my userid, alex, will limit the
   display to only the ones I happen to be running, but it will include
   the grep command itself. Additional greps with a "-v" option can
   reduce the content of the display to just those processes that I am
   interested in monitoring:


#!/bin/csh -f
#
#   rps - remote process status
#
ps -aux | grep alex \
 | grep -v rps \
 | grep -v aux \
 | sed -e "s/alex\ \ \ \ \ /`hostname -s`/" \
 | grep -v sed \
 | grep -v hostname \
 | grep -v grep

   The "sed" command substitutes the remote host name for my userid. I
   use this script along with the rsh command to display the status of
   remote processes:


omission:/home/alex> rsh pride rps
pride  218  0.4  7.0  1156   820   1 S   13:34   0:02 /bin/login -- alex
pride  240  0.7  6.6  1296   776   1 S   13:37   0:01 -csh
pride  309  0.3  1.8   856   212   1 S   13:41   0:00 ser
pride  341  0.0  4.4  1188   524  ?  R   13:41   0:00 /bin/sh /home/alex/bin

   Careful readers might notice that the ruptime script displays uptime
   for all machines on the network, while rps targets only one machine.
   My general version of rps works through a pair of programs named
   "rstart" and "psm," controlled by a script named rpsm:


#!/bin/csh -f
#
#   rpsm - remote process status for my userid
#
rstart psm

   The program rstart.c accepts the name of an executable in the user's
   path:


#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <chaos.h> /* a list of all the remote host names in chaos.org */
main(argc, argv)
char *argv[];
int argc;
/*
**   rstart.c - start a process named in argv[1] on all remote systems
*/
{
   int i, j, pids[NUM];
   char command[64];
   /*
   **   insist on at least two command line arguments
   */
   if(argc < 2) {
      printf("\n\tUsage: %s <process> [<parameters>]\n\n", argv[0]);
      exit(-1);
   }
   close(0); /* avoid stdin problems if we run in the background */
   /*
   **   initialize the remote process name
   */
   strcpy(command, argv[1]);
   if(command[0] != '/') /* prepend path if nec */
      sprintf(command, "%s%s", Bin, argv[1]);
   /*
   **   append any other command line parameters specified
   */
   for(i=2; i<argc; i++) {
      strcat(command, " "); /* append a blank */
      strcat(command, argv[i]); /* append a parameter */
   }
   /*
   **   start remote tasks
   */
   for(i=0; i<NUM; i++) {
      if(i) /* pause between starts */
         sleep(1);
      if((pids[i] = fork()) == 0) {
         if(execl("/usr/bin/rsh", "rsh", Hosts[i], command, NULL) == -1) {
            perror("execl()");
            exit(-1);
         }
      }
   }
   /*
   **   wait for all processes to complete
   */
   for(i=0; i<NUM; i++)
      waitpid(pids[i], NULL, 0);
   return(0);
}

   The rpsm script (above) runs the rstart program, which runs psm:
#include <string.h>
#include <stdio.h>
#include <unistd.h> /* gethostname(), getpid() */
main()
/*
**   psm.c - lists process status for my userid
*/
{
   FILE *fp;
   int len, pid1, pid2;
   char host[32], *p;
   char line[128];
   /* request name of local host */
   gethostname(host, sizeof(host));
   if((p = strchr(host, '.')) != NULL)
      *p = '\0'; /* cut domain name */
   len = strlen(host);
   /* our proc id */
   pid1 = getpid();
   /* request listing of all process' status */
   fp = popen("ps -aux", "r");
   while(fgets(line, sizeof(line), fp) != NULL) {
      if(strstr(line, "alex ") == NULL)
         continue; /* not our userid */
      if(strstr(line, "psm") != NULL)
         continue; /* skip ourself */
      sscanf(line, "%*s %d", &pid2);
      if(pid2 >= pid1)
         continue; /* skip higher pids */
      /* replace userid with host name */
      strncpy(line, host, len);
      printf("%s", line);
   }
   pclose(fp);
   return(0);
}

   Here is a sample run:

> rpsm
pride   218  0.0  7.0  1156   820   1 S   13:34   0:02 /bin/login -- alex
pride   240  0.0  6.6  1296   776   1 S   13:37   0:01 -csh
pride   309  0.0  1.8   856   212   1 S   13:41   0:00 ser
pride   487 38.3  5.4  1240   636  ?  S   14:17   0:01 csh -c /home/alex/bin
greed   222 35.8  7.3  1240   636  ?  S   14:17   0:01 csh -c /home/alex/bin
   .
   .
   .
sloth   201 36.5  7.1  1240   636  ?  S   14:17   0:01 csh -c /home/alex/bin

   The rstart program concept can be expanded to gather a good deal more
   than process status. I created script-program pairs that dump trace
   and log files from a particular machine. I can also kill a remote
   process by name on all my remote machines by running rstart with k.c:


#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
main(argc, argv)
int argc;
char *argv[];
/*
**   k.c - kills the named user process
*/
{
   FILE *fp;
   int pid1, pid2;
   char line[128];
   char shell[32];
   char host[32];
   char proc[16];
   if(argc < 2 || argc > 3) {
      printf("\tUsage: k <process_name> [noconf]\n\n");
      exit(-1);
   }
   /* get process name for strstr line compares */
   sprintf(proc, "%s ", argv[1]); /* add blank */
   sprintf(shell, "-c k %s", proc); /* our mom */
   pid1 = getpid();
   /* get host for print message */
   gethostname(host, sizeof(host));
   /* request listing of all process' status */
   fp = popen("ps -aux", "r");
   while(fgets(line, sizeof(line), fp) != NULL) {
      if(strstr(line, "alex ") == NULL)
         continue; /* not our userid */
      if(strstr(line, shell) != NULL)
         continue; /* skip shell */
      if(strstr(line, proc) == NULL)
         continue; /* must match */
      sscanf(line, "%*s %d", &pid2);
      if(pid2 >= pid1)
         continue; /* skip higher pids */
      /* kill the process */
      sprintf(line, "kill -9 %d", pid2);
      system(line);
      if(argc != 3)
         printf("%s: %s\n", host, line);
   }
   pclose(fp);
   return(0);
}

   All of the above programs and scripts were pasted into this article
   from tested source code, but I removed blank lines and made other
   cosmetic changes to make it more readable and to manage its size.
   Please accept my apologies in advance for any difficulties you may
   experience. I cannot assume any liability for your use of the above,
   so you must do so at your own risk.
     _________________________________________________________________
   
    Conclusions
    
   I feel like I am ready now to start developing software according to
   my original plans. I hope some of my solutions will help you too,
   should you try this yourself.
   
   My next step is to develop a central "manager" process, running on
   omission, that will display real-time status and behavior of the
   system of distributed processes running on all the other machines. I
   want to be able to "drive" the system by sending requests to one of
   the processes on a randomly chosen machine, and then to "watch" how
   all the remote processes interact in developing their response. Each
   remote process interacts with a local "agent" process running in
   parallel with it. Each agent will send messages back to the manager,
   telling it what state that part of the system is in; the manager
   combines these remote states into a global state display for the
   entire distributed system. If you're interested in this sort of thing,
   stay tuned!
   
   This project has been quite a learning experience for me. I am proud
   of what I've built and I hope these simple tools will motivate some of
   you to give this a try - perhaps with only three or four systems,
   perhaps with more than the eight machines that I combined. Home
   networking is in vogue now, and developing software that takes the
   greatest advantage of a network cannot be far behind. Try this if you
   dare, and be ready for the future.
     _________________________________________________________________
   
                       Copyright  1998, Alex Vrenios
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                  DialMon: The Linux/Windows diald Monitor
                                      
                             By Mike Richardson
     _________________________________________________________________
   
  In The Beginning
  
    There seem to be quite a number of small networks, either at home
    or at small companies, which use Windows clients (be they 3.1 or
    95/98/NT workstation), and a Linux box as a dial-up router to the
    Internet at large. A common setup is to use IP masquerading, so
    that the clients can hide behind a single IP address, with diald,
    the dial-on-demand daemon, so that the Linux box connects as and
    when required. This works pretty well:
   
      * you only need to pay for the single IP address rather than a subnet
       (ie., a block of IP addresses)
     * the masquerading almost automatically provides a firewall
     * you only pay connection charges when needed (in places where local
       phone calls are not free)
       
   The real problem with this is that users on the Windows clients have
   no real indication of the state of the dial-up link. So, if a
   connection fails to materialise (ie., your web browser cannot find a
   URL), you may not know whether the URL doesn't exist, or the dial-up
   link didn't come up.
     _________________________________________________________________
   
  Let There Be Light
  
   Dialmon was originally conceived as a simple monitor to provide the
   Windows user with some information about the link. In its original
   form, it comprised a single daemon process dialmon which ran on the
   Linux box in parallel to daild, and a client dialm to run on the
   Windows client.
   
    The dialmon daemon connected to the diald daemon using the
    latter's control fifo, requesting that state information be
    returned via a
   second fifo which dialmon created. When dialm clients connected, the
   state information provided by diald, suitably filtered to remove
   un-needed stuff, was passed back to the dialm client, which could then
   display the current dial-up state. Two sorts of information were
   displayed, the actual link state (up, down, connecting ...) and
   message output generated by diald's connect and disconnect scripts.
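    The fifo plumbing involved is only a few lines of C. The sketch
    below is my own illustration, not dialmon's source; the command
    names ("monitor", "up", "down") are from diald's documentation as
    I read it, and the fifo path is whatever your diald configuration
    names:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h> /* mkfifo(), for creating a return fifo */

/* Write a single command line (e.g. "up", "down", or
   "monitor /path/to/return/fifo") into a control fifo such as the
   one diald reads.  Returns 0 on success, -1 on error. */
int send_fifo_cmd(const char *fifo_path, const char *cmd)
{
    char line[256];
    int fd, n;

    n = snprintf(line, sizeof(line), "%s\n", cmd);
    if (n < 0 || n >= (int)sizeof(line))
        return -1;
    /* O_NONBLOCK: fail at once instead of hanging if no daemon
       has the read side of the fifo open */
    if ((fd = open(fifo_path, O_WRONLY | O_NONBLOCK)) == -1)
        return -1;
    if (write(fd, line, n) != n) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```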
   
   So if, for instance, you pointed your browser at
   http://www.linuxgazete.com (sic) then you could see the link come up
   and, when the browser failed to find the URL, you hopefully realised
   that you should have pointed it at http://www.linuxgazette.com.
     _________________________________________________________________
   
  Keep Your Finger On The Pulse
  
   This seemed a big improvement, but there were still some more minor
   niggles. Firstly, the web browser would often time out a URL before
   the dial-up link came up (particularly in the early evening!), which
   meant trying the URL a second time. Of course, by this stage the
   dial-up link had often just gone down again on account of there being
   no traffic. Secondly, if you ran sendmail or similar on the Linux box
   and used a mail reader on the Windows client, then to get an urgent
   item of mail on its way from the Linux box to your ISP (or to check
   for incoming mail), you'd need to indulge in some trick like using
   your web browser simply to force the link up. Try explaining that one
   to your users!
   
    So, dialmon was extended to allow control over the link.
    Actually, these changes spanned three releases, but the effect is
    that users on
   the Windows clients, can, subject to various access controls, request
   that the link be brought up, request that it be taken down, and even
   request that diald itself be stopped and restarted with a different
   configuration (which appeared because I need to use two ISPs). This
   feature also has the side effect that if diald crashes, then dialmon
   will restart it.
   
   The access control can be based either on the host on which dialm is
   running, or on a user name with password checking. The latter can be
   set up to use Linux box user names which do not have login access and
   which are different to the Windows user's real user name (if any) on
   the Linux box.
     _________________________________________________________________
   
  Icing On The Cake
  
   One or two users asked whether dialmon could show some load
   information, i.e., the amount of traffic going through the dial-up
   link. Before I had done anything about it myself, someone (Jim
   Mathews, thanks) provided some code to give an indication of this via
   an icon in the Win95/98/NT system tray. This has now been extended to
   show a pair of bars in the dialm window, one for transmit and one for
   receive, which show, at least approximately, the percentage of the
   dial-up bandwidth which is being used.
   
   This is quite useful if you are doing a large download, to get an idea
   of whether it is worth carrying on, or whether you should kill the
   download and try later (while America sleeps, maybe).
     _________________________________________________________________
   
  Building The Edifice
  
   So, how does one set all this up? The distribution
   (ftp://sunsite.unc.edu/pub/Linux/system/daemons/dialmon-0.4.tgz.THISONE)
   contains the Linux and Windows sources, plus prebuilt Win31 and
   Win95/98/NT clients. Once you have built and installed the Linux
   dialmon daemon, you need to configure it.
   
   I'll describe the setup I use at home (which is also the office). The
   network comprises two Linux boxes, of which one called quaking runs
   diald and sendmail, plus a Windows 3.1 machine called rover which my
   wife Tina mainly uses, and a Windows 95 machine called gingling which
   I use. I want to be able to bring the dial-up link both up and down,
   and to switch between two ISPs, and I want to allow Tina to bring the
   link up and down, but not to switch ISPs.
   
   The dialmon daemon uses two configuration files, /etc/dialmon.conf to
   specify its own setup, and the options to be given to client machines,
   and /etc/dialmon.users to specify options to be given to specific
   users. These are shown below:
   
   /etc/dialmon.conf
   
[host]
        port    7002
        force   90
        fifo    /etc/diald/diald.ctl
        allow   up
        ddconf  Planet  "-f /etc/diald.conf.planet"
        ddconf  Demon   "-f /etc/diald.conf.demon"

   This specifies that dialmon listens for dialm clients on port 7002 and
   will force the dial-up link up for 90 seconds (after which, if there
   is no traffic on the link, diald will shut it down). The allow up line
   specifies that any client dialm is allowed to bring the link up. The
   two ddconf lines specify ISP configurations; the quoted text gives the
   arguments passed to diald.
   
/etc/dialmon.users

[mike]
        passwd  dialmon
        allow   up
        allow   down
        allow   ctrl

[tina]
        passwd  dialmon
        allow   up
        allow   down

   The users file specifies the access for myself and Tina. The passwd
   dialmon lines indicate that when mike (or tina) connects, the
   password supplied should be checked against that of the user dialmon
   rather than mike (or tina).
   
   Lastly, the daemons run from a startup script /etc/rc.d/init.d/diald
   which is linked as /etc/rc.d/rc3.d/S99diald (I use the RedHat
   distribution which has SysV style startup scripts):
   
   /etc/rc.d/init.d/diald
   
#!/bin/sh
#
# diald         Start or stop the dialer daemon
#

. /etc/rc.d/init.d/functions

if [ ! -f /etc/sysconfig/network ]; then
    exit 0
fi

. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0

[ -f /sbin/ifconfig ] || exit 0


# See how we were called.
case "$1" in
  start)
        echo -n "Starting dialer demon: "
        /sbin/route del 0.0.0.0
        # Start dialmon, which will in turn run diald with the Demon
        # configuration, and will if necessary kill off the ppp0
        # PPP daemon
        #
        daemon /usr/sbin/dialmon -rDemon -pppp0 -b28800
        [ -f /proc/sys/net/ipv4/ip_dynaddr ] &&
                echo 1 > /proc/sys/net/ipv4/ip_dynaddr
        echo ""
        ;;
  stop)
        # Shut down. Don't use killproc because we want a SIGTERM and
        # not a SIGKILL, so that dialmon can terminate diald (and maybe
        # pppd as well).
        #
        echo -n "Shutting down dialer daemon: "
        [ -f /var/run/dialmon.pid ] && (
                kill -TERM `cat /var/run/dialmon.pid`
                rm -f /var/run/dialmon.pid
                echo -n "dialmon "
        )
        echo ""
        ;;
  *)
        echo "Usage: diald {start|stop}"
        exit 1
esac

exit 0

   The -rDemon argument to /usr/sbin/dialmon tells dialmon to initially
   run diald with the Demon configuration. The -pppp0 argument says
   that, when dialmon restarts diald, it should kill any PPP daemon
   running for the ppp0 link (it looks in /var/run/ppp0.pid), and -b28800
   says that the nominal link bandwidth is 28800 baud (used for the
   receive and transmit displays).
     _________________________________________________________________
   
  In Conclusion
  
   I've found that dialmon makes life easier for myself, and my wife (who
   claims to be a computerphobe but loves eMail) uses it all the time;
   I've also installed it on the office network of one of my clients.
   Quite a number of people have eMail'ed me about it (thanks for the bug
   reports, suggestions, contributions, not to mention the thanks) so I'd
   like to think that it's made life a bit better for them as well.
   
   As I mentioned above, it should be available from
   ftp://sunsite.unc.edu/pub/Linux/system/daemons/dialmon-0.4.tgz.THISONE
   (THISONE on account of an upload error, please ignore the tgz file
   without the extension unless it's been sorted!) Please feel free to
   eMail me at mike@quaking.demon.co.uk .
     _________________________________________________________________
   
                     Copyright  1998, Mike Richardson
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                   The Fifth International Linux Congress
                                      
                               By John Kacur
     _________________________________________________________________
   
   Photo Album
     _________________________________________________________________
   
   The fifth International Linux Congress was held June 3-5, 1998, at
   Cologne University in Germany. This was only a few days after the
   Linux Expo held at Duke University in North Carolina, U.S.A. (May
   28-30), which made for a few tired participants including some of the
   speakers who attended both events. On the first day of the Congress,
   intensive tutorials on various subjects were offered in both English
   and German. These included ``Becoming a Debian Developer'' by Bruce
   Perens, ``KDE Programming'' by Kalle Dallheimer and Matthias Ettrich,
   and ``ISDN4 for Users'' by Klaus Franken.
   
   The talks began the next day, opening with the keynote speaker, Jon
   ``maddog'' Hall. Jon's talk, entitled ``Economics of Computing for the
   21st Century'', began with a historical survey of computers. He talked
   about early computer systems, which cost three times more than what
   his parents paid for a house and were much less powerful than modern
   home systems which are now inexpensive enough to buy with credit
   cards. He predicted that in the near future, one will be able to buy a
   computer in the check-out line at the local supermarket. Indeed, at
   least two grocery stores in Germany already sell inexpensive PCs. He
   ended his talk by expressing the need for Linux to reach the ``Moms
   and Pops'' of this world and with a plea to lobby not just for Open
   Source software but for open hardware standards.
   
   After the keynote speech, participants got to choose between two talks
   running in parallel. The format was forty-five minutes per speaker,
   with breaks every ninety minutes. The majority of the talks were held
   in English, to accommodate guests from the United States, Canada,
   England and the Netherlands, with a few held in German. Although it
   was possible to attend up to six talks a day, some participants
   expressed regret that they couldn't attend all the interesting talks
   due to simultaneous scheduling.
   
   During the breaks, participants had an opportunity to explore the
   various booth displays. S.u.S.E., a company which makes a popular
   German Linux distribution, offered free demo CDs with their newest 5.2
   version. O'Reilly had a nice book display with offerings in both
   English and German. The KDE group had a very popular display showing
   off their attractive desktop environment. John Storrs, who also
   presented a talk, had a display demonstrating the use of Real Time
   Linux for the purpose of CAD/CAM design.
   
   The University also provided the Congress with a small number of Linux
   computers connected to the Internet for those participants who found
   it hard to be away from the keyboard for too long. Among the many
   interesting talks presented on the first day was one entitled
   ``Designing an Ext2fs Resizer'', given by Theodore Y. Ts'o. Theodore
   has made contributions to the development of the Ext2fs system in the
   past and is presently working on a method for enlarging and reducing
   the size of an Ext2 file system and adding B-tree support.
   
   Christian Gafton, one of the programmers from Red Hat, gave a talk
   entitled ``Migration to glibc''. He said the use of glibc is no longer
   as controversial in the fast-moving Linux world as it was when Red Hat
   first adopted it. With the latest versions of glibc available on the
   Internet, the most common problems with porting code to the library
   occur when programmers write code which is dependent on bugs which
   exist in the old libc libraries, or when programmers use bad
   programming practices such as the use of #include <linux/foo.h>
   instead of the recommended #include <sys/foo.h>.
   
   A few sessions were purposely left open. The organizers called these
   ``Birds of a Feather Sessions'' where the congress attendees could get
   together for ``spontaneous and informal meetings for presentation or
   discussion of any interesting subject''. Some people from Debian took
   advantage of this opportunity to discuss various issues concerning
   their Linux distribution.
   
   That evening, participants got a chance to socialize and experience a
   bit of German culture. The social event was held at a local pub
   reserved for the Linux Congress. There was a wonderful smorgasbord and
   the waiters were very quick to fill our beer glasses with Cologne's
   famous Kölsch. Everyone enjoyed themselves and hopefully some
   long-term computer friendships were formed.
   
   The talks continued on the third day with interesting topics such as
   IEEE 1394 (also known by the commercial name FireWire) by Emanuel
   Pirker. Emanuel designed support for this technology as part of his
   work as a university student in Austria. Warwick Allison gave an
   interesting account of the QtScape Hack, in which a small group of
   programmers created a port of Netscape to Qt in a five-day programming
   spree while on vacation in Norway.
   
   The final panel board discussion was perhaps the most interesting, and
   certainly the most contentious topic of the congress. The subject was
   GNOME vs. KDE. (See Linux Journal, May 1998.) Participants included
   Miguel de Icaza of the Gnome Project, Kalle Dalheimer of the KDE
   project and Bruce Perens who helped to define the Open Source License.
   The people from the KDE project, which is already in its second year,
   felt that Linux was in need of a comfortable desktop environment.
   Linux has already captured the server market, but has not reached the
   desktop widely because the technical capabilities required are beyond
   that of the average user. They also felt that Linux is about choice,
   and that since the GNOME project is now being financed by Red Hat,
   people would be unduly influenced to use GNOME.
   
   The people from GNOME countered that Red Hat had no influence on the
   direction of their project, and the reason KDE is not included in the
   Red Hat distribution is because of its use of the Qt-toolkit. Many
   people were of the opinion that although the KDE project is further
   ahead than the GNOME project, its use couldn't be wholeheartedly
   embraced by the Linux community because of the non-GNU license of the
   Qt-toolkit. They feared a situation similar to that with the Open
   Group, which recently changed the licensing policy of the X server.
   Some members of
   the audience informed the Congress that a project to make a GNU clone
   of the Qt-toolkit was underway, and other audience members expressed
   the opinion that the KDE and GNOME groups should work more
   closely, but still acknowledge the positive creative push of healthy
   competition. Any hurt feelings were laid to rest and all friendships
   renewed as we said our goodbyes at the O'Reilly Publishing House.
   
   The O'Reilly team invited participants of the Linux Congress ``zum
   Klönen bei Kölsch'', or for a chat and beer. Participants agree the
   fifth annual Linux Congress was a success and look forward to next
   year's Congress, which the organizers promised us would not be quite
   so soon after next year's Linux Expo!
     _________________________________________________________________
   
                        Copyright  1998, John Kacur
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                      Fun with Client/Server Computing
                                      
                              By David Nelson
     _________________________________________________________________
   
   Psst, wanna have some fun? Try client/server computing. It's like
   talking through two tin cans and a taut string, upgraded to the
   computer era. Linux has all the tools you need. You are already using
   client/server computing in applications such as Netscape, telnet, and
   ftp. And it's easy to write your own client/server apps, maybe even
   useful ones.
   
   Client/server computing links two different programs (the client and
   the server) over a network. For practice you can even skip the network
   by letting Linux talk to itself. So read on even if you aren't
   attached to a network. (But your Linux installation needs to be
   configured for networking.)
   
   A very common form of client/server computing uses BSD sockets. BSD
   stands for Berkeley Software Distribution, an early version of Unix.
   Logically, a BSD socket is a combination of IP address and port
   number. The IP address defines the computer, and the port number
   defines the logical communication channel in that computer. (In this
   usage a port is not a physical device. One physical device, e.g. an
   Ethernet card, can access all the ports in the computer.)
   
   Linux Journal ran a nice three-part series on network programming by
   Ivan Griffin and John Nelson in the February, March, and April, 1998,
   issues. The February article contains the code to set up a skeleton
   client/server pair using BSD sockets; it includes all the plumbing
   needed to get started. You can download the code from SSC, then use
   this article to start playing with more content.
   
   After downloading the file 2333.tgz, expand it with the command
   tar -xzvf 2333.tgz. Rename the resultant file 2333l1.txt to
   server.c, and the file 2333l2.txt to client.c. Edit server.c to delete
   the extraneous characters @cx: from the start of the first line, and
   either delete the last line or make it a comment by enclosing it
   between the characters /* and */. Similarly, delete the last line of
   client.c, or make it a comment. Compile server.c with the command gcc
   -oserver server.c; similarly compile client.c using gcc -oclient
   client.c.
   
   The server runs on the local computer, so it only needs to know its
   port number to define a socket. The client runs on any computer, so it
   needs to know both its target server computer and the server's port
   number. You have thousands of port numbers to play with. Just don't
   use a port that is already taken. Your file /etc/services lists most
   of the ports in use. I found that port 1024 worked fine.
   
   Now I said you didn't need to be connected to a network, but you do
   need to have your computer configured for networking to try this out.
   In fact, this code won't run for me if I use the generic name
   localhost; I have to give the explicit name of my computer. So
   assuming you are set up for networking, start the server by typing
server 1024 &

   and then start the client by typing
client hostname 1024

   where hostname is the name or the IP address of your computer. If
   things work right, you will see output similar to the following:
Connection request from 192.168.1.1
14: Hello, World!

   The first line gives the IP address of the client, and the second line
   is the message from the server to the client. Considering all the code
   involved, this would be a good entry for the World's Most Complex
   "Hello, World" Program Contest! Note that the server keeps running in
   the background until you kill it with the commands fg and ^C (ctrl-C).
   
    Example of a Query-Response Client/Server
    
   Now let's do something more useful. Debugging two programs
   simultaneously is no fun, so let's start simple by simulating a
   client/server pair in a single program. Then when you understand how
   things work we can divide the code between the client and the server.
   In the following program the client is simulated by the function
   client. The main routine simulates the server:
/* local test of client-server code */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char name[256] = "";
char buffer[256] = "";

void client(char *buffer)
{
 printf("%s", buffer);
 fgets(buffer, 256, stdin);
}

int main(int argc, char *argv[])
{
 int year, age;

 sprintf(buffer, "Please enter your name: ");

 client(buffer);

 strcpy(name, buffer);
 sprintf(buffer, "Hi, %sPlease enter your year of birth: ", name);

 client(buffer);

 year = atoi(buffer);
 age = 1998 - year;
 sprintf(buffer, "Your approximate age is %d.\nEnter q to quit: ", age);

 client(buffer);

 return(0);
}

   You don't have to be an expert at C code to see how this works. The
   simulated server (main) sends the string "Please enter your name" to
   the simulated client (client) through the array buffer. The client
   prints the string, reads the name as a string from keyboard, and
   returns that string through buffer. Then the server asks for the year
   of birth. When the client collects it as a string, the server converts
   it to a number and subtracts it from 1998. It sends the resultant
   approximate age back to the client. We are done now, but because the
   client needs a keyboard entry before returning, the server requests
   that a "q" be entered. More sophisticated coding could eliminate this
   unnecessary awkwardness. This simulated client/server illustrates
   passing strings between server and client, asking and responding to
   questions, and doing arithmetic.
   
   Copy the above code into an editor and save it as localtest.c. Compile
   it with the command gcc -olocaltest localtest.c. When you run it you
   should get output like:
Please enter your name: joe
Hi, joe
Please enter your year of birth: 1960
Your approximate age is 38.
Enter q to quit: q

   Now let's turn this into a real client/server pair. Insert
   declarations into server.c by changing the beginning statements of
   main to read:
int main(int argc, char *argv[])
{
int i, year, age;
char name[256] = "";
char buffer[256] = "";
char null_buffer[256] = "";
    int serverSocket = 0,

   The application-specific code in server.c is towards the end. Replace
   it with the following:
/*
* Server application specific code goes here,
* e.g. perform some action, respond to client etc.
*/

sprintf(buffer, "Please enter your name: ");
write(slaveSocket, buffer, strlen(buffer));
for (i = 0; i <= 255; i++) buffer[i] = 0;

/* get name */
read(slaveSocket, buffer, sizeof(buffer));
strcpy(name, buffer);
sprintf(buffer, "Hi, %sPlease enter your year of birth: ", name);
write(slaveSocket, buffer, strlen(buffer));
for (i = 0; i <= 255; i++) buffer[i] = 0;

/* get year of birth */
read(slaveSocket, buffer, sizeof(buffer));
year = atoi(buffer);
age = 1998 - year;
sprintf(buffer, "Your approximate age is %d.\nEnter q to quit: ", age);
write(slaveSocket, buffer, strlen(buffer));

close(slaveSocket);
exit(0);

   This is almost the same as the server code in the simulated
   client/server, except that we read and write slaveSocket instead of
   calling the function client. You can think of slaveSocket as the
   connection through the socket between the server and client.
   
   The client code is very simple. Insert declarations into client.c by
   changing the beginning statements of main to read
int main(int argc, char *argv[])
{
  int i;
  int clientSocket,

   Find the application specific code near the end of client.c and
   replace it with the following:
/*
* Client application specific code goes here
* e.g. receive messages from server, respond, etc.
* Receive and respond until server stops sending messages
*/

while (0 < (status = read(clientSocket, buffer, sizeof(buffer))))
  {
    printf("%s", buffer);
    for (i = 0; i <= 255; i++) buffer[i] = 0;
    fgets(buffer, 256, stdin);
    write(clientSocket, buffer, strlen(buffer));
  }
    close(clientSocket);
    return 0;
  }

   Again, this is almost the same as the client code in the simulated
   client/server. The main differences are the use of clientSocket, the
   other end of slaveSocket in the server, and the while statement for
   program control. The while statement closes the client when the server
   stops sending messages.
   
   Recompile server.c and client.c and run them again as before. This
   time the output should be something like:
Connection request from 192.168.1.1
Please enter your name: joe
Hi, joe.
Please enter your year of birth: 1960
Your approximate age is 38.
Enter q to quit: q

   Now you can really play: try running multiple client sessions that
   call the same server, and if you are on a network try running the
   server on a different computer from the client. The server code is
   designed to handle multiple simultaneous requests by starting a
   process for each client session. This is done by the fork call in
   server.c. Read the man page for fork to learn more.
   
    Chat Program as a Client/Server
    
   As a final example, let's look at a chat program for sending messages
   between users. It's primitive, because it only allows alternating
   lines between each person, and it requires the server to keep a window
   open. But it shows how a client/server pair can carry on an unlimited
   dialog, and it could be extended into a practical program.
   
   Insert declarations into server.c by changing the beginning statements
   of main to read:
int main(int argc, char *argv[])
{
  char buffer[256] = "";
  int i, serverquit = 1, clientquit = 1;
    int serverSocket = 0,

   Replace the application-specific code towards the end of server.c with
   the following:
/*
* Server application specific code goes here,
* e.g. perform some action, respond to client etc.
*/

printf("Send q to quit.\n");
sprintf(buffer, "Hi, %s\nS: Please start chat. Send q to quit.\n",
        inet_ntoa(clientName.sin_addr));
write(slaveSocket, buffer, strlen(buffer));
for (i = 0; i <= 255; i++) buffer[i] = 0;

while (serverquit != 0 && clientquit != 0)
{
 status = 0;
 while (status == 0)
  status = read(slaveSocket, buffer, sizeof(buffer));
 clientquit = strcmp(buffer, "q\n");

 if (clientquit != 0)
 {
  printf("C: %s", buffer);
  for (i = 0; i <= 255; i++) buffer[i] = 0;

  printf("S: ");
  fgets(buffer, 256, stdin);
  serverquit  = strcmp(buffer, "q\n");
  write(slaveSocket, buffer, strlen(buffer));
  for (i = 0; i <= 255; i++) buffer[i] = 0;
 }
}
printf("Goodbye\n");
close(slaveSocket);
exit(0);

   Insert declarations into client.c by changing the beginning statements
   of main to read:
int main(int argc, char *argv[])
{
 int i, serverquit = 1, clientquit = 1;
    int clientSocket,

   Replace the application-specific code toward the end of client.c with
   the following
/*
* Client application specific code goes here
* e.g. receive messages from server, respond, etc.
*/

while (serverquit != 0 && clientquit != 0)
{
  status = 0;
  while (status == 0)
    status = read(clientSocket, buffer, sizeof(buffer));
  serverquit = strcmp(buffer, "q\n");

  if (serverquit != 0)
  {
    printf("S: %s", buffer);
    for (i = 0; i <= 255; i++) buffer[i] = 0;

    printf("C: ");
    fgets(buffer, 256, stdin);
    clientquit = strcmp(buffer, "q\n");
    write(clientSocket, buffer, strlen(buffer));
    for (i = 0; i <= 255; i++) buffer[i] = 0;
   }
 }
 printf("Goodbye\n");
 close(clientSocket);
 return 0;
 }

   Recompile both server.c and client.c and you are ready to try it out.
   To simulate two computers on one, open two windows in X or use two
   different virtual consoles (e.g., Alt-F1 and Alt-F2). Start the
   server in one
   window using the command
server 1024

   and the client in the other using the command
client hostname 1024

   where hostname is replaced by your actual hostname or IP address.
   
   Server and client code for this chat program are almost identical, and
   very similar to the previous example. There are two main differences.
   The first is the test to see whether either party has entered a "q" to
   quit. The flags serverquit and clientquit signal this. The second is
   the tight loop waiting for a response from the other party. The
   function read returns the number of characters read from the socket;
   this is stored in status. A non-zero count of characters indicates
   the other side has sent a message.
   
   Here is an example session as printed by the server:
Connection request from 192.168.1.1
Send q to quit.
C: Hi server
S: Hi client
C: Bye server
S: Bye client
Goodbye

   And here is the same session as printed by the client:
S: Hi, 192.168.1.1
S: Please start chat. Send q to quit.
C: Hi server
S: Hi client
C: Bye server
S: Bye client
C: q
Goodbye

   I hope these examples have shown how easy it is to set up
   client/server computing. May your appetite be whetted to try your own
   applications. If you cook up something tasty, let the rest of us know.
   And don't forget to keep that string taut!
     _________________________________________________________________
   
                       Copyright  1998, David Nelson
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
   Welcome to the Graphics Muse
   
    Set your browser as wide as you'd like now.  I've fixed the Muse to
                     expand to fill the available space!
                                1998 by mjh
   ______________________________________________________________________
   
   muse:
    1. v; to become absorbed in thought
    2. n; [ fr. Any of the nine sister goddesses of learning and the arts
       in Greek Mythology ]: a source of inspiration
       
   Welcome to the Graphics Muse! Why a "muse"? Well, except for the
   sisters aspect, the above definitions are pretty much the way I'd
   describe my own interest in computer graphics: it keeps me deep in
   thought and it is a daily source of inspiration.
   
            [Graphics Mews][WebWonderings][Musings] [Resources]
                                      
   This column is dedicated to the use, creation, distribution, and
   discussion of computer graphics tools for Linux systems.
   
   Wow, what a month.  Since I'd finished working on my Gimp book in July
   and early August, I had all of September to work on my Muse column.
   It's been quite some time since I've been able to devote this much
   time to the Muse.   I managed to keep up to date on all the product
   announcements made over on Slashdot, freshmeat, and
   comp.os.linux.announce.  And there were a ton of them.  So many, in
   fact, that I considered leaving some out just to keep this page from
   being too large.  But that didn't seem right, so this month the Muse
   is a big, big column.
   
   What we've got this month for you:
     * Visual DHTML from Netscape - a review of their initial release
     * Configuring and using X Input for use with Wacom drawing tablets
       
   I got rather motivated with all this extra time on my hands.  First, I
   planned some hardware research into getting X Input running, which
   then led to plans for an article on off-the-shelf video boards.  This
   latter idea will be in next month's issue, since it's quite a bit of
   information to gather and organize.  I got quite a bit of help on the
   X Input issues from Owen Taylor.  His tips got me up and running with
   X Input and allowed me to gather some reasonable information for
   helping my readers do the same.  Along with X Input, I've got a review
   of Netscape's Visual DHTML in the Web Wonderings section.
   
   You may also want to take a look at the new and improved Graphics Muse
   Website.  I've completely revamped the site.  The old Linux Graphics
   mini-Howto and Unix Graphics Utilities pages are no more - they've
   been replaced by a searchable database of graphics tools, texts, news
   stories, and reviews.  No more frames either, at least not in the
   Linux specific sections (my bio page still uses them, however).  It's
   not as nice as Slashdot or Freshmeat, but it's better than the static
   frame-based pages I had before.  Hopefully, everyone will find these
   updates to their liking.  The site should certainly make finding tools
   a little easier.  At least that was the plan when I started on it.
   
   For those who don't want to see the new graphics in my portal pages,
   you can jump straight to the Linux specific section.  But take a look
   at the graphics in the portals some time.  I really kind of like them.
   
   
  Graphics Mews

   Disclaimer: Before I get too far into this I should note that any of
   the news items I post in this section are just that - news. Either I
   happened to run across them via some mailing list I was on, via some
   Usenet newsgroup, or via email from someone. I'm not necessarily
   endorsing these products (some of which may be commercial), I'm just
   letting you know I'd heard about them in the past month.
   
   
imwheel 0.7

   Imwheel makes the wheel of your Intellimouse (and other wheel mice)
   work in Linux/X11 to scroll windows up and down or send keys to
   programs.  It runs in the background as a daemon and requires little
   reconfiguration of the XFree86 setup.  Mice with four or more buttons
   and Alps Glidepad 'taps' may also be used.
   
   http://solaris1.mysolution.com/~jcatki/imwheel/
   ______________________________________________________________________
   
WorldEd 0.2.0

   WorldEd is a 3D modeller for KDE.  It has a grid, a tree view, a 3D
   view, a Layout manager, and a Modeller.  It will have full texture
   mapping, skeletal modelling, more hierarchical model design, 3dfx
   dual-screen support, and other goodies.  Development urgently needs
   additional contributors.
   
   New in version 0.2.0 are autoconf/automake support, separate Modeller
   and Layout views, support for Lightwave/Blender ASCII imports, object
   rotation/scaling, and updated screenshots.
   
   http://www.geocities.com/Pentagon/Quarters/2865/
   ______________________________________________________________________
   
Red Hat to Release NeoMagic source

   Slashdot reports that Red Hat will release the source for the X Binary
   Free NeoMagic server after having received permission to do so from
   NeoMagic.  This X server source includes support for NeoMagic's
   MagicGraph128 family of integrated single-chip graphics hardware.  The
   full announcement from Red Hat can be found at
   http://slashdot.org/articles/98/09/21/1626214.shtml
   
3dom snapshot 980910 (or later)

   3dom stands for 3-Dimensional Object Modeler.  The aim of 3dom is to
   offer a tool to model reality with user-chosen accuracy and
   user-chosen inclination for a particular purpose, which can be
   gradually improved and extended.  3dom is designed to be a
   general-purpose modeler; however, it is especially suited to modeling
   scenes for Global Illumination purposes.
   
   This release features better RenderPark integration, some new
   concepts, and various bugfixes and enhancements.
   
   http://www.gv.kotnet.org/~kdf/3dom/
   ______________________________________________________________________
   
Linux Quake HOWTO 1.0.1.12

   The Linux Quake Howto explains how to install, run and troubleshoot
   Quake, QuakeWorld, and Quake II on an Intel Linux system.
   
   This version includes updated QuakeWorld install information for the
   new 2.30 release, info on using the new 3Dfx GL miniport with regular
   Quake and Quake2, more help on making Quake behave on glibc systems,
   and lots more.
   
   http://webpages.mr.net/bobz/
   ______________________________________________________________________
   
   Other Announcements:
   Simple DirectMedia Layer (SDL) logo contest
   New Version of Quake 2 is out.
   
   
aKtion! 0.2.0 and KXAnim

   aKtion! is a video player based on xanim. It (xanim) supports many
   different file formats like FLI animations, FLC animations, IFF
   animations, GIF87a and GIF89a files, GIF89a animation extensions, DL
   animations, Amiga MovieSetter animations, Utah Raster Toolkit RLE
   images and animations, AVI animations, Quicktime Animations and SGI
   Movie Format files.
   
   NOTE: You'll need to have xanim 2.70.7.0 properly installed in your
   machine to run aKtion!.
   
   KXAnim is a C++ widget wrapper around xanim to allow video playing in
   your apps.
   
   Both of these appear to be KDE applications, although they don't
   specifically state that on the Web site.
   
   aKtion! and KXAnim -
   http://www.geocities.com/SiliconValley/Haven/3864/aktion.html
   xanim - http://xanim.va.pubnix.com/home.html
   ______________________________________________________________________
   
Prometheus Truecolour 2.0.8

   Prometheus Truecolour (PTC) 2.0 C++/Java is the library of choice for
   demo programming. It allows you to render into an offscreen surface of
   your choice and then converts it on the fly to whatever video mode is
   available on the host machine. And it is designed to be small so it
   can be statically linked into your application.
   
   Version 2.0 of the library is currently under heavy development and is
   updated nearly daily.  A final release is scheduled for around the end
   of August 1998.  PTC 2.x is free software under the terms of the GNU
   Library General Public License (LGPL).
   
   http://www.cs.ucl.ac.uk/students/c.nentwich/ptc/
   ______________________________________________________________________
   
PyroTechnics 1.2

   PyroTechnics is an OpenGL-based firework simulator. Features include
   multiple kinds of fireworks, the ability to choreograph firework
   displays, a texture-mapped water surface, reflections, a moving
   camera, and the ability to save screenshots.
   
   This version updates v1.0 with bugfixes, portability fixes, and the
   addition of command-line arguments.
   
   http://www.ling.ed.ac.uk/~oliphant/pyro/
   ______________________________________________________________________
   
k3de 0.0.6

   k3de is a 3D editor for the K Desktop Environment which generates
   sources for POVray.
   ftp://ftp.kde.org/pub/kde/unstable/apps/graphics/k3de-0.0.6.tgz
   ______________________________________________________________________
   
Quick Image Viewer 0.5

   Quick Image Viewer (qiv) is a very small and pretty fast GDK/Imlib
   image viewer.  http://www.idnet.de/~AdamK/
   ______________________________________________________________________
   
FxEngine 0.31

   FxEngine is a 3D graphics library that uses the Glide API.  It was
   written by Andreas Ingo and ported to Linux by Michael Pugliese.  It
   is very powerful and easy to use.  http://welcome.to/3dfxPS/
   Editor's Note:  watch out for the bright red background - eek!
   
ElectricEyes 0.2

   ElectricEyes is a lightweight GTK+/GNOME-based image viewer.  It
   allows you to view and do simple manipulation of several image formats
   and gives a nice thumbnail selection mechanism.
   
   http://www.labs.redhat.com/ee.shtml
   ______________________________________________________________________
   
fltk beta-19980825

     fltk (pronounced "fulltick") is a GPL'd C++ user interface toolkit
   for X and OpenGL (it has also been ported to windows). Fltk is
   deliberately designed to be small, so that you can statically link it
   with your applications and not worry about installation problems. As a
   side effect it is also extremely fast.
   
     This beta includes slight layout modifications, ports to Cray and
   other 64 bit machines as well as lots of bug fixes and small additions
   from users.
   
   http://www.cinenet.net/users/spitzak/fltk/
   ______________________________________________________________________
   
VMD 1.2

   VMD is designed for the visualization and analysis of biological
   systems such as proteins, nucleic acids, lipid bilayer assemblies,
   etc. It may be used to view more general molecules, as VMD can read
   standard Protein Data Bank (PDB) files and display the contained
   structure. VMD provides a wide variety of methods for rendering and
   coloring a molecule: simple points and lines, CPK spheres and
   cylinders, licorice bonds, backbone tubes and ribbons, cartoon
   drawings, and others. VMD can be used to animate and analyze the
   trajectory of a molecular dynamics (MD) simulation. In particular, VMD
   can act as a graphical front end for an external MD program by
   displaying and animating a molecule undergoing simulation on a remote
   computer.
   
   http://www.ks.uiuc.edu/Research/vmd/
   ______________________________________________________________________
   
XawTV 2.25

   XawTV is a simple Xaw-based TV program which uses the bttv driver or
   video4linux. It contains various command-line utilities for grabbing
   images and avi movies, for tuning in TV stations, etc. A grabber
   driver for vic and a radio application (needs KDE) for the boards with
   radio support are included as well.
   
   Recent releases include updates to work with version 0.5.14 of the
   bttv driver, a command-line tool for recording AVI movies, an
   ncurses-based radio application, and driver bugfixes.  If you don't
   get a picture with version 2.24, check out this version.
   
   http://user.cs.tu-berlin.de/~kraxel/linux/#xawtv
   ______________________________________________________________________
   
Magician

   Magician is a commercial OpenGL implementation for Java.  It is
   portable to Unix systems, but it's unclear whether it runs on Linux or
   not.  http://www.arcana.co.uk/products/magician/
   
gifc

   Gifc reads a file with graphical commands and outputs a GIF file.  It
   originated from the need of the author's system administrator to show
   various system information graphically.  The administrator found that
   HTML did not suit his needs, so he started a kind of contest, from
   which this program was born.
   
   gifc is a Perl script that requires Perl version 5.003, patchlevel 23
   (preferably 5.004).  It also needs the GD Perl module, which can be
   downloaded from http://www.perl.com/CPAN.  Although the current
   version of gifc is 2.5, this is the first public release.  It has been
   tested on Linux 2.0 and HP-UX 10.20.  The home page of gifc is
   http://www.club.innet.be/~pub01180/gifctxt.html, from which you can
   also download the package.  The program is released under the GPL.
   The README file contains build and installation instructions.
   
   The author, Peter Verthez, can be reached for suggestions and bug
   reports at  pver@innet.be.
   ______________________________________________________________________
   
Gifsicle 1.3

     Gifsicle manipulates GIF image files on the command line. It
   supports merging several GIFs into a GIF animation; exploding an
   animation into its component frames; changing individual frames in an
   animation; turning interlacing on and off; adding transparency; adding
   delays, disposals, and looping to animations; adding or removing
   comments; optimizing animations for space; and changing images'
   colormaps, among other things.  This version has flip and rotate
   options. It also fixes a longstanding bug that would rarely corrupt
   one pixel in an image.
   http://www.lcdf.org/~eddietwo/gifsicle/
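   As a sketch of what the command line looks like (these flags are
   taken from the gifsicle manual as I understand it; option names in
   older releases may differ, so check gifsicle --help):

```shell
# Merge several GIFs into a looping animation, with a delay of
# 10/100ths of a second between frames:
gifsicle --delay=10 --loopcount frame1.gif frame2.gif frame3.gif > anim.gif

# Explode an animation back into its component frames:
gifsicle --explode anim.gif

# Optimize an existing animation for space:
gifsicle -O2 anim.gif -o anim-optimized.gif
```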
   ______________________________________________________________________
   
X-TrueType Server 1.0 - New TrueType Font Server

   X-TrueType Server is an X server and/or an X font server that can
   handle TrueType fonts directly.  With X-TT, you can use TrueType
   fonts in the X Window environment without modifying existing
   applications, and with the same feel as using BDF or PCF fonts.
   Thanks to the wide spread of Windows, you can get a large variety of
   TrueType fonts at no or relatively low cost.  X-TT supports various
   font transformations, such as slanting and magnifying, which makes it
   very useful for X users, especially in far-east Asia, including
   Japan, who have long suffered from having only a few fonts available.
   
   http://hawk.ise.chuo-u.ac.jp/student/person/tshiozak/x-tt/index-eng.html
   - English version of web site
   http://hawk.ise.chuo-u.ac.jp/student/person/tshiozak/x-tt/index-jap.html
   - Japanese version of web site
   
   Editor's Note:  I think this is not really an X server but rather
   serves as an embeddable library for X servers or as a standalone font
   server.  Check the web pages for more detailed information.
   ______________________________________________________________________
   
Mesa 3.0 Officially Released

    Mesa is a 3-D graphics library which uses the OpenGL API (Application
   Programming Interface). Mesa cannot be called an implementation of
   OpenGL since the author did not obtain an OpenGL license from SGI.
   Furthermore, Mesa cannot claim OpenGL conformance since the
   conformance tests are only available to OpenGL licensees. Despite
   these technical/legal terms, you may find Mesa to be a valid
   alternative to OpenGL. Most applications written for OpenGL can use
   Mesa instead without changing the source code.
   http://www.ssec.wisc.edu/~brianp/Mesa.html
   ______________________________________________________________________
   
Xi Graphics Accelerated X 4.1.2 Laptop X Server Updates

   Explicit support has been added to the Accelerated-X Laptop Display
   Server for the Acer (also known as TI) TravelMate 7100 using the
   NeoMagic 2160 chip.  Update 7 for Accelerated-X 4.1.2 is available
   from the Anonymous FTP site as URL
   ftp://ftp.xig.com/pub/updates/accelx/laptop/L4102.007.tar.gz .  A
   description of the process to add the update is in the same directory
   as URL ftp://ftp.xig.com/pub/updates/accelx/laptop/L4102.007.txt .
   
   Additionally, another update supports the Fujitsu Lifebook 990Tx2
   using the ATI Rage LT Pro chip.  If using Accelerated-X Laptop Display
   Server version 4.1.2, apply the update from URL
   ftp://ftp.xig.com/pub/updates/accelx/laptop/4.1.2/L4102.003.tar.gz .
   A description of the process to add the update is in the same
   directory, URL
   ftp://ftp.xig.com/pub/updates/accelx/laptop/4.1.2/L4102.003.txt .
   
   Detailed results from benchmarking should be available on the Xi
   Graphics Web Site, URL http://www.xig.com/ , soon.  The summary of
   the Xmark'93 single-figure benchmark results for these machines is:
   
                       Acer/TI TravelMate 7100
                 Depth              8bpp   16bpp   24bpp
                 Number of colors   256    64K     16M
                 Accelerated-X      12     9.9     4.8
                 X Binary Free      9.9    8.1     2.1
   
                       Fujitsu Lifebook 990Tx2
                 Depth              8bpp   16bpp   24bpp
                 Number of colors   256    64K     16M
                 Accelerated-X      27     21      2.1
   ______________________________________________________________________
   
SciTech is readying the first release of SciTech Display Doctor for Linux!

   SciTech Display Doctor is the universal display driver utility that
   supports over 250 different graphics chips -- just about every one
   ever made. SciTech Display Doctor for Linux will bring SciTech's
   proven device driver technology to the Linux platform (x86 only at
   this point in time).
   
   SciTech is looking for all types of Linux users to help us stress test
   the utility before its final release. If you would like to participate
   in a beta, please contact KendallB@scitechsoft.com or visit the
   SciTech Web site at http://www.scitechsoft.com.
   
   Editor's Note:  a form for registering to participate in the beta
   release program accompanied this announcement in
   comp.os.linux.announce; however, I felt it was a bit too large for
   inclusion here.  The form doesn't appear to be on their web site, so
   you'll probably need to send email to the above contact address to
   request a copy.  Also, by the time this column reaches you, the beta
   program may have already closed and Display Doctor may already be
   released for Linux.
   ______________________________________________________________________
   
Intel signs agreements with RealVideo and MetaCreations

   Intel has been busy moving into streaming video.  C|Net News reported
   that an agreement was signed licensing new Intel streaming video
   technology to RealNetworks for their next RealVideo G2 release.
   Along with that, Design Graphics reports in Issue 37 that Intel and
   MetaCreations have jointly released a new open streaming 3D format
   based on MetaCreations' Real Time Geometry technology.  The problem
   with the MetaCreations agreement is that the 3D file format appears
   to be Intel-specific.  Not very useful to Alpha or PowerPC users, I
   suppose.
   ______________________________________________________________________
   
OpenGL driver for xmame in development

   Slashdot reports that an OpenGL display driver is being worked on for
   xmame.  Xmame is the Multiple Arcade Machine Emulator, basically a
   way to port lots of old arcade-style video games to X.  The OpenGL
   driver allows you to do vector graphics direct to the hardware,
   eliminating the need to render to bitmaps first.  It also allows easy
   scaling of the game (i.e., for larger displays) and bilinear
   filtering.  The latter allows for a cleaner display, using
   anti-aliased lines and lettering after scaling or rotations.
   
   http://www.ling.ed.ac.uk/%7Eoliphant/glmame/
   ______________________________________________________________________
   
Crystal Space 0.11

   Crystal Space is a free and portable 6DOF 3D engine based on portal
   technology.  The latest version supports colored lights, mirrors,
   transparent textures, reflecting surfaces, optional BSP trees, 3D
   triangle mesh sprites (currently limited), mipmapping, a scripting
   language, static shadows, dynamic lights (but with no shadows), and
   more.
   http://crystal.linuxgames.com/
   ______________________________________________________________________
   
GdkRgb 0.0.7

   GdkRgb is a rewrite of the image rendering subsystem of Gtk+.
   Advantages over plain Gtk+ 1.0.x include higher speed, very smooth and
   pretty dithered modes, and support for more displays and visuals. It
   is currently checked into development versions of Gtk+ (and used in
   the development tree of the Gimp), but is also packaged separately for
   application authors who want to maintain Gtk 1.0.x compatibility. The
   programming interface is quite simple.
   http://www.levien.com/gdkrgb/
   
   ______________________________________________________________________
   
Blender 1.37

   Being the in-house software of a high-quality animation studio,
   Blender has proven to be an extremely fast and versatile design
   instrument.  The software has a personal touch, offering a unique
   approach to the world of three dimensions.  Use Blender to create TV
   commercials, make technical visualizations and business graphics, do
   some morphing, or design user interfaces.  You can easily build and
   manage complex environments.  The renderer is versatile and extremely
   fast.  All basic animation principles (curves & keys) are well
   implemented.
   
   Version 1.37 adds UV Mapping for NURBS as well as bug fixes.
   http://www.neogeo.nl/blender.html
   
   
kvideogen 1.1

   KVideoGen allows for easy generation of Modelines, as used by XFree86
   to determine your refresh rate, resolution, etc.  It will allow you
   to use higher refresh rates and resolutions different from the
   'standard' ones offered by the usual X setup utilities.  Note: Read
   the docs on the website.  This program can damage your hardware.
   Handle with care.
   http://www.rikkus.demon.co.uk/
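   Incidentally, the arithmetic behind a modeline's refresh rate is
   simple: the vertical refresh is the pixel (dot) clock divided by the
   total frame size, i.e., horizontal total times vertical total.  A
   small sketch (the sample numbers are the standard VESA 1024x768 at
   60 Hz timings, not anything kvideogen-specific):

```python
def refresh_rate(dot_clock_mhz, htimings, vtimings):
    """Vertical refresh in Hz = pixel clock / (htotal * vtotal).

    htimings and vtimings are the four modeline numbers:
    (display, sync start, sync end, total).
    """
    htotal = htimings[3]
    vtotal = vtimings[3]
    return dot_clock_mhz * 1e6 / (htotal * vtotal)

# Modeline "1024x768" 65.0  1024 1048 1184 1344   768 771 777 806
hz = refresh_rate(65.0, (1024, 1048, 1184, 1344), (768, 771, 777, 806))
print(round(hz, 1))  # roughly 60.0
```

   This is also why the warning above matters: a modeline whose computed
   refresh exceeds what your monitor can handle is exactly the kind of
   thing that can damage hardware.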
   ______________________________________________________________________
   
PhotoShow 0.1

   PhotoShow is a simple Perl script that allows viewing, zooming, and
   adjustment (brightness/contrast/gamma) of images. It also has
   slideshow capability and is amazingly fast thanks to Imlib.
   http://www.verinet.com/~devious/PhotoShow.html
   ______________________________________________________________________
   
WebGFX - A New Gimp-based NetFu Site

   This is a very nice Net-Fu site.  The design is quite artistic
   although the options available for logo generation from Log-O-Mat are
   a little limited (no foreground/background color, pattern or gradient
   specifications permitted).  The Try-O-Mat is more configurable.  The
   difference is probably due mostly to the limitations in the generic
   logo Script-Fu scripts that the site is using.
   http://www.webgfx.ch/
   ______________________________________________________________________
   
JMK-X11-Fonts

   The jmk-x11-fonts package contains character-cell fonts for use with
   the X Window System. The current font included in this package is
   NouveauGothic, a pleasantly legible variation on the standard fixed
   fonts that accompany most distributions of the X Window System. It
   comes in both normal and bold weights in small, medium, large, and
   extra-large sizes. Currently only ISO-8859-1 encoding is available.
   http://www.ntrnet.net/~jmknoble/fonts/jmk-x11-fonts
   ______________________________________________________________________
   
KuickShow 0.5

   KuickShow is a fast, comfortable, and easy-to-use image
   viewer/browser, like ACDSee for the Windows environment.  It is based
   on Rasterman's Imlib and is therefore pretty fast at showing images.
   You can browse all the images in a filebrowser and display as many of
   them as you like at the same time.  KuickShow can zoom and flip
   images, as well as move an image within its window if it is too large
   to fit.
   http://kisdn.headlight.de/
   Editor's Note:  beware the popup for kISDN at this page, though.
   ______________________________________________________________________
   
Serious3D Magazine hosting contest - win a new Alpha!

   The bi-monthly magazine is offering 3D artists a chance to win a new
   Alpha computer (preloaded with semi-useless software, but Linux users
   know how to deal with that).  They run a contest for each issue of
   the magazine.  The contest is open to anyone and is not specific to
   any OS or software.  In fact, they specifically encourage users of
   any software to enter, even if it's not a high-end, high-dollar
   package.  The only requirement is that you be a subscriber to the
   magazine.
   Interesting trade-off, but if you like the magazine you have nothing
   to lose.  Take a look at the Web site for more details:
   http://www.serious3d.com/winanalpha.html.
   ______________________________________________________________________
   
Binary versions of xfsft plus additional tool

   A Linux glibc2 ia32 (Intel x86) binary of xfsft-1.0 is available.  The
   binary is provided as a gzipped ELF executable dynamically linked
   against glibc2.  The URL is:
   http://www.darmstadt.gmd.de/~pommnitz/xfsft-1.0-glibc.gz  To find out
   more about xfsft, you can read Juliusz Chroboczek's xfsft Web site at
   http://www.dcs.ed.ac.uk/home/jec/programs/xfsft/.  Example screenshots
   of Netscape under X using TrueType fonts are available at
   http://www.darmstadt.gmd.de/~pommnitz/xfsft.html.
   
   Additionally, to complement xfsft, there is a small tool that
   automatically creates a fonts.dir file for TrueType fonts.  It is
   available from http://www.darmstadt.gmd.de/~pommnitz/ttmkfdir.tar.gz.
   The distribution package contains a ttmkfdir binary for Linux/glibc2
   (Intel).
   ______________________________________________________________________
   
MpegTV Player 1.0.7.0

   MpegTV Player is a realtime MPEG Video+Audio player that runs on Linux
   and other Unix platforms.  It supports network streaming and VideoCD,
   and uses hardware acceleration when supported by an XIL library
   (Solaris Sparc).  It runs on x86, PowerPC, Alpha, MIPS, and HPPA.
   
   MpegTV Player is now able to stream MPEGs directly from a URL, and
   HTTP/FTP support has been added.
   http://www.mpegtv.com/download.html
   ______________________________________________________________________
   
Did You Know?

     ...A new objects collection, called simply "POV Objects", is now
     available for POV-Ray users.  See http://povobjects.fsn.net/
     
     ...the September issue of Digital Video (www.dv.com) has a very
     good article on the availability of stock images on CD.  These
     images run the gamut in prices, but one place which is recommended
      is Corel's huge collection of stock photos.  See
      http://www.corel.com/products/clipartandphotos/photos/index.htm
      for information.  The only problem is their web site doesn't make
      it very easy to order the CDs.  The Super 10 Packs are supposed to
      offer 1000 PhotoCD images for only $39.95.  Not bad, and you can
      view all the images (with watermarks) online.  It's just not
      obvious how to order them!  I did manage to find them at
      MicroCenter, but CompUSA did not seem to carry the Super 10 Packs.
      They did have other Corel CD image packages, however.
     
     ...issue #1 of Serious 3D, which I saw at the local Barnes and
     Noble, had excellent articles on texturing and modeling "creatures"
      (see http://www.serious3d.com/ for their web site). However, a
      notable omission from all of the creatures was... hair.  They all
      had scales, etc.  Hair is tough.  I think the best results (see,
      for example, some of the furry examples in recent IRTC rounds)
      come from image maps. -- from Dan Connelly on IRTC-L
     
   New Gimp Plug-Ins announced this past month:
   
   I have the pleasure of announcing a new plug-in for the GIMP.  It is
   called 'cam' and allows the GIMP to read CAM files directly.  Those
   files are the ones stored in Casio QV-* digital cameras, which you
   can dump using QVplay, for instance.  I am afraid this plug-in is of
   no use for people who do not possess one of those little toys,
   though.
   
   URL: http://www.mygale.org/~jbn/qv.html
   Jean-Baptiste <jbnivoit@ix.netcom.com>
            ____________________________________________________
   
   wind - similar to what comes with Photoshop
   jigsaw - as in puzzle
   diff - produces an output image based on its two input images
   duplicate - just a quick way to copy an image and all its layers
   Screenshots, more info, and source are available at:
   Nigel Wetten <http://www.cs.nwu.edu/~nigel/gimp/shack.html>
   ______________________________________________________________________
   
   More Did You Know...
   
   
   
      ...Issue #37 of Design Graphics has an explanation of high-end
      graphics boards and AGP vs. PCI on pg 67.  Very good article.
   ______________________________________________________________________
   
Q and A

   Q:   I want to place a block of text with evenly single-spaced lines
   using some arbitrary font onto my Gimp image.  Rather than doing it
   line by line with the Text Tool, is there an easier way?
   
   A:  Yes.  Use the ASCII 2 Image script:
   Xtns->Script-Fu->Utils->ASCII 2 Image
       
   or
   Script-Fu->Utils->ASCII 2 Image Layer
       
   The former is available from the Toolbox, the latter from an Image
   Window.  Both of these options run a Script-Fu script that reads in a
   text file and turns it into one or more layers using the font you
   specify.  If your installation does not have this script, check the
   Plug-In Registry.
   
   Q:  A Gimp-User mailing list member asked - A few months back someone
   posted a method (maybe a script) for making text look like it was
   dripping, as if it had just been painted on and the paint was still
   wet.
   
   A:  Alan F. Ho responded:  Perhaps the page you are thinking of is:
   http://www.gimp.org/tut-disp2.html.  It's a great tutorial, though I
   can't seem to make my drippy text quite as nice as JTL's.
   
   Q:  Also, if anyone knows of more "tips"-type pages beyond the links
   on the Gimp page, could you let me know as well?
   
   A:  Here are a few:
   http://abattoir.cc.ndsu.nodak.edu/~nem/gimp/tuts/
       http://xach.dorknet.com/gimp/gimp-tips.html
       http://tigert.gimp.org/gimp/tutorials/
       http://xach.dorknet.com/gimp/tutorials/
        http://luthien.nuclecu.unam.mx/~federico/gimp/title-../gx/hammel/index.html
       http://members.tripod.com/~shepherdess1/Gimpmanual_omslag.html -
       Besides being a great manual, the GUM has "tips" too!
       http://www.cooltype.com/ - Some interesting non Gimp specific tips
       here.
       Thanks to Alan for this information.
       
   
Reader Mail

   Alligator Descartes contacted the IRTC Administrators with the
   following email:
   Hi. I was wondering if the IRTC Admin Team would be interested in
       Arcane Technologies giving out some personal use licenses of
       Magician, our Java OpenGL interface, as prizes for the next round
       of the IRTC?
       If this is of possible interest to you, please get in touch with
       me. The appropriate blurb on Magician is at:
        http://www.arcana.co.uk/products/magician
       We're beginning a fairly intensive period of POV tools conversion
       and building with Magician which will be distributed as freeware
       in the not too distant future.
       
   'Muse:  My reply to Alligator was as follows:  I'm actually contacting
   you on a side note.  I write the Graphics Muse column for the Linux
   Gazette and maintain the list of graphics tools for Linux/Unix systems
   on my web site (www.graphics-muse.org, which is undergoing a major
   rewrite at this time).  I was curious if you've tried Magician on
   Linux platforms and, if so, what sort of success you had with it.  I'm
   still not clear on the use of the runtime and development environs for
   Java on Linux, so a little info from a commercial venture who might
   have some insight on this would be helpful to my readers.
   
   And his reply to me follows:
    Magician supports Linux, both libc and glibc variants, on a bunch of
        the JDK ports (except JDK 1.1.6, which seems hopelessly busted
        in many places).  We're in the process of porting to Kaffe and
        the OpenGroup JVM as well for Linux.  MkLinux support in the
        near future is planned, as is SparcLinux.  Basically, we support
        Linux.  It runs pretty fast even though it's using the slightly
        slow Mesa OpenGL-alike implementation, and supports hardware
        acceleration where Mesa supports it, typically on Voodoo
        Graphics accelerators.
       
   'Muse:  I did notice the note on portability, but Linux was
   specifically mentioned so I thought I'd ask.
   Yup. The identical Java code is supplied for Windows95/98/NT, Linux,
       Irix, Solaris, OS/2, AIX and MacOS so far. BeOS ports will happen
       when Be supply a JVM that we can write to. So, it's pretty damn
       portable!
       
   Sudhakar Chandrasekharan wrote:
   I am a regular reader of your column in the Linux Gazette.  I have a
       tip for you about a JavaScript debugger for Linux.  I have it from
       a reliable source that starting with Netscape Navigator /
       Communicator 5.0 a JS debugger will be available for Linux.
       
   I just thought I'd let you know.
       
   'Muse:  Many thanks for the heads up on this Sudhakar!
   
   Caminati Carlo wrote:
   At http://www.graphics-muse.org/linux/lgh.html I found some
       interesting suggestions on how to add fonts to Linux
        "Mount a DOS partition and use the wide array of True Type fonts
            available for DOS"
        I tried it and restarted the X server, but xfontsel didn't show
        the new fonts.  What do you mean exactly by "use the wide array
        of True Type ..."?
       
    'Muse:  Under the X Window System (i.e., all Unix systems), the X
    server usually only understands bitmap fonts and Adobe Type 1
    fonts.  In order to use TrueType fonts you need what is called a
    font server.  This is a special daemon that runs alongside the X
    server and can tell the X server how to render the TrueType fonts
    (that's an oversimplification, but it's about right).  There are
    three possible font servers that you can consider:
     1. xfstt
     2. xfsft
     3. Caldera's font server in their commercial distribution of Linux
       
   The first two are freely available.  The latter is only available (or
   was available, I haven't checked on it in quite some time) with the
   Caldera distributions of Linux.
   Carlo:  I have a RedHat 5.0 box
       
    You probably want to look at xfsft or xfstt.  There are links to
    these in September's Graphics Muse column in the Linux Gazette:
    http://www.linuxgazette.com - look in the September 1998 issue for
    the Graphics Muse column, or try
    http://www.graphics-muse.org/muse/muse.html, which is where I keep
    archived copies of my column.
   
   The links are in the section of the column titled Did You Know?.
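   If you do go the xfstt route, the basic setup is sketched below.  This
   is only a sketch under assumed defaults:  the DOS device name, font
   directory, and port number are assumptions based on xfstt's usual
   defaults and may well differ on your system.

```shell
# Sketch only: the DOS device name, the font directory, and the port
# number are assumptions -- check the xfstt documentation on your
# system before relying on them.
mount -t msdos /dev/hda1 /mnt/dos              # mount the DOS partition
cp /mnt/dos/windows/fonts/*.ttf /usr/ttfonts   # copy the True Type fonts
xfstt --sync                                   # (re)build xfstt's font database
xfstt &                                        # start the font server
xset fp+ unix/:7101                            # add it to the X server's font path
xset fp rehash                                 # make the X server re-read the path
```

   After that, xfontsel should show the new fonts.  Note that xfsft is
   configured differently, so these commands apply to xfstt only.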
   
   Andrew Kuchling <akuchlin@cnri.reston.va.us> suggested this:
   Sometime, you might want to take a look at the Python Imaging Library,
       maintained by Fredrik Lundh. See
       http://www.pythonware.com/library/pil/handbook/overview.htm for
       the manual.  PIL lets you read in graphics files in a bunch of
       different formats, perform various operations on them, and write
       them out again.  For example, I wrote a SANE interface for PIL,
       and use it in a code snippet like this to grab an image, resize
       it, and write it out to a .jpg file:
       
         self.camera = sane.open('dmc:/dev/camera')
         self.camera.imagemode = 'Full frame'
         self.camera.shutterspeed = 16
         ...
         image = self.camera.snap()
         image = image.resize( (self.image_width, self.image_height) )
         # Convert from 24-bit colour to an 8-bit palette
         image = image.convert( 'P' )
         # The quality factor ranges from 0 to 100, with the default
         # being 75.  The documentation for libjpeg says that 95 is about
         # as high as you want to go; higher values increase the
         # image size but don't affect quality significantly.
         image.save( 'foo.jpg', 'JPEG', quality=95)
       It's more powerful than gd, because you're not limited to GIF
       format, but can also handle JPEG (if you have libjpeg installed),
       PNG, and various other formats.
       
   'Muse:  My only objection to doing a review of PIL is that I don't
   know Python.  As it is I'm behind the curve on languages.  I just
   picked up Perl and want to learn Java and Tcl/Tk (I'm a GUI programmer
   by trade, and these are tools I hear requests for in potential jobs).
   Plus I have to learn Scheme in order to offer tips for Gimp developers
   (another reason to learn Perl and Tcl, since these also have scripting
   extensions for Gimp).  Python is Yet Another Language and it's hard to
   find the time to learn them all.
   
   However, I'll put it on my list of things to do.  If you'd like to
   write a review for this package and have it included in the Graphics
   Muse column (with full credit to you, of course) feel free to send it
   my way.  I'll make sure it gets included (I may edit it a little to
   make sure it reads well, but that's about it).
   
   Michal Jaegermann <michal@ellpspace.math.ualberta.ca> wrote to take a
   minor issue with last month's Perl advice in the Muse:
   I have a small issue with your advice on Perl which you dish out in
       your Graphics Muse in issue 32 of Linux Gazette.  You write:
       
        "The ampersand is important - you should always prefix calls to
            your subroutines with the ampersand.  Although things may
            work properly if you don't, proper Perl syntax suggests the
            results can be unexpected if you don't use the ampersand."
       Quite to the contrary!  The above was indeed valid for an obsolete
        Perl 4.  Nowadays this is straight from 'man perlstyle', which
        undoubtedly you have installed on your machine and which is
        worth reading:
       
        Call your subroutines as if they were functions or list operators
            to avoid excessive ampersands and parentheses.
        Things not only "may work properly" without this ampersand but
        are guaranteed to work if you either defined or declared your
        subroutines before the first use; ampersands are really retained
        for backwards compatibility.  Prevailing practice among people
        who really know Perl is to avoid spurious ampersands to an even
        greater degree than the quoted documentation may suggest.  See,
        for example, the Perl tutorials on Randal Schwartz's web page
        (www.stonehenge.com). This implies that if you cannot, or do not
        want to, define your subroutines early then you should declare
        them (and "use strict").  One reason is that if you should
        happen to reimplement your subroutine as a function provided by
        a new module you would be hunting for those pesky ampersands all
        over the place.
        Nobody will run you out of town for an excessive use of
        punctuation in Perl code - if these are your private kinks.  But
        claiming in widely published material that one should do that,
        instead of presenting it as an unhealthy personal habit, is a
        totally different matter.
       
   'Muse:  You're obviously better versed in Perl than I, so I bow to
   your recommendations here.  I had wondered why the ampersands didn't
   seem necessary (I had left them off initially for some routines which
   were not previously declared).  I also thought they seemed rather
   unwieldy and wondered why a language such as Perl, which I am quite
   fond of after my first few weeks of working with it, would use such a
   syntax.  Your response clarifies the situation for me.  Many thanks
   for your letter.
   
   However, I would like to address a few points about your reply.
   First, I don't have the perl documentation installed.  I did install
   Perl 5 binaries at one point, but I don't (currently) run Perl at home
   - I run it on my Web server, whose Perl installation is handled by the
   commercial Web server provider (vservers.com).  I ran "man perlstyle"
   but it died trying to display the page for unknown reasons.  Same
   thing with any of the man pages I tried for Perl on that system.  So
   my sources at the time the article was written were the two documents
   I listed:  Programming Perl by Wall & Schwartz and the Official Guide
   to Programming with CGI.pm by Stein.  The former is where I got the
   information about using ampersands for subroutines.  Perhaps this is
   an outdated document - although I had just purchased it from Borders
   Books, its print date appears to be 1992!  Still, it's all I had.
   Yes, the Perl archives have documentation too, but I also have
   deadlines.  The problem with writing articles (I've slowly
   discovered) is choosing between reaching a certain level of expertise
   and actually getting something out to my readers.  In this case, I
   just happened to be working with Perl, so that's how I chose to write
   about Perl.  In fact, it's pretty much how every month's article gets
   written:  whatever I happened to be working on that month.  But it
   limits how much of an expert I can become before I have to start
   writing.  It's not a very good excuse, but it is the reality of
   trying to do this column.
   Writing is much more work than I had expected.
   
   But, "unhealthy"?  Hmmm.  The excessive use of ampersands doesn't seem
   to have affected my current bench press max....
   
   Douglass Turner <turner@redballpro.com> wrote:
   
      I've recently started reading your "Graphics Muse" column.  Lots
      of good stuff. I'm a 3D graphics guy and I'm looking for code to
      read/write 3D models into/out of the rendering system I wrote.
      Have you any idea where I should be looking?
     
   'Muse:  Take a look at Keith Rule's text 3D Graphic File Formats: A
   Programmer's Reference.  This is not a Unix package/text, but he has
   source code for reading and writing many file formats.  He says in the
   book (last time I read it, which was some time back) that it hasn't
   been ported to Unix but he doesn't know why it wouldn't port easily.
   You can find a little more info on the text on his Web site.
   ______________________________________________________________________
   
   [INLINE]
   
Visual DHTML from Netscape

   Last month I came across an announcement that Netscape had released a
   graphical interface for designing Dynamic HTML, otherwise known as
   DHTML.  DHTML is the next phase in the evolution of HTML and allows
   for more animated and configurable Web pages using a programmatic
   interface (as opposed to using, for example, the animation features of
   the GIF image file format).  With DHTML and JavaScript you can
   implement such features as drag and drop, menus and scrolling text
   subwindows.  Netscape's tool for supporting DHTML is known as
   VisualDHTML.  Although not supported officially, I thought it would be
   interesting to explore the features and problems of this new product
   as a way of getting a little more exposure to one of the Web's latest
   markup languages.
   
   Where do you get it?
   
   VisualDHTML, which I'll shorten to VDHTML for this article, is
   actually a tool written entirely in DHTML.  It is available from
   Netscape's Web site.  Since it is written in a form of HTML you can
   actually run it across the network, but you may find it more
   convenient to download the complete package from their web site to
   your local hard disk.  In the tests I ran I found that the performance
   was significantly better running locally.
   
   The download page for VDHTML is the same as the index page in the
   package you download.  The download file is a zip file which you can
   save to any local directory.  Use the Linux (or equivalent) "unzip"
   command to unpack the files, which will be placed in a newly
   created directory called "visual".
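   For example, assuming the zip file was saved to /tmp (the filename
   used below is hypothetical -- substitute whatever name the download
   actually arrived under):

```shell
cd /tmp                  # the directory the zip file was saved to
unzip visual_dhtml.zip   # hypothetical filename; use the one you downloaded
ls visual                # the files land in a newly created "visual" directory
```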
   
   The only prerequisite for running VDHTML is that you have a browser
   that supports JavaScript 1.2.  That fairly well eliminates all
   browsers except Netscape Communicator 4.06 or the latest 4.5 beta
   releases of Communicator.  If you don't have one of these, you may
   want to skip the rest of this article.  Also, although you are
   supposed to be able to run this on your local system, attempting to
   run the application without being connected to the Net or by using
   local URLs seemed to cause unexpected behaviours:  drag and drop no
   longer worked, widgets did not become visible in the preview window,
   etc.  I suggest, during your experimentation, that you only run this
   early version while connected to the Net, if possible.
   
   What does it look like?
   
   Once you've unpacked the package you simply need to open the
   index.html file to get started.  For example, if you unpacked the zip
   file in the /tmp directory you can type the following in the Location
   field of the Netscape browser:
   
     file:/tmp/visual/index.html
     
   The "file:" prefix is not actually necessary, but if you're
   unfamiliar with accessing files this way you might use it until you
   get used to this sort of URL.  On the index page you'll
   find a link to Launch Visual DHTML.  Just click on this and a small
   window will open announcing that the application is starting.  For the
   sake of this article we'll refer to this window as the VDHTML Main
   Window.  Once the page starts it looks pretty much like any other
   application.  However, it's really just another Web page!  This is the
   first bit of magic to learning about DHTML.  The pages they create can
   look like real applications.  Note that the VDHTML page can take a
   while to load, even from a local hard drive.
   
   Before we get too far I should note that VDHTML is relatively buggy at
   this point.  If you use it just right it works fine, but straying from
   the straight and narrow (i.e., not using it just right) can cause
   Netscape to crash.  I'll point out the caveats that I know about as I
   go.
   
   
                                  [INLINE]
                  Figure 1 - The Visual DHTML Main Window
                                      
   The New Page dialog opens when you start the application for the first
   time.  It's not obvious, but that dialog lives within the VDHTML
   window.  It cannot be moved outside the borders of that window.
   Figure 2 shows what happens when you try to do so.
   
   
                                  [INLINE]
      Figure 2 - Dialogs don't exist outside of the application window
                                      
   The four options in the New Page dialog allow you to select the size
   of a new browser window to open.  This new window will be used to
   preview your DHTML page and allow you to make edits by dragging and
   dropping DHTML components around the preview.  Of the four options
   provided, the Normal Window will probably be the most useful.  Its
   window is about 3/4 the size of my display, which gives it a
   resolution of roughly 950x750 pixels.  The Normal and Full Screen
   windows provide the familiar menu bars you normally see in your
   Netscape browser windows.  The Kiosk window is smaller than these and
   does not provide those menus.  That means to close the Kiosk window
   you have to use the window manager Close option.  Be certain you use
   "Close" and not "Destroy" (assuming you use a flavor of FVWM) since
   Destroy will exit Netscape completely and you'll have to start over.
   
   The Desktop option opens a window that will stay underneath all your
   other windows and acts like an interactive background image, except
   that it's not "sticky", meaning it doesn't follow you around to other
   desktops (again, assuming you have a window manager like FVWM or
   CDE/mwm that allows multiple virtual desktops).
   
   Once you've opened your New Window you are ready to start adding DHTML
   components to it.  VDHTML comes with a set of predefined widgets that
   you can add to your page.  Clicking on the Widgets icon in the menu
   bar of the Main Window will open the Widgets dialog (see Figure 3).
   Note that you may need to click and hold the left mouse button over
   the Widgets icon longer than you might normally in order to get the
   dialog to open.  At least I did on my system.  Also, when you click on
   a widget name in the dialog you need to hold the mouse button down
   until after the dialog is closed.  Then release the mouse button. If
   you don't do it in this order the configurable parameters for the
   widget will not be shown and you won't get the widget in the preview
   window.  Clicking in the Widgets dialog and releasing the mouse button
   before the dialog closes will simply close the dialog.
   
   [INLINE] Ok, so you've got the Widgets dialog opened.  Notice that the
   dialog is actually labeled "Components Palette" - a bit of
   inconsistency from Netscape, but that's to be expected with any first
   release of a product.  The available widgets are listed in a table,
   below a set of three options which act something like tabs in a
   notebook widget.  The first tab is the widgets tab, which provides
   components like menus and buttons and clocks.  The next tab is for
   setting specific HTML tags.  The last tab is for setting link
   properties.
   
   Bug:  don't try to access the Tags option in the Widgets dialog
   without a preview window open.  Doing so will crash Netscape.  In
   fact, the Widgets dialog in general seems to cause Netscape crashes at
   random.
   
   The available widgets include some unusual components, such as the
   drawer widget.  This option creates what appears to be a small button
   that, when pressed, opens a drop down menu.  This button can be placed
   anywhere in the page but seems to want to be anchored only to window
   edges.  I'm not sure if that's intentional or a bug in VDHTML.  Also,
   the default image for the drawer (the small button) can be changed to
   any image you want when you configure the widget.  Figure 4 shows the
   configuration options for the drawer widget.
   
   
                                  [INLINE]
           Figure 4 - Configurable options for the Drawer Widget
                                      
   Any of the components you add can be dragged around the preview window
   (except the marquee, which must be positioned using its configurable
   parameters before it's added).  When you drag a component to a new
   location it causes the preview page to be reloaded.  Remember - that
   page is a form of HTML, so all the links have to be resolved again.
   If those links are across a network (as they are likely to be if you
   followed my suggestion of trying this initial version only while
   connected to the Net) then page reloads may take a little while.  Be
   patient.
   
   One exception to dragging is the marquee widget.  This widget creates
   a window that drops down (or comes in from the sides or up from the
   bottom of the browser window) with an image or text, sort of like an
   animated menu.  But you can't drag marquees in the preview window.
   You have to specify the direction from which the marquee will enter
   the browser when you create it by using the configurable options.
   Apparently the marquee will always be on the left side of the preview
   window but as always you can edit the source later to move it to
   another location and have it enter the browser appropriately from any
   point.
   
   Bug:  While experimenting with the various widgets I discovered that
   they often didn't perform as expected in the preview window.
   Sometimes I could open a drawer, for example, but not close it.
   Buttons would post a menu but then I couldn't clear it.  It's clear
   that the widgets' functionality and their interaction within the
   preview window are still to be worked out.
   
   With all widgets the VDHTML Main Window offers configurable
   parameters.  The defaults for those options which require a URL point
   to Netscape's site.  This isn't a problem but you should keep it in
   mind if you take the default option values.  If you decide to use the
   defaults (remember:  be online if you do so or VDHTML might crash
   Netscape!) you can edit the HTML document by hand later and use your
   own URLs.
   
                                  [INLINE]
                   Figure 5 - Default Marquee Widget
   
                                   <More>
   ______________________________________________________________________
   
   
   Musings
   [INLINE]
   
Working with X Input and Wacom Tablets

   What is X Input?
   
   To quote from the X Input Howto:
   
     The XInput extension is an extension to X to allow the use of input
     devices beyond the standard mouse and keyboard. The extension
     supports a wide range of devices, including graphics tablets,
     touch-screens, joysticks, and dial-boxes. The most common use is
     probably for graphics tablets.
     
   For many readers of the Muse, X Input is how you'll want to interface
   with the Gimp.  Outside of the Gimp there are only a few other tools
   that currently make use of the X Input extension.  However, as
   graphics tools on Linux mature, there will be a much greater need for
   these sorts of extra input devices.  Later, after we cover some
   configuration and testing issues, we'll restrict our application
   discussion to the two tools you are most likely to use with X Input:
   Gimp and gsumi.
   
   What X servers support X Input?
   
   X Input is reported to be supported by all three of the major X server
   vendors: XFree86 (which includes SuSE since they work so closely in
   their X server development), Xi Graphics, and MetroLink.  Xi Graphics
   sent me their latest server, 4.1.2, to try for this article.  I also
   downloaded the 3.3.2 XF86_SVGA server for use with my Matrox
   Mystique.  I neglected to contact MetroLink in time to ask for a copy
   of their server, unfortunately.  An email I received from MetroLink
   back in March stated that their 4.3 server includes support for
   dynamically loadable X Input driver modules.  This includes Elo
   Graphics, Carroll, Micro Touch and Lucas/Deeco touch screens.  They
   also mentioned plans for support of Wacom tablets and 3D input devices
   such as the Space Orb but I don't know if this support has been
   released yet or not.  I also don't have any information on how devices
   would be configured to work with their X Input drivers.
   
   In testing the two servers I did have, I was successful in getting
   only one of them to work, XFree86's XF86_SVGA server.  I have to thank
   Owen Taylor for his helpful hints and suggestions in getting that
   server up and running with X Input.  Most of the information I'm going
   to provide came with clarifications from Owen.
   
   The Xi Graphics server does list X Input as a supported extension,
   both in the documentation and from the xdpyinfo program.  However,
   there is no information available on how to get that extension to
   recognize and work with any particular devices.  It may be possible to
   use the gxid daemon, a daemon program which comes with the Gtk+ source
   distribution, to work with this server but I was unsuccessful in doing
   so.  I contacted Xi Graphics about this and the last I heard they are
   still looking into it.  I haven't heard if they had any more success
   than I did.
   
   Since I was only able to get one server to work with X Input, the rest
   of this article will focus on that server.  If I get feedback from any
   one, vendors or users, on getting the other two servers to work with X
   Input I'll write up an update here in the Muse.
   
   What devices are supported?
   
   The XFree86 support of X Input includes drivers for the following
   devices:
     * Wacom devices:
          + ARTZ II; in Europe this is currently known as the UltraPad,
            but the older tablets also called UltraPad (but a different
            tablet, apparently) only partially work.
          + ArtPad II
          + PenPartner, but only with 3.3.2 servers and modules
          + PL300, which is the combined LCD screen and tablet
       
     * Summagraphics, which is actually CalComp (see
       www.summagraphics.com)
           + The only tablet specifically listed was the DrawingSlate
             II.  This came from someone who patched the Summagraphics
             driver to work with this CalComp tablet.  I didn't find any
             other information regarding other specific tablets.
       
     * Joysticks are supported but I didn't try this nor do I have any
       information on what joysticks are known to work.
       
   The new Wacom Intuos line, which is Wacom's latest line of tablets, is
   not yet supported.  It is unclear, according to Owen, whether or not
   drivers will become available for these devices.
   
   Requirements for making use of the XFree86 X Input support
   
   I have a Matrox Mystique card with 4Mb of memory which I've been using
   for about 2 years now.  This card is still on the market and will cost
   you roughly $100US or less depending on where you purchase it.   Along
   with this I'm using a Wacom PenPartner, a 4"x5" tablet that sells for
   about $79US.  This is the low end tablet from Wacom.
   
   The Matrox card is supported by the XF86_SVGA server (see the
   Resources section at the end of this article).  X Input support in
   XFree86 has been available since the 3.3.1 release (at least,
   perhaps longer).  Most Linux users will probably have either the 3.3.1
   or the latest 3.3.2 servers if they use any distribution that is less
   than 2 years old.
   
   [INLINE]
   
    No other musings this month.
   [INLINE]
   Along with the servers you also need to make use of one or more
   loadable modules.  If you are like me and use the PenPartner tablet
   then you need to make sure you have the 3.3.2 version of the
   xf86wacom.so module.  The 3.3.1 version of this module does not
   support the PenPartner but should work fine for other Wacom tablets.
   
   If you have the 3.3.1 version of XFree86, you can download the
   particular server you need and the X3323bin.tgz file, which contains
   the binary versions of the 3.3.2 modules (plus other tools).  You can
   find links to these packages from the XFree86 web site.  You might
   wonder if you can run your older 3.3.1 libraries with the newer
   3.3.2 servers and modules.  The answer is yes, you can.  You don't
   have to update all your libraries, development tools, and X
   applications (the tools under /usr/X11R6/bin) although you can if you
   want a full 3.3.2 update.
   
   Along with the server and modules there are a couple of other tools
   you'll want to make sure you have:
     * Configuration tools:
          + xinput
          + xsetpointer
     * Graphics tools:
          + gsumi
          + Gimp
       
   The xinput program shouldn't be confused with the generic term X
   Input.  The program is a little tool written to set various parameters
   for the device you are using with the X Input server extension.  This
   includes things like mapping pen buttons to mouse buttons and so
   forth.  The xsetpointer program is used to set the pointer to a given
   device, but with the configuration described in this article you
   shouldn't need to do this:  both pen and mouse should work as your
   pointer device at all times.
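   As a rough illustration of the two tools (the device name "Wacom"
   here is an assumption matching the DeviceName used in the
   configuration discussed below, and the exact subcommand set varies
   between xinput releases, so verify these against your man pages):

```shell
xsetpointer -l                      # list the X Input devices the server defined
xinput list                         # show each device and its capabilities
xinput set-mode Wacom ABSOLUTE      # put the tablet in absolute-coordinate mode
xinput set-button-map Wacom 1 2 3   # map the pen's buttons to mouse buttons 1-3
```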
   
   Configuring the X server and hardware
   
   In order to make use of the X Input extension you need to tell the X
   server how you want it configured and which driver to load for
   the device you will be using.  XFree86's configuration file,
   XF86Config, is located under the directory /etc/X11.  Although you can
   use the graphical setup tool XF86Setup for most options, you can't use
   it to configure X Input.  You'll need to edit the configuration file
   by hand.
   
   The first thing you need to know about is which modules you'll need.
   Under /usr/X11R6/lib/modules you will find the X Input modules.  For
   Wacom tablets you'll be using the xf86Wacom.so module.  Similarly,
   SummaGraphics tablet users will want to use the xf86Summa.so modules.
   There are also modules for Elo Graphics devices (xf86Elo.so) and
   joysticks (xf86Jstk.so).
   
   To configure the module for use with the server, edit the XF86Config
   file and add the following lines:
   
     Section "Module"
        Load "xf86Wacom.so"
     EndSection
     
   Substitute the module of choice, of course.  These lines can go
   anywhere, I believe, but I placed them immediately after the Pointer
   section.  Next you need to add the section which defines the devices
   you'll be using.  According to Owen Taylor's X Input Howto there is a
   simple configuration and a more complete configuration.  We'll skip
   the simple version since it's just a subset of the complete version and
   Owen discusses it in his Howto quite well.
   
   The text to add looks like the following:
   
     Section "Xinput"
        SubSection "WacomStylus"
           Port "/dev/ttyS1"
           DeviceName "Wacom"
           Mode Absolute
           Suppress 17
        EndSubSection
        SubSection "WacomStylus"
           Port "/dev/ttyS1"
           DeviceName "WacomCore"
           Mode Absolute
           AlwaysCore
           Suppress 17
        EndSubSection
        SubSection "WacomEraser"
           Port "/dev/ttyS1"
           Mode Absolute
           Suppress 17
        EndSubSection
        SubSection "WacomEraser"
           Port "/dev/ttyS1"
           DeviceName "EraserCore"
           Mode Absolute
           AlwaysCore
           Suppress 17
        EndSubSection
     EndSection
     
   The four SubSections define different devices to X Input.  You can
   see these listed (after you start the server) by running xsetpointer
   -l.  I'm not completely certain why you have to have two entries for
   each device, but I assume the first entry is used by applications and
   the second entry allows the tablet pen to be used as your regular
   pointing device.
   
                                   <More>
                                      
   [INLINE]
   Resources
   
   The following links are just starting points for finding more
   information about computer graphics and multimedia in general for
   Linux systems.  If you have some application-specific information for
   me, I'll add it to my other pages, or you can contact the maintainer
   of some other web site.  I'll consider adding other general
   references here, but application- or site-specific information needs
   to go into one of the following general references rather than being
   listed here.
   
   Online Magazines and News sources
   C|Net Tech News
   Linux Weekly News
   Slashdot.org
   Amazon.com's Linux Book Section
   
   General Web Sites
   Linux Graphics mini-Howto
   Unix Graphics Utilities
   Linux Sound/Midi Page
   
   Some of the Mailing Lists and Newsgroups I keep an eye on and where I
   get much of the information in this column
   The Gimp User and Gimp Developer Mailing Lists.
   The IRTC-L discussion list
   comp.graphics.rendering.raytracing
   comp.graphics.rendering.renderman
   comp.graphics.api.opengl
   comp.os.linux.announce [INLINE]
   
Future Directions

   Next month:
     * Off-the-shelf video cards:  what's popular, cheap and supported by
       Linux.
     * My ramblings on having worked on the Muse for two years.  Yes,
       next month is my two-year anniversary with the Muse.  It just may
       be the longest relationship I've ever wanted to keep stable!
       
   Let me know what you'd like to hear about!
   ______________________________________________________________________
   
                                                  © 1998 Michael J. Hammel
     _________________________________________________________________
   
                    Copyright © 1998, Michael J. Hammel
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                  Heroes and Friends -- Linux Comes of Age
                                      
                              By Jim Schweizer
     _________________________________________________________________
   
             "I've found only two things that last 'til the end
               One is your heroes, the other's your friends."
                        -- Randy Travis/Don Schlitz
                                      
   Could it be that one of the reasons the Linux phenomenon is so strong
   is that it fulfills the above? Quick, without thinking, name one or
   two people you really look up to. Chances are, since you're using
   Linux, the names of Torvalds, Raymond or Stallman may have flashed
   through your mind.
   
   As members of the Linux community, we have heroes. We have people we
   can look up to. We have heroes we can look up to and still disagree
   with. Can we say the same of our physical communities, our companies,
   our nations?
   
   And what of friends? Think about the mailing lists you belong to, the
   news groups you read, and the Linux users group you belong to - who do
   you turn to when you need advice about your latest upgrade?
   
   Do commercial software and Microsoft give you the same feeling? Can
   they compete with the feeling you just had while thinking about what
   Linus has wrought and the last helpful Linux-related email you
   received?
   
   Community! That's what this is really all about. It's about having the
   best operating system, and the best software and the best support.
   It's about having the best. Period. And we know the best is still to
   come.
   
   The question is often asked, "Will Linux be able to defeat the
   marketing muscle of Microsoft?" We already know the answer. And the
   answer is being provided by the growing number of people who use Linux
   as an everyday solution to their own needs.
   
   Will there be an 'office suite'? Probably. But that's not what brought
   us to Linux in the first place, is it? So, why are you here?
   
   What makes Linux really special is the people you never hear about in
   the press. The people who patch software and give it back to the
   community - you all know someone who's done this, or helped you with a
   shell script, or guided you as you learned more about Linux. You also
   know someone who is maintaining a Linux site, writing a driver or
   volunteering in some way to bring Linux to fruition. Linux is what it
   is because thousands of people, every day, contribute in small ways to
   Linux's success.
   
   Heroes help you see a goal worth attaining. Your friends help you get
   there. When someone new to Linux asks a question, what they are really
   asking for is a friend's advice. Be there for them.
   
   So, the next time someone asks you why you are using Linux, smile and
   think, "That's how it goes, with heroes and friends."
     _________________________________________________________________
   
                      Copyright  1998, Jim Schweizer
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   
                 Linux Installation Primer: X Configuration
                                      
                               By Ron Jenkins
     _________________________________________________________________
   
   Copyright  1998 by Ron Jenkins. This work is provided on an "as is"
   basis. The author provides no warranty whatsoever, either express or
   implied, regarding the work, including warranties with respect to its
   merchantability or fitness for any particular purpose.
   
   The author welcomes corrections and suggestions. He can be reached by
   electronic mail at rjenkins@unicom.net.
     _________________________________________________________________
   
                         Part Two: X configuration
                                      
   Welcome to the second installment of the series. In this installment,
   you will configure your X server, choose a Window Manager (WM), and
   learn a few things about how the X system works. Don't worry, it's not
   as hard as you've heard, and can even be a great deal of fun, so LET'S
   GET GRAPHICAL!
   
   In this installment, I will cover the following topics:
    1. A brief introduction to the X windowing system
    2. Supported Hardware
    3. Unsupported Hardware
    4. Gathering Information about your hardware
    5. Safety concerns and precautions
    6. Starting the configuration program
    7. Configuration of the mouse under X
    8. Configuration of your video card
    9. Configuration of your monitor
   10. Testing your configuration
   11. Customization tips and tricks
   12. Troubleshooting your configuration
   13. Resources for further information
       
   While the steps needed to configure the X system are fairly
   standardized, there are some differences and peculiarities between the
   Slackware 3.5 and RedHat 5.1 versions of Linux. Where necessary, I
   will distinguish between the steps to be taken to accomplish a given
   task on each distribution.
     _________________________________________________________________
   
    A brief introduction to the X windowing system
    
   This document will cover the configuration of the X windowing system,
   XFree86 version 3.3.2-2. This is the version that ships with both
   RedHat 5.1 and Slackware 3.5. If you are using a different version of
   XFree86, your mileage may vary, although many of the steps will remain
   the same.
   
   Unlike Windows-based systems, the X windowing system is composed
   primarily of two separate and distinct components: the X Server and
   the Window Manager.
   
   The X Server is the interface between the hardware and the Window
   Manager. This is somewhat analogous, although not entirely, to the
   "video driver" in Windows. In addition to servicing hardware requests,
   it also performs several other important functions, such as managing
   all X connections to the machine, both local and remote.
   
   One of the advantages of a Unix or Linux system is the fact that it
   was built from the ground up to be a multi-user system.
   
   This gives a Unix or Linux system the ability to service, or "host,"
   many users, either locally through the use of TTY connections or
   virtual terminals, or remotely through socket-based communication
   using a variety of protocols.
   
   For an overview of the concept of remote X sessions, see my article in
   the September Issue of the Linux Gazette.
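   The mechanics of hosting a remote client can be sketched with the
   DISPLAY environment variable, which tells an X program which server to
   draw its windows on. The hostname below is made up purely for
   illustration; substitute the name of a machine actually running an X
   server.

```shell
# Illustrative sketch only: "remotebox" is a made-up hostname.
# An X client decides where to open its windows by reading DISPLAY,
# which names a host, a display number, and a screen number.
DISPLAY=remotebox:0.0
export DISPLAY
echo "X clients started now would draw on $DISPLAY"
```

   Any X program started in this shell would then attempt to open its
   windows on display 0, screen 0, of the named host (subject to that
   server's access controls).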
   
   It is important to note that the aforementioned X Server, as well as
   most of the functions it performs, occur in the background, and are
   functionally transparent to the end user. In short, it's a busy little
   beaver!
   
   The second component of the X windowing system is the Window Manager.
   This is the element of the X system that comprises the portion of the
   Graphical User Interface that you interact with. The Window Manager is
   responsible for the look and feel of your desktop, as well as the
   front-end interface to the commands and programs you run.
   
   There are many Window Managers available for Linux, and each person
   will have their favorite. It will be up to you to decide which one
   best fits your needs and preferences.
   
   Since both distributions default to FVWM95, I will confine myself to
   this Window Manager for the purposes of this introductory document.
   For further information on some of the many other Window Managers
   available, consult the resources section.
     _________________________________________________________________
   
    Supported Hardware
    
   Video Cards:
   (The following information is excerpted from the XFree86 3.3.2
   documentation, which can be found in /var/X11R6/lib/docs/README.)
   
   At this time, XFree86 3.3.2 supports the following chipsets:
   
   Ark Logic
   ARK1000PV, ARK1000VL, ARK2000PV, ARK2000MT
   
   Alliance
   AP6422, AT24
   
   ATI
   18800, 18800-1, 28800-2, 28800-4, 28800-5, 28800-6, 68800-3, 68800-6,
   68800AX, 68800LX, 88800GX-C, 88800GX-D, 88800GX-E, 88800GX-F, 88800CX,
   264CT, 264ET, 264VT, 264GT, 264VT-B, 264VT3, 264GT-B, 264GT3 (this
   list includes the Mach8, Mach32, Mach64, 3D Rage, 3D Rage II and 3D
   Rage Pro)
   
   Avance Logic
   ALG2101, ALG2228, ALG2301, ALG2302, ALG2308, ALG2401
   
   Chips & Technologies
   65520, 65530, 65540, 65545, 65546, 65548, 65550, 65554, 65555, 68554,
   64200, 64300
   
   Cirrus Logic
   CLGD5420, CLGD5422, CLGD5424, CLGD5426, CLGD5428, CLGD5429, CLGD5430,
   CLGD5434, CLGD5436, CLGD5440, CLGD5446, CLGD5462, CLGD5464, CLGD5465,
   CLGD5480, CLGD6205, CLGD6215, CLGD6225, CLGD6235, CLGD6410, CLGD6412,
   CLGD6420, CLGD6440, CLGD7541(*), CLGD7543(*), CLGD7548(*), CLGD7555(*)
   
   Digital Equipment Corporation
   TGA
   
   Compaq
   AVGA
   
   Genoa
   GVGA
   
   IBM
   8514/A (and true clones), XGA-2
   
   IIT
   AGX-014, AGX-015, AGX-016
   
   Matrox
   MGA2064W (Millennium), MGA1064SG (Mystique and Mystique 220), MGA2164W
   (Millennium II PCI and AGP)
   
   MX
   MX68000(*), MX680010(*)
   
   NCR
   77C22(*), 77C22E(*), 77C22E+(*)
   
   Number Nine
   I128 (series I and II), Revolution 3D (T2R)
   
   NVidia/SGS Thomson
   NV1, STG2000, RIVA128
   
   OAK
   OTI067, OTI077, OTI087
   
   RealTek
   RTG3106(*)
   
   S3
   86C911, 86C924, 86C801, 86C805, 86C805i, 86C928, 86C864, 86C964,
   86C732, 86C764, 86C765, 86C767, 86C775, 86C785, 86C868, 86C968,
   86C325, 86C357, 86C375, 86C375, 86C385, 86C988, 86CM65, 86C260
   
   SiS
   86C201, 86C202, 86C205
   
   Tseng
   ET3000, ET4000AX, ET4000/W32, ET4000/W32i, ET4000/W32p, ET6000, ET6100
   
   Trident
   TVGA8800CS, TVGA8900B, TVGA8900C, TVGA8900CL, TVGA9000, TVGA9000i,
   TVGA9100B, TVGA9200CXR, Cyber9320(*), TVGA9400CXi, TVGA9420,
   TGUI9420DGi, TGUI9430DGi, TGUI9440AGi, TGUI9660XGi, TGUI9680, Pro-
   Vidia 9682, ProVidia 9685(*), Cyber 9382, Cyber 9385, Cyber 9388,
   3DImage975(PCI), 3DImage985(AGP), Cyber 9397, Cyber 9520
   
   Video 7/Headland Technologies
   HT216-32(*)
   
   Weitek
   P9000
   
   Western Digital/Paradise
   PVGA1
   
   Western Digital
   WD90C00, WD90C10, WD90C11, WD90C24, WD90C24A, WD90C30, WD90C31,
   WD90C33
   
   (*) Note, chips marked in this way have either limited support or the
   drivers for them are not actively maintained.
   
   All of the above are supported in 256-color mode; some are also
   supported in mono and 16-color modes, and some at higher color
   depths.
   
   Refer to the chipset-specific README files (currently for TGA, Matrox,
   Mach32, Mach64, NVidia, Oak, P9000, S3 (except ViRGE), S3 ViRGE, SiS,
   Video7, Western Digital, Tseng (W32), Tseng (all), AGX/XGA, ARK, ATI
   (SVGA server), Chips and Technologies, Cirrus, Trident) for more
   information about using those chipsets.
   
   The monochrome server also supports generic VGA cards, using 64k of
   video memory in a single bank, the Hercules monochrome card, the
   Hyundai HGC1280, Sigma LaserView, Visa and Apollo monochrome cards.
   
   The VGA16 server supports memory banking with the ET4000, Trident,
   ATI, NCR, OAK and Cirrus 6420 chipsets allowing virtual display sizes
   up to about 1600x1200 (with 1MB of video memory). For other chipsets
   the display size is limited to approximately 800x600.
   
   Notes: The Diamond SpeedStar 24 (and possibly some SpeedStar+) boards
   are NOT supported, even though they use the ET4000.
   
   The Weitek 9100 and 9130 chipsets are not supported (these are used on
   the Diamond Viper Pro and Viper SE boards). Most other Diamond boards
   will work with this release of XFree86. Diamond is actively supporting
   The XFree86 Project, Inc.
   
   3DLabs GLINT, Permedia and Permedia 2 support could unfortunately not
   be included in XFree86 3.3.2 since there are open issues regarding the
   documentation and whether or not they were provided to us under NDA.
   (End excerpt from Xfree86 documentation.)
   
   Monitors:
   Hypothetically, any monitor you have the documentation for, that is
   capable of at least VGA or SVGA resolution, SHOULD be compatible.
   However, the following monitors are explicitly supported:
   
   Slackware 3.5:
   Standard VGA, 640x480 @ 60Hz Super VGA, 800x600 @ 56Hz
   8514 Compatible, 1024x768 @ 87 Hz interlaced (no 800x600)
   Super VGA, 1024x768 @ 87 Hz interlaced, 800x600 @ 56 Hz
   Extended Super VGA, 800x600 @ 60 Hz, 640x480 @ 72Hz
   Non-Interlaced SVGA 1024x768 @ 60 Hz, 800x600 @ 72 Hz
   High Frequency SVGA, 1024x768 @ 70 Hz
   Multi-Frequency that can do 1280x1024 @ 60 Hz
   Multi-Frequency that can do 1280x1024 @ 74 Hz
   Multi-Frequency that can do 1280x1024 @ 76 Hz
   
   NOTE: There is also an option to explicitly specify the Horizontal and
   Vertical Sync rates for your monitor if you have them available.
   
   Red Hat 5.1:
   Custom Mode (see the above description for information about standard
   modes, as well as suggestions for acquiring information about your
   monitor if the documentation is not available).
   Acer Acerview 11D, 33D/33DL, 34T/34TL
   AOC-15
   Apollo 1280x1024 @ 68Hz
   Apollo 1280x1024 @ 70Hz
   Axion CL-1566
   CTX-1561
   Chuntex CTX CPS-1560/LR
   Compudyne KD-1500N
   CrystalScan 1572FS
   DEC PCXBV-KA/KB
   Dell VS17
   EIZO FlexScan 9080i, T660
   ELSA GDM-17E40
   ESCOM MONO-LCD-screen
   Gateway 2000 CrystalScan 1776LE
   Generic Monitor
   Generic Multisync
   HP 1280x1024 @ 72Hz
   Highscreen LE 1024
   Hitachi SuperScan 20S
   Hyundai DeluxScan 14S, 15B, 15G, 15G+, 15 Pro, 17MB/17MS, 17B, 17B+,
   17 Pro, hcm-421E
   IBM 8507
   IDEK Vision Master
   Impression 7 Plus 7728D
   Lite-On CM1414E
   MAG DJ717, DX1495, DX1595, DX1795, Impression 17, MX15F
   MegaImage 17
   NEC MultiSync 2V, 3D, 3V, 3FGe, 3FGx, 4D, 4FG, 4FGe, 5FG, 5FGe, 5FGp,
   6FG, 6FGp,
   A500, A700, C400, C500, E500, E700, E1100, M500, M700, P750, P1150,
   XE15, XE17,
   XE21, XP15, XP17, XP21, XV14, XV15, XV17, XV15+, XV17+
   Nanao F340i-W, F550i, F550i-W
   Nokia 445X, 447B
   Optiquest Q41, Q51, Q53, Q71, Q100, V641, V655, V773, V775, V95, V115,
   V115T
   Philips 7BM749, 1764DC
   Princeton Graphics Systems Ultra 17
   Quantex TE1564M Super View 1280
   Relisys RE1564
   Sampo alphascan-17
   Samsung SyncMaster 15GLe, 15GLi, 15M, 17GLi, 17GLsi, 3, 3Ne,
   500b/500Mb, 500s/500Ms,
   500p/500Mp, 700b/700Mb, 700p/700Mp, 700s/700Ms
   Samtron SC-428PS/PSL, SC-428PT/PTL, 5E/5ME, 5B/5MB, SC-528TXL,
   SC-528UXL, SC-MDL, 7E/7ME/7B/7MB, SC-728FXL, SC-726GXL
   Sony CPD-1430, CPD-15SX, CPD-100SF, CPD-200SF, CPD-300SF, CPD-100VS,
   CPD-120VS, CPD-220VS
   Sony Multiscan 100sf, 100sx, 200sf, 200sx, 15sf, 15sfII, 17se, 17seII
   TARGA TM 1710 D
   Tatung CM14UHE, CM14UHR, CMUHS
   TAXAN 875
   Unisys-19
   ViewSonic 15ES, 15GA, 15GS, 17, 17PS, 17GA, 5e, 6, 7, E641, E655,
   EA771, G653, G771, G773,
   GT770, GT775, P775, PT770, PT775, P810, P815, PT813, VP140
   
   Mice:
   (Listed in order of appearance in the selection list, horizontally,
   from left to right.)
   Microsoft Standard mouse
   MouseSystems
   MMSeries
   Logitech
   MouseMan
   MMHitTab
   GlidePoint
   Intellimouse
   ThinkingMouse
   BusMouse
   PS/2
   Auto
   IMPS/2
   ThinkingMousePS/2
   MouseManPS/2
   GlidePointPS/2
   NetMousePS/2
   NetScrollPS/2
     _________________________________________________________________
   
    Unsupported Hardware
    
   If X does not directly support your video card and/or monitor, all may
   not be lost. Try choosing one of the "generic" cards and monitors that
   most closely resembles your hardware. The SVGA server is a good place
   to start if you have an unsupported card. Another possible option is
   the VGA16 server. Almost any card will run (at reduced performance)
   with one of these two servers.
   
   Another possible option is to consider purchasing a "commercial" X
   server. Two possible choices are:
   MetroX http://www.metrolink.com/
   XInside http://www.xinside.com/
   
   These commercial servers often support a wider range of cards and
   monitors, due to the willingness of the developer of the X server
   software to abide by Non Disclosure Agreements required by some card
   manufacturers. In plain English, some card manufacturers refuse to
   work with the open source community. Something to consider the next
   time you get ready to purchase a video card.
   
   Likewise, the generic VGA or SVGA monitors will usually at least get
   you up and running. However, as I have mentioned previously, DO NOT
   EXCEED THE CAPABILITIES OF YOUR CARD OR MONITOR! Otherwise, you may
   initiate what is called in the electronics world "a smoke test." This
   is a bad thing, and makes your house smell, as well as setting off
   your smoke detector.
     _________________________________________________________________
   
    Gathering Information about your hardware
    
   It is imperative that you know as much as you can about your video
   card and monitor. (You did keep those manuals and documentation didn't
   you?)
   
   If you do not have the documentation available, check the various docs
   in the /var/X11R6/lib/docs area, or search the Internet. Another
   possible option is to go directly to the manufacturer's website, if
   available, and acquire the specifications there. Finally, on some
   monitors the sync rates are listed on the back along with the model
   number and other information.
   
   Make sure, if at all possible, that your card and monitor are on the
   supported hardware list. This will save you a lot of grief and give
   you the best chance of success, as well as enabling you to take full
   advantage of the accelerated features of your video card.
   
   At a bare minimum, you should have the following information
   available:
   
   Manufacturer, make and model of your video card: e.g. Matrox
   Millennium
   Amount of RAM resident on the video card: e.g. 8MB
   Manufacturer, make and model of your monitor: e.g. ViewSonic 15E
   Horizontal sync rate of your monitor: e.g. 31.5-82.0 kHz
   Vertical sync rate of your monitor: e.g. 40-100 Hz
   
   A special note on mice: If at all possible, try to get a three-button
   mouse. X uses the middle button for some special functions. While it
   is possible to configure a two-button mouse to behave as a
   three-button mouse using an emulator that requires you to depress both
   buttons simultaneously to emulate the middle button, this feature is
   flaky at best on many mice and sometimes hard to master.
   
   Note for PS/2 mice users: It has been reported that some users
   experience problems with the behavior of a PS/2 mouse under X. This is
   almost always due to the fact that the general-purpose mouse (gpm)
   program is being loaded at boot time, and for some reason, freaks out
   X.
   
   Some have suggested adding a variety of switches or other parameters
   to the start up file that are purported to correct this problem.
   However, I have had limited success with these methods. Sometimes they
   will correct the problem, other times they will not.
   
   What does work all the time is to comment out the start up of gpm at
   boot time.
   
   On a Slackware machine, edit /etc/rc.d/rc.local and place a pound
   sign (#) in front of the lines that look similar to the following:

# echo starting gpm
# gpm /dev/mouse

   Should you find the need to use gpm while in text mode, simply type
   gpm <return> and start it manually.
   
   On a RedHat machine, from the command prompt, simply type setup
   <return>
   
   You will be presented with a dialog box prompting you to select a
   configuration tool. Select ntsysv then
   tab to the run button and press return.
   
   Scroll down the dialog box until you see an entry for gpm. Highlight
   this entry and press the spacebar to remove the asterisk (*), then
   exit.
     _________________________________________________________________
   
    Safety concerns and precautions
    
   Although the X windowing system offers infinite flexibility and
   configurability, it is very picky about what hardware it will and will
   not run on.
   
   Just as Unix or Linux will not run on marginal hardware that may work
   with Windows, X may or may not run on marginal or clone-type video
   cards and monitors.
   
   While it is possible to "hand tune" X to work with just about any
   video card and monitor, to do so is NOT RECOMMENDED. Diddling around
   with your clock settings, choosing a card or monitor "similar" to your
   equipment, or just picking horizontal and vertical synch rates at
   random can damage or destroy your video card or monitor. DON'T DO IT!
   
   The optimal configuration, and the only one I can recommend, is to
   make sure your video card and monitor are explicitly listed and
   supported by X before trying to configure and run it. While I do offer
   some suggestions for people with unsupported hardware, there is no
   guarantee these suggestions will work, nor do I offer any assurance
   that they won't damage your equipment. Proceed at your own risk.
     _________________________________________________________________
   
    Starting the configuration program
    
   Before you can actually use X, you must generate a configuration file
   that tells X about your video card, monitor, mouse, and some default
   preference information required to initialize the X environment and
   get it up and running. All of the following configuration steps will
   need to be done as root initially, then if necessary, you can create
   your own unique X configuration for each of your respective users.
   
   The method and program used to accomplish this task will depend on
   which flavor of Linux you are using.
   
   NOTE: The instructions listed below assume you are using XFree86
   3.3.2-2. If you are using one of the commercial X servers, such as
   MetroX or XInside, your configuration methods may be different. Please
   consult the documentation that comes with your commercial product.
   
   Slackware 3.5:
   The X configuration program for Slackware 3.5 is called XF86Setup. To
   start the program, at the command prompt, simply type:
   
   XF86Setup <return>
   
   You will be presented with a dialog box prompting you to switch to
   graphics mode. Select OK.
   
   After a moment, you will enter the XF86Setup screen. Along the top of
   the screen will be a series of buttons to configure the various
   components of the X windowing system. They will appear in a horizontal
   row in the following order:
   
   Mouse Keyboard Card Monitor Modeselection Other
   
   RedHat 5.1:
   The X configuration program for RedHat Linux is called Xconfigurator.
   To start the program, at the command prompt, simply type:
   
   Xconfigurator <return>
   
   Press return to get past the welcome screen, then skip to the video
   card section.
     _________________________________________________________________
   
    Configuration of the mouse under X
    
   Slackware 3.5:
   This should already have been taken care of during installation. If
   you have something other than a three-button mouse, be sure to select
   the Emulate3Buttons option for maximum functionality under X.
   
   The next option, Keyboard, should be already configured properly.
   Under normal circumstances, no adjustments should be required here.
   
   RedHat 5.1:
   This should already have been taken care of for you during
   installation. If not, break out of the Xconfigurator and run
   mouseconfig, then start over.
     _________________________________________________________________
   
    Configuration of your video card
    
   Slackware 3.5:
   Select the card option from the menu at the top of your screen. Scroll
   down and select the appropriate video card for your system.
   
   If necessary, you may also need to select the Detailed setup button to
   configure Chipset, RamDac, ClockChip, Device options, and the amount
   of video RAM on your card. Usually these options will be probed
   automatically. I only mention this so you can "tweak" the card if you
   are feeling brave.
   
   RedHat 5.1:
   The setup program will now autoprobe for your type and model of video
   card. On the plus side, this can simplify things, IF it properly
   identifies your card. On the minus side, if it does not, it offers no
   way to choose the card manually. If your card is not properly
   identified, see the unsupported card section for some general
   suggestions on things to try.
     _________________________________________________________________
   
    Configuration of your monitor
    
   Slackware 3.5:
   If you have the documentation available, you may enter the Horizontal
   and Vertical Synch rates manually in the input boxes, or alternately,
   you may choose one of the preset configurations in the scroll box.
   
   It is almost always safe to choose either the Standard VGA or Super
   VGA option to start, then work up to the specific settings and color
   resolution you desire (subject to the limitations of your hardware.)
   
   Lastly, select the Modeselection option, and choose your desired
   screen resolution and color depth. To begin with, less is better:
   start with 640x480 @ 8bpp, then work your way up.
   
   When you are finished with your configuration, select done from the
   bottom of the screen, and the setup program will attempt to start X
   with the configuration you have selected. If all goes well, you will
   be prompted to write the configuration to your XF86Config file and
   exit. If you have any problems, you will be prompted to try again
   until you have your configuration setup properly.
   
   RedHat 5.1:
   At the Monitor Setup dialog screen, scroll down and choose the
   appropriate monitor. If your monitor is not listed, choose generic or
   custom. If you choose custom, have your vertical sync rate and amount
   of video RAM handy; you will need them.
   
   You will be presented with a dialog box that contains the same monitor
   choices listed in the Slackware section. After choosing a monitor, you
   will be prompted to select your vertical sync rate. Finally, you will
   be asked to specify the amount of video RAM present on your card.
   
   After exiting the Xconfigurator program, you are ready to test your
   new configuration.
     _________________________________________________________________
   
    Testing your configuration
    
   At the command prompt, simply type startx. If all went well, you
   should shortly be on your way. If for any reason X fails to start up,
   go back and run your configuration program again, double-checking that
   you have all the proper settings.
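   Before running startx, a quick sanity check along these lines can save
   a restart. The config file locations below are the usual XFree86 3.3.x
   spots, but they are assumptions; your distribution may use others.

```shell
# Look for an XF86Config file in a few common XFree86 3.3.x locations
# (this list is abbreviated and illustrative, not exhaustive):
for f in /etc/XF86Config /etc/X11/XF86Config /usr/X11R6/lib/X11/XF86Config; do
    [ -f "$f" ] && echo "found config: $f"
done
# Confirm the startx wrapper script is actually on the PATH:
if command -v startx >/dev/null 2>&1; then
    msg="startx found on the PATH"
else
    msg="startx not found - check your PATH and X installation"
fi
echo "$msg"
```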
     _________________________________________________________________
   
    Customization tips and tricks
    
   By default, both Slackware and RedHat install the FVWM95 Window
   Manager, a Windows 95 look-alike. This is probably a good start for
   users transitioning from a Windows based environment, as it will be
   the most familiar to you.
   
   Since X is infinitely configurable, and also stunningly cryptic at
   times, an in-depth discussion of all the configuration options
   available under X is beyond the scope of this document. However, what
   follows are a few things you may be interested in.
   
   A few words about the X desktop:
    1. X allows the use of something called a virtual desktop, which is
       simply a fancy way of saying you can have a virtual desktop
       resolution that is larger than the actual resolution you have set
       your monitor to. As an example, say you have set your resolution
       to 1024x768 @ 32 bit color. X allows you to set your virtual
       desktop to 1280x1024, which some people love, and some people
       hate. If you want to disable this behavior, locate your XF86Config
       file, scroll down to the Screen sections, and look for a line
       similar to the following: Virtual 1280 1024. To disable the
       virtual screen, change this entry to the default screen resolution
       you have chosen, 1024 768 in this example. Similarly, to enable
       it, simply change it to the next higher resolution, 1280 1024 in
       this example.
    2. FVWM95, as well as the other popular Window Managers, offers a
       variety of configuration options. Experiment with them until you
       find the one you like best.
    3. Finally, depending on your distribution, you may or may not have
       other Window Managers available to you. Experiment with the
       different ones available on your system until you find the one you
       like best. My personal favorite is Afterstep, but you may find you
       can't live without one of the others. Choose the one you like
       best. Under FVWM95 on a Slackware box, choose Exit Fvwm95 from the
       Start menu, then choose the Window Manager you want to use from
       the drop down box accessed by moving your mouse to the right edge
       of the menu option, highlighting the arrow (>) that resides there.
       On a RedHat box, from the Start menu, choose Preferences/WM Style
       to change to a different Window Manager.
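   The Virtual change in item 1 can be sketched concretely. The Screen
   section below is a made-up, minimal example (a real XF86Config carries
   many more entries, and you should back it up before editing); the sed
   line shows the edit applied to a copy rather than the live file.

```shell
# A made-up, minimal Screen section for illustration only:
cat > XF86Config.sample <<'EOF'
Section "Screen"
    Driver "svga"
    Subsection "Display"
        Depth    8
        Modes    "1024x768"
        Virtual  1280 1024
    EndSubsection
EndSection
EOF
# Match the virtual size to the real resolution to switch the feature off:
sed 's/Virtual  1280 1024/Virtual  1024 768/' XF86Config.sample > XF86Config.new
grep Virtual XF86Config.new
```

   Editing the line by hand in your favorite editor works just as well;
   the point is simply that the Virtual entry and the Modes entry end up
   naming the same resolution.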
       
   Stupid X Tricks:
    1. To start an X session, simply type startx at the command prompt.
    2. If you have configured your X server for more than one screen
       resolution, say 640x480, 800x600, and 1024x768, and you want to
       switch between the different resolutions, simply press Ctrl+Alt
       and either the plus (+) or minus (-) key on the numeric keypad to
       switch to a higher or lower resolution, respectively. Why would
       you want to do this? I often do a great deal of Web design on my
       machine, and being able to quickly see what a given page will
       look like at different resolutions is quite handy.
    3. To terminate an X session, you can either exit the session using
       the appropriate menu selection for your respective Window Manager,
       or you may press Ctrl+Alt+Backspace.
    4. You may also set up your personal user accounts (you're not always
       working as root, are you?) by creating a .xinitrc file in your
       home directory, if needed. Usually, this is only necessary on a
       Slackware box. On a RedHat box, I believe this is taken care of
       for you. Check the documentation.
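   A minimal .xinitrc along these lines is enough for a Slackware-style
   setup. The script writes a local sample file rather than straight into
   your home directory, and it assumes fvwm95 and the standard X clients
   are on the PATH.

```shell
# Sample ~/.xinitrc contents; startx reads ~/.xinitrc if it exists.
cat > xinitrc.sample <<'EOF'
#!/bin/sh
# Start a clock and a terminal in the background, then exec the window
# manager last, so that quitting the window manager ends the X session.
xclock -geometry 100x100-0+0 &
xterm -geometry 80x25+10+10 &
exec fvwm95
EOF
echo "copy xinitrc.sample to ~/.xinitrc to use it"
```

   The exec on the last line matters: it replaces the shell with the
   window manager, so the session lives exactly as long as fvwm95 does.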
     _________________________________________________________________
   
    Troubleshooting your configuration
    
   Basically, there are only a few things that can go wrong with your X
   installation. Either the X server will refuse to start at all, the X
   server will start but you get a blank screen, or the X server will
   start, but for one reason or another, the screen will be improperly
   sized, flickering, or unreadable.
   
   If the X server refuses to start at all, pay close attention to the
   error messages that appear while the server errors out. Most
   frequently, this is an improperly configured monitor or card that
   causes the server to die. Check your configuration.
   
   If the X server starts, but the screen exhibits an improper size, or
   excessive flickering, you probably need to adjust your horizontal or
   vertical sync rates.
   
   If the screen appears to be unreadable, due to excessive lines or
   smearing of the pixels, check your card and monitor configurations.
   
   Simply put, most problems can be traced back to an improper
   configuration of the card, the monitor, or both. This is why I
   strongly recommend making sure your hardware is explicitly supported,
   or using one of the "generic" configurations to start with.
   
   Beyond this, check the documentation for specific card set problems,
   specific monitor problems, and other general troubleshooting
   procedures.
   
   Another possible option is to troll the newsgroups for a similar
   problem, or post a brief description of the trouble you are having,
   and hopefully, someone who has solved a similar problem before will
   get back to you.
   
   If all else fails, drop me e-mail and I'll be glad to try to help.
     _________________________________________________________________
   
    Resources for further information
    
   Xfree86 Resources:
     * http://www.xfree86.org/
     * http://sunsite.unc.edu/LDP/
       
   Window Managers:
     * http://www.gaijin.com/X/
     * http://www.afterstep.org/
     * http://www.pconline.com/~erc/xwm.htm
     * http://www.PliG.org/xwinman/
     _________________________________________________________________
   
   I had originally planned to include the configuration of your basic
   networking setup into this installment as well, but as you can see,
   this is a real porker as it is. So look for the networking stuff in
   part three.
     _________________________________________________________________
   
                       Copyright  1998, Ron Jenkins
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   
                          DICT and Word Inspector
                                      
                               By Larry Ayers
     _________________________________________________________________
   
                                Introduction
                                      
   Access to an on-line dictionary has been possible for several years
   now due to the Webster TCP/IP protocol. Webster is useful, but the
   number of servers has been on the decline, and the protocol itself is
   limited by its dependence on a single dictionary database. Rik Faith,
   a programmer responsible for many of the
   essential-but-taken-for-granted Linux utilities, has created a new,
   more flexible protocol known as DICT. DICT is another TCP/IP protocol
   (usable either over a network or on a local machine) which provides
   access to any number of dictionary databases. Local access is provided
   by a client program called dict which contacts the dictd server
   daemon. Dictd then searches the available databases and makes any hits
   available to dict, which pipes its output to the default pager on the
   local machine (usually either more, less, or most). Net access is
   available from several servers, including the home DICT site. Looking
   up words while on-line frees the user from installing and running the
   dict client and dictd server (as well as from making room for the
   bulky databases on a local disk), but if you have the disk space it's
   convenient to have the service available at any time.
   
   The dictd and dict programs are licensed under the GPL, so naturally
   they are set up to use freely available word databases.
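
   The exchange between dict and dictd follows the DICT protocol (RFC
   2229), which runs over TCP port 2628. The short sketch below only
   builds the protocol command strings a client would send, rather than
   opening a real socket; the word "gazette" is an arbitrary example:

```python
# Sketch of the command lines a DICT client sends to a dictd server
# (RFC 2229).  A database name of "*" asks the server to search all of
# its databases.  No network connection is made here.

def define_command(word, database="*"):
    """Build a DEFINE command, which requests full definitions."""
    return "DEFINE %s %s\r\n" % (database, word)

def match_command(word, strategy="prefix", database="*"):
    """Build a MATCH command, which requests matching headwords only."""
    return "MATCH %s %s %s\r\n" % (database, strategy, word)

cmd = define_command("gazette")      # "DEFINE * gazette\r\n"
matches = match_command("gaz")       # "MATCH * prefix gaz\r\n"
```

   The server answers each command with numeric status lines, much like
   FTP or SMTP, followed by the definition text.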
   
                  Installing The DICT Distribution Locally
                                      
   DICT is a typical Unix-style command-line set of programs. GUI-fans
   will regret the absence of a graphical interface, but the glass is
   really half-full. Due to the absence of oft-troublesome GUI toolkit
   dependencies, the source for the client and server programs should
   compile easily. Toolkits come and go, but applications written with a
   simple console interface can easily be adapted to whatever the future
   toolkit du jour might be. There are numerous programmers who lack the
   time or inclination to develop Linux utilities from scratch, but
   welcome the opportunity to write GUI front-ends to console programs
   (see the Word Inspector section below).
   
   Compiling and installing dictd and dict isn't difficult, but to make
   use of them the word databases need to be downloaded and installed.
   Here is a list of the free databases which are currently available
   from the DICT FTP site:
     * A 1913 edition of Webster's Revised Unabridged Dictionary
     * The Free On-Line Dictionary of Computing
     * Eric Raymond's Jargon File
     * The WordNet database
     * Easton's 1897 Bible Dictionary
     * Hitchcock's Bible Names Dictionary
     * The Elements (physical elements)
      * U.S. Gazetteer (1990)
     * The 1995 CIA World Factbook
       
   All of these files and their indices will occupy about thirty-one
   megabytes of disk space, roughly the same amount as the WordNet
   dictionary files alone. The DICT data-files are compressed with a
   variant of gzip called dictzip, also written by Rik Faith. Dictzip
   adds extra header information to a compressed file which allows
   pseudo-random access to the file. When the dictd server processes a
   request for a word it looks first in the various index files. These
   files (which are human-readable) are just simple lists with the
   location of each word within the compressed dictionary file. Dictd is
   able to use this information to uncompress just the single 64 KB
   block of data which contains the word-entry. This greatly speeds up
   access, as the entire dictionary file doesn't need to be uncompressed
   and subsequently re-compressed for each transaction. Files compressed
   with dictzip can be recognized by the *.dz suffix.
   
   Although dictzip doesn't compress quite as tightly as gzip, the added
   advantage of the header information (at least for the sort of access
   dictd needs) is a compensation. The above-listed dictionary files
   would need nearly seventy-five megabytes of disk space if they weren't
   compressed.
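
   To illustrate the index-plus-block scheme, here is a small sketch in
   Python. Real *.index files store tab-separated entries whose offset
   and length are encoded in a base-64 alphabet; for readability this
   sketch assumes plain decimal numbers, and the words and offsets are
   made up:

```python
# Simplified model of a dictd index: each line maps a headword to the
# offset and length of its entry in the dictionary file, so only that
# region ever needs to be decompressed.  Real index files encode the
# two numbers in a base-64 alphabet; plain integers are used here.

def load_index(lines):
    """Parse 'word<TAB>offset<TAB>length' lines into a lookup table."""
    index = {}
    for line in lines:
        word, offset, length = line.rstrip("\n").split("\t")
        index[word] = (int(offset), int(length))
    return index

def locate(index, word):
    """Return the (offset, length) of a word's entry, or None."""
    return index.get(word)

sample = ["aardvark\t0\t1200", "abacus\t1200\t800"]
idx = load_index(sample)
entry = locate(idx, "abacus")        # (1200, 800)
```

   With the offset and length in hand, dictd only has to decompress the
   64 KB block (or blocks) covering that range.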
   
                          Comparison With WordNet
                                      
   In issue 27 of the Gazette, (April, 1998) I wrote about another
   dictionary-database system called WordNet. To access a DICT database
   the dictd server must be running to communicate with dict client
   programs, whereas WordNet isn't a client-server system; the
   small wn program searches the database indices directly. The upshot is
   that WordNet uses less memory than a DICT system, but since WordNet
   databases aren't compressed they occupy more disk space than the
   specially compressed DICT files. DICT files contain more words (along
   with etymologies, which WordNet lacks) and can be supplemented with
   new files in the future, but DICT lacks WordNet's powerful thesaurus
   and lexical usage capabilities. Another factor to consider is that
   development of WordNet has ceased, whereas DICT is actively
   maintained and seems likely to keep improving.
   Additionally, DICT can use the WordNet data-files in a compressed
   format.
   
                               Configuration
                                      
   Sample configuration files are included with the DICT distribution.
   The file /etc/dictd.conf should contain the location of your local
   dictionary files in this format:

database web1913   { data "/mt/dict/web1913.dict.dz"
                     index "/mt/dict/web1913.index" }
database jargon    { data "/mt/dict/jargon.dict.dz"
                     index "/mt/dict/jargon.index" }

   The dict client needs to know where the server is; if a local server
   is used a simple ~/.dictrc file containing this line will work:

server localhost

   If both ~/.dictrc and /etc/dict.conf are missing the dict client
   program will first attempt to access the www.dict.org web-server; if
   that fails it will try some alternate sites. To prevent these attempts
   (when running a local dictd server) just use the above ~/.dictrc file.
   
                                 Drawbacks
                                      
   Dictd might not be a service which you would want to run all of the
   time. Though not a large executable, it uses a significant amount of
   memory, typically four to five megabytes. I surmise that the daemon
   reads the dictionary index-files into memory when it starts up and
   keeps them there. This premise also would explain why the word
   look-ups are so speedy. Memory access is much faster than disk access,
   and once the daemon determines from the index which sixty-four
   kilobyte block holds the desired information it can quickly decompress
   that small chunk of the dictionary file and serve up the word
   information. I've found that starting dictd while writing or whenever
   I become curious about word-usage and killing the daemon at other
   times works well.
   
                               Word Inspector
                                      
   Scott W. Gifford has written a nice graphical front-end to the dict
   client program called Word Inspector. Here's a screenshot of the
   initial window:
   
   Word Inspector Main Window
   
   And here is one showing the output window:
   
   Word Inspector Output Window
   
   In the README file accompanying Word Inspector Scott Gifford suggests
   setting up the application with several different window-manager
   menu-items. Running wordinspect --define --clipboard will bring up a
   Word Inspector output window (as shown in the second screenshot) with
   the contents of the current X primary selection as the input.
   Alternatively, wordinspect --search --clipboard will cause the initial
   window to appear with the X primary selection already shown in the
   entry field, and running just plain wordinspect will bring up an empty
   initial window, so that a word can be typed in which isn't a
   mouse-selection. These three commands could be set up in a submenu
   stemming from a top-level Word Inspector menu-item.
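
   For instance, under fvwm2 those three commands could be wired into a
   submenu with a fragment like the following. This is only a sketch:
   the menu name "WordInspect" and the labels are my own invention, so
   adapt the syntax to your window manager:

```
# Hypothetical fvwm2 menu fragment; menu and label names are made up.
AddToMenu WordInspect "Word Inspector" Title
+ "Define selection"  Exec exec wordinspect --define --clipboard
+ "Search selection"  Exec exec wordinspect --search --clipboard
+ "Empty window"      Exec exec wordinspect
```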
   
   Word Inspector makes good use of right-mouse-button pop-up menus.
   Right-clicking on any word in a definition pops up a menu allowing you
   to either open a search (initial) window with the selected word
   already filled in, or open a definition window for the word.
   Highlighting a series of words with the mouse, then right-clicking,
   will enable the same behavior for phrases.
   
   The source of the current version of Word Inspector is available at
   this web-site. The GTK toolkit is required for compilation, with
   version 1.06 recommended.
   
   Last modified: Mon 28 Sep 1998
     _________________________________________________________________
   
                       Copyright  1998, Larry Ayers
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   
                      Pysol: Python-Powered Solitaire
                                      
                               By Larry Ayers
     _________________________________________________________________
   
                                Introduction
                                      
   Playing solitaire card games on a computer became popular when
   Microsoft bundled a Klondike game with Windows 3.1. Since then such
   programs have proliferated on nearly every platform which possesses a
   windowing interface. There is a certain appeal to dragging miniature
   representations of playing cards around the screen. A side benefit is
   that such games usually can keep track of scores, provide hints, and
   sometimes auto-play in demo mode.
   
   There have been many solitaire games released for Linux. One of the
   older ones is xpat2, which has some of the nicest design-work of any
   of these games. Xpat2 shows its age due to the lack of card-dragging,
   which contributes greatly to the feel of a computer card-game.
   Clicking on a card instantly moves it to a legal destination; when
   there is more than one possible move, the one xpat2 chooses may not be
   the one you had in mind. Otherwise it's a fine game, with several
   solitaire variants to choose from and well-done on-line help.
   
   Users of the GNOME and KDE desktop environments each have native
   solitaire games, both of which are quality applications. If you aren't
   a user of one of these desktop systems it's hardly worthwhile to keep
   the bulky shared libraries around just to play a simple game.
   
   Recently I was browsing the incoming directory at the Sunsite FTP
   site; I happened across a small file which, according to its
   accompanying *.lsm file, purported to be an implementation of
   solitaire called Pysol written in the Python programming language. I
   was a little dubious of this claim. Python is a versatile and powerful
   interpreted programming language, but is it possible to write a
   card-game using Python which is as usable and pleasing to the eye as
   one written in C or C++?
   
   It evidently is possible if the tkinter module is used to provide the
   graphical interface. Tkinter (which I assume stands for "interface to
   Tk") lets a Python script use John Ousterhout's versatile Tk toolkit
   to provide the windowing interface. Tk is normally used with the Tcl
   command language, but Tcl has several limitations. These have been
   sufficient to provide motivation for several replacements; Tkinter is
   widely used, but there are others. Perl/Tk uses Larry Wall's Perl
   language as the command language; another is Stk, which uses Scheme as
   its scripting language. The Xxl spreadsheet is the first major project
   I've seen which uses this Tk/Scheme hybrid. (Perhaps I'll review Xxl
   one of these months).
   
   Pysol was written by Markus F.X.J. Oberhumer and he has released it
   under the GNU license. The game is an extensive reworking and
   enhancement of a simple Python solitaire demo written by Guido van
   Rossum (creator and maintainer of the Python language) which is
   included as an example in the Python distribution.
   
                           Features and Game-Play
                                      
   Here is a list of Pysol's features, adapted from the README file in
   the distribution:
     * It's based upon an extensible solitaire engine
     * A very nice look and feel
     * Unlimited undo & redo
     * Pysol can load & save games
     * Player statistics are available
     * Hints for possible next moves
     * Demo games
     * HTML-based help browser
      * Playable on all platforms which Tk and Python support,
        including MacOS, Windows, and of course X11
       
   Nine games can be played: Gypsy, Picture Gallery, Irmgard, 8X8,
   Freecell, Seahaven, Braid, Spider, and Forty Thieves. The rules and
   documentation are supplied in HTML format and are displayed in a
   separate window using a Python HTML extension. Card-dealing at the
   onset of a game is nicely animated, and the mouse-dragging of cards
   works smoothly.
   
   If you have ever spent much time playing solitaire on a computer you
   probably have noticed that after a certain point in a game the outcome
   seems obvious. This intuition isn't always accurate when you suspect
   the game is lost, but sometimes it's obvious that several more
   card-moves will win the game. Pysol binds the "a" key so that, when
   pressed, it will automatically cycle through those moves and bring the
   game to completion. When you strongly suspect that the game can't be
   won, the menu-item Demo (in the Game menu) will ask if you want to
   abandon the current game; pressing the "Yes" button will start the
   demo mode and either finish the game or find that it can't be
   completed. I've found that about one-quarter of the times I resort to
   demo mode my intuition was wrong and the game could have been won. If
   a game is hopeless a pop-up window appears informing you that "This
   won't come out".
   
   Pysol's help key is h; when it's pressed a black arrow appears
   extending from a card to a recommended destination. The same arrows
   appear when Demo mode is initiated, though in this case the cards are
   actually moved.
   
   Here is a screenshot of a Pysol Klondike game:
   
                                Pysol Window
     _________________________________________________________________
   
                              Installing Pysol
                                      
   Pysol won't work at all if you don't have a current Python
   installation, including the tkinter module. A current Linux
   distribution will include all the Python packages you would ever
   want; it's just a matter of installing them. Failing that, Python is
   one of those high-quality applications which is very likely to
   compile cleanly from source, assuming you have the basic Linux
   development packages installed, such as gcc, make, etc.
   
   Pysol is just a 75-kilobyte executable Python script; running make
   install will copy the script to /usr/games and the necessary
   data-files to /usr/share/pysol, after which the game is ready to run.
   
   I'm impressed by this game's quality and playability. It does take
   several seconds to start up, probably due to the necessity of loading
   the Python interpreter and the Tkinter module into memory.
     _________________________________________________________________
   
   
                       Copyright  1998, Larry Ayers
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   
                            Another Typing Tutor
                                      
                               By Larry Ayers
     _________________________________________________________________
   
   Last month I discussed Simon Baldwin's lesson-based typing tutor,
   Typist. In response to that article John Chapman sent me e-mail
   concerning another typing tutor commonly used on FreeBSD systems. With
   his permission, I'll quote from his message:
   
     Dear Mr. Ayers:
     
     In the Linux Gazette you recently expressed your interest in
     learning the Dvorak keyboard, and I thought you might enjoy playing
     with the attached Tk program called kp (=Keyboard Practice). It
     seems to be standard issue with FreeBSD, but I've never seen it in
     any Linux distribution or archive.
     
     It was written for Tk4.1, but works perfectly well with 4.2. I
     haven't tried it with 8.0, though, so you might have to hunt up an
     older version of Tk, if you don't already have one. To set it up,
     either untar it in /usr/local/lib, or put everything into ~/bin, or
     whatever you like best. Then edit the "executable" kp file so that
     the first line corresponds with your version of wish (I have the
     4.2 version in /usr/bin/wish4.2 on my Debian system), and the "cd"
     line points to /usr/local/lib/kp, $HOME/bin, or wherever you
     decided to plant the .tcl files. Copy kp to /usr/local/bin (or
     leave it in $HOME/bin, if that's in your path), fire up X, invoke
     "kp", and off you go!
     
     In the "options" menu you can switch between Dvorak and qwerty, and
     in the "file" menu you can insert any ascii text file you wish as a
     model for practice. The filter allows you to limit the text to
     words composed of specific letters; swipe your finger over the keys
     in the home row, for example, and the filter will pull out only
     those words made up of the letters in the home row. Quite cleverly
     done.
     
     If you decide that you want to use the Dvorak layout for Real
     Work(TM), it's quite easy to have xmodmap load a Dvorak keymap for
     you, and switch back to qwerty when you're done. Emacs can load a
     Dvorak keymap for you, too. And some clever soul came up with the
     idea of aliasing "asdf" to "xmodmap .kbd.dvorak" and "aoeu" (the
     same four keys!) to "xmodmap .kbd.default", so that your whole
     family doesn't have to suffer :-) , but can switch back to a
     "normal" layout with one simple key pattern. Presumably you could
     use the same trick to reset the keymapping in console mode, too.
     
   The "clever soul" referred to above is Don Reed (according to a later
   message from John Chapman). Don Reed wrote an HTML file explaining his
   approach to switching keyboard layouts on the fly; John sent me the
   file, which you can read here.
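
   The alias trick mentioned in the message can be written out as two
   shell aliases; the keymap file names follow the message, and their
   location in the home directory is an assumption about your setup:

```shell
# "asdf" on a QWERTY layout and "aoeu" on a Dvorak layout are the same
# four physical keys, so either layout can always type its own switch.
# ~/.kbd.dvorak and ~/.kbd.default are assumed to be xmodmap keymap files.
alias asdf='xmodmap ~/.kbd.dvorak'     # switch the X keymap to Dvorak
alias aoeu='xmodmap ~/.kbd.default'    # switch back to the default layout
```

   Defined in a shell startup file, these survive across sessions; the
   same idea could presumably reset the console keymapping as well.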
   
   Keyboard Practice is a useful and well-designed Tcl-Tk program; its
   ability to use any text file as practice material is a nice touch. It
   was written by Satoshi Asami <asami@cs.berkeley.edu>. It's not just
   for practicing Dvorak typing; a menu-item lets a user switch to QWERTY
   as well. Since the archived files occupy just a little over twelve
   kilobytes, you can access them in this issue of LG here. To try it
   out, just follow the instructions in the above quoted message from
   John Chapman.
   
   John also suggested a reference to the Dvorak International Web-site,
   which (although not updated recently) has links to most Dvorak sites
   on the net.
     _________________________________________________________________
   
   
                       Copyright  1998, Larry Ayers
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   
                               MCAD for Linux
                                      
                               By Damir Naden
     _________________________________________________________________
   
INTRODUCTION
     _________________________________________________________________
   
   I am a Mechanical Engineer and the owner of a small business, L&D
   Technologies, specializing in mechanical design and drafting and in
   project management of small to medium size projects in the mechanical
   engineering field. As any small business owner knows, the cost of
   start-up can be quite high, especially in a field where a high-end
   workstation and 3-D software are very important. I knew that I
   couldn't afford an SGI(TM) or UltraSPARC2(TM) machine (even though
   that would have been perfect), so the choice came down to which
   operating system I would run my PC under.
   
   I had two options:
   
      * WindowsNT(TM), which I use at my other, daytime job, and on
        which I already have a very good understanding of the available
        CAD software (CADKEY(TM) and Pro/E(TM))
      * Debian GNU/Linux, which I use on my home computer
        
   Doing a preliminary cost-estimate comparison between these two
   options, I quickly ruled out the Windows(TM)-based system.
   
   And so my search began for a production quality mechanical CAD system
   that would run under Linux, and be reasonably priced.
     _________________________________________________________________
   
SEARCH
     _________________________________________________________________
   
   I have used Linux for three years, and in all that time the available
   applications and their quality have been constantly improving. I have
   felt that the only field where Linux was thin on applications was
   mechanical engineering. True, there have been some CAD apps out
   there, but they either required too much programming (the very
   powerful VARKON, for example) or were too simplistic for production
   drafting (the otherwise very good xfig/transfig combo). I also looked
   into Bentley's Microstation(TM) for Linux, but at the time they only
   offered educational licences (a move I will never understand: who
   would get an educational licence for a piece of software they cannot
   continue using after graduation, at least not under the same OS?).
   Just for the record, I think Microstation(TM) could blow away
   anything offered for Linux in this field, if they had some management
   vision and interest in developing for the Linux community. One other
   site is worth mentioning, if only to give the project more exposure
   among Linux users: the FREEDraft project. It is an attempt to bring
   to life a GNU drafting package, and I wish those people the best of
   luck in future development.
   
   Then I noticed two new entries in the software arena, LinuxCAD and
   VariCAD. I almost purchased LinuxCAD (at $75, it seemed like a great
   deal), but I didn't like the fact that they had no demo available,
   and their E-mail reply to my preliminary inquiry amounted to little
   more than self-promoting junk mail. Only a couple of days later there
   was a Usenet discussion about LinuxCAD, and the result was a page
   posted here, which completely turned me away from LinuxCAD. I went to
   VariCAD's USA site instead, and quickly found out there is a working
   demo (without the Save features) available for download.
   
   If you are interested in searching on your own for available Linux
   applications, I recommend the following sites as a good starting
   point:
   
      * very good for scientific applications: the Kachina Technologies
        site
      * for general Linux applications: the Linux applications mirror
        site
     _________________________________________________________________
   
VariCAD FOR LINUX
     _________________________________________________________________
   
  Obtaining And Installing The Software
  
   The download consisted of four tar files plus the installation
   script, and amounted to about 5 MB, which is very reasonable for a
   CAD system. An RPM package is also available as a single file of
   roughly 5 MB, a nice touch for people running the Red Hat
   distribution of Linux.
   
   Installation instructions, for people who choose to get the plain tar
   files, are very simple and clearly stated at the download site. I
   simply followed those instructions, and they worked like a charm with
   version 6.1. As of Aug. 29, 1998, a new version, 6.2-0.3, has been
   released, and in my experience there is a small glitch in the
   installation script inst.sh which requires one to log in as root for
   it to work. On my system, trying to execute the inst.sh script under
   su did not work; only a 'true' root login managed to install the
   program. Also, the tar files are deleted during installation, so if
   you want a backup on floppies, be sure to copy the tar files
   someplace else before executing the inst.sh script. This didn't
   happen with version 6.1, though. On the other hand, the new version
   (6.2-0.3) seems more robust, and it adds a drop-down menu for
   Internet access, which I haven't tested yet.
   
   Since I'm running the Debian distribution, I would have liked the
   installation process to offer a choice of target directory; I would
   rather have placed VariCAD under the /usr/local tree than under the
   default /usr tree. On the other hand, after the installation script
   had completed, executing the varicad command for the very first time
   in an rxvt resulted in a flawless start. I'm running the XFree86
   windowing system, with xserver-mach64 at 1152x864 resolution and 32
   bpp, and VariCAD didn't seem to have a problem with those settings.
   After playing with the software for a week, I decided it was worth
   the asking price and, after mailing in the cheque, received a small
   file by E-mail which enables the save feature. As per the
   instructions in the E-mail, I copied the file to the /usr/lib/Varicad
   tree; at the next start of the program, the pop-up message about the
   demo nature of the program went away, and I could happily save files
   and settings.
   
  Using The VariCAD
  
   Before going any further, I should say that my exposure to
   AutoCAD(TM) has been limited to version 10, way, way back, so if you
   are expecting a direct comparison between Mechanical Desktop(TM) and
   VariCAD, I'm afraid you will be disappointed. If you are using
   AutoCAD and have given VariCAD a try, please E-mail me a short review
   in HTML format and I'll post it here, or send me a URL pointing to
   your page.
   
   Because VariCAD does not use the Motif libraries, the executable is
   rather small and efficient. Fired up with a rather simple 2-D drawing
   loaded, VariCAD's toll on the system's resources is rather small
   (output from ps on my system running VariCAD):
   
    ~$ ps aux
    USER     PID %CPU %MEM SIZE  RSS TTY STAT START TIME COMMAND
    dnaden  2406 11.4  2.7 4844 1760   1 S    22:16 0:02 /usr/bin/varicad
   
   The interface is very plain, which is a plus in my opinion. Starting
   with ver. 6.2-0.3, 'tool-tip' style descriptions are available for
   all the buttons of the toolbar, which is a very important feature if
   you are just starting to use the software. The on-line manual is
   available from the drop-down menu, and it is very complete. Some
   parts suffer from a less-than-optimal English translation, but I
   haven't found that to get in the way of the gist of the information.
   Then again, my English is not perfect, either...
   
   The system starts up in 2-D mode, and switching into 3-D mode is a
   matter of a simple click on the icon in the top right-hand corner.
   The default toolbar features the icons for drafting functions, and
   paging through the toolbars for other functions (dimensioning, for
   example) is done by clicking on the respective icon in the bottom
   part of the toolbar. All toolbars seem to be of the tear-off variety,
   but I haven't tested that extensively (I like my interface clean).
   All the functions are available through the drop-down menus as well.
   
   The first thing I noticed is that panning and zooming back and forth
   are fast. A simple subjective comparison between very similar
   machines running CADKEY(TM) for Windows(TM) v7.5 (under WinNT(TM))
   and VariCAD v6.2-0.3 under Debian GNU/Linux 2.0 would suggest that
   VariCAD is slightly faster in redrawing the screen. Another feature I
   like is the way zooming and panning work (users of Pro/E should feel
   at home here): dragging the mouse with Shift+LMB pressed zooms in and
   out, while dragging with Control+LMB pressed does the panning. It is
   a very convenient feature once you get used to it. And if you get
   lost in all this zooming and panning, there is a feature called
   Aerial View, which brings up a small window with an overview of the
   entire drawing area and highlights the square you are in at that
   moment in the main window (I believe I have seen the same feature in
   AutoCAD Lite(TM)). Another noticeable feature (for me at least)
   highlights an entity when the mouse cursor is over it. If you have
   ever worked with lots of lines spaced close together, you will learn
   to appreciate this. It can also flag an entity's significant points
   (i.e., the end- or mid-point of a line, the center of a circle, and
   so on) by popping up a small code when your cursor is on top of one.
   I didn't have that in CADKEY(TM), so it will take me some time to
   remember all the symbols and their meanings, but AutoCAD(TM) users
   should be familiar with them (for example, @ for the center of a
   circle).
   
   Otherwise, VariCAD seems to have all the drafting, geometric
   tolerancing, and dimensioning functions one would expect to find in a
   decent CAD package. In addition, there is a macro language, which I
   haven't had a chance to try yet, a rather complete 3-D kernel (see
   some screenshots at VariCAD's site), and the ability to import DXF
   and IGES formats. I have imported a 1.2 MB DXF file from CADKEY(TM)
   without losing a line, but the dimension text was angled and could
   not be edited. But, as I said, I used CADKEY(TM) to export the file,
   so the file was translated twice, and it is hard for me to determine
   which translation is "wrong". I haven't tried to optimize the
   translator in VariCAD either. The translation itself is transparent:
   as soon as you read the DXF or IGS file, VariCAD produces its native
   (dwb) file, on which you continue to work. To translate a file to
   DXF from within VariCAD, just save the file with a DXF extension. As
   simple as that.
   
    The developers have been smart enough to include in the "core"
    software a database of parts, consisting of nuts, bolts, washers,
    pins and SKF bearings. Also part of the package is a program for
    calculating spur and straight bevel gears, splines, shafts,
    bearings, compression and extension springs, as well as V-belt
    drives. (I have probably missed some other elements here; check out
    their site for a full description...) It is also possible to create
    the information needed for bills of materials (BOMs), although I
    haven't touched that myself yet. I also haven't had the need to
    print anything as of yet; most of my jobs are sent out in DXF format
    on a floppy.
   
    The only gripe I have with the software is that I can't find out how
    to dimension to or from an "imagined" intersection. I frequently
    need to dimension from this or that edge to the intersection of a
    chamfered or radiused corner, and I cannot get VariCAD to recognize
    that I want to use the point where two lines would have intersected,
    had it not been for the radius for example, as one of the references
    for the dimension. If anyone knows how to do it, please let me know.
   
  People Behind The Software (Support)
  
    I have found the people at VariCAD to be knowledgeable and
    courteous. Everyone, from the sales rep in Canada to their HQ in the
    Czech Republic, has answered my e-mails within 24 hours. As an
    example: in the 6.1 version, there was a bug in vertical
    dimensioning when using toleranced dimensions (the dimension line
    would not break around the text, but go right through it). I wrote
    an e-mail about it to their tech support, and within 12 hours I had
    an answer: they were aware of it, it happened only in inch drawings,
    not in metric ones, and they would fix it in the upcoming 6.2
    version. Fair enough, I thought... About a month later, on the very
    day of the new version's release, I received an e-mail (from the
    same tech support guy) notifying me that the new version was
    available for download and that the bug I had asked about had been
    squashed. That is what I consider good customer service.
   
  Other Users' Opinions On VariCAD
  
    Within a couple of weeks of my mCAD page going up, I had already
    received a couple of e-mail responses from other VariCAD users.
    Thanks for your input. Keep it coming...
    One user had a problem with too much mousing (not enough command
    line input) in an earlier (but I don't know which) version of
    VariCAD and hasn't tried it since. I know there is command line
    input, but as I said, it is not a straight AutoCAD copy, so some
    commands may need re-learning. The quality of the help files was
    also questioned, but I maintain that is mainly a language barrier.
    We English-speaking folks take it too much for granted that everyone
    knows English. The other e-mail was regarding an inconsistent volume
    calculator. I can neither confirm nor deny that, as I haven't used
    the 3-D features enough yet; VariCAD claims they have had no such
    problems.
     _________________________________________________________________
   
                       Copyright  1998, Damir Naden
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                         The Proper Image for Linux
                                      
                            By Randolph Bentson
     _________________________________________________________________
   
   I get mail from folks about my book, the device driver I wrote for
   Linux, and about articles I've written for Linux Journal. A few months
   ago I got one which said, in part:
   
     My boss is a great guy to work for ...[but he] is of the opinion
     that Linux is the work of ``college punks'' and will not consider
     it for serious work.
     
     He had a nightmare with the MINIX file system and is permanently
     convinced that UNIX simply cannot be trusted and that Linux is the
     work of pimply-faced sophomores with time on their hands. I got a
     good laugh out of that while looking at your picture and reading
     your bio.
     
   I can only hope his laughter was kindly. The opinions expressed by his
   boss weren't the first I've heard of that sort. Nor, I fear, will this
   be the end of it. Nonetheless, I decided to take a shot at confronting
   these claims.
   
   I had suspicions that Linux contributors are a bright, experienced and
   well-educated bunch of folks. The discussions in the various Linux
   newsgroups and mailing lists aren't lightweight, nor is the resulting
   operating system. My ``feel'' of the operating system is that it's
   based on a lot of mature judgments and there is some theoretical
   grounding in what's being done.
   
   I gathered up a list of contributors (from /usr/src/linux/CREDITS) and
   sent off 241 notes. Partial text of this note follows:
   
     I'm conducting a brief survey of fellow contributors to the Linux
     kernel. ...
     
     It seems that products developed by students, no matter how well
     designed and implemented and no matter how qualified the students,
     are regarded as having lower quality. ...
     
     But that's not really the case with Linux. Almost from the start it
     has been more than just a school project. ...
     
     I'd like to investigate the educational backgrounds and current
     work situations of the contributors.
     
   I sent my notes with some trepidation--I didn't want to bother folks
   while they were working on important projects, and I feared a lack of
   response.
   
   I needn't have worried. So far I've received 103 replies, many of
   which have included a few words of encouragement. It seems that I
   wasn't the only one who wanted to respond to unjustified complaints
   about Linux. (Another 29 notes were returned with address errors. I
   hope to see corrections to the CREDITS file.)
   
  Education
  
    The level of response was the first piece of good news. The second
    was that I've been stunned by how strong the development team is
    with regard to both credentials and experience.
   
   From these replies I found:
   
     * 1 had completed just basic public education (high school)
     * 15 had attended college or technical school
     * 23 had an undergraduate degree (B.S., B.A., etc.)
     * 19 had attended graduate school
     * 15 had a graduate degree (M.S., M.A., etc.)
     * 9 had done further graduate work
     * 19 had a terminal degree (Ph.D., M.D., etc.)
       
   That's got to totally demolish the image of college hackers--at least
   the sophomore part of it. I figured I was an exception when I started
   working on the Cyclades driver while avoiding rewriting my
   dissertation. I thought, once folks were awarded a Ph.D., they would
   be busy with research, teaching or some other interest. I guess Linux
   development may be the doctor's favorite hobby.
   
   When I offered an earlier summary of these results, my correspondent
   reported that his boss wisely intoned, ``those folks are all academia
   and none of them have ever tried to run a business.''
   
  Experience
  
   I had sort of expected a comment along those lines and fortunately I
   had asked a few more questions in my survey. One hundred of the
   replies also reported the number of years spent programming or doing
   system design.
   
     * 4 had 1 year
     * 10 had 2-4 years
     * 31 had 5-9 years
     * 40 had 10-20 years
     * 16 had 20+ years
       
   More than a few of us were programming before the integrated circuit
   came into general use. (Perhaps a mixed blessing--some of us may still
   suffer from post-FORTRAN syndrome.)
   
   As I noted earlier, I have also felt that Linux has benefitted from a
   broad experience in its developer base. Linux may be a first operating
   system for a lucky few, but almost everyone (all but three) claimed to
   be at least a skilled user of another operating system. Eighty-three
   were skilled users of several other operating systems.
   
   Nor was their contribution to the Linux kernel the first of that sort.
   Twenty have contributed to another operating system and another
   twenty-two have contributed to several other operating systems. One
   reported:
   
     Speaking for myself, I had the same idea Linus did, but he beat me
     to it. (I've heard others say this as well.) I knew how to build a
     UNIX-like system from the ground up, and there was a need for it
     for PCs. (Vendors were charging exorbitant amounts for poor
     products in those days, and there was no good 32-bit development
     system for 386s.) I just didn't have the time. I had been playing
     with MINIX when Linus showed up on the MINIX newsgroups, and it
     took off from there. I can tell you that though I was a student at
     the time, I'd been a professional systems programmer for many years
     before. So, I and many others knew what professional quality
     software was, as well as how to produce it. I think it turned out
     pretty well.
     
  Current Use
  
   Finally, I wanted to know if the contributors were ``doing Linux'' in
   their careers. Eighty-two said their current employment was based on
   their computer skills. It was interesting to note that over a third
   reported their current employment supported or relied on their Linux
   development efforts. Sadly, two reported they were currently
   unemployed, but one of those also noted that he was ``voluntarily
   unemployed to have time to put my life in better order.''
   
   Perhaps one significant difference between Linux development and
   academic or commercial development is the duration of personal
   interest. In an academic setting, a student typically has one term, or
   at most one year, to work on any given program. When programmers leave
   a company, support is picked up by someone who has no sense of what
   has gone before. There is greater continuity in the Linux community
   because of the nature of submission and distribution. No matter what
   is happening at school or where one works day to day, contributors can
   keep in touch with progress on their piece of the puzzle. One person
   noted, ``Personally, I did start my code in school, but that does not
   stop me from maintaining it now.''
   
  Motivations
  
   There are some other issues which weren't addressed by my survey.
   Although it might not seem relevant to quality and performance, a
   person's interest has a great deal to do with the outcome--it leads to
   a distinction between ``craftsmanship'' and ``work product''. Another
   person noted:
   
     ``Intent'' is what I think all of these debates are about. In the
     commercial world there is only one true answer to ``Why are you
     helping develop Linux?''--``To make a living.'' In the Linux
     community I'm quite certain the answer would be more closely
     aligned to ``For me to use.'' The Linux community tends to be
     self-driven and self-motivated, and that is what leads to the
     successes and the apparent failures in our development environment.
     
     We are not a company; we don't have any one person, or group of
     people, setting the direction Linux will take. That direction is
     set by those with the energy to actually do something.
     
   Another motive, akin to what pushed me to first join the effort, was
   shared by another respondent who said, ``When I wrote [my code] for
   the Linux kernel I was working at [my former employer]. Linux use
   there was extensive, and I wanted to give something back.''
   
   Motivation leads to the final, and most significant issue--one which
   cannot be examined by a developer survey.
   
  Quality
  
   In a world driven by marketing, image is the basis for purchasing
   decisions. Even if a good image could be established for Linux by
   listing credentials or tabulating years of experience, I'd be
   reluctant to shift to that level. I'd much rather see acceptance and
   popularity for Linux based on quality and performance.
   
    Even though I hadn't asked specific questions on this topic, a few
    people offered comments. One note seemed to identify, however
    obliquely, what may be the key to Linux's success.
   
     In general, my experience is that most software I have seen which
     was developed by students is not of the professional quality I
     would like to see. On the other hand, much of the commercial
     software I have seen, which was developed by professional software
     development companies, is also not of the professional quality I
     would like to see. The difference is most people do not get to see
     the internals of commercial software.
     
   Developing on this theme, another wrote:
   
     The reason Linux is stable and usable is not because of its student
     programmers [or lack thereof]. It is because of the overwhelming
     feedback that ALPHA and BETA testers provide. When you read the
     Linux kernel, you will find many parts are poorly structured,
     poorly written and poorly documented. However, people dared to test
     it and report their problems; Linus and friends respected the error
     reports and went ahead to fix them. That is why it works so well.
     
     In addition, psychology sometimes causes weird effects. If a user
     discovers a bug in his system, reports the bug and sees it fixed
     eventually, that user is happy because he was treated with respect.
     Most likely, he is even happier than he would be in the bug-free
     case.
     
   We not only need to bring the CREDITS file into an accurate state, but
   we also need to acknowledge the thousands who have contributed to
   Linux by using it and sharing their discoveries--good or bad--with
   others.
   
   Peter H. Salus reports the UNIX philosophy in A Quarter Century of
   UNIX as:
     * Write programs to work together.
     * Write programs that handle text streams, because that is a
       universal interface.
       
   I'd like to close by adding another entry, suggested by UNIX and
   dominant in Linux:
   
     * Write programs you enjoy.
       
  Postscript
  
   I just received a note from the person who sparked the original
   survey. He reports:
   
     I took my ``hand-me-down'' Linux box, an unimpressive 75MHz Pentium
     with 64MB RAM and a tiny 600MB HD to work. My boss was amazed that
     office applications such as StarOffice were available and was quite
     impressed when I read a Word document with StarOffice and then
     converted it to HTML. Samba was another revelation. Overall
     performance impressed him. In a few crude tests, it outperformed a
     ``commercial'' system running with 128MB RAM, dual 200MHz
     processors and all ultra-fast/ultra-wide SCSI drives.
     
     After a couple of callers indicated an interest in UNIX versions,
     we checked the price of current systems. My boss decided Linux was
     indeed priced right, and asked me to start on a port.
     
   It looks like we've won one more away from the dark side.
   
  Credits
  
   Linux kernel developers are self-reported in the file
   /usr/src/linux/CREDITS. If their names weren't entered there, I didn't
   find them. Furthermore, there are many more who contribute by testing
   various development releases and reporting on the problems. Sometimes
   they even report possible source code corrections, but they aren't
   included in the CREDITS file.
   
   Linux consists of much more than just the kernel. There are a host of
   related programs, such as those which are broadly distributed by the
   Free Software Foundation for UNIX and other operating system users,
   and others which support only Linux.
   
   It would take significant effort to identify all those who have
   contributed to make Linux a success. The Debian project reports who is
   working on that distribution, but that's not enough. I'd like to see a
   CREDITS file in every package and tar file. I'd appreciate hearing of
   efforts along this line.
     _________________________________________________________________
   
                     Copyright  1998, Randolph Bentson
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                    Serializing Web Application Requests
                                      
                             By Colin C. Wilson
     _________________________________________________________________
   
   Web application servers are an extremely useful extension of the basic
   web server concept. Instead of presenting fairly simple static pages
   or the results of database queries, a complex application can be made
   available for access across the network. One problem with serving
   applications is that processing on the back end may take a significant
   amount of time and server resources--leading to slow response times or
   failures due to memory limitations when multiple users submit requests
   simultaneously.
   
    There are three basic strategies for handling web requests which
    cannot be satisfied immediately: ignore the issue, use unbuffered
    no-parsed-header (NPH) CGI code to emit ``Processing'' messages
    while the back end completes, or issue an immediate response which
    refers the user to a result page created upon job completion. In my
   experience, the first option is not effective. Without feedback, users
   invariably resubmit their requests thinking there was a failure in the
   submission. The redundant requests will exacerbate the problem if they
   aren't eliminated. To make matters worse, the number of these
   redundant requests will peak precisely at peak usage times. NPH CGI is
   most useful when the processing times are short and the server can
   handle many simultaneous instances of the application. It has the
   drawback that users must sit and wait for the processing to complete
   and cannot quickly refer back to the page. My preferred method is
   referral to a dynamic page, combined with a reliable method of
   serializing requests.
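    To make that third strategy concrete, here is a minimal sketch (in
    Python; the article does not show its actual CGI code, and the URL
    layout and 30-second refresh interval are illustrative assumptions)
    of the kind of immediate response such a CGI program might emit: a
    page that points the user at a result URL and refreshes toward it.

```python
def referral_page(result_url, queue_position):
    """Build an immediate CGI response that refers the user to a
    result page, with a meta refresh so the browser retries the
    result URL instead of the user resubmitting the job."""
    return (
        "Content-Type: text/html\r\n\r\n"
        "<html><head>"
        f'<meta http-equiv="refresh" content="30; url={result_url}">'
        "</head><body>"
        f"<p>Your job is queued (position {queue_position}). "
        f'Results will appear at <a href="{result_url}">this page</a>; '
        "please reload it later instead of resubmitting.</p>"
        "</body></html>"
    )
```

    Because the page is static except for the result URL, the user can
    bookmark it and come back after the back end finishes.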
   
  Description
  
   Origins of Generic NQS
   
   As an example, I will describe my use of Generic NQS (GNQS) (see
   http://www.shef.ac.uk/~nqs/ and http://www.gnqs.org) to perform
   serialization and duplicate job elimination in a robust fashion for a
   set of web application servers at the University of Washington Genome
   Center. GNQS is an Open Source queueing package available for Linux as
   well as a large number of other UNIX platforms. It was written
   primarily to optimize utilization of supercomputers and large server
    farms, but it is useful on single machines as well. It is currently
    maintained by Stuart Herbert (S.Herbert@Sheffield.ac.uk).
   
   At the genome center, we have developed a number of algorithms for the
   analysis of DNA sequence. Some of these algorithms are CPU- and
   memory-intensive and require access to large sequence databases. In
   addition to distributing the code, we have made several of these
   programs available via a web and e-mail server for scientists
   worldwide. Anyone with access to a browser can easily analyze their
   sequence without the need to have UNIX expertise on-site, and most
   importantly for our application, without maintaining a local copy of
   the database. Since the sequence databases are large and under
   continuing revision, maintaining copies can be a significant expense
   for small research institutions.
   
    The site was initially implemented on a 200MHz Pentium Pro with 128MB
   of memory, running Red Hat 4.2 and Apache, which was more than
   adequate for the bulk of the processing requests. Most submissions to
   our site could be processed in a few seconds, but when several large
   requests were made concurrently, response times became unacceptable.
   As the number of requests and data sizes increased, the server was
   frequently being overwhelmed. We considered reducing the maximum size
   problem that we would accept, but we knew that, as the Human Genome
   Project advanced, larger data sets would become increasingly common.
   After analyzing the usage logs, it became apparent that, during peak
   periods, people were submitting multiple copies of requests when the
   server didn't return results quickly. I was faced with this
   performance problem shortly after our web site went on-line.
   
  Implementation
  
   Listing 1. Sample GNQS Commands
   
   Instead of increasing the size of the web server, I felt that robust
   serialization would solve the problem. I installed GNQS version 3.50.2
   on the server and wrote small extensions to the CGI scripts to queue
   the larger requests, instead of running them immediately. Instead of
   resorting to NPH CGI scripts which would lock up a user's web page for
   several minutes while the web server processed, I could write a
   temporary page containing a message that the server was still
   processing and instructions to reload the page later. By creating a
   name for the dynamic page from an md5 sum of the request parameters
   and data, I was able to completely eliminate the problem of multiple
   identical requests. Finally, all web requests were serialized in a
   single job queue, and an additional low priority queue was used for
   e-mail requests. It was a minor enhancement to allow requests
   submitted to the web server for responses via e-mail to simply be
   queued into the low priority e-mail queue. Consequently, processor
   utilization was increased and job contention was reduced.
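    The md5-based naming described above can be sketched as follows (a
    hypothetical Python illustration; the names and file layout are
    assumptions, not the article's actual scripts). Hashing a canonical
    form of the parameters plus the submitted data means two identical
    submissions yield the same page name, so a duplicate can simply be
    pointed at the page the first submission will produce.

```python
import hashlib

def result_page_name(params, data):
    """Derive the result page's name from the request itself, so that
    identical resubmissions map to the same page (illustrative
    layout, not the article's actual code)."""
    h = hashlib.md5()
    for key in sorted(params):              # canonical parameter order
        h.update(f"{key}={params[key]};".encode())
    h.update(data)
    return f"results/{h.hexdigest()}.html"

# Two submissions of the same job, with parameters given in a
# different order, yield the same result page name:
a = result_page_name({"program": "blast", "db": "nr"}, b"ATGCATGC")
b = result_page_name({"db": "nr", "program": "blast"}, b"ATGCATGC")
```

    A new request would first check whether that page already exists or
    is queued; if so, the user is referred to the existing page instead
    of queueing a second copy of the same job.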
   
   While this proved quite effective from a machine utilization
   standpoint, the job queue would get so long during peak periods that
   users grew impatient. An additional enhancement was made which
   reported the queue length when the request was initially queued. This
   gave users a more accurate expectation about completion time.
   Additionally, when a queued job was resubmitted, the current position
   in the queue would now be displayed. These changes completely
   eliminated erroneous inquiries regarding the status of the web server.
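    Putting those enhancements together, the request-handling decision
    might look like this sketch (hypothetical Python; the queue
    interface and status messages are assumptions, not the article's
    code):

```python
import os

def handle_request(page_name, queue, submit_job):
    """Decide what to do with a request whose result page is page_name.
    queue is the list of page names currently queued (oldest first);
    submit_job enqueues the real back-end job. Returns a user message."""
    if os.path.exists(page_name):
        # The job already finished: refer the user to the results.
        return f"done: see {page_name}"
    if page_name in queue:
        # Duplicate submission: report the current queue position
        # instead of queueing the same job again.
        return f"already queued at position {queue.index(page_name) + 1}"
    queue.append(page_name)
    submit_job(page_name)
    return f"queued at position {len(queue)}"
```

    Reporting the position on both the first submission and any
    resubmission is what gives users an accurate expectation of
    completion time.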
   
   After over a year of operation, we had an additional application to
   release and decided to migrate the server to a Linux/Alpha system
   running Red Hat 5.0. The switch to glibc exposed a bug in GNQS that
   was initially difficult to find. However, since the source code was
   available, I was able to find and fix the problem myself. I have since
   submitted the patch to Stuart for inclusion in the next release of
   GNQS and contributed a source RPM
   (ftp://ftp.redhat.com/pub/contrib/SRPMS/Generic-NQS-3.50.4-1.src.rpm)
   to the Red Hat FTP site.
   
  Future Directions
  
   Queuing requests with GNQS allows another interesting option which we
   may pursue in the future as our processing demands increase. Instead
   of migrating the server again to an even more powerful machine or to
   the complexity of an array of web servers, we could retain the
   existing web server as a front-end server. Without any changes in the
   CGI scripts on the web server, GNQS could be reconfigured to
   distribute queued jobs across as many additional machines as necessary
   to meet our response time requirements. Since GNQS can also do load
   balancing, expansion can be done easily, efficiently and dynamically
   with no server down time. The number of queue servers would be
   completely transparent to the web server.
   
  Evaluation
  
   There are a number of ways to handle web applications which require
   significant back-end processing time. Optimizing application servers
   requires different techniques than optimizing servers for high hit
   rates. For application servers, the limiting resource may be CPU,
   memory or disk I/O, rather than network bandwidth. Response times to
   given requests are expected to be relatively slow, and informing
   waiting users of the status of their jobs is important. Queuing
   requests with GNQS and referring the user to a results page has proven
   to be an effective, easily implemented and robust technique.
   
  Acknowledgements
  
   Thanks to Stuart Herbert, GNQS maintainer.
   
   This work was partly supported by grants from the Department of Energy
   and the National Human Genome Research Institute.
     _________________________________________________________________
   
                     Copyright  1998, Colin C. Wilson
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                            Thoughts about Linux
                                      
                             By Jurgen Defurne
     _________________________________________________________________
   
                                 Introduction
                                       
    First, I want to give a brief explanation of the background of this
    document. Several threads have led to my advocacy of Linux in the
    corporate environment.
   
    First of all, it has already been four years since I discovered
    Linux; only recently, however, did I really start using Linux
    itself. I had used some GNU tools on the DOS and OS/2 platforms, but
    only after recently expanding my storage could I install Linux. I
    have printed some manuals, subscribed to Linux Journal, and I try to
    read the Linux Gazette frequently. I consider myself almost a Linux
    fan from the very beginning.
   
    Secondly, since the beginning of 1997 I have worked in a traditional
    mini/COBOL/database environment, and I have noticed that the people
    who use these systems find a lot to like in such an environment: it
    is easy to control and operate, you need only one person to program
    it, it supports background operation, etc. The other side of the
    coin is that these proprietary systems are expensive: every year you
    pay for a costly maintenance contract, or you pay a steep price for
    repairs and upgrades.
   
    My third reason, last but not least, is that I have never liked
    Windows in any of its incarnations since 1990. It generated GPFs for
    unknown reasons in 1990 and, eight years later, it still does. It
    forces people into buying expensive hardware, which then cannot be
    utilised efficiently (if you don't want to crash).
   
    These three reasons have led me to write three documents, which I
    want published via the Linux Gazette. The reason for this is that
    the Linux Gazette is also read by people with system backgrounds
    other than just DOS or Linux, and this is crucial for the objective
    I want to reach.
   
    This objective is in essence the same one Linus Torvalds states:
    world domination. However, I have my own reasons to believe that
    world domination will not be attained only through the PC,
    workstation and Internet applications market. I believe that Linux
    has the potential to compete in the corporate marketplace. Alas,
    there are still a lot of holes to be filled before this will come
    true. However, I also think there is enough potential among Linux
    enthusiasts to make this dream come true.
   
    The following text consists of three parts in which I try to order
    my ideas about what more Linux needs to attain the stated goal.
   
    In the first part, I compare Linux systems with the mini- and
    mainframe computers and architectures that I know, and I make an
    appeal to people who might be interested in developing Linux for
    large systems. I posted it on several c.o.l.* newsgroups, but I did
    not receive much response (only one person seemed interested).
   
Mainframe Linux/Linux Mainframes

  0. To do
  
    This document should be thoroughly cleaned up and restructured. The
    main reason I am sending it over the Internet as-is is that I want
    to know how much response it generates. If there is no interest
    whatsoever, then the project will be cancelled. If you made it this
    far, please read on. Ideas for a good working title, or anything
    along those lines, are always welcome.
   
    This document doesn't have the status of a HOWTO. If I were to
    assign it a status, it would be something like an RFC, although not
    that official.
   
    I apologise if things are not always clear. I need to document some
    parts with graphics to provide a clearer understanding. It should
    probably also be created as an SGML file, to allow more flexible
    processing.
   
    Although this paper has been sent to several Linux newsgroups, it
    would be best to pick just one newsgroup in which to communicate
    about this document.
   
  1.
  
    This document is by no means complete. It attempts to define a
    framework for developing and deploying Linux as a mainframe
    operating system. If any ideas in this document have duplicates
    somewhere else in the Linux development community, I would be glad
    to know of them, so that
     1. They can be used with much less development effort
     2. They can be referenced in (hopefully) further editions of this
        document
       
  2. Terms and conditions
  
    This document is, for the moment, completely my own responsibility
    and my own copyright. It may be distributed everywhere, but I am the
    only one who may change it. Please send questions and suggested
    changes to my e-mail address, jurgen.defurne at scc.be. All
    trademarks acknowledged.
    
    I intend to put much time into this project. I have a fine, regular
    daytime job as a COBOL programmer, so time should not really be a
    concern.
   
  3. Rationale
  
    The ideas in this document are a reflection of my own experiences
    working with computers and of things I have read about in a whole
    bestiary of publications (magazines, books, RFCs, HOWTOs, the Web,
    symposium records, etc.). The basis is this: Linux is highly
    scalable. For me, it has proven to be far more scalable than any MS
    product. I run Linux on the following systems:
     * Toshiba portable computer 386sx/16MHz/3Mb/200 Mb
     * 386sx/16MHz/16Mb/Diskless
     * 386/33MHz/4Mb/200Mb
     * 486/50MHz/32Mb/100Mb
     * 6x86/P150+/16Mb/500Mb
       
   Some of these systems are interconnected, others not (yet). With the
   use of telnet, X and TCP/IP it is possible to use these systems
   together, to run tasks on different systems etc. But I want more. What
   I would really like is that these interconnected systems can be viewed
   as one single system, with a common address space, and where their
   individual resources are added together to form a more powerful
   computer. The main target would be to make it possible to introduce
   Linux in environments where traditional minicomputers are used for
   data entry and data processing. This may sound like a pretty ambitious
   goal. I don't know if it is. What I do know is that these are
   environments where high availability is a top priority (Note 1).
   
   Another reason to do this project is that at the beginning of the
   year Tandem built a mainframe computer using 64 four-way SMP
   systems, NT, their own interconnection software and Oracle Parallel
   Server. Why shouldn't we be able to do something similar ?
   
  5. Goals
  
   This document must describe not only software, but also hardware and
   system procedures. I hope to revise it very regularly. I would like it
   to contain links to used source code, schematics, construction plans,
   all used sources and a history and possible planning of the project.
   It should also give people who want to make money from Linux the
   possibility to do this on a professional level. That is, they should
   be able to help companies with processing requirements to assess their
   needs, give advice on required hardware, install and implement the
   system and provide service, maintenance and education.
   
  6. What makes a good mini/mainframe environment ?
  
   I haven't had a regular programming education; I am an electronics
   engineer. After school I got into microcomputers and programming, and I
   broadened my education with courses on business organisation and
   industrial informatics. My experience in the mini/mainframe world
   dates back only to January 1997. At first I worked in a
   WANG VS (Note 2) environment; I am still working as a WANG
   programmer, but the WANGs now serve as front-end input processors
   to the mainframe (Bull DPS8000/DPS9000) and as legal document
   processing systems. In my first job, the WANG VS minicomputer was used
   more as a production mainframe system.
   
   Now, what do these systems have in common ?
     * a fault-tolerant, high performance file system
     * database products with transactional capabilities
     * a single compiler for development, which supports the file system
       and the database management system
     * a versatile scripting or job control language
     * an interactive development system
     * easy access to operating system functions
     * easy, but powerful access to operate the system
     * optional : powerful tools for non-programmers to access, extract
       and process data from the database (Reports, Queries, ...)
       
   The main difference between the mini and the mainframe lies in the
   operation of the system. The four main tasks that have to be done on a
   computer system are administration, operation, production
   maintenance and development. On a mainframe these tasks are done by
   different people; on a mini they can be done by one person, or
   shared, but you don't need full-time personnel for the different tasks
   (except for programming, that is). The system running on a mainframe
   can be sufficiently complicated that some tasks or operations may only
   be done by certain trusted personnel.
   
   Operating the system comprises the following tasks :
     * Managing printers and print-queues
     * Managing jobs and job-queues
     * Managing communications and communications devices
     * Managing disks, tapes, workstations and system options
       
  7. What makes a good mini/mainframe computer ?
  
   Basically, the ability to handle tasks efficiently and fast. If you
   want to know more about the chores of operating systems, there is
   enough literature available (see the literature list). The basic
   problem in running a large computer system is the difference between
   batch operations and interactive or real-time operations. You want
   batch programs to be executed as fast as possible, and you want a fast
   response time for the other kind. The basic problem with PC's versus
   mini/mainframe computers is that the IO structure of the PC is very
   primitive. This is starting to change, first with VESA, now with PCI,
   but it still comes nowhere near that of a minicomputer.
   Basically, these systems always have a separate internal processor (or
   more than one) on the IO bus to handle data transport between devices
   and the memory. With I2O, this should become available to the PC
   world, but it is still proprietary and not available to Linux and/or
   Open Source developers.
   
   Tasks compiled for x86 architectures also tend to use more memory.
   Let's take some examples from minicomputers and mainframes that I know
   about and for which I have access to documentation.
   
   CAPTION: Overview of some corporate systems
   
   System        Main memory  Clock   Bus size   Users supported
   WANG VS 6     4 Mb         16 MHz  16 bit     32
   WANG VS 6120  16 Mb        20 MHz  32 bit     253
   WANG VS 6250  64 Mb        50 MHz  32/64 bit  253
   WANG VS 8000  32 Mb        N.A.    32/64 bit  253
   BULL DPS9000  2 x 64 Mb    N.A.    N.A.       N.A.
   BULL DPS8000  2 x 32 Mb    N.A.    N.A.       N.A.
   
   These systems are smaller than PC's in terms of memory, yet they
   support more users and tasks than a PC would. I wouldn't use my
   Toshiba portable to support ten users on a database; yet that is what
   the WANG VS 6 is (was) capable of, with the same characteristics.
   
   This is, for the moment, my main criticism of standard PC's and their
   software : they are extremely inefficient. The first inefficiency
   comes from the methods used to lower the price of a PC : the CPU is
   responsible for data transport between devices and memory. DMA is
   available, but it isn't very efficient. The second inefficiency
   comes from the software mostly used on PC's : it takes up much space
   on disk and in memory.
   
   A third inefficiency is in the software itself : it has so many
   features, but these aren't used much. The more features in the
   software, the less efficient it becomes (Note 3).
   
   There is another thing to be learned from mini/mainframe environments
   : keep things simple. I don't think the current desktop/GUI
   environment is simple. It doesn't have a steep learning curve, but
   basically what you get are souped-up versions of what are essentially
   simple programs. When writing programs or designing systems,
   one should always keep in mind that after a certain point it costs
   more effort to add functionality to a program, while this
   functionality decreases efficiency.
   
  8. Existing functionality
  
   In the area of parallel computers there is the Beowulf system and its
   associated libraries. Its basic target is parallel processing for
   scientific purposes, while my purpose is business data processing. As
   I see it, some of its goals run parallel with mine, especially in
   the areas of existing bottlenecks : the network, distributed file
   access, load balancing, etc. However, the way business programs are
   run differs from scientific computing. MPP is also more about
   creating a computer to run really big tasks, while on a business
   machine you have logins from users for data querying, transactional
   processing, batch processing of incoming data, preparing outgoing
   data and establishing communication with other systems. In this sense,
   what we are looking for is not to distribute one task over several
   computers to speed up processing, but to serve up adequate processing
   power, data manipulation facilities and information bandwidth for a
   large number of users. These goals need different OS support than MPP.
   
   I have studied the Beowulf structure (a Beowulf HOWTO is available on
   the Internet). Beowulf is an MPP system in which only one computer
   effectively runs the application; all other nodes in the system are
   slaves to this one CPU. This is why the Beowulf system is only
   partially suited to attaining my goal.
   
  9. Where do we start ?
  
   We need to start with a set of completely defined Linux operated
   computers, from now on called CPU's, which are somehow connected to
   each other by means of an abstract communications layer or CL. This CL
   can be implemented using serial connections, Ethernet, SCSI or
   anything else that we can devise to make CPU's talk to each other. A
   CPU may be a single-way computer or a multi-way SMP computer.
   
  10. Where do we want to end ?
  
   I think the end point should be to view the system as one single
   entity. To do this, the following requirements should be met :
     * Every process should see the same file system
     * Resources (via /dev files) should be shareable accross CPU's
     * Every CPU should have the same view of OS and memory
     * Process information should be shared accross CPU's
       
   One of the fundamental changes in the OS should be the way exec()
   operates. When exec() starts a new process, this could happen on any
   CPU. The original links need to be preserved, and processes should end
   in the same way as always.
   
   Interprocess communication is straightforward, I think. What I would
   like to know is whether it is worthwhile to strive for a system view
   in which all memory is mapped into one address space. (The idea behind
   it : provide every CPU with the same view of the system : its OS,
   followed by the memory pools of all other CPU's mapped into the same
   address space.) This is what NUMA (non-uniform memory access) is
   about. Can the Linux community attain this subgoal, or does it need
   too many specialised resources ?
   
  11. High Availability
  
   Some key parts of Linux should be redesigned or replaced by
   fault-tolerant parts. The largest part that comes to mind is the
   file system. A few months ago I had a nasty experience. A connector on
   the cable of my SCSI subsystem had a defect, with the consequence that
   the system all of a sudden froze completely while I was busy using
   X. The trouble with e2fs is that on these occasions the whole
   filesystem gets corrupted. This should be made sturdier.
   
   The other point is that the system should not freeze on these
   occasions. It should be possible to provide a bare minimum of
   functionality, e.g. that the kernel takes over completely and switches
   to text mode to provide diagnostic information, or tries to create a
   core dump.
   
   Another problem that I have encountered is the lack of reliability
   when a hard disk drive gives trouble. What happened to me was that,
   using an old SCSI drive, the kernel and/or e2fs started to write
   strange messages when I tried to use the disk. When the system
   encounters problems with devices, the problems should be logged, the
   operation should be stopped and informative messages should be
   displayed.
   
   Another key feature in the area of HA should be the tolerance of the
   complete system when a CPU is missing. A CPU may only be added when it
   passes the self-test completely and finds that everything is
   working fine. When a CPU quits while part of the system, there should
   be possibilities to restart processes which have been interrupted. For
   this, one should provide the programmer with features to help with
   these problems : a transactional file system, checkpoint functions
   (others ?).
   
   The last idea I can think of is the possibility of swapping a
   complete task between two CPU's. A task consists of CODE and DATA. You
   don't need to save CODE. DATA can be completely swapped to hard disk.
   If you have a way to transfer the process information from one CPU to
   another, then it should be possible to reload CODE and DATA and
   restart the process on another system.
   
  12. Summary
  
   There are two targets. The first is the creation of an extension which
   combines several Linux PC's into one system. Users and processes
   should get the same view of the complete system as one system. This
   should also mean that certain administrative chores should depend only
   on centrally stored and shared information.
   
   The second one is to add more and better managed fault tolerance,
   preferably more interactively managed.
   
   Well, this is it. I hope that people ask sane questions, that I don't
   get flamed and that it raises enough interest to advance Linux to a
   higher level.
   
  References.
  
   This reference list is clearly not finished. I need to obtain more
   details about some works.
   The Linux High Availability White Paper
   The Beowulf HOWTO
   The Parallel Processing HOWTO
   Andrew S. Tanenbaum, Operating Systems: Design and Implementation
   
  Notes.
  
   Note 1.
   
     NOTE: I worked for 16 months in a small transport company. The core
     of the business was contained in a WANG VS minicomputer. If the
     system was offline, then nobody could do his work properly. The
     system was basically a database to store dispatching operations,
     the revenues of all operations, the cost control and the
      accountancy. I think there are many small firms that can't afford
      mainframes but need more processing power than the average PC
      can deliver, and where many people need different views of the
      same data.
     
   Note 2.
   
      NOTE: The WANG VS is a particularly good example of a proprietary
      solution which does an excellent job, but at a very steep price.
      They are very expensive for the initial buy, the expansion of the
      system and the maintenance. I think this is one of the main
      reasons why people want to get rid of their WANG systems. You can
      buy, expand and maintain an HP system for one tenth of the price
      the WANG VS costs.
     
   Note 3.
   
      NOTE: If you wonder why I emphasize efficiency : I became
      interested in microprocessors in 1980, when you didn't have much
      microprocessor power or memory. My first computer was a Sinclair
      ZX Spectrum with a whopping 48 kB of RAM. I am still astonished by
      what some programmers could do with that tiny amount of memory.
      There are other points besides this : what processing power could
      be freed up if you were able to use all those wasted processor
      cycles in common desktop PC's ? For small companies, a PC is still
      rather expensive. Combining the power of their PC's could maybe
      give them an extra edge in their operations.
     _________________________________________________________________
   
   In the second part I try to develop an architecture to extend
   Linux into a parallel processing system, not for numerical processing
   like Beowulf, but for administrative data processing.
   
Description of the booting sequence of the multi-processing architecture

   The goal of this document is to establish the components of the
   project mentioned in the previous document (Linux mainframes). To do
   this, a description of the boot sequence will be given, together with
   possible failures and their solutions.
   
   Before attempting this, however, I want to give a short summary of the
   guidelines which should lead us toward the goal of Linux systems which
   can be deployed in corporate environments.
   
   Minicomputers and mainframes provide reliability and high processing
   power. The reliability is largely obtained in two ways : the first
   is the design of the system, the second is the existence of a
   thorough support department with online help and specialised
   technicians. The emphasis in this document is on the hardware side of
   the system.
   
   High processing power is obtained in several ways : the use of cache
   memory, wider data paths, increased clock frequencies, pipelined
   processing and efficient data transfer between memory and IO.
   
  What can we do about reliability ?
  
   On the reliability side the system is dependent on hard- and software.
   If we are to use currently available parts (motherboards and cards)
   then the only thing we can influence is the way systems are assembled.
   Care should be taken to avoid static discharges, by using anti-static
   mats and bracelets.
   
   On the software side we have the Linux operating system which is very
   reliable, with reports of systems running for months without erroneous
   reboots.
   
   However, hardware can fail, and in this respect I think that work
   still needs to be done on Linux. If the error is not in the processor
   or the system memory, then a running system should be able to
   intercept hardware errors and handle them gracefully. If at all
   possible, system utilities should be available to test the CPU, the
   system memory, the cache and the address translation system.
   
   The Linux High Availability White Paper documents clustering of small
   systems. Later on in this document, some other techniques will be
   proposed.
   
  What can we do about the processing power ?
  
   Processing power comes at several levels. On the first level, that of
   the CPU and the main memory, we can't do much. With current
   motherboards with bus speeds of 66, 75 and 100 MHz, we get data
   transfer speeds between memory and CPU of 264 MB/s, 300 MB/s and 400
   MB/s. These should be sufficient for most applications. Memory is
   cheap; sizes of 64 to 128 MB should also give headroom for large
   applications.
   
   The largest problem with standard motherboards is that all IO must
   be handled by the CPU or else by a slow DMA system. This means that a
   large part of the operating system's time is spent in device driver
   code. In mini/mainframe systems this is not the case : all IO is
   handled by separate IO processors. These IO processors implement the
   device drivers and as such free up a large part of the central
   processor.
   
   To relieve the central processor of this burden, there are three
   solutions. The first one is being implemented by the I2O consortium.
   It defines standards for intelligent IO-boards on the PCI bus. These
   boards can transfer the requested data themselves to the main memory
   of the CPU. The only problem is that as far as Linux is concerned, I2O
   is proprietary.
   
   I think that two other solutions should be possible. The first, and
   probably easiest, is to use an SMP motherboard and program the
   operating system so that one processor is completely responsible for
   all IO, while the rest of the CPU's do the real work. Another idea
   is, in the absence of SMP, to use two motherboards : run one with an
   adapted version of Linux to handle all IO and use the other to run
   only applications. The only question here is which system will be
   used to interconnect the motherboards. Especially in the case of mass
   storage devices, you want to stream the data from the device as fast
   as possible into the memory of the application. Currently, this means
   using the PCI bus in one way or another.
   
  Summary
  
   Since we, as Linux users, have no say in the design process of
   motherboards, reliability should be obtained through good standards
   of assembly and by implementing redundancy.
   
   To obtain more processing power, the main CPU should be relieved as
   much as possible from IO. This could be implemented by using SMP or by
   interconnecting motherboards.
   
  A proposal for an architecture for Linux mini/mainframes
  
   Based on the previous ideas, using several motherboards interconnected
   by a high-speed network could give us the following benefits :
     * Redundancy to increase reliability
     * Offloading IO tasks to one or more specially appointed nodes
     * Increased processing power
       
   To obtain these benefits when the system is assembled, some operating
   system changes need to be made. It is possible to interconnect
   computers and make them work in parallel, but then all administration
   must be accounted for manually. So what we need when the system is
   booted is not a view of several separate systems, but of only one
   system.
   
  Description of the boot sequence
  
   When booting the system, all nodes start in the usual way : installed
   hardware is identified, the necessary drivers are run, a connection
   to the network is made, NFS drives are mounted, and local file
   systems are checked and mounted.
   
   In the case of a normal system, all background processes would be
   started and users should be able to log in on the system.
   
   When the system should be seen as one complete system, the boot
   sequence should be modified at this point. Resources which are
   normally only accessible on one node, should be shareable throughout
   the system. To build a common view, every node should have access to a
   common file system. In this file system the directories /dev, /etc and
   /proc should be accessible by every node.
   
   The directory /dev contains all shared devices. The directory /proc
   provides access to system structures which should be shared by every
   node. The directory /etc contains the necessary files to control the
   system :
     * users
     * groups
     * fstab
     * inittab
     * ...
       
   Every operating system on every node must be adapted to work via these
   shared directories.
   
   To control the creation of this shared system, one node will have to
   be designated 'master'. After the initial boot sequence, every node
   will have to wait for the master to initialize the network. This
   initialization can proceed in the following way :
     * create /proc
     * create a system process table (accessible via proc)
     * create /dev
     * gather all shared devices on the network
     * execute fstab, inittab and other scripts to initialize the
       complete system
       
   Started processes fall into two categories. Local processes run on
   the nodes which contain the resources that the process needs access
   to, e.g. getty, fax drivers, etc. Global processes are independent of
   hardware and should be able to run on any node in the system.
   
   Any node should also be able to start a new process on the system. By
   using a load balancing system, all started processes must be evenly
   divided over all nodes.
   
  Providing reliability in the system
  
   The system as proposed above could present some problems. The first
   one is its dependency on a single master computer. If this master
   fails, then the whole system fails. To alleviate this, it should be
   possible to define several masters. If the power is applied and the
   master nodes boot, then the first one to get hold of the
   interconnection network will act as coordinator. If one master then
   fails, the only implication would be that its shared resources are
   not available in the system.
   
   If a master fails while the system is up and running, then the basic
   coordination of the system is gone. To overcome this problem, a backup
   master must be defined. This backup master needs to keep an updated
   copy of all master system information. If the real master should fail
   then all nodes in the network should block themselves until the backup
   master has come up. The system should provide dynamic management of
   nodes. This means that nodes must be attachable by using system calls.
   This goes via the master, which then adds the system to the network.
   If a node must be detached, then none of its resources should be in
   use, otherwise the call fails.
   
   If a node fails while in use, this will surely pose problems. A
   failure can show itself on the network (network interface problem,
   processor error) or locally. If a process uses a remote device, it
   does so by means of messages which are sent over the interconnection
   network. In the case of a malfunction, the addressed node won't
   (can't) answer anymore. The OS must block the process until the
   malfunction is removed.
   
   If there are problems in critical parts of the system, device drivers
   or system processes should not blow up the system or interfere with
   user processes; they should have the means to correctly report the
   problem and block the processes which are using the particular
   resource. If the malfunction is on a local level (device), then the
   device driver can return a message stating the error.
   
   The most critical part in the system is the interconnection network.
   This should be tested and tuned according to system demands. If
   possible, a fast protocol should be used instead of TCP/IP.
   
  Summary
  
   The view every node has of the system should be the same. Devices must
   be shareable accross the interconnection network. The OS should be
   extended so that the exec() function, which is basic for starting
   processes, executes on a global level.
   
   Reliability should be built-in and configurable on several levels. A
   message-based protocol is needed to share devices across the
   interconnection network.
   
  Proposals for interconnection
  
   Basically, there are for the moment two interconnection systems which
   can be used off the shelf.
   
   The first is Ethernet. Depending on the money to spend, you can
   assemble systems with 10 Mbit, 100 Mbit or 1 Gbit networks.
   Increasing bandwidth means increasing the processing power needed to
   handle it. To get the maximum out of your bandwidth, the ideal is an
   SMP motherboard in which one CPU takes care of all network-to-memory
   data transport.
   
   The second one, which attracts interest in the Linux community, is
   the SCSI interface. Using modern SCSI cards, up to 16 motherboards
   can be connected together to provide parallel processing.
     _________________________________________________________________
   
   This is the third part. I have compiled some cases from my own
   experience to highlight some points that need more support in Linux.
   
Cases where Linux might be employed, but where it isn't

   Through several enhancements (Beowulf, Coda FS, Andrew FS) Linux gets
   more and more powerful. But how powerful is powerful really ? Linux is
   announced and used in more and more places, but there is a serious
   lack of numbers on the capacity of Linux in different environments and
   configurations.
   
   This is however a crucial point. In many environments, Linux gets
   introduced through the reuse of PC's (which is in itself a good
   point). There are however other environments where the introduction of
   new hard- and software depends on the provision of hard numbers for
   acquisition, deployment, education, maintenance, infrastructure and
   depreciation of systems. This can range from a small office which
   only needs to cough up the required cash, up to a financial
   institution which has large data processing and communication needs.
   
   In some of these areas Linux probably hasn't even gained a foothold,
   because those people use computers as a means to an end. The computer
   itself does not stir their imagination. They have tasks to be done,
   and the computer is their instrument to complete those tasks faster
   and more precisely. These are the environments which are lured into
   buying MS products. I know, however, several people who work in
   various different Wintel environments, and none of them are
   satisfied. Some complaints :
   
   Lock-ups, of course : power users lock up their PC's more easily,
   because they use a lot of applications next to each other.
   
   Unexplainable configuration changes : you enter your office and your
   application does not start. Reason : some ASCII text file has
   reverted to a previous state (I had this one several times with the
   TCP/IP 'services' file).
   
   MS Office for Windows 95 : you cannot seem to use Word for large
   documents (this is a complaint from a user in a large company).
   
   Windows NT : cannot be deployed in situations where older
   applications need access to older and/or proprietary hardware.
   
   I am sure anyone who has ever used these systems knows other bugs.
   
   I think that one of the reasons why Linux isn't employed more in
   these environments is that it is mostly deployed in a single type of
   configuration, consisting of an IA32 CPU, a PC AT architecture, an
   IDE/SCSI disk subsystem, an Ethernet NIC and standard serial devices.
   This makes it very easy to use Linux in the following places :
     * e-mail
     * nntp
     * http
     * file systems
     * printing
     * MPP (Beowulf)
     * embedded systems
     * workstations
     * telecommunications
     * networked, distributed systems
       
   These are technical solutions for technical problems, implemented by
   technical people. However, for some places some pieces are still
   missing, and there are places where Linux could be used, but where it
   is not. The usability of Linux still depends too much on the
   technical skill level of the user. This should not be necessary.
   Companies should be able to deploy Linux quickly, efficiently and
   flawlessly. Introductory courses should be provided. This will mostly
   mean migrating from Windows knowledge to Linux knowledge. People
   should be made to understand that there are three pillars in the
   usage of a computer system and/or program :
     * operations
     * administration
     * maintenance
       
   On the system level these should be integrated transparently and
   tightly. A user shouldn't need to go through heaps of paper and
   manuals to find something quickly, so menu-driven is probably the
   best answer for this, with good context-sensitive help. I even think
   that, from the point of view of the user, things should be accessible
   under a heading 'Applications', where all his production programs
   reside, and a heading 'Maintenance', where operational,
   administrative, system maintenance and diagnostic programs are
   located.
   
   If we want Linux systems to be used more in environments where people
   are not concerned with their computer per se, but see it as a means
   to do their job, then support will have to grow on several levels. To
   illustrate these levels, I will present some cases in more or less
   detail. These cases represent environments where I have worked,
   customers who needed support and people I know.
   
  Case 1 : The SOHO environment
  
   With this I mean the family-sized company which provides some basic
   service (grocer, plumber, carpenter, etc.). At most two persons are
   responsible for handling all administration. This consists mostly of
   two parts : accounting and handling of incoming/outgoing messages.
   The first part of the problem is providing this environment with a
   suitable accounting package applicable to the country where the
   company resides.
   
   The second part of the problem is handling all incoming and outgoing
   messages. This requires access to three channels : phone, fax and
   e-mail (if there are any other options, they are probably too
   expensive for this environment). Depending on the situation, there
   could be constraints on the usage of the channels (e.g. no channel
   should block another channel; when answering the phone, the fax and
   e-mail should not be prevented and should not prevent each other).
   The configuration could probably be extended using a PABX card in the
   system, to provide extended telephony services via Linux.
   
   Like it or not, these people have become accustomed to using WYSIWYG
   word processors and spreadsheets, so the least that must be done is
   provide them with this functionality. There are at least two good
   packages available for Linux in this respect. Another thing that
   should be provided is a customer database which is closely linked to
   the former package. Creating new documents and using fill-in
   documents from a user entry is a must. Creation and insertion of
   simple graphics should be an available option too.
   
   If we consider at most two people, then the system could be
   configured using two workstations of the same capacity, where some
   tasks are shared between them, or it could be done using one more
   powerful system, which provides all services, and one cheap PC
   workstation, configured as an X server.
   
  Case 2 : A medium sized company I (10 users)
  
   File- and print-services, bookkeeping, inventory control
   
   The company where I first worked, from 1990 to 1991, had a Novell
   Netware system installed. We used the system to provide print
   services for Mac and PC systems, as a repository for all kinds of
   drivers and diagnostic software, and as a shared database via the
   bookkeeping and inventory control program. Everybody who needed
   access to the network had his or her own PC or Mac. We mostly used
   DOS back then, although with the introduction of Windows 3.0 some
   people migrated to it. Everybody had access to a phone and there was
   one central fax in the administrative department. We installed and
   maintained PCs and Macs for graphical applications. These
   applications provided output for typesetting printers (mostly via
   PostScript) or plotters. The supported applications were Adobe
   Photoshop, Aldus PageMaker and AutoCAD. We were also a reseller of
   the bookkeeping package that was used on the network.
   
   The printing could be spooled to several large laser printers, a
   high-speed dot-matrix printer and a photographic typesetter.
   
   File services under Linux are probably the easiest of problems. I
   recompiled, linked and started a small TCP/IP network using two
   computers in less than an hour. NFS is very comprehensive, as are
   telnet and other TCP/IP services. If you need to provide only a
   central server, then the following things need to be done:
     * assign separate network numbers to your NICs
     * configure server and workstations for NFS
     * configure the exports file
       
   For the workstations, the following needs to be done:
     * assign separate node numbers
     * configure NFS
     * add your network directory to fstab
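
   As an illustration, the two configuration fragments below sketch
   what these steps amount to (the host names and the exported path are
   hypothetical):

```
# /etc/exports on the server -- grant both workstations
# read/write access to the shared directory
/home/shared    ws1(rw) ws2(rw)

# /etc/fstab entry on each workstation -- mount the share at boot
server:/home/shared   /mnt/shared   nfs   defaults   0   0
```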
       
   The main difference between Novell and NFS is in the administration.
   On a Netware server, all administration is kept central to the server.
   The only thing which needs to be done on a workstation is load an IPX
   driver at boot time. On a TCP/IP workstation, some administration is
   kept centrally and some administration is kept locally. This makes the
   process of maintaining and updating the network more laborious.
   
   Installing print services under Linux is generally much harder than
   under Netware, because all settings must be added manually to the
   printcap file with a text editor. But since this is a very structured
   file with a rather small set of commands, why hasn't anybody ever
   written a dialog system to scan printcap and present the user with an
   overview of the available printers and the possibility of adding and
   modifying printers and their settings? This would be a great step
   forward in installing printers. Filters for different types of
   printers could be presented, so that the configuration on the network
   could be simplified (as an aside, Red Hat provides such a system).
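
   For reference, a printcap entry is compact but cryptic; a minimal,
   hypothetical entry for a queue served by a remote print server might
   look like this:

```
# /etc/printcap -- queue "laser", forwarded to host "printserver"
laser|lp|main laser printer:\
        :sd=/var/spool/lpd/laser:\
        :rm=printserver:rp=laser:\
        :mx#0:sh:
```

   Exactly this kind of terse, colon-separated syntax is what a dialog
   front end could hide from the administrator.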
   
   The other part of printing is the operation of the queues. The lpd
   system provides only command-line control. But since this system is
   also well understood, why haven't there been any attempts to rewrite
   the lpd system for menu-driven operation? After all, entering a
   command or pressing a function key can invoke the same behaviour. All
   queues and printers could be presented to the user, with the
   possibility of providing more details.
   
   The accounting program was written in Clipper and did not use
   Btrieve. This means that all access to the data in the files
   generated a lot of traffic over the network. This was alleviated by
   segmenting the network into three parts, so that the accounting
   department didn't interfere with the other departments. The whole
   package ran under DOS. Over the years, the company which programmed
   the package made the transition from Clipper to FoxPro in 1994, and
   only as recently as 1997 did they make the transition from DOS to
   Windows (with the DOS version still being sold and supported).
   
   This presents us with a case for supporting the migration of xBase
   dialects to Linux, while adding value to these languages through
   transparent client/server computing. There should also be support for
   people migrating from these DOS-based systems to Linux. There are a
   whole lot of programmers who work alone and make a living by writing
   and maintaining small database applications for SOHO users (using
   xBase and several 4GL tools which run under DOS). Providing
   incentives and support for these people to migrate, and to help their
   customers migrate, could give a double benefit to Linux. The key
   lies, of course, in the way that support for these tools, or
   conversion tools for them, becomes available under Linux.
   
   Printing support under Un*x, and hence Linux, has always been
   strongly oriented toward typesetting. Providing support for
   PostScript should not be a problem under Linux. Adding a typesetter
   should be as easy as installing a printer on a server or on the
   network via a print server. There are already some strong graphical
   packages available for Linux. In this case, migration is a question
   of importing and/or converting graphics files and showing the user
   how to do his normal tasks with the new application.
   
   Plotting and/or cutting should be the same as printing. The
   application program is responsible for translating its own internal
   drawing database into a format that can be used by the addressed
   peripheral.
   
  Case 3 : The drafting department
  
   Drawing workstations, central database, drawing lock, usage statistics
   
   Drafting departments are a case where networking and central storage
   are really put to the test. Such a setup consists of a drawing
   database which acts as a front end to the drafting programs. Users
   should be able to look at drawings; create, edit, delete and print
   drawings; and collect usage statistics about drawings. In addition,
   only one user should be able to edit a drawing, or part of a drawing,
   at one time, and it should be possible to see who is editing what. If
   this all sounds like using a file system, then you are right. The
   difference is that you only use one type of file. I worked on one
   such system in the previous case; it was written using Clipper as a
   front end. I know of other environments where AutoCAD is used, but
   under a Windows NT network, and there are some companies who deliver
   complete turnkey solutions consisting of powerful minicomputers and
   proprietary workstations for real high-end drafting work.
   
   Providing the incentive to migrate to Linux consists in providing a
   powerful server with large storage to accommodate all the drawings
   and a fast network to deliver them to the workstations. All
   workstations should be tuned to the max to deliver the utmost in
   graphic display and manipulation. Of course, utilities are necessary
   to convert the original drawing database and all the drawings.
   Networking should be flawless, and the program which loads a drawing
   should indicate the time needed to get the file and how far along the
   transfer is.
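
   On a plain Linux file server, the one-editor-at-a-time requirement
   could be sketched with advisory locks. The wrapper below is a
   hypothetical illustration (the lock directory and the echo standing
   in for the drafting program are my own assumptions), built on the
   util-linux flock(1) utility:

```shell
# edit_drawing: allow only one editor per drawing at a time, using an
# advisory lock taken with flock(1).  Purely a sketch -- the lock
# directory and the stand-in "editor" command are illustrative.
edit_drawing() {
    drawing=$1
    lockdir=${DRAWING_LOCKDIR:-/tmp/drawing-locks}
    mkdir -p "$lockdir"
    lock="$lockdir/$(basename "$drawing").lock"

    # flock -n gives up immediately if another user holds the lock;
    # the echo stands in for launching the real drafting application.
    flock -n "$lock" -c "echo editing $drawing" \
        || { echo "$drawing is locked by another user" >&2; return 1; }
}
```

   A menu front end calling such a wrapper would refuse a second editor
   immediately, instead of risking two people changing one drawing.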
   
  Case 4 : A medium sized company II (20 users)
  
   Mini computer system, data entry and retrieval, commercial department
   
   This pertains to my previous job: a small transport company which had
   decided, ten years ago, to implement a computer system to automate
   several tasks and to keep a database of all completed shipments. They
   had chosen a WANG VS, which was back then a successful system with
   many advanced features. Custom software had been developed first by
   an outside company, later by an in-house programmer. The system
   contains a very comprehensive fax package which can be used by
   anyone, but with strong security features. All outgoing messages are
   put in one queue, where the operator can change their times and/or
   priorities. All communication with the minicomputer is via terminals
   or via emulation cards in PCs. Accounting is also done on the
   minicomputer, but the two systems are not linked. The system is also
   equipped with a background task which controls batch jobs in a queue.
   
   There are many medium-sized companies which still use minicomputers
   and have a problem shedding them, due to their highly specialized
   software. Migration from a Un*x system to a Linux system should not
   pose as many problems as migrating from a completely proprietary
   system to Linux.
   
   The main problem with these mini-computer systems is their high
   maintenance cost. That should be the most pressing reason to migrate,
   although Y2K could also be an incentive (not so with WANG VS, which is
   fully Y2K compliant).
   
   To provide the same functionality, a DBMS package should be available
   which provides a data dictionary, a screen design package and a
   COBOL-74 compiler with a preprocessor to translate simple SQL SELECT
   statements. There are several packages available. One package aids in
   the migration from WANG PACE (the WANG DBMS) to Oracle (at the
   moment, Oracle has only announced a port of Oracle to Linux), while
   Software AG has tools to port WANG PACE applications and screens to
   ADABAS. As for the compiler, where I currently work the porting is
   done from WANG to HP-UX using Micro Focus COBOL. The security
   features of the database package should at least include rollback
   recovery. The file system provided should absolutely not be ext2.
   Reliability should be favored over speed. When the power fails, the
   file system itself may be damaged, but such damage should be simple
   to clean up. Damage to transactional files is to be repaired with the
   rollback option.
   
   On the hardware side, I noted that SCSI-II provided enough speed to
   handle some 20 users, but this was a system with a specialized I/O
   processor to handle all data transfers between main memory and the
   peripherals. To know how Linux fares here, benchmarks should be run
   and numbers should be provided. In our last configuration (a 50 MHz
   CPU with 64 MB), under a heavy load, our response time was under 10
   seconds.
   
   Fax support must be provided not only to interactive applications,
   but also to batch applications.
   
   Batch processing of all tasks should be supported. Some programs can
   be started, used to enter selection data and then launched at will in
   the background or in the foreground, at a date and time the user can
   enter. cron is fine for highly skilled people, but not for your
   data-entry clerk, so you need a front end which asks for the date,
   time and repetition rate of your job. The application itself should
   be able to provide the required parameters.
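
   Such a front end mostly has to translate the clerk's answers into a
   crontab line. A minimal sketch (the argument layout and the example
   job commands are my own assumptions, not an existing tool):

```shell
# schedule_job: build a crontab entry from a time, a day and a
# repetition rate, shielding the user from cron's field syntax.
# Arguments: HH:MM  day-of-month (or '*')  daily|monthly  command
schedule_job() {
    hour=${1%:*}
    minute=${1#*:}
    case "$3" in
        daily)   echo "$minute $hour * * * $4" ;;
        monthly) echo "$minute $hour $2 * * $4" ;;
        *)       echo "unknown repetition rate: $3" >&2; return 1 ;;
    esac
}
```

   The generated line would then be appended to the user's crontab with
   the crontab command; a dialog front end only needs to ask the three
   questions and call something like this.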
   
  Case 5 : OEM
  
   Cash registers, inventory control, proprietary hardware
   
   This company builds cash register systems using mostly common PC
   hardware and one piece of proprietary hardware which interfaces to a
   magnetic card reader, a bar-code reader, a money drawer and a
   keyboard/display/pricing printer. The cash register is connected via
   a network to a server which provides an inventory and a price list.
   Upon booting, the cash register connects to the network and loads its
   OS from the server. Every server can connect at night to a central
   database to update its price lists and to order items which are
   running out of stock.
   
   For the cash register, a multi-user, multi-tasking OS is clearly
   overkill, while in the case of the server, multiple cash-registers
   could connect via the network to the server. The cash register would
   benefit, though, from multi-threading.
   
   Software development for servers and departmental systems is usually
   done with a 4GL tool, with a higher-level language used only for
   those parts which the 4GL does not support.
   
  Case 6 : Financial company (appr. 1000 users, agencies)
  
   Minicomputers, mainframe computers, terminals, workstations, TCP/IP
   
   The production environment of this company consists of five WANG VS
   minicomputers, used for data entry, data preprocessing and to connect
   agencies remotely through telephone lines. It also includes a Bull
   mainframe system with two CPUs, 128 MB of memory, 240 GB of on-line
   storage capacity, and a transaction processing system consisting of a
   network database and a screen editing and runtime program. All this
   is controlled using JCL and COBOL-74. TCP/IP is implemented between
   all systems.
   
   Replacing the minicomputers with Linux systems should be relatively
   straightforward. Since no WANG PACE is implemented on these, only the
   COBOL-74 programs need to be migrated. Data entry and remote
   connection could be done using telnet and/or serial connections.
   Transferring data between the mainframe and the other systems is no
   problem; all this happens using FTP.
   
   Now, let us think really BIG! Could a case be made to build a system
   using Linux which can replace a mainframe computer, given the specs
   above? As said above, more numbers and benchmarks are needed on Linux
   and its implementations to know how powerful Linux can be.
   
  Case 7 : Software for highly skilled, non-technical people
  
   Doctors, dentists, lawyers, chemists, ...
   
   These cases resemble the SOHO case, but additionally need very
   specialized software to support the job. This software is mostly
   written by very specialized companies (niche software). What would
   these users need in terms of software and maintenance to be convinced
   to migrate to Linux?
   
   One of the answers is surely that they must be able to migrate their
   existing applications easily, and that conversion of their source
   code is supported by tools and APIs which provide the same
   functionality as (or better than) their old tools.
   
   Configuration of these systems may be more specialized. Normally the
   user would only use the system (enter customers, query the system);
   all administrative and configuration chores could be left to the
   implementor. The applications themselves are already as user-friendly
   as they can be, due to their specialized nature.
   
  Conclusion
  
   I have presented several real-world cases where, IMHO, Linux could be
   used. In most of them there are two recurring themes.
   
   The first is the need for migration support from other platforms to
   Linux. This support spans a whole range, from multi-platform
   compilers through database migration up to replacement user
   applications.
   
   The second is the need to provide more user-friendly administration
   and operation. This may be through character-based dialog boxes as
   well as through GUI systems. In any case, access to these functions
   should be more centralised.
   
   Other themes which pop up are the following:
     * Enhanced telecommunications support through more comprehensive
       fax packages and PABX support
     * Enhanced reliability
     * Numbers and benchmarks on Linux applications and configurations
     * Internationalised accounting packages
     * A customer database system which integrates with other apps
     _________________________________________________________________
   
                      Copyright  1998, Jurgen Defurne
            Published in Issue 33 of Linux Gazette, October 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                Using the Xbase DBMS in a Linux Environment
                                      
                               By Gary Kunkel
     _________________________________________________________________
   
    Introduction
    
   The Xbase file structure has been around quite a while and was one of
   the first widely available DBMS tools for microcomputers. It has
   become a de facto industry standard for text-based databases and is
   supported by many vendors, including the Borland Database Engine,
   Microsoft's FoxPro, Clipper, Sequiter's CodeBase and others.
   Xbase-type data files will be with us for a while.
   
   The Startech Web Server at
   http://www.startech.keller.tx.us/xbase/xbase.html maintains a public
   domain, open source C++ library for accessing Xbase-type data files
   in a multi-user environment. The library supports automatic record
   locking, memo fields (both dBase III and IV versions) and .NDX-style
   indices. There is also an API for interfacing the library to an
   Apache Web Server, providing database access to web pages. Several
   example programs provide a framework for creating, browsing and
   updating databases. There are examples which demonstrate how to use
   the library with an Apache Web Server and how to use it in
   conjunction with the wxWindows library. Some readers of this article
   will recognize the wxWindows library as a cross-platform GUI C++
   library.
     _________________________________________________________________
   
    System Requirements
    
   In order to use the Xbase DBMS library, you'll need a C/C++ compiler.
   The original library was built on a Slackware distribution with the
   GNU C/C++ compiler, but there are examples on the site for using the
   library on other platforms, including Windows, SUN and VMS.
     _________________________________________________________________
   
    Getting Sources
    
   To download the library sources, point your web browser to
   http://www.startech.keller.tx.us/xbase/xbase.html and select the
   latest version, which at the time of this writing is version 1.7.4,
   dated 6/18/98. There are a couple of flavors available, but for the
   purpose of this article, download the UNIX tar version. You may also
   want to grab the HTML documentation for the library at the same time.
   Alternatively, you can connect via FTP to ftp.startech.keller.tx.us
   and retrieve the software from the pub/xbase directory.
     _________________________________________________________________
   
    Installing Sources
    
   To install the Xbase library under the /usr/local directory, execute
   the following commands: cd /usr/local and mkdir xbase. The next step
   is to set up access rights to the Xbase directory tree. Your site may
   have specific protocols on directory access rights which you may need
   to address at this point. If not, then the commands "chown
   YOURUSERID.users xbase", then "chmod 775 xbase" will get you going.
   
   Now create a source directory and copy the source code into it: "cd
   xbase", "mkdir src", "cp /home/of/xbase.tar.gz /usr/local/xbase/src",
   "cd /usr/local/xbase/src", "gunzip xbase.tar.gz" and lastly "tar -xvf
   xbase.tar". At this point the Xbase source code should be in the
   /usr/local/xbase/src directory and be ready to build the library.
     _________________________________________________________________
   
    Building the Library
    
   Before building the library, review the options.h file. This file
   contains any of the Xbase configuration switches you may want or need
   to change depending on what you are trying to do. To build a DLL
   library, type "make dll". To build a static library, type "make all".
   
   It should compile cleanly. Errors at this point can often be traced to
   the .h header files currently in use at your site. If you run into
   errors at this point, notify xbase@startech.keller.tx.us for help
   building the library.
     _________________________________________________________________
   
    Building a Sample Program
    
    The following sample program creates a sample database and index.

/*  sample1.cpp  */
#include "xbase.h"
int main()
{
  Schema MyRecord[] =
  {
    { "FIRSTNAME", CHAR_FLD,     15, 0 },
    { "LASTNAME",  CHAR_FLD,     20, 0 },
    { "BIRTHDATE", DATE_FLD,      8,  0 },
    { "AMOUNT",    NUMERIC_FLD,   9,  2 },
    { "SWITCH",    LOGICAL_FLD,   1,  0 },
    { "FLOAT1",    FLOAT_FLD,     9,  2 },
    { 0,0,0,0 }
  };

  /* define the classes */
  XBASE x;                      /* initialize xbase  */
  DBF MyFile( &x );             /* class for table   */
  NDX MyIndex( &MyFile );       /* class for index 1 */

  SHORT rc;                     /* return code       */

  if(( rc = MyFile.CreateDatabase( "MYFILE.DBF", MyRecord, OVERLAY )) != NO_ERROR )
     cout << "\nError creating database - error code = " << rc;

  /* the call which attaches MYINDEX.NDX through the NDX class goes
     here; see the HTML documentation for its exact form */
  return 0;
}

Assuming you keyed the program source into directory /usr/local/xbase/myproj,
type "g++ -c -I/usr/include -I/usr/src/linux/include/asm-i386 -I../src
sample1.cpp" to compile the program and "g++ -o sample1 sample1.o
../src/xbase.a" to link the program. The asm-i386 directory in the
above include line is for Linux running on the Intel platform; other
platforms require the corresponding include directory.
  __________________________________________________________________________


    Conclusion


Although the Xbase library is not a 100% complete Xbase solution, it is
a stable and reliable library capable of handling various database
requirements. If you are looking for database libraries in general, or
need access to Xbase files in particular, give Xbase DBMS a try. If you
are a C programmer new to C++ object-oriented programming, the Xbase
DBMS is easy to learn and will help you transition to the world of
object-oriented programming. If you have never programmed in C or C++
before, this library provides complete enough examples to get you
started programming in C/C++ with confidence.




  __________________________________________________________________________


                       Copyright  1998, Gary Kunkel
            Published in Issue 33 of Linux Gazette, October 1998
                                      




  __________________________________________________________________________



  __________________________________________________________________________



    "Linux Gazette...making Linux just a little more fun!"



  __________________________________________________________________________




                  Book Review: Website Automation Toolkit
                                      
                             By Andrew Johnson
                                      


  __________________________________________________________________________



     * Author: Paul Helinski
     * Publisher: John Wiley
     * E-mail: info@wiley.com
     * URL: http://www.wiley.com/
     * Price: $44.99 US
     * ISBN: 0-471-19785-8



  __________________________________________________________________________


Website Automation Toolkit is a collection of tools, most created by
the author's company, ranging from simple configuration control over
the look and feel of your entire site, to remote creation and updating
of pages on the site, to shopping carts and simple database
facilities. It is not a book about running and configuring web servers
or about teaching the Common Gateway Interface (CGI) protocol.

The introductory preface and first chapter address
the motivation behind the book and a few of the benefits of using
some form of automation in maintaining your web site. Next are
two chapters discussing some of the alternatives
(and alternative proprietary software) to the author's CGI-oriented
approach to automation.

The majority of the tools provided are, in fact, Perl CGI programs
created by the author's company. These tools are officially free.
While the license in the book states that you are
not allowed to redistribute them without permission, you are
allowed to use and install them as many times and for as many
clients as you wish. This seemed a bit contradictory, so I asked
the author for some clarification. He responded with the
following statement (used with permission):

     I don't do courts, but the intent of the license is to prevent
     people from putting our utilities on shareware CD-ROMs without the
     supporting text. It's more of a support issue than an ownership
     one. I wrote the book because these things were far too useful to
     keep to ourselves.

Chapters 4 and 5 mark the transition into the main part of the
book by providing a short justification for why Perl is the
language of choice, and a brief introductory overview of Perl basics. This
overview is not intended as a guide to the
Perl programming language, but merely to acquaint the user with
some of the essentials so that later sections on configuring and
customizing Perl scripts will be less daunting to the
inexperienced.

The remaining chapters provide a tool-by-tool installation and
instruction manual. There are too many tools to cover them all with
any detail, so I will very quickly run through the remaining
chapters and follow with my general impressions.

Chapter 6 covers SiteWrapper, a package that wraps your site so
that all of your pages are served by a CGI program. Chapter 7
introduces Tickler, a program for soliciting e-mail addresses of
visitors and notifying them of content changes. Chapter 8 follows
with a discussion of the freely available
Majordomo mailing list software for creating and maintaining
mailing lists.

Chapter 9 addresses tracking visitors with discussions of the
Trakkit tool (requires SiteWrapper) and the freely available
Analogue program. Chapter 10 covers a Shopping Cart package
(a modified SiteWrapper program) along with some order processing utilities.

Chapter 11 covers WebPost, the utility which, according to the
author, sparked the book. This system allows you to create, edit,
delete or upload pages to your site and automatically generate
or update the cross links among pages.

Chapter 12 provides three search utilities for your site,
depending on whether you are using SiteWrapper, WebPost or neither.
Chapter 13 covers the AddaLink tool for creating and maintaining
a hot list of links. Chapter 14 covers QuickDB, a simple text-based
database engine with a browser interface for adding, editing and
deleting entries.

Chapter 15 presents a Bulletin Board utility, and also discusses
using FrontPage for a Discussion Board. Chapter 16 takes the next
step by covering a couple of freely available Chat programs.

Chapter 17 provides a couple of search engine agents, one to
submit a URL to a multitude of search engines and two more which
report your location on the search engines. The final chapter
presents BannerLog and ClickThru, tools which track and log
click-throughs and page views of banner ads on your site.

I set up a dummy site on my Linux box for installing and trying
out a few of the provided utilities. The installation instructions in
each chapter are divided into UNIX and NT sections and are relatively
simple to follow. However, some unfortunate problems arose.

There are .zip files for each package, and non-zipped directories
for each of the packages on the CD-ROM. A mild inconvenience is
that some of the .zip files were created with extraneous path
information included, and the individual files in the non-zipped
directories are riddled with ^M characters. The author has created
a web site where you can find problem reports and corrections, and
``cleaner'' versions of the source files for downloading. The
site is located at http://www.world-media.com/toolkit/.

Another inconvenience is that every Perl script must be checked (and
possibly edited) for the proper path to Perl on your system. No script
is provided to automate this task, although writing one would be
trivial for any experienced Perl programmer. Note that even if the
first script you examine has the proper path, others definitely will
not--so you must check and edit those with the incorrect path for your
system.
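
As a sketch of how small such a script could be, the following shell
function (the name and calling convention are my own invention)
rewrites the interpreter line of every script handed to it:

```shell
# fix_shebang: point the #! line of each given script at the local
# perl, whatever path the distribution shipped with.
# Usage: fix_shebang /usr/bin/perl *.cgi
fix_shebang() {
    perlpath=$1
    shift
    for f in "$@"; do
        # rewrite only the first line, and only if it mentions perl
        sed "1s|^#!.*perl.*|#!$perlpath|" "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
}
```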

More serious problems arise with the Perl code. The open calls for
reading and writing files are not consistently checked for success or
failure. You'll first notice a problem when you install the
SiteWrapper package and try to change the color scheme of your site
with the included SiteColors program. The installation guide omits
mentioning that your server will need write access to the tagfile.dat
file where the color scheme is stored. Since the program does not
check the return value of the open call, it fails silently: your color
scheme is not updated and no error appears in your server's logs. I'd
seriously recommend locating all calls to the open function in all
.cgi scripts and adding at least a ||die "$!"; statement to those that
don't have it.

Other deficiencies with the Perl scripts are that they are not -w
clean (for warnings), won't compile with the ``strict'' pragma, do not
use -T for taint checking and use the older cgi.pl library rather than
the CGI.pm module for Perl 5.

Even with the above comments and concerns, the packages are, for
the most part, easy to install and get working. Installation and
configuration of the basic SiteWrapper package took less than an
hour, including time spent checking and cleaning the source code
and creating simple header and footer files and a couple of dummy
pages. When using this system, every page is served from a CGI
program, even essentially static pages. This method allows for a great deal
of flexibility and a centralized configuration style of
management, but could become costly in terms of server load if
your site is large or heavily trafficked.

I had a little more trouble getting the WebPost system running
properly, mainly because I chose to set it up in a subdirectory
of the SiteWrapper directory and a few issues were involved
in getting the two packages to play nicely together. Once it was
set up, however, it worked as advertised. While I found parts
of the interface to be a bit clunky for creating web pages, it is
a functional way to create and edit pages remotely using
a browser.

Other tools were less problematic to install. With Trakkit, for
example, I was tracking and logging myself within a few minutes of
unpacking the package.

On the whole, if you are looking for instant ``shrink-wrap''
automation software with point-and-click setup and configuration,
you'll be disappointed. However, typical Linux users accustomed
to file-based configuration should have little trouble with
these tools, especially if they already have some experience with
Perl programming. The programs are not stellar
examples in their present incarnation, but they can provide an
inexpensive automation system for budding webmasters willing to get their hands
 dirty with a little Perl code.
Hopefully, many of the concerns mentioned above will be addressed in a
future edition.



  __________________________________________________________________________


                      Copyright  1998, Andrew Johnson
            Published in Issue 33 of Linux Gazette, October 1998
                                      




  __________________________________________________________________________



                          Linux Gazette Back Page
                                      
           Copyright  1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
                              Copying License.
                                      



  __________________________________________________________________________



  Contents:

     * About This Month's Authors
     * Not Linux




  __________________________________________________________________________




                         About This Month's Authors
                                      



  __________________________________________________________________________





    Larry Ayers

Larry lives on a small farm
in northern Missouri, where he is currently engaged in building a
timber-frame house for his family. He operates a portable band-saw mill,
does general woodworking, plays the fiddle and searches for rare
prairie plants, as well as growing shiitake mushrooms. He is also
struggling with configuring a Usenet news server for his local ISP.


    Randolph Bentson

Randolph's first UNIX experience was booting a BSD VAX system on
July 3, 1981--the whole town had a celebration the next day.
He began contributing to the Linux kernel in May 1994, and his book
Inside Linux: A Look at Operating System Development describes
how many modern operating system features have evolved and become essential
parts of Linux.


    Ken O. Burtch

Ken has been using Linux since kernel 0.97. During
the early 1990's he wrote software for the Apple IIgs computer, including
Pegasus Pascal (an Ada-Turing hybrid language) and the award-winning shareware
game "Quest for the Hoard". His hobbies include reading and writing fantasy
literature and collecting cartoons. He is currently the president of PegaSoft
Canada, a Linux development company based in southern Ontario. He can be
reached via the PegaSoft web site at http://www.vaxxine.com/pegasoft.


    Jurgen Defurne

Jurgen is an analyst/programmer at a financial company (Y2K and
Euro conversions).
He became interested in microprocessors 18 years ago, when he first saw
the TRS-80 in the
Tandy (Radio Shack) catalog.
He read all he could find about microprocessors, which was
then mostly confined to the 8080/8088/Z80. The only thing he could do back
then was write
programs in assembler without even having a computer.
When he was 18, he had saved enough money to buy his first computer,
a Sinclair ZX
Spectrum. He studied electronics and learned programming mostly
on his own. He has worked with
several languages (C, C++, xBase/Clipper, COBOL, FORTH) and several
different systems in
different areas: programming test equipment, single- and
multi-user databases for
quality control and customer support, and PLCs in an aluminium
foundry/milling factory.


    Jim Dennis

Jim is the proprietor of 
Starshine Technical Services.
His professional experience includes work in the technical
support, quality assurance, and information services (MIS)
departments of software companies like
 Quarterdeck,
 Symantec/
Peter Norton Group, and
 McAfee Associates -- as well as
positions (field service rep) with smaller VARs.
He's been using Linux since version 0.99p10 and is an active
participant on an ever-changing list of mailing lists and
newsgroups.  He's just started collaborating on the 2nd Edition
for a book on Unix systems administration.
Jim is an avid science fiction fan -- and was
married at the World Science Fiction Convention in Anaheim.


    Michael J. Hammel

   A Computer Science graduate of Texas Tech University, Michael J.
   Hammel is a software developer specializing in X/Motif, living in
   Dallas, Texas (but calls Boulder, CO home for some reason). His
   background includes everything from data communications to GUI
   development to Interactive Cable systems, all based in Unix. He has
   worked for companies such as Nortel, Dell Computer, and Xi Graphics.
   Michael writes the monthly Graphics Muse column in the Linux Gazette,
   maintains the Graphics Muse Web site and the Linux Graphics mini-Howto,
   helps administer the Internet Ray Tracing Competition
   (http://irtc.org) and recently completed work on his new book "The
   Artist's Guide to the Gimp", published by SSC, Inc. His outside
   interests include running, basketball, Thai food, gardening, and dogs.
   
    Andrew Johnson
    
   Andrew is currently a full-time student working on his Ph.D. in
   Physical Anthropology and a part-time programmer and technical writer.
   He resides in Winnipeg, Manitoba with his wife and two sons and enjoys
   a good dark ale whenever he can.
   
    John Kacur
    
   John has a degree in Fine Arts and Russian. After two years in the
   former Soviet Union and two years in Germany, he has returned to
   Canada to pursue a second degree in Computer Science and rediscover
   his love of computer programming.
   
    Damir Naden
    
   Damir is a mechanical engineer, working as Manager of Special
   Projects with Brampton Engineering Inc. in Ontario, Canada. During the
   day he tries to figure out how to make special machinery for plastic
   extrusion, and he splits his spare time between his own small
   business, L&D Technologies (specializing in machine design and project
   management), tinkering with Linux, and mountain biking.
   
    David Nelson
    
   David manages scientific research at the U.S. Department of Energy.
   Before that he earned his living as a theoretical plasma physicist. He
   started programming on the IBM 650 using absolute machine language and
   later graduated to CDC, DEC and Cray machines for his research. But
   Linux is the most fun. He and his wife, Kathy, enjoy tennis, skiing,
   sailing, music, theater, and good food.
   
    Mike Richardson
    
   Having variously worked in academia and industry, Mike is now a
   self-employed programmer and general-purpose computer dogsbody. Mostly
   he writes C and C++ for Linux (good) and Windows (bad). In his spare
   time he crawls down holes in the ground, and is fixing up a house that
   the surveyor described as "not so much neglected as abandoned....."
   
    Jim Schweizer
    
   Jim is currently a consultant in web site administration and design.
   He is the author of an on-line textbook about computer and Internet
   use and an instructor of English at several universities in western
   Japan. His main hobby is being the Webmaster for the Tokyo Linux Users
   Group.
   
    Alex Vrenios
    
   Alex is a Lead Software Engineer at Motorola and has his own
   consulting business. He is always taking some sort of class. He just
   finished the class work toward a Ph.D. in computer science, but only
   time will tell if it goes any further. His wife, Diane, is certainly
   his best friend and biggest fan. He enjoys his two Schnauzers, Brutus
   and Cleo, and his dozens of African cichlids, too. He is a licensed
   amateur radio operator, as is Diane, and they spend more than a few
   nights together observing the skies through their 5-inch telescope.
   They like to get out and stay active, to enjoy life together.
   
    Colin C. Wilson
    
   Colin has been programming and administering UNIX systems since 1985.
   He has been happily playing with Linux for the past four years while
   employed at the University of Washington, developing DNA analysis
   software and keeping the systems up at the Human Genome Center.
   
    Dan York
    
   Dan York is a technical instructor and author who has been working
   with UNIX systems and the Internet for 13 years. He will, under
   questioning, also confess to being a Microsoft Certified System
   Engineer and Microsoft Certified Trainer. He currently teaches Windows
   NT and Microsoft BackOffice classes but would really like to be
   teaching people how to use Linux!
     _________________________________________________________________
   
                                 Not Linux
     _________________________________________________________________
   
   [INLINE] Thanks to all our authors, not just the ones above, but also
   those who wrote giving us their tips and tricks and making
   suggestions. Thanks also to our new mirror sites. And of course,
   thanks to Ellen Dahl for her help with News Bytes.
   
   About a month ago, my doctor diagnosed me as having diabetes. Since
   then, I have found I am becoming quite self-absorbed. I've had to go
   back to always thinking about what I am going to eat and when--a habit
   I had given up years ago. For a time, I've decided to become
   essentially vegan (though not fanatic about it--I ate one piece of
   bacon this morning). I'm quite amazed at the difference giving up meat
   and dairy products has made in my energy level. Of course, getting my
   blood sugar down has certainly been the best help in that area. At any
   rate, I'm feeling better than I have in at least 6 months if not
   longer, and that's good!
   
   I will be going to San Diego this weekend to visit my grandchildren
   there. Haven't seen them in quite a while, so I am looking forward to
   it.
   
   Have fun!
     _________________________________________________________________
   
   Marjorie L. Richardson
   Editor, Linux Gazette, gazette@ssc.com
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back 
     _________________________________________________________________
   
   Linux Gazette Issue 33, October 1998, http://www.linuxgazette.com
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
