
Master/Apprentice

A Forum For Questions About Smithing

HTML Validation

I want to make sure that my pages conform to the HTML standards, but I have a hard time figuring out what tags are in the different standards. Is there an easy way to do this?

There are several ways of going about this. If you are only worried about a couple of pages, then there are several on-line validation services that you can use. If it is just a chunk of code you are concerned about, then these sites have a box where you can enter suspect code to be checked. If you prefer to do the whole page, then you may also enter URLs of the pages you want checked.

The Weblint validation service is at
http://www.unipress.com/weblint/index.html#form.

HAL Computing has a site as well, at
http://www.halsoft.com/html-val-svc/index.html.

If you have an extensive site with lots of pages, or prefer to do your validation off-line, there are packages that allow that as well.

HAL HTMLChek Kit, a Perl script, can be retrieved from
http://www.halsoft.com/html-tk/.

Weblint, a Perl script, can be retrieved from
ftp://ftp.khoros.unm.edu/pub/perl/www/.

htmlcheck, an awk script, can be retrieved from
ftp://ftp.cs.buffalo.edu/pub/htmlchek/.

Webber, an MS-Windows HTML editor that also does validation, can be retrieved from
ftp://ftp.onramp.ca/csd/pub/.

Arena, an X-Windows HTML browser that flags invalid HTML documents as they are loaded, can be retrieved from
http://www.w3.org/hypertext/WWW/Arena/.

Variables in Scripts

I need to pass a variable from page to page in my scripts. Is there a way to do this?

It depends on what language your CGI scripts are written in. I don't have the space to go into a CGI tutorial here--articles in upcoming issues will cover this--but the basic idea is this:

Instead of having your pages as static HTML documents, you must ``wrap'' them in a CGI script. Let's say that you have a form that returns a custom page based on what the user enters. The custom page is generated by the script; it doesn't actually exist as an HTML document anywhere, but is created on the fly from the variables that the form passes on. This page can then pass those variables on to another page, as long as the pages are all generated by scripts. The key here is to use the HIDDEN input type to pass values from page to page, and to use the scripts to insert the values where they belong. Watch for more detailed explanations in subsequent issues of WEBsmith.
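As a rough sketch of the idea, here is what such a ``wrapped'' page might look like as a small Python CGI script. The field name "username" and the script name "next.cgi" are invented for illustration; any CGI language works the same way--the generated page simply embeds the value in a HIDDEN input so the next script receives it again:

```python
#!/usr/bin/env python
# Sketch: a CGI script that carries a variable forward
# to the next page via a HIDDEN form field.
# "username" and "next.cgi" are example names, not from the column.

def make_page(username):
    """Generate an HTML page that passes `username` on to next.cgi."""
    return ('Content-type: text/html\n'
            '\n'
            '<HTML><BODY>\n'
            '<P>Welcome back, %s.</P>\n'
            '<FORM METHOD="POST" ACTION="next.cgi">\n'
            '<INPUT TYPE="HIDDEN" NAME="username" VALUE="%s">\n'
            '<INPUT TYPE="SUBMIT" VALUE="Continue">\n'
            '</FORM>\n'
            '</BODY></HTML>\n' % (username, username))

if __name__ == '__main__':
    print(make_page('Liem'))
```

Each script in the chain reads the submitted form data, then re-emits the same values as HIDDEN fields in the page it generates.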

Where did they come from?

I want to know how people are finding my page. Is there something that will allow me to track this?

Sort of. It isn't a perfect solution, but there is an environment variable called HTTP_REFERER that contains information about the page that was accessed just previous to the current one. This variable does not always hold an accurate value; if the user accesses your page from a bookmark, or by using the ``back'' button in their browser, incorrect information--or none at all--may be passed.
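Reading the variable from a CGI script is straightforward; this small Python sketch (the "(none)" fallback string is my own choice) shows the idea of checking for a missing or empty value before trusting it:

```python
#!/usr/bin/env python
# Sketch: reporting the HTTP_REFERER CGI environment variable.
import os

def get_referer(environ):
    """Return the referring URL, or "(none)" when the browser
    sent nothing (bookmark, Back button, typed-in URL)."""
    return environ.get('HTTP_REFERER') or '(none)'

if __name__ == '__main__':
    print('Content-type: text/plain')
    print('')
    print('You came from: ' + get_referer(os.environ))
```

A script like this could append the value to a log file instead of printing it, giving you a rough picture of where visitors come from.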

Tables in Lynx?

I really like the way tables make my pages look in Netscape, but when I try to view them using Lynx, or some other text-based browser, they look like garbage. Is there a good way of avoiding this?

In a word, no. You can get a rough approximation by adding <BR> tags in certain places, but this does not help with spacing:

<TABLE BORDER=2>
<!--Starts the table-->
<TR><TH>Names</TH><TH>Hobbies<BR></TH></TR>
<!--Initial row contains the column
headers-->
<TR><TD><A HREF="Employees/liem/">Liem
Bahneman</A></TD><TD>Enjoys eating
crow.<BR></TD></TR>
<!--Row two ends its last cell with a <BR>
tag; this keeps Lynx from cramming
everything on one line-->
<TR><TD><A
HREF="Employees/david.html">David
Conlin-Allen</A></TD><TD>Enjoys feeding crow to Liem.<BR></TD></TR>
<!--Same thing as row two-->
</TABLE>

The text in Lynx will still not always line up correctly, but if nothing else, this makes your table readable from a text-based browser while still looking good in Netscape.
