Tag: personal
HTML::FillInForm
The CGI::Application::ValidateRM module utilizes HTML::FillInForm to fill in values in the form when portions did not pass validation. Basically, it utilizes HTML::Parser to go through the form, find the elements, and match them to values. It's used because the assumption is that you've built your form into an HTML::Template, and that way you don't need to put program logic into the form.
It seems another good candidate for using FillInForm would be populating a form with values grabbed from a database… I should look into that as well!
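Just to capture the idea before I forget it, here's a rough sketch of what that might look like (the schema, connection info, and form file are all made up, and I haven't actually tried this yet):

use strict;
use DBI;
use HTML::FillInForm;

# Grab a row from the database (hypothetical table and columns).
my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'secret', { RaiseError => 1 });
my $row = $dbh->selectrow_hashref(
    'SELECT name, email FROM users WHERE id = ?', undef, 42
);

# fill() parses the form and populates any fields whose names
# match keys in the fdat hashref.
my $filled = HTML::FillInForm->new->fill(
    file => 'userform.html',
    fdat => $row,
);
print $filled;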
HTML::Template notes
I've used HTML::Template
a little, mainly in the Secret Santa project I did
this past Christmas for my wife's family. One thing I disliked was using the
normal syntax: <TMPL_VAR NAME=IMAGE_SRC>
— it made looking at it difficult (it
wasn't always easy to tell what was an HTML tag, what was plain text, and what
was HTML::Template
stuff), and it made it impossible to validate my pages
before they had data.
Fortunately, there's an alternate syntax: wrap the tag in an HTML comment. <!-- TMPL_VAR NAME=IMAGE_SRC --> does the job. It uses more characters, true, but it gets highlighted differently than HTML tags as well, and that's worth a lot.
And why do I have to say "NAME=" every time? That gets annoying. As it turns out, I can simply say <!-- TMPL_VAR IMAGE_SRC -->, and that, too, will get the job done.
Finally, what about those times when I want to define a template, but have it
broken into parts, too? Basically, I want HTML::Template
to behave a little
like SSI. No worries; there's a TMPL_INCLUDE
tag that can do this: <!-- TMPL_INCLUDE NAME="filename.tmpl" -->
.
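Putting all three discoveries together, here's a small self-contained example (the variable names and values are invented; in real use the template would live in its own file, which is also where TMPL_INCLUDE resolves from):

use strict;
use HTML::Template;

# Comment-style tags validate as plain HTML, and NAME= is optional.
my $tmpl_text = <<'HTML';
<p>Merry Christmas, <!-- TMPL_VAR NAME=RECIPIENT -->!</p>
<img src="<!-- TMPL_VAR IMAGE_SRC -->" alt="gift">
HTML

my $template = HTML::Template->new(scalarref => \$tmpl_text);
$template->param(RECIPIENT => 'Alice', IMAGE_SRC => '/images/gift.png');
print $template->output;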
CGI::Application::ValidateRM and Data::FormValidator
I've been reading a lot of posts lately on the CGI::App
mailing list about using
CGI::Application::ValidateRM
(RM == Run Mode); I finally went and checked it
out.
CGI::App::ValRM
uses Data::FormValidator
in order to do its magic.
Interestingly, D::FV is built much like how I've built our formHandlers library at work — you specify a list of required fields, and a list of fields that need to be validated against criteria, then provide the criteria. It goes exactly how I would have done our libraries had we been working in Perl — supplying the constraint as a regexp or anonymous sub in a hashref for the field.
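To capture the idea, here's a minimal sketch of such a profile (the field names and patterns are invented):

use strict;
use Data::FormValidator;

my %form = ( name => 'Alice', email => 'alice@example.com', zip => '12345' );

# Required fields, plus per-field constraints; regexps here,
# but anonymous subs work in the same slot.
my %profile = (
    required    => [qw/name email/],
    optional    => [qw/zip/],
    constraints => {
        email => qr/^[^@\s]+@[^@\s]+\.\w+$/,
        zip   => qr/^\d{5}$/,
    },
);

my $results = Data::FormValidator->check(\%form, \%profile);
print $results->has_missing || $results->has_invalid
    ? "form has problems\n"
    : "form is valid\n";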
Anyways, it looks like the combination of CGI::App::ValRM
with CGI::App
could greatly simplify any form validations I need to do on the site, which will
in turn make me very happy!
Design Ideas
I had some success last night with the My::Portal
CGI::Application
superclass I'm building — I actually got it working with CGI::Wiki::Simple
(after I debugged the latter to fix some delegation issues!). Now that I know the "proof-of-concept" works, I'm ready to start in on some other issues.
The first issue is: how can I specify different directories for different applications to search for templates, while retaining the default directory so that the superclass can build the final page? I could always keep all templates in a single directory and prefix them, but that seems inelegant, somehow. I'll need to explore how HTML::Template integration works with CGI::App.
Second, and closely related: how do I want it to look, in the end? I could see keeping the design we have — it's clean, simple, and yet somehow functionally elegant. Okay, I'm exaggerating — it's your standard three-column with header and footer. But it goes with the idea of blocks of content. I need to think about that.
I saw a design idea for a WikiWikiWeb today, though, that totally changed my ideas of how a Wiki should look. I hadn't been to Wikipedia for some time, but a Google link to Gaston Julia showed up on Slashdot as it shut down a site in Australia, and so I visited it. I like the new design — it separates out the common links needed into a nice left menu, and puts a subset of that at the top and bottom of the main column as well, using nice borders to visually separate things. I much prefer it to PhpWiki's default style, as well as to anything else I've really seen so far relating to Wiki layout.
Fun with Find
I've had occasion to need to grab a specific set of files from a large directory — most recently, I needed to grab some specific access logs from our Apache logfiles at work.
Enter find
.
I needed to get all files newer than a specific date, and with the pattern 'sitename-access_log.timestamp.gz'. I then needed to tar up these files and grab them for processing. So, here's what I did:
- The -newer filename test tells find to locate files newer than filename.
- The -regex flag tells find to locate files matching the regular expression. The regex syntax find uses is a little strange, however, and didn't follow many conventions I know; for one thing, it's assumed that the pattern you write will match against the entire path, and not just a portion of it. What I ended up using was -regex '.*access_log.*gz', and that worked.
- The -printf flag tells find to format the printing. This is useful when using the output of find in another program. For instance, tar likes a list of filenames… so I used -printf "%p ", which separated each filename with a space.
I then backticked my full find statement and used it as the final argument to a tar command; voila! instant tar file with the files I need!
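Reconstructed from memory, the whole thing looked something like this (the log path and the cutoff marker file are stand-ins for the real ones):

# create a marker file with the cutoff date, then tar up everything newer
touch -t 200401010000 /tmp/cutoff
tar czf logs.tar.gz `find /var/log/httpd -newer /tmp/cutoff -regex '.*access_log.*gz' -printf "%p "`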
conditional use in perl
I've been struggling with how to use modules at runtime instead of compile time (I even wrote about this once before). I finally figured it out:
my $module = "ROX::Filer";
eval "use $module";                       # compiles the use statement at run time
die "couldn't load module: $@\n" if $@;   # eval errors land in $@, not $!
Now I just need to figure out how to create objects from dynamic module names…!
Update: Creating objects from dynamic names is as easy as dynamically loading the module at run-time:
my $obj = $module->new();
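Putting the two steps together into something runnable (IO::File is just a stand-in here for whatever class gets picked at run time):

use strict;

my $module = "IO::File";     # imagine this chosen from config or user input
eval "use $module";
die "couldn't load module: $@\n" if $@;

# Perl resolves class names in method calls at run time, so this just works:
my $obj = $module->new();
print "got a ", ref($obj), " object\n";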
Where's that module?
One continual pain for me with perl is when I need to try to find the location
of a specific module on my filesystem so that I can examine it myself; I end up
first having to find out what my @INC
path is, then having to dig through it
until I find the module. Fortunately, I'm not the only one; somebody
posted a solution to this problem on
Perl Monks:
Updated: The original listing presented didn't work! The following one, garnered from a comment to the original PM post, does, and is what I'm now using.
#!/usr/bin/perl -w
# whichpm: print the on-disk paths of the modules named on the command line.
use strict;
use File::Spec::Functions qw/catfile/;

# Load each named module, keeping the ones that succeed; on failure, trim
# the noisy "(@INC contains: ...)" portion of the error and warn.
my @loaded = grep {
    eval "require $_";
    !$@ ? 1 : ($@ =~ s/\(\@INC contains: \Q@INC\E\)//, warn("Failed loading $_: $@"), 0);
} @ARGV;

# Turn Foo::Bar into the Foo/Bar.pm form used as keys in %INC, then print
# the paths perl recorded when it loaded the modules.
my @pm = map { catfile(split /::/) . (/\.pm\z/ ? '' : '.pm') } @loaded;
print "@INC{@pm}\n";
__END__
=pod
=head1 NAME
whichpm - lists real paths of specified modules
=head1 SYNOPSIS
editor `whichpm Bar`
=head1 DESCRIPTION
Analogous to the UN*X command which.
=cut
Just place it in your $PATH
and let 'er rip!
Class::DBI
I was reading a thread on the cgiapp mailing list today from several of the core
developers about developing a book on CGI::Application
. In it, several mentioned
that it might/should center around CGI::App
and a handful of oft-used modules.
One of those modules is
Class::DBI.
I took a gander at Class::DBI
over at CPAN, and it looks absolutely amazing,
and at the same time perhaps too abstract. Basically, you create a number of packages, one for each table you'll be using in your application, and one to establish your basic connection. Then each table package builds on that connection package and defines a number of properties: the name of the table, the columns you'll be using, and the relations it has to other tables (has_a(col_name => 'Package::Name'), has_many(col_name => 'Package::Name'), might_have(col_name => 'Package::Name'), etc.).
Then you use the module/packages you need in your script, and you can then use object-oriented notation to do things like insert rows, update rows, search a table, select rows, etc. And it looks fairly natural.
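Here's roughly what that looks like, patterned after the Class::DBI synopsis (the database, tables, and columns are all invented, and the tables are assumed to already exist):

package Music::DBI;
use base 'Class::DBI';
__PACKAGE__->connection('dbi:SQLite:dbname=music.db', '', '');

package Music::Artist;
use base 'Music::DBI';
__PACKAGE__->table('artist');
__PACKAGE__->columns(All => qw/artistid name/);
__PACKAGE__->has_many(cds => 'Music::CD');

package Music::CD;
use base 'Music::DBI';
__PACKAGE__->table('cd');
__PACKAGE__->columns(All => qw/cdid artistid title/);
__PACKAGE__->has_a(artistid => 'Music::Artist');

package main;

# OO notation instead of raw SQL:
my $artist = Music::Artist->create({ artistid => 1, name => 'U2' });
Music::CD->create({ cdid => 1, artistid => 1, title => 'October' });
my @cds = Music::CD->search(artistid => 1);
print $_->title, "\n" for @cds;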
I like the idea of data abstraction like this. I see a couple issues, however:
- I don't like the idea of one package per table; that gets so abstract as to make development come to a stand-still, especially during initial development. However, once development is sufficiently advanced, I could see doing this, particularly for large projects; it could vastly simplify many regular DBI calls.
- I like using SQL. If I need to debug why something isn't working when I interact with the database, I want to have absolute control over the language. Abstracting the SQL means I don't have that fine-grained control that helps me debug.
So, for now, I'll stick with straight DBI…. but this is an interesting avenue to explore.
Ctrl-S and Ctrl-Q in *nix systems
I just ran into this not long ago, and wish I'd discovered it years ago. Basically, Ctrl-S suspends terminal output (it's the XOFF half of the terminal's flow control), while Ctrl-Q (XON) resumes it. This is useful when in g/vim or screen and you manage to lock up your application because you accidentally hit Ctrl-S when reaching for another key combo.
use autouse ... or not
Due to my cursory reading in the Perl Cookbook, 2nd Edition, earlier this
week, I've been investigating the use autouse
pragma, to see if it will
indeed solve my issue of wanting to use different modules based on the current
situation. Unfortunately, I cannot find any documentation on it in perldoc
.
I remember seeing something about wrapping this stuff into a BEGIN
block, but
that would require knowing certain information immediately, and I might need
the code to work through some steps before getting there.
Fortunately, this node just appeared on Perl Monks today, and I got to see other ways of doing it:
- The if module lets you do something like use if $type eq 'x', "Some::Module"; However, $type must be known at compile time (i.e., it's based on system info or on @ARGV); this probably wouldn't work in a web-based application.
- Use require and import instead: if ($type eq 'x') { require Some::Module; Some::Module->import if Some::Module->can("import"); } If your module doesn't export anything, you can even omit the call to import. (There's a runnable sketch of this pattern after the list.)
- Use an eval: if ($type eq 'x') { eval "use Some::Module"; } This gets around the import problem, but could possibly run into other compile-time issues.
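Here's the require-and-import option as a complete script; Text::Wrap and Text::Tabs are just stand-ins for whatever modules the situation actually calls for:

use strict;
use warnings;

# Decide at run time which module we need; neither is loaded at compile time.
my $type   = $ARGV[0] || 'wrap';
my $module = $type eq 'tabs' ? 'Text::Tabs' : 'Text::Wrap';

# require wants Foo/Bar.pm rather than Foo::Bar when given a string.
(my $file = $module) =~ s{::}{/}g;
require "$file.pm";
$module->import if $module->can('import');

print "loaded $module\n";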
So, basically, I already had the tools to do the job; just needed to examine the problem more.