Search results

From the blog

Posted 2012-06-24
Automatic deployment with git and gitolite

I read a post recently by Sean Coates about deploy on push. The concept is nothing new: you set up a hook that listens for commits on specific branches or tags, and it then deploys your site from that revision.

Except I'd not done it myself. This is how I got there.
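
The post itself walks through the gitolite specifics; as a rough idea of what such a hook does, here is a hypothetical post-receive hook sketched in PHP (the branch name, paths, and layout are made up for illustration, not taken from the post):

#!/usr/bin/php
<?php
// Hypothetical sketch of a post-receive hook: deploy whenever the configured
// branch is pushed. Branch, paths, and repository layout are assumptions.
$deployRef = 'refs/heads/master';
$workTree  = '/var/www/example.com';

// git feeds the hook lines of "<old-rev> <new-rev> <ref-name>" on stdin.
while (false !== ($line = fgets(STDIN))) {
    list($oldRev, $newRev, $refName) = explode(' ', trim($line), 3);

    if ($refName !== $deployRef) {
        continue; // ignore pushes to other branches and tags
    }

    // Check out the pushed revision into the web root.
    $command = sprintf(
        'git --work-tree=%s checkout -f %s 2>&1',
        escapeshellarg($workTree),
        escapeshellarg($newRev)
    );
    passthru($command, $exitCode);
    exit($exitCode);
}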

From the blog

Posted 2012-01-11
Why Conventions Matter

When I started teaching myself scripting languages, I started with Perl. One Perl motto is "TMTOWTDI" — "There's More Than One Way To Do It," and pronounced "tim-toady." The idea is that there's likely multiple ways to accomplish the very same thing, and the culture of the language encourages finding novel ways to do things.

I've seen this principle used everywhere and in just about every programming situation possible, applied to logical operations, naming conventions, formatting, and even project structure. Everyone has an opinion on these topics, and given free rein to implement as they see fit, it's rare that two developers will come up with the same conventions.

TMTOWTDI is an incredibly freeing and egalitarian principle.

Over the years, however, my love for TMTOWTDI has diminished some. Freeing as it is, it is also a driving force behind the need for coding standards and conventions — because when everyone does things their own way, projects quickly become hard to maintain. Each person ends up reformatting code to their own standards simply so they can read it and follow its flow.

Additionally, TMTOWTDI can actually be a foe of simple, elegant solutions.

Why do I claim this?

From the blog

Posted 2009-01-02
Seven Things - Tagged by Keith Casey

I'm really not sure I understand these "seven things" or "tagged" memes, but I'm going to give it a shot, after Keith Casey did a drive-by tagging of me on New Year's Eve.

So, without further ado, seven things you may not know about me…

From the blog

Posted 2008-12-10
Mumbles irssi integration

I've been using IRC regularly for the past six to nine months, in large part due to the growing ZF community on the Freenode #zftalk channel (unfortunately, I simply don't have time to be in that particular channel any more, but you can generally find me in #zftalk.dev), but also to keep in contact with other peers, friends, and colleagues.

One difficulty, however, is keeping productivity high while staying on IRC. To me, the ultimate client would provide me notifications when somebody mentions my name or a watch word — allowing me to read the channel at my leisure, yet still respond to people in a timely fashion.

From the blog

Posted 2008-10-21
Vimgrep and Vim Project

Chris Hartjes today was on a quest for a "find in project" feature for Vim. "Find in Project" was a feature of Textmate that he'd grown accustomed to and was having trouble finding an equivalent for.

The funny thing is that Textmate is a newcomer, and, of course, vim has had such a feature for years. The thing to remember with vim, of course, is its unix roots; typically if you know the unix command for doing something, you can find what you need in vim. In this case, the key is the vimgrep plugin, which ships in the standard vim distribution.

From the blog

Posted 2006-09-19
Vim 7 code completion

I may work at Zend, but I've never been a fan of IDEs. They simply don't suit my programming style. I can usually keep track of file locations in my head pretty easily, and what I really need is a blank slate on which I can write — one that doesn't consume resources that could be better used running web servers and other programs. Syntax highlighting and good indentation are important, but you can get these from good, minimal text editors very easily. Vim is my editor of choice.

I will admit, though, that one area where I've had IDE-envy is code completion. I often find myself doing quick lookups to php.net or perldoc to determine the order of arguments to a function or method call, or checking for the expected return value. It usually doesn't take long, though, so I just live with it.

Today, however, cruising through the blogosphere, I came across an article showcasing some new features of Vim 7.0, and discovered Vim 7's code completion.

Basically, while in insert mode, you can type <C-x> <C-o> to have vim attempt to autocomplete the current keyword. If more than one possibility exists, it shows a dropdown, and you can use your arrow keys to highlight the keyword that you wish to use.

But it gets better! Not only does it do this kind of autocompletion, but it also opens a small 'scratch preview' pane showing the function/method signature — i.e., the expected arguments and return value!

I thought I had little need for IDEs before… now I have even less! Bram and the rest of the Vim team, my hat's off to you for more fine work!

From the blog

Posted 2006-04-28
Telcos are Attacking the Internet

I generally try to stay out of politics on this blog, but this time something has to be said, as it affects anyone who uses the internet, at least in the US.

Basically, a number of telcos and cable providers are talking about charging internet content providers — the places you browse to on the internet, places like Google, Yahoo!, Amazon, etc. — fees to ensure bandwidth to their sites. Their argument is that these content providers are getting a 'free ride' on their lines, and generating a lot of traffic themselves, and should thus be paying for the cost of bandwidth.

This is patently ridiculous. Content providers already have to pay for their bandwidth — they, too, have ISPs or agreements with telcos in place, either explicitly or via their hosting providers. Sure, some of them, particularly search engines, send out robots in order to index or find content, but, again, they're paying for the bandwidth those robots generate. Additionally, people using the internet are typically paying for bandwidth as well, through their relationship with their ISP. What this amounts to is the telcos getting paid not just by each person to whom they provide internet access, but by every end point on the internet, at least those within the US.

What this is really about is telcos wanting more money, and wanting to push their own content. As an example, let's say your ISP is AOL. AOL is part of Time Warner, and thus has ties to those media sources. Now, those media sources may put pressure on AOL to reduce bandwidth to sites operated by ABC, CBS, NBC, FOX, Disney, PBS, etc. This might mean that your kid can no longer visit the Sesame Street website reliably, because AOL has reduced the amount of bandwidth allowed to that service — but any media site in the Time Warner family would get optimal access, so your kid could still get to Cartoon Network. Not to slam Cartoon Network (I love it), but would you rather have your kid visiting cartoonnetwork.com or pbskids.org? Basically, content providers would not need to compete based on the value of their content, but on who they can get to subscribe to their service.

Here's another idea: your ISP is MSN. You want to use Google… but MSN has limited the bandwidth to Google because it's a competitor, and won't accept any amount of money to increase that bandwidth. They do the same with Yahoo! So, now you're limited to MSN search, because that's the only one that responds reliably — regardless of whether or not you like their search results. By doing so, they've just artificially inflated the value of their search engine — without needing to compete based on merit.

Additionally, let's say Barnes and Noble has paid MSN to ensure good bandwidth, but part of that agreement is a non-compete clause. Now you find your connections to Amazon timing out, meaning that you can't even see which book provider has the better price on the book you want; you're stuck looking and buying from B&N.

Now, let's look at something a little closer to home for those of us developing web applications. There have been a number of success stories the last few years: MySpace, Digg, and Flickr all come to mind. Would these endeavors have been as successful had they needed to pay multiple times for bandwidth — once to their ISP, and once to each telco charging content providers? Indeed, some of these are still free services — how would they ever have been able to pay the extra amounts to the telcos in the first place?

So, basically, the only winners here are the telcos.

Considering how ludicrous this scheme is, one must be thinking, isn't the US Government going to step in and regulate against such behaviour? The answer, sadly, is no. The GOP doesn't like regulation, and so they want market forces to decide. Sadly, what this will likely do is force a number of content providers to offshore their internet operations — which is likely to have some pretty negative effects on the economy.

The decision isn't final — efforts can still be made to prevent it (the above link references a Senate committee meeting; there's been no vote on it). Call your representatives today and give them an earful. Tell them it's not just about regulation of the industry, but about fair competition in the market. Allowing the telcos to extort money from content providers will only reduce the US' economic chances in the world, and stifle innovation and choice.

From the blog

Posted 2006-03-27
PHP error reporting for Perl users

On perlmonks today, a user needed to maintain a PHP app and wanted to know what the PHP equivalent of perl -wc script.pl was — specifically, they wanted to know how to run a PHP script from the commandline and have it display any warnings (à la perl's strict and warnings pragmas).

Unfortunately, there's not as simple a way to do this in PHP as in perl. Basically, you need to do the following:

  • To display errors:

    • In your php.ini file, set display_errors = On, or
    • In your script, add the line ini_set('display_errors', true);
  • To show notices, warnings, errors, deprecation notices:

    • In your php.ini file, set error_reporting = E_ALL | E_STRICT, or
    • In your script, add the line error_reporting(E_ALL | E_STRICT);

Alternatively, you can create a file with the lines:

<?php
error_reporting(E_ALL | E_STRICT);
ini_set('display_errors', true);

and then set the php.ini setting auto_prepend_file to the path to that file.

NOTE: do not do any of the above on a production system! PHP's error messages often reveal a lot about your applications, including file layout and potential vectors of attack. Turn display_errors off on production machines, set your error_reporting somewhat lower, and enable log_errors so errors are written to a file and you can keep track of what's going on in your production system.
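
Conversely, a production-oriented prepend file might look something like this sketch (the log path is just an example):

<?php
// Production sketch: hide errors from visitors, log them to a file instead.
error_reporting(E_ALL & ~E_NOTICE);
ini_set('display_errors', false);
ini_set('log_errors', true);
ini_set('error_log', '/var/log/php_errors.log'); // example path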

The second part of the question was how to run a PHP script on the command line. This is incredibly simple: php myscript.php. No different than any other scripting language.

You can get some good information by using some of the switches, though. -l turns the PHP interpreter into a linter, and can let you know if your code is well-formed (which doesn't necessarily preclude runtime errors). -f will actually parse and execute the script, which can give you even more information. I typically bind these actions to keys in vim so I can check my work as I go.

If you plan on running your code solely on the commandline, add a shebang to the first line of your script: #!/path/to/php. Then make the script executable, and you're good to go. This is handy for cronjobs, or batch processing scripts.
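
For example, a trivial commandline script might look like this sketch (adjust the shebang to wherever your PHP binary lives):

#!/usr/bin/php
<?php
// Print each commandline argument, one per line.
foreach (array_slice($argv, 1) as $i => $arg) {
    printf("%d: %s\n", $i + 1, $arg);
}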

All of this information is readily available in the PHP manual, and the commandline options are always available by passing the --help switch to the PHP executable. So, start testing your scripts already!

From the blog

Posted 2004-11-17
Sign of a Geek

It's now been confirmed: I'm a geek.

Okay, so that probably comes as no shocker to those of you who know me, but it's the little things that make me realize it myself.

I've been frequenting Perl Monks for a couple of years now, mainly to garner ideas and code to help me with my personal or work projects. I rarely post comments, and I've only once submitted a question to the site. However, I do frequent the site regularly, and the few comments I've put in — generally regarding usage of CGI::Application — have been typically well-moderated.

Well, yesterday I made a comment to a user asking about editors to use with perl. I was incensed by a remark he made about VIM not having the features he needed. Now, as I said in my comment, I've used VIM on a daily basis for over two years, and I'm still discovering new features — and I've used all of the features he was looking for.

This is where I discovered I'm a geek: my comment made it into the Daily Best for today, peaking around number 5. The fact that that made my day indicates to me that I must be a geek.

Oh — and VIM rules!

From the blog

Posted 2004-10-12
PHP_SELF versus SCRIPT_NAME

I've standardized my PHP programming to use the environment variable SCRIPT_NAME when I want my script to refer to itself in links and form actions. I've known that PHP_SELF has the same information, but I was more familiar with the name SCRIPT_NAME from using it in perl, and liked the feel of it more as it seems to describe the resource better (PHP_SELF could stand for the path to the PHP executable if I were to go by the name only).

However, I just noticed a post on the php.general newsgroup where somebody asked what the difference was between them. Semantically, there isn't any; they should contain the same information. However, historically and technically speaking, there is. SCRIPT_NAME is defined in the CGI 1.1 specification, and is thus a standard. However, not all web servers actually implement it, and thus it isn't necessarily portable. PHP_SELF, on the other hand, is implemented directly by PHP, and as long as you're programming in PHP, will always be present.
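
As a quick illustration (just a sketch — the form itself is hypothetical), both values live in $_SERVER and can be used wherever a script needs to refer to itself:

<?php
// SCRIPT_NAME comes from the CGI 1.1 spec; PHP_SELF is populated by PHP itself.
$self = isset($_SERVER['SCRIPT_NAME'])
    ? $_SERVER['SCRIPT_NAME']
    : $_SERVER['PHP_SELF'];
?>
<form action="<?php echo htmlspecialchars($self); ?>" method="post">
    <input type="text" name="q" />
    <input type="submit" value="Go" />
</form>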

Guess I have some grep and sed in my future as I change a bunch of scripts…

From the blog

Posted 2004-09-21
Cgiapp Roadmap

I've had a few people contact me indicating interest in Cgiapp, and I've noticed a number of subscribers to the freshmeat project I've set up. In addition, we're using the library extensively at the National Gardening Association in developing our new site (the current site is using a mixture of ASP and Tango, with several newer applications using PHP). I've also been monitoring the CGI::Application mailing list. As a result of all this activity, I've decided I need to develop a roadmap for Cgiapp.

Currently, planned changes include:

  • Version 1.x series:

    • Adding a Smarty registration for stripslashes (the Smarty "function" call will be sslashes).
    • param() bugfix: currently, calling param() with no arguments simply gives you a list of parameters registered with the method, but not their values; this will be fixed.
    • error_mode() method. The CGI::Application ML brought up and implemented the idea of an error_mode() method to register an error_mode with the object (similar to run_modes()). While non-essential, it would offer a standard, built-in hook for error handling.
    • $PATH_INFO traversing. Again, on the CGI::App ML, a request was brought up for built-in support for using $PATH_INFO to determine the run mode. Basically, you would pass a parameter indicating which location in the $PATH_INFO string holds the run mode.
    • DocBook tutorials. I feel that too much information is given in the class-level documentation, and that usage tutorials need to be written. Since I'm documenting with PhpDoc and targeting PEAR, moving tutorials into DocBook is a logical step.
  • Version 2.x series:

    Yes, a Cgiapp2 is in the future. There are a few planned changes that either (a) require PHP5 or (b) introduce API changes. In keeping with PEAR guidelines, I'll rename the module Cgiapp2 so as not to break applications designed for Cgiapp.

    Changes expected include:

    • Inherit from PEAR. This will allow for some built-in error handling, among other things. I suspect that this will tie in with the error_mode() method, and may also deprecate croak() and carp().

    • Changes to tmpl_path() and load_tmpl(). In the perl version, you would instantiate a template using load_tmpl(), assign your variables to it, and then do your fetch() on it. So, this:

      $this->tmpl_assign('var1', 'val1');
      $body = $this->load_tmpl('template.html');
      

      Becomes this:

      $tmpl = $this->load_tmpl();
      $tmpl->assign('var1', 'val1');
      $body = $tmpl->fetch('template.html');
      

      OR

      $tmpl = $this->load_tmpl('template.html');
      $tmpl->assign('var1', 'val1');
      $body = $tmpl->fetch();
      

      (Both examples assume use of Smarty.) I want to revert to this behaviour for several reasons:

      • Portability with perl. This is one area in which the PHP and perl versions differ greatly; going to the perl way makes porting classes between the two languages simpler.

      • Decoupling. The current set of template methods create an object as a parameter of the application object — which is fine, unless the template object instantiator returns an object of a different kind.

        Cons:

        • Smarty can use the same object to fill multiple templates, and the current methods make use of this. By assigning the template object locally to each method, this could be lost. HOWEVER… an easy work-around would be for load_tmpl() to create the object and store it in a parameter; subsequent calls would return the same object reference. The difficulty then would be if load_tmpl() assumed a template name would be passed. However, even in CGI::App, you decide on a template engine and design for that engine; there is never an assumption that template engines should be swappable.

        • Existing Cgiapp1 applications would need to be rewritten.

    • Plugin Architecture: The CGI::App ML has produced a ::Plugin namespace that utilizes a common plugin architecture. The way it is done in perl is through some magic of namespaces and export routines… both of which are, notably, missing from PHP.

      However, I think I may know a workaround for this, if I use PHP5: the magic __call() overloader method.

      My idea is to have plugin classes register methods that should be accessible by a Cgiapp-based class under a special key in the $GLOBALS array. Then, the __call() method would check that key for registered methods; if one matches the method being requested, it is called (using call_user_func()), with the Cgiapp-based object passed as the first argument. Voilà! Instant plugins! (See the sketch following this list.)

      Why do this? A library of 'standard' plugins could then be created, such as:

      • A form validation plugin
      • Alternate template engines as plugins (instead of overriding the tmpl_* methods)
      • An authorization plugin

      Since the 'exported' methods would have access to the Cgiapp object, they could even register objects or parameters with it.
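
To make that idea a bit more concrete, here is a rough sketch of the mechanism. The names are all hypothetical — this is not the eventual Cgiapp2 API, just the shape of the __call() dispatch described above (PHP5):

<?php
// Hypothetical sketch of __call()-based plugins; not the actual Cgiapp2 API.
class Cgiapp_Sketch
{
    // Plugins register callbacks under a well-known key.
    public static function registerPlugin($method, $callback)
    {
        $GLOBALS['__CGIAPP_PLUGINS'][$method] = $callback;
    }

    // Unknown method calls are dispatched to registered plugins, with the
    // application object passed as the first argument.
    public function __call($method, $args)
    {
        if (isset($GLOBALS['__CGIAPP_PLUGINS'][$method])) {
            array_unshift($args, $this);
            return call_user_func_array($GLOBALS['__CGIAPP_PLUGINS'][$method], $args);
        }
        trigger_error("Call to undefined method '$method'", E_USER_ERROR);
    }
}

// A form-validation "plugin" might register itself like so:
function plugin_validate($app, $data)
{
    // ...inspect $data, stash error messages on $app, etc.
    return !empty($data);
}
Cgiapp_Sketch::registerPlugin('validate', 'plugin_validate');

$app = new Cgiapp_Sketch();
var_dump($app->validate(array('name' => 'value'))); // bool(true)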

If you have any requests or comments on the roadmap, please feel free to contact me.

From the blog

Posted 2004-03-30
Cgiapp: A PHP Class

After working on some OO classes yesterday for an application backend I'm developing for work, I decided I needed to create a BREAD class to make this simpler. You know, Browse-Read-Edit-Add-Delete.

At first, I figured I'd build off of what I'd done yesterday. But then I got to thinking (ah, thinking, my curse). I ran into the BREAD concept originally when investigating CGI::Application; a number of individuals had developed CGI::Apps that provided this functionality. I'd discarded them usually because they provided more functionality than I needed or because they introduced more complexity than I was willing to tackle right then.

But once my thoughts had gone to BREAD and CGI::App, I started thinking how nice it would be to have CGI::Application for PHP. And then I thought, why not? What prevents me from porting it? I have the source…

So, today I stayed home with Maeve, who, on the tail end of an illness, evidently ran herself down at daycare yesterday, and she spent most of the day sleeping. While she was resting, I sat down with a printout of the non-POD code of CGI::App and hammered out what I needed to do. Then, when she fell asleep for a nap, I typed it all out and started testing. And, I'm proud to say, it works. For an example, visit my development site to see a very simple, templated application in action.

From the blog

Posted 2004-03-28
POD for PHP

I was lamenting at work the other day that now that I've discovered OO and templating with PHP, the only major feature missing for me is a way to easily document my programs. I'm a big fan of perl's POD, and use it fairly extensively, even for simple scripts — it's a way to provide a quick manual without needing to worry too much about how to format it.

So, it hit me on the way home Friday night: what prevents me from using POD in multiline comments of PHP scripts? I thought I'd give it a try when I got home.

First I googled for 'POD for PHP', and found a link to perlmongers where somebody recounted seeing that exact thing done, and how nicely it worked.

Then I tried it… and it indeed worked. So, basically, I've got all the tools I love from perl in PHP, one of which is borrowed directly from the language!

From the blog

Posted 2004-03-28
Scrap that. We're gonna' use PHP

I've been researching and coding for a couple months now with the decision that I'd rewrite the family website/portal using mod_perl with CGI::Application. I still like the idea, but a couple things recently have made me rethink it.

For starters, the perl DBI is a bit of a pain to program. At work, I've become very accustomed to using PEAR's DB library, and while it's in many ways derived from perl's DBI, it's much simpler to use.

Then there's the whole HTML::Template debacle. There are several ways to write the templates, but they don't all work in all situations, and they seem a bit limited. We've started using PHP's Smarty at work, and it's much more intuitive, a wee bit more consistent, and almost infinitely more extendable. I could go the Template::Toolkit route for perl, but that's almost like learning another whole language.

Then, there's the way objects work in perl versus PHP. I've discovered that PHP objects are very easy and very extendable. I wouldn't have found them half as easy, however, if I hadn't already been doing object oriented programming in perl. One major difference, however, is how easy it is to create new attributes on the fly, and the syntax is much easier and cleaner.

Add to that the fact that if you want to dynamically require modules in perl, you have to go through some significant, often insurmountable, hoops. So you can't easily have dynamic objects of dynamically defined classes. In PHP, though, you can require_once or include_once at any time without even thinking.
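
For instance, the PHP side of that comparison is about this simple (a sketch; the class and file names are made up):

<?php
// Load a class definition chosen at runtime and instantiate it by name.
$class = 'Some_Module';                          // decided at runtime
$file  = str_replace('_', '/', $class) . '.php'; // hypothetical naming scheme

require_once $file;
$object = new $class();                          // dynamic instantiation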

The final straw, however, was when I did my first OO application in PHP this past week. I hammered it out in a matter of an hour or so. Then I rewrote it to incorporate Smarty in around an hour. And it all worked easily. Then I wrote a form-handling library in just over two hours that worked immediately — and it made it possible for me to write a several-screen application in about an hour, complete with form, form validation, and database calls. Doing the same with CGI::Application took me hours, if not days.

So, my idea is this: port CGI::Application to PHP. I love the concept of CGI::App — it's exactly how I want to program, and I think it's solid. However, by porting it to PHP, I automatically have session and cookie support, and database support is only a few lines of code away when I use PEAR; I'll add Smarty as the template toolkit of choice, but make it easy to override the template methods to utilize other template engines. I get a nice MVC-style application template, but one that makes developing quickie applications truly a snap.

This falls under the "right-tool-for-the-job" category; perl, while a wonderful language, and with a large tradition as a CGI language, was not developed for the web as PHP was. PHP just makes more sense in this instance. And I won't be abandoning perl by any stretch; I still use it daily at work and at home for solving any number of tasks from automated backups to checking server availability to keeping my ethernet connection alive. But I have real strengths as a PHP developer, and it would be a shame not to use those strengths with our home website.

From the blog

Posted 2004-02-05
HTML::FillInForm

The CGI::Application::ValidateRM module utilizes HTML::FillInForm to fill in values in the form if portions did not pass validation. Basically, it utilizes HTML::Parser to go through and find the form elements and match them to values. It's used because the assumption is that you've built your form into an HTML::Template, and that way you don't need to put program logic into the form.

Seems another good candidate for using FillInForm would be to populate a form with values grabbed from a database… I should look into that as well!

From the blog

Posted 2004-02-05
HTML::Template notes

I've used HTML::Template a little, mainly in the Secret Santa project I did this past Christmas for my wife's family. One thing I disliked was using the normal syntax: <TMPL_VAR NAME=IMAGE_SRC> — it made looking at it difficult (it wasn't always easy to tell what was an HTML tag, what was plain text, and what was HTML::Template stuff), and it made it impossible to validate my pages before they had data.

Fortunately, there's an alternate syntax: wrap the tag in HTML comments: <!-- TMPL_VAR NAME=IMAGE_SRC --> does the job. It uses more characters, true, but it also gets highlighted differently from HTML tags, and that's worth a lot.

And why do I have to say "NAME=" every time? That gets annoying. As it turns out, I can simply say <!-- TMPL_VAR IMAGE_SRC -->, and that, too, will get the job done.

Finally, what about those times when I want to define a template, but have it broken into parts, too? Basically, I want HTML::Template to behave a little like SSI. No worries; there's a TMPL_INCLUDE tag that can do this: <!-- TMPL_INCLUDE NAME="filename.tmpl" -->.

From the blog

Posted 2004-02-05
CGI::Application::ValidateRM and Data::FormValidator

I've been reading a lot of posts lately on the CGI::App mailing list about using CGI::Application::ValidateRM (RM == Run Mode); I finally went and checked it out.

CGI::App::ValRM uses Data::FormValidator in order to do its magic. Interestingly, D::FV is built much like how I've built our formHandlers library at work — you specify a list of required fields, and a list of fields that need to be validated against criteria, then provide the criteria. It's exactly how I would have done our libraries had we been working in perl — supplying the constraint as a regexp or anonymous sub in a hashref for the field.
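
In other words, a validation profile boils down to something like this (sketched in PHP rather than perl, with hypothetical names — this is neither Data::FormValidator nor our actual formHandlers code):

<?php
// Sketch of a profile-style validator: required fields plus per-field
// constraints given either as a regexp or as a callback.
function validate_form(array $data, array $required, array $constraints)
{
    $errors = array();

    foreach ($required as $field) {
        if (!isset($data[$field]) || $data[$field] === '') {
            $errors[$field] = 'missing';
        }
    }

    foreach ($constraints as $field => $constraint) {
        if (!isset($data[$field])) {
            continue;
        }
        $ok = is_callable($constraint)
            ? call_user_func($constraint, $data[$field])
            : (bool) preg_match($constraint, $data[$field]);
        if (!$ok) {
            $errors[$field] = 'invalid';
        }
    }

    return $errors;
}

$errors = validate_form(
    $_POST,
    array('email', 'zip'),
    array(
        'email' => '/^[^@\s]+@[^@\s]+$/',
        'zip'   => '/^\d{5}$/',
    )
);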

Anyways, it looks like the combination of CGI::App::ValRM with CGI::App could greatly simplify any form validations I need to do on the site, which will in turn make me very happy!

From the blog

Posted 2004-02-04
Design Ideas

I had some success last night with the My::Portal CGI::Application superclass I'm building — I actually got it working with CGI::Wiki::Simple (after I debugged the latter to fix some delegation issues!). Now that I know the "proof-of-concept" works, I'm ready to start in on some other issues.

The first issue is: how can I specify different directories for different applications to search for templates, while retaining the default directory so that the superclass can build the final page? I could always simply keep all templates in a single directory and simply prefix them, but that seems inelegant, somehow. I'll need to explore how HTML::Template integration works with CGI::App.

Second, and closely related: how do I want it to look, in the end? I could see keeping the design we have — it's clean, simple, and yet somehow functionally elegant. Okay, I'm exaggerating — it's your standard three-column with header and footer. But it goes with the idea of blocks of content. I need to think about that.

I saw a design idea for a WikiWikiWeb today, though, that totally changed my ideas of how a Wiki should look. I hadn't been to Wikipedia for some time, but a Google link to Gaston Julia showed up on Slashdot as it shut down a site in Australia, and so I visited it. I like the new design — it separates out the common links needed into a nice left menu, and puts a subset of that at the top and bottom of the main column as well, using nice borders to visually separate things. I much prefer it to PhpWiki's default style, as well as to anything else I've really seen so far relating to Wiki layout.

From the blog

Posted 2004-02-01
conditional use in perl

I've been struggling with how to use modules at runtime instead of compile time (I even wrote about this once before). I finally figured it out:

my $module = "ROX::Filer";
eval "use $module";
die "couldn't load module : $!n" if ($@);

Now I just need to figure out how to create objects from dynamic module names…!

Update: Creating objects from dynamic names is as easy as dynamically loading the module at run-time:

my $obj = $module->new();

From the blog

Posted 2004-01-27
Where's that module?

One continual pain for me with perl is when I need to try to find the location of a specific module on my filesystem so that I can examine it myself; I end up first having to find out what my @INC path is, then having to dig through it until I find the module. Fortunately, I'm not the only one; somebody posted a solution to this problem on Perl Monks:

Updated: The original listing presented didn't work! The following one, garnered from a comment to the original PM post, does, and is what I'm now using.

#!/usr/bin/perl -w
use strict;

use File::Spec::Functions qw/catfile/;

my @loaded = grep {
    eval "require $_";
    !$@ ? 1 : ($@ =~ s/\(\@INC contains: \Q@INC\E\)//, warn("Failed loading $_: $@"), 0);
} @ARGV;

my @pm = map { catfile(split '::') . (/\.pm\z/ ? '' : '.pm') } @loaded;

print "@INC{@pm}\n";
__END__

=pod

=head1 NAME

whichpm - lists real paths of specified modules

=head1 SYNOPSIS

  editor `whichpm Bar`

=head1 DESCRIPTION

Analogous to the UN*X command which.

=cut

Just place it in your $PATH and let 'er rip!

From the blog

Posted 2004-01-25
Class::DBI

I was reading a thread on the cgiapp mailing list today from several of the core developers about developing a book on CGI::Application. In it, several mentioned that it might/should center around CGI::App and a handful of oft-used modules. One of those modules is Class::DBI.

I took a gander at Class::DBI over at CPAN, and it looks absolutely amazing, and at the same time perhaps too abstract. Basically, you create a number of packages, one for each table you'll be using in your application, and one to establish your basic connection. Then, each package creates an object instance of the connection and defines a number of properties: the name of the table, the columns you'll be using, and the relations it has to other tables (has_a(col_name => 'Package::Name'), has_many(col_name => 'Package::Name'), might_have(col_name => 'Package::Name'), etc.).

Then you use the module/packages you need in your script, and you can then use object-oriented notation to do things like insert rows, update rows, search a table, select rows, etc. And it looks fairly natural.

I like the idea of data abstraction like this. I see a couple issues, however:

  1. I don't like the idea of one package per table; that gets so abstract as to make development come to a standstill, especially during initial development. However, once development is sufficiently advanced, I could see doing this, particularly for large projects; it could vastly simplify many regular DBI calls.
  2. I like using SQL. If I need to debug why something isn't working when I interact with the database, I want to have absolute control over the language. Abstracting the SQL means I don't have that fine-grained control that helps me debug.

So, for now, I'll stick with straight DBI…. but this is an interesting avenue to explore.

From the blog

Posted 2004-01-24
use autouse ... or not

Due to my cursory reading in the Perl Cookbook, 2nd Edition, earlier this week, I've been investigating the use autouse pragma, to see if it will indeed solve my issue of wanting to use different modules based on the current situation. Unfortunately, I cannot find any documentation on it in perldoc.

I remember seeing something about wrapping this stuff into a BEGIN block, but that would require knowing certain information immediately, and I might need the code to work through some steps before getting there.

Fortunately, this node just appeared on Perl Monks today, and I got to see other ways of doing it:

  • The if module lets you do something like use if $type eq 'x', "Some::Module"; However, $type must be known at compile time (i.e., it's based on system info or on @ARGV); this probably wouldn't work in a web-based application.
  • Use require and import instead: if ($type eq 'x') { require Some::Module; Some::Module->import() if Some::Module->can('import'); } If your module doesn't export anything, you can even omit the call to import.
  • Use an eval: if ($type eq 'x') { eval "use Some::Module"; } This gets around the import problem, but could possibly run into other compile time issues.

So, basically, I already had the tools to do the job; just needed to examine the problem more.

From the blog

Posted 2004-01-23
More CGI::App research... Try the manual!

So, I'm a bit of an idiot… it's been so long since I looked at CGI::App, and yet I felt I had such a grasp on it, that I overlooked the obvious step: look at the manual!

In particular, there's a whole series of methods that are used to tailor CGI::App to your particular needs, and these include cgiapp_init(), cgiapp_prerun(), and cgiapp_postrun().

  • cgiapp_init() is used to perform application-specific initialization behaviour, and is called immediately before the setup() method. It can be used to load settings from elsewhere; if defined only in a superclass from which other modules inherit, it then provides common settings for all of those modules.
  • cgiapp_prerun() is called immediately before the selected run mode. If defined only in your superclass, you could perform tasks such as authorization or even form validation there; this behaviour would then be standard across all your applications. (You can use the $self->prerun_mode('mode') call to override the selected run mode, for instance, thus allowing you to redirect to a different mode if a user isn't permitted there.) A rough sketch of this superclass idea appears below.
  • cgiapp_postrun() is called after the run mode has returned its output, but before HTTP headers have been generated or anything has been sent to the web browser. Again, if defined in a superclass, this means you could place the run-mode output in a specific location within a larger template, and even call other routines to fill in other parts of the main template. You could also check whether certain parameters were passed to the page and change the type of output you send back (XML, PDF, image, etc.), allowing you to have a common query element that changes the output type (e.g., a 'print' parameter that returns a PDF or a stripped-down template).

In addition, you could specify in the superclass that you're using CGI::Simple for the query object (using the cgiapp_get_query method), or you could rewrite the load_tmpl() method to use Template::Toolkit or some other templating system, etc.
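
As a rough illustration of the superclass idea — sketched in PHP for consistency with the other examples in these posts; the class and hook names simply mirror the perl ones and are hypothetical:

<?php
// Hypothetical sketch: a shared superclass whose prerun hook forces a 'login'
// run mode for unauthenticated users, so every subclass gets it for free.
abstract class My_AppBase
{
    protected $runMode = 'start';

    // Analogous to cgiapp_prerun(): runs just before the selected run mode.
    protected function prerun()
    {
        if (empty($_SESSION['user'])) {
            $this->runMode = 'login'; // analogous to $self->prerun_mode('login')
        }
    }

    public function run()
    {
        if (session_id() === '') {
            session_start();
        }
        $this->prerun();
        $method = 'mode_' . $this->runMode;
        echo $this->$method();
    }

    abstract protected function mode_start();

    protected function mode_login()
    {
        return '<p>Please log in first.</p>';
    }
}

class My_HelloApp extends My_AppBase
{
    protected function mode_start()
    {
        return '<p>Hello, ' . htmlspecialchars($_SESSION['user']) . '!</p>';
    }
}

$app = new My_HelloApp();
$app->run();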

Doesn't look so crazy anymore…

From the blog

Posted 2004-01-23
CGI::Application Research

I've been wanting to redevelop my home website for some time using CGI::Application. The last time I rewrote it from PHP to perl, I developed something that was basically a subset of the things CGI::App does, and those things weren't done nearly as well.

The problem I've been running into has to do with having sidebar content, and wanting to run basically a variety of applications. I want to have a WikiWikiWeb, a photo gallery, some mail forms, and an article database/blog; CGI::App-based modules for each of these all exist. But I want them all to utilize the same sidebar content, as well — and that sidebar content may vary based on the user.

My interest got sparked by this node on Perl Monks. The author tells of an acquaintance who goes by the rule that a CGI::App should have 10-12 states at most; more than that, and you need to either break it apart or rethink your design. And all CGI::Apps inherit from a common superclass, so that they share the same DB connections, templates, etc.

So, I've been investigating this problem. One node on PM notes that its author's ISP uses CGI::App with hundreds of run modes spread across many applications; they created a Control module for session management and access control that does use base 'CGI::Application'; each application then does use base 'Control', and they all automatically get that same session management and access control, as well as everything CGI::Application provides.

Another node mentions the same thing, but gives a little more detail. That author writes a module per application, each inheriting from a super class: UserManager.pm, Survey.pm, RSS.pm, Search.pm, etc. You create an API for that super class, and each CGI::App utilizes that API to do its work.

This also seems to be the idea behind CheesePizza, a CGI::App-based framework for building applications. (All pizzas start out as cheese pizzas; you simply add ingredients.) The problem with that, though, is that I have to learn another framework on top of CGI::App, instead of intuiting my own.

But how do I write the superclass? Going back to the original node that sparked my interest, I found a later reply that described how you do this. The big key is that you override the print method — this allows you to customize the output, and from here you could call functions that create your sidebar blocks, and output the content of the CGI::App you just called in a main content area of your template.

Grist for the mill…

Search tips

  • Use #{some tag} or tag:{some tag} to filter by tag; using this more than once will find results that match all tags.
  • Use year:{year} to filter by year. If you use this multiple times, the last one wins.
  • Use begin:{year-month-day} to restrict to results beginning on that day. Last one wins.
  • Use end:{year-month-day} to restrict to results ending on that day. Last one wins.