Search results

From the blog

Posted 2015-09-18
On 10 Years at Zend

10 years ago, as I write this, I was on a plane from Burlington, VT, to San Jose, CA, where I'd be starting work at Zend Technologies the next day as a PHP Developer.

Posted 2009-01-02
Seven Things - Tagged by Keith Casey

I'm really not sure I understand these "seven things" or "tagged" memes, but I'm going to give it a shot, after Keith Casey did a drive-by tagging of me on New Year's Eve.

So, without further ado, seven things you may not know about me…

Posted 2008-01-01
2007 Retrospective

2007 was a busy year, both personally and professionally. I won't go into the personal too much, because, well, it's personal, and some of the details are simply inappropriate for blogging material.

Here's the short version:

  • One trip to Belgium and The Netherlands.
  • Two trips to Israel.
  • Two trips to Atlanta, GA (not counting the return trip from Europe, when I was stranded for a day due to storms in the Northeast).
  • Three different user groups attended, with three presentations.
  • One major Zend Framework release.
  • One PEAR release.
  • One podcast.
  • One webinar.
  • One book published.
  • One conference attended.

What follows is my month-by-month breakdown:

Posted 2007-11-06
I is a published author!

So, in addition to it being my daughter's birthday, which is exciting enough in itself, I received a package from my publisher, SitePoint, with my author copies of The PHP Anthology. Very exciting to see stuff I've written published!

Posted 2005-09-21
Transitions

Life is in transition for me now. Two weeks ago, we got to bring our handsome baby boy home, and I haven't been sleeping much since (though more than Jen). On top of the sleep deprivation, however, comes more exciting news: I've been hired as a PHP Developer by Zend Technologies!

I was approached by Daniel Kushner in late July regarding another position at Zend, and was flown out at the beginning of August. While I felt the interview went well, I harbored some doubts; work got fairly busy shortly thereafter, and then, of course, Liam was born, and the interview went completely out of my head. Until about three days after Liam's birthday, when Daniel contacted me again about the PHP Developer position.

Work started yesterday, and I was flown to Zend's offices in Cupertino, CA, for orientation and to sit down with both Daniel and others to prepare for the projects on which I will be working. Thankfully, the job will not require that I move, and I will be working out of the 'home office' in Vermont when I return later this week.

The decision to leave NGA was difficult, but the opportunity to work with Zend is just too good to miss. I am honored to be selected by them, and hope this is the beginning of many good things to come.

Posted 2005-06-28
ZCE - Results are in!

Got the official notification: I passed the Zend PHP Certification Exam, and can now report I'm a Zend Certified Engineer (ZCE)!

Zend Certified Engineer

Thanks go to my bosses at NGA for giving me the opportunity to attend php|Tropics, to Marco Tabini for offering the ZCE exam as part of the php|Tropics conference fee, and to my wife, Jen, and daughter, Maeve, for putting up with me while I studied… and being good sports about having to stay home while I went to Cancun. Hopefully next time I can take you along!

Posted 2005-06-05
Moving into City Living

We did it… we moved, again.

However, unlike our previous two moves, which were interstate, this time we stayed in the same state. The same county, even. What made (makes; we're still finishing up as I write this) this one so jarring is the fact that we're going from the rural mountainside to the fourth floor of a new apartment/condo building adjoining an interstate spur.

Why would we do this?

Posted 2005-04-23
We're having a baby!

I can't believe I haven't announced this to the world yet, but Jen and I are expecting another baby! The due date is mid-September. And… we decided at the ultrasound this past week we would go ahead and find out the gender… and….

Posted 2005-03-20
Enter the Matrix

I couldn't resist… the car model demands it…

For those not familiar with where I live, my family and I live in West Bolton, VT — about 20 miles from Burlington, and at the base of Bolton Mountain. Our daily commute is 4 miles on a dirt road, another 3 to 4 miles on some twisty two-laners at 35mph to the interstate, and around 10 miles on the interstate into Burlington. Then there's all the miles in town getting Maeve to day-care, Jen or myself dropped off, and whoever has the car to work. And we only have one car.

So, you can imagine the crisis when, almost a month ago, our Toyota RAV4 died on the way in to work.

We started it up that day, and it had this funny knocking sound. I remembered a similar sound in my old pickup back in Montana… the one that died. I determined to get it into a shop that day to get it diagnosed. The noise came and went while we were on the backroads, and because it wasn't constant, I figured it couldn't be too serious.

And then we tried to get to highway speeds…. a few miles on the interstate, and it was evident we were in trouble. The Rav was having trouble maintaining 60mph on the way up French Hill — when it normally was able to accelerate past 70mph. And the knocking sound was getting worse and louder.

We resolved to pull off at the first exit, at Tafts Corners in Williston. I pulled into the first gas station there, and as we tried to find a place to park the vehicle, a mechanic flagged us down to stop the car. He came over to where we parked and said, "Sounds like you've blown your engine."

These, of course, were the absolute last words I wanted to hear.

To make a long story short, apparently a bearing was thrown when we started the engine that day, and because we decided to drive it, we basically destroyed the engine. The cost to replace it: around $6,000.

Now, we're not exactly what you'd call "financially secure". We've had a lot of transitions in the past five years, and except for the past year and a few months, haven't typically both been working at the same time. We've been in a perpetual cycle of having enough to pay the bills… but having to pay consistently late. And we haven't been able to do much, if anything, about our educational debt. In short, our credit sucks. Which means that $6,000 is a big deal.

Did I mention that, at the time of the incident, we still had 17 months left on our car payments?

And, on top of it, I've been in the middle of a huge project for work that's required a fair bit of overtime — and very little wiggle room for personal time?

The timing could not have been worse, either professionally or financially.

We've been very fortunate, however. Jen's parents very graciously offered to pay off our existing car loan — which helped tremendously. It bought us the time to figure things out and eliminated one factor that might have barred our ability to borrow towards repairs or a new car. Additionally, a friend of Jen's turns out to be absolutely ruthless when it comes to dealing with car salespeople, and went to bat for us in working out a deal. If it hadn't been for her efforts — and those of the salesperson, who also went to bat for us — we would not have gotten more than a thousand or so for the vehicle; we ended up getting over $3,000 for it, as is. Finally, the finance guy at the dealership advocated for us tremendously so we could get a loan on a new vehicle, with the Rav as our trade-in.

So, to conclude: We're now proud owners of a 2005 Toyota Matrix! (And now the mystery of the title is revealed… to all you Matrix fans out there…)

I'll try to get a photo of the car up soon… about the time we update the year-old photos on our site… :-)

Posted 2005-03-20
php|Tropics

Well, it's official: My IT Manager convinced those in the upper echelons (well, considering it's a non-profit with only around 20 employees, that meant the president and the CFO) that (1) he and I need to attend a PHP conference, (2) due to the amount of work we've been putting in to bring money into the organization, cost shouldn't be too much of a deciding factor, and (3) php|Tropics isn't too expensive, especially considering the sessions involved cover some of the very issues we've been struggling with the past few months (PHP/MySQL/Apache and clusters, PHP5 OOP, PHP Security, test-driven development, Smarty, and more).

So, we're going to Cancun in May!

This is incredibly exciting! I've never been to Mexico, nor even a resort, so I'll finally get to find out what my wife and friends have been talking about all these years. Plus, the conference is top-notch — many of the presenters are well-known in the PHP community, and have blogs I've been following for the past year. (I only wish that Chris Shiflett's PHP Security series wasn't running head-to-head with the PHP5 OOP Extensions and PHP 5 Patterns sessions; I suspect Rob and I will have to do a divide-and-conquer that day.)

Drop me a line if you'll be attending — I'm looking forward to meeting other PHP junkies!

Posted 2005-02-19
New Cgiapp Site

I've been extremely busy at work, and will continue to be through the end of March. I realized this past week that I'd set a goal of having a SourceForge website up and running for Cgiapp by the end of January — and it's now mid-February. Originally, I was going to backport some of my libraries from PHP5 to PHP4 so I could do so… and I think that was beginning to daunt me a little.

Fortunately, I ran across a quick-and-dirty content management solution yesterday called Gunther. It does templating in Smarty, and uses a wiki-esque syntax for markup — though page editing is limited to admin users only (something I was looking for). I decided to try it out, and within an hour or so had a working site ready to upload.

Cgiapp's new site can be found at cgiapp.sourceforge.net.

UPDATE

Shortly after I wrote this original post, I figured out what the strength of Gunther was — and why I no longer needed it. Gunther was basically taking content entered from a form and then inserting that content (after some processing for wiki-like syntax) into a Smarty template. Which meant that I could do the same thing with Cgiapp and Text_Wiki. Within an hour, I wrote an application module in Cgiapp that did just that, and am proud to say that the Cgiapp website is 100% Cgiapp.

Posted 2005-01-14
Cgiapp 1.5.3 released

1.5.3 fixes an issue introduced by 1.5.2 that creates a performance hit whenever the run mode is being determined by function name or CGI parameter. More details on the Cgiapp download page.

Posted 2005-01-14
Cgiapp 1.5.2 released

At work, we've been developing a new platform for our website, based entirely on Cgiapp. This week we released the first stage of it: garden.org and assoc.garden.org. These should stand as good testament to Cgiapp's robustness!

With all that development, and also with some communication from other Cgiapp users, I've made some changes to Cgiapp, and released version 1.5.2 this evening.

1.5.2 is mainly security and bugfixes. Error handling was somewhat broken in 1.5.1 — it wouldn't restore the original error handler gracefully. This is now corrected. Additionally, I've made run() use the array returned by query() — consisting of the $_GET and $_POST arrays — in determining the run mode. Finally, I've modified the behaviour of how run() determines the current run mode: if the mode parameter is a method or function name, it cannot be a Cgiapp method or a PHP internal function. This allows more flexibility on the part of the programmer in determining the mode param — words like 'run' and 'do' can now be used without causing massive problems (using 'run' would cause a race condition in the past).
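The mode-resolution rules described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual Cgiapp source; the function name, parameters, and default mode are invented:

```php
<?php

// Hypothetical sketch of Cgiapp 1.5.2's run-mode rules, for illustration.
function resolve_run_mode(array $get, array $post, $mode_param, array $reserved)
{
    // As in query(): merge $_GET and $_POST, with $_POST taking precedence
    $request = array_merge($get, $post);

    if (!isset($request[$mode_param])) {
        return 'start'; // no mode requested; fall back to a default
    }

    $mode = $request[$mode_param];

    // A mode may not name a Cgiapp method ('run', 'query', ...) or a PHP
    // internal function, which is what makes words like 'run' safe to use
    if (in_array($mode, $reserved, true) || function_exists($mode)) {
        return 'start';
    }

    return $mode;
}
```

With this sketch, requesting a mode of 'view' resolves normally, while requesting 'run' or a PHP built-in such as 'strlen' falls back to the default.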

As usual, Cgiapp is available in the downloads area. Grab your tarball today!

Posted 2005-01-08
Dreamweaver, Text Editors, and Webmasters

I picked up on this article on Friday, glanced through it and moved on, but noticed this evening it had been slashdotted — at which point I realized the author is the current CGI::Application maintainer, so I looked again.

At my first glance through it, it appeared the author was looking for a nice, easy-to-use pre-processor script for generating a site out of templates and content files. To that end, he ultimately recommended ttree, part of the Template Toolkit distribution.

However, the real gist of the article — something that should probably have been summarized at the end — is that the author was looking for a free and OSS replacement for Dreamweaver's Templates functionality. This functionality allows a developer to create a template with placeholders for content, lock it, and then create pages that fill in the bits and pieces of content. Finally, the developer compiles the site — creating final HTML pages out of the content merged with the templates.

Now, I can see something like this being useful. I've used webmake for a couple of projects, and, obviously, utilize PHP in many places as a templating language. However, several comments on Slashdot also gave some pause. The tone of these comments was to the effect of, "real developers shouldn't use DW; they should understand HTML and code it directly." Part of me felt this was elitist — the web is such an egalitarian medium that there should be few barriers to entry. However, the webmaster in me — the professional who gets paid each pay period and makes a living off the web — also agreed with this substantially.

I've worked — both professionally and as a freelancer — with individuals who use and rely on DW. The problem I see with the tool and others of its breed is precisely their empowerment of people. Let me explain.

I really do feel anybody should be able to have a presence on the 'net. However, HTML is a fragile language: trivial errors can cause tremendous changes in how a page is rendered — and even crash browsers on occasion. The problem I see is that DW and other GUI webpage applications create, from my viewpoint, garbage HTML. I cannot tell you how many pages generated by these applications that I've had to clean up and reformat. They spuriously add tags, often around empty content, that are simply unnecessary.

The problem is compounded when individuals have neither time nor inclination to learn HTML, but continue using the tool to create pages. They get a false sense of accomplishment — that can be quickly followed by a very real sense of incompetence when the page inexplicably breaks due to an edit they've made — especially when the content is part of a larger framework that includes other files. Of course, as a web professional, I get paid to fix such mistakes. But I feel that this does everybody a disservice — the individual/company has basically paid twice for the presentation of content — once to the person generating it, a second time to me to fix the errors.

This is a big part of the reason why I've been leaning more and more heavily on database-driven web applications. Content then goes into the database, and contains minimal — if any — markup. It is then injected into templates, which go through a formal review process, as well as through the W3C validators, to prevent display problems. This puts everybody in a position of strength: the editor generating content, the designer creating the look-and-feel, and the programmer developing the methods for mapping content with the templates.

There's still a part of me that struggles with what I perceive as an elitist position. However, there's another part of me that has struggled heavily with the multitasking demands made on all web professionals — we're expected to be editors, graphic designers, programmers, and more. In most cases, we're lucky if we're strong in one or two such areas, much less passionate about staying abreast of the changing face of our medium.

Posted 2004-12-31
Happy New Year!

It's about 50 minutes shy of 2005 here, and Maeve has finally succumbed to sleep, I'm almost done with my stout, and we're in West Bolton without TV for the second year running (yay!).

I hope the new year brings peace and happiness to one and all! Happy coding!

Posted 2004-12-31
Smarty $_SERVER vars

I don't know why I never bothered to look this up, but I didn't. One thing I typically do in my parent Cgiapp classes is to pass $_SERVER['SCRIPT_NAME'] to the template. I just found out — through the pear-general newsgroup — that this is unnecessary: use {$smarty.server.KEY_NAME} in the template to access any $_SERVER vars it might need.
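For instance, in the template itself (a sketch; the form markup is just an illustration):

```smarty
{* No PHP-side assign needed; Smarty exposes $_SERVER through the
   reserved {$smarty} template variable: *}
<form action="{$smarty.server.SCRIPT_NAME}" method="post">
```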

Posted 2004-11-17
Sign of a Geek

It's now been confirmed: I'm a geek.

Okay, so that probably comes as no shocker to those of you who know me, but it's the little things that make me realize it myself.

I've been frequenting Perl Monks for a couple of years now, mainly to garner ideas and code to help me with my personal or work projects. I rarely post comments, and I've only once submitted a question to the site. However, I do frequent the site regularly, and the few comments I've put in — generally regarding usage of CGI::Application — have been typically well-moderated.

Well, yesterday I made a comment to a user asking about editors to use with perl. I was incensed by a remark he made about VIM not having the features he needed. Now, as I said in my comment, I've used VIM on a daily basis for over two years, and I'm still discovering new features — and I've used all of the features he was looking for.

This is where I discovered I'm a geek: my comment made it into the Daily Best for today, peaking around number 5. The fact that that made my day indicates to me that I must be a geek.

Oh — and VIM rules!

Posted 2004-11-04
Cgiapp mentioned in php|architect!

A new Cgiapp user reported they had stumbled across the project in php|architect! It's in the current, October 2004 issue, in the News section, prominently displayed in the upper right corner of the page. The announcement blurb is straight from my freshmeat project page for version 1.4. Cgiapp is carving a name for itself!

Posted 2004-11-04
Cgiapp 1.5.1 released

At work this week, I discovered a bug with how I was calling set_error_handler() in Cgiapp's run() method. Evidently passing a reference in a PHP callback causes issues! So, I corrected that.

I also made a minor, one-character change to query() to make it explicitly return a reference to the $_CGIAPP_REQUEST property array.

You can see full details at the Cgiapp download page.

Posted 2004-10-28
Cgiapp 1.5 released

Cgiapp 1.5 has been released; you may now download it.

This release fixes a subtle bug I hadn't encountered before; namely, when a method name or function name is passed as an argument to mode_param(), run() was receiving the requested run mode… and then attempting to process that as the mode param. The behaviour is now fixed, and is actually simpler than the previous (non-working) behaviour.

Also, on reading Chris Shiflett's paper on PHP security, I decided to reinstate the query() method. I had been using $_REQUEST to check for a run mode parameter; because this combines the $_GET, $_POST, and $_COOKIE arrays, it's considered a bit of a security risk. query() now creates a combined array of the $_GET and $_POST variables ($_POST taking precedence over $_GET) and stores them in the property $_CGIAPP_REQUEST; it returns a reference to that property. run() now uses that property to determine the run mode.
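A rough sketch of what such a query() method might look like; the class name is invented and this is illustrative, not the released Cgiapp source:

```php
<?php

// Illustrative sketch of the reinstated query() method (hypothetical
// class name; not the released Cgiapp source).
class CgiappSketch
{
    public $_CGIAPP_REQUEST;

    public function &query(array $get, array $post)
    {
        if (null === $this->_CGIAPP_REQUEST) {
            // $_POST takes precedence over $_GET; unlike PHP's own
            // $_REQUEST, $_COOKIE is deliberately left out
            $this->_CGIAPP_REQUEST = array_merge($get, $post);
        }
        return $this->_CGIAPP_REQUEST;
    }
}
```

Because the merged array is cached in a property and returned by reference, repeated calls are cheap and run() can consult the same data.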

Enjoy!

Posted 2004-10-22
When array_key_exists just doesn't work

I've been playing with parameter testing in my various Cgiapp classes, and one test that seemed pretty slick was the following:

if (!array_key_exists('some_string', $_REQUEST)) {
    // some error
}

Seems pretty straight-forward: $_REQUEST is an associative array, and I want to test for the existence of a key in it. Sure, I could use isset(), but it seemed… ugly, and verbose, and a waste of keystrokes, particularly when I'm using the param() method:

if (!isset($_REQUEST[$this->param('some_param')])) {
    // some error
}

However, I ran into a pitfall: when it comes to array_key_exists(), $_REQUEST isn't exactly an array. I think what's going on is that $_REQUEST is actually a superset of several other arrays — $_POST, $_GET, and $_COOKIE — and isset() has some logic to descend amongst the various keys, while array_key_exists() can only work on a single level.
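For what it's worth, one documented difference between the two functions is how they treat a key whose value is null; whether or not that explains the $_REQUEST behaviour above, it's worth keeping in mind when choosing between them:

```php
<?php

// isset() and array_key_exists() genuinely differ on null values:
$params = ['name' => null, 'mode' => 'view'];

var_dump(isset($params['name']));            // bool(false): the value is null
var_dump(array_key_exists('name', $params)); // bool(true): the key exists
```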

Whatever the explanation, I ended up reverting a bunch of code. :-(

Posted 2004-10-20
MySQL Miscellanae

Inspired by a Slashdot book review of High Performance MySQL.

I've often suspected that I'm not a SQL guru… little things like being self-taught and having virtually no resources for learning it. This has been confirmed to a large degree at work, where our DBA has taught me many tricks about databases: indexing, when to use DISTINCT, how and when to do JOINs, and the magic of TEMPORARY TABLEs. I now feel fairly competent, though far from being an expert — I certainly don't know much about how to tune a server for MySQL, or tuning MySQL for performance.

Last year around this time, we needed to replace our MySQL server, and I got handed the job of getting the data from the old one onto the new. At the time, I looked into replication, and from there learned about making binary copies of a data store. I started using this as a way to back up data, instead of periodic mysqldumps.

One thing I've often wondered since: would replication be a good way to do backups? It seems like it would, but I haven't investigated. One post on the aforementioned Slashdot article addressed this, with the following summary:

  1. Set up replication
  2. Do a locked table backup on the slave

Concise and to the point. I only wish I had a spare server on which to implement it!

Posted 2004-10-12
PHP_SELF versus SCRIPT_NAME

I've standardized my PHP programming to use the environment variable SCRIPT_NAME when I want my script to refer to itself in links and form actions. I've known that PHP_SELF has the same information, but I was more familiar with the name SCRIPT_NAME from using it in perl, and liked the feel of it more as it seems to describe the resource better (PHP_SELF could stand for the path to the PHP executable if I were to go by the name only).

However, I just noticed a post on the php.general newsgroup where somebody asked what the difference was between them. Semantically, there isn't any; they should contain the same information. However, historically and technically speaking, there is. SCRIPT_NAME is defined in the CGI 1.1 specification, and is thus a standard. However, not all web servers actually implement it, and thus it isn't necessarily portable. PHP_SELF, on the other hand, is implemented directly by PHP, and as long as you're programming in PHP, will always be present.
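A small sketch of a portable self-reference helper based on that distinction; the function name is invented for illustration:

```php
<?php

// Prefer the CGI-standard SCRIPT_NAME when the server provides it,
// falling back to PHP_SELF, which PHP itself always populates.
// (Hypothetical helper, not a standard API.)
function self_url(array $server)
{
    if (isset($server['SCRIPT_NAME'])) {
        return $server['SCRIPT_NAME'];
    }
    return $server['PHP_SELF'];
}
```

In a script you would call self_url($_SERVER) wherever a link or form action needs to point back at the current script.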

Guess I have some grep and sed in my future as I change a bunch of scripts…

Posted 2004-10-08
PHP: Continue processing after script aborts

Occasionally, I've needed to process a lot of information from a script, but I don't want to worry about PHP timing out or the user aborting the script (by clicking on another link or closing the window). Initially, I investigated register_shutdown_function() for this; it will fire off a process once the page finishes loading. Unfortunately, the process is still a part of the current connection, so it can be aborted in the same way as any other script (i.e., by hitting stop, closing the browser, going to a new link, etc.).

However, there's another setting initialized via a function that can override this behaviour — i.e., let the script continue running after the abort. This is ignore_user_abort(). By setting this to true, your script will continue running after the fact.
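In practice the two settings tend to travel together; a minimal sketch, assuming the work is safe to finish without the client connected:

```php
<?php

// Keep running even if the user hits stop or closes the browser...
ignore_user_abort(true);

// ...and lift PHP's execution time limit as well, since a long job can
// be killed by the timeout just as easily as by a user abort
set_time_limit(0);

// ... long-running work goes here, e.g. processing a bulk upload ...
```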

This sort of thing would be especially good for bulk uploads where the upload needs to be processed — say, for instance, a group of images or email addresses.

Posted 2004-10-07
Practical PHP Programming

In the past two days, I've seen two references to Practical PHP Programming, an online book that serves both as an introduction to programming with PHP5 and MySQL as well as a good advanced reference with many good tips.

This evening, I was browsing through the Performance chapter (chapter 18), and found a number of cool things, both for PHP and MySQL. Many were common-sense things that I've been doing for a while, but which I've also seen and shaken my head at in code I've seen from others (recalculating loop invariants at every iteration, not using variables passed to a function, not returning a value from a function, not using a return value from a function). Others were new and gave me pause for thought (string concatenation with the '.' operator is expensive, especially when done more than once in an operation; echo can take a comma-separated list).
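Two of those tips sketched briefly: hoisting a loop invariant out of the loop condition, and passing echo a comma-separated list instead of concatenating:

```php
<?php

// Tip 1: compute loop invariants once, not on every iteration
$items = ['a', 'b', 'c'];
$count = count($items);   // hoisted out of the loop condition
$out = '';
for ($i = 0; $i < $count; $i++) {
    $out .= $items[$i];
}

// Tip 2: echo accepts a comma-separated list, avoiding the cost of
// building an intermediate string with the '.' operator
echo 'Joined: ', $out, "\n";
```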

Some PHP myths were also dispelled, some of which I've been wondering about for a while. For instance, the amount of comments and whitespace in PHP is not a factor in performance (and PHP caching systems will often strip them out anyway); double quotes are not more expensive than single quotes unless variable interpolation occurs.

It also has some good advice for SQL optimization, and, more importantly, MySQL server optimization. For instance, the author suggests running OPTIMIZE TABLE table; on any table that has seen substantial inserts, updates, or deletes since creation; this will defrag the table and give it better performance. Use CHAR() versus VARCHAR(); VARCHAR() saves on space, but MySQL has to calculate how much space was used each time it queries in order to determine where the next field or record starts. However, if you have any variable-length fields, you may as well use as many as you need — or split off variable-length fields (such as a TEXT() field) into a different table in order to speed up searching. When performing JOINs, compare on numeric fields instead of character fields, and always JOIN on columns that are indexed.

I haven't read the entire book, but glancing through the TOC, there are some potential downfalls to its content:

  • It doesn't cover PhpDoc.
  • It doesn't appear to cover unit testing.
  • Limited coverage of templating solutions (though they are mentioned).
  • Limited usage of PEAR. The author does mention PEAR a number of times, and often indicates that use of certain PEAR modules is preferable to using the corresponding low-level PHP calls (e.g., Mail and Mail_MIME, DB), but in the examples rarely uses them.
  • PHP-HTML-PHP… The examples I browsed all created self-contained scripts that did all HTML output. While I can appreciate this to a degree, I'd still like to see a book that shows OOP development in PHP and which creates re-usable web components in doing so. For instance, instead of creating a message board script, create a message board class that can be called from anywhere with metadata specifying the database and templates to use.

All told, there's plenty of meat in this book — I wish it were in dead tree format already so I could browse through it at my leisure, instead of in front of the computer.

Posted 2004-09-24
Get Firefox!

Those who know me know that I love linux and open source. One particular program that firmly committed me to open source software is the Mozilla project — a project that took the Netscape browser's codebase and ran with it to places I know I never anticipated when I first heard of the project.

What do I like about Mozilla? Well, for starters, and most importantly, tabbed browsing changed the way I work. What is tabbed browsing? It's the ability to have multiple tabs in a browser window, allowing you to switch between web pages without needing to switch windows.

Mozilla came out with a standalone browser a number of months back, called first Phoenix, then Firebird, and now Firefox. This standalone browser has a conservative number of basic features, which allow for a lean download — and yet these basic features, which include tabbed browsing and disabling popups, far surpass Internet Explorer's. And there are many extensions that you can download and integrate into the browser.

One such extension is a tabbed browsing extension that makes tabbed browsing even more useful. With it, I can choose to have any links leaving a site go to a new tab; or have bookmarks automatically load in a new tab; or group tabs and save them as bookmark folders; or drag a tab to a different location in the tabs (allowing easy grouping).

Frankly, there are few things I can find that Firefox can't do.

And, on top of that, it's not integrated into the operating system. So, if you're on Windows, that means if you use Firefox, you're less likely to end up with spyware and adware — which often is downloaded and installed by special IE components just by visiting sites — ruining your internet experience.

So, spread the word: Firefox is a speedy, featureful, SECURE alternative to Internet Explorer!

Posted 2004-09-21
Cgiapp Roadmap

I've had a few people contact me indicating interest in Cgiapp, and I've noticed a number of subscribers to the freshmeat project I've setup. In addition, we're using the library extensively at the National Gardening Association in developing our new site (the current site is using a mixture of ASP and Tango, with several newer applications using PHP). I've also been monitoring the CGI::Application mailing list. As a result of all this activity, I've decided I need to develop a roadmap for Cgiapp.

Currently, planned changes include:

  • Version 1.x series:

    • Adding a Smarty registration for stripslashes (the Smarty "function" call will be sslashes).
    • param() bugfix: currently, calling param() with no arguments simply gives you a list of parameters registered with the method, but not their values; this will be fixed.
    • error_mode() method. The CGI::Application ML brought up and implemented the idea of an error_mode() method to register an error_mode with the object (similar to run_modes()). While non-essential, it would offer a standard, built-in hook for error handling.
    • $PATH_INFO traversing. Again, on the CGI::App ML, a request was brought up for built-in support for using $PATH_INFO to determine the run mode. Basically, you would pass a parameter indicating which location in the $PATH_INFO string holds the run mode.
    • DocBook tutorials. I feel that too much information is given in the class-level documentation, and that usage tutorials need to be written. Since I'm documenting with PhpDoc and targetting PEAR, moving tutorials into DocBook is a logical step.
  • Version 2.x series:

    Yes, a Cgiapp2 is in the future. There are a few changes in store that necessitate either (a) PHP5 or (b) API changes. In keeping with PEAR guidelines, I'll rename the module Cgiapp2 so as not to break applications designed for Cgiapp.

    Changes expected include:

    • Inherit from PEAR. This will allow for some built in error handling, among other things. I suspect that this will tie in with the error_mode(), and may also deprecate croak() and carp().

    • Changes to tmpl_path() and load_tmpl(). In the perl version, you would instantiate a template using load_tmpl(), assign your variables to it, and then do your fetch() on it. So, this:

      $this->tmpl_assign('var1', 'val1');
      $body = $this->load_tmpl('template.html');
      

      Becomes this:

      $tmpl = $this->load_tmpl();
      $tmpl->assign('var1', 'val1');
      $body = $tmpl->fetch('template.html');
      

      OR

      $tmpl = $this->load_tmpl('template.html');
      $tmpl->assign('var1', 'val1');
      $body = $tmpl->fetch();
      

      (Both examples assume use of Smarty.) I want to revert to this behaviour for several reasons:

      • Portability with perl. This is one area in which the PHP and perl versions differ greatly; going to the perl way makes porting classes between the two languages simpler.

      • Decoupling. The current set of template methods create an object as a parameter of the application object — which is fine, unless the template object instantiator returns an object of a different kind.

        Cons:

        • Smarty can use the same object to fill multiple templates, and the current methods make use of this. By assigning the template object locally to each method, this could be lost. HOWEVER… an easy work-around would be for load_tmpl() to create the object and store it in a parameter; subsequent calls would return the same object reference. The difficulty then would be if load_tmpl() assumed a template name would be passed. However, even in CGI::App, you decide on a template engine and design for that engine; there is never an assumption that template engines should be swappable.

        • Existing Cgiapp1 applications would need to be rewritten.

    • Plugin Architecture: The CGI::App ML has produced a ::Plugin namespace that utilizes a common plugin architecture. The way it is done in perl is through some magic of namespaces and export routines… both of which are, notably, missing from PHP.

      However, I think I may know a workaround for this, if I use PHP5: the magic __call() overloader method.

      My idea is to have plugin classes register methods that should be accessible by a Cgiapp-based class under a special key in the $GLOBALS array. The __call() method would then check that key for registered methods; if one is found matching the method requested, that method is called (using call_user_func()), with the Cgiapp-based object reference as the first argument. Voilà! Instant plugins!

      Why do this? A library of 'standard' plugins could then be created, such as:

      • A form validation plugin
      • Alternate template engines as plugins (instead of overriding the tmpl_* methods)
      • An authorization plugin

      Since the 'exported' methods would have access to the Cgiapp object, they could even register objects or parameters with it.
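A minimal sketch of how this might look in PHP 5. All names here (the $GLOBALS key, the registration function, the 'validate' plugin) are invented for illustration; this is not the eventual Cgiapp2 API, just the dispatch idea described above:

```php
<?php
// Sketch of the plugin idea: plugins register callable methods under a
// well-known key in $GLOBALS, and the application class dispatches
// unknown method calls to them via the PHP 5 __call() overloader.

$GLOBALS['cgiapp_plugins'] = array();

function cgiapp_register_plugin($method, $callback)
{
    $GLOBALS['cgiapp_plugins'][$method] = $callback;
}

class Cgiapp
{
    public function __call($method, $args)
    {
        if (isset($GLOBALS['cgiapp_plugins'][$method])) {
            // Pass the application object as the first argument, so
            // plugins can register objects or parameters with it.
            array_unshift($args, $this);
            return call_user_func_array($GLOBALS['cgiapp_plugins'][$method], $args);
        }
        trigger_error("Call to undefined method $method", E_USER_ERROR);
    }
}

// A hypothetical 'validate' plugin "exports" a single method:
function plugin_validate($app, $value)
{
    return $value !== '';
}
cgiapp_register_plugin('validate', 'plugin_validate');

$app = new Cgiapp();
var_dump($app->validate('hello')); // bool(true)
```

The key point is that the method never exists on the class itself; __call() fields the request and forwards it, which is as close as PHP gets to perl's namespace-export magic.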

If you have any requests or comments on the roadmap, please feel free to contact me.

From the blog

Posted 2004-09-20
New site is up!

The new weierophinney.net/matthew/ site is now up and running!

The site has been many months in planning, and about a month or so in actual coding. I wrote the site in PHP, rather than as flatfiles, so as to:

  • Allow easier updating (it includes its own content management system)
  • Include a blog for my web development and IT interests
  • Allow site searching (everything is an article or download)

I've written it using a strict MVC model, which means that I have libraries for accessing and manipulating the database; all displays are template driven (meaning I can create them with plain-old HTML); and I can create customizable applications out of various controller libraries. I've called this concoction Dragonfly.

There will be more developments coming — sitewide search comes to mind, as well as RSS feeds for the blog and downloads.

Stay Tuned!

From the blog

Posted 2004-09-11
What's keeping that device in use?

Ever wonder what's keeping that device in use so you can't unmount it? It's happened to me a few times. The tool to discover this information? lsof.

Basically, you type something like lsof /mnt/cdrom, and it gives you a ps-style output detailing the PID and name of each process using the cdrom. You can then go and manually stop those programs, or kill them yourself.

From the blog

Posted 2004-09-04
PHP and Template Engines

On PhpPatterns, I recently read an article on Template Engines in PHP. It got my ire up, as it said (my interpretation):

  • "template engines are a bad idea"
  • "templating using PHP natively can be a good idea"
  • "template engines… are not worth the text their written in"

Okay, so those are actually direct quotes from the article. I took issue with it immediately; I use Smarty for everything I do, and the decision to do so was not made lightly. I have in fact been advocating the use of template engines, in one language or another, for several years across the various positions in which I've been employed; I think they are an essential tool for projects larger than a few pages. Why?

  • Mixing of languages causes inefficiency. When I'm programming, it's incredibly inefficient to be writing in up to four different languages: PHP or Perl, X/HTML, CSS, and Javascript. Switching between them while in the same file is cumbersome and confusing, and trying to find HTML entities buried within quoting can be a nightmare, even when done in heredocs. Separating the languages into different files seems not only natural, but essential.
  • Views contain their own logic. In an MVC pattern, the final web page View may be dependent on data passed to it via the Controller; however, this doesn't mean that I want the full functionality of a language like PHP or Perl to do that. I should only be doing simple logic or looping constructs — and a full scripting language is overkill. (I do recognize, however, that template engines such as Smarty are written using PHP, so the language is being invoked regardless. What I speak of here is the language used to compose the template.)
  • Abstraction and Security. The fewer internals that are divulged on the template page, the better. For security purposes, I may not want clients able to know how data got to the page, only what data is available to them. In addition, if this data is abstracted enough, any number of backends could be connected to the page to produce output.

So, reading the aforementioned article really got my hackles up. However, it got me thinking, as well. One issue raised is that PHP can be used as your templating language. While I can understand why this might be desirable — everything from load issues to flexibility — I also feel that this doesn't give enough abstraction.

Using PHP seems to me to be inefficient on two fundamental levels, based on my understanding of The Pragmatic Programmer:

  • Domain Language. The Pragmatic Programmer suggests that subsets of a language should be used, or wholly new mini-languages developed, that speak to the domain at hand. As an example, you might want to use a sharp tool to open a can; an axe would be overkill, but a knife might work nicely. Using PHP to describe a template is like using an axe to open a can; it'll do the job, but it may also make a mess of it all, simply because it's too much sharp edge for the job.
  • Metadata. Metadata is data about data; to my thinking, templates describe the data they are communicating; the compiled template actually contains the data. In this regard, again, putting PHP into the script is overkill as doing so gives more than just some hints as to what the data is.

The author of the article also makes a case for teaching web designers PHP — that the language is sufficiently easy to pick up that they typically will be able to learn it as easily, if not more easily, than a template language. I agree to a degree… But my experience has shown that web designers typically struggle with HTML, let alone PHP. (Note: my experience in this regard is not huge, and I'm sure that this is an exaggeration.) I find that it's typically easiest for me to give an example template, explain what the funny, non-HTML stuff can do, and let them go from there. Using this approach, they do not need to learn anything new — they simply work with placeholders.
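The placeholder approach can be sketched in a few lines of plain PHP. This is a toy substitution for illustration only (fill_template is my own invented helper, not part of Smarty), but it shows why a designer only ever has to learn placeholders:

```php
<?php
// Toy illustration of the placeholder idea: the designer writes plain
// HTML with {placeholders}; the code fills them in. This is NOT how
// Smarty works internally -- just a sketch of the concept.

function fill_template($template, array $vars)
{
    foreach ($vars as $name => $value) {
        $template = str_replace('{' . $name . '}', $value, $template);
    }
    return $template;
}

$html = '<h1>{title}</h1><p>{body}</p>';
echo fill_template($html, array(
    'title' => 'PHP and Template Engines',
    'body'  => 'Placeholders keep logic out of markup.',
));
```

The template itself remains valid, previewable HTML, which is exactly the property that makes handing it to a designer painless.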

Still, I think the author raises some fine points. I wish he'd bothered to do more research into why people choose template engines, and the benefits that arise from using them, before simply slamming them outright. Of course, the article is also a bit dated; it was written over two years ago, and much has changed in the world of PHP and many of its template engines. I'm curious as to whether he would feel the same way today.

Me? My mind is made up — the benefits, in my circumstances, far outweigh any costs associated. I'll be using template engines, and Smarty in particular, for years to come.

From the blog

Posted 2004-04-27
Binary packages for Gentoo

I'd read that you could get binary packages for Gentoo, thus alleviating the need to compile everything. (Of course, you then lose some of the benefits of compiling everything, but you gain in speed…) Unfortunately, I mistook this for ebuilds, and never quite figured it out.

The key is to throw the -g flag:

$ emerge -g gnumeric # which is like 'emerge --getbinpkg gnumeric'

I also learned how to update packages tonight:

$ emerge sync             # to sync your mirror with the gentoo mirrors
$ emerge --update portage # if necessary
$ emerge --update system  # updates core system files
$ emerge --update world   # updates all packages

From the blog

Posted 2004-04-22
Gentoo for several weeks

I've had a bunch of problems with my new computer — it uses ACPI, but if I load the ACPI modules, it won't boot; if I don't load them, I have to go through contortions to get the ethernet working, and it won't power down; and a bunch of other little stuff.

So, a few weeks ago, I thought, what the heck? Why not try Gentoo? I've been reading about it since it first came out, and I remember talking with Duane about it once, and it has a reputation for both being cutting edge and stable. Heck, even Wil Wheaton's endorsing it… it can't be all bad, right?

I had a few misstarts — bad CDs, not quite getting how the chroot thing worked, problems with DNS (which I still don't understand; and Rob has them as well, so it's not just me). But once I got it installed… well, I'm impressed.

The thing about Gentoo is, it compiles everything from source. It's like Debian, in that it fetches all dependencies and installs those… but it's all source. So it's not exactly fast. But because everything is compiled, and because you set up compiler flags (CFLAGS) specific to your machine, what you get is incredibly optimized for your own machine. This 1.6GHz machine simply flies. And the memory usage stays low.

I'd like to use it for my server… but I can't really take the server down at this point when it's making both my mom and myself money. But what a great system… I only wish I'd used it for the mail server at work.

From the blog

Posted 2004-03-30
Cgiapp: A PHP Class

After working on some OO classes yesterday for an application backend I'm developing for work, I decided I needed to create a BREAD class to make this simpler. You know, Browse-Read-Edit-Add-Delete.

At first, I figured I'd build off of what I'd done yesterday. But then I got to thinking (ah, thinking, my curse). I ran into the BREAD concept originally when investigating CGI::Application; a number of individuals had developed CGI::Apps that provided this functionality. I'd discarded them usually because they provided more functionality than I needed or because they introduced more complexity than I was willing to tackle right then.

But once my thoughts had gone to BREAD and CGI::App, I started thinking how nice it would be to have CGI::Application for PHP. And then I thought, why not? What prevents me from porting it? I have the source…

So, today I stayed home with Maeve, who, on the tail end of an illness, evidently ran herself down when at daycare yesterday, and stayed home sleeping most of the day. So, while she was resting, I sat down with a printout of the non-POD code of CGI::App and hammered out what I needed to do. Then, when she fell asleep for a nap, I typed it all out and started testing. And, I'm proud to say, it works. For an example, visit my development site to see a very simple, templated application in action.

From the blog

Posted 2004-03-29
B. Gates: Open Source Programmer?

I just read coverage of a panel of programming luminaries on Salon; the topic of discussion was about the state of programming. In the course of the discussion, the subject of Open Source came up. Several of the luminaries — which included architects of the Mac OS and Windows, as well as others — derided the community for basically reinventing the wheel, and wheels that need to be re-thought entirely anyways. One questioned, "Why is the idealism just about how the code is shared — what about idealism about the code itself?"

Andy Hertzfeld (who helped develop the original Mac OS) was sitting on the panel, and jumped in. He has been working with Eazel and Chandler in recent years, and thus has an inside view of open source. His initial comment: "It's because they want people to use the stuff!" Basically, they program Windows- or Mac-like interfaces because then people will be willing to try it out. They program office suites because people "need" an office suite to be productive. Such offerings hook them into the OSS movement.

Another participant, Dan Bricklin (of VisiCalc, a pioneering spreadsheet program) shared an anecdote from Bill Gates. Evidently, Gates gave an interview (with Lammers — look up this person) in which he explained that his work on MS's BASIC compiler was done by looking at how other programmers had accomplished the task. In his own words, "The best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating systems."

So basically, Gates was an early adopter of OSS methodologies… Interesting to see that today he's so protective of MS code. Guess money might do that to you.

From the blog

Posted 2004-03-28
POD for PHP

I was lamenting at work the other day that now that I've discovered OO and templating with PHP, the only major feature missing for me is a way to easily document my programs. I'm a big fan of perl's POD, and use it fairly extensively, even for simple scripts — it's a way to provide a quick manual without needing to worry too much about how to format it.

So, it hit me on the way home Friday night: what prevents me from using POD in multiline comments of PHP scripts? I thought I'd give it a try when I got home.

First I googled for 'POD for PHP', and found a link to perlmongers where somebody recounted seeing that exact thing done, and how nicely it worked.

Then I tried it… and it indeed worked. So, basically, I've got all the tools I love from perl in PHP, one of which is borrowed directly from the language!
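A minimal sketch of the trick (the file name and contents here are hypothetical): the POD sits inside a PHP multiline comment, so the PHP interpreter never sees it, while a POD formatter such as pod2text can still extract it from the file.

```php
<?php
/*

=pod

=head1 NAME

greet.php - a trivial script documented with POD

=head1 DESCRIPTION

The POD lives inside a PHP multiline comment, so PHP ignores it,
while a POD formatter can still pick it up from the file.

=cut

*/

function greet($name)
{
    return "Hello, $name!";
}

echo greet('world'), "\n";
```

Running a POD tool over such a file renders the NAME and DESCRIPTION sections just as it would for a perl script, and PHP executes the code untouched.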

From the blog

Posted 2004-03-28
Scrap that. We're gonna' use PHP

I've been researching and coding for a couple months now with the decision that I'd rewrite the family website/portal using mod_perl with CGI::Application. I still like the idea, but a couple things recently have made me rethink it.

For starters, the perl DBI is a bit of a pain to program. At work, I've become very accustomed to using PEAR's DB library, and while it's in many ways derived from perl's DBI, it's much simpler to use.

Then there's the whole HTML::Template debacle. There's several ways in which to write the templates, but they don't all work in all situations, and, it seems they're a bit limited. We've started using PHP's Smarty at work, and it's much more intuitive, a wee bit more consistent, and almost infinitely more extendable. I could go the Template::Toolkit route for perl, but that's almost like learning another whole language.

Then, there's the way objects work in perl versus PHP. I've discovered that PHP objects are very easy and very extendable. I wouldn't have found them half as easy, however, if I hadn't already been doing object oriented programming in perl. One major difference, however, is how easy it is to create new attributes on the fly, and the syntax is much easier and cleaner.

Add to that the fact that if you want to dynamically require modules in perl, you have to go through some significant, often insurmountable, hoops. So you can't easily have dynamic objects of dynamically defined classes. In PHP, though, you can require_once or include_once at any time without even thinking.

The final straw, however, was when I did my first OO application in PHP this past week. I hammered it out in a matter of an hour or so. Then I rewrote it to incorporate Smarty in around an hour. And it all worked easily. Then I wrote a form-handling library in just over two hours that worked immediately, and it made it possible for me to write a several-screen application in about an hour, complete with form, form validation, and database calls. Doing the same with CGI::Application took me hours, if not days.

So, my idea is this: port CGI::Application to PHP. I love the concept of CGI::App: it's exactly how I want to program, and I think it's solid. By porting it to PHP, I automatically have session and cookie support, and database support is only a few lines of code away when I use PEAR; I'll add Smarty as the template toolkit of choice, but make it easy to override the template methods to utilize other engines. I get a nice MVC-style application template, but one that makes developing quickie applications truly a snap.

This falls under the "right-tool-for-the-job" category; perl, while a wonderful language, and with a large tradition as a CGI language, was not developed for the web as PHP was. PHP just makes more sense in this instance. And I won't be abandoning perl by any stretch; I still use it daily at work and at home for solving any number of tasks from automated backups to checking server availability to keeping my ethernet connection alive. But I have real strengths as a PHP developer, and it would be a shame not to use those strengths with our home website.

From the blog

Posted 2004-03-19
PHP Class Tips

We're starting to use OO in our PHP at work. I discovered when I started using it why I'd been having problems wrapping my head around some of the applications I've been programming lately: I've become accustomed in Perl to using an OO framework. Suddenly, programming in PHP was much easier.

There are a few things that are different, however. It appears that you cannot store objects in object attributes and then reference them inline, like so:

$object->db->query($sql)

PHP doesn't like that kind of syntax (at least not in versions 4.x). Instead, you have to pass a reference to the object in the attribute, then set a temporary variable to that reference whenever you wish to use it:

$object->db =& $db;
...
$db = $object->db;
$res = $db->query($sql);

What if you want to inherit from another class and extend one of its methods? In other words, you want to use the method from the parent class, but you want to do some additional items with it? Simple: use the parent:: keyword:

function method1()
{
    /* do some pre-processing */

    parent::method1(); // Do the parent's version of the method

    /* do some more stuff here */
}
Update:

Actually, you can reference objects when they are attributes of another object; you just have to define the references in the correct order:

$db =& DB::connect('dsn');
$this->db =& $db;
...
$res = $this->db->query($sql);

I've tested the above syntax with both PEAR's DB and with Smarty, and it works without issue.

From the blog

Posted 2004-03-11
Making a RAID array from the command line

Last night, I created my first RAID array from the commandline. It was quite simple, I discovered.

  1. Create your partitions using fdisk. Remember, primary partitions must be created before extended partitions.
  2. Look in /proc/partitions and note the new partition IDs.
  3. Edit /etc/raidtab and create a new RAID array. If unsure of the syntax, look up the Linux Software RAID HOWTO for more details.
  4. Type mkraid /dev/md?, where ? is the id of the RAID device you just entered in /etc/raidtab.
  5. Format the new RAID device with your favorite filesystem, assign it a mount point, and start using it!

I was impressed with how easy it was; the choices that the Anaconda installer presents for creating a RAID array made it seem like the underlying process must be difficult, when in fact it is of almost the same complexity, if not easier.

From the blog

Posted 2004-02-16
Learn something new everyday

Linux.com has had a running series on CLI commands for Newbies. Most of it has been very basic, but there are still a few gems within. For instance, today I was introduced to apropos and whatis. Give a search term to the former, and it will list all programs in which the search term is found in the manpages; give a program name to the latter, and it will tell you which man page addresses it.

From the blog

Posted 2004-02-09
Random thoughts of violence

I began the day with sudden images and body remembrances of an escrima or arnis drill Morgan used to teach during weapons class years ago. It utilizes a short stick or wakizashi in one hand, the other hand free, and consists of five steps on each side; when you finish one side, you do the other, because the drill is done with a partner.

I haven't done the drill for years, but I remembered all the nuances, all the little tips and secrets Morgan showed me over the year or two he continued teaching it. And I wanted desperately to do it with someone right that moment as I was getting out of bed so that I wouldn't lose it. But, of course, I had no such opportunity. The movement is still tracing its way through my body.

And this evening, we watched Fight Club. I still remember watching it in the theater, and how it affected me then — and it affects me in many of the same ways now. There's some cultural references I 'get' more now — references to Ikea, and now I understand groups and guided meditation and therapy better. And there's new references, too — the image of the buildings falling is much different now that the WTC buildings have been viewed collapsing.

But the message, the message is still the same, still present. Do things own us, or do we own them? What do I most want to do before I die, and am I doing it? These are big questions for a film to raise, and I'm still surprised that Fight Club remains such a huge hit and success because of them. And they're not necessarily buried in the film — though I can see how many people might simply glorify the violence in the film, and pass over the message. I find the violence is a part of the message — can you teach yourself to live with pain, that pain is transient and ceases? can you learn to stop living in fear?

So my day was marked by violence, beginning and end. The middle was all consumer fluff. And hedonism. But hey, that's okay, too.

From the blog

Posted 2004-02-05
HTML::FillInForm

The CGI::Application::ValidateRM module utilizes HTML::FillInForm to fill in values in the form if portions did not pass validation. Basically, it utilizes HTML::Parser to go through the form, find the elements, and match them to values. It's used because the assumption is that you've built your form into an HTML::Template, and this way you don't need to put program logic into the form.

Seems another good candidate for using FillInForm would be to populate a form with values grabbed from a database… I should look into that as well!

From the blog

Posted 2004-02-05
HTML::Template notes

I've used HTML::Template a little, mainly in the Secret Santa project I did this past Christmas for my wife's family. One thing I disliked was using the normal syntax: <TMPL_VAR NAME=IMAGE_SRC> — it made looking at it difficult (it wasn't always easy to tell what was an HTML tag, what was plain text, and what was HTML::Template stuff), and it made it impossible to validate my pages before they had data.

Fortunately, there's an alternate syntax: wrap the syntax in HTML comments: <!-- TMPL_VAR NAME=IMAGE_SRC --> does the job. It uses more characters, true, but it gets highlighted different than HTML tags, as well, and that's worth a lot.

And why do I have to say "NAME=" every time? That gets annoying. As it turns out, I can simply say: <!-- TMPL_VAR IMAGE_SRC -->, and that, too will get the job done.

Finally, what about those times when I want to define a template, but have it broken into parts, too? Basically, I want HTML::Template to behave a little like SSI. No worries; there's a TMPL_INCLUDE tag that can do this: <!-- TMPL_INCLUDE NAME="filename.tmpl" -->.

From the blog

Posted 2004-02-05
CGI::Application::ValidateRM and Data::FormValidator

I've been reading a lot of posts lately on the CGI::App mailing list about using CGI::Application::ValidateRM (RM == Run Mode); I finally went and checked it out.

CGI::App::ValRM uses Data::FormValidator in order to do its magic. Interestingly, D::FV is built much like how I've built our formHandlers library at work: you specify a list of required fields, and a list of fields that need to be validated against criteria, then provide the criteria. It's exactly how I would have done our libraries had we been working in perl, supplying the constraint as a regexp or anonymous sub in a hashref for the field.

Anyways, it looks like the combination of CGI::App::ValRM with CGI::App could greatly simplify any form validations I need to do on the site, which will in turn make me very happy!

From the blog

Posted 2004-02-04
Design Ideas

I had some success last night with the My::Portal CGI::Application superclass I'm building — I actually got it working with CGI::Wiki::Simple (after I debugged the latter to fix some delegation issues!). Now that I know the "proof-of-concept" works, I'm ready to start in on some other issues.

The first issue is: how can I specify different directories for different applications to search for templates, while retaining the default directory so that the superclass can build the final page? I could always simply keep all templates in a single directory and simply prefix them, but that seems inelegant, somehow. I'll need to explore how HTML::Template integration works with CGI::App.

Second, and closely related: how do I want it to look, in the end? I could see keeping the design we have — it's clean, simple, and yet somehow functionally elegant. Okay, I'm exaggerating — it's your standard three-column with header and footer. But it goes with the idea of blocks of content. I need to think about that.

I saw a design idea for a WikiWikiWeb today, though, that totally changed my ideas of how a Wiki should look. I hadn't been to Wikipedia for some time, but a Google link to Gaston Julia showed up on Slashdot as it shut down a site in Australia, and so I visited it. I like the new design — it separates out the common links needed into a nice left menu, and puts a subset of that at the top and bottom of the main column as well, using nice borders to visually separate things. I much prefer it to PhpWiki's default style, as well as to anything else I've really seen so far relating to Wiki layout.

From the blog

Posted 2004-02-04
Fun with Find

I've had occasion to need to grab a specific set of files from a large directory — most recently, I needed to grab some specific access logs from our Apache logfiles at work.

Enter find.

I needed to get all files newer than a specific date, and with the pattern 'sitename-access_log.timestamp.gz'. I then needed to tar up these files and grab them for processing. So, here's what I did:

  • The -newer filename tells find to locate files newer than filename.
  • The -regex flag tells find to locate files matching the regular expression. The regex that find uses is a little strange, however, and didn't follow many conventions I know; for one thing, it's assumed that the pattern you write will match against the entire string, and not just a portion of it. What I ended up using was -regex '.*access_log.*gz', and that worked.
  • The -printf flag tells find to format the printing. This is useful when using the output of find in another program. For instance, tar likes a list of filenames… so I used -printf "%p ", which separated each filename with a space.

I then backticked my full find statement and used it as the final argument to a tar command; voila! instant tar file with the files I need!

From the blog

Posted 2004-02-01
conditional use in perl

I've been struggling with how to use modules at runtime instead of compile time (I even wrote about this once before). I finally figured it out:

my $module = "ROX::Filer";
eval "use $module";
die "couldn't load module: $@\n" if $@;

Now I just need to figure out how to create objects from dynamic module names…!

Update: Creating objects from dynamic names is as easy as dynamically loading the module at run-time:

my $obj = $module->new();

From the blog

Posted 2004-01-27
Where's that module?

One continual pain for me with perl is when I need to try to find the location of a specific module on my filesystem so that I can examine it myself; I end up first having to find out what my @INC path is, then having to dig through it until I find the module. Fortunately, I'm not the only one; somebody posted a solution to this problem on Perl Monks:

Updated: The original listing presented didn't work! The following one, garnered from a comment to the original PM post, does, and is what I'm now using.

#!/usr/bin/perl -w
use strict;

use File::Spec::Functions qw/catfile/;

my @loaded = grep {
    eval "require $_";
    !$@ ? 1 : ($@ =~ s/\(\@INC contains: \Q@INC\E\)//, warn("Failed loading $_: $@"), 0);
} @ARGV;

my @pm = map catfile(split '::') . (/\.pm\z/ ? '' : '.pm'), @loaded;

print "@INC{@pm}\n";
__END__

=pod

=head1 NAME

whichpm - lists real paths of specified modules

=head1 SYNOPSIS

  editor `whichpm Bar`

=head1 DESCRIPTION

Analogous to the UN*X command which.

=cut

Just place it in your $PATH and let 'er rip!

From the blog

Posted 2004-01-25
Class::DBI

I was reading a thread on the cgiapp mailing list today from several of the core developers about developing a book on CGI::Application. In it, several mentioned that it might/should center around CGI::App and a handful of oft-used modules. One of those modules is Class::DBI.

I took a gander at Class::DBI over at CPAN, and it looks absolutely amazing, and at the same time perhaps too abstract. Basically, you create a number of packages, one for each table you'll be using in your application, and one to establish your basic connection. Then, each package creates an object instance of the connection, and defines a number of properties: the name of the table, the columns you'll be using, and then the relations it has to other tables ( has_a( col_name => 'Package::Name'); has_many( col_name => 'Package::Name'); might_have(col_name => 'Package::Name'); ) etc.

Then you use the module/packages you need in your script, and you can then use object-oriented notation to do things like insert rows, update rows, search a table, select rows, etc. And it looks fairly natural.

I like the idea of data abstraction like this. I see a couple issues, however:

  1. I don't like the idea of one package per table; that gets so abstract as to make development come to a stand-still, especially during initial development. However, once development is sufficiently advanced, I could see doing this, particularly for large projects; it could vastly simplify many regular DBI calls.
  2. I like using SQL. If I need to debug why something isn't working when I interact with the database, I want to have absolute control over the language. Abstracting the SQL means I don't have that fine-grained control that helps me debug.

So, for now, I'll stick with straight DBI…. but this is an interesting avenue to explore.

From the blog

Posted 2004-01-25
Ctrl-S and Ctrl-Q in *nix systems

I just ran into this not long ago, and wish I'd discovered it years ago. Basically, Ctrl-S suspends terminal output, while Ctrl-Q resumes it. This is useful when in g/vim or screen and you manage to lock up your application because you accidentally hit Ctrl-S when reaching for another key combo.

From the blog

Posted 2004-01-24
use autouse ... or not

Due to my cursory reading in the Perl Cookbook, 2nd Edition, earlier this week, I've been investigating the autouse pragma, to see if it will indeed solve my issue of wanting to use different modules based on the current situation. Unfortunately, I cannot find any documentation on it in perldoc.

I remember seeing something about wrapping this stuff into a BEGIN block, but that would require knowing certain information immediately, and I might need the code to work through some steps before getting there.

Fortunately, this node just appeared on Perl Monks today, and I got to see other ways of doing it:

  • The if module lets you do something like use if $type eq 'x', "Some::Module"; However, $type must be known at compile time (i.e., it's based on system info or on @ARGV); this probably wouldn't work in a web-based application.
  • Use require and import instead: if ($type eq 'x') { require Some::Module; Some::Module->import if Some::Module->can("import"); } If your module doesn't export anything, you can even omit the call to import.
  • Use an eval: if ($type eq 'x') { eval "use Some::Module"; } This gets around the import problem, but could possibly run into other compile time issues.
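For a runnable illustration of the require-plus-import variant, the core Data::Dumper module can stand in for whatever situational module you'd actually want:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Decide which module to load at runtime; Data::Dumper (a core module)
# stands in for the module the situation actually calls for.
my $type = 'dump';
if ($type eq 'dump') {
    require Data::Dumper;
    Data::Dumper->import if Data::Dumper->can('import');
}

# Dumper() is now available, even though nothing was loaded at compile time
print Dumper([1, 2, 3]);
```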

So, basically, I already had the tools to do the job; just needed to examine the problem more.

From the blog

Posted 2004-01-23
More CGI::App research... Try the manual!

So, I'm a bit of an idiot… it's been so long since I looked at CGI::App, and yet I felt I had such a grasp on it, that I overlooked the obvious step: look at the manual!

In particular, there's a whole series of methods that are used to tailor CGI::App to your particular needs, and these include cgiapp_init(), cgiapp_prerun(), and cgiapp_postrun().

  • cgiapp_init() is used to perform application-specific initialization behaviour, and is called immediately before the setup() method. It can be used to load settings from elsewhere; if it were defined only in a superclass from which other modules inherited, it would then provide common settings for all modules.
  • cgiapp_prerun() is called immediately before the selected run-mode. If it were defined only in your superclass, you could perform items such as authorization or even form validation; this would then be standard for all your applications. (You can use the $self->prerun_mode('mode') call to override the selected run-mode, for instance, thus allowing you to redirect to a different mode if a user isn't permitted there.)
  • cgiapp_postrun() is called after the run-mode has returned its output, but before HTTP headers have been generated or anything is sent to the web browser. Again, if defined in a superclass, it means that you could then place the run-mode output in a specific place within a larger template, and even call other routines to fill in other parts of the main template. You could even check to see if certain parameters were passed to the page, and change the type of output you send back (XML, PDF, image, etc.), allowing you to have a common query element that changes the output type (e.g., a 'print' parameter that returns a PDF or a stripped-down template).

In addition, you could specify in the superclass that you're using CGI::Simple for the query object (using the cgiapp_get_query method), or you could rewrite the load_tmpl() method to use Template::Toolkit or some other templating system, etc.
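Putting those hooks together, a superclass along the lines described might look like this sketch (assumes CGI::Application is installed; session_is_valid(), build_sidebar(), and the template name are made-up placeholders):

```perl
# Sketch of a common superclass using the hooks described above.
package My::App;
use base 'CGI::Application';

# Called before the selected run-mode: a natural home for auth checks.
sub cgiapp_prerun {
    my $self = shift;
    $self->prerun_mode('login') unless $self->session_is_valid;
}

# Called after the run-mode returns, before headers are sent: wrap the
# run-mode's output in a site-wide template with sidebar content.
sub cgiapp_postrun {
    my ($self, $output_ref) = @_;
    my $template = $self->load_tmpl('site.tmpl');
    $template->param(
        content => $$output_ref,
        sidebar => $self->build_sidebar,
    );
    $$output_ref = $template->output;
}

1;
```

Each application module would then simply use base 'My::App' and inherit this behaviour.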

Doesn't look so crazy anymore…

From the blog

Posted 2004-01-23
CGI::Application Research

I've been wanting to redevelop my home website for some time using CGI::Application. The last time I rewrote it from PHP to perl, I developed something that was basically a subset of the things CGI::App does, and those things weren't done nearly as well.

The problem I've been running into has to do with having sidebar content, and wanting to run basically a variety of applications. I want to have a WikiWikiWeb, a photo gallery, some mail forms, and an article database/blog; CGI::App-based modules for each of these all exist. But I want them all to utilize the same sidebar content, as well — and that sidebar content may vary based on the user.

My interest got sparked by this node on Perl Monks. The author tells of an acquaintance who goes by the rule that a CGI::App should have 10-12 states at most; more than that, and you need to either break it apart or rethink your design. And all CGI::Apps inherit from a common superclass, so that they share the same DB connections, templates, etc.

So, I've been investigating this problem. One node on PM notes that his ISP uses CGI::App with hundreds of run modes spread across many applications; they created a module for session management and access control that calls use base 'CGI::Application'; each application then calls use base 'Control', and they all automatically have that same session management and access, as well as CGI::Application.

Another node mentions the same thing, but gives a little more detail. That author writes a module per application, each inheriting from a super class: UserManager.pm, Survey.pm, RSS.pm, Search.pm, etc. You create an API for that super class, and each CGI::App utilizes that API to do its work.

This also seems to be the idea behind CheesePizza, a CGI::App-based framework for building applications. (All pizzas start out as cheese pizzas; you simply add ingredients.) The problem with that, though, is that I have to learn another framework on top of CGI::App, instead of intuiting my own.

But how do I write the superclass? Going back to the original node that sparked my interest, I found a later reply that described how you do this. The big key is that you override the print method — this allows you to customize the output, and from here you could call functions that create your sidebar blocks, and output the content of the CGI::App you just called in a main content area of your template.

Grist for the mill…

From the blog

Posted 2004-01-23
robots.txt

One thing I've wondered about is the syntax of the robots.txt file, where it's placed, and how it's used. I've known that it is used to block spiders from accessing your site, but that's about it. I've had to look into it recently because we're offering free memberships at work, and we don't want them indexed by search engines. I've also wondered how we can exclude certain areas, such as where we collate our site statistics, from these engines.

As it turns out, it's dead simple. Create a robots.txt file in your document root; the syntax is as follows:

User-agent: *
Disallow: /path/
Disallow: /path/to/file

The User-agent can specify specific agents or the wildcard; there are so many spiders out there, it's probably safest to simply disallow all of them. The Disallow line should have only one path or name, but you can have multiple Disallow lines, so you can exclude any number of paths or files.

From the blog

Posted 2004-01-23
More SSH tips: Tunnelling

I wrote up a short tutorial today on the IT wiki about SSH tunneling. What I didn't know is that you can start a tunnel after you've already ssh'd to another machine. Basically, you:

  • Press Enter
  • Type ~C

and you're at an ssh> prompt. From there, you can issue the tunnel command of your choice: -R7111:localhost:22, for instance.
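For comparison, the same reverse tunnel can be requested up front when opening the connection (the host name and ports here are only examples):

```shell
# Request the reverse tunnel at connection time rather than via the
# ~C escape: remote port 7111 is forwarded back to local port 22.
ssh -R 7111:localhost:22 user@remote.example.com
```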

From the blog

Posted 2004-01-23
IT hiring principles

I was just reading an article about the Dean campaign's IT infrastructure, and there's an interesting quote from their IT manager, Harish Rao:

"I believe in three principles", he said. "First I always make sure I hire people I can trust 100%. Second, I always try to hire people who are smarter than I am. Third, I give them the independence to do as they see fit as long as they communicate about it to their other team members. We've had a lot of growing pains, a lot of issues; but we've been able to deal with them because we have a high level of trust, skill and communication."

I know for myself that when I (1) don't feel trusted, and/or (2) am not given independence to do what I see as necessary to do my job, I don't communicate with my superiors about my actions, and I also get lazy about my job because I don't feel my work is valued.

Fortunately, I feel that in my current work situation, my employers followed the same principles as Rao, and I've felt more productive and appreciated than I've felt in any previous job.

From the blog

Posted 2004-01-22
PHP standards ruminations

I've been thinking about trying to standardize the PHP code we do at work. Rob and I follow similar styles, but there are some definite differences. It would make delving into each other's code much easier if we both followed some basic, agreed-upon guidelines.

One thing I've been thinking about is function declarations. I find that I'm often retooling a function to make it more general, and in doing so need to decrease or increase the number of arguments it takes. This, of course, breaks compatibility.

So I propose that we have all functions take two arguments: $data and $db. $data is an associative array of named arguments that can be pulled into scope with PHP's extract(). To change the effective number of arguments, you simply add keys with sensible defaults, or return meaningful errors when required keys are missing.
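A sketch of what that convention might look like in practice (the function and field names here are invented for illustration):

```php
<?php
// Hypothetical example of the $data/$db convention: named arguments
// arrive in an associative array, so adding one later doesn't break callers.
function add_event($data, $db)
{
    // Merge in defaults, then pull the keys into local scope
    $defaults = array('title' => '', 'date' => null, 'location' => 'TBA');
    extract(array_merge($defaults, $data));

    if ($date === null) {
        return 'Error: missing required argument "date"';
    }

    // ... use $db to insert the event ...
    return true;
}

$db = null;  // stand-in for a real connection
// Existing callers keep working even after 'location' is introduced:
$result = add_event(array('title' => 'Meeting', 'date' => '2004-02-01'), $db);
```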

Another thought going through my mind deals with the fact that we reuse many of our applications across our various sites, and also export some of them. I think we should try and code the applications as functional libraries or classes, and then place them somewhere in PHP's include path. We can then have a "demo" area that shows how to use the libraries/classes (i.e., example scripts), and to utilize a given application, we need simply include it like: include 'apps/eventCalendar/calendar.inc';. This gives us maximum portability, and also forces us to code concisely and document vigorously.

I was also reading on php.general tonight, and noticed some questions about PHP standards. Several people contend that PEAR is becoming the de facto standard, as it's the de facto extension library. In addition, because it is becoming a standard, there's also a standard for documenting projects: phpDocumentor. The relevant links are:

From the blog

Posted 2004-01-21
Making RCS a little easier...

One thing I noticed today when using RCS is that it isn't terribly user friendly — you need to check out a file to make edits. Often, I make edits first, and then want to commit my changes.

So I wrote a wrapper script called revise. It makes a temporary copy of the file you've been editing, checks it out of RCS with locking, makes it writeable, moves the temporary copy to the permanent name, checks it in and unlocks it (which prompts for a log message), and then makes the file writeable for the user and group again. The script is outlined here:

#!/bin/bash
FILE=$1
cp "$FILE" "$FILE.new"   # preserve the edited copy
co -l "$FILE"            # check out from RCS with a lock
chmod u+w "$FILE"
mv "$FILE.new" "$FILE"   # restore the edited copy
ci -u "$FILE"            # check in and unlock (prompts for a log message)
chmod ug+w "$FILE"

Being the ROX-Filer centric person I am, I also wrote a quick perl script called rox-revise that I can then put in my SendTo menu. It parses the file's path, changes to that directory, and then calls the revise script on the filename, from within a terminal. This script follows:

#!/usr/bin/perl -w
use strict;

use vars qw/$path $file $TERMCMD $REVISE $ZENITY/;

# Configurable variables
$TERMCMD = "myTerm";    # What terminal command to use; must be xterm compliant
$REVISE  = "revise";    # What command to use to revise (i.e. rcs ci) the file
$ZENITY  = "zenity";    # The zenity or dialog or xdialog command to use

# Grab the filename from the command line
$path = shift;
$file = $path;

# If no file given, raise a dialog and quit
if (!$path || ($path eq '')) {
    system(
        $ZENITY, 
        '--title=Error', 
        '--warning', 
        "--text=No path given to $0; rox-revise quit!"
    );
    exit 0;
}

# Get the path to the file and switch to that directory
if ($path =~ m#/#) {
    $path =~ s#^(.*)/.*?$#$1#;
    if ($path !~ m#^/#) { $path = "./$path"; }
    chdir $path or die "$path not found!\n";
} else {
    # Or else assume we're in the current directory
    $path = './';
}

# Get the filename
$file =~ s#^.*/(.*?)$#$1#;

# Execute the revise statement
my $failure = system($TERMCMD, '-e', $REVISE, $file);
if ($failure) {
    # on failure, raise a dialog
    system(
        $ZENITY, 
        '--title=Error', 
        '--warning', 
        "--text=Unable to revise $file"
    );
}

1;

Now I just need to check out Subversion, and I can have some robust versioning!

From the blog

Posted 2004-01-21
SSH tips and tricks

In trying to implement some of the hacks in Linux Server Hacks, I had to go to the ssh manpage, where I discovered a number of cool tricks.

  1. In order to get key-based authentication (i.e., passwordless) working, the $HOME/.ssh directory must be mode 0700, and all files in it must be mode 0600. Once that's set up properly, key-based authentication works perfectly.
  2. You can have a file called config in your $HOME/.ssh directory that specifies user-specific settings for using SSH, as well as a number of host-specific settings:
  • Compression yes turns on compression
  • ForwardX11 yes turns on X11 forwarding by default
  • ForwardAgent yes turns on ssh-agent forwarding by default
  • Host-based settings go from one Host keyword to the next, so place them at the end of the file. Do it in the following order:
```apacheconf
Host nickname
HostName actual.host.name
User username_on_that_host
Port PortToUse
```

This means, for instance, that I can ssh back and forth between home and work using the same key-based authentication and the same ssh-to script ([more below](#ssh-to)) I use for work servers -- because I don't have to specify the port or the username.

I mentioned a script called ssh-to earlier. This is a neat little hack from the server hacks book as well. Basically, you have the following script in your path somewhere:

#!/bin/bash
ssh -C `basename $0` "$@"

Then, elsewhere in your path, you do a bunch of ln -s /path/to/ssh-to /path/to/$HOSTNAME, where $HOSTNAME is the name of a host to which you ssh regularly; this is where specifying a host nickname in your $HOME/.ssh/config file can come in handy. Then, to ssh to any such server, you simply type $HOSTNAME at the command line, and you're there!

From the blog

Posted 2004-01-21
RCS quickstart

Gleaned from Linux Server Hacks

  • Create an RCS directory
  • Execute a ci -i filename
  • Execute a co -l filename and edit as you wish.
  • Execute a ci -u filename to check in changes.

The initial time you check out the copy, it will be locked, and this can cause problems if someone else wishes to edit it; you should probably edit it once and put an RCS keyword in a comment somewhere at the top or bottom:

$Id$

and then check it back in with the -u flag to unlock it.

From the blog

Posted 2004-01-20
Linux Server Hacks

I stopped at Borders in downtown Burlington on New Year's Eve day, and found a book called Linux Server Hacks. I loved it immediately, but I wasn't quite willing to shell out $25 for such a slim volume, even if it did have many tidbits I could immediately use.

When I told my co-worker, Rob, about it, it turned out he already had the book, and brought it in to work for me to borrow the next day.

My nose has barely been out of it since. I've done such things as:

  • Create personal firewalls for my home and office machines. I've always used scripts for this, but the hacks for iptables showed the basics of how they work, and I've now got nice robust firewalls that are very simple scripts. To make them even more user-friendly, I borrowed some syntax from the various /etc/init.d scripts so that I can start, stop, and reload the firewall at will.
  • I don't use perl at the command line much, even though I've long known the -e switch; it just seems too cumbersome. However, combine it with the -p and/or -i switch, and you can use perl as a filter on globbed files!
  • I know much more about SSH now, and am using ssh-agent effectively at work now to bounce around servers and transfer groups of files between servers (often by piping tar commands with ssh).
  • A script called movein.sh turned my life around when it came to working on the servers. I now have a .skel directory on my work machine that contains links to oft-used configuration files and directories, as well as to my ~/bin directory; this allows me to then type movein.sh server and have all these files uploaded to the server. I can now use vim, screen, and other programs on any system we have in exactly the manner I expect to.
  • I've started thinking about versioning more, and have plans to put into place a subversion repository to store server configs, database schema, and development projects so we won't make as many mistakes in the future — at least not ones we can't rollback from.
  • I rewrote a shell script in perl that was originally intended for IP takeover, and have been utilizing it to determine if and/or when a server we've reinstalled goes down.
  • A bunch of Apache and MySQL tips are included, including mod_rewrite hacks, how to make your directory indexes show full file names, and more; as well as how to monitor your mysql processes and, if necessary, kill them. I'm also very interested in how to use MySQL as an authentication backend for an FTP daemon — it could give us very fine-grained control of our webserver for editors.

And that's just the tip of the iceberg. All in all, I highly recommend the book — though most likely as a book to check out from the library for a few weeks, digest, put into practice, and return. The hacks are so damn useful, I've found that after using one, I don't need to refer to that one ever again. But that's the point.

From the blog

Posted 2004-01-20
Perl Cookbook, 2nd Edition

Tonight was Papa night, which meant that I got to look after Maeve while Jen worked late doing a group at work. Last week, Maeve and I established that Papa Night would always include going to the bookstore, which means Barnes & Noble in South Burlington.

Last week, Maeve was perfectly content to look at books by herself, and didn't want me interfering, so I decided this week to grab a book for myself to peruse while she was busy. It didn't work as I intended — Maeve saw that I wasn't paying full attention to her, and then demanded my attention — but I was able to look through some of the new items in the second edition of The Perl Cookbook.

Among them were:

  • Setting up both an XML-RPC server and client, using SOAP::Lite

  • Setting up both a SOAP-RPC server and client, using SOAP::Lite and other modules; I could have used this in ROX::Filer to communicate with ROX instead of using the filer's RPC call.

  • Better coverage of DBI (it actually covered it!):

    • When you expect only a single row, this is a nice way to grab it:

      $row = $dbi->selectrow_arrayref($statement);  # or selectrow_hashref($statement)
      
    • This is a great way to grab a bunch of columns from a large resultset:

      $results = $dbi->selectall_hashref($sql, $key_field);  # second arg names the key column
      foreach $record (keys(%{$results})) {
          print $results->{$record}{fieldname};
      }
      
    • This one is nice for a large resultset from which you only want one column:

      $results = $dbi->selectcol_arrayref($sql);
      foreach $result (@{$results}) {
          print $result;
      }
      
    • If you need to quote values before inserting them, try:

      $quoted = $dbi->quote($unquoted);
      $sql = "UPDATE table SET textfield = $quoted";
      
    • If you need to check for errors, don't check with each DBI call; instead, wrap all of them in an eval statement:

      eval {
          $sth = $dbi->prepare($sql);
          $sth->execute;
          while ($row = $sth->fetchrow_hashref) {
              ...
          }
      };
      if ($@) {
          print $DBI::errstr; 
      }
      
  • Coverage of templating, including Text::Template (very interesting!)

  • Whole new chapters on mod_perl and XML (including DOM!) which I didn't really even get to peruse.

  • autouse pragma: if you use:

    use autouse 'Module::Name' => qw(function_name);
    

    perl will load the module at runtime instead of compile time; basically, the module is loaded only when it's actually needed (i.e., when one of the listed functions is called). It's a good way to keep down the bloat — I should use this with librox-perl, and possibly with CGI::App.
