Search results
Posted 2019-01-24
Registering Module-Specific Routes in Expressive
In Expressive, we have standardized on a file named config/routes.php to contain all your route registrations. A typical file might look something like this:
declare(strict_types=1);

use Zend\Expressive\Csrf\CsrfMiddleware;
use Zend\Expressive\Session\SessionMiddleware;

return function (
    \Zend\Expressive\Application $app,
    \Zend\Expressive\MiddlewareFactory $factory,
    \Psr\Container\ContainerInterface $container
) : void {
    $app->get('/', App\HomePageHandler::class, 'home');

    $app->get('/contact', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Contact\ContactPageHandler::class,
    ], 'contact');
    $app->post('/contact', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Contact\ProcessContactRequestHandler::class,
    ]);
    $app->get(
        '/contact/thank-you',
        App\Contact\ThankYouHandler::class,
        'contact.done'
    );

    $app->get(
        '/blog[/]',
        App\Blog\Handler\LandingPageHandler::class,
        'blog'
    );
    $app->get('/blog/{id:[^/]+\.html}', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Blog\Handler\BlogPostHandler::class,
    ], 'blog.post');
    $app->post('/blog/comment/{id:[^/]+\.html}', [
        SessionMiddleware::class,
        CsrfMiddleware::class,
        App\Blog\Handler\ProcessBlogCommentHandler::class,
    ], 'blog.comment');
};
and so on.
These files can get really long, and organizing them becomes imperative.
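One way to break such a monolith apart, sketched here as an illustration rather than as this post's specific solution, is to let each module register its own routes via a delegator factory on the Application service; the class and route below are hypothetical:

    namespace App\Blog;

    use Psr\Container\ContainerInterface;
    use Zend\Expressive\Application;

    class RoutesDelegator
    {
        public function __invoke(
            ContainerInterface $container,
            string $serviceName,
            callable $callback
        ) : Application {
            /** @var Application $app */
            $app = $callback();

            // Only the blog module's routes live here:
            $app->get('/blog[/]', Handler\LandingPageHandler::class, 'blog');

            return $app;
        }
    }

The module's ConfigProvider would then attach this class under the dependencies.delegators configuration for Zend\Expressive\Application, leaving config/routes.php to hold only application-wide routes.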
Posted 2018-12-05
Creating Exception types on-the-fly in modern PHP
We pioneered a pattern for exception handling for Zend Framework back when we initially began development on version 2, around seven years ago. The pattern looks like this:
- We would create a marker ExceptionInterface for each package.
- We would extend SPL exceptions and implement the package marker interface when doing so.
What this gave users was the ability to catch in three ways:
- They could catch the most specific exception type by class name.
- They could catch all package-level exceptions using the marker interface.
- They could catch general exceptions using the associated SPL type.
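In code, the pattern amounts to the following sketch (the Acme\Mail namespace is purely illustrative):

    namespace Acme\Mail\Exception;

    // Marker interface for the package:
    interface ExceptionInterface
    {
    }

    // Concrete exceptions extend an SPL type and implement the marker:
    class InvalidArgumentException extends \InvalidArgumentException implements ExceptionInterface
    {
    }

A consumer may then catch Acme\Mail\Exception\InvalidArgumentException for the specific failure, Acme\Mail\Exception\ExceptionInterface to trap anything the package throws, or the plain SPL \InvalidArgumentException.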
Posted 2018-10-17
The Future of Zend Framework
For the past thirteen years, I've been either consuming Zend Framework or directly contributing to it. Since 2009, I've operated as project lead, and, since then, shepherded the version 2 and 3 releases, added Apigility to the ZF ecosystem, and helped bring middleware paradigms to the mainstream by assisting with the creation of Stratigility and coordination of the Expressive project. As I write this, the various ZF packages have been downloaded over 300 MILLION times, with 200 million of those being in the past 18 months!
Posted 2018-10-16
Async Expressive with Swoole
Have you used Node.js?
For those of my readers unfamiliar with Node.js, it's a server-side JavaScript framework that provides the ability to create, among other things, network services. To do so, it provides an event loop, which allows for such things as asynchronous processing.
In the PHP ecosystem, a group of Chinese developers have been creating an extension that provides many of the same capabilities as Node.js. This extension, called Swoole, allows you to create web servers with asynchronous capabilities. In many cases, the asynchronous capabilities are handled via coroutines, allowing you to write normal, synchronous code that still benefits from the asynchronous nature of the system event loop, allowing your server to continue responding to new requests as they come in!
We've been gradually adding and refining our Swoole support in Expressive, and recently issued a stable release that will work with any PSR-15 request handler. In this post, I'll enumerate what I feel are the reasons for considering Swoole when deploying your PHP middleware application.
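To give a flavor of the extension itself (the raw API, not the Expressive integration), a minimal Swoole HTTP server looks roughly like this; the handler body is illustrative:

    use Swoole\Http\Request;
    use Swoole\Http\Response;
    use Swoole\Http\Server;

    $server = new Server('0.0.0.0', 8080);

    // Swoole invokes this callback for every request, inside its event loop:
    $server->on('request', function (Request $request, Response $response) {
        $response->header('Content-Type', 'text/plain');
        $response->end("Hello from the event loop!\n");
    });

    $server->start();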
Posted 2018-07-18
Notes on GraphQL
The last week has been my first foray into GraphQL, using the GitHub GraphQL API endpoints. I now have Opinions™.
The promise is fantastic: query for everything you need, but nothing more. Get it all in one go.
But the reality is somewhat... different.
Posted 2018-01-23
PSR-15
Yesterday, following a unanimous vote from its Core Committee, PHP-FIG formally accepted the proposed PSR-15, the HTTP Server Request Handlers standard.
This new standard defines interfaces for request handlers and middleware. These have enormous potential impact on the PHP ecosystem, as they provide standard mechanisms for writing HTTP-facing, server-side applications. Essentially, they pave the way for developers to create re-usable web components that will work in any application that works with PSR-15 middleware or request handlers!
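For reference, the two interfaces the standard defines are tiny; they are condensed into a single listing here, though the spec ships them in separate packages:

    namespace Psr\Http\Server;

    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;

    interface RequestHandlerInterface
    {
        // Produce a response for the given request.
        public function handle(ServerRequestInterface $request) : ResponseInterface;
    }

    interface MiddlewareInterface
    {
        // Process a request, optionally delegating to a handler
        // representing the rest of the application.
        public function process(
            ServerRequestInterface $request,
            RequestHandlerInterface $handler
        ) : ResponseInterface;
    }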
Posted 2017-03-30
Using Anonymous Classes to Write Middleware
I faced an interesting question recently with regard to middleware: What happens when we go from a convention-based to a contract-based approach when programming?
Convention-based approaches usually allow for duck-typing; with middleware, it means you can write PHP callables — usually closures — and just expect them to work.
Contract-based approaches use interfaces. I think you can see where this is going.
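PHP 7's anonymous classes resolve the tension: closure-like ergonomics that still satisfy the contract. A sketch, shown against the final PSR-15 interfaces (this 2017 post would have targeted the earlier http-interop drafts), with $pipeline standing in for whatever accepts middleware:

    use Psr\Http\Message\ResponseInterface;
    use Psr\Http\Message\ServerRequestInterface;
    use Psr\Http\Server\MiddlewareInterface;
    use Psr\Http\Server\RequestHandlerInterface;

    $pipeline->pipe(new class implements MiddlewareInterface {
        public function process(
            ServerRequestInterface $request,
            RequestHandlerInterface $handler
        ) : ResponseInterface {
            // One-off middleware logic, no named class required:
            return $handler->handle($request->withAttribute('example', true));
        }
    });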
Posted 2017-01-26
PSR-7 Request and Method Utilities
We all know the standard HTTP request methods and status codes, right? Or do we?
We definitely know whether or not they should be integers or strings, and/or how string values should be normalized, right?
And our IDEs can totally autocomplete them, right?
Oh, that's not the case?
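One concrete answer in the FIG ecosystem is the fig/http-message-util package, which provides these values as interface constants; usage is along these lines (constant names per that package):

    use Fig\Http\Message\RequestMethodInterface;
    use Fig\Http\Message\StatusCodeInterface;

    // IDE-completable, typo-proof alternatives to bare strings and ints:
    $method = RequestMethodInterface::METHOD_GET;        // 'GET'
    $status = StatusCodeInterface::STATUS_NOT_FOUND;     // 404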
Posted 2016-10-24
Automating PHPUnit with Node
I've been trying to automate everything this year. When working on OSS, this is usually as simple as setting up Travis CI; in some cases, even that becomes a little more involved, but remains possible.
But that's continuous integration. What about continuous development?
Posted 2016-08-17
Using Composer to Autoload ZF Modules
One aspect of Zend Framework 3 we paid particular attention to was leveraging the Composer ecosystem. We now provide a number of Composer plugins for handling things such as initial project installation, registering installed modules with the application, and more. It's the "more" I particularly want to talk about.
Posted 2016-06-30
Push-to-Deploy with AWS CodeDeploy
AWS CodeDeploy is a tool for automating application deployments to EC2 instances and clusters. It can pull application archives from either S3 or GitHub, and then allows you to specify how to install, configure, and run the application via a configuration specification and optional hook scripts. When set up correctly, it can provide a powerful way to automate your deployments.
I started looking into it because I wanted to try out my site on PHP 7, and do a few new things with nginx that I wasn't doing before. Additionally, I've accidentally forgotten to deploy a few times in the past year after writing a blog post, and I wanted to see if I could solve that situation; I'd really enjoyed the "push-to-deploy" paradigm of OpenShift and EngineYard in the past, and wanted to see if I could recreate it.
Enrico first pointed me to the service, and I was later inspired by a slide deck by Ric Harvey. The process wasn't easy, due to a number of things that are not documented or not fully documented in the AWS CodeDeploy documentation, but in the end, I was able to accomplish exactly that: push-to-deploy. This post details what I found, some recommendations on how to create your deployments, and ways to avoid some of the pitfalls I fell into.
Posted 2016-05-16
Programmatic Expressive
Enrico just returned from phpDay, where he spoke about Expressive and the upcoming Zend Framework 3. One piece of feedback he brought back had to do with how people perceive they should be building Expressive applications: many think, based on our examples, that it's completely configuration driven!
As it turns out, this is far from the truth; we developed our API to mimic that of traditional microframeworks, and then built a configuration layer on top of that to allow making substitutions. However, it's not only possible, but quite fun, to mix and match the two ideas!
Posted 2016-04-26
On Deprecating ServiceLocatorAware
A month or two ago, we pushed a new release of zend-mvc that provides a number of forwards-compatibility features to help users prepare their applications for the upcoming v3 release.
One of those was, evidently, quite controversial: in v3, zend-servicemanager no longer defines the ServiceLocatorAwareInterface, and this particular release of zend-mvc raises deprecation notices when you attempt to inject a service locator into application services, or pull a service locator within your controllers.
The arguments go something like this:
- "Dependency injection is too hard to understand!"
- "This feature simplifies development!"
- "If this is so bad, why was it in there in the first place?"
These are usually followed by folks:
- saying they'll switch frameworks (okay, I guess?);
- asking for re-instatement of the feature (um, no);
- asking for removal of the deprecation notices (why? so you can delay your pain until upgrading, when you'll ask for re-instatement of the feature?); or
- asking for a justification of the change.
So, I've decided to do the last: justify the change. Doing so addresses the reasons why we won't do the middle two, and addresses why the assumptions and assertions about ServiceLocatorAware's usefulness are mostly misguided.
Posted 2016-04-17
Serve PSR-7 Middleware Via React
I've been intending to play with React for some time, but, for one reason or another, kept putting it off. This past week, I carved some time finally to experiment with it, and, specifically, to determine if serving PSR-7 middleware was possible.
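For anyone wanting to reproduce the experiment today, a minimal PSR-7 server on react/http looks like the following; note this uses the modern react/http API, which differs from what existed when this post was written:

    use Psr\Http\Message\ServerRequestInterface;
    use React\Http\HttpServer;
    use React\Http\Message\Response;
    use React\Socket\SocketServer;

    // Each request arrives as a PSR-7 ServerRequest; return a PSR-7 response:
    $http = new HttpServer(function (ServerRequestInterface $request) {
        return Response::plaintext("Hello from the event loop!\n");
    });

    $http->listen(new SocketServer('127.0.0.1:8080'));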
Posted 2016-01-29
Automating GitHub Pages Builds with MkDocs
One of the final tasks in prepping for the Expressive 1.0 release was setting up the documentation site. We'd decided to use GitHub Pages for this, and we wanted to automate builds so that as we push to the master branch, documentation is deployed.
The process turned out both simple and bewilderingly difficult. This post is intended to help others in the same situation.
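The heart of such automation is a single MkDocs command; assuming MkDocs is installed in the CI environment, something like this builds the site and pushes it to the gh-pages branch (remote and credential handling omitted):

    $ mkdocs gh-deploy --clean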
Posted 2016-01-28
Expressive 1.0 in the Wild!
A few hours ago, we pushed Expressive 1.0.
This is a huge milestone for the ZF3 initiative; I've even called it the cornerstone. It signals a huge shift in direction for the project, returning to its roots as a component library. Expressive itself, however, also signals the future of PHP applications we envision: composed of layered, single-purpose PSR-7 middleware.
Posted 2015-12-14
Secure PHAR Automation
For a variety of reasons, I've been working on a utility that is best distributed via PHAR file. As has been noted by others (archive.is link, due to lack of availability of original site), PHAR distribution, while useful, is not without security concerns, and I decided to investigate how to securely create, distribute, and update PHAR utilities as part of this exercise.
This is an account of my journey, as well as concrete steps you can take to secure your own PHAR downloads.
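One available building block (not necessarily the full approach the post recommends) is PHP's native OpenSSL PHAR signature support: sign the archive with a private key, and ship a tool.phar.pubkey file alongside it for verification. A minimal sketch, with paths illustrative:

    // Requires phar.readonly=0 in php.ini to create the archive.
    $phar = new Phar('tool.phar');
    $phar->buildFromDirectory(__DIR__ . '/src');
    $phar->setStub($phar->createDefaultStub('bin/tool.php'));

    // Sign with an OpenSSL private key; PHP verifies the archive against
    // the adjacent tool.phar.pubkey whenever it is loaded.
    $key = file_get_contents(__DIR__ . '/keys/private.pem');
    $phar->setSignatureAlgorithm(Phar::OPENSSL, $key);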
Posted 2015-09-18
On 10 Years at Zend
10 years ago, as I write this, I was on a plane from Burlington, VT, to San Jose, CA, where I'd be starting work at Zend Technologies the next day as a PHP Developer.
Posted 2015-09-09
Fixing Version Issues When Running Composer from a Branch
For the Zend Framework component repositories, we occasionally need to backport changes to the 2.4 LTS releases. This requires checking out a branch based off the last LTS tag, applying patches (often with edits to translate PHP 5.5 syntax to PHP 5.3), and running tests against PHP 5.3 and 5.4.
Of course, to run the tests, you need the correct set of dependencies installed.
If you have any component dependencies, that means running a composer update to ensure that you get the 2.4 versions of those components.
And that's where my story begins.
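Without spoiling the full story: the class of fix involved is telling Composer what version the root package should be treated as, rather than letting it guess from the checked-out branch. For example (the version string is illustrative):

    $ COMPOSER_ROOT_VERSION=2.4.9 composer update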
Posted 2015-07-28
On PSR7 and HTTP Headers
Yesterday, a question tagged #psr7 on Twitter caught my eye:

    #psr7 Request::getHeader($name) return array of single string instead of strings in #Slim3? cc: @codeguy pic.twitter.com/ifA9hCKAPs

The image linked provides the following details:

    When I call $request->getHeader('Accept') for example, I was expected that I'll get something like this:

        Array( [0] => text/html, [1] => application/xhtml+xml, [2] => application/xml, )

    but, in reallity I got this:

        Array( [0] => text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8 )

    Is it correct?
In this post, I'll explain why the behavior observed is correct, as well as shed light on a few details of header handling in PSR-7.
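The short version, in code (header value abbreviated):

    // getHeader() always returns an array of strings; each physical
    // header line of that name becomes one element:
    $accept = $request->getHeader('Accept');
    // ['text/html,application/xhtml+xml,application/xml;q=0.9']

    // getHeaderLine() returns the values joined with commas:
    $line = $request->getHeaderLine('Accept');

    // Splitting on commas is left to the consumer:
    $types = array_map('trim', explode(',', $line));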
Posted 2015-06-08
PHP is 20!
Today, 20 years ago, Rasmus Lerdorf publicly released PHP. Ben Ramsey has issued a call-to-action for people to blog the event and the impact PHP has had on their lives and careers; this is my entry.
Posted 2015-05-18
PSR-7 Accepted!
I'm pleased to announce that as of 22:00 CDT on 18 May 2015, PSR-7 (HTTP Message Interfaces) has been accepted! (http://www.php-fig.org/psr/psr-7)
Posted 2015-01-26
PSR-7 By Example
PSR-7 is now accepted!!!
I'm still hearing some grumbles, both of "simplify!" and "not far enough!", so I'm writing this post to demonstrate usage of the currently published interfaces, and to illustrate both the ease of use and the completeness and robustness they offer.

First, though, I want to clarify what PSR-7 is attempting.
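As a small taste of what the post walks through, here is the immutability model in miniature, using zend-diactoros as the PSR-7 implementation (any implementation behaves the same):

    use Zend\Diactoros\Response;

    $response = new Response();

    // Each with*() call returns a NEW instance; $response is untouched:
    $new = $response
        ->withStatus(404)
        ->withHeader('Content-Type', 'text/plain');

    $new->getBody()->write('Page not found');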
Posted 2015-01-08
On HTTP, Middleware, and PSR-7
As I've surveyed the successes and failures of ZF1 and ZF2, I've started considering how we can address usability: how do we make the framework more approachable?
One concept I've been researching a ton lately is middleware. Middleware exists in a mature form in Ruby (via Rack), Python (via WSGI), and Node (via Connect / ExpressJS); just about every language has some exemplar. Even PHP has some examples already, in StackPHP and Slim Framework.
The basic concept of middleware can be summed up in a single method signature:
function (request, response) { }
The idea is that objects, hashes, or structs representing the HTTP request and HTTP response are passed to a callable, which does something with them. You compose these in a number of ways to build an application.
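In PHP, the double-pass variant of that signature adds a third argument, $next, so each layer can delegate to the remainder of the stack; this is the style early PSR-7 middleware used. A sketch:

    // Log the request method and URI, then hand off to the next layer:
    $logger = function ($request, $response, callable $next) {
        error_log(sprintf('%s %s', $request->getMethod(), (string) $request->getUri()));
        return $next($request, $response);
    };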
Posted 2014-09-18
Deployment with Zend Server (Part 8 of 8)
This is the final in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed using the Zend Server SDK to deploy your Zend Server deployment packages (ZPKs) from the command line.
Today, I'll detail how I automate deployment with zf-deploy and zs-client (the Zend Server SDK), and wrap up the series with some closing thoughts.
Posted 2014-09-16
Deployment with Zend Server (Part 7 of 8)
This is the seventh in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed setting up and clearing page caching.
Today, I'm sharing how to use the Zend Server SDK to deploy your Zend Server deployment packages (ZPKs) from the command line.
Posted 2014-09-11
Deployment with Zend Server (Part 6 of 8)
This is the sixth in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed setting job script status codes.
Today, I'm sharing some tips around setting up page caching, and jobs for clearing the Zend Server page cache.
Posted 2014-09-09
Deployment with Zend Server (Part 5 of 8)
This is the fifth in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed how to secure your Job Queue job scripts.
Today, I'm sharing some best practices around writing job scripts, particularly around how to indicate execution status.
Posted 2014-09-04
Deployment with Zend Server (Part 4 of 8)
This is the fourth in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed a trick I learned about when to execute a chmod statement during deployment.
Today, I'm sharing a tip about securing your Job Queue job scripts.
Posted 2014-09-02
Deployment with Zend Server (Part 3 of 8)
This is the third in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed creating recurring jobs via Zend Job Queue, à la cronjobs.
Today, I'm sharing a very short deployment script tip learned by experience.
Posted 2014-08-28
Deployment with Zend Server (Part 2 of 8)
This is the second in a series of eight posts detailing tips on deploying to Zend Server. The previous post in the series detailed getting started with Zend Server on the AWS marketplace and using zf-deploy to create ZPK packages to deploy to Zend Server.
Today, I'm looking at how to create scheduled/recurring jobs using Zend Server's Job Queue; think of this as application-level cronjobs.
Posted 2014-08-26
Deployment with Zend Server (Part 1 of 8)
I manage a number of websites running on Zend Server, Zend's PHP application platform. I've started accumulating a number of patterns and tricks that make the deployments more successful, and which also allow me to do more advanced things such as setting up recurring jobs for the application, clearing page caches, and more.
Posted 2014-08-21
Testing Code That Emits Output
Here's the scenario: you have code that will emit headers and content, for instance, a front controller. How do you test this?
The answer is remarkably simple, but non-obvious: namespaces.
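The trick works because PHP resolves unqualified function calls to the current namespace before falling back to the global one. A sketch with hypothetical names: if the front controller lives in the App namespace and calls header() unqualified, the test suite can shadow the built-in:

    namespace App;

    // Declared in the test bootstrap: App code calling header()
    // unqualified now hits this stub instead of emitting real headers.
    function header(string $string) : void
    {
        \AppTest\FrontControllerTest::$headers[] = $string;
    }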
Posted 2014-06-09
Better Understanding Controllers Through Basic Patterns
Paul M. Jones has started an interesting discussion rethinking the MVC pattern as applied to the web, which he has dubbed Action-Domain-Responder (ADR). If you haven't given it a read yet, click the link and do that; this page will still be sitting here waiting when you return.
I agree with a ton of it — heck, I've contributed to it a fair bit via conversations with Paul. But there's been one thing nagging at me for a bit now, and I was finally able to put it into words recently.
Controllers — Actions in ADR — can be explained as facades.
Posted 2008-12-31
2008: The year in review
That time of year again — wrap-up time. Each year, it seems like it's the busiest ever, and I often wonder if it will ever slow down. As usual, I'm restricting myself to primarily professional activities out of respect for the privacy of my family.
The short, executive summary:
- One trip to Israel
- One trip to The Netherlands
- One trip to California's Bay Area
- One trip to Atlanta, GA
- Three minor releases of Zend Framework
- Seven webinars, six for zend.com and one for Adobe
- Three conferences attended as a speaker, including:
- One six-hour workshop
- One three-hour tutorial (as a co-presenter)
- Four regular sessions
- Two panel sessions (one scheduled, one for an uncon)
- Two uncon sessions (one as a co-presenter)
- One foul-mouthed Pecha Kucha talk
- Ten Burlington, VT PHP User's Group meetings attended; I spoke at many
- One Bug Hunt week organized
- Two books reviewed as a technical editor
- Six articles for DevZone
- 50 blog entries (including this one)
Read on for the gruesome, month-by-month breakdown.
Posted 2008-12-10
Mumbles irssi integration
I've been using IRC regularly for the past six to nine months, in large part due to the growing ZF community on the Freenode #zftalk channel (unfortunately, I simply don't have time to be in that particular channel any more, but you can generally find me in #zftalk.dev), but also to keep in contact with other peers, friends, and colleagues.
One difficulty, however, is keeping productivity high while staying on IRC. To me, the ultimate client would provide me notifications when somebody mentions my name or a watch word — allowing me to read the channel at my leisure, yet still respond to people in a timely fashion.
Posted 2008-10-21
Vimgrep and Vim Project
Chris Hartjes today was on a quest for a "find in project" feature for Vim. "Find in Project" was a feature of Textmate that he'd grown accustomed to and was having trouble finding an equivalent for.
The funny thing is that Textmate is a newcomer, and, of course, vim has had such a feature for years. The thing to remember with vim, of course, is its unix roots; typically if you know the unix command for doing something, you can find what you need in vim. In this case, the key is the vimgrep plugin, which ships in the standard vim distribution.
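For the impatient, the usage is two commands (the search pattern is just an example):

    " Search every PHP file under the current directory:
    :vimgrep /renderNav/ **/*.php

    " Browse the matches in the quickfix window:
    :copen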
Posted 2008-09-24
git-svn Tip: don't use core.autocrlf
I've been playing around with Git in the past couple months, and have been really enjoying it. Paired with subversion, I get the best of all worlds — distributed source control when I want it (working on new features or trying out performance tuning), and non-distributed source control for my public commits.
Github suggests that when working with remote repositories, you turn on the autocrlf option, which ensures that changes in line endings do not get accounted for when pushing to and pulling from the remote repo. However, when working with git-svn, this actually causes issues. After turning this option on, I started getting the error "Delta source ended unexpectedly" from git-svn. After a bunch of aimless tinkering, I finally asked myself the questions, "When did this start happening?" and, "Have I changed anything with Git lately?" Once I'd backed out the config change, all started working again.

In summary: don't use git config --global core.autocrlf true when using git-svn.
Posted 2008-05-16
Server Upgrades... lost entries...
My good friend, Rob, hosts my site for me, in return for helping with server maintenance. After being on Gentoo for the past three years, though, we decided it was time to switch to something a little easier to maintain, so last night we wiped the system partitions and installed Ubuntu server.
I'll say this: the setup is much faster! However, we had a few gotchas that surprised us — it didn't set up our RAID array out-of-the-box, which led to a good hour of frustration as we tried to verify that the install wouldn't wipe it, and then to verify that we could re-assemble it. (We succeeded.) Additionally, we second-guessed a few things we shouldn't have, which led to needing to back out and reconfigure. But what was over a 12-hour install with Gentoo we accomplished in a matter of a few hours with Ubuntu server — so it was a huge success that way.
Unfortunately, our mysqldump of all databases… wasn't, a fact we discovered only after importing it into the new system. I ended up losing my blog database and PEAR channel database. Fortunately, the PEAR channel has not changed at all in the past year, so we had an old backup that worked, and I had a snapshot of my blog database from three weeks ago I was able to use. As a result, there are a few missing entries, but for the most part, all works. If you commented on one of those missing entries, my apologies.
Now that the install is done, I'm also finalizing some design changes to my blog — it's time to leave the black and white for more colorful grounds. Look for a revamp in the coming weeks!
Posted 2008-03-08
Submitting Bug Reports
Full disclosure: I am employed by Zend to program Zend Framework. That said, the following is all my opinion, and is based on my experiences with Zend Framework, as well as answering questions on a variety of mailing lists and with other OSS projects (PEAR, Solar, and Cgiapp in particular).
One of my biggest pet peeves in the OSS world is vague bug/issue reports and feature requests. I cannot count the number of times I've seen a report similar to the following:
    <Feature X> doesn't work; you need to fix it now!
If such a report comes in on an issue tracker, it's invariably marked critical and high priority.
What bothers me about it? Simply this: it gives those responsible for maintaining Feature X absolutely no information to work on: what result they received, what was expected, or how exactly they were using the feature. The reviewer now has to go into one or more cycles with the reporter fishing for that information — wasting everyone's time and energy.
Only slightly better are these reports:

    <Feature X> doesn't work — I keep getting <Result X> from it, which is incorrect.

At least this tells the reviewers what the reporter is receiving… but it doesn't tell them how they got there, or what they're expecting.
So, the following should be your mantra when reporting issues or making feature requests:
- What is the minimum code necessary to reproduce the issue or show the desired API?
- What is the expected result?
- What is the actual result?
Posted 2008-02-07
Backwards Compatibility
Ivo already pointed this out, but I want to point it out again: Boy Baukema writes a very nice entry regarding backwards compatibility on the ibuildings.nl corporate blog.
Backwards compatibility (BC) is a tricky thing to support, even when you strive hard to, as Boy puts it, "think hard about your API" prior to release. Somebody will always come along and point out ways it could have been done better or ways it could be improved. I've had to wrestle with these issues a ton since joining the Zend Framework team, and while it often feels like the wrong thing to do to tell somebody "too little, too late" when they have genuinely good feedback for you, it's often in the best interest of the many users already using a component.
I had the pleasure of meeting Boy last year when visiting the ibuildings.nl offices, and he's got a good head on his shoulders. He does a nice job outlining the issues and a number of approaches to BC; if you develop a project for public consumption, you should definitely head over and read what he has to say.
Posted 2008-01-25
Apache HOSTNAME on Clusters
In an effort to debug issues on a cluster, I was trying to determine which machine on the cluster was causing the issue. My idea was that I could insert a header token identifying the server.
My first idea was to add the directive Header add X-Server-Ip "%{SERVER_ADDR}e" in my httpd.conf. However, due to the nature of our load balancer, Apache was somehow resolving this to the load balancer IP address on all machines of the cluster — which was really, really not useful.
I finally stumbled on a good solution, however: you can set environment variables in apachectl, and then pass them into the Apache environment using the PassEnv directive from mod_env; once that's done, you can use the environment variable anywhere.

In my apachectl, I added the line export HOSTNAME=`hostname`. Then, in my httpd.conf, I added first the line PassEnv HOSTNAME, followed by the directive Header add X-Server-Name "%{HOSTNAME}e". Voilà! I now had the hostname in the header, which gave me the information I needed for debugging.
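Pulled together, the working configuration amounts to the following (the backticks are shell command substitution):

    # In apachectl (or whatever launches httpd):
    export HOSTNAME=`hostname`

    # In httpd.conf:
    PassEnv HOSTNAME
    Header add X-Server-Name "%{HOSTNAME}e"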
Posted 2008-01-01
2007 Retrospective
2007 was a busy year, both personally and professionally. I won't go into the personal too much, because, well, it's personal, and some of the details are simply inappropriate for blogging material.
Here's the short version:
- One trip to Belgium and The Netherlands.
- Two trips to Israel.
- Two trips to Atlanta, GA (not counting the return trip from Europe, when I was stranded for a day due to storms in the Northeast).
- Three different user groups attended, with three presentations.
- One major Zend Framework release
- One PEAR release.
- One podcast.
- One webinar.
- One book published.
- One conference attended.
What follows is my month-by-month breakdown:
Posted 2007-01-05
svn:externals
I was recently working with someone who was using Zend Framework in their project. To keep things stable and releasable, he was doing an export of framework into his repository and checking it in. Since files change so much in the ZF project currently, instead of doing an rsync from a checkout into his own repository, he decided to delete the directory from the repository and re-add it every time he updated framework.
This seemed really inefficient to me, especially considering that it made it incredibly difficult to merge changes from his development branch into his production branch (deleting and re-adding directories breaks the merge process considerably). I knew there had to be a better way.
I'd heard of the svn:externals property before, but never really played with it. As it turns out, it exists for just this very type of situation. The problem is that the documentation of svn:externals in the SVN book doesn't indicate at all how the property should be set, and most howtos I've read omit one or more very important details. I finally figured things out through some trial and error of my own, so I'm going to share the process so others hopefully can learn from the experience as well.
It's actually pretty easy. This assumes that your project layout looks something like this:

    project/
        branch/
            production/
        tag/
        trunk/

1. In the top of your project trunk, execute the following:

       $ svn propedit svn:externals .

2. This will open an editor session. In the file opened by your editor, each line indicates a different external svn repo to pull. The first segment of the line is the directory where you want the pull to exist. The last segment is the svn repo URL to pull. You can have an optional middle argument indicating the revision to use. Some examples:

   - Pull framework repo from head:

         framework http://framework.zend.com/svn/framework/trunk

   - Pull framework repo from revision 2616:

         framework -r2616 http://framework.zend.com/svn/framework/trunk

3. After saving and exiting, update the repo:

       $ svn up

4. Commit changes:

       $ svn commit
One thing to note: any directory you specify for an svn:externals checkout should not already exist in your repository. If it does, you will get an error like the following:

    svn: Working copy 'sharedproject' locked
    svn: run 'svn cleanup' to remove locks

I show using revisions above; you could also pin to tags by simply checking out the external repository from a given tag. Either way works well.
Then, when moving from one branch to another, or from the trunk to a branch, you simply set a different svn:externals for each branch. For instance, your current production might check out from one particular revision, but your trunk might simply track head; you then simply determine what the current revision being used is on your trunk, and update svn:externals in your production branch when you're ready to push changes in.
Hope this helps some of you out there!
Posted 2006-09-19
Vim 7 code completion
I may work at Zend, but I've never been a fan of IDEs. They simply don't suit my programming style. I can usually keep track of file locations in my head pretty easily, and what I really need is a blank slate on which I can write, and one that doesn't consume resources that can better be used running web servers and other programs. Syntax highlighting, good indentation — these are important, but you can get these from good, minimal text editors very easily. Vim is my editor of choice.
I will admit, though, that one area where I have had IDE-envy is the area of code completion. I often find myself doing quick lookups to php.net or perldoc to determine the order of arguments to a function or method call, or checking for the expected return value. Most of the time, this doesn't take much time, however, so I just live with it.
Today, however, cruising through the blogosphere, I came across an article showcasing some new features of Vim 7.0, and discovered Vim 7's code completion.
Basically, while in insert mode, you can type <C-x> <C-o> to have vim attempt to autocomplete the current keyword. If more than one possibility exists, it shows a dropdown, and you can use your arrow keys to highlight the keyword that you wish to use.
But it gets better! Not only does it do this kind of autocompletion, but it also opens a small 'scratch preview' pane showing the function/method signature — i.e., the expected arguments and return value!
I thought I had little need for IDEs before… now I have even less! Bram and the rest of the Vim team, my hat's off to you for more fine work!
Posted 2006-06-17
Serendipity upgrade
I upgraded Serendipity today, due to the recent announcement of the 1.0 release, as well as to combat some rampant issues with trackback spam.
I've been very happy with Serendipity so far; it just runs, and the default install gives just what you need to get a blog up and running, and nothing more; any extra functionality comes via plugins which you, the blogger, get to decide upon.
Additionally, it's incredibly easy to upgrade. Unpack the tarball, rsync it over your existing install (I rsync it, because I don't use 'serendipity' as my directory name), visit the admin, boom, you're done. I've upgraded several times, and never lost data, nor configuration settings.
My primary reason for the upgrade was, as noted earlier, to combat trackback spam. As of this morning, I had 15,000 pending trackbacks, none of which appeared to be valid (if any of them were, and you're not seeing yours, I'm very sorry; I deleted them en masse). These had accumulated in less than a month — that's an average of about one every 3 minutes.
Since upgrading, and using the Akismet service, I've received not a single spam trackback. Needless to say, I'm happy I performed the upgrade!
If you're a Serendipity user, and haven't upgraded to 1.0.0 yet (or one of its reportedly very stable release candidates), do it today — you have nothing to lose, and a lot of lost time to gain!
Posted 2006-06-06
The light has not set on PHP
I ran across a blog post entitled "Why the Light Has Gone Out on LAMP" earlier today, and felt compelled to respond.
First, a rant: somehow, this article got posted on Slashdot. I've never heard of the guy who posted it, and some quick googling shows that he's a pythoner. He's simply fueling the language wars, and the slashdot post opened up a huge debate that need not have occurred. I think it was irresponsible of the Slashdot editors to post it.
In the post, the author makes an analogy of using PHP + MySQL as the equivalent of using BASIC, and then uses a quote that claims BASIC "has become the leading cause of brain-damage in proto-hackers."
I'm sorry, but using a language doesn't cause brain damage. And there are many levels to programming. And using Python, Ruby, C, C++, Java, etc., does not automatically make you a better programmer than those using one of "those other languages". You can write crap code in any language. You can also write great code in just about any language.
Programming takes practice; programming well takes a lot of practice. You get better at it by learning more about programming practices, and applying them to your language. Part of programming is also learning when a particular language is suited to a task, and when it isn't. Python works for the web, but it's not particularly suited to it; similarly, you can write web servers in PHP, but that doesn't mean you should.
Stop the language wars already! Stop writing incendiary pieces about a language you don't use regularly or never gained particular proficiency in, and code already!
Posted 2006-04-28
Telcos are Attacking the Internet
I generally try to stay out of politics on this blog, but this time something has to be said, as it affects anyone who uses the internet, at least in the US.
Basically, a number of telcos and cable providers are talking about charging internet content providers — the places you browse to on the internet, places like Google, Yahoo!, Amazon, etc. — fees to ensure bandwidth to their sites. Their argument is that these content providers are getting a 'free ride' on their lines, and generating a lot of traffic themselves, and should thus be paying for the cost of bandwidth.
This is patently ridiculous. Content providers already have to pay for their bandwidth — they, too, have ISPs or agreements with telcos in place, either explicitly or via their hosting providers. Sure, some of them, particularly search engines, send out robots in order to index or find content, but, again, they're paying for the bandwidth those robots generate. Additionally, people using the internet are typically paying for bandwidth as well, through their relationship with their ISP. What this amounts to is the telcos getting paid not just by each person to whom they provide internet access, but every end point on the internet, at least those within the US.
What this is really about is telcos wanting more money, and wanting to push their own content. As an example, let's say your ISP is AOL. AOL is part of Time Warner, and thus has ties to those media sources. Now, those media sources may put pressure on AOL to reduce bandwidth to sites operated by ABC, CBS, NBC, FOX, Disney, PBS, etc. This might mean that your kid can no longer visit the Sesame Street website reliably, because AOL has reduced the amount of bandwidth allowed to that service — but any media site in the Time Warner family would get optimal access, so they could get to Cartoon Network. Not to slam Cartoon Network (I love it), but would you rather have your kid visiting cartoonnetwork.com or pbskids.org? Basically, content providers would not need to compete based on the value of their content, but on who they can get to subscribe to their service.
Here's another idea: your ISP is MSN. You want to use Google… but MSN has limited the bandwidth to Google because it's a competitor, and won't accept any amount of money to increase that bandwidth. They do the same with Yahoo! So, now you're limited to MSN search, because that's the only one that responds reliably — regardless of whether or not you like their search results. By doing so, they've just artificially inflated the value of their search engine — without needing to compete based on merit.
Additionally, let's say Barnes and Noble has paid MSN to ensure good bandwidth, but part of that agreement is a non-compete clause. Now you find your connections to Amazon timing out, meaning that you can't even see which book provider has the better price on the book you want; you're stuck looking and buying from B&N.
Now, let's look at something a little more close to home for those of us developing web applications. There have been a number of success stories the last few years: MySpace, Digg, and Flickr all come to mind. Would these endeavors have been as successful had they needed to pay multiple times for bandwidth, once to their ISP and once each to each telco charging for content providers? Indeed, some of these are still free services — how would they ever have been able to pay the extra amounts to the telcos in the first place?
So, basically, the only winners here are the telcos.
Considering how ludicrous this scheme is, one must be thinking, isn't the US Government going to step in and regulate against such behaviour? The answer, sadly, is no. The GOP doesn't like regulation, and so they want market forces to decide. Sadly, what this will likely do is force a number of content providers to offshore their internet operations — which is likely to have some pretty negative effects on the economy.
The decision isn't final — efforts can still be made to prevent it (the above link references a Senate committee meeting; there's been no vote on it). Call your representatives today and give them an earful. Tell them it's not just about regulation of the industry, but about fair competition in the market. Allowing the telcos to extort money from content providers will only reduce the US' economic chances in the world, and stifle innovation and choice.
Posted 2006-01-05
XP + Cygwin + coLinux == Productivity
I wrote earlier of my experiences using Windows XP, a move I've considered somewhat unfortunate but necessary. I've added a couple more tools to my toolbox since that have made the environment even better.
Posted 2005-12-10
Using Windows XP
Those of you who know me well know that I've been using Linux as my primary OS for many years now. At this point, for me, using Windows is like deciding I'm going to use a limited vocabulary; I can get the idea across, but not quite as well.
Due to the nature of where I work and the fact that I'm telecommuting, I've been having to maintain a dual-boot system. I use Ubuntu for my daily OS, and boot into Windows when I need to interact with people at work via Webex or Skype (we're using the new Skype beta with video, and it only works on Windows XP at this time).
This week, however, I've had to stay on Windows quite a bit — lots of impromptu conference calls and such. So, I've been customizing my environment, and been pretty pleased with the results.
Continue reading for some tips on customizing your Windows XP environment to work and feel a little more like… linux.
Posted 2005-09-21
Transitions
Life is in transition for me now. Two weeks ago, we got to bring our handsome baby boy home, and I haven't been sleeping much since (though more than Jen). On top of the sleep deprivation, however, comes more exciting news: I've been hired as a PHP Developer by Zend Technologies!
I was approached by Daniel Kushner in late July regarding another position at Zend, and was flown out at the beginning of August. While I felt the interview went well, I harbored some doubts; work got fairly busy shortly thereafter, and then, of course, Liam was born, and the interview went completely out of my head. Until about three days after Liam's birth, when Daniel contacted me again about the PHP Developer position.
Work started yesterday, and I was flown to Zend's offices in Cupertino, CA, for orientation and to sit down with both Daniel and others to prepare for the projects on which I will be working. Thankfully, the job will not require that I move, and I will be working out of the 'home office' in Vermont when I return later this week.
The decision to leave NGA was difficult, but the opportunity to work with Zend is just too good to miss. I am honored to be selected by them, and hope this is the beginning of many good things to come.
Posted 2005-02-19
New Cgiapp Site
I've been extremely busy at work, and will continue to be through the end of March. I realized this past week that I'd set a goal of having a SourceForge website up and running for Cgiapp by the end of January — and it's now mid-February. Originally, I was going to backport some of my libraries from PHP5 to PHP4 so I could do so… and I think that was beginning to daunt me a little.
Fortunately, I ran across a quick-and-dirty content management solution yesterday called Gunther. It does templating in Smarty, and uses a wiki-esque syntax for markup — though page editing is limited to admin users only (something I was looking for). I decided to try it out, and within an hour or so had a working site ready to upload.
Cgiapp's new site can be found at cgiapp.sourceforge.net.
UPDATE
Shortly after I wrote this original post, I figured out what the strength of Gunther was — and why I no longer needed it. Gunther was basically taking content entered from a form and then inserting that content (after some processing for wiki-like syntax) into a Smarty template. Which meant that I could do the same thing with Cgiapp and Text_Wiki. Within an hour, I wrote an application module in Cgiapp that did just that, and am proud to say that the Cgiapp website is 100% Cgiapp.
Posted 2005-01-08
Dreamweaver, Text Editors, and Webmasters
I picked up on this article on Friday, glanced through it and moved on, but noticed this evening it had been slashdotted — at which point I realized the author is the current CGI::Application maintainer, so I looked again.
At my first glance through it, it appeared the author was looking for a nice, easy-to-use pre-processor script for generating a site out of templates and content files. To that end, he, in the end, recommended ttree, part of the Template Toolkit distribution.
However, the real gist of the article — something that should probably have been summarized at the end — is that the author was looking for a free and OSS replacement for DreamWeaver's Templates functionality. This functionality allows a developer to create a template with placeholders for content, lock it, and then create pages that have the bits and pieces of content. Finally, the developer compiles the site — creating final HTML pages out of the content merged with the templates.
Now, I can see something like this being useful. I've used webmake for a couple of projects, and, obviously, utilize PHP in many places as a templating language. However, several comments on Slashdot also gave some pause. The tone of these comments was to the effect of, "real developers shouldn't use DW; they should understand HTML and code it directly." Part of me felt this was elitist — the web is such an egalitarian medium that there should be few barriers to entry. However, the webmaster in me — the professional who gets paid each pay period and makes a living off the web — also agreed with this substantially.
I've worked — both professionally and as a freelancer — with individuals who use and rely on DW. The problem I see with the tool and others of its breed is precisely their empowerment of people. Let me explain.
I really do feel anybody should be able to have a presence on the 'net. However, HTML is a fragile language: trivial errors can cause tremendous changes in how a page is rendered — and even crash browsers on occasion. The problem I see is that DW and other GUI webpage applications create, from my viewpoint, garbage HTML. I cannot tell you how many pages generated by these applications that I've had to clean up and reformat. They spuriously add tags, often around empty content, that are simply unnecessary.
The problem is compounded when individuals have neither time nor inclination to learn HTML, but continue using the tool to create pages. They get a false sense of accomplishment — that can be quickly followed by a very real sense of incompetence when the page inexplicably breaks due to an edit they've made — especially when the content is part of a larger framework that includes other files. Of course, as a web professional, I get paid to fix such mistakes. But I feel that this does everybody a disservice — the individual/company has basically paid twice for the presentation of content — once to the person generating it, a second time to me to fix the errors.
This is a big part of the reason why I've been leaning more and more heavily on database-driven web applications. Content then goes into the database, and contains minimal — if any — markup. It is then injected into templates, which go through a formal review process, as well as through the W3C validators, to prevent display problems. This puts everybody in a position of strength: the editor generating content, the designer creating the look-and-feel, and the programmer developing the methods for mapping content with the templates.
There's still a part of me that struggles with what I perceive as an elitist position. However, there's another part of me that has struggled heavily with the multitasking demands made on all web professionals — we're expected to be editors, graphic designers, programmers, and more. In most cases, we're lucky if we're strong in one or two such areas, much less passionate about staying abreast of the changing face of our medium.
Posted 2004-11-17
Sign of a Geek
It's now been confirmed: I'm a geek.
Okay, so that probably comes as no shocker to those of you who know me, but it's the little things that make me realize it myself.
I've been frequenting Perl Monks for a couple of years now, mainly to garner ideas and code to help me with my personal or work projects. I rarely post comments, and I've only once submitted a question to the site. However, I do frequent the site regularly, and the few comments I've put in — generally regarding usage of CGI::Application — have been typically well-moderated.
Well, yesterday I made a comment to a user asking about editors to use with perl. I was incensed by a remark he made about VIM not having the features he needed. Now, as I said in my comment, I've used VIM on a daily basis for over two years, and I'm still discovering new features — and I've used all of the features he was looking for.
This is where I discovered I'm a geek: my comment made it into the Daily Best for today, peaking around number 5. The fact that that made my day indicates to me that I must be a geek.
Oh — and VIM rules!
Posted 2004-10-20
MySQL Miscellanae
Inspired by a Slashdot book review of High Performance MySQL.
I've often suspected that I'm not a SQL guru… little things like being self-taught and having virtually no resources for learning it. This has been confirmed to a large degree at work, where our DBA has taught me many tricks about databases: indexing, when to use DISTINCT, how and when to do JOINs, and the magic of TEMPORARY TABLEs. I now feel fairly competent, though far from being an expert — I certainly don't know much about how to tune a server for MySQL, or tuning MySQL for performance.
Last year around this time, we needed to replace our MySQL server, and I got handed the job of getting the data from the old one onto the new. At the time, I looked into replication, and from there discovered about binary copies of a data store. I started using this as a way to backup data, instead of periodic mysqldumps.
One thing I've often wondered since: would replication be a good way to do backups? It seems like it would, but I haven't investigated. One post on the aforementioned Slashdot article addressed this, with the following summary:
- Set up replication
- Do a locked table backup on the slave
Concise and to the point. I only wish I had a spare server on which to implement it!
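In outline, that recipe looks like this (a sketch only; exact flags depend on your setup):

    # On the slave: pause replication so the dump is consistent
    $ mysql -e "STOP SLAVE"

    # Take the backup from the slave, leaving the master untouched
    $ mysqldump --all-databases --lock-all-tables > backup.sql

    # Resume replication; the slave catches up from the master's binlog
    $ mysql -e "START SLAVE"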
Posted 2004-09-21
Cgiapp Roadmap
I've had a few people contact me indicating interest in Cgiapp, and I've noticed a number of subscribers to the freshmeat project I've set up. In addition, we're using the library extensively at the National Gardening Association in developing our new site (the current site is using a mixture of ASP and Tango, with several newer applications using PHP). I've also been monitoring the CGI::Application mailing list. As a result of all this activity, I've decided I need to develop a roadmap for Cgiapp.
Currently, planned changes include:

- Version 1.x series:
  - Adding a Smarty registration for stripslashes (the Smarty "function" call will be sslashes).
  - param() bugfix: currently, calling param() with no arguments simply gives you a list of parameters registered with the method, but not their values; this will be fixed.
  - error_mode() method. The CGI::Application ML brought up and implemented the idea of an error_mode() method to register an error_mode with the object (similar to run_modes()). While non-essential, it would offer a standard, built-in hook for error handling.
  - $PATH_INFO traversing. Again, on the CGI::App ML, a request was brought up for built-in support for using $PATH_INFO to determine the run mode. Basically, you would pass a parameter indicating which location in the $PATH_INFO string holds the run mode.
  - DocBook tutorials. I feel that too much information is given in the class-level documentation, and that usage tutorials need to be written. Since I'm documenting with PhpDoc and targeting PEAR, moving tutorials into DocBook is a logical step.

- Version 2.x series:

  Yes, a Cgiapp2 is in the future. There are a few planned changes that necessitate either (a) PHP5, or (b) API changes. In keeping with PEAR guidelines, I'll rename the module Cgiapp2 so as not to break applications designed for Cgiapp.

  Changes expected include:

  - Inherit from PEAR. This will allow for some built-in error handling, among other things. I suspect that this will tie in with the error_mode() method, and may also deprecate croak() and carp().
  - Changes to tmpl_path() and load_tmpl(). In the perl version, you would instantiate a template using load_tmpl(), assign your variables to it, and then do your fetch() on it. So, this:

        $this->tmpl_assign('var1', 'val1');
        $body = $this->load_tmpl('template.html');

    Becomes this:

        $tmpl = $this->load_tmpl();
        $tmpl->assign('var1', 'val1');
        $body = $tmpl->fetch('template.html');

    OR

        $tmpl = $this->load_tmpl('template.html');
        $tmpl->assign('var1', 'val1');
        $body = $tmpl->fetch();

    (Both examples assume use of Smarty.) I want to revert to this behaviour for several reasons:

    - Portability with perl. This is one area in which the PHP and perl versions differ greatly; going to the perl way makes porting classes between the two languages simpler.
    - Decoupling. The current set of template methods create an object as a parameter of the application object — which is fine, unless the template object instantiator returns an object of a different kind.

    Cons:

    - Smarty can use the same object to fill multiple templates, and the current methods make use of this. By assigning the template object locally to each method, this could be lost. HOWEVER… an easy work-around would be for load_tmpl() to create the object and store it in a parameter; subsequent calls would return the same object reference. The difficulty then would be if load_tmpl() assumed a template name would be passed. However, even in CGI::App, you decide on a template engine and design for that engine; there is never an assumption that template engines should be swappable.
    - Existing Cgiapp1 applications would need to be rewritten.

  - Plugin Architecture: The CGI::App ML has produced a ::Plugin namespace that utilizes a common plugin architecture. The way it is done in perl is through some magic of namespaces and export routines… both of which are, notably, missing from PHP. However, I think I may know a workaround for this, if I use PHP5: the magic __call() overloader method.

    My idea is to have plugin classes register methods that should be accessible by a Cgiapp-based class under a special key in the $GLOBALS array. Then, the __call() method would check the key for registered methods; if one is found matching a method requested, that method is called (using call_user_func()), with the Cgiapp-based object reference as the first argument. Voilà! Instant plugins!

    Why do this? A library of 'standard' plugins could then be created, such as:

    - A form validation plugin
    - Alternate template engines as plugins (instead of overriding the tmpl_* methods)
    - An authorization plugin

    Since the 'exported' methods would have access to the Cgiapp object, they could even register objects or parameters with it.
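To make the __call() idea concrete, a rough PHP5-style sketch of such a dispatch might look like this (all names hypothetical; this is speculative design, not shipped code):

    class Cgiapp2
    {
        public function __call($method, $args)
        {
            $plugins = isset($GLOBALS['_CGIAPP_PLUGINS'])
                ? $GLOBALS['_CGIAPP_PLUGINS']
                : array();

            if (isset($plugins[$method])) {
                // Pass the application object as the first argument:
                array_unshift($args, $this);
                return call_user_func_array($plugins[$method], $args);
            }

            trigger_error("Method '$method' not found", E_USER_ERROR);
        }
    }

    // A plugin 'exports' a method simply by registering a callback:
    $GLOBALS['_CGIAPP_PLUGINS']['validate_form'] = array('FormValidator', 'validate');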
If you have any requests or comments on the roadmap, please feel free to contact me.
Posted 2004-09-20
New site is up!
The new weierophinney.net/matthew/ site is now up and running!
The site has been many months in planning, and about a month or so in actual coding. I have written the site in PHP, instead of flatfiles, so as to:
- Allow easier updating (it includes its own content management system)
- Include a blog for my web development and IT interests
- Allow site searching (everything is an article or download)
I've written it using a strict MVC model, which means that I have libraries for accessing and manipulating the database; all displays are template driven (meaning I can create them with plain-old HTML); and I can create customizable applications out of various controller libraries. I've called this concoction Dragonfly.
There will be more developments coming — sitewide search comes to mind, as well as RSS feeds for the blog and downloads.
Stay Tuned!
Posted 2004-03-30
Cgiapp: A PHP Class
After working on some OO classes yesterday for an application backend I'm developing for work, I decided I needed to create a BREAD class to make this simpler. You know, Browse-Read-Edit-Add-Delete.

At first, I figured I'd build off of what I'd done yesterday. But then I got to thinking (ah, thinking, my curse). I ran into the BREAD concept originally when investigating CGI::Application; a number of individuals had developed CGI::Apps that provided this functionality. I'd discarded them, usually because they provided more functionality than I needed or because they introduced more complexity than I was willing to tackle right then.

But once my thoughts had gone to BREAD and CGI::App, I started thinking how nice it would be to have CGI::Application for PHP. And then I thought, why not? What prevents me from porting it? I have the source…

So, today I stayed home with Maeve, who, on the tail end of an illness, evidently ran herself down at daycare yesterday, and stayed home sleeping most of the day. So, while she was resting, I sat down with a printout of the non-POD code of CGI::App and hammered out what I needed to do. Then, when she fell asleep for a nap, I typed it all out and started testing. And, I'm proud to say, it works. For an example, visit my development site to see a very simple, templated application in action.
From the blog
Posted 2004-03-29
B. Gates: Open Source Programmer?
I just read coverage of a panel of programming luminaries on Salon; the topic of discussion was about the state of programming. In the course of the discussion, the subject of Open Source came up. Several of the luminaries — which included architects of the Mac OS and Windows, as well as others — derided the community for basically reinventing the wheel, and wheels that need to be re-thought entirely anyways. One questioned, "Why is the idealism just about how the code is shared — what about idealism about the code itself?"
Andy Hertzfeld (who helped develop the original Mac OS) was sitting on the panel, and jumped in. He has been working with Eazel and Chandler in recent years, and thus has an inside view of open source. His initial comment: "It's because they want people to use the stuff!" Basically, they program Windows- or Mac-like interfaces because then people will be willing to try it out. They program office suites because people "need" an office suite to be productive. Such offerings hook them into the OSS movement.
Another participant, Dan Bricklin (of VisiCalc, a pioneering spreadsheet program) shared an anecdote from Bill Gates. Evidently, Gates gave an interview (with Lammers — look up this person) in which he explained that his work on MS's BASIC compiler was done by looking at how other programmers had accomplished the task. In his own words, "The best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating systems."
So basically, Gates was an early adopter of OSS methodologies… Interesting to see that today he's so protective of MS code. Guess money might do that to you.
From the blog
Posted 2004-03-28
Scrap that. We're gonna' use PHP
I've been researching and coding for a couple months now with the decision that
I'd rewrite the family website/portal using mod_perl
with CGI::Application
.
I still like the idea, but a couple things recently have made me rethink it.
For starters, the perl DBI is a bit of a pain to program. At work, I've become very accustomed to using PEAR's DB library, and while it's in many ways derived from perl's DBI, it's much simpler to use.
Then there's the whole HTML::Template debacle. There are several ways in which to write the templates, but they don't all work in all situations, and, it seems, they're a bit limited. We've started using PHP's Smarty at work, and it's much more intuitive, a wee bit more consistent, and almost infinitely more extendable. I could go the Template::Toolkit route for perl, but that's almost like learning another whole language.
Then, there's the way objects work in perl versus PHP. I've discovered that PHP objects are very easy and very extendable. I wouldn't have found them half as easy, however, if I hadn't already been doing object oriented programming in perl. One major difference is how easy it is in PHP to create new attributes on the fly, and the syntax is much easier and cleaner.
Add to that the fact that if you want to dynamically require modules in perl, you have to go through some significant, often insurmountable, hoops. So you can't easily have dynamic objects of dynamically defined classes. In PHP, though, you can require_once or include_once at any time without even thinking.
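For instance (class name and path hypothetical), deciding what to load at runtime is trivial in PHP:

// Pick a class at runtime; $type might come from configuration or
// a request parameter (validate it first in real code!).
$type  = 'Gallery';
$class = 'Module_' . $type;
require_once 'modules/' . $class . '.php';
$module = new $class();

In perl, the equivalent typically means an eval'd use statement or manual require gymnastics; here it just works.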
The final straw, however, was when I did my first OO application in PHP this past week. I hammered it out in a matter of an hour or so. Then I rewrote it to incorporate Smarty in around an hour. And it all worked easily. Then I wrote a form-handling library in just over two hours that worked immediately — and made it possible for me to write a several-screen application in a matter of an hour, complete with form, form validation, and database calls. Doing the same with CGI::Application took me hours, if not days.
So, my idea is this: port CGI::Application to PHP. I love the concept of CGI::App — it's exactly how I want to program, and I think it's solid. However, by porting it to PHP, I automatically have session and cookie support, and database support is only a few lines of code away when I use PEAR; I'll add Smarty as the template toolkit of choice, but make it easy to override the template methods to utilize other engines. I get a nice MVC-style application template, but one that makes developing quickie applications truly a snap.
This falls under the "right-tool-for-the-job" category; perl, while a wonderful language, and with a large tradition as a CGI language, was not developed for the web as PHP was. PHP just makes more sense in this instance. And I won't be abandoning perl by any stretch; I still use it daily at work and at home for solving any number of tasks from automated backups to checking server availability to keeping my ethernet connection alive. But I have real strengths as a PHP developer, and it would be a shame not to use those strengths with our home website.
From the blog
Posted 2004-03-19
PHP Class Tips
We're starting to use OO in our PHP at work. I discovered when I started using it why I'd been having problems wrapping my head around some of the applications I've been programming lately: I've become accustomed in Perl to using an OO framework. Suddenly, programming in PHP was much easier.
There are a few things that are different, however. It appears that you cannot store objects in object attributes and then reference them like this:

$object->db->query($sql)
PHP doesn't like that kind of syntax (at least not in versions 4.x). Instead, you have to pass a reference to the object in the attribute, then set a temporary variable to that reference whenever you wish to use it:
$object->db =& $db;   // store a reference to the shared object
...
$db =& $object->db;   // pull the reference back into a temporary variable
$res = $db->query($sql);
What if you want to inherit from another class and extend one of the methods? In other words, you want to use the method from the parent class, but you want to do some additional work around it? Simple: use parent::
// Assuming a parent class that defines method1():
class Child extends SomeParent
{
    function method1()
    {
        /* do some pre-processing */
        parent::method1(); // do the parent's version of the method
        /* do some more stuff here */
    }
}
Update:
Actually, you can reference objects when they are attributes of another object; you just have to define the references in the correct order:
$db =& DB::connect('dsn');
$this->db =& $db;
...
$res = $this->db->query($sql);
I've tested the above syntax with both PEAR's DB and with Smarty, and it works without issue.
From the blog
Posted 2004-02-05
HTML::FillInForm
The CGI::Application::ValidateRM
module utilizes HTML::FillInForm
to fill in
values in the form if portions did not pass validation. Basically, it utilizes
HTML::Parser
to go through and find the elements and match them to values.
It's used because the assumption is that you've built your form into an HTML::Template, and that way you don't need to put program logic into the form.
Seems another good candidate for using FillInForm
would be to populate a form
with values grabbed from a database… I should look into that as well!
From the blog
Posted 2004-02-05
HTML::Template notes
I've used HTML::Template
a little, mainly in the Secret Santa project I did
this past Christmas for my wife's family. One thing I disliked was using the
normal syntax: <TMPL_VAR NAME=IMAGE_SRC>
— it made looking at it difficult (it
wasn't always easy to tell what was an HTML tag, what was plain text, and what
was HTML::Template
stuff), and it made it impossible to validate my pages
before they had data.
Fortunately, there's an alternate syntax: wrap the tag in HTML comments: <!-- TMPL_VAR NAME=IMAGE_SRC --> does the job. It uses more characters, true, but it gets highlighted differently than HTML tags, and that's worth a lot.
And why do I have to say "NAME=" every time? That gets annoying. As it turns out, I can simply say <!-- TMPL_VAR IMAGE_SRC -->, and that, too, will get the job done.
Finally, what about those times when I want to define a template, but have it
broken into parts, too? Basically, I want HTML::Template
to behave a little
like SSI. No worries; there's a TMPL_INCLUDE
tag that can do this: <!-- TMPL_INCLUDE NAME="filename.tmpl" -->
.
From the blog
Posted 2004-02-05
CGI::Application::ValidateRM and Data::FormValidator
I've been reading a lot of posts lately on the CGI::App
mailing list about using
CGI::Application::ValidateRM
(RM == Run Mode); I finally went and checked it
out.
CGI::App::ValRM
uses Data::FormValidator
in order to do its magic.
Interestingly, D::FV is built much like how I've built our formHandlers library at work — you specify a list of required fields, and a list of fields that need to be validated against criteria, then provide the criteria. It's exactly how I would have done our libraries had we been working in perl — supplying the constraint as a regexp or anonymous sub in a hashref for the field.
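To give the flavor in PHP terms (the field names and helper function are invented for illustration), such a profile amounts to:

// A hypothetical validation profile: a list of required fields,
// plus a constraint per field given as a regexp or a callback.
$profile = array(
    'required'    => array('name', 'email', 'age'),
    'constraints' => array(
        'email' => '/^[^@\s]+@[^@\s]+\.\w+$/',  // regexp constraint
        'age'   => 'is_valid_age',              // callback constraint
    ),
);

function is_valid_age($value)
{
    return ctype_digit((string) $value) && $value >= 18;
}

Data::FormValidator expresses the same idea as a perl hashref, with regexps and anonymous subs as the constraints.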
Anyways, it looks like the combination of CGI::App::ValRM
with CGI::App
could greatly simplify any form validations I need to do on the site, which will
in turn make me very happy!
From the blog
Posted 2004-02-04
Design Ideas
I had some success last night with the My::Portal
CGI::Application
superclass I'm building — I actually got it working with CGI::Wiki::Simple
(after I debugged the latter to fix some delegation issues!). Now that I know the "proof-of-concept" works, I'm ready to start in on some other issues.
The first issue is: how can I specify different directories for different applications to search for templates, while retaining the default directory so that the superclass can build the final page? I could always simply keep all templates in a single directory and simply prefix them, but that seems inelegant, somehow. I'll need to explore how HTML::Template integration works with CGI::App.
Second, and closely related: how do I want it to look, in the end? I could see keeping the design we have — it's clean, simple, and yet somehow functionally elegant. Okay, I'm exaggerating — it's your standard three-column with header and footer. But it goes with the idea of blocks of content. I need to think about that.
I saw a design idea for a WikiWikiWeb today, though, that totally changed my ideas of how a Wiki should look. I hadn't been to Wikipedia for some time, but a Google link to Gaston Julia showed up on Slashdot as it shut down a site in Australia, and so I visited it. I like the new design — it separates out the common links needed into a nice left menu, and puts a subset of that at the top and bottom of the main column as well, using nice borders to visually separate things. I much prefer it to PhpWiki's default style, as well as to anything else I've really seen so far relating to Wiki layout.
From the blog
Posted 2004-01-23
More CGI::App research... Try the manual!
So, I'm a bit of an idiot… it's been so long since I looked at CGI::App
, and
yet I felt I had such a grasp on it, that I overlooked the obvious step: look
at the manual!
In particular, there's a whole series of methods that are used to tailor CGI::App to your particular needs, and these include cgiapp_init(), cgiapp_prerun(), and cgiapp_postrun().
- cgiapp_init() is used to perform application-specific initialization behaviour, and is called immediately before the setup() method. It can be used to load settings from elsewhere; if it were called only from a superclass from which other modules inherited, it would then provide common settings for all modules.
- cgiapp_prerun() is called immediately before the selected run-mode. If it were called only by your superclass, you could perform items such as authorization or even form validation; this would then be standard for all your applications. (You can use the $self->prerun_mode('mode') call to override the selected run-mode, for instance, thus allowing you to redirect to a different mode if a user isn't permitted there.)
- cgiapp_postrun() is called after the run-mode has returned its output, but before http headers have been generated or anything sent to the web browser. Again, if defined in a superclass, it means that you could then place the run-mode output in a specific place within a larger template, and even call other routines to fill in other parts of the main template. You could even check to see if certain parameters were passed to the page, and change the type of output you send back (XML, PDF, image, etc.), allowing you to have a common query element that changes the output type (e.g., a 'print' parameter that returns a PDF or a stripped down template).
In addition, you could specify in the superclass that you're using
CGI::Simple
for the query object (using the cgiapp_get_query
method), or
you could rewrite the load_tmpl()
method to use Template::Toolkit
or some
other templating system, etc.
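Sketched as code (shown in PHP in the spirit of the Cgiapp port described above; every name and body here is hypothetical), a superclass built on these hooks might look like:

// A hypothetical application superclass wiring up the three hooks.
class Site_Base extends Cgiapp
{
    function cgiapp_init()
    {
        // Load settings shared by every application module
        // (path hypothetical).
        $this->config = parse_ini_file('/path/to/site.ini');
    }

    function cgiapp_prerun()
    {
        // Enforce authorization before any run-mode executes;
        // prerun_mode() switches to a different run-mode.
        if (!$this->user_is_authorized()) { // helper hypothetical
            $this->prerun_mode('login');
        }
    }

    function cgiapp_postrun(&$output)
    {
        // Wrap the run-mode output in the sitewide template
        // (helper hypothetical).
        $output = $this->wrap_in_site_template($output);
    }
}

Each concrete application would then extend Site_Base and get all three behaviours for free.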
Doesn't look so crazy anymore…
From the blog
Posted 2004-01-23
CGI::Application Research
I've been wanting to redevelop my home website for some time using
CGI::Application
. The last time I rewrote it from PHP to perl, I developed
something that was basically a subset of the things CGI::App
does, and those
things weren't done nearly as well.
The problem I've been running into has to do with having sidebar content, and
wanting to run basically a variety of applications. I want to have a
WikiWikiWeb, a photo gallery, some mail forms, and an article database/blog;
CGI::App
-based modules for each of these all exist. But I want them all to
utilize the same sidebar content, as well — and that sidebar content may vary
based on the user.
My interest got sparked by this node on
Perl Monks. The author tells of an acquaintance who goes
by the rule that a CGI::App
should have 10-12 states at most; more than that,
and you need to either break it apart or rethink your design. And all CGI::App
s
inherit from a common superclass, so that they share the same DB connections,
templates, etc.
So, I've been investigating this problem. One node on PM
notes that his ISP uses CGI::App
with hundreds of run modes spread across
many applications; they created a module for session management and access
control that calls use base CGI::Application
; each application then calls
use base Control
, and they all automatically have that same session
management and access, as well as CGI::Application
.
Another node mentions the
same thing, but gives a little more detail. That author writes a module per
application, each inheriting from a super class: UserManager.pm
, Survey.pm
,
RSS.pm
, Search.pm
, etc. You create an API for that super class, and each
CGI::App
utilizes that API to do its work.
This also seems to be the idea behind CheesePizza,
a CGI::App
-based framework for building applications. (All pizzas start out
as cheese pizzas; you simply add ingredients.) The problem with that, though,
is that I have to learn another framework on top of CGI::App
, instead of
intuiting my own.
But how do I write the superclass? Going back to the original node that sparked
my interest, I found a later reply that described how you
do this. The big key is that you override the print
method — this allows you
to customize the output, and from here you could call functions that create
your sidebar blocks, and output the content of the CGI::App
you just called in
a main content area of your template.
Grist for the mill…
From the blog
Posted 2004-01-23
robots.txt
One thing I've wondered about is the syntax of the robots.txt
file, where it's
placed, and how it's used. I've known that it is used to block spiders from
accessing your site, but that's about it. I've had to look into it recently
because we're offering free memberships at work, and we don't want them indexed
by search engines. I've also wondered how we can exclude certain areas, such as
where we collate our site statistics, from these engines.
As it turns out, it's really dead simple. Simply create a robots.txt file in your document root, and the syntax is as follows:
User-agent: *
Disallow: /path/
Disallow: /path/to/file
The User-agent
can specify specific agents or the wildcard; there are so many
spiders out there, it's probably safest to simply disallow all of them. The
Disallow
line should have only one path or name, but you can have multiple
Disallow
lines, so you can exclude any number of paths or files.