Search results
From the blog
Posted 2024-10-21
Wezterm GUI Notifications
Wezterm has a utility for raising GUI system notifications, window:toast_notification(), which is a handy way to bring notifications to you that you might otherwise miss if the window is hidden or if a given tab is inactive.
However, on Linux it's far from an ideal tool, at least under gnome-shell. (I don't know how it behaves on KDE or other desktop environments.) It raises the notification, but the notification never times out, even if you provide a timeout value (the fourth argument to the function). This means you have to dismiss each notification manually, which can be annoying, particularly if the notifications happen regularly.
So, I worked up my own utility.
From the blog
Posted 2024-09-17
Wezterm Dropdown in Gnome
In a previous article, I detailed how I use Wezterm.
One goal I had when switching to Wezterm was to ensure I could continue using a dropdown terminal, and in that article, I detailed using the tdrop utility to implement this... but with the caveat that it didn't work well under the Wayland environment.
Well, I've now found a better solution.
From the blog
Posted 2024-07-04
How I use Wezterm
I use the terminal a lot. Until the past few years, I basically used only a browser and a terminal. (The primary changes in the past couple years are that I'm using Logseq for tracking notes and todos, and now use native apps for Zoom and Slack.)
Today I'm going to detail my exploration of Wezterm, my current daily driver.
From the blog
Posted 2024-01-18
Fixing Audio Choppiness in OBS Studio on Linux
I occasionally record screencasts for work — some of these are used for the website as demos/training material, and sometimes they're used internally by our various technical teams. When I record, I use OBS Studio, which works brilliantly.
However, since the last time I recorded, I've upgraded my operating system, as well as switched over to Wayland, and I discovered after doing a recording session that my audio was super choppy.
This is how I fixed it.
From the blog
Posted 2019-04-02
Fixing gnome-shell app indicators in Ubuntu
I am a long-time gnome-shell user. I appreciate the simplicity and elegance it provides, as I prefer having a minimalist environment that still provides me easy access to the applications I use.
That said, just as with any desktop environment, I've still run into problems now and again. One that's been plaguing me since at least the 18.04 release is with display of app indicators, specifically those using legacy system tray APIs.
Normally, gnome-shell ignores these, which is suboptimal, as a number of popular programs still use them (including Dropbox, Nextcloud, Keybase, Shutter, and many others). To integrate them into Gnome, Ubuntu provides the gnome-shell extension "kstatusnotifieritem/appindicator support" (via the package gnome-shell-extension-appindicator). When enabled, they show up in your gnome-shell panel. Problem solved!
Except that if you suspend your system or lock your screen, they disappear when you wake it up.
Now, you can get them back by hitting Alt-F2 and entering r (for "restart") at the prompt. But having to do that after every time you suspend or lock is tedious.
Fortunately, I recently came across this gem:
$ sudo apt purge indicator-common
This removes some packages specific to Ubuntu's legacy Unity interface that interfere with how appindicators are propagated to the desktop. Once I did this, my appindicators persisted after all suspend/lock operations!
From the blog
Posted 2014-11-03
Fixing AMD Radeon Display Issues in Ubuntu 14.10
After upgrading to Ubuntu 14.10, I faced a blank screen after boot. As in: no GUI login prompt, just a blank screen. My monitors were on, I'd seen the graphical splash screen as Ubuntu booted, but nothing once complete.
Fortunately, I could switch over to a TTY prompt (using Alt+F1), so I had some capacity to try and fix the situation. The question was: what did I need to do?
From the blog
Posted 2012-09-20
Screencasting on Linux
I've been wanting to do screencasts on Linux for some time now, and my big stumbling block has been determining what tools to use.
The tl;dr:
- Use recordMyDesktop to record video clips, but afterwards, re-encode them to AVI (see the script I used).
- Record audio to WAV, or convert compressed audio to WAV format afterwards.
- Use OpenShot to stitch clips together and layer audio and video tracks.
- Remember to reset the video length if you change the playback rate.
- Export to a Web + Vimeo profile for best results.
From the blog
Posted 2010-03-24
GPG-signing Git Commits
We're working on migrating Zend Framework to Git. One issue we're trying to deal with is enforcing that commits come from CLA signees.
One possibility presented to us was utilizing GPG signing of commit messages. Unfortunately, I was able to find little to no information on the 'net about how this might be done, so I started to experiment with some solutions.
The approach I chose utilizes git hooks, specifically the commit-msg hook client-side, and the pre-receive hook server-side.
From the blog
Posted 2009-08-31
Enabling VPN split tunnel with NetworkManager
I've been using NetworkManager for some time now, and appreciate how easy it makes both connecting to wifi as well as VPNs. That said, I've had an issue with it that I only resolved today.
When working from home, I prefer to use a VPN split tunnel setup — I'm behind a firewall all the time, and it's useful to be able to run virtual machines while still connected to my VPN (e.g., when doing training or webinar sessions). However, I noticed some months ago that this wasn't working. I assumed at first it was a change in our network setup, but others reported that the split tunnel was working fine. It's been particularly problematic when on IRC — if the VPN drops, I lose my IRC connection, meaning I have to re-connect and re-claim my nick.
So, I did some searching, and found an interesting setting. In NetworkManager, "Configure..." then "Edit" your VPN connection, and navigate to the "IPv4 Settings" tab. Once there, click the button that says "Routes..." and select the checkbox next to "Use this connection only for resources on its network". Press Ok to close the dialog, then "Apply" to exit out of the VPN configuration. Re-connect to the VPN, and you should be all set.
Note: this will only work if your VPN server is configured to allow split tunnels. Additionally, only do so if you are behind a firewall. Practice safe networking.
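On more recent NetworkManager releases, that same checkbox corresponds to the ipv4.never-default property, so the change can also be made from the command line (a sketch; "Work VPN" is a hypothetical connection name):

```
$ nmcli connection modify "Work VPN" ipv4.never-default yes
```

Re-connect to the VPN afterwards, just as with the GUI route.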
From the blog
Posted 2009-01-17
VirtualBox Networking
I use Linux on the desktop (currently Ubuntu), but occasionally need to use Windows for things like webinars, OS-specific testing, etc. I started using VirtualBox for virtualization around six months ago, and have been reasonably satisfied; Windows boots quickly, and everything "just works." That is, until yesterday.
I was given a linux VM image running a web server and some applications I needed to review. On top of that, I needed to do so over WebEx, so that I could share my screen with somebody else. This meant I needed the following to work:
- Internet access for my Windows VM
- Access to my linux VM from my Windows VM
- Ideally, access to both guest VMs from my linux host
- Ideally, internet access for my linux host
From the blog
Posted 2008-05-16
Server Upgrades... lost entries...
My good friend, Rob, hosts my site for me, in return for helping with server maintenance. After being on Gentoo for the past three years, though, we decided it was time to switch to something a little easier to maintain, so last night we wiped the system partitions and installed Ubuntu server.
I'll say this: the setup is much faster! However, we had a few gotchas that surprised us — it didn't set up our RAID array out-of-the-box, which led to a good hour of frustration as we tried to verify that the install wouldn't wipe it, and then to verify that we could re-assemble it. (We succeeded.) Additionally, we second-guessed a few things we shouldn't have, which led to needing to back out and reconfigure. But what was over a 12-hour install with Gentoo we accomplished in a matter of a few hours with Ubuntu server — so it was a huge success that way.
Unfortunately, our mysqldump of all databases… wasn't, a fact we discovered only after importing it into the new system. I ended up losing my blog database and PEAR channel database. Fortunately, the PEAR channel has not changed at all in the past year, so we had an old backup that worked, and I had a snapshot of my blog database from three weeks ago I was able to use. As a result, there are a few missing entries, but for the most part, all works. If you commented on one of those missing entries, my apologies.
Now that the install is done, I'm also finalizing some design changes to my blog — it's time to leave the black and white for more colorful grounds. Look for a revamp in the coming weeks!
From the blog
Posted 2008-01-25
Apache HOSTNAME on Clusters
In an effort to debug issues on a cluster, I was trying to determine which machine on the cluster was causing the issue. My idea was that I could insert a header token identifying the server.
My first idea was to add the directive Header add X-Server-Ip "%{SERVER_ADDR}e" in my httpd.conf. However, due to the nature of our load balancer, Apache was somehow resolving this to the load balancer IP address on all machines of the cluster — which was really, really not useful.
I finally stumbled on a good solution, however: you can set environment variables in apachectl, and then pass them into the Apache environment using the PassEnv directive from mod_env; once that's done, you can use the environment variable anywhere.
In my apachectl, I added the line export HOSTNAME=`hostname`. Then, in my httpd.conf, I added first the line PassEnv HOSTNAME, followed by the directive Header add X-Server-Name "%{HOSTNAME}e". Voilà! I now had the hostname in the header, which gave me the information I needed for debugging.
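Pieced together, the configuration described above looks like this (a sketch; file locations vary by distribution):

```apache
# In apachectl (a shell script), before httpd is started:
#   export HOSTNAME=`hostname`

# In httpd.conf:
PassEnv HOSTNAME
Header add X-Server-Name "%{HOSTNAME}e"
```

The %{VAR}e format in the Header directive expands an environment variable, which is why PassEnv has to run first.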
From the blog
Posted 2007-10-20
Gutsy Gibbon review
Early in the week, I decided to avoid the release rush and go ahead and update my laptop to Ubuntu's Gutsy Gibbon release. Overall, it's quite good, with one caveat I'll elaborate on later.
From the blog
Posted 2007-02-17
Back on Linux Again
A little over a year ago, I stopped using Linux as my primary desktop due to the fact that a number of programs we were using were Windows dependent. Despite getting coLinux running, I've never been completely satisfied with the setup. I missed being able to paste with my middle-mouse button, and I was constantly having character encoding issues pasting back and forth between PuTTY and windows apps, couldn't access mail easily between my coLinux and Windows partitions, and overall felt that I was losing out on some productivity by not having a native linux environment as my primary OS.
Last week, we had an infrastructure change at work, and I basically realized that my Windows + coLinux setup was going to get in the way of productivity -- and that, at this point, there were now Windows applications tying me to that OS. So, I decided it was time to go back to Linux.
From the blog
Posted 2006-09-25
coLinux Recovery
As I've written previously, I use coLinux in order to have a Linux virtual machine running on my Windows XP install. It runs Debian unstable (SID), which gives me all the apt-get love I could want.
Except when an apt-get based install goes bad, that is, like it did Saturday evening. This is the tale of how I got it back up and running.
From the blog
Posted 2006-01-05
XP + Cygwin + coLinux == Productivity
I wrote earlier of my experiences using Windows XP, a move I've considered somewhat unfortunate but necessary. I've added a couple more tools to my toolbox since that have made the environment even better.
From the blog
Posted 2005-09-07
File_SMBPasswd woes
I've been cobbling together a system at work for the last couple months to allow a single place for changing all network passwords. This includes a variety of database sources, as well as passwd files and smbpasswd files. I've been making use of PEAR's File_Passwd and File_SMBPasswd, and they've greatly simplified the task of updating passwords for those types of systems. However, I've encountered some issues that I never would have expected.
I have the web user in a group called 'samba', and I have the smbpasswd file owned by root:samba. I then set the smbpasswd file to be group +rw. Simple, right? The web user should then be able to update the smbpasswd file without a problem, right? Wrong.
I kept getting errors, and on investigation continually found that the smbpasswd file permissions had reverted to 0600 — i.e., only the root user could access it. I tried using 'chattr -i' on the off-chance that the file had been made immutable (which didn't make sense, as I was able to see the permissions change). No luck.
Based on observations of when the permissions reverted, it appears that the various SMB processes will reset the permissions! An example is when someone attempts to mount a resource from the server; this accesses the smbpasswd file to perform authentication — and at this point the file permissions change. I can find no documentation to support this; these are simply my observations.
So, to get around the behaviour, I created a script that will set the file permissions to what I want them, and then gave sudo privileges to the samba group for that script. This script is then called via system() in the update script just before processing.
It's a hack, and could be made more secure, but it works.
From the blog
Posted 2005-08-19
Server upgrades, samba, and permissions, oh my!
Last week, we performed a long-needed upgrade to the company file/print/intranet server. Basically, we'd been on a Redhat 8 system, and there were upgrades we were wanting to perform, and couldn't due to lack of libraries. We could possibly have compiled from source on some occasions… but that would likely have shuttled us into a similar dependency hell as using Redhat in the first place.
So, we decided to re-install the OS, and switch to Gentoo in the process. We've found that Gentoo is a great distro for servers — it allows us to tailor the install to the server purpose, and simultaneously provides a clean upgrade path via portage.
Things went primarily without a hitch. We lost a few databases due to a bad DB backup (argh! there went the wiki!), but that was the primary extent of the damage.
When investigating the system post-install, I discovered some connectivity issues with Samba. Basically, when connecting via a *nix-based machine, we were getting symlinks reported as being local to the connecting machine, not the server. This meant that symlinks on the server weren't being followed — which caused major issues for those connecting via FTP, Mac, or Linux.
From the blog
Posted 2005-04-02
PHP, Cgiapp, and extensibility
At work this week, Rob was doing some monitoring of our bandwidth usage. We have SNMP on each of our servers now, and he uses MRTG to create bandwidth usage graphs that are updated every five minutes or so. He's been monitoring since late last year.
Before January, we had two systems going. The first, legacy, system hosted the majority of the content from garden.org, and was done using Tango 2000, a web application server that ran on top of IIS and Windows NT 4. I say 'ran', because Tango 2000 was the last version to ship; the company that made it stopped supporting it a year later. This meant we could not upgrade our server's OS to Windows 2000 or 2003, nor could we switch to a more secure web server, etc. It was a time bomb waiting to happen.
The second system is a basic LAMP system — Linux + Apache + MySQL + PHP. Rob began migrating applications to it shortly after he started at NGA 3 years ago, one application at a time. Mostly, new applications were placed on it, though in May 2003, he and the other programmer who was there at the time started migrating old applications to the technology. Part of the reason I was hired was to continue this migration.
The migration was time consuming, and plenty of other projects often got in the way. However, starting last July, we made a big push to get it all ported over — before the old WinNT server fails on us. In January, we were able to rollout the new garden.org, which runs on this new technology.
A big reason we were able to finish is because of Cgiapp. I originally ported it to PHP last year around this time, and knew that while I wanted to develop new applications using it, I wasn't so sure I could sell Rob on it.
Amazingly, it didn't take much to convince him. We had already started using Smarty for templates just before this, and were also using OOP in new development. Cgiapp just helped unify these technologies and to provide a nice, standard framework with which to program.
This last point cannot be emphasized enough. We started developing all applications in three places: an API for data access, a Cgiapp-based application, and our templates. Either one of us could pick up development of an application from the other without having to spend a day or two familiarizing ourselves with the idiosyncrasies of what the other had decided was the programming paradigm of the day. Sure, we still have our own programming styles, but the framework makes it easy to debug or extend each other's programs painlessly.
Now, back to the bandwidth reports: Rob has noticed that our bandwidth usage has been growing steadily on the new server since we switched garden.org over — a 45 degree line. At one point this week, our outgoing bandwidth was almost 3 T1s — and we were having no performance issues whatsoever. This simply would not have been possible on the old system — nor without Cgiapp. We've managed to produce both a hardware architecture and a programming framework that has proved immensely scalable — which will in turn save the organization money.
I love open source! How else can you create such high-performing software without paying through the nose for it?
From the blog
Posted 2004-09-24
Get Firefox!
Those who know me know that I love linux and open source. One particular program that firmly committed me to open source software is the Mozilla project — a project that took the Netscape browser's codebase and ran with it to places I know I never anticipated when I first heard of the project.
What do I like about Mozilla? Well, for starters, and most importantly, tabbed browsing changed the way I work. What is tabbed browsing? It's the ability to have multiple tabs in a browser window, allowing you to switch between web pages without needing to switch windows.
Mozilla came out with a standalone browser a number of months back called, first Phoenix, then Firebird, and now Firefox. This standalone browser has a conservative number of basic features, which allow for a lean download — and yet, these basic features, which include tabbed browsing and disabling popups, far surpass Internet Explorer's features. And there are many extensions that you can download and integrate into the browser.
One such extension is a tabbed browsing extension that makes tabbed browsing even more useful. With it, I can choose to have any links leaving a site go to a new tab; or have bookmarks automatically load in a new tab; or group tabs and save them as bookmark folders; or drag a tab to a different location in the tabs (allowing easy grouping).
Frankly, there's few things I can find that Firefox can't do.
And, on top of that, it's not integrated into the operating system. So, if you're on Windows, that means if you use Firefox, you're less likely to end up with spyware and adware — which often is downloaded and installed by special IE components just by visiting sites — ruining your internet experience.
So, spread the word: Firefox is a speedy, featureful, SECURE alternative to Internet Explorer!
From the blog
Posted 2004-09-11
What's keeping that device in use?
Ever wonder what's keeping that device in use so you can't unmount it? It's happened to me a few times. The tool to discover this information? lsof.
Basically, you type something like lsof /mnt/cdrom, and it gives you a ps-style output detailing the PID and name of each process using the cdrom. You can then go and manually stop those programs, or kill them yourself.
From the blog
Posted 2004-04-27
Binary packages for Gentoo
I'd read that you could get binary packages for gentoo, thus alleviating the need to compile everything. (Of course, then you lose some of the benefits of compiling everything, but you gain in speed…) Unfortunately, I mistook this with ebuilds, and never quite figured it out.
The key is to throw the -g flag:
$ emerge -g gnumeric # which is like 'emerge --getbinpkg gnumeric'
I also learned how to update packages tonight:
$ emerge sync # to sync your mirror with the gentoo mirrors
$ emerge --update portage # if necessary
$ emerge --update system # updates core system files
$ emerge --update world # updates all packages
From the blog
Posted 2004-04-22
Gentoo for several weeks
I've had a bunch of problems with my new computer — it uses ACPI, but if I load the ACPI modules, it won't boot; if I don't load them, I have to go through contortions to get the ethernet working, and it won't power down; and a bunch of other little stuff.
So, a few weeks ago, I thought, what the heck? Why not try Gentoo? I've been reading about it since it first came out, and I remember talking with Duane about it once, and it has a reputation for both being cutting edge and stable. Heck, even Wil Wheaton's endorsing it… it can't be all bad, right?
I had a few false starts — bad CDs, not quite getting how the chroot thing worked, problems with DNS (which I still don't understand; and Rob has them as well, so it's not just me). But once I got it installed… well, I'm impressed.
The thing about Gentoo is, it compiles everything from source. It's like Debian, in that it fetches all dependencies and installs those… but it's all source. So it's not exactly fast. But because everything is compiled, and because you setup C flags specific to your machine, what you get is incredibly optimized for your own machine. This 1.6GHz machine simply flies. And the memory usage just stays low.
I'd like to use it for my server… but I can't really take the server down at this point when it's making both my mom and myself money. But what a great system… I only wish I'd used it for the mail server at work.
From the blog
Posted 2004-03-29
B. Gates: Open Source Programmer?
I just read coverage of a panel of programming luminaries on Salon; the topic of discussion was about the state of programming. In the course of the discussion, the subject of Open Source came up. Several of the luminaries — which included architects of the Mac OS and Windows, as well as others — derided the community for basically reinventing the wheel, and wheels that need to be re-thought entirely anyways. One questioned, "Why is the idealism just about how the code is shared — what about idealism about the code itself?"
Andy Hertzfeld (who helped develop the original Mac OS) was sitting on the panel, and jumped in. He has been working with Eazel and Chandler in recent years, and thus has an inside view of open source. His initial comment: "It's because they want people to use the stuff!" Basically, they program Windows- or Mac-like interfaces because then people will be willing to try it out. They program office suites because people "need" an office suite to be productive. Such offerings hook them into the OSS movement.
Another participant, Dan Bricklin (of VisiCalc, a pioneering spreadsheet program) shared an anecdote from Bill Gates. Evidently, Gates gave an interview (with Lammers — look up this person) in which he explained that his work on MS's BASIC compiler was done by looking at how other programmers had accomplished the task. In his own words, "The best way to prepare is to write programs, and to study great programs that other people have written. In my case, I went to the garbage cans at the Computer Science Center and I fished out listings of their operating systems."
So basically, Gates was an early adopter of OSS methodologies… Interesting to see that today he's so protective of MS code. Guess money might do that to you.
From the blog
Posted 2004-03-11
Making a RAID array from the command line
Last night, I created my first RAID array from the commandline. It was quite simple, I discovered.
- Create your partitions using fdisk. Remember, primary partitions must be created before extended partitions.
- Look in /proc/partitions and note the new partition IDs.
- Edit /etc/raidtab and create a new RAID array. If unsure of the syntax, look up the Linux Software RAID HOWTO for more details.
- Type mkraid /dev/md?, where ? is the id of the RAID device you just entered in /etc/raidtab.
- Format the new RAID device with your favorite filesystem, assign it a mount point, and start using it!
I was impressed with how easy it was; the choices the Anaconda installer presents for creating a RAID array made it seem like the underlying process must be difficult, when in fact it's roughly the same complexity, if not easier.
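For reference, a minimal /etc/raidtab entry for a two-disk RAID-1 array looks roughly like this (a sketch based on the old raidtools syntax; the device names are examples):

```
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            32
    device                /dev/hda5
    raid-disk             0
    device                /dev/hdb5
    raid-disk             1
```

After mkraid /dev/md0 succeeds, the array shows up in /proc/mdstat.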
From the blog
Posted 2004-02-16
Learn something new everyday
Linux.com has had a running series on CLI commands for newbies. Most of it has been very basic, but there are still a few gems within. For instance, today I was introduced to apropos and whatis. Give a search term to the former, and it will list all programs whose man pages mention that term; give a program name to the latter, and it will tell you which man page addresses it.
From the blog
Posted 2004-02-04
Fun with Find
I've had occasion to need to grab a specific set of files from a large directory — most recently, I needed to grab some specific access logs from our Apache logfiles at work.
Enter find.
I needed to get all files newer than a specific date, and with the pattern 'sitename-access_log.timestamp.gz'. I then needed to tar up these files and grab them for processing. So, here's what I did:
- The -newer filename test tells find to locate files newer than filename.
- The -regex flag tells find to locate files matching the regular expression. The regex syntax find uses is a little strange, however, and didn't follow many conventions I know; for one thing, it's assumed that the pattern you write will match against the entire path, and not just a portion of it. What I ended up using was -regex '.*access_log.*gz', and that worked.
- The -printf flag tells find to format the printed output. This is useful when using the output of find in another program. For instance, tar likes a list of filenames… so I used -printf "%p ", which separated each filename with a space.
I then backticked my full find statement and used it as the final argument to a tar command; voila! instant tar file with the files I need!
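The full pipeline might look something like this (a sketch using a scratch directory and made-up log names and dates):

```shell
# A sketch of the approach described above, in a scratch directory.
demo=$(mktemp -d)
cd "$demo"
touch -d '2004-01-01' cutoff                      # reference file for -newer
touch -d '2003-12-15' old-access_log.20031215.gz  # older than the cutoff
touch -d '2004-01-10' www-access_log.20040110.gz  # newer than the cutoff
# find emits a space-separated list (via -printf "%p "), which tar consumes:
tar -cf logs.tar $(find . -newer cutoff -regex '.*access_log.*gz' -printf "%p ")
tar -tf logs.tar   # lists only the newer log
```

The backticked form from the post behaves the same as the $(…) substitution used here.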
From the blog
Posted 2004-01-25
Ctrl-S and Ctrl-Q in *nix systems
I just ran into this not long ago, and wish I'd discovered it years ago. Basically, Ctrl-S suspends terminal output (XOFF flow control), while Ctrl-Q resumes it (XON). This is useful when in g/vim or screen and you manage to lock up your application because you accidentally hit Ctrl-S when reaching for another key combo.
From the blog
Posted 2004-01-23
More SSH tips: Tunnelling
I wrote up a short tutorial today on the IT wiki about SSH tunneling. What I didn't know is that you can start a tunnel after you've already ssh'd to another machine. Basically, you:
- Press Enter
- Type ~C
and you're at an ssh> prompt. From there, you can issue the tunnel command of your choice: -R7111:localhost:22, for instance.
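As a session sketch (the host name and ports are examples):

```
$ ssh user@gateway           # an existing ssh session
user@gateway:~$              # press Enter, then type ~C
ssh> -R7111:localhost:22     # port 7111 on gateway now forwards to local port 22
```

The ~C escape is only recognized at the start of a line, which is why the Enter keypress comes first.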
From the blog
Posted 2004-01-23
IT hiring principles
I was just reading an article about the Dean campaign's IT infrastructure, and there's an interesting quote from their IT manager, Harish Rao:
"I believe in three principles", he said. "First I always make sure I hire people I can trust 100%. Second, I always try to hire people who are smarter than I am. Third, I give them the independence to do as they see fit as long as they communicate about it to their other team members. We've had a lot of growing pains, a lot of issues; but we've been able to deal with them because we have a high level of trust, skill and communication."
I know for myself that when I (1) don't feel trusted, and/or (2) am not given independence to do what I see as necessary to do my job, I don't communicate with my superiors about my actions, and I also get lazy about my job because I don't feel my work is valued.
Fortunately, I feel that in my current work situation, my employers followed the same principles as Rao, and I've felt more productive and appreciated than I've felt in any previous job.
From the blog
Posted 2004-01-21
Making RCS a little easier...
One thing I noticed today when using RCS is that it isn't terribly user friendly — you need to checkout a file to make edits. Often, I make edits, and then want to commit my changes.
So I wrote a wrapper script called revise. It makes a temporary copy of the file you've been editing, checks it out of RCS with locking, makes it writeable, moves the temporary copy to the permanent name, checks it in and unlocks it (which prompts for a log message), and then makes the file writeable for the user and group again. The script is outlined here:
#!/bin/bash
FILE=$1
cp $FILE $FILE.new
co -l $FILE
chmod u+w $FILE
mv $FILE.new $FILE
ci -u $FILE
chmod ug+w $FILE
Being the ROX-Filer centric person I am, I also wrote a quick perl script called rox-revise that I can then put in my SendTo menu. It parses the file's path, changes to that directory, and then calls the revise script on the filename, from within a terminal. This script follows:
#!/usr/bin/perl -w
use strict;
use vars qw/$path $file $TERMCMD $REVISE $ZENITY/;

# Configurable variables
$TERMCMD = "myTerm";  # What terminal command to use; must be xterm compliant
$REVISE  = "revise";  # What command to use to revise (i.e. rcs ci) the file
$ZENITY  = "zenity";  # The zenity or dialog or xdialog command to use

# Grab the filename from the command line
$path = shift;
$file = $path;

# If no file given, raise a dialog and quit
if (!$path || ($path eq '')) {
    system(
        $ZENITY,
        '--title=Error',
        '--warning',
        "--text=No path given to $0; rox-revise quit!"
    );
    exit 0;
}

# Get the path to the file and switch to that directory
if ($path =~ m#/#) {
    $path =~ s#^(.*)/.*?$#$1#;
    if ($path !~ m#^/#) { $path = "./$path"; }
    chdir $path or die "$path not found!\n";
} else {
    # Or else assume we're in the current directory
    $path = './';
}

# Get the filename
$file =~ s#^.*/(.*?)$#$1#;

# Execute the revise statement
my $failure = system($TERMCMD, '-e', $REVISE, $file);
if ($failure) {
    # on failure, raise a dialog
    system(
        $ZENITY,
        '--title=Error',
        '--warning',
        "--text=Unable to revise $file"
    );
}

1;
Now I just need to check out Subversion, and I can have some robust versioning!
Search tips
- Use #{some tag} or tag:{some tag} to filter by tag; using this more than once will find results that match all tags.
- Use year:{year} to filter by year. If you use this multiple times, the last one wins.
- Use begin:{year-month-day} to restrict to results beginning on that day. Last one wins.
- Use end:{year-month-day} to restrict to results ending on that day. Last one wins.
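For example, the filters can be combined:

```
#linux year:2004
tag:linux begin:2004-01-01 end:2004-06-30
```

The first finds posts tagged "linux" from 2004; the second narrows the same tag to the first half of 2004.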