From zsh to fish

I'm a longtime zsh user. A colleague introduced me to it in 2009, and I was an instant convert, if for nothing else than directory aliases and simpler $PATH management. Within a couple of years, I discovered oh-my-zsh, which put my shell on steroids: a ton of completion capabilities, better prompts, and more.

But a few years ago, I started noticing that my shell load times were getting worse and worse. At that time, I discovered I could easily switch to vanilla zsh with zplug managing a small number of plugins I used (nvm, fzf, and a few others). I also discovered starship, which gave me more prompt options, with faster startup times.

And yet...

I kept seeing Julia Evans recommend fish. She would often note that fish just does things other shells need plugins or customization for: decent tab completion, better history capabilities, and more.

So, in the past couple of weeks, I finally took the plunge and gave fish a try.

Configuration

First off, the switch more than halved the amount of configuration I needed to have an equivalent setup. I was able to remove a ton of configuration I had in zsh around history management, autocompletion, and overrides.

As I noted, with zsh I was using zplug, and I had a half-dozen plugins. With fish, I initially had no plugins, but to get nvm running, I needed to install fisher, the de facto fish plugin manager. But that's literally the only plugin I'm now using.

I continued to use starship and fzf, which meant two lines of configuration in my fish configuration, and no changes otherwise.
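For reference, that amounts to something like the following in config.fish (the fzf line assumes a recent fzf release that supports the --fish shell-integration flag; older releases used a separate plugin instead):

```fish
# ~/.config/fish/config.fish
starship init fish | source
fzf --fish | source
```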

And there's no lag whatsoever when starting up a shell. With zsh, even with my minimal config, I would sometimes wait a second or two for a shell to spawn. With fish, no wait.

Discoveries

One thing I've kept from oh-my-zsh is a utility called take, which does the following:

  • If given a directory name, it creates it, and then enters it
  • If given an archive file, it unarchives it into a directory, and then enters that directory
  • If given a git repository name, it clones it, and then enters that directory

When porting this to fish, I discovered some really cool features of that shell.

First, fish automatically autoloads functions from the functions/ subdirectory of your fish configuration. If you name the file after the function (e.g., functions/take.fish), fish loads it on demand, the first time the function is called. This is a nice performance improvement over loading everything up front.
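As a minimal illustration (using a hypothetical greet function, assuming the default XDG config location):

```fish
# ~/.config/fish/functions/greet.fish
# Autoloaded the first time `greet` is invoked in a session.
function greet -a name
    echo "Hello, $name!"
end
```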

Second, fish uses a standard syntax for every block statement. Instead of sometimes needing braces and sometimes needing a closing keyword that varies by block type (e.g., fi to end a conditional, done to end a loop), all blocks close with the end keyword. This is far simpler to remember and less error-prone.
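A quick sketch of what that uniformity looks like in practice:

```fish
# Conditionals and loops alike close with `end`
if test -d /tmp
    echo "/tmp exists"
end

for f in *.txt
    echo "found $f"
end
```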

Third, when defining a function, you can specify variable names into which to capture arguments. This is far easier to visually parse and use than in standard POSIX shells, where you use positional parameters. As an example:

function takedir -a newpath
    mkdir -p $newpath && cd $newpath
end

Fourth, where POSIX shells use the notation varname=value to define variables, fish provides a better built-in, the set directive, which can:

  • Define block-local (set -l) and function-local (set -f) variables
  • Define globally-available variables (set -g)
  • Define environment variables (set -x, for export) that persist to child shells

Using this, it's far easier both to ensure that a variable is scoped correctly and to reason about the scope of a given variable. And those captured arguments I mentioned? They're automatically scoped to the function, so they won't bleed outside of it.
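The scoping flags look something like this in practice (demo_scopes is a hypothetical function, just for illustration):

```fish
function demo_scopes
    set -l inner "block-local"   # visible only within this block
    set -g shared "global"       # visible everywhere in this shell
    set -x EDITOR vim            # exported to child processes
end
```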

(There's also a "universal" flag, -U, which not only sets the variable globally in the current shell, but makes it available in any other running fish instances and persists it for future invocations. This seems dangerous, though!)

The combination of these meant that the take declaration took fewer lines of code, was easier to understand, and was less likely to bleed state. I'll take it!
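For illustration, a stripped-down port covering just the directory and tarball cases might look something like this; it's a sketch under the assumptions above, not my exact implementation, and it omits the git-clone case:

```fish
function take -a target
    switch $target
        case '*.tar.gz' '*.tgz'
            # Unpack the archive into a directory named after it, then enter it
            set -l dir (string replace -r '\.(tar\.gz|tgz)$' '' (basename $target))
            mkdir -p $dir && tar -xzf $target -C $dir && cd $dir
        case '*'
            # Plain directory name: create it (and any parents) and enter it
            mkdir -p $target && cd $target
    end
end
```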

But will I stick with it?

I think so. I've even put it on some servers I maintain, and it instantly gave me more and better functionality than the default shell available on each, which makes working on those servers more comfortable. Keeping configuration minimal is something I've been paying attention to, as more configuration makes it harder to reason about how things work, and more likely that something breaks or fails in interesting ways when updating or upgrading.

I'll still need to keep my bash chops; provisioning scripts for containers and VMs generally have to depend on this lowest common denominator. However, having a useful out-of-the-box shell for my workstation and servers that's easy to script? I'll take it.