fish shell quickstart for converting bash scripts

After some years of bash and PowerShell, and some hours of using fish, I’ve realised that expansion & predictive typeahead are the features that really matter in a shell, whereas “be a great programming language” is less important than I thought, because there is no need to write scripts in the language of your shell.

Fish has slicker typeahead and expansions than bash or even PowerShell. But to switch to a fish shell, you do still have to convert your profile & start-up scripts. So here’s my quick-start guide for converting bash to fish.

  • Do this first: at the fish prompt type help. Behold! the fish documentation in your browser is much easier to search than man pages are.
  • Calmly accept that fish uses set var value instead of var=value. Roll your eyes if it helps.
  • Use end everywhere that bash has fi, done, esac, braces {} etc. e.g. a function definition is done with function ... end. The keywords do and then are redundant everywhere; just remove them. else has a semicolon after it. case requires a leading switch (expr). (See the sketch just after this list.)
  • There is no [[ condition ]] but [ ... ] or test ... work. Type help test to see all the file and numeric tests you expect, such as if [ -f filename ]. String and regex conditionals are done with the string match command (see below). You can replace [[ -f this && -z that || -z other ]] with [ -f this -a -z that -o -z other ], but see below for how fish can also replace || and && constructions with or and and statements.
  • But first! type help string to see the marvels of proper built-in string commands.
  • Replace function parameters $*, $1, $2 etc with $argv, $argv[1], $argv[2] etc. If that makes you scowl, then type help argparse. See! That’s much better than kludging about in bash.
  • Remove the $ from $(subcommand) leaving just (subcommand). Inside quotes, take the subcommand outside the quote: "Today is $(date)" becomes "Today is "(date). (Recall that quotes in bash & fish don’t work at all like quotes in most programming languages. Quote marks are not token delimiters, and a"bc"d is a valid single token and is parsed identically to each of abcd, "abcd", abc'd'.)
  • Replace heredocs with multi-line literal strings and standard piping syntax. However, note that if you pipe or read to a variable, the default multiline behaviour is to split on newline and generate an array. Defeat this by piping through string split0 – see https://fishshell.com/docs/current/index.html#command-substitution
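To make those rules concrete, here is a small before-and-after sketch; the function name and logic are invented for illustration:

# the bash original:
#     backup() {
#         if [[ -f "$1" ]]; then cp "$1" "$1.bak"; else echo "no such file: $1"; fi
#     }
# converts to this fish:
function backup
    if test -f $argv[1]
        cp $argv[1] $argv[1].bak
    else
        echo "no such file: $argv[1]"
    end
end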

Search-and-replace Script Snippets

Here is my hit-list of things to search and replace to convert a bash shell to fish. These resolved almost all of my issues in converting a few hundred lines of bash script to fish.

  • var=value → set var value
  • export var=value → set -x var value
  • export -f functionname → redundant in fish; just remove it.
  • alias abbr='commandstring' → no change. (alias syntax is accepted as an abbreviation for a function definition since fish 3.)
  • command $(subshell command) or command `subshell command` → command (subshell command), or command (subshell command | string split0). Just remove the $ but keep the (); see below for when you want to add string split0.
  • command "$(subshell command)" → command (subshell command). Remove both the $ and the quotes "" to make this work.
  • if [[ condition ]] ; then this ; else that ; fi → if [ condition ] ; this ; else ; that ; end. (See below for more on fish’s multiline and and or syntax.)
  • if [[ number != number ]] ; then this ; else that ; fi → if [ number -ne number ] ; this ; else ; that ; end
  • while condition ; do something ; done → while condition ; something ; end
  • $* → $argv
  • $1, $2 → $argv[1], $argv[2]. (But see help argparse.)
  • if [[ testthis =~ substring ]] → if string match -q '*substring*' testthis. (string match without -r does glob-style matching.)
  • if [[ testthis =~ regexpattern ]] → if string match -rq regexpattern testthis. (string match with -r does regex matching.)
  • [ guardcondition ] && command and [ guardcondition ] || command → work as-is. (But see or and and below for when it’s more complex.)
  • var=${this:-$that} → if set -q this ; set var $this ; else ; set var $that ; end
  • cat > outfile <<< "heredoc" and cat > outfile <<< "multiline … heredoc" → echo "multiline … heredoc" | cat > outfile. There are no heredocs, but multiline strings are fine. (NB printf is better than echo for anything complicated, in any shell.)
  • if [[ -z $this && $that =~ $pattern ]] → if [ -z $this ] ; and string match -rq $pattern $that
  • content=$(curl $url) → set content (curl $url | string split0). (Without the pipe to string split0, content will be split on newlines into an array of lines.)

Fish’s multiline and and or syntax

Fish has a multiline and and or syntax that may be clearer than && and || in both conditionals and guarded commands. It is less terse.

[ condition ]
and do this
or do that

That said, && and || are still valid in commands:

[ condition ] && do this || do that
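For instance, a guarded pair of commands in the multiline style (the file name is invented):

test -f ~/.profile
and echo "found a .profile"
or echo "no .profile here"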

Other gotchas

  • You may have to read up on how fish does parameter expansion, and especially handles spaces, differently to bash (see the sketch after this list).
  • Pipe & subcommand output to multiline strings or arrays: set x (cat myfile.txt) will set x to an array of the lines of myfile.txt. To keep x as a single multiline string, use string split0: set x (cat myfile.txt | string split0)
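The headline difference with spaces: fish does not word-split variables, so most of bash’s defensive double-quoting becomes unnecessary. A quick sketch (file name invented):

set f "my file.txt"
cat $f    # fish passes exactly one argument to cat; no quotes needed
          # in bash, an unquoted $f would have split into two words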

Official tips for new fishers:

See the FAQ at https://fishshell.com/docs/3.0/faq.html

Choosing an Affordable Mechanical Keyboard

tl;dr: KeyChron are brilliant and cheaper than the alternatives.

My nephew was foolish enough to ask about the keyboard he spotted in a photo of my desktop so, armed with my Keychron K2, I explained what led me to it.

I got out the kitchen scales and improvised some 2-10 gramme weights, and carefully weighed the actuation force on my MacBook and on my Logitech K480. I found that the MacBook is approx 60 grammes and that the K480 (which I find too much hard work to type on) is about 70g. To me, the difference feels much bigger than that, but that’s what the scales told me. (And yes, the A-level physicists amongst you will know that by gramme I really mean gramme-force, nearly ten millinewtons.)

I deduced that I would be happy with any switch of 60g or below.
I then noted that the Gateron Blue is allegedly noisy (fine if you don’t share a room with someone, anti-social if you do); and I examined the graphs of the actuation curves. I would have preferred the ‘clicky’ buckling spring feel of the Blue but I’ve used a noisy keyboard before and feel the noise can be a real downer. I definitely didn’t want the linear feel which the Red and Black have (allegedly gamers like it, but I don’t) so that left me with the Brown ‘Tactile’ which is in between.

I went for aluminium backlit because boo to plastic (don’t kid yourself though; I do not know whether the carbon footprint of producing the aluminium keyboard is any less than that of the plastic). Nightlife without backlighting is grim; it is a must-have for me. My other must-haves are Mac keycaps, and either the “tenkeyless” keyset or the full 101 keys, because I hate not having page up/down and home/end on the main keyboard.

I agonised over K1 or K2 and went for K2 because the K1 didn’t have the Brown switches. But … having become a closet Apple fanboi I am now disappointed that the K2 sits much higher off the desk than the Apple ultra-flat style, and I may yet buy a K1 instead. Or a wrist rest.

I assure you however that the K2 is the best keyboard I’ve had for years. I had an IBM buckling spring keyboard for a while in the 1990s, and a noisy Cherry for a while last year. The KeyChron has the same solid feel as the IBM had, but with the Brown switches it is lighter and feels slightly less industrial. It is really nice, impressively priced, and I very much like it.

It will make you want to type long letters, or—should you happen to have WhatsApp Web on your computer—really really long messages to your nephew about how good your keyboard is. 🤷

Also 10% off! https://keychronwireless.refr.cc/chriscarroll

I did spend a while looking at other keyboards, mostly mechanical, but wasn’t willing to pay the enormous prices. KeyChron is a half or even a third of the price of other good mechanical keyboards. Seemed a no-brainer to me.

Customise macOS XQuartz: xinitrc doesn’t work

If you installed XQuartz and are, for instance, irritated by the small white xterm window you get, you might try customising it in the usual way by editing a .xinitrc file. If only.

Instead, try this:

defaults read org.macosforge.xquartz.X11

to see all the settings; or to permanently change the startup xterm window, something like:

defaults write org.macosforge.xquartz.X11 app_to_run \
 "/opt/X11/bin/xterm -fa Monaco -fs 12 -fg green -bg black -sb -sl 1000"

Or, if you have installed a better bash with Homebrew, then e.g.:

defaults write org.macosforge.xquartz.X11 app_to_run \
  "/opt/X11/bin/xterm -fa Monaco -fs 12 -fg green -bg black -sb -sl 1000 -ls /usr/local/bin/bash"

You can check your syntax before writing the default just by running your quoted command in a terminal, and then watch as XQuartz opens and xterm runs your shell:

~/Source/Repos/VMs] /opt/X11/bin/xterm -fa Monaco -fs 12 -fg green \
    -bg black -sb -sl 1000 -ls /usr/local/bin/bash

To set the default command for a new xterm window opened from the XQuartz Applications menu, the menu itself lets you edit the command.

In short, read the FAQ : https://www.xquartz.org/FAQs.html.

Migrating .Net Framework to NetCore

Until NetCore 2 came along, migrating an existing .Net Framework project to dotnet core was likely a painful exercise in futility, as you time-consumingly discovered just how many bits of the .Net Framework didn’t exist on netcore 1. Small things, like key parts of AdoNet. It was a bleeding-edge experience.

But then there was dotnet core 2 with not-very-far-off 100% Api compatibility. And now all is sweetness and light.

Seriously. It is. Huge chunks of your .Net framework project will now ‘just work’ on .netcore, with little or no editing. In fact it’s so easy, you might consider multi-targeting net framework and netcore just to show off.

Console apps and class libraries are straightforward. Considerations for UI and platform technologies:

  • For AspNet, there is a learning curve from Mvc versions 3/4/5 to AspNetCore for which I will refer you to the tutorials. There is then work to do which I do cover below.
  • For Windows Forms and WPF projects, I refer you to the considerations in the MS guide for porting WinForms.
  • WF and WCF-serverside are not (currently) migratable, although WCF-clientside is. Moving web-facing WCF-serverside to dotnetcore with NancyFx or to AspNetCore would be a smallish rewrite, about the same as moving to WebApi2; but it seems that MS are working on WCF serverside. Remoting, however, is gone. So if your preference for WCF is that it’s a better style than all this pseudo-RESTy AspNet nonsense, then consider Nancy.

Overview

  1. Start with a new, empty dotnet core 2 project
  2. Drag-n-drop all your existing code into it, excluding AssemblyInfo.cs
  3. Deal with .settings and .config files
  4. Re-add your NuGet dependencies
  5. Deal with other code differences
  6. Build and Go!

Well okay, that last step, Build-and-Go, is more likely to be Build-and-Fix-The-Next-Compiler-Error-And-Build-Again. But it is mostly straightforward.

To migrate AspNet to AspNetCore there are more steps, and you do have to start with the learning curve for a whole new framework. That said, it’s like someone thought, “let’s redo Mvc as WebApi2 + Razor + Views but with a cleaner startup style and with mandatory dependency injection”. Your controllers will hardly change. I do find AspNetCore simpler, cleaner, easier to work with. Roughly, your steps are:

  1. Work through the getting started tutorials & learning curve. (Estimate 5-10 hours per person?)
  2. Migrate your startup config to the new approach (2-10 hours depending on how much novel startup code you have)
  3. Migrate any custom authentication to the new approach (An hour or so if you read the gotcha below)
  4. Consider whether your Attribute-based filters will remain as attributes or be re-worked into something else.
  5. Re-tool unit tests which mocked the old Asp.Net Mvc dependencies

Larger sets of projects

If you are dealing with not just a single project but a whole load of them, you should first look through Microsoft’s guide to porting. The main reason not to work through those steps for a single project is that, since netcore2, the fastest way to analyse “what problems will I have in porting?” for a smallish project is to just do it! You can most likely finish the job faster than you can use the analysis tools to predict what problems you will have. That wasn’t always the case before netcore 2. A couple of thoughts from that guide that I do recommend though:

Start with a new empty dotnet core 2 project.

To migrate an executable you’d create a console app. For a class library, you can make it a netstandard2 project, which makes the project available for use in .net 4.6.1+ / 4.7 as well as in dotnet core.

The command line is very trendy in dotnet core, so you can do it all with dotnet new instead of using a GUI. dotnet new with no arguments will show what templates are installed on your machine.
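For example (the project names here are invented):

dotnet new                          # no arguments: lists the installed templates
dotnet new console -o MyTool        # a new executable project
dotnet new classlib -o MyTool.Core  # a class library (netstandard2.0 by default on the netcore2 SDK)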

Drag-n-drop all your existing code in, excluding AssemblyInfo.cs

dotnet core projects assume, by default, that if there’s a code file in the directory or a subdirectory then it’s part of the project, so dragging all your code into the new project directory will just work.
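This is also why the new .csproj files are so small. A minimal Sdk-style project file needs no <Compile> items at all:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>
</Project>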

Don’t include the AssemblyInfo.cs because that gets auto-generated from the .csproj file. If you have anything of interest in your AssemblyInfo.cs, edit the .csproj file and put it in there. The AssemblyInfo properties section of Additions to the csproj properties for dotnet core shows you the element names to use if you want to re-add information. Something like:

<PropertyGroup>
  <TargetFrameworks>netstandard1.6;net40</TargetFrameworks>
  <AssemblyVersion>4.1.4.3</AssemblyVersion>
  <AssemblyFileVersion>4.1.4.3</AssemblyFileVersion>
  <PackageVersion>4.1.4.3</PackageVersion>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
  <Title>TestBase – Rich, fluent assertions and tools for testing with heavyweight dependencies: AspNetCore, AdoNet, HttpClient, AspNet.Mvc, Streams, Logging</Title>
  <PackageDescription><![CDATA[*TestBase* gives you a flying start with ....etc...]]></PackageDescription>
</PropertyGroup>

Note the new properties with names beginning with <Package...>, which will be picked up by dotnet pack when creating NuGet packages. NuGet is much easier with dotnet core; it’s kind of built-in instead of being an extra thing to learn and do.
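So packaging becomes a one-liner:

dotnet pack -c Release    # builds a .nupkg, using the <Package...> properties above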

Deal with .settings and .config files

There is a whole new approach to settings and configuration. You will have to learn it. It’s good though. It lets you do things like this:

{
  "AComponentDefaults": {
    "SomeSetting": "Me",
    "ANumericSetting": 1.0,
    "Subsetting": {
      "Something": "Sub"
    }
  },

  "JustOneLine": "This"
}

and then read a whole section as strongly-typed settings with a one-liner:

var myComponent = new AComponent();
Configuration.GetSection("AComponentDefaults").Bind(myComponent);

You can still use single-line settings of course:

var justOneLine = Configuration["JustOneLine"];

The new system deals easily with per-environment overrides, and has a whole new “get your config from all kinds of other sources than the settings file” capability.
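A sketch of how those sources stack up, assuming the Microsoft.Extensions.Configuration packages; each later source overrides matching keys from earlier ones, and environmentName stands in for however you detect your environment:

// using Microsoft.Extensions.Configuration; using System.IO;
var config = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .AddJsonFile($"appsettings.{environmentName}.json", optional: true)
    .AddEnvironmentVariables()
    .Build();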

Re-add your NuGet dependencies

This is straightforward. In Visual Studio (or in JetBrains Rider) use right-click -> Manage NuGet Packages. On the command line it’s dotnet add package.
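For example (the package choice is just for illustration):

dotnet add package Newtonsoft.Json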

The big news here is that most of your NuGet dependencies already work on dotnetcore. All of the most-downloaded NuGet packages are either multi-targeted or have packages for each platform. (The dependency tree for most packages on dotnet core is quite different to the one for net framework, but on the whole it makes no difference at all.)

Deal with other code differences

I don’t think there are too many. Under netcore2, your major external dependencies – AdoNet, HttpClient and FileSystem – are all either the same or quite similar. SqlClient, Npgsql, Dapper are pretty much unchanged and the rest of the Framework is very much the same.
Main code changes:

  • Scan the list of breaking changes, which are largely in low-level or platform specific areas.
  • If you use reflection you must often use type.GetTypeInfo().GetXXX() instead of type.GetXXX(). If you’re good with regex, this just needs a search-&-replace to fix (see the sketch after this list).
  • EntityFramework Core is different, but not extremely different.
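The reflection change as a sketch (MyType is hypothetical):

// .Net Framework style:
var methods = typeof(MyType).GetMethods();
// netcore/netstandard style (needs using System.Reflection;):
var methodsViaTypeInfo = typeof(MyType).GetTypeInfo().GetMethods();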

Build and Go!

And … deploy to macOS and Linux. Hurray.

Migrating AspNet to AspNetCore

Work through the getting started tutorials

Really. Don’t try to skip the aspnetcore getting started learning curve. Be aware that the tutorials push the new Razor Pages approach, which you will want to ignore. Instead be sure you’re clear on how the new approach handles startup, dependency injection, attributes, filters, and authentication. Your controllers and routing will largely work with minimal change.

Migrate your startup config to the new approach

So having done your learning curve, you understand that all your Global.asax.cs and App_Startup code will move into, or be called from, your Startup class. You will cleanly separate config setup (having learned about the new configuration approach) and you will use a dependency injection container to provide any global config to your controllers.

Fix-up ControllerContext changes

There are some fiddly tidy-up changes on ControllerContext and Request properties. For instance, Request.UserHostAddress is no more; you must look to HttpContext.Connection.RemoteIpAddress instead. Global HttpContext is gone, but of course you were always careful to use controller.HttpContext, weren’t you?
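That first one as a before-and-after sketch:

// Mvc 5:
var ipOld = Request.UserHostAddress;
// AspNetCore:
var ipNew = HttpContext.Connection.RemoteIpAddress?.ToString();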

Add the new interfaces to Attribute-based filters, or else rework them as middleware

You do need to learn about the new kinds of filters, and consider whether what you are doing with your filters should stay as-is in attribute filters, or whether it might be simpler to move the logic into the new middleware approach.

Migrate any custom authentication/authorization to the new approach

The mistake to avoid here is trying to make your custom AuthorizationAttribute work as an AspNetCore attribute. Don’t. Instead,

  • Move the logic of your custom AuthorizationAttribute into a Policy, which could be just a single method call.
  • Delete your custom attribute and let the built-in AuthorizeAttribute reference your new policy:
    [Authorize(Policy="MyCustomPolicy")]
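Registering the policy is then a few lines in Startup.ConfigureServices. A minimal sketch: the policy name matches the attribute above, and the claim check is a hypothetical stand-in for whatever your old attribute did:

public void ConfigureServices(IServiceCollection services)
{
    services.AddAuthorization(options =>
        options.AddPolicy("MyCustomPolicy", policy =>
            policy.RequireAssertion(context =>
                // stand-in for your old attribute's logic:
                context.User.HasClaim(c => c.Type == "MyCustomClaim"))));
}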

It would have saved me half a day if I’d realised this up-front. But on this plan, you can migrate custom authentication in an hour or even minutes.

Re-tool your unit test controller dependencies for the new framework

There is some popular code on the web for mocking the dependencies of an Mvc 3 or 4 or 5 Controller.ControllerContext. This must all be replaced.

Myself, for Mvc 4 & 5 I always used TestBase-Mvc which gave me two simple extension methods:

var unitUnderTest = new MyMvcController(...)
    .WithHttpContextAndRoutes();

var webApiControllerUnderTest = new MyWebApiController(...)
    .WithWebApiHttpContext<T>(HttpMethod httpMethod,
        [Optional] string requestUri,
        [Optional] string routeTemplate);

//Or, optional parameters to process the actual route urls from your RegisterRoutes config:

controllerUnderTest
    .WithHttpContextAndRoutes(
        [Optional] Action<RouteCollection> mvcApplicationRoutesRegistration,
        [Optional] string requestUrl,
        [Optional] string query = "",
        [Optional] string appVirtualPath = "/",
        [Optional] HttpApplication applicationInstance)

This makes sure a controller can reference cookies, session, TempData, the Url.Action() calls and even the global HttpContext.Current in a unit test context.

For AspNetCore, I wrote TestBase.Mvc.AspNetCore (soon to be renamed to TestBase.AspNetCore) which offers a similar thing:

var uut = new ControllerUnderTest().WithControllerContext();
uut.Url.Action("a", "b").ShouldEqual("/b/a");
uut.ControllerContext.ShouldNotBeNull();
uut.HttpContext.ShouldBe(uut.ControllerContext.HttpContext);
uut.Request.ShouldNotBeNull();
uut.ViewData.ShouldNotBeNull();
uut.TempData.ShouldNotBeNull();
uut.MyAction(param)
    .ShouldBeViewResult()
    .ShouldHaveModel<YouSaidViewModel>()
    .YouSaid.ShouldBe(param);

It also has a large set of fluent assertions for ViewResults, FileResults, etc. Once I’d written the new infrastructure, migrating my controller unit tests was mostly painless. (NB it still needs a few changes for CompatibilityVersion.Version_2_2; it’s currently written for 2.0.)

New in AspNetCore is the ease of testing not just individual controllers but the whole hosted application. The AspNetCore team coded a TestServer for their unit tests, and this server can be used, bootstrapped with your actual application’s Startup code, and then tested with an HttpClient:

[TestFixture]
public class WhenTestingControllersUsingAspNetCoreTestServer : HostedMvcTestFixtureBase
{
    [TestCase("/dummy/action?id={id}")]
    public async Task Get_Should_ReturnActionResult(string url)
    {
        var id = Guid.NewGuid();
        var httpClient = GivenClientForRunningServer<Startup>();
        GivenRequestHeaders(httpClient, "CustomHeader", "HeaderValue1");

        var result = await httpClient.GetAsync(url.Formatz(new {id}));

        result
            .ShouldBe_200Ok()
            .Content.ReadAsStringAsync().Result
            .ShouldBe("Content");
    }
}

But I have come round to seeing this as automated integration testing, not unit testing. I would use it for testing that e.g. content negotiation works as expected, not for testing the domain logic of a controller action.

Conclusion

Since the arrival of netcore2, the cost of migrating to DotNetCore is dramatically lower. DotNetCore tooling and extensibility are very good. Even migrating AspNet is not an excessive task. Even for plain NetFramework 4 development, the new tooling is simpler and better. I reckon that dotnetcore is cheaper and easier to write and maintain. Both C# and the framework are evolving in ways that reduce your cost of development, and you get cross-platform deployment pretty much for free. At last.