This is perhaps not your typical "Golang vs (language)" post... I've seen it said in various places (citations lost, please roll with it) that Go is a good replacement for shell scripting. At first, I was highly skeptical: Go is a compiled, strictly typed language with more boilerplate to "Hello world" than any scripting language needs. But as a compiled language, it does make sense for portability, and as a properly and strictly typed language, it makes complex operations easier to implement... so it was worth trying out.
TL;DR:
- Go is a credible alternative to Python and shell, for me
- I did Alpacka/NG as an experiment to convert from shell to Go
- Publishing Go modules for sharing code is super easy
- Establishing supply chain trust is an issue
- Shell and Python still definitely have their places
Alpacka, and shell scripting limitations
Several years ago, when I was still a bright-eyed and bushy-tailed distro-hopper, I kept running into the same problem: "which package manager command and keyword combination do I need on this distro again??" To help me, I wrote a simple script I called Alpacka, whose command I rendered as paf (naming choice lost to history at this point). It supported a variety of package managers, and is still available on GitLab as Alpacka.
It had, however, three notable drawbacks:
- It depended on the local commands, which varied in flavour and version across distros and distro versions, bash among them
- Bash's syntax is not oriented to complex data structure manipulation
- Predictable modularisation of source code is hard without workarounds
The version issue was multi-fold: not only was it a sequential version problem (1.0, 1.1, 2.0, etc.) but also an implementation problem (BSD variant, GNU variant, Busybox variant), for which outputs and flags would often vary in key places. Depending on the given behaviours and outputs of these could be tricky, and limited the available operations to the lowest common denominator. At the time, for example, I think Busybox did not fully implement Perl Compatible Regular Expressions (PCRE) in its grep implementation (or was it sed?), so implementing certain actions was difficult, if not downright impossible.
The lack of rich types like arrays and maps got in the way of things like function recursion (possible, with hoops, caveats, boilerplate and metaprogramming) and key-value lookup (perhaps possible, but too tedious to warrant implementing).
Does this make shell scripting "inferior"? No! The whole point of POSIX and POSIX-like shells is to be command-driven, and scripting enables re-running commands with some extra flair. I've talked about this before.
But for complex processing, shell scripting is not the best choice.
As for the modularisation issue, I leveraged another tool of mine, the "bash builder" suite, which transpiles a custom syntax, and files along a library path, into a single deployable bash script. Again, I had to code this solution as a workaround to the limitations of shell scripts, whose idiosyncrasies gear them towards interactive shell usage, not large-codebase usage. It has been fun to implement, but limitations still remain.
Thus, the Alpacka/NG Go rewrite was born.
Why not Python?
I've written a lot of tooling in Python.
Some of the same issues as with shell scripting remain with Python: versioning, and command availability. At least it's usually not a question of variant, but between a newer script on an older Python (feature not available) and an older script on a newer Python (feature deprecated and removed), there's plenty of external dependency-ing that can go wrong.
At work we got around this by using pyenv, which went some way towards working around the issue, but in some instances pyenv would not even install (distro too old!).
Compiling to a binary solves this issue by simply not having runtime dependencies (or so I thought - more on that later...!). But it holds by and large.
This is more of an issue when shipping code to machines not under your control - in an enterprise environment, mandating specific distro versions and Python versions is easier. Developing for the wilderness is another matter, and that is the use case I have in mind for Alpacka - so it tracks.
Why not Zig?
I was going to do it in Zig, but that proved to involve more learning than my scripting-oriented brain could take on at the present point in time... next effort, I will go there ;-)
Enter Go
Over the last few months I have been toying around with Go, and only in the last month have I been actively trying to use it fully. Re-writing Alpacka was a good fit for me because:
- compilation to a single deployable file. I want to be able to just download it and start using it, with no environment setup
- I want a set of package lists (package spec file) that apply under conditions, and I want alpacka to take care of it
The single binary from Go answers the first; as for the second, the package spec had previously been out of reach due to the lack of complex types in Bash.
Package spec
The spec I wanted to implement looks something like this:
```yaml
alpacka:
  variants:
    - release: VERSION_ID<=22, ID=~fedora
      groups: common, fedora
    - release: ID_LIKE=~debian
      groups: common, debian
    - release: ID_LIKE=~suse
      groups: common, debian
  package-groups:
    common:
      - mariadb
    debian:
      - apache2
    fedora:
      - httpd
```
Note the catering to different names on different distros, and common names shared across all of them. Handy when hopping and wanting everything covered, right?
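For a sense of what evaluating those release conditions involves, here's a minimal sketch against /etc/os-release. I'm assuming, purely for illustration, that =~ means "the field contains the token"; Alpacka/NG's actual matching logic may differ.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readOSRelease parses /etc/os-release into a key-value map.
func readOSRelease() (map[string]string, error) {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			kv[k] = strings.Trim(v, `"`)
		}
	}
	return kv, sc.Err()
}

func main() {
	kv, err := readOSRelease()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Hypothetical reading of the ID_LIKE=~debian condition:
	// "=~" taken here to mean "the value contains the token".
	if strings.Contains(kv["ID_LIKE"], "debian") {
		fmt.Println("variant matched: groups common, debian")
	}
}
```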
How it went
Sub processes
First off, running a sub-command in Go is pretty easy - a little more involved than in shell, and no better or worse than in Python. Because I did it a lot in this script (calls to package managers), I wrapped it in its own struct with some niceties, in runner.go:
```go
// .OrFail() comes from the Result object, which allows a shorthand exit-now to cut down on verbosity
// -- because these calls would happen A LOT.
RunCmd("apt-get", "install", "-y", "htop").OrFail("Could not run installation!")
```
I didn't end up needing to pipe anything - rather than "grep" and "cut" and "sed", we have actual programming functions that can operate on data types. It is indeed possible to use a subshell and write a command pipe, but I had no need for it.
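For illustration, a minimal version of such a wrapper might look like the following - my sketch around os/exec, not the actual runner.go internals:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Result wraps the outcome of a command run (a sketch, not the real runner.go type).
type Result struct {
	Err error
}

// OrFail exits the program with a message if the command failed.
func (r Result) OrFail(message string) {
	if r.Err != nil {
		fmt.Fprintf(os.Stderr, "%s: %v\n", message, r.Err)
		os.Exit(1)
	}
}

// RunCmd runs a command, passing its output straight through to the terminal.
func RunCmd(name string, args ...string) Result {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return Result{Err: cmd.Run()}
}

func main() {
	RunCmd("echo", "hello from a sub-process").OrFail("Could not run echo!")
}
```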
Argument parsing
Next up was parsing arguments. I've written before about some of my basic frustrations with out-of-the-box Go, one of which is around argument parsing. I've since solved this issue for myself in the form of my GoArgs package, which offers short flags and long flags, positional argument unpacking, and additional handy options such as mode-switched flags (a stdlib contrast is sketched after the list below).
I know it's a solved problem, but I wrote this myself because
- supply chain security considerations - if I can write it myself, why open myself up to supply chain attacks?
- go language self-learning - what better way to learn Go than to write Go. Re-writing Alpacka was not my first Go-venture ;-)
- go version releasing - I wanted to learn how to release a go module for re-use (and I did!)
- customisation - I had a specific idea as to what features I wanted, and what API I wanted, and one is best served by oneself. I did have to add customisations as I went about writing Alpacka-ng
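For contrast, the out-of-the-box baseline with the standard library's flag package looks something like this (GoArgs' own API differs, and isn't reproduced here):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Stdlib flag has no built-in short/long aliasing: each spelling
	// is registered separately against the same variable.
	assumeYes := flag.Bool("y", false, "assume yes")
	flag.BoolVar(assumeYes, "yes", false, "assume yes (long form)")
	flag.Parse()

	fmt.Println("assume-yes:", *assumeYes)
	fmt.Println("positionals:", flag.Args())
}
```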
Publishing a Go module is as simple as using version control (typically Git; I'm not sure what else is supported) and tagging a commit with v<x>.<y>.<z> - explicitly, it needs the v in front, and three numbers. Go does the rest, and retains a hash for repeat-deployment verification.
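Concretely, the release ceremony is roughly this (the module path is hypothetical - substitute your own repo):

```sh
# go.mod in the repo starts with: module gitlab.com/youruser/goargs
git tag v1.0.0
git push origin v1.0.0

# consumers then fetch it with:
go get gitlab.com/youruser/goargs@v1.0.0
```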
Shell scripting argument parsing is pretty convoluted - you can use the getopts utility, which behaves in mind-contorting ways, or reference positionals. A function's arguments are accessed the same way the script's arguments are, via $1, $2, etc., which prevents a function from directly accessing script args. It's possible - but there's a lot of faff to get it to work.
As for external dependencies - I tried to solve that at the bash level with Bash Builder: download your dependency, add it to the BBPATH path variable, and you get external modules! A little cumbersome, and I could have improved upon it... but I haven't yet.
Python has argparse in its native libraries, and it is good; and pip/conda/uv for adding external modules. Publishing modules uses a setup.py, which is unwieldy, but as long as you have a template for re-use, it's easy enough.
YAML and Supply Chain Trust
Finally, I could implement the package list spec, which I designed around YAML. The first issue was to find a YAML package, because Go does not ship one in the standard library. I've found that many languages do this - support JSON, but not YAML. Why? I know JSON is a very useful representation for serialising data, but YAML is a much more comfortable format for similarly serialised data that also needs to be human-editable.
Plenty of tutorials exist online, all pointing to gopkg.in/yaml.v3 without a hint of explanation:
- what is gopkg.in? Why should I trust it?
- who wrote yaml.v3? Why should I trust them?
After some digging, I found that the official Go project itself recognises gopkg.in, even going so far as to make certain accommodations for its version notation. This is (at least in part) because the site was the main solution for consistent package naming and version notation before the Go project implemented modules. It is a redirector that points to GitHub repos with a distinct short URL. Repos at gopkg.in/<PKG>.<VER> can be seen as "official"/"blessed" packages, whereas gopkg.in/<USER>/<PKG>.<VER> ones are "unofficial"/"we don't verify these guys" sources. gopkg.in/yaml.v3 is an "official" package then, although the GitHub project it points to has been archived (read-only) since April 2025 (this year). Who knows what shall happen next...
The official contents are published by Canonical Inc, the company behind Ubuntu. Based on this sleuthing, I was finally able to satisfy myself that the package was not completely random, and my "trust" in the "supply chain" is... acceptable. I now have a target and a hash, but I'm pretty sure that could be circumvented by a later update on the v3 branch if the redirect gets modified... ah, life on the open Internet...
Beyond that, unmarshalling YAML/JSON is a tad more cumbersome than parsing it in Python and descending dictionaries, but it does have the advantage of ensuring types all the way down.
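As a sketch, unmarshalling the spec from earlier might look like this - the struct shapes are my illustration, not the actual Alpacka/NG types:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Spec mirrors the package spec shown earlier; the field shapes here are
// illustrative guesses, not the real Alpacka/NG definitions.
type Spec struct {
	Alpacka struct {
		Variants []struct {
			Release string `yaml:"release"`
			Groups  string `yaml:"groups"`
		} `yaml:"variants"`
		PackageGroups map[string][]string `yaml:"package-groups"`
	} `yaml:"alpacka"`
}

func main() {
	data := []byte(`
alpacka:
  variants:
    - release: ID_LIKE=~debian
      groups: common, debian
  package-groups:
    common: [mariadb]
    debian: [apache2]
`)
	var spec Spec
	if err := yaml.Unmarshal(data, &spec); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	// Typed access, no dictionary-descending guesswork:
	fmt.Println(spec.Alpacka.PackageGroups["debian"]) // [apache2]
}
```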
And then... I was done.
Checking it on Alpine
One of the compile targets is Alpine, though perhaps I might strip this, since a/ Alpine's package manager is easy and b/ it is usually only used in Dockerfile specs in the first place.
I discovered in this exercise, however, that binaries compiled on GNU/Linux will not run on Alpine, due to linkages against different system runtime modules (different libc implementations, I think?). Huh. A runtime dependency in a "statically" compiled binary. There are nuances to everything after all.
The result is that the Alpine version must be compiled in an Alpine environment; otherwise you get a cryptic bin/paf: not found error, despite the file being indeed there...! Using containers for the build helped with that.
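As an aside, one commonly cited alternative - which I note as an assumption here, not something Alpacka's build relies on - is to disable cgo, so the binary carries no libc linkage at all:

```sh
# assumption: the Alpine incompatibility comes from cgo's libc linkage;
# with cgo disabled, Go produces a fully static binary
CGO_ENABLED=0 go build -o bin/paf .
```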
Conclusion
Looking back at my work, it seems I was able to implement this over the course of 8 days, a couple of hours per day (and one full day in the mix too). I became more at ease and fluent with the language as time progressed, as one would expect. Could I have done this faster in Python? Probably, but I would still have a runtime dependency. Could I have done it in bash scripting? With the added manifest, not a chance.
It feels to me that Go is a credible alternative to both shell scripting and Python, given the requirements. However, I would still say that, with fluency, one goes faster with shell scripts for simple items (the build and install scripts are still shell), and faster in Python for "throw it together" operations.
Only You, however, can determine what's throwaway. At the risk of coining an abomination:
Since bad code obviously needs rewriting, write bad Python to get going; and if it must indeed grow: scrap it, and write good Go in its place.
/jk
😱
Happy coding!
That last quip... it's a joke folks ...! Don't write bad code on purpose, pleeeaase!!