The Linux kernel does not care much. The boot loader tells the kernel where to find the root filesystem -- typically an initial ramdisk image (initrd/initramfs), but it can also be the actual root filesystem -- and the kernel starts the init process from /sbin/init (/init on an initramfs) unless told otherwise.
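To make that concrete, here is a sketch of how that boot-time choice looks; the device names below are purely hypothetical. The boot loader passes a command line, the kernel acts on root= and init=, and a running system exposes the line it actually received in /proc/cmdline:

```shell
# Hypothetical kernel command line, as a boot loader might pass it:
cmdline='root=/dev/sda1 rootfstype=ext4 ro init=/sbin/init'

# On a running system, see the real one with:  cat /proc/cmdline
for opt in $cmdline; do
    case "$opt" in
        root=*) echo "root filesystem: ${opt#root=}" ;;
        init=*) echo "first process:   ${opt#init=}" ;;
    esac
done
```

Anything the kernel itself does not recognize on that line is left for init (or an initramfs script) to interpret.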
Even the location of the kernel filesystems -- /proc, /sys, /dev if using udev, and so on -- is basically up to userspace to decide.
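For illustration, here is a sketch of the first few lines of a typical early-userspace /init script; nothing in it is mandated by the kernel, which is exactly the point. (The script is written to a temporary file here only so it can be syntax-checked without root privileges.)

```shell
# Sketch of a minimal initramfs /init: the kernel mounts (or unpacks)
# only the root; userspace mounts /proc, /sys, and /dev itself.
cat > /tmp/init.sketch <<'EOF'
#!/bin/sh
mount -t proc     proc     /proc
mount -t sysfs    sysfs    /sys
mount -t devtmpfs devtmpfs /dev    # or start a udev/mdev daemon instead
exec /sbin/init                    # hand control to the real init
EOF
sh -n /tmp/init.sketch && echo "init sketch: syntax OK"
```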
The Linux Standard Base (LSB) project standardized these across various Linux distributions. (Well, more or less. There are still some small differences in e.g. device naming, and there is the ongoing talk about merging /bin into /usr/bin and /lib into /usr/lib, the so-called /usr merge.) A version of it was accepted as an ISO standard, ISO/IEC 23360. The current version of the LSB, as of October 2016, is LSB 5.
The Linux kernel developers try very hard to keep the userspace interface backwards compatible. This is why information about version 2.6 is still very much applicable to 4.4. It is pretty much only when new facilities and interfaces are introduced that newer versions diverge from older ones, and you need to find the documentation for those.
You mention you have already compiled some libraries and applications. If so, the compile-time settings you used (check the configure options, --prefix and so on) and the directories where those libraries and applications look for their configuration files (and, in the case of the C library, timezone and internationalization files, and so on) determine the directory structure you absolutely do need.
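As an illustration (the flags and directories here are typical autoconf conventions, not taken from any particular package), the paths chosen at configure time are compiled into the binaries, so they dictate the tree you must provide at run time:

```shell
# Illustrative only: the directories chosen here are baked into the
# resulting binaries, so they define the run-time tree the program needs.
#   --prefix=/usr         binaries in /usr/bin, libraries in /usr/lib
#   --sysconfdir=/etc     where the program looks for its configuration
#   --localstatedir=/var  run-time and state data
./configure --prefix=/usr --sysconfdir=/etc --localstatedir=/var
make
make install
```

If you cannot remember what you used, rebuilding with explicit flags like these is usually easier than reverse-engineering the expected paths afterwards.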
Linux From Scratch is a community that develops books on how to compile and build a fully working Linux distribution from scratch. It is not exactly minimal -- you can omit certain packages in some situations, strip others, and so on -- but everything is explained.
Rob Landley is well known for documenting Linux kernel stuff. His intro to initramfs, how to use initramfs, and programming for initramfs, are very interesting if you want to make a minimal system that runs directly from an initramfs, like many embedded devices do.
As to systemctl or systemd in general, I'd point you to its home page, and bid you good luck. I myself am looking for ways to avoid it, and to use more robust init systems instead, ones that still acknowledge the Unix philosophy rather than agglomerating into a monolithic mess by whim. (In my experience, the former stay functional and maintainable in the long term, while the latter, although often loved by end users for the agglomerated new features and outer polish, make for fragile and broken systems -- and system administrators -- in the long term. Your experience and opinions may vary; I'm just describing mine.)
When I developed a simple benchmarking USB stick for evaluating Linux cluster nodes, I examined the minimal Debian and CentOS systems you can install, to find out the details the OP is asking about (except that I was not looking for a minimal system, but a small, lightweight system that could run the same binaries as the final cluster itself; i.e., including basic services and libraries). Today, I'd recommend looking at Devuan, because it supports multiple init systems. Experimenting with these in virtual machines should be very informative.
Practice rules over theory or standards. There are no enforced standards; even LSB and ISO/IEC 23360 are more like guidelines for successful interoperability. The Linux kernel documentation extracted from the kernel sources does describe the kernel's expectations, but as mentioned, there are very, very few that affect the filesystem tree. And even those tend to be boot-time or compile-time configurable.