Wed Jan 6 08:39:18 2021
Last updates: ... Tue Jan 19 06:21:54 2021
This directory contains files created in support of building and using a pre-release of the TeX Live 2021 distribution. The official release is expected in April 2021, and should be available on DVD in mid-summer 2021; you can download it electronically here. Please note that if you install it from either of those sources, you should first choose a suitable mirror, and then run the update procedure described later in this document. The TeX Live package repository is actively maintained, and you should expect that several hundred packages are updated each month.
Your first task is to learn how to mount an ISO image on your platform. On many desktops, simply clicking on its icon in a File Manager tool does the job. On others, you might have to mount the image (usually as an administrator), such as with this command on most Linux flavors:
# mount -o ro /path/to/your/copy/of/texcol2021.iso /mnt
The top-level directory of the DVD image contains README and index.xx.html files (in English, French, and German) that should help guide your selection and installation procedure. For example, Unix users can run the script texlive/install-tl, and Windows users, the script texlive/install-tl-windows.bat. In either case, brief answers to a few questions about your local preferences get the installation started. The rest is automatic, and should complete in less than an hour. See below for resource requirements and how to choose a TeX Live package repository mirror near you.
When installation completes, you should unmount the image through your File Manager, or with an administrator command like this:
# umount /mnt
After a successful installation, the ISO image is then no longer needed, and can be deleted if disk space is limited. Copies of the image remain on numerous TeX Live mirror sites for years, so you can always download it again if needed.
There are more details about the installation process here.
A test lab at this site runs hundreds of flavors of Unix, on some of which TeX Live builds are attempted; the scripts named *.*sh in this directory are those used by the local developer.
The intent of the build-texlive-2021.sh script is that it should set up the build environment on each platform, and then run the internal Build script to carry out the build. We have found it necessary on many platforms to carefully control the search path and environment variables at the start of a TeX Live build, and to avoid use of installation locations that are owned by the vendor package system. See elsewhere for an explanation of why we scrupulously avoid the GNU default prefix of /usr/local on new systems.
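As a minimal sketch of that environment control (the prefix and path below are invented examples, not the actual Utah build script; the real Build invocation is left commented out):

```shell
# Hypothetical sketch of a controlled TeX Live build environment.
# The prefix and PATH are examples only; adjust for your site, and
# note that /usr/local is deliberately avoided as the prefix.
prefix=/opt/texlive-build              # assumed local, non-vendor prefix
PATH=/usr/bin:/bin                     # minimal, predictable search path
export PATH
unset LD_LIBRARY_PATH                  # avoid surprises from user settings
# ./Build --prefix="$prefix"           # then run the TeX Live Build script
echo "build would use prefix=$prefix and PATH=$PATH"
```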
The scripts in this directory are likely to change during the spring build season for TeX Live 2021 as more platforms are successfully supported.
Note: In sharp contrast to previous years, certain non-TeXware libraries required by some of the TeX Live 2018, 2019, 2020, and 2021 binary executables now mandate compiler support for more recent versions of ISO Standard C++, and such compilers are unavailable for many older systems. As a result, the number of platforms for which the code can be built and run has been noticeably reduced. Unless your computer and operating system are fairly new, the last release that you can run may be TeX Live 2017, until you upgrade your operating system, and possibly your hardware.
If you control your own hardware, or have benevolent computer management, and you really need the latest TeX Live, then a good solution may be to create a virtual machine (VM) running a recent O/S release. There are several free, or no-cost, or low-cost, virtualization technology solutions, including bhyve, Hyper-V, KVM, QEMU, virt-manager, OVirt, Parallels, VirtualBox, VMware Workstation Player, and Xen, and every modern operating system supports one or more of those, although generally, at most one can be installed at a given time.
One of the easiest for a new user is VirtualBox, and it requires no special privileges to create and run a virtual machine. At the Utah test lab, we generally allocate a new VM one CPU, 1GB to 4GB DRAM memory, and 80GB of virtual disk storage. For the single purpose of running TeX Live, a 25GB disk would be ample. The VM treats CPU, memory, and disk as expandable resources up to their declared size, so if an initial installation needs only 4GB of disk space, that is all that the underlying host needs to provide. If the assigned memory size is suboptimal, just shut down the VM, change the memory size in the VM management GUI, and reboot the VM. In our experience, setting up a new VM takes about 15 minutes, and modern O/Ses have management GUIs that make user account setup and software package installation easy for novices.
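For those who prefer the command line, such a VM can also be created with VirtualBox's VBoxManage tool. The sketch below only echoes the commands (the VM name and the sizes are examples matching the allocations above); delete the dry-run helper to execute them for real:

```shell
# Dry-run sketch of VirtualBox VM creation from the command line.
# The VM name "tl2021" and the sizes are examples; nothing is executed.
run() { echo "would run: $*"; }        # dry-run helper; delete to go live
run VBoxManage createvm --name tl2021 --register
run VBoxManage modifyvm tl2021 --cpus 1 --memory 4096     # 1 CPU, 4GB DRAM
run VBoxManage createmedium disk --filename tl2021.vdi --size 25600  # 25GB
```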
Most commercial cloud services supply a choice of preconfigured virtual machines, but you must pay a small monthly fee for CPUs and disk storage. The advantage is that the service provider does most of the work for you, including VM configuration and backup, and in return, you can access and control your personal cloud VM from anywhere in the world where you have an Internet connection.
As of 19 January 2021, the following builds have been (mostly) successful at Utah:
% show-file-counts.sh
459 amd64-freebsd104
459 amd64-freebsd114
459 amd64-freebsd13-clang
459 amd64-freebsd13-gcc
459 amd64-netbsd100
459 amd64-netbsd72
458 amd64-netbsd82
459 amd64-netbsd91
456 amd64-openbsd64
454 amd64-openbsd65
454 amd64-openbsd66
417 amd64-openbsd67
455 amd64-openbsd68
459 i686-fedora28
459 i686-opensusetw
459 i686-peppermint-10
425 i686-void32m
452 i86pc-solaris114
446 ppc64be-centos7
446 ppc64le-centos8
453 ppc64le-debian106
452 s390x-ubuntu2004
459 x86_64-alpine312
459 x86_64-alpine313
456 x86_64-centos6
459 x86_64-centos7
459 x86_64-centos8
459 x86_64-dragonflybsd562
459 x86_64-dragonflybsd583
459 x86_64-dragonflybsd59b
459 x86_64-fedora32
459 x86_64-fedora33
459 x86_64-opensusejump
459 x86_64-opensusetw
457 x86_64-oracle79
457 x86_64-oracle8
459 x86_64-ubuntu2004
459 x86_64-void-musl

Total: 38 systems

Missing binaries [compared to x86_64-ubuntu2004]:

amd64-netbsd82    : xasy
amd64-openbsd64   : luajittex mfluajit mfluajit-nowin
amd64-openbsd65   : asy luajittex mfluajit mfluajit-nowin xasy
amd64-openbsd66   : asy luajittex mfluajit mfluajit-nowin xasy
amd64-openbsd67   : amstex asy cslatex csplain dvilualatex dvilualatex-dev
                    dviluatex eplain etex jadetex latex latex-dev lollipop
                    luacsplain luajittex lualatex lualatex-dev man mex
                    mfluajit mfluajit-nowin mllatex mltex optex pdfcslatex
                    pdfcsplain pdfetex pdfjadetex pdflatex pdflatex-dev
                    pdfmex pdfxmltex platex platex-dev texsis uplatex
                    uplatex-dev utf8mex xasy xelatex xelatex-dev xmltex
amd64-openbsd68   : asy luajittex mfluajit mfluajit-nowin xasy
i686-void32m      : cfftot1 dvigif dvilualatex dvilualatex-dev dviluatex
                    dvipdfm dvipdfmx dvipdft dvipng dvisvgm ebb extractbb
                    luacsplain luahbtex luajithbtex luajittex lualatex
                    lualatex-dev luatex mmafm mmpfb optex otfinfo otftotfm
                    t1dotlessj t1lint t1rawafm t1reencode t1testpage
                    ttftotype42 upmendex xdvi xdvi-xaw xdvipdfmx
i86pc-solaris114  : asy tex2xindy texindy xasy xindy xindy.mem xindy.run
ppc64be-centos7   : asy luajithbtex luajittex mfluajit mfluajit-nowin
                    tex2xindy texindy texluajit texluajitc xasy xindy
                    xindy.mem xindy.run
ppc64le-centos8   : asy luajithbtex luajittex mfluajit mfluajit-nowin
                    tex2xindy texindy texluajit texluajitc xasy xindy
                    xindy.mem xindy.run
ppc64le-debian106 : luajithbtex luajittex mfluajit mfluajit-nowin texluajit
                    texluajitc
s390x-ubuntu2004  : luajithbtex luajittex mfluajit mfluajit-nowin texluajit
                    texluajitc xindy.mem
x86_64-centos6    : asy xasy xdvi xdvi-xaw
x86_64-oracle79   : asy xasy
x86_64-oracle8    : asy xasy
The first column in the first table is the number of installed executables, and the second column is the CPU architecture, base operating system, distribution, and optional version.
Of those directories, the following are part of the pre-test installation (described below):
aarch64-linux    i386-cygwin     i386-solaris    x86_64-darwinlegacy
amd64-freebsd    i386-freebsd    win32           x86_64-linux
amd64-netbsd     i386-linux      x86_64-cygwin   x86_64-linuxmusl
armhf-linux      i386-netbsd     x86_64-darwin   x86_64-solaris
Most of the others have been built at the University of Utah, almost entirely in facilities of the Department of Mathematics, and on my personal home VM cluster running with virt-manager and QEMU on top of Ubuntu 20.04. Several of those VMs emulate other CPU architectures.
Although some of the builds on FreeBSD and GNU/Linux systems appear to duplicate the contents of i386-freebsd, i386-linux, amd64-freebsd, and x86_64-linux, they serve two purposes: (1) demonstration of the possibility of independent builds of TeX Live at other sites and on different O/S distributions, and (2) builds on bleeding-edge O/S releases may benefit from new and improved compiler technology, and newer system libraries.
Arch Linux (arch), BlackArch, ClearLinux, CentOS Stream, Fedora Rawhide (fedorarh), GUIX, Hyperbola, Kali, PCLinuxOS (pclinuxos), openSUSE Jump, openSUSE Tumbleweed, Parabola, Solus, Trident, TrueOS (trueos1806), Ubuntu RR (ubuntu-rr), and Void Linux do not have version numbers: they follow a rolling-update model, so once updates have run (and the systems have been rebooted, if needed), they are at the latest available software levels.
It may also be of interest to record the library dependencies of all of the executables in one of the binary directories:
% show-lib-deps.sh $prefix/texlive/2021/bin/amd64-freebsd130/
Library dependencies of TeX Live executables in amd64-freebsd130:

libGL-NVIDIA     : asy
libICE           : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   xdvi-xaw
libSM            : inimf mf mflua mflua-nowin mfluajit mfluajit-nowin xdvi-xaw
libX11           : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   pdfclose pdfopen xdvi-xaw
libXau           : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   pdfclose pdfopen xdvi-xaw
libXaw           : xdvi-xaw
libXdmcp         : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   pdfclose pdfopen xdvi-xaw
libXext          : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   xdvi-xaw
libXi            : asy
libXmu           : xdvi-xaw
libXpm           : xdvi-xaw
libXrandr        : asy
libXrender       : asy
libXt            : xdvi-xaw
libXxf86vm       : asy
libbz2           : xelatex xelatex-dev xetex
libc             : afm2pl afm2tfm aleph amstex asy autosp axohelp bbox
                   bg5conv bibtex bibtex8 bibtexu cef5conv cefconv cefsconv
                   cfftot1 chkdvifont chktex cslatex csplain ctangle ctie
                   ctwill ctwill-refsort ctwill-twinx cweave detex devnag
                   disdvi dt2dv dv2dt dvi2tty dvibook dviconcat dvicopy
                   dvidvi dvigif dvilj dvilj2p dvilj4 dvilj4l dvilj6
                   dvilualatex dvilualatex-dev dviluatex dvipdfm dvipdfmx
                   dvipng dvipos dvips dviselect dvispc dvisvgm dvitodvi
                   dvitomp dvitype ebb eplain epsffit eptex etex euptex
                   extconv extractbb gftodvi gftopk gftype gregorio gsftopk
                   hbf2gf inimf initex jadetex kpseaccess kpsereadlink
                   kpsestat kpsewhich lacheck lamed latex latex-dev lollipop
                   luacsplain luahbtex luajithbtex luajittex lualatex
                   lualatex-dev luatex mag makeindex makejvf mendex mex mf
                   mf-nowin mflua mflua-nowin mfluajit mfluajit-nowin
                   mfplain mft mllatex mltex mmafm mmpfb mpost msxlint
                   odvicopy odvitype ofm2opl omfonts opl2ofm otangle otfinfo
                   otftotfm otp2ocp outocp ovf2ovp ovp2ovf patgen pbibtex
                   pdfclose pdfcslatex pdfcsplain pdfetex pdfjadetex
                   pdflatex pdflatex-dev pdfmex pdfopen pdftex pdftosrc
                   pdfxmltex pdvitomp pdvitype pfb2pfa pk2bm pktogf pktype
                   platex platex-dev pltotf pmpost pmxab pooltype ppltotf
                   prepmx ps2pk psbook psnup psresize psselect pstops ptex
                   ptftopl r-mpost r-pmpost r-upmpost scor2prt sjisconv
                   synctex t1ascii t1asm t1binary t1disasm t1dotlessj t1lint
                   t1mac t1rawafm t1reencode t1testpage t1unmac t4ht tangle
                   teckit_compile tex tex2aspc tex2xindy tex4ht texlua
                   texluac texluajit texluajitc texsis tftopl tie ttf2afm
                   ttf2pk ttf2tfm ttfdump ttftotype42 upbibtex updvitomp
                   updvitype uplatex uplatex-dev upmendex upmpost uppltotf
                   uptex uptftopl utf8mex vftovp vlna vptovf weave wofm2opl
                   wopl2ofm wovf2ovp wovp2ovf xdvi-xaw xdvipdfmx xelatex
                   xelatex-dev xetex xindy.run xmltex
libcrypt         : xindy.run
libexpat         : xelatex xelatex-dev xetex
libffcall        : xindy.run
libfftw3         : asy
libfontconfig    : xelatex xelatex-dev xetex
libfreetype      : xelatex xelatex-dev xetex
libgcc_s         : asy luajittex mfluajit mfluajit-nowin
libglut          : asy
libgsl           : asy
libgslcblas      : asy
libintl          : xdvi-xaw xindy.run
libm             : afm2pl afm2tfm aleph amstex asy autosp axohelp bbox
                   bg5conv bibtex bibtex8 bibtexu cef5conv cefconv cefsconv
                   cfftot1 chkdvifont chktex cslatex csplain ctangle ctie
                   ctwill ctwill-refsort ctwill-twinx cweave detex devnag
                   disdvi dt2dv dv2dt dvi2tty dvibook dviconcat dvicopy
                   dvidvi dvigif dvilj dvilj2p dvilj4 dvilj4l dvilj6
                   dvilualatex dvilualatex-dev dviluatex dvipdfm dvipdfmx
                   dvipng dvipos dvips dviselect dvispc dvisvgm dvitodvi
                   dvitomp dvitype ebb eplain epsffit eptex etex euptex
                   extconv extractbb gftodvi gftopk gftype gregorio gsftopk
                   hbf2gf inimf initex jadetex kpseaccess kpsereadlink
                   kpsestat kpsewhich lacheck lamed latex latex-dev lollipop
                   luacsplain luahbtex luajithbtex luajittex lualatex
                   lualatex-dev luatex mag makeindex makejvf mendex mex mf
                   mf-nowin mflua mflua-nowin mfluajit mfluajit-nowin
                   mfplain mft mllatex mltex mmafm mmpfb mpost msxlint
                   odvicopy odvitype ofm2opl omfonts opl2ofm otangle otfinfo
                   otftotfm otp2ocp outocp ovf2ovp ovp2ovf patgen pbibtex
                   pdfclose pdfcslatex pdfcsplain pdfetex pdfjadetex
                   pdflatex pdflatex-dev pdfmex pdfopen pdftex pdftosrc
                   pdfxmltex pdvitomp pdvitype pfb2pfa pk2bm pktogf pktype
                   platex platex-dev pltotf pmpost pmxab pooltype ppltotf
                   prepmx ps2pk psbook psnup psresize psselect pstops ptex
                   ptftopl r-mpost r-pmpost r-upmpost scor2prt sjisconv
                   synctex t1ascii t1asm t1binary t1disasm t1dotlessj t1lint
                   t1mac t1rawafm t1reencode t1testpage t1unmac t4ht tangle
                   teckit_compile tex tex2aspc tex2xindy tex4ht texlua
                   texluac texluajit texluajitc texsis tftopl tie ttf2afm
                   ttf2pk ttf2tfm ttfdump ttftotype42 upbibtex updvitomp
                   updvitype uplatex uplatex-dev upmendex upmpost uppltotf
                   uptex uptftopl utf8mex vftovp vlna vptovf weave wofm2opl
                   wopl2ofm wovf2ovp wovp2ovf xdvi-xaw xdvipdfmx xelatex
                   xelatex-dev xetex xindy.run xmltex
libncurses       : asy xindy.run
libncursesw      : asy xindy.run
libnvidia-glcore : asy
libnvidia-tls    : asy
libreadline      : asy xindy.run
librt            : asy
libsigsegv       : asy xindy.run
libthr           : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   pdfclose pdfopen upmendex xdvi-xaw xelatex xelatex-dev
                   xetex
libunistring     : xindy.run
libusbhid        : asy
libxcb           : asy inimf mf mflua mflua-nowin mfluajit mfluajit-nowin
                   pdfclose pdfopen xdvi-xaw
libz             : asy xelatex xelatex-dev xetex
xz-compressed tar files for each of the binary trees can be found here. They are about 60% of the size of the corresponding gz-compressed files, both at maximal compression level -9. They would normally be unpacked in the directory path /path/to/texlive/2021/bin. After installing them, it is likely necessary to update the TeX preloaded memory-image files, *.fmt, by running the command ./fmtutil-sys --all in the just-unpacked directory. Those files are TeX-Live-release dependent, but platform-independent, so if you unpack multiple binary trees that are shared across different systems, you only need to regenerate them once.
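The unpack-and-regenerate sequence might look like this sketch; the tarball name is a placeholder, not a real file on the server, and the commands that would modify an installation are left commented out:

```shell
# Sketch of unpacking one binary tree and rebuilding the *.fmt files.
# TARBALL and PLATFORM are placeholder names; substitute real ones.
TL_BIN=/path/to/texlive/2021/bin
TARBALL=texlive-2021-bin-PLATFORM.tar.xz
# tar -C "$TL_BIN" -xJf "$TARBALL"               # unpack into .../2021/bin
# cd "$TL_BIN"/PLATFORM && ./fmtutil-sys --all   # regenerate formats once
echo "would unpack $TARBALL into $TL_BIN"
```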
The binary format files that contain precompiled macros for various engines based on TeX sometimes contain settings for the local paper size, notably, those engines that can produce PDF output. Therefore, before you run fmtutil-sys, run whichever of these is suitable for your site:
% tlmgr paper a4 # for European A4 paper (210mm × 297mm)
% tlmgr paper letter # for US letter paper (8.5in × 11in)
For more on the problems of configuring a default paper size for TeXware, see the document section Page layout and document printing.
TeX Live executables can often be shared with O/S releases of higher levels, and binaries for the oldest GNU/Linux release have a good chance of running on other GNU/Linux distributions for the same CPU family. That works as long as Linux kernel and system library versions are upward compatible. Thus, a CentOS 6 binary can likely run on CentOS 7 and CentOS 8, but also on Debian, openSUSE, Red Hat, Ubuntu, and other distributions. Similarly, Solaris 10 binaries run just fine on Solaris 11. FreeBSD binaries are often usable on ArisbluBSD, ClonOS, FreeNAS, FuryBSD, GhostBSD, HardenedBSD, MidnightBSD, NomadBSD, OPNsense, PacBSD, PC-BSD, Trident, and TrueOS.
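One quick way to see whether a binary borrowed from another release will run is to check its shared-library resolution with ldd; any "not found" line signals a missing or incompatible library. In this sketch, /bin/sh is merely a stand-in for a TeX Live executable:

```shell
# Count unresolved shared libraries for a candidate binary.
# /bin/sh is a stand-in; point this at a borrowed TeX Live binary.
missing=$(ldd /bin/sh 2>/dev/null | grep -c 'not found')
echo "unresolved libraries: $missing"
```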
Once an installation is complete for a given platform, a user can switch to it by executing one of these scripts:
### assume prefix=/usr/local (but trivially changeable at each site)

### csh and tcsh login shells
source $prefix/skel/SYS.texlive-2021.csh

### ash, bash, dash, ksh, pdksh, sh, and zsh login shells
### (POSIX-compliant, or supersets thereof)
. $prefix/skel/SYS.texlive-2021.sh
Those scripts redefine certain TeXware environment values to new ones suitable for use with TeX Live, and they reset the PATH to put the 2021 release first, ahead of any local, older TeX Live, or vendor-supplied installations of TeX.
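A minimal sketch of the idea, assuming a prefix of /usr/local and an x86_64-linux platform directory (this is an illustration, NOT the actual SYS.texlive-2021.sh):

```shell
# Illustration only: put the TeX Live 2021 binaries first on the PATH.
prefix=/usr/local                             # assumed site prefix
tlbin=$prefix/texlive/2021/bin/x86_64-linux   # assumed platform directory
PATH=$tlbin:$PATH
export PATH
echo "PATH now starts with: ${PATH%%:*}"
```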
An installed TeX Live 2021 tree with all available packages, and binaries for one O/S architecture, requires about 13GB of disk space. It contains about 15_000 directories and 211_000 files. Smaller storage totals are possible with the installer programs in the official TeX Live 2021 DVD image, which allow you to choose subsets for your local installation.
Building TeX Live 2021 requires primarily source code, rather than macro packages and fonts: about 1.5GB of storage suffices on most platforms.
Fortunately, computer storage costs continue to drop. In early 2021 on the US market, spinning magnetic disks cost about US$20 per terabyte, and solid-state disks (SSDs) cost about US$85 per terabyte. Thus, storage of a full TeX Live 2021 tree costs between US$0.25 and US$1.10, similar to that for stamps on a traditional postal letter, and less than a typical cafe beverage.
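The arithmetic behind those figures, using awk as a calculator (13GB per tree, at the early-2021 prices quoted above):

```shell
# 13GB tree at US$20/TB (HDD) and US$85/TB (SSD).
awk 'BEGIN { gb = 13
             printf "HDD: US$%.2f  SSD: US$%.2f\n", gb/1000*20, gb/1000*85 }'
```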
The economic realities of the computer market today, and knowledge of resource growth from computer history, suggest that it is better to spend money on more, and faster, storage, internal memory (DRAM), and more CPU cores, than it is to pay premium prices for higher CPU clock speeds. Most personal computers, from mobile devices to laptops to desktops, are idle most of the time.
The master TeX Live repository is located in Paris, France, but there may be a repository mirror with higher data transfer rates that is closer to you.
For example, to change your default repository to the North American master mirror in Utah, run this command:
% tlmgr option repository http://ctan.math.utah.edu/tex-archive/systems/texlive/tlnet
To switch back to the Paris mirror master site, run this command:
% tlmgr option repository http://mirror.ctan.org/systems/texlive/tlnet
That chooses a mirror site that is `near' you. It is likely to differ on successive updates, due to network traffic and machine load.
Once a repository has been chosen, your TeX Live installation remembers the setting, so the sample commands above are likely to be needed only once a year, unless you travel a lot with TeX Live on a mobile device, in which case the Paris mirror master is likely your best choice.
If you have reason to suspect that your chosen TeX Live mirror is out of date, check its status at the CTAN monitoring site.
You can find a long list of CTAN mirror sites here.
The official description of installing the TeX Live 2021 pre-test is found here. However, because that document doesn't show explicit examples, it may leave the reader puzzled as to which repository to choose. I recorded these steps, which produce a working installation using the Utah mirror:
### Move to a temporary directory
$ cd /path/to/suitable/temporary/directory

### Fetch the small (4MB) installer bundle
$ wget http://www.math.utah.edu/pub/texlive/tlpretest/install-tl-unx.tar.gz

### Unpack the installer bundle
$ tar xf install-tl-unx.tar.gz

### Move to the just-unpacked installer directory
### [WARNING: the year-month-day numeric suffix changes daily]
$ cd install-tl-20210106

### Start the work, using the Utah mirror
$ ./install-tl -repository http://www.math.utah.edu/pub/texlive/tlpretest/
The text-mode installer asks a few questions. I selected all binary platforms (because our servers supply them to numerous clients of varying operating systems and CPU types), picked the full installation scheme, provided the location of the installation tree, chose a suitable local paper type, and then typed I to begin the automatic installation of 7083 packages. That step took 71 minutes on a physical machine, or about 100 packages per minute. The installation creates about 15_000 subdirectories and 211_000 files.
The newly installed tree takes almost 7GB of filesystem space, with executables for 16 platforms in the bin subdirectory. That subdirectory takes 1.6GB of space, so a typical installation with just one platform directory would need about 5.5GB of space. However, the space requirements increase with each subsequent update with tlmgr, because it normally saves previous versions, to allow package rollback if a problem is detected.
To verify the above experience, the next day I redid the installation on a virtual machine running FreeBSD 13 with ZFS. This time, I selected just the default platform package. There were 3978 packages installed in 36 minutes, or about 110 packages per minute.
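For the record, the two installation rates work out as follows:

```shell
# 7083 packages in 71 minutes (full scheme), 3978 in 36 (default scheme).
awk 'BEGIN { printf "full: %.0f pkg/min  default: %.1f pkg/min\n",
             7083/71, 3978/36 }'
# prints: full: 100 pkg/min  default: 110.5 pkg/min
```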
Every few days, I update my TeX Live 2021 pre-test installation tree like this on my CentOS 7 workstation:
$ PATH=/path/to/texlive/2021/bin/x86_64-linux-centos-7:$PATH
$ export PATH

### Update TeX Live Manager itself (this usually does nothing)
$ tlmgr update --self

### Update TeX Live 2021 tree
$ tlmgr update --all
Obviously, those commands are good candidates for hiding in a wrapper script. If you add an invocation of that script to your crontab(1) file, it then runs automatically at intervals that you specify in that file, and sends its output in e-mail to you.
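Such a wrapper might look like this sketch (the platform directory is a placeholder; the tlmgr calls are commented out so the sketch is harmless to run as-is):

```shell
#!/bin/sh
# update-texlive.sh: hypothetical cron-friendly update wrapper.
PATH=/path/to/texlive/2021/bin/x86_64-linux:/bin:/usr/bin
export PATH
# tlmgr update --self        # update the manager itself first
# tlmgr update --all         # then update everything else
echo "updates would run with PATH=$PATH"
```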
Here is a sample crontab file entry that does just that, dispensing with a wrapper script, and running the update every Sunday morning at 3:15am local time (the #-initiated comments are part of my crontab file as a reminder of the field order and meaning):
# 00-59 00-23 01-31 01-12 0-6(0=Sunday)
# mm    hh    dd    mon   weekday  command
15      3     *     *     0        ( PATH=/path/to/texlive/2021/bin/x86_64-linux:/bin:/usr/bin ; export PATH ; tlmgr update --self ; tlmgr update --all )
Change the weekday field from 0 to * to make the job run daily. Change it to 1,3,5 to run it Monday, Wednesday, and Friday. Such jobs are usually best run at off-peak hours at both your site, and the repository site. Our site in Utah is on US Mountain Time (UTC/GMT - 6 hours in summer, UTC/GMT - 7 hours in winter). The TUG master site in Paris, France, is on Central European Time (UTC/GMT + 2 hours in summer, UTC/GMT + 1 hour in winter).
The script that we use at Utah for TeX Live updates is available here. It contains a comment header that includes reminders of how to select the repository during both pre-test and post-release times.
Donald Knuth's TeX and METAFONT, and their associated TeXware and MFware programs and data files, have proven extraordinarily robust over more than 40 years of use, and have been ported to most commercially important machine architectures (operating systems and CPUs), from mobile devices to supercomputers. Their original code is in Web, a literate-programming language that can be processed by tangle and weave to produce a Pascal program file and a TeX file that documents the software.
Pascal no longer enjoys the popularity that it did when Donald Knuth chose it as his second implementation language, and C is now the most widely available language that continues to be used for much of the world's software. Thus, for about 30 years, the Pascal source has been translated automatically to the C language by a special utility.
A modern TeX Live distribution contains much more than just TeX and METAFONT. On top of those two programs, each about 20_000 lines of Pascal, stands a TeX Live source tree with about 2200 directories, 28_500 files, 275 Makefiles, and 85 GNU-style configure scripts. The binary subdirectory for each platform contains up to 460 programs. The total amount of source code just in the main languages (Asymptote, C, C++, Java, Lisp, PostScript, and Web) and in five scripting languages (awk, lua, perl, python, and shell) is about 4_908_000 lines. In addition, in the final TeX Live installation tree, there are about 8_500_000 lines of code in the TeX macro language. In round figures, there are more than 13 million lines of code in TeX Live 2021!
Many of the source packages that are built and included with TeX Live distributions are handled independently by other developers outside the TeX world, and the job of the TeX Live team each year is to find out by actual build experience how many of the latest releases of those other packages have build issues, or platform dependence, or nasty CPU-architecture assumptions.
In view of those observations, it should be no surprise that the annual TeX Live production takes team members about three months of hard work, and that there are numerous platforms where builds are only partially successful, and thus, some programs are missing from the binary subdirectories, as illustrated by the large table near the beginning of this Web page.
Alpine Linux is unusual among Linux distributions in that it is based on the musl C standard library, rather than the GNU one. Our site has test machines for several versions of Alpine Linux, but only one non-Alpine system that uses musl: Void Linux MUSL on x86_64. After a user reported in mid-2020 that xindy is missing from the x86_64-linuxmusl binary package, I supplied him with xindy binaries from Void Linux. However, those did not work, because of shared-library differences with Alpine Linux.
The major impediment to building a full TeX Live 2021 system on Alpine Linux is that its binary package system lacks clisp and the -lsigsegv library. Most Alpine versions also lack the -lffcall library, although it is available in Alpine 3.12 and later. Fortunately, I had previously built clisp on Alpine 3.6.2, and I found that, after building and installing the two dependent libraries on Alpine 3.11, I could then build Clisp. However, its installed lisp.run file still had dependencies on shared libraries that I wanted replaced with statically linked libraries. By removing the *.run files from the clisp build tree, editing the Makefiles to replace references to -lxxx libraries with pathnames for the static libraries, and restarting make, I was able to rebuild clisp so that its lisp.run file has only these dependencies:
$ ldd $prefix/lib/clisp-2.49.92/base/lisp.run
        /lib/ld-musl-x86_64.so.1 (0x7f72c5958000)
        libreadline.so.8 => /usr/lib/libreadline.so.8 (0x7f72c55a8000)
        libncursesw.so.6 => /usr/lib/libncursesw.so.6 (0x7f72c554d000)
        libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7f72c5958000)
Despite my replacing references to -lreadline with the static library, there is still a reference to its dynamic library. Fortunately, the listed libraries are also available on Void Linux MUSL.
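The Makefile edit described above can be mimicked with sed; the demo below fabricates a one-line Makefile, and the library names and static-library paths are examples only, not the actual clisp Makefile contents:

```shell
# Demonstrate replacing -lxxx references with static-library pathnames.
printf 'LIBS = -lsigsegv -lffcall\n' > Makefile.demo   # fabricated input
sed -e 's|-lsigsegv|/usr/local/lib/libsigsegv.a|' \
    -e 's|-lffcall|/usr/local/lib/libffcall.a|' Makefile.demo
# prints: LIBS = /usr/local/lib/libsigsegv.a /usr/local/lib/libffcall.a
```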
With clisp installed locally, and extra installations of ghostscript and texinfo, the build of all of TeX Live, and Asymptote, succeeded on Alpine 3.11.6 in 2020, and on Alpine 3.12.3 in 2021. On the latter, the default compiler family is gcc-9.3.0.
Further tests show that the Alpine 3.11.6 binaries work on Void Linux, and on Alpine Linux back to 3.6.5 (3.6.0 released May 2017). However, on some of those older versions, there are missing libraries that I resolved by creating symbolic links to existing libraries, and setting the environment variable LD_LIBRARY_PATH. Alternatively, one can simply copy the missing library version from a compatible O/S. If there is a need for a build on an older Alpine version, please contact the author of this site.
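A sketch of that workaround (the library name and soname versions are invented examples; the actual link commands depend on which library a given binary reports as missing):

```shell
# Point the dynamic loader at a private directory of compatibility links.
compat=$HOME/tl-compat-libs
mkdir -p "$compat"
# ln -s /usr/lib/libreadline.so.8 "$compat/libreadline.so.7"  # example link
LD_LIBRARY_PATH=$compat${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
```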
Alpine Linux 3.13 was released on 14-Jan-2021. Two days later, I successfully built a new VM for it, installed a large collection of packages, built and installed libsigsegv and Clisp from source code, and then successfully completed a TeX Live 2021 build, including Asymptote.
We have several small Wandboard Quad boxes used for dedicated network services and software development and testing. In 2014, each cost less than US$200. They contain a Cortex-A9 4-core 1GHz ARM rev 10 (v7l) processor, 2GB of DRAM, and a 32GB MicroSD card with an EXT4 filesystem. One of them, used for this build, has an external 128GB disk. They run Arch Linux (a rolling-release O/S) in little-endian addressing mode.
In 2021, the Linux kernel version is 5.5.2-1-ARCH, and the C/C++ compilers available on this system are from gcc-9.2.0. The Arch Linux package system for this machine contains TeX Live 2019. The build of TeX Live 2021 and Asymptote from the texlive-20210106 snapshot completed without problems.
Warning: There are numerous models of ARM processors, manufactured by several different companies, sometimes with differing subsets of the ARM instruction set architecture. Most can run with either big-endian or little-endian addressing, but which of those is in effect is determined by the operating system chosen for installation. Thus, even if you have an ARM system running some Linux distribution, you might not be able to use our TeX Live 2021 binaries.
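You can check the byte order of any Unix system with a short portable test; on x86_64 and on typical ARM Linux distributions it prints little-endian:

```shell
# Interpret the two bytes 0x01 0x00 as a 16-bit integer:
# 1 on a little-endian machine, 256 on a big-endian one.
if [ "$(printf '\001\000' | od -An -td2 | tr -d ' ')" = 1 ]; then
    echo little-endian
else
    echo big-endian
fi
```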
Vendor support for CentOS 6, first introduced in July 2011, ended in November 2020. Nevertheless, my site has a significant hardware investment in Sun Ray thin clients that require this operating system, and funds have not yet been found to replace them with newer workstations. In recent years, TeX Live builds on CentOS 6 have had many problems, and it has not been possible to build all of the expected executables. This year, surprisingly, saw some improvements: only Asymptote and xdvi are not buildable.
Because we intend to retire the CentOS 6 desktops, and repurpose their servers, as soon as financially possible, I do not want to spend time trying to resolve build problems for this operating system, but I will continue to make build attempts if major TeX Live 2021 source changes are made.
The IBM POWER, and later, PowerPC, CPUs were the first commercially successful RISC microprocessors (POWER stands for Performance Optimization With Enhanced RISC). POWER appeared on the market in the IBM RS/6000 workstation in February 1990, and was the CPU design that introduced the Fused Multiply-Add (FMA) instruction that is so useful that it is now a requirement of the 2008 and 2019 IEEE 754 Standards for floating-point arithmetic, and is a key component in much modern research in computer arithmetic.
The IBM System/360 family (later named System/370, System/390, and now zSeries or z/Architecture) introduced in April 1964 was the first to use byte addressing, and had the reasonable convention that memory is viewed as a linear stream advancing from 0 on the left to the highest address on the right; thus, the address of any multibyte object is always that of its first byte from the left, which is then the high-order byte of any numeric value.
POWER and PowerPC support that big-endian addressing convention, but because the DEC PDP-11, DEC VAX, and Intel x86 and x86_64 architectures address objects by their rightmost (and low-order) byte, a practice known as little-endian addressing, the IBM CPU designs also support that alternate convention, although a given system must be entirely in traditional big-endian order, or in little-endian order. Since then, ARM, MIPS, RISC-V, and SPARC CPU designs also support big-endian and little-endian addressing, although in their markets, only one or the other of those has been used in practice.
IBM's own AIX operating system on POWER and PowerPC is big-endian, but CentOS and Red Hat Linux distributions are available for both endian designs. My test lab includes VMs for both, because programmers have too often made false assumptions about byte order in multibyte objects, compromising software portability. TeX Live builds in previous years have on occasion found endian bugs in support software.
The main TeX Live 2021 build on CentOS 7.9 on PowerPC64 in big-endian addressing mode was problem free. The Lua JIT programs are missing, because they are not supported on that architecture. Asymptote failed to build, with the configure script claiming that the math library (-lm) is missing, which is certainly not the case. The problem will be reported to the Asymptote author.
This is the last release of CentOS for 32-bit x86, and support for CentOS 7 ends on 30 June 2024. Later releases of CentOS, Fedora, and Red Hat Enterprise Linux drop support for this architecture, and concentrate on x86_64, with some support for other CPU designs. The last Intel CPU limited to the 32-bit x86 architecture may have been the Pentium Dual-Core, introduced in 2007.
One of the earliest builds at Utah for TeX Live 2021 was on CentOS 7.9 (a free subset of the commercially supported Red Hat 7.9). The previous year's build script, with the year updated, was used, and the build failed in the icu Unicode support library due to C++ issues with static libraries.
A problem report on the developer list was quickly answered with news that a workaround had been found in recent years (but had never been necessary at Utah), and required only the addition of a configure-time option.
The build script was easily updated to use that option on CentOS 7 systems, and a fresh build, including Asymptote, ran flawlessly, using the locally installed gcc-7.5.0 compilers. The native g++ compiler on CentOS 7 is the 2015-vintage g++-4.8.5, which is too old to support the ISO C++ 2014 Standard whose features are used in some of the libraries that are compiled in TeX Live builds.
The TeX Live 2021 build on a PowerPC64 little-endian operating system proceeded much as on the big-endian system described in the preceding section, and the Asymptote build failed for the same reason. The Lua JIT programs are missing, because they are not supported on that architecture. The native compilers are from the gcc-8.3.1 family.
The TeX Live 2021 build with native compilers (gcc-8.3.1) on CentOS 8 was flawless, and the extra option needed on CentOS 7 is not required.
The TeX Live 2021 build with native compilers (gcc-8.3.0) on Debian 10.6 (code name buster) on little-endian PowerPC64 completed successfully, except for the Lua JIT programs, which are not supported on that architecture.
The TeX Live 2021 build with native compilers (gcc-10.2.1) on Debian 11 (code name bullseye/sid) on ARM64 (also known as aarch64) was problem free.
DragonFlyBSD was forked from FreeBSD 4.x in 2003 with the intent of significantly improving multiprocessor and multithreaded performance, and of developing a completely new filesystem, Hammer, capable of distributed storage replication and file-based snapshots. By 2019, the kernel had been demonstrated to be capable of running over a million simultaneous processes, and network throughput has been significantly improved over DragonFlyBSD releases just a few years older.
As of 2021, DragonFlyBSD still runs only on x86_64 (amd64) hardware. One may hope that, with the rise of ARM64 and RISC-V64 in the market, it may be ported to those CPU families in the future.
In late August 2019, a DragonFlyBSD developer announced an early implementation of dfbeadm, similar to the beadm (Boot Environment ADMinistration) command in the Solaris and some BSD families, to make it easy to capture the state of the entire system before a major upgrade and a reboot. If problems arise from the upgrade, one can use dfbeadm to revert to the previous boot environment, and reboot, restoring the system to its pre-upgrade state.
Our test laboratory has included virtual machines for DragonFlyBSD versions from release 3.2 in late 2012, up to the latest, 5.9, in February 2020.
Because of significant kernel and library differences, DragonFlyBSD is not capable of running FreeBSD executables, but its development team has an outstanding record of supplying a large prebuilt binary package repertoire (almost 32,000 packages at release 5.9), including the latest development versions of gcc 4.x to 11.x, as well as clang versions from 6.x to 11.x. I have found DragonFlyBSD a pleasant and reliable system to use. Generally, if software packages can be built on GNU/Linux and FreeBSD, they should also be buildable on DragonFlyBSD.
The DragonFlyBSD development cycle is fairly short, and binary package archives for older releases tend to disappear from the Internet, so users need to be prepared to run O/S updates at least a couple of times a year.
Until recently, the DragonFlyBSD package system did not supply clisp, but after installing some needed libraries, I built it without problems from source code with these commands:
% tar xf /some/path/to/clisp-2.49.92.p3.tar.gz
% cd clisp-2.49.92.p3
% ./configure --prefix=$prefix \
      --with-libffcall-prefix=/usr/local \
      --with-libiconv-prefix=/usr/local \
      --with-libpth-prefix=/usr/local \
      --with-libreadline-prefix=/usr/local \
      --with-libsigsegv-prefix=/usr/local \
      --with-libunistring-prefix=/usr/local \
      && gmake all check
% gmake install
After Clisp was added to the package system, all that is needed is
# pkg install clisp
The bin subdirectory of this Web site contains a complete set of TeX Live 2021 binaries, but only for the most recent versions of DragonFlyBSD. The first, for DragonFlyBSD 5.8.3, was built from a texlive-20210106 snapshot on a VM using the native Clisp. Builds for DragonFlyBSD 5.6.2 and 5.9 also succeeded. Builds for more DragonFlyBSD versions are in progress.
Fedora is the freely accessible development branch of the commercial Red Hat Linux. A build of TeX Live 2021 with gcc-8.3.1 compilers was successful.
The TeX Live 2021 build on Fedora 32 x86 with the native gcc-8.3.1 compilers was problem free.
The TeX Live 2021 build on Fedora 32 x86_64 with the native gcc-10.2.1 compilers was problem free.
Fedora 33 appeared in October 2020, and is the latest stable release of the freely accessible development branch of the commercial Red Hat Linux.
RISC-V is a family of open standard instruction set CPU designs developed, starting in 2010, by a research group at the University of California, Berkeley, with many collaborators from other institutions. The latest edition of the famous Computer Architecture books by John Hennessy and David Patterson devotes considerable space to the motivation for, and design of, the RISC-V family, and David Patterson and Andrew Waterman describe it in detail in their book The RISC-V Reader: An Open Architecture Atlas.
Although off-the-shelf desktop computers with a hardware RISC-V CPU are not yet readily available, there are numerous efforts in that direction, recorded here.
You can, nevertheless, run complete operating systems on software emulations of RISC-V, including FreeBSD, NetBSD, and the Linux distributions Debian, Fedora, Gentoo, Janus, OpenMandriva, openSUSE, and Parabola.
In a few hours, I was able to create and configure a Fedora 32 (Rawhide) RISC-V64 virtual machine, and install numerous binary packages on it, following clear instructions available here. It runs on my personal workstation, which is bootable into several different operating systems, currently Fedora 33 (Rawhide). The workstation CPU is an Intel Xeon Platinum 8253, and the virtual machine runs under virt-manager QEMU/KVM.
Because every RISC-V instruction must be emulated, the machine runs many times slower than a native x86_64 system, but the performance is nevertheless surprisingly decent. Tests compiling a large mathematical library on the x86_64 host versus the RISC-V guest, serially and then in parallel with 2, 4, 8, ..., 256 processes, showed that the emulated virtual machine runs 15 to 30 times slower.
In early 2021, there are about 50,500 packages available in the Fedora 33 RISC-V64 system, compared to about 56,500 for Fedora 33 x86_64. On Fedora RISC-V64, the package system supplies several versions of the gcc and clang compiler families, plus clisp, emacs, go, and TeX Live 2020. The system supports 8-, 16-, 32-, 64-, and 128-bit two's-complement integer types, and 32-, 64-, and 128-bit IEEE 754 binary floating-point types. Addressing is little-endian. The preprocessors for the C and C++ compilers define the symbol __riscv on this architecture.
In my build scripts, and in the SYS.texlive-2021.* scripts, I had to add test cases for the architecture name riscv64 reported by the uname -m command, but apart from that, the build of TeX Live 2021, excluding Asymptote, was uneventful. Asymptote does not build because it downloads other packages whose config.guess scripts are too old to recognize the RISC-V architecture.
A build of TeX Live 2021 with the native gcc-10.2.1 compiler family was flawless.
The TeX Live 2021 build with the native GNU compilers (gcc-7.3.0) on FreeBSD 10.4 AMD64 (= x86_64) was problem free.
The TeX Live 2021 build with the native GNU compilers (gcc-9.3.0) on FreeBSD 11.4 AMD64 (= x86_64) was problem free.
The TeX Live 2021 build with the native GNU compilers (gcc-9.3.0) on FreeBSD 12.4 AMD64 (= x86_64) was problem free.
The FreeBSD binary package repository is also the source of packages for several related BSD variants. Regrettably, even the latest FreeBSD development version only supplies TeX Live 2015. The annual builds available at this site, and in the TeX Live DVDs, for that system can help to remedy that serious deficiency.
The TeX Live 2021 build with the native GNU compilers (gcc-9.3.0) on FreeBSD 13 AMD64 (= x86_64) was problem free.
A separate build with the native LLVM Clang compilers (clang-9.0.1) was also successful. Consequently, two binary distributions are available for this system.
NetBSD runs on an astonishingly large number of hardware platforms (about 75), and we have in the past run it on amd64 (x86_64), i386 (x86), SPARC, and VAX systems. However, our current NetBSD systems are all on amd64 hardware, and we have machines running NetBSD versions from 5.x to 10.x.
On NetBSD 6.1.5 on amd64, the default C compiler, /usr/bin/gcc is version 4.5.3, and the latest version in the binary package system, /usr/pkg/gcc49/bin/gcc, is version 4.9.4 (2015). Both are too old for TeX Live 2021, but as an experiment, I forced the build to completion, ignoring errors, with make -i -k world. That produced 333 executables, but among the missing ones are all TeX-based programs. Thus, no binary package is provided for this operating system, which was first released in October 2012. The only NetBSD versions still supported by the NetBSD team are 8.0 and later.
The TeX Live 2021 build on NetBSD 9.1 amd64, including Asymptote, was flawless. The native compiler family is gcc-7.5.0. Builds were also successful on NetBSD 7.2 (with an additional package that provides gcc-7.4.0, because the native gcc-4.8.5 compilers are too old), and on NetBSD 8.2 (gcc-5.5.0 compilers).
On NetBSD 6.1.5, the latest compiler version available is gcc-4.9.4, and that is too old to compile some of the TeX Live 2021 code.
Our NetBSD 10 system is a development version of NetBSD that is not yet on a release schedule, but based on past history, might become official in 2022. Its /etc/release file, and the output of uname -r, both report a version of 9.99.46.
The build of the main part of TeX Live 2021 with the native gcc-8.3.0 compiler release was successful. Asymptote built as well, but its test suite hung several times: a top display showed no activity, suggesting that the tests were waiting for some event without using CPU time. The asy executable is nevertheless included in the binary distribution from this system.
Unlike most other O/S distributions, OpenBSD makes no guarantee of upward compatibility of compiled binary executable programs in major and minor releases. Thus, locally installed software that requires compilation must be rebuilt at each OpenBSD release. Our test site has OpenBSD versions from 4.0 (November 2006) to the latest 6.8 (October 2020), but the master site provides downloads only for the latest O/S version, and packages only for the last five minor versions. Unlike many other O/S distributions, between-release package updates are not common on OpenBSD, so the system remains relatively stable for months at a time.
The default build on OpenBSD 6.8 x86_64 with native GNU compilers failed because they are far too old: gcc-4.2.1 from 2007. A restart with the native LLVM Clang compilers (version 10.0.1) produced a successful build of all but Asymptote.
The OpenBSD 6.8 package system has a gcc-8.4.0 compiler, but it lacks a companion C++ compiler, making it useless for TeX Live builds.
Much of Asymptote compiled with the Clang compilers, but eventually, a compilation error terminated the build. The issue will be reported to the code's author.
Builds for OpenBSD 6.4, 6.5, 6.6, and 6.7 were partially successful, but more work needs to be done to resolve the several missing executables.
In mid-2020, the openSUSE project released a new distribution called Jump. Like the Tumbleweed distribution, Jump is also a rolling release.
The TeX Live 2021 build with the native gcc-7.5.0 compilers was flawless.
The TeX Live 2021 build on openSUSE Tumbleweed x86 with the native gcc-10.2.1 compilers was flawless.
The TeX Live 2021 build on openSUSE Tumbleweed x86_64 with the native gcc-10.2.1 compilers was flawless.
Oracle Linux 7 is based on Red Hat Enterprise Linux (RHEL) 7, and the main build with the locally installed gcc-7.4.0 compiler family succeeded. However, Asymptote did not build because its configure script fails to find the math library: the same flaw as seen on CentOS 7.9 PPC64BE.
The main TeX Live 2021 build on Oracle Linux 8 with the native gcc-8.3.1 compiler family succeeded, but Asymptote failed to build because it could not find the math library: the same flaw as seen on Oracle Linux 7.9 x86_64 and CentOS 7.9 PPC64BE.
Peppermint 10 is a Linux distribution derived from Debian 10 (code name buster/sid) and Ubuntu 18.04 (code name bionic/beaver). The build of TeX Live 2021 with the native gcc-7.5.0 compiler family was completely successful.
Oracle Solaris 11.4 (build ID 18.104.22.168.1.82.3 of 15 December 2020) on i86pc (the designation for combined i386 and amd64) now has modern compilers, with gcc-10.2.0 and clang-10.0.0 families, and the main TeX Live 2021 build was flawless, except for the programs that need Clisp. Further investigation found a flaw in the OpenCSW packaging system: the executable /opt/csw/lib/clisp-2.47/base/lisp.run has references to missing libraries:
# ldd /opt/csw/lib/clisp-2.47/base/lisp.run
        ...
        libavcall.so.0 =>        (file not found)
        libcallback.so.0 =>      (file not found)
        ...
I resolved that problem by creating two symbolic links:
# cd /opt/csw/lib
# ln -s libavcall.so.1.0.1 libavcall.so.0
# ln -s libcallback.so.1.0.1 libcallback.so.0
A subsequent fresh build successfully created the missing Clisp-based executables.
Asymptote failed to build, with the configure script falsely claiming that the math library (-lm) is missing.
This VM runs on the IBM mainframe architecture s390x, a recent member of the world's oldest upward-compatible architecture that began with IBM System/360 in April 1964. See earlier remarks about the two main IBM architecture families on which Unix systems can run. The current IBM mainframe systems are part of the IBM z/Architecture, where z means zero downtime, due to replication and automatic failover of critical components.
System/360 marked the first time that a commercial computer vendor offered a range of software-compatible machines. Until then, each vendor's new machine model was sufficiently different from its predecessor models that software largely had to be programmed anew. IBM's System/360 product-line idea changed the entire computing industry, and by the mid 1970s, most computer vendors offered a range of models that promised upward compatibility. CPU designs also became largely model independent, with the DEC VAX enjoying a two-decade run, and the Intel x86 and x86_64 a four-decade run.
System/360 was the first architecture with byte-addressable memory, and it initially supplied support for 8-, 16-, and 32-bit integer arithmetic, and 32- and 64-bit floating-point arithmetic with hexadecimal normalization. By 1972, 128-bit floating-point arithmetic was added and implemented in hardware in the larger models, and the Fortran compiler was extended with an Automatic Precision Increase option to move floating-point code to the next higher precision.
By the end of the 1990s, the architecture was extended with support for 64-bit integer arithmetic, and for IEEE 754 floating-point arithmetic in three sizes each for binary and decimal formats. Over its life, the address space has grown from 24 to 31 to 64 bits. Although it has always used big-endian addressing, support has been added for addressing of little-endian data.
Regrettably, the Fedora and Ubuntu operating systems on s390x do not provide compiler access to the original hexadecimal floating-point types; they support only the IEEE 754 formats.
IBM was the first commercial computer vendor to introduce the idea of virtual machines: the earliest reference to that work that I can find is dated 1966. Although it took about three decades for the importance of virtualization technology to be understood by the rest of the computing industry, my build farm at Utah, and the whole industry of cloud computing, depend on virtualization. With adequate numbers of CPU cores, and substantial DRAM, my home and campus office workstations each run 60 to 80 VMs simultaneously, and repurposed older servers run hundreds more.
During the TeX Live 2021 build work, I had to add a virtual disk to one VM that needed more space: it took a one-line command to shut down the machine, another one-line command to create a new disk, a few mouse clicks and a file selection to add the disk to the VM, and two mouse clicks to restart the VM. It then took only a couple of minutes of work to format and mount the new disk, and to add it to the system's /etc/fstab file that records filesystems to be mounted at boot time.

On several VMs where I had previously allocated only 1GB to 2GB of DRAM, I needed more memory for the TeX Live builds: it took a quick VM shutdown, typing the new memory size (4GB) into a box (4 characters), and a button click to restart the VM. By contrast, on two physical machines at home, I had to wait a week for new DRAM cards to arrive by express order from a factory in China, then wait several more days for busy machines to become less busy, shut down the systems, disconnect and open their boxes, replace the old cards with the new ones, reassemble and recable the boxes, and power them on.

Except for poor I/O performance, VMs are often an excellent solution for computing, giving access to additional platforms, to software that is unavailable on your own system, to testing of software portability, and to testing of system updates in an easily recoverable VM before applying them to a physical machine that may be much harder to restore if the update causes problems.
C/C++ compilers on s390x define the symbols __s390__ (31-bit addressing) and __s390x__ (64-bit addressing) to allow compile-time detection of the architectures. Since 2015, the Linux kernel has supported only the 64-bit model.
Available Linux distributions for s390x include Alpine, CentOS, Debian, Fedora, Gentoo, Red Hat Enterprise Linux, Slackware, SUSE, and Ubuntu.
There do not appear to be any BSD family operating systems for s390x, although I found one effort that appears to be dormant.
During extensive testing of numerical software on Ubuntu 20.04 on s390x, I found that my tests could crash the entire VM from unprivileged user code, often with ensuing file corruption. I eventually narrowed the crash down to a one-line C program that, when compiled and run, caused the crash. The bug was in the emulation of the 128-bit IEEE 754 square-root instruction, and was reported to the QEMU developers on 17 June 2020. Two days later, they found a fix that changed only a single line of code in QEMU's qemu-system-s390x emulator, but regrettably, it took almost six months for the fix to reach the Ubuntu package system. Since then, the system has been stable, and no further surprises have turned up in my porting of large amounts of software to that machine.
The build of TeX Live 2021, including Asymptote, but excluding Lua JIT programs, and xindy, on Ubuntu 20.04 LTS on s390x with the native gcc-9.3.0 compilers was successful. The Lua JIT programs are missing, because they are not supported on that architecture. The xindy build failure happens because clisp suffers a segmentation fault.
The TeX Live 2021 build with native compilers (gcc-9.3.0) on Ubuntu 20.04 LTS (Long-Term Support, until April 2025) (code name Focal Fossa) on x86_64 was problem free.
The TeX Live 2021 build with native compilers (gcc-9.3.0) on Void Linux with the MUSL C library (instead of the usual GNU C library) had no issues, and Asymptote built as well.
The MUSL library was written beginning in 2011 to provide a clean, independent, and POSIX-conforming redesign of the C run-time library. Besides Void Linux, other Linux distributions that use MUSL include Alpine, Dragora 3, Gentoo, and a few others: see Wikipedia for more about MUSL.
The build attempt of TeX Live 2021 on the 32-bit companion of the 64-bit Void Linux system ran into problems: the zlib compilation exposes a 64-bit off64_t type that does not apply to this platform. Perhaps this is due to differences in the default preprocessor symbol definitions of MUSL versus the much more common Glibc: further investigation is needed. I forced the build to continue in nonstop mode with make -i -k world in the source/Work directory, and was later able to build Asymptote without problems. However, as a result of the earlier failures, there are 32 missing executables, most seriously LuaTeX and its relatives, several font tools, and the xdvi previewer.