My first computer, which I got second-hand from my father around 1990, had an OS so ancient I can’t remember what it was called. Almost impossible to credit, it did have a graphical user interface, which was amazing (and amazingly useless). It also had an impressive 512 KB of RAM, proudly announced by its unforgettably German model number, 1512K (the “1” prefix probably meant it was a somehow improved model). What it didn’t have was a hard disk. When you wanted to do something as simple as write a letter, you had to insert a series of wobbly 5.25″ floppy disks into the drive, one containing the OS, the next the GUI, then one with the text editor, and finally one to save the file on.
My second computer was an 80386 I purchased with my own money (and quite a lot of it, computers were still expensive), I think in 1992. It did have a hard disk, with the incredible size of 170 MB instead of the then-standard 60 to 80 MB, because I wanted one so large it would last me 10 years (or so I imagined, rather naïvely). It also had an astonishing 4 MB of RAM, which was enough to play graphically rather intensive games, provided you knew your way around config.sys, autoexec.bat and EMM386 well enough to design boot configurations that actually made all that massive RAM available. A configuration like that could then do little else, so you kept several of them on several boot disks. This was a time when even a “normal” user had to be able to do a few things under the hood!
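From memory, one of those gaming boot configurations in CONFIG.SYS looked roughly like this; driver paths and options varied from machine to machine, so take this as a reconstruction rather than gospel:

```
REM CONFIG.SYS for the "big games" boot disk (reconstructed from memory)
DEVICE=C:\DOS\HIMEM.SYS
REM the RAM switch provides EMS plus upper memory blocks
DEVICE=C:\DOS\EMM386.EXE RAM
REM load DOS itself into the high memory area, use upper memory
DOS=HIGH,UMB
FILES=30
BUFFERS=20
```

Loading DOS and drivers high was the whole game: every kilobyte freed below 640 KB was a kilobyte more for the actual program.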
This computer came with DOS 5.0 and Windows 3.1, which makes me a Windows user of over 20 years’ standing. I have seen it all: Win95; Win98 with its dreaded error messages that always popped up seemingly out of the blue and infamously left the user with three virtually indistinguishable choices, all of which crashed the system; Win ME, which came pre-installed on my first laptop computer and was so horrible I learned how to set up a dual-boot system so I could have W2K on the second partition; then Win XP, my favorite Windows of all time; and finally Win7, which I installed on my first self-designed desktop computer, a fast machine with a Xeon chip and an SSD, in January 2014.
A year after getting my last Windows machine, I started to learn how to program and gradually realized that the de facto standard for using desktop computers was not necessarily the first choice for doing more involved stuff. Whatever tool you needed, starting with things as simple as a text editor or a C compiler, came with installation instructions that looked deceptively simple, except they were not written for Windows but for something I slowly found out was usually Linux. And when you insisted on installing such tools on a Windows system, it involved laborious workarounds, and in the end they often didn’t quite work as intended, or at least not quite as easily as the standard instructions suggested. In short, Windows suddenly seemed like rather the wrong environment, and an unnecessarily complicated one to boot.
Then again, Windows, though hated, was entirely familiar, whereas Linux sounded like a maze of oddly named little tools controlled primarily from the command line. And while I was not necessarily inclined to run away screaming when I saw a terminal window (I had started on DOS, after all), I had become quite used to the convenient world of graphical one-click installers. At the very least, acquainting myself with a new OS sounded like a major effort. Not to mention that, as I found out only slowly, there is not one “Linux”, but a multitude of distributions, each with an entirely different look and feel and even different software packages.
Then on the other hand, since my life was radically changing anyway, this was definitely a time to try something new. And somehow Linux seemed like the thing to do once you got serious about programming. The nerd thing, in a way. A little challenge, too. And since I was already familiar with dual boot systems, there was little risk involved. I could just keep my Windows next to a Linux partition.
So I did some reading, got myself a few live CDs for popular distros, and after a few trials and errors settled on the very mainstream Linux Mint, then in version 17.1, with the Cinnamon desktop. Not a big step from Windows, as anybody who has used both will readily admit. In particular, on Mint there is almost no need to ever use the command line, as there are cute colorful GUIs for almost everything, including installing software. In fact, using Linux Mint felt almost easier than using Windows, particularly since one didn’t need any virus scanners or malware tools, surely the bane of any Windows system. Just before my first term at UAS started, I felt confident enough to put Mint on my laptop computer, without dual boot this time. Once I had defeated the WPA2 encryption of the university network (though to this day I am baffled by the fact that the configuration that worked for me was not even close to the one recommended on the university website), I complacently enjoyed having just one click away what my co-students often needed several Windows-specific workarounds to install on their systems.
For that was the first surprise. In line with their generally being not nearly as nerdy as I had expected, my co-students are as a rule Windows users. In actual numbers: in a semester group of about 60 to start with, there were–and are–three people, including me, who are at least also using Linux. (As far as I can make out, nobody seems to have a Mac.)
The second and greater surprise was that the CS department at UAS also uses, recommends, and distributes Windows products. The computers in the labs are at least dual boot (Windows and Ubuntu), but the CS department’s technical wiki contains instructions practically exclusively for installing software on Windows systems, detailing every single click and illustrating every step with screenshots, whereas Linux users are left to figure things out for themselves. That’s basically OK, since for one thing installing programming software is a great deal easier on a Linux machine, and for another, Linux users can usually be depended upon to be more self-reliant in these things. Still, it’s an odd signal to send to budding computer scientists. In the CS 101 lectures we hear how important Unix is and how great open source software is, but in reality the department emphasizes Microsoft products. We even get a big software package for free, as a result of UAS being a Microsoft premium partner or some such thing.
My relationship with Linux was–and is–not without its ups and downs. For a good while I continued to use the Windows partition on my desktop computer for gaming and for video editing (I do video DVDs of the kids for their grandparents). Just recently, after much trial and error, I have finally found a halfway convenient workflow for doing the videos on Linux, but it’s still rather involved, compared with the professional all-in-one solution I had been using on Windows. As with all things Linux, it also forces you to learn a lot about what’s going on under the hood, but as a matter of fact I am not nerdy enough to really want to learn video editing at the command-line level.
What really baffled me after a while was that I found myself installing more and more software manually instead of through the package manager of my Linux distro. Mint is Ubuntu-based and uses that distro’s repositories, which means that most of the versions you get there are really very old, and that goes particularly for programming tools. I remember that when I installed my first Linux, my favorite programming language, Ruby, was at version 2.2. The repository offered me 1.8.3, and that’s really an entirely different language, and years out of date. And each outdated version of a piece of software had repercussions throughout the system. It’s nice to be able to install software with a single terminal command, but not if you end up in a vicious circle of ancient tools which then refuse to cooperate with other, more recent software, or are unable to accommodate current extensions. To give just one example, the repository version of my favorite editor, Emacs, occasionally did not work with plug-ins provided by its own package manager because they required a more recent version of the editor. So either I had to find older versions of the plug-ins and install those manually, or conversely install a newer Emacs manually so as to be able to use its package manager for the current plug-ins. In either case I was forgoing most of the advantages of managing software on Linux. It should have been easy, but in fact I soon had a confusing maze of manual installations all over the place (every installation routine seemed to put its stuff in yet another folder), a nightmare to keep track of and update.
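What would have kept that maze in check, as I learned much later, is giving every manual build the same user-local prefix, so at least everything lands in one predictable place. A sketch for a typical autotools-based source release such as Emacs (the version number and paths are just examples):

```shell
# unpack and build into ~/.local instead of scattering files around
tar xf emacs-24.5.tar.xz
cd emacs-24.5
./configure --prefix="$HOME/.local"
make
make install
# make the user-local binaries win over the repository versions
export PATH="$HOME/.local/bin:$PATH"
```

One prefix means one folder to search, back up, or delete, which is about as close to package management as hand-built software gets.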
The last straw came a couple of weeks ago, when a co-student, one of the two other Linux users, asked me for help with his problems getting Gradle, a popular software build tool, to run on his Ubuntu machine. I tried to reproduce his problem and found that while the current version of that tool is 2.11, the Ubuntu repositories were offering Gradle 1.4. That thing is so old it doesn’t know how to work with Java 8! That major version of Java has now been out for nearly two years and has brought such a paradigm shift that it’s really impossible to use an older one today. What use is a build tool that accommodates only software so ancient nobody uses it? It’s a joke. Maybe Ubuntu and its derivatives are OK for people basically using their computer as a typewriter of sorts, but if you do anything programming-related at all, it just won’t do.
That same day our youngest daughter, a clumsy but determined two-year-old, yanked the power cord of my laptop computer, bringing it crashing down onto the hardwood floor from about four feet up and seriously damaging the hard disk. That machine is nearly four years old, so it was nearing the end of its natural life anyway. Unless, that is, I did something radical to make it fit for maybe another couple of years.
Since this was still during the vacations, I got really nerdy (and really cocky, maybe) and jumped right in at the deep end. I bought myself an SSD to replace the hard disk and did the Linux nerd thing: I installed Arch, the everything-manual, everything-command-line, all-do-it-yourself distribution. Those were a tense two hours, to be sure, even with three step-by-step guides open on the big screen of my desktop. But hey, I got it to run. And then I browsed the Arch repositories with my mouth watering: it was all there, the most recent version of practically everything, including all the programming tools. Ruby 2.3. Eclipse Mars.2. Emacs 24.5. Gradle 2.11. And so on, and so on.
Except I couldn’t really get my Arch install to work properly. It was nothing really big, just the small but nevertheless annoying things that a complete out-of-the-box distro takes care of automatically. The fans on my laptop, for instance, would always run at full speed. And I had a hard time finding a way to easily switch keyboard layouts. I use the German layout for writing, so as to have the umlauts available, but the US layout for programming, since the German layout hides things as simple as a square bracket or a backslash behind impossible CTRL-ALT-number key combinations. When I edit German text in a LaTeX file, for instance, I switch every ten seconds or so, so I need a single key for that.
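For reference, the usual X11 answer is a single setxkbmap call that loads both layouts and assigns a toggle key. The option names come from the standard xkb rules; sacrificing Caps Lock as the switch is just one choice among several:

```shell
# load German and US layouts; Caps Lock cycles between them
setxkbmap -layout "de,us" -option "grp:caps_toggle"
```

Put into a startup file like ~/.xinitrc, it survives reboots; other toggle options such as grp:alt_shift_toggle exist if Caps Lock is too dear to give up.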
The killer, though, was that I couldn’t get my Arch laptop into the UAS WLAN. I tried everything for about four hours, but no go. Accessing a WPA2 network using connman on the command line, manually editing configuration files and all, was simply beyond me. And in the process I messed things up so badly that afterwards I couldn’t get into my home network either.
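For the record, connman reads service definitions from files under /var/lib/connman/. A WPA2-Enterprise network of the kind universities run is declared roughly as below; every value here is a placeholder, and the exact EAP parameters were precisely the part I never got right:

```
# /var/lib/connman/uas.config  (all values hypothetical)
[service_uas]
Type = wifi
Name = uas-wlan
EAP = peap
Phase2 = MSCHAPV2
Identity = myusername
Passphrase = mypassword
CACertFile = /etc/ssl/certs/uas-ca.pem
```

Which EAP method, inner authentication, and certificate the network actually expects is exactly what a desktop network manager negotiates for you, and what you have to know by heart here.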
Arch was certainly cool, but I was in over my head. So after two days I gave up and backpedaled. The second best thing to Arch is obviously a distro that offers the same user software but is less of a hassle to set up and maintain. I wiped the SSD clean and installed Manjaro (an Arch-based distro that mysteriously, instead of actually using the Arch repositories, copies them into repositories of its own, but it’s the same software in the end), and that really was a breeze. It took a moment to get used to the Xfce desktop, and while I got into the UAS WLAN easily, the configuration that worked this time was again different both from the one recommended by UAS and from the one I had used before; very baffling. There were some minor problems with software: Eclipse Mars at first refused to work except at a snail’s pace and with icons on a black background, but there was a simple solution easily found on the internet (one needed an older version of GTK).
The only thing that defied me for hours was getting a Ruby version manager to run. In the winter term we had done a few Ruby programming exercises that needed a GUI, and the graphical toolkit Tk (a badly documented nightmare in the best of cases) simply refuses to work with Ruby 2.2 or higher. Yet Manjaro/Arch has Ruby 2.3. So I needed at least one older Ruby (the standard in our programming class was 2.0). But both common solutions, rvm and rbenv, use ruby-build, and that one crashed on me every time I installed any Ruby version at all. After exhausting all the useful advice offered in countless internet forums (use this installation option, or that; install the following dependencies, all of which Manjaro has out of the box anyway; analyse a log file full of dramatic error messages; and so on), I was about ready to give up when, as a last resort, I tried one solution that looked absolutely unpromising, involving a user-designed patch to be invoked from the command line, as messy as it gets: and this one worked, and not just for the Ruby version it was advertised for, but for all older versions. Eureka!
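For anyone facing the same fight: the unpatched rbenv route, the one that should work and on most distros does, is only a handful of commands. What follows is the standard upstream procedure, not my eventual workaround, which I won’t even try to reproduce here:

```shell
# install rbenv and the ruby-build plugin from their GitHub repos
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"

# compile an old Ruby next to the system 2.3 and pin this directory to it
rbenv install 2.0.0-p648
rbenv local 2.0.0-p648
```

The `rbenv install` step is where ruby-build compiles from source, and where my installation kept crashing.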
By the by, a welcome side effect of using Manjaro (or the new SSD, or both) is that it dramatically alleviated my battery problem. Mint, or Cinnamon, seems to be a veritable resource hog. With the old setup my battery barely lasted 3 hours, even when I reduced screen “brightness” to almost total darkness. Now I get a comfortable brightness for nearly five hours. And all in all, right now I am basically very happy with my old laptop computer. I’ll yet have to see, of course, how long I can keep being one of the few Linux users in a Windows-dominated environment. It might be that once I start working on actual university projects with others I will have to bite the bullet and use Microsoft products again. With the AI professor, C# seems to be the programming language of choice. But right now I am fine. And somehow, sometimes, I just feel a little like a proper nerd.