My early computer contact included teaching myself FORTRAN IV and PDP-10 assembler in the late 70s, while I was a Gymnasium student. At the time, my school cooperated with a local research institute. I was a member of a group of pupils who were granted access to the institute’s mainframe computer, a PDP-10, during off-hours on Saturday mornings.
The next ten years or so I spent in academia, earning a Ph.D. in Mathematics. I also gathered quite a bit of computer experience along the way, as I
picked up more programming languages than I care to remember, including ALGOL 68 and C,
worked as a teaching assistant for several computer courses, including C and “algorithms and data structures” (the textbook was the classic by Robert Sedgewick), and for numerical mathematics,
learned to love the UNIX philosophy and ecosystem,
dabbled in networking. For example, I would occasionally talk SMTP over telnet to send an email as “emperor of China” to a friend, to demonstrate how easily email sender addresses can be forged.
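The trick works because classic SMTP simply believes whatever sender the client claims. A minimal sketch of the client side of such a session (all host names and addresses here are made up for illustration; in those days one typed these commands by hand over `telnet mailhost 25`):

```python
def forged_smtp_session(helo_host, fake_sender, recipient, body):
    """Return the client side of an SMTP dialogue with a forged sender.

    Classic SMTP (RFC 821) performs no authentication: the server
    accepts the MAIL FROM address, and the From: header, on faith.
    """
    return "\r\n".join([
        f"HELO {helo_host}",
        f"MAIL FROM:<{fake_sender}>",              # the forged envelope sender
        f"RCPT TO:<{recipient}>",
        "DATA",
        f"From: Emperor of China <{fake_sender}>",  # the header can lie, too
        "",                                         # blank line ends the headers
        body,
        ".",                                        # lone dot terminates DATA
        "QUIT",
    ])

session = forged_smtp_session(
    "example.org", "emperor@china.example", "friend@example.net",
    "Greetings from the Forbidden City!")
print(session)
```

Modern mail systems counter this with SPF, DKIM, and DMARC, but the basic dialogue has stayed the same.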
The 90s: Simulations, Linux, network, CORBA
In 1990, I turned my fascination for programming into a profession, and I have been a software developer ever since.
The 90s saw me leading simulation projects: discrete simulations and logistics algorithms, with occasional excursions into planning and optimization. I also learned object orientation and C++, and later taught both to my colleagues.
These were also the early days of Linux. My initial Linux servers provided early documentation systems (nowadays you’d use a wiki), helped us dial into customer systems via modems (this was when mailboxes and BBSs were cool and exciting), and they soon provided the department’s first internet connection. At the time, you had to walk down the hall to a separate office to reach the internet. Later in the decade, the first web applications also ran on Linux (coded in Perl).
Back in the early 90s, I tinkered with installation .bat files for floppy disks. This allowed us to simply send the floppies to our customers by mail, instead of installing each new version of our software on their PCs in person. A win-win scenario resulted: we were more at liberty to “release often”, and our customers felt more in possession of the software they had paid us to develop.
During those years, I earned some reputation as a build system expert, and also as a maintainer and troubleshooter for the various version control systems we used.
My interest in networking also continued, and by and by I became an expert in networking and network protocols. One highlight was opening existing software (which did heavy-industry factory-floor automation) to network access. CORBA was the requirement of the day. Unfortunately, that automation software ran on VAX machines under OpenVMS, an environment lacking multi-threading functionality. I was given leadership of a three-person project that, over the course of a year or so, delivered a (partial) ORB that was CORBA-compatible in the functionality it had. The architecture we came up with would nowadays be called event-based. That ORB was successfully employed in several distributed projects.
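The event-based approach is what makes such an ORB possible without OS threads: a single thread multiplexes many in-flight requests by dispatching events to handler callbacks, each of which runs to completion and then returns control to the loop. A minimal sketch of the pattern (in Python, purely for illustration; the original was certainly not Python, and all names here are hypothetical):

```python
from collections import deque

class EventLoop:
    """Single-threaded event dispatcher: concurrency without OS threads."""

    def __init__(self):
        self._queue = deque()  # pending (handler, payload) events

    def post(self, handler, payload):
        """Enqueue an event; it will be dispatched on the loop's one thread."""
        self._queue.append((handler, payload))

    def run(self):
        """Dispatch events until the queue is empty."""
        while self._queue:
            handler, payload = self._queue.popleft()
            handler(payload)  # each handler runs to completion, then yields

# Two "remote invocations" interleaved on a single thread:
results = []
loop = EventLoop()
loop.post(lambda req: results.append(f"reply to {req}"), "request-A")
loop.post(lambda req: results.append(f"reply to {req}"), "request-B")
loop.run()
print(results)  # → ['reply to request-A', 'reply to request-B']
```

The same idea, with the queue fed by network readiness events instead of direct posts, underlies today’s async frameworks and event-driven servers.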
The 00s: The enterprise years
Java started to rise in the late 90s. In the first years of the new millennium, it ruled supreme.
Hard to believe nowadays, but at the time, Java was a herald of progress: it no longer took two years for newcomers to achieve proficiency in the language, as it had with C++. Gone were the days of memory leaks, or so we initially hoped. An ever-expanding ecosystem offered canned solutions for many everyday problems. For a brief moment, we even fancied this brave new world would offer us easy concurrency and parallelism.
Also, enterprise became the next big thing. A few years earlier, “10 people” had been a large project. Now, “100 people” was ordinary. These were the days of off-shoring and multi-continent projects. Envision huge tankers plowing the waves with majestic (though mediocre) speed, amazing sights to behold, burning fuel like crazy, hard to steer and control.
At the time, I worked for a consultancy agency that would hire me out as manpower into such projects. (Though our official communication tried hard to make it appear more glorious than that.)
A typical operation started with my being taken on board as an enterprise developer. Some two or three months into the project, having usually earned some reputation for improving builds and changing the test infrastructure for the better, I might be found designing or coding a network interface to some third-party or legacy software, documenting “current best practice” for version-management branching and merging, or taking care of one migration or another.
There was demand everywhere for the stuff I liked to do and knew how to do, and also for new stuff I could pick up as fast as the next person. To this day, I trust in the ability of interesting problems to find their way to me. I like my job, and teams generally find my expertise valuable.
The 10s: Distribution and magic
As time progressed, the industry learned that “big vessel” software, the hallmark of early enterprise projects, carries in its belly its own characteristic set of problems.
To mention just one among several, such software has a way of creating time bubbles. As new technology comes along, one would love to, say, replace the vessel’s engine. That is easy to desire, but hard to do. Typically, the company’s very survival depends on the ship’s ceaselessly plowing the waves!
Today, gone are those days of majestic vessels ruling the seas. While you still see them, the more common thing is a whole flotilla of much smaller ships, which cooperate intelligently to get the job done.
This mitigates the “time bubble” problem: modernization can now be accomplished one small ship at a time.
“Intelligent cooperation” within the flotilla requires communication from ship to ship. These days, almost every project is distributed.
I upgraded to working for a new consultancy company, which enjoys a threefold reputation: for knowing how to organize flotillas, for getting network communication right, and for digging projects out of time bubbles (and similar problems).
As for modern engines to power our ships, we now have at our disposal a good store of friendly, powerful magic. A minimum of configuration serves as the magic incantation. It summons up robust, tested constructions, addressing standard problems with standard solutions. One can focus on the remaining problem, the unique, non-standard demand at hand.
I enjoy well-constructed magic. I appreciate the time-saving it brings, as anybody does. For me in particular, a real fun part is the analysis. I start with the obvious question: which problems do they consider standard? Beyond that, I consider it an intellectual treat to open the hood and peek inside. How do they go about it?
And then (rather occasionally), I construct magic myself. I may find an abstraction that allows others to just use it. I take care of the gory details, so they don’t have to. Seeing such magic being used matter-of-factly is a satisfying experience indeed!
These modern times are for me!
I’ve always enjoyed designing and coding network communication and remote interfaces. Never has network technology been more commonplace, more in demand than today. Most likely, the demand will increase even more, given current trends like Industry 4.0, digitalization, and IoT.
Software nowadays goes distributed in grand style. At one extreme, I’ve been involved in projects with worldwide deployment of software, servicing masses of customers on two, three, four continents. At the other extreme, brittle radio links connect power-constrained IoT devices.
Either way (and many other ways in between): Going distributed raises lots of interesting problems. I can reasonably hope to continue receiving my fair share of those, and look forward to tackling them.
Flotillas of small ships cruise the oceans. Applications are distributed across continents, into clouds, onto IoT devices. For this, of course, a lot of shipbuilding and distribution needs to be going on. “Automate, automate, automate”, the battle-cry resounds.
Recently, I automated installing entire clusters (operating system, cluster management software, and related services on diverse hosts) on top of a cloud infrastructure. Compare this with the humble beginnings of installing a single application on a single PC some 25 years ago. Solving automation problems has always been a satisfying experience. These days, it gets better and better.
In particular since there is a lot of friendly magic in the air. The words “cluster management software” and “cloud infrastructure” stand for quite a handful of such magic, waiting to be wielded.
It must be said that some of this is rather “bleeding edge”. There are still quite a few rough edges that time will have to grind smooth. Today, we need to solve pesky little problems that future decades will have little reason to bother with, or even know about. These are days for pioneers.
I like being a pioneer.