The Declarer (Floyd McWilliams' Blog)

Thursday, November 27, 2003


Evan Kirchhoff liked my essay on outsourcing, and wrote a post in which he explained why he said that "all currently lucrative programming skills will collapse to zero value over the next 5-20 years". Evan predicts a future in which


... a working American programmer in 2020 will be producing something equivalent to the output of between 10 and 1000 current programmers (even if the number of programmers actually increases overall). Hence, lucrative 2003 skills like being a really good C++ or Java programmer will have gradually fallen to a small fraction of their current value, or effectively zero.

I think current programming skills will vanish by 2020. In the world of 2010, I expect us to be at a midpoint where 2003-style programming still exists, but pays somewhat less than being a plumber, and hence is largely performed in lower-cost countries.


As Clint Eastwood once said, in a slightly different context, "That's not gonna happen." I expect to see many strange events in the year 2020 -- I can imagine lecturing my teenage son, "You can't leave the house with your penis hanging out of your pants," and him replying, "But Dad, all the kids are doing it." Certainly the specifics of software development will change; we won't be programming in Flash or Java or C#.

Evan's vision of the future is a lovely one. But the current state of software development is far too primitive to allow such improvement to occur in the next two decades.

Computer science is in the same state today as chemistry was three centuries ago. Around the year 1700, Georg Stahl theorized that combustion was akin to rusting, and that both phenomena occurred when the fuel or metal being consumed gave up a substance called phlogiston to the atmosphere. An obvious objection to this theory was that while wood lost weight when it "released phlogiston," iron actually gained weight when it rusted.

What was the reaction of contemporary chemists to this contradiction? They did not care. Modern-day software development is in the same sort of pre-scientific state. There are thousands of project managers happily planning traditional schedules for their development process, even though this process does not work and has never worked. A manager who enters sub-tasks into Microsoft Project is a respectable-looking fraud, just as Isaac Newton was when he wasted his time trying to transmute base metals into gold. It looks scientific when a precise-looking document states that Developer Joe Blow will work on cache management from Feb 5 to Feb 19, 2004. In real life Joe Blow will take until March to complete the task, or will think he is done but have to come back and redo it a month later, or will spend February trying to determine what on earth is wrong with his stored procedures.

For that matter, there are managers who happily deal with a late project by adding more developers to it. This mistake was first criticized by Brooks in The Mythical Man-Month -- nearly thirty years ago!

Evan criticizes Extreme Programming because one of its tenets is that developers write code in pairs. I am neither a proponent nor a critic of XP, which I assume is popular with the geek community because its project management philosophy is a complete repudiation of traditional software development practices. I just think it's hard to imagine how Evan and I could have a meaningful discussion about pair programming efficiency, given that neither one of us has any metric for measuring the effectiveness of a programmer!

One reason that software development is in such a mess is that software becomes obsolete every five to twenty years as the capabilities of computer hardware expand exponentially. Suppose that early factories had cast their output from a new and revolutionary metal every decade: in 1800, bronze; in 1810, iron; in 1820, stainless steel; in 1830, titanium; in 1840, unobtainium. It's likely that any progress in manufacturing technology would have been stymied as factory owners rushed to tear down and rebuild their machinery to handle the capabilities of each new material.

A little background to my pessimism is in order. I work in enterprise software development. In my field, we sell software (usually written in Java or C#) to large corporations to be used by thousands of their employees. Enterprise software implements an important business function (training management, product design, supply chain management), and often replaces custom software developed by the customer in-house.

Because the customer is paying a lot of money for the software, and because they have probably tried to write the software themselves, they demand specific custom functionality. No one could possibly plan for all these features in advance, so no one is able to sell an existing off-the-shelf product. The missing features are handled by a mixture of the following:


  • Selling a product which is currently in development, and adding the features as requirements to that product.
  • Customizing a product after it has been developed. This is often done not by the people who developed the product, but by people at the customer site -- who are usually called "Professional Services".
  • Agreeing that a feature is not supported, or is supported only in a circuitous manner.


As a result of these constraints, enterprise software is the worst-written software in the computer industry. This is not because the developers are bozos (after all, I am such a developer -- and stop snickering in the back row). Nor is it because the enterprise software vendors are dishonest; customers know that they are always, to some extent, purchasing vaporware. Poor software quality is a result of the conditions I have mentioned, plus the fact that many enterprise products have had a short lifetime and are constantly being re-engineered on top of new technology.

Another factor that leads to poor quality is that most software developers have little or no intuition as to how some complex business function such as training management or supply chain management should behave. Software developers are pretty simple at heart; they are interested in programming languages, and development tools, and games. Intuition is not that useful anyway, given that the features one is developing are not a rational expression of what the program should contain, but rather the result of a tug-of-war between the customer, the salesmen, product marketing (who plans features for products), product management, and the developers.

Now Evan is a web site developer, and posts a lot about computer games. As I suggested above, it's a lot easier to know what a game should do in some particular situation than to know what, say, a contract management tool should do. Another huge difference between gaming and enterprise software can only be expressed through profanity: There is a fuck of a lot of money in computer games. The yearly sales of one title, my beloved Civilization, probably equal the revenue of all the competitors in learning management, my current employment venue. So games have better functionality, and higher quality, than other kinds of software.

In 1995 I briefly worked for a company that produced medical imaging software, which allowed doctors to view X-rays, CAT scans, etc. I remember taking a break with a co-worker at a mini-golf establishment. We played some video games, and I was struck by how much better the graphics in those games were than in our products -- products built to save people's lives, for God's sake!

There are some tools that improve developer productivity by allowing people to leverage the preexisting work of others. Unfortunately the scope of these tools is limited. The really successful developer tools include compilers, debuggers, standard libraries, relational databases, graphical libraries, and optimization engines. One reason that Java is so popular is that from the outset, Java included not just the core language, but also a set of libraries to manipulate data structures, access relational databases, etc. This standardization was completely absent in C++. (Java is a fine language for server-side development, but is rather lacking when used to develop UIs, hence Evan's criticism.)
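To make the "batteries included" point concrete, here is a minimal sketch of a JDK 1.4-era program that uses nothing but the standard library for both data structures and database access. (The connection URL and table name are made up; assume a JDBC driver for that URL is registered.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class StandardLibraryDemo {
        public static void main(String[] args) throws Exception {
            // Collections ship with the language (java.util)...
            List names = new ArrayList();

            // ...and so does database access (java.sql, a.k.a. JDBC).
            // In 2003-era C++, each of these means choosing among
            // incompatible third-party libraries.
            Connection conn =
                DriverManager.getConnection("jdbc:mydb://localhost/training");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT name FROM employee");
            while (rs.next()) {
                names.add(rs.getString("name"));
            }
            conn.close();
            System.out.println(names);
        }
    }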

Many other tools exist, but they are in a state of infancy. In my current project we use a third-party open-source object-relational package to handle "persistence" -- the software loads database data as Java objects, and if you change the objects, it makes corresponding changes in the database. Using this package saved us work, but it also created additional work of its own: we had to investigate and implement our own code to handle the strange cases that didn't map well to what the object-relational engine expected.
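For readers who haven't used such a package, here is a hand-rolled miniature of the pattern it automates, against a hypothetical COURSE table. A real object-relational engine generates this plumbing from a mapping description, tracks which fields are dirty, and handles relationships between objects -- so nobody has to write a load/save pair for each of hundreds of mapped classes.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MiniPersistence {
        // A plain Java object mapped to one row of the COURSE table.
        static class Course {
            long id;
            String title;
        }

        // "Load": materialize a database row as an object.
        static Course load(Connection conn, long id) throws Exception {
            PreparedStatement ps =
                conn.prepareStatement("SELECT title FROM course WHERE id = ?");
            ps.setLong(1, id);
            ResultSet rs = ps.executeQuery();
            rs.next();
            Course c = new Course();
            c.id = id;
            c.title = rs.getString("title");
            return c;
        }

        // "Save": push changes to the object back to its row.
        static void save(Connection conn, Course c) throws Exception {
            PreparedStatement ps =
                conn.prepareStatement("UPDATE course SET title = ? WHERE id = ?");
            ps.setString(1, c.title);
            ps.setLong(2, c.id);
            ps.executeUpdate();
        }

        public static void main(String[] args) throws Exception {
            // Connection URL is hypothetical, as above.
            Connection conn =
                DriverManager.getConnection("jdbc:mydb://localhost/training");
            Course c = load(conn, 42);              // row becomes object
            c.title = "Advanced Cache Management";  // mutate the object...
            save(conn, c);                          // ...and persist the change
            conn.close();
        }
    }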

This is not the fault of the object-relational engine developers; they did a pretty good job of placing callbacks and other hooks into their software for the weird stuff that we were doing. The problem was that the object-relational people had their mental map of how their software should be used, and we had our mental map of what their software was and what it was doing, and it took us several months of development time to reconcile the two.
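Those hooks worked roughly like the sketch below: a callback interface the application implements, so the engine consults your code when objects are loaded or stored. The interface and method names here are invented for illustration; the actual package has its own.

    public class HookSketch {
        // Hypothetical callback contract; a real engine defines its own.
        interface PersistenceInterceptor {
            // Invoked by the engine before writing an object;
            // returning false vetoes the save.
            boolean onSave(Object entity);
            // Invoked by the engine after materializing an object.
            void onLoad(Object entity);
        }

        // An application-supplied hook for the weird stuff: auditing,
        // legacy columns, computed fields, and so on.
        static class AuditingInterceptor implements PersistenceInterceptor {
            public boolean onSave(Object entity) {
                System.out.println("about to save: " + entity);
                return true;  // allow the save to proceed
            }
            public void onLoad(Object entity) {
                System.out.println("just loaded: " + entity);
            }
        }

        public static void main(String[] args) {
            // In real use the engine invokes the hook; we call it
            // directly here just to show the shape of the contract.
            PersistenceInterceptor hook = new AuditingInterceptor();
            hook.onLoad("Course#42");
            hook.onSave("Course#42");
        }
    }

The hooks themselves were fine; the expensive part was discovering which of our cases the engine's authors had anticipated and which they had not.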

Using the object-relational software probably increased our productivity by 25-50%. Now, it took 5-10 man-years to develop that package, and we developers will spend 1-2 man-years interacting with it -- the effort to build the tool is within an order of magnitude of the effort one team spends using it. As long as development tools cost the same order of magnitude of effort to create as is spent using them, they will never produce a 100- or 1000-fold productivity improvement.

