Why Programming is Hard to Fundamentally Improve

Aidan Cunniffe
Spare Thoughts By Aidan Cunniffe
8 min read · Oct 26, 2017


and why I’m still trying…

One of the curious things about being human is the ability to hold two contradictory views simultaneously. When I’m in analysis mode, I try to understand what market, technical and social forces have led to the status quo. To do this effectively, or at least to induce useful principles for later use, you have to believe that things are the way they are for good reasons. As soon as I put my innovator hat on, however, I get all jazzed up and think “screw that! X would be so much better if Y”. I’ve learned over the last year that balancing these two perspectives is essential for an inventor.

This week I watched Bret Victor’s The Future of Programming lecture from July 2013 and it compelled me to put these thoughts down on paper. During the lecture, Bret uses an overhead projector and pretends it’s 1973. He proudly presents the latest programming research of the time and explains why it would be really silly if we weren’t using those ideas 40 years later. To Victor’s credit he remains in character the whole time as he satirically paints the ideal world we’ll soon enter — one the audience knows doesn’t really exist.

The future Bret showed off included direct manipulation of the output (result) instead of the code, programming via constraints/goals, spatial (visual) programming paradigms, and massively parallel programming approaches. Declarative programming, functional programming, microservice architectures and WYSIWYG editors check some of those boxes, but not nearly at the level of their potential. All that being said, there’s no arguing that real progress has been made over the last 40 years; it’s just not the progress many expected.

So now I’ll put on my analysis hat and work through why the art of programming is so hard to advance beyond its current levels.

Switching Is Irrational

Many blame the lack of advancement on developers. We built shiny thing X, but developers are too arrogant, stubborn, busy, dismissive or all of the above to adopt it. I’ve even heard people accuse developers of selfishly protecting their future job prospects by trying to stifle the adoption of new programming mediums. Heck, I’ve said this before…

^ This is just wrong. In my experience developers are very rational beings. Their job is to find the most efficient way of solving problems every single day, and if you create a tool that provides that, they’ll be all over it. They’ll even volunteer their time to help you build it for free. If you move forward accepting that developers are mostly rational actors with good reasons for adopting or not adopting something, you can learn a lot to inform future designs.

So I did that. I actually sat down with people over the last 3 months and asked them why they failed to adopt a variety of tools. I overwhelmingly got rational explanations for why switching to something new would have been irrational.

Better Hammer Tradeoff

Imagine you’re building a log cabin by hand with a bad but usable hammer. When you’re 90% of the way finished, a magical genie comes and offers you a better hammer. Great! But there’s a tradeoff: if you take the new shiny hammer, you have to start the house from scratch…oh, and he turns humanity back to the Stone Age. You can get your new hammer, but you’ll have to do without bandsaws, your pickup truck, and yes, nails.

What’s the rational choice? Hint: it involves the old hammer.

That parable emerged from summarizing all the interviews I conducted and highlights the frustrations developers have had with many of the new ‘solve all your problems’ tools they’ve tried. For context, these tools fit into two categories: new programming languages (including some flow- and graph-based paradigms) and no-code visual builders.

There are two areas of value to consider when choosing a programming paradigm: the developer experience of the [language, tooling, GUI] and the strength of the ecosystem (what common problems have been solved already). Many of the new paradigms, while often built on sound principles, miss out on the massive body of work that already exists in language X. How do you weight these two areas when picking your paradigm? 30–70? 90–10? 1–99? Most people I spoke with rated the ecosystem as 2–3x more important than DX.

Tool-builders usually focus only on DX and [assume, hope, pray] that a community comes along and builds an ecosystem. This can still happen, but it was much, much more common in the 90s than it is today. There are network effects in programming, and a beautiful virtuous cycle quickly emerges around the most used tools. When a user shares functionality openly it makes the paradigm more capable, which attracts new users, who continue to improve that paradigm. Boom! Node module for everything.

When you look at the languages that have really caught on recently, they tend to share one thing in common: they tap into an existing body of work. Look at TypeScript, the most popular language released after 2000, which interoperates with JavaScript. Then there’s Swift, the second most popular language released after 2000. Just imagine where Swift would be today if it hadn’t been interoperable with Objective-C and the Cocoa legacy, or if TypeScript’s ecosystem had been a blue ocean from day one.
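To make the interop point concrete, here’s a minimal sketch (the legacy-math module name is hypothetical, a stand-in for any existing npm package): a thin declaration file is all TypeScript needs to call untyped JavaScript directly, so the entire legacy ecosystem comes along for free.

    // legacy-math.js: an existing, untyped JavaScript module
    // (hypothetical stand-in for any package already on npm):
    //   module.exports.add = function (a, b) { return a + b; };

    // legacy-math.d.ts: a thin declaration file that describes the
    // legacy module's shape to the TypeScript compiler.
    declare module "legacy-math" {
      export function add(a: number, b: number): number;
    }

    // app.ts: new TypeScript code calls the old JavaScript as-is;
    // nothing in the existing ecosystem has to be rewritten.
    import { add } from "legacy-math";
    console.log(add(2, 3)); // 5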

There are consequences for tool makers building an entirely new base abstraction, be it visual, text-based or otherwise. If you choose not to interoperate with an existing ecosystem, you or your community will need to spend years coming up from the Stone Age to modernity. Programming is just learning to use a bunch of stacked abstractions. Even if you have an objectively better base abstraction, one that would have clearly won out over everything else had it been introduced in 1992, people will have few incentives to adopt it if there’s no ecosystem.

Sunk Cost…Sensibility

Most people complete the phrase “Sunk cost _____” with “fallacy”. The classic economics thought experiment usually involves a couple staying at a concert they hate just because they paid face value for the tickets. An enlightened economist would, as the story goes, leave as soon as they became unhappy with the concert and reclaim a few precious hours. But if you had to pay $50k, $50M or $5B to leave the concert early, you’d probably stay. When the switching costs are that high, there’s good reason to stay and the largest employers of programmers in the world have enormous switching costs. This keeps mainstream programmers anchored to the status quo.

This is another completely rational choice developers make when sticking with the paradigms they know. The combined costs of hiring new people, training, rewriting the code, and the opportunity cost of choosing these actions over improving your product almost always outweigh the benefits of making the change. Joel Spolsky has a classic article about code rewrites that expands on this.

Young companies without much in the way of legacy, or those with no other choice, have the luxury of picking new paradigms, but few of these companies end up becoming incumbents. Those who do make it are the kingmakers. Both Facebook and Twitter were first built on technologies that were inadequate for their eventual scale (PHP and Ruby). Facebook became so attached to PHP that it enhanced the language to suit its needs with Hack and other tools. Twitter, meanwhile, switched over to Scala, which one could argue is a key reason for Scala’s mainstream success.

Solving the Least Important Problems

Tool makers must strike a balance between learnability and productivity. A visual programming environment like MIT’s Scratch is incredibly learnable. I’ve seen 4–5 year old kids build games with it after just a few hours. The drag/drop interface and shape-based constraints are really easy to learn and prevent Scratch from ever being in a broken state. This is great for kids and individuals trying to learn programming.

The same things that make Scratch easy to learn also make it an unproductive environment for more serious programmers. For a professional, programming with drag and drop is way slower than keying in code — that’s just a fact.

These tradeoffs must be considered whenever you build a new tool. If you build something super learnable that is not productive, you’ll get a lot of praise but little follow-through. If you build something that’s very productive but too difficult to learn, you’ll turn a lot of people off. To reach mainstream programmers you need to make something that is both learnable and productive.

I think most of the stalled innovations in programming focused disproportionately on learnability. The problem is, within a few weeks of using any paradigm, developers have usually built a repository of habits that keep them from making mistakes. For instance, if a new visual logic builder prides itself on preventing all syntax errors, that’s really cool, but most developers have learned to do that automatically.

Truth is, every tool you have ever used is flawed and every new tool will also be flawed. Humans subconsciously come up with ways to cope with their tools so they can keep themselves focused on the bigger conceptual issues. That isn’t going to change anytime soon. If professionals already have habits in place to cope with the things that new tooling makes easier or more learnable, those new tools do not have much value to them. It’s like offering a 40-year-old who’s been driving manual her whole life an automatic car — yeah that’s great, but it’s not needed and she certainly won’t pay a premium for it to be included.

Things Have Been Getting Better

No good analysis fails to include a few words from the Advocatus Diaboli.

Things have been getting better. The growth has just been in the ecosystems and not the paradigms themselves. Since the early days of programming we’ve been building an amazing tower of abstraction, one that provides really useful abstractions for things like:

  • Charging a credit card has gone from a 20-person team to processor.charge() (sketched after this list)
  • Configuring a massive cluster of servers is done with a short text file instead of an army of humans wandering through a data center
  • GUI tools are slowly merging design and UI development into a single discipline
  • …the list goes on
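To give a feel for how much that first bullet compresses, here is an illustrative sketch in TypeScript. The processor object and its charge() method are hypothetical stand-ins for a modern payment SDK, not a real API; decades of card networks, compliance and fraud detection hide behind that single call.

    // Illustrative only: "processor" and charge() are hypothetical
    // stand-ins for a modern payment SDK, not a real API.
    interface ChargeRequest {
      amountCents: number; // smallest currency unit, e.g. 1999 = $19.99
      currency: string;    // e.g. "usd"
      source: string;      // tokenized card reference, never raw card numbers
    }

    interface ChargeResult {
      id: string;
      status: "succeeded" | "failed";
    }

    const processor = {
      async charge(req: ChargeRequest): Promise<ChargeResult> {
        // A real SDK would call the payment processor's HTTPS API here.
        return { id: "ch_example", status: "succeeded" };
      },
    };

    async function main() {
      const result = await processor.charge({
        amountCents: 1999,
        currency: "usd",
        source: "tok_visa",
      });
      console.log(result.status); // "succeeded"
    }

    main();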

Resolving the Conflict

Now back to the conflict I mentioned earlier between seeing why things are the way they are and believing they can be changed. There are solid rational reasons why programming has not fundamentally changed over the last few decades. Any aspiring innovator who wants to change this needs to first understand why things are the way they are, and then use those insights to their advantage.

A fair criticism can be lodged in support of pure, unencumbered innovation. It can be argued that it’s wise to design like it’s day one and not worry about all these real-world constraints. That kind of thinking has not worked in programming because the total value of the entire programming ecosystem is stored mostly in the useful abstractions that have been built, not in the tools themselves. The same is true for purely conceptual fields like philosophy and mathematics. You can change the notation or the written language, but at the end of the day the ideas are what hold the real value. The legacy is the value; it can’t be thrown out.

About the Author: Aidan has been working on dev tools his whole career. His current project Optic (YC S18) is an OSS project for API versioning.
