What We (Developers) Take For Granted
A few weeks ago, I was having a conversation over dinner with a few fellow local development community members. As it always does, the conversation eventually shifted to work, and we began discussing the implications of “the cloud” and its gradual redefinition of how we look at provisioning compute capacity.
Before I dive into the crux of this post, allow me to provide a little background.
For most of my professional career, adding physical compute capacity has involved long, bureaucratic corporate processes that frustrate everyone involved, from the assembly-line developer to the CTO. Assuming that a request for additional capacity is even approved, it can typically take four to six weeks for the resources to actually be ready to use. This was the reality, and it is still the reality at most large companies. Beyond the headaches of adding compute capacity, organizations typically have to maintain enough capacity to meet the magic “peak usage” number, even if that means 50% of their data center sits idle 90% of the time. On top of this, it was, and remains, nearly impossible for companies to accurately predict what their “peak usage” actually is.
Over the last decade, there has been a groundswell of support across the globe for “Agile” development methodologies and practices. While I won’t go into the details of these principles here, I will share with you my favorite definition of Agile, from Dan Rawsthorne, PhD: “the ability to react appropriately to reality.” Of course, there are hundreds of guidelines, practices and methodologies designed to help organizations reach this lofty goal, but, at the end of the day, agility really is this simple. By this beautifully succinct definition, the traditional model of predicting the need for and provisioning compute capacity is anything but agile.
At the root of this problem is the fact that computing resources have traditionally been “products.” Products cost money up front. Products have to be justified. Products depreciate in value over time.
Enter “the cloud.” With the cloud, compute capacity is exposed as a service, either through a public cloud provider such as Amazon Web Services (AWS) or Microsoft Azure, or through a private cloud hosted within your own on-premises data center and exposed in the same way. Theoretically, the advent of the cloud tears down the walls that have plagued IT organizations for years around provisioning compute capacity. Now, provisioning 10 identical servers is a matter of a few clicks or, even more impressively, a small shell script. Removing those servers is equally simple. Compute capacity is now a service and, more interestingly, a commodity to be bought and sold on open public compute capacity “markets.” With this new model, compute capacity no longer needs to be thought of as a product, because it can be consumed as a service.
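To make the “small shell script” point concrete, here is a minimal sketch using the AWS CLI. The AMI ID, instance type and `demo-fleet` tag are placeholders chosen for illustration, not values from any real deployment:

```shell
# Hypothetical sketch: provision 10 identical servers with one command.
# The AMI ID below is a placeholder; substitute a real image for your region.
aws ec2 run-instances \
    --image-id ami-0abcdef1234567890 \
    --instance-type t3.micro \
    --count 10 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=demo-fleet}]'

# Tearing the fleet back down is equally scriptable: look up the tagged
# instance IDs and pass them to terminate-instances.
aws ec2 terminate-instances \
    --instance-ids $(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=demo-fleet" \
        --query "Reservations[].Instances[].InstanceId" \
        --output text)
```

Running this requires configured AWS credentials and incurs real charges; the point is only that both directions, provisioning and removal, collapse into a few lines of script.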
Returning to our rather casual dinner conversation, we started discussing what developers new to the professional development market “take for granted” that developers in the past had to deal with. I finished college in 2001 and, for most of my college career, studied languages such as Visual Basic 6 and, near the end of my tenure, the new and exciting Microsoft .NET platform. While I graduated with an academic understanding of memory management and the pains that previous generations of developers had to endure, I personally always took, and continue to take, automatic memory management (garbage collection in .NET) for granted. While I now understand how .NET handles garbage collection, it is mostly a fleeting thought rather than a design consideration when developing software. I don’t think that this is a bad thing; I think it is a generational thing. The reality is that unless you were working alongside Grace Hopper in the early 1950s building the first simple software compilers, then, more than likely, there have been advancements in development technology that you take for granted as well.
So, the obvious question is: what will developers who are entering the workforce today take for granted? How about in 10 years? In 15? The reality is that the face of software development tends to change completely once every five years, so these questions are nearly impossible to answer.
I will, however, make one prediction that, by this point in the post, should be obvious to most readers: within five years at most, the vast majority of “young” developers will take compute capacity for granted.
I predict that compute capacity will be analogous to the hot and cold running water taps in your home as far as ubiquity and control are concerned. In that same sense, compute capacity will generally be considered a utility, in the same way that we view our electricity, gas and water. Think about it for a moment. You pay for electric service. You pay for water service. You pay for natural gas service. While you do indirectly pay for the infrastructure that delivers electricity and water to your home, you don’t personally purchase the physical piping and other infrastructure that makes modern utility grids possible. This is exactly the case with the “Infrastructure as a Service,” or IaaS, model that modern public cloud services provide.
To fully summarize my point, consider the following analogy: physically pumping water out of a well in your own backyard is to being connected to a municipal water grid as purchasing and maintaining your own server hardware is to provisioning compute capacity in the public cloud.
I think I’m only scratching the surface here as to what developers will “take for granted” in five years. What are your predictions? As a developer, what do you take for granted today?
(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)