Rewind: Object Oriented Programming and Agile
When I was first starting in development, object-oriented programming wasn't a given. We were working with non-object languages like FORTRAN and COBOL. We were dealing with C – before the ++ was added. In fact, for years object-oriented programming was treated with skepticism. I can vividly remember doing a full object architecture for an ecommerce site when the project manager/engagement manager came back to me freaking out that the performance was bad. By the way, bad meant some pages taking 5 seconds to load. Anyway, he was all up in arms that we were going to have a slow site and that it would never be responsive. The discussion ended with me upset – knowing that it was trivial to address the issues – and him insisting that I prove the code could perform. It took me about 2 hours to insert caching and list-object generation into the framework and reduce our page load times to sub-second. His skepticism was removed – and I turned off all the caching so that we could do the rest of our development and debugging.
The initial reports on object-oriented programming were that it was a better system. The initial research seemed to show that switching to an object-oriented approach improved developer productivity. However, the push-back on these studies was that the developers who were doing the object-oriented development were better developers. Given that Fred Brooks, in his classic work The Mythical Man-Month, noted that productivity can vary by as much as 10 times between two developers, the improvements attributed to object-oriented development could easily have been explained by better developers.
I should say that I don't believe that a language causes better development practices; rather, I believe that it encourages them. Back in 2004 I wrote an article "What does Object-Oriented Design Mean to You?" talking about how the language doesn't make you do the right things. However, it can encourage the right things. Kurt Lewin, a German-American psychologist, said that behavior is a function of both the person and the environment – that is, behavior can be explained by some interaction between the person and their environment. You can put a good developer in a bad situation and they may still succeed. You can put a bad developer in a good situation and they may succeed as well – and the reverse is true in both cases. So while I'm making the point that languages influence good or bad behavior, and later I'll make the point that our habits are formed by our behaviors, I should say that our goal is to encourage the right behaviors to develop the right habits.
A similar argument about improvement was made in the early days of agile development practices. It was 6 years ago when I was investigating how agile was supposed to be done. (I read and reviewed numerous classical software development books and agile development books: Agile and Iterative Development: A Manager's Guide, The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP, Peopleware, The Psychology of Computer Programming, Agile Software Development with Scrum, Agile Software Development, Dynamics of Software Development, and Software Craftsmanship: The New Imperative.) The arguments against agile fell into two basic categories. The first was that agile was just an excuse to not document things. Numerous teams wanted to try agile by following the steps without understanding the principles, ended up floundering, and put a bad taste in the mouths of large organizations. This led to the second argument – we don't believe it works. This was placed in opposition to research that seemed to show that it did work. Teams wrote more code, had fewer defects, and the customers were happier with the end solutions. However, the argument was that the best developers worked on these projects and therefore they would have done better whether or not the methodology was better.
Centralized, Decentralized, and Back
One of the subtle shifts that's been happening for a long time is the move of processing power from a centralized authority to the client and back again. We started with "dumb" terminals. Whether you were working in an IBM world and knew the numbers 3270 (mainframe terminal) and 5250 (minicomputer terminal), or you were looking at a terminal starting with VT (Unix, VAX, etc.), you were staring at a very simple device that did little more than relay keystrokes and receive character updates. The shift came when PCs started landing on users' desks. While they were being used to access the host computers, they started running applications locally – and those applications seemed more responsive and more in tune with the way that the user wanted to use the machine.
So we started to pull applications off the centralized processing platform onto the relatively immature PCs. We got ISAM databases like dBase and FoxPro. We started developing applications that were "trivial" from the point of view of the central processing host but invaluable from the user's point of view. As the PCs got connected to local networks, we started the client-server revolution. In this era we used the PCs to do the processing and interaction and had centralized data storage.
It wasn't long, however, before we discovered that some operations are best done right where the data sits. Enter databases like Btrieve and the start of SQL databases. Ultimately the pendulum swung back from the client to the server as we moved more and more logic to the server side for reliability, consistency, and stability. By the time the Internet was becoming popular, we had moved most of our development back to core servers. We had intelligent interaction on PCs, but all of the meaningful processing happened on the central server.
Internet browsers were created as a way to navigate information that was linked together. Over time users began demanding richer experiences from the web, and we started enhancing what we did on the client again. We developed plug-ins that operated like mini operating systems running in the browser. Flash is perhaps the most profound example of this. We moved back into a model with central storage and distributed processing.
Cross Platform and Cross Browser
The list of plug-ins that have tried to gain ubiquitous support and failed is long and distinguished, including Flash, Java, Silverlight, etc. Actually achieving the goal of ubiquitous availability is an impressive accomplishment indeed. As a development manager, it means that you're going to be able to build once – well, almost.
Language and Library Richness
The problem is that libraries and operating environments clash with one another from time to time, and because of this you find yourself working on getting code to work together. Consider a situation where two libraries require two different versions of jQuery: how do you ensure that each library gets the version of jQuery it needs?
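For what it's worth, the usual workaround – assuming each library can be handed a jQuery reference explicitly – is jQuery's own noConflict mechanism. A minimal sketch, with version numbers and names that are only illustrative:

```javascript
// Sketch only: assumes the page has just loaded jQuery 1.x via a <script> tag.
var jq112 = jQuery.noConflict(true); // take the 1.x object and restore any prior $/jQuery globals

// ...then the page loads jQuery 3.x via a second <script> tag, after which:
var jq36 = jQuery.noConflict(true);  // take the 3.x object the same way

// Each library is then handed the version it was written against, e.g. by
// calling its init function with the right jQuery object (hypothetical names):
// legacyWidget.init(jq112);
// modernWidget.init(jq36);
console.log(jq112.fn.jquery, jq36.fn.jquery); // e.g. "1.12.4" "3.6.0"
```

It works, but it only works when every library on the page cooperates – which is exactly the kind of glue work the paragraph above is describing.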
I remember writing code in C and C++ eons ago. I remember that my development environment was a glorified text editor. I remember needing to put print statements in my code to see what was going on – and that the idea of interactive debugging was a dream. Today, because of strongly typed languages and better development environments, we've learned to rely on our ability to interactively debug and, more importantly, we've learned to rely on the feature that Visual Studio calls Intellisense.
Hit a period and you're going to get a list of things that you can do with that class. You don't have to remember the method name or property name. You can browse the list of items and simply select the one you need. From a cognitive load/instruction perspective this dramatically reduces the amount of mental processing required and makes it both easier and quicker to work on code. (You can look at my article for TechRepublic, "Squeeze more out of the construction phase of application development," for a historical perspective on improving developer productivity.) In my book reviews for Efficiency in Learning and The Adult Learner, I speak more about the impact of cognitive load on learning – the same applies to development. The higher the load, the lower the developer productivity.
A quick sidebar: the primary constraint in developing code is the amount of work the brain is doing. Developers are running mental simulations of how the code will run; they're tweaking the code in their heads before they type it in. (See Sources of Power for more on mental models.) The in-head compiler doesn't care about method names; it cares about operations and actions. Having to remember the exact method name is overhead that distracts the developer from their primary job of making the logic work. In this way our new development environments have made development quicker – but that requires that the development environment know what options there are. When you're dynamically putting in your own name/value pairs, the environment may not know what is and isn't expected.
The Intellisense functionality is really an extension of compile-time checking – that is, a static analysis of the code to determine whether there will be a problem. This isn't possible in a truly dynamic language. As a result, not only do you not get Intellisense, you don't get the protection of the compiler telling you that you're making a mistake. We know that the sooner defects are discovered, the less costly they are to resolve. If a defect is caught by Intellisense, the cost is really small. At compile time it's larger but still small. When you discover the problem in a developer's debugging session, it's getting large enough to be called something other than small. And if you have a problem that only occurs sometimes and is data/execution-path dependent, it becomes expensive to locate. That means more development costs.
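To illustrate why those late, data-dependent defects are the expensive ones, here's a small hypothetical example (the function and field names are mine): a typo that Intellisense or a compiler would flag immediately, but that a dynamic language only reveals at runtime – and only on the path that happens to hit it.

```javascript
// Hypothetical sketch: the property name is misspelled, but nothing complains until the code runs.
function applyDiscount(order) {
  if (order.total > 100) {
    // Typo: should be order.discount. There's no compile error; the expression
    // simply evaluates to undefined and NaN silently propagates into the result.
    return order.total - order.discuont;
  }
  return order.total;
}

console.log(applyDiscount({ total: 50, discount: 5 }));  // 50  -- this path hides the bug
console.log(applyDiscount({ total: 150, discount: 5 })); // NaN -- only this data exposes it
```

A statically checked language – or an editor that knows the type – would have flagged the misspelled member before the code ever ran.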
Clear Text Code
Having source code for someone else's library may seem like a great thing in terms of helping developer productivity. However, from the standpoint of a corporation building its business on the competitive advantage of its code – whether through better operations or through the sale of licenses to use the code – the idea that your code is just hanging out there for all to see is very concerning, and potentially a barrier to development.
In my practice as a software developer, the standard is that the client owns the intellectual property I develop; in other words, the code is theirs and they're not going to share it. In some cases I've had clients who have allowed me to share this code with other clients – but only in cases where the second client didn't compete with the one who had paid for the work.
While these challenges are not insurmountable, they’re problems for which there aren’t great – generically applicable – answers.
Making the Bad Easy and the Good Hard
If we accept that bad behavior is a function of the environment, then why does it matter? Well, it matters because we're establishing "norms" about how development should be done, and establishing bad "norms" will create bad habits – and habits are hard to break. So if you're used to thinking about neither how you're structuring your data nor the scope of your variables, you won't give these things the appropriate amount of thought – even when you're working in another language.
I remember one developer I managed years ago. After numerous rounds of telling him to use 'Option Explicit' – because I had found yet another simple typo in his code that created a problem – I eventually had to tell him that if he called me over to debug his code again without Option Explicit at the top, I'd have to fire him on the spot. This is not one of the brighter spots of my management career, but I got his attention.
The point here is that we establish what the norms are – and pattern our thinking based on what we expect out of a language. So if we expect all variables to be dynamically declared at a global scope, we'll design that way – despite the research that says this is a bad idea.
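JavaScript has a close analog of that Option Explicit story (my comparison, not part of the original anecdote): without strict mode, a typo in an assignment silently creates a brand-new global variable rather than failing.

```javascript
// Hypothetical sketch of how a permissive default turns a typo into a silent global.
function calculateTotal(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    totl = total + items[i]; // typo: creates a global 'totl' instead of updating 'total'
  }
  return total;              // always 0, and nothing ever complains
}

console.log(calculateTotal([1, 2, 3])); // 0

// Adding "use strict"; at the top of the file or function makes the same typo
// throw a ReferenceError the first time the line runs -- roughly the JavaScript
// equivalent of insisting on Option Explicit.
```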
You'll note that here I'm blaming the system, not the developer. Nothing is accomplished by saying that they're bad developers. Fundamentally, I believe they have the potential to be good developers if not encumbered by the language or system. I mentioned briefly in my review of Diffusion of Innovation that blaming people means you can't solve the problem – blaming the system means you have something you can fix.
Bad Behaviors Lead to Bad Habits
If we take the developer out of that language and put them into a language that requires strongly typed variables and offers them local scope, won't they just change because the environment changed? Yes – and no. In the clear-cut cases, sure, they'll default to the specific knowledge they have. But in the less clear-cut cases, those who are used to – and accept – global variables will leverage them more often.
Take, for instance, the case of a global logging class. It's used just to log things in case there's an error. In 2002 I wrote an article, "Focus on Functions," for Developer.com. In the article I talk about method (function) signatures. My strong bias is to pass in everything a method needs to function, with the smallest objects possible, and to make the methods static. I do this to minimize unintended consequences. (It drives my testing friends nuts because static methods are hard to test.) So I'm more likely to include a logging class as a parameter to my methods than someone who's comfortable with a global scope.
I purposefully provided a case where the right answer may be a globally scoped variable (or, more appropriately, a property, so it can self-initialize). However, what about passing around a SQL connection – or not? There's greater testability when you can easily and consistently establish context.
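To make the trade-off concrete, here's a minimal sketch of the two styles – a global logger versus passing in what the function needs – with names that are purely illustrative.

```javascript
// Style 1: reach out to a global -- convenient, but the dependency is invisible
// in the signature and hard to replace in a test.
var logger = { log: function (msg) { console.log(msg); } };

function chargeCustomerGlobal(order) {
  logger.log("charging " + order.id);
  return order.total;
}

// Style 2: pass in everything the function needs -- the dependency is explicit,
// so a test can establish context by handing in a fake logger (or a fake connection).
function chargeCustomer(order, log) {
  log("charging " + order.id);
  return order.total;
}

// In a test, the explicit version is easy to observe without touching any globals.
var messages = [];
chargeCustomer({ id: 42, total: 10 }, function (msg) { messages.push(msg); });
console.log(messages); // ["charging 42"]
```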
The bad behavior that was learned through no fault of the developer will continue on into new languages simply because it has become the “norm” for how they develop.
An Argument against Being an Ostrich