
Book Review-Brand is a Four Letter Word

On one hand, I don’t expect my audience to be trying to brand a company, and in that sense I expect that some of my reading on marketing is uninteresting. On the other hand, I realize that many of my readers are trying to brand intranets or are working with clients that are branding intranets. Not having a solid understanding of marketing can be frustrating, because the marketing department will try to convince you that something is essential – when there’s no research to support that position. This has led more than one of my colleagues to exclaim four-letter words while working on branding, and that’s why I believe the title of Austin McGhie’s book Brand is a Four Letter Word is so appropriate.

I should say that most of the time when the word brand comes up in the SharePoint context it’s followed by “ing”. We tend to think of branding the portal. However, as McGhie points out, the brand is the prize. It’s the reward that the market (or your market of employees) offers up to you once you’ve positioned yourself. Marketing, he claims, is the art of positioning. Brand is the badge that states that your market places a higher value on you than the utility you provide.

Those are strong words. Branding has been confusing to me because I couldn’t quantify what it was – and wasn’t. As I’ve been working on the branding for the SharePoint Shepherd’s products I’ve been playing with web site designs. I’ve been trying to define the visual interface of the web site, but my greatest struggle has been clarifying my thoughts around what the brand should be about – or rather how the products should be positioned in the market. McGhie defines the real work as “defining, clarifying, targeting, capturing and holding your position.” In other words, the problem isn’t the web site (well, it might be partially the web site); it’s the lack of a clear understanding of what the positioning is.

Positioning

McGhie spends quite a bit of time talking about ways to think about positioning including how to understand your market both through research and through the development of insights. Along the way he knocks down the idea that you have to be “out of the box” to be successful – including sharing the perspective that any idiot can be different – the trick is being different in a way that’s relevant to the audience.

Positioning is designed to make the unique benefits of your product or solution stand out so profoundly that no one can miss them. Consider those folks who are in the unfortunate position of job hunting. They approach their friends saying something like “I don’t know what kind of job I want.” As a result the friend has no way to help, because they don’t know how to connect the job seeker with something that might be interesting to them.

The end goal of positioning is that the market reward you by calling your positioning a brand.

Shorthand

Branding is simply shorthand. It’s your message encapsulated into a neat package that the consumer understands. If you read Starbucks, you think coffee. If you read McDonalds, you’re going to think about inexpensive fast food and golden arches. So doing the positioning for your brand is all about shaping the result you want that shorthand to deliver.

The ultimate shorthand is when you become the stand-in for the product category that you’re in. Facial tissue is called Kleenex based on the strength of the brand. That’s serious shorthand.

Attention and Persuasion

There was a time when marketing meant persuading your market to take an action. You had to convince your market that you were the right choice when standing in comparison to others. However, the problem is no longer a problem of persuasion – of comparison. We’re now so saturated with information that the key isn’t persuasion, it’s attention. We’re constantly under assault by advertising, by too many options, and by an overwhelming ocean of information. If you want to build a brand in today’s world you have to get attention.

You have to break through the filters that every consumer has created to keep their mind from becoming hopelessly overwhelmed by all of the information that is available to them every moment. That isn’t to say that there’s no distance between someone becoming aware and them taking action. The book Fascinate told us that you first have to get attention, then be remembered, and finally be acted on. Similarly, the book Diffusion of Innovation talks about the difference between knowledge, attitudes, and practice. That is, while our focus needs to be on attention rather than persuasion, we cannot forget that attention alone isn’t enough – it’s just the scarcest resource in today’s economy.

Karate and Judo

Over the years I’ve had the opportunity to watch markets evolve. I can remember the players and their challenges early in the enterprise search game. It generally wasn’t hard to sell an expensive solution to someone who understood the value – but when the same companies who were commanding high dollars for their solutions tried to broaden their market they quickly faced issues. They got mired in the process of educating the market not about what they do – but about what the market is and why it’s important. Once a person understood what search was and what was possible, selling the actual product was easy.

McGhie makes the point that you don’t want to have a karate fight with your market. Karate is meeting force-with-force. Instead you should be fighting with Judo, which focuses on redirecting the existing force. You’re not going to win an all-out battle with your market by using your force to fight the force of the market. You’ll win by directing the market forces in the direction you want.

U is for Utility

We believe that today people are more me-focused than ever. They’ll step over a dead body to get a free trinket. Speaking as someone who has done a free book signing at a conference, I can tell you there’s a certain pull to the idea that something is free. However, there’s also the other side of the coin: what does it do for me? Speaking for myself, I’m encouraged to upgrade to the latest gadget. I’m certain I’ll be encouraged to purchase a Windows Phone 8 device. However, just as I did with the Windows Phone 7 devices, I’ll probably apply very little effort to getting one. Why?

Well, it’s primarily a question of differential value. What will the new phone do for me that the iPhone I carry now doesn’t do? It’s got a cool new interface (which I’ll have to learn) and it might do some things better than the iPhone, but at the end of the day what new utility will it provide? Speaking as someone who wrote a book on Windows Phone 6 devices (Mobilize Yourself!: The Microsoft Guide to Mobile Technology), I’ll tell you that the core features I need – making/receiving calls, sending/receiving text messages, and sending/receiving emails – have all been solved for some time now.

Consumers are more focused on the utility than they have been in the past. What will it do for me? Am I willing to pay the price for that utility?

Disrupt or Be Disrupted

The book Demand quoted Jeff Bezos as saying “If anyone is going to destroy our online shopping business model, it’s going to be us.” Brand is a Four Letter Word encourages you to realize that if you’re not acting disruptively – it’s likely that disruption will be visited upon you by a competitor.

The market is going to change – because that is what markets do. You can choose to be in control of – or at least in the lead of – these changes, or you can choose to react to them once they’ve already happened. If you fear disruption, if you’re unwilling to try new things because you’re fearful about how they will impact your business, then all you need to do is wait, because disruption will be visited upon you – whether you’re in the package delivery industry and FedEx comes along and makes shipment tracking easier, you’re in the book shop business and Amazon comes along with an online model, or you’re in the watch business and you’re realizing people use their cell phones for the time now.

Designing for a Core Customer and Taking Your Target Market

When motorcycles became associated with unsavory characters there were numerous companies that tried to distance themselves from the gruff and tough image that had become coupled to biking culture. However, one organization – Harley-Davidson – embraced the image. They focused on the man who was trying to live out his rough-and-tough persona. As a result, they’ve been very successful at selling their motorcycles, but more importantly they’ve created a following and a culture. They’ve identified that their core customer is a rough and tough person – realizing that by having this as the core customer they’ll take a much larger target market.

McGhie uses the example of Nike and their focus on their core customer – knowing that they’ll be successful by taking the larger target market that wants to identify themselves with the core customer.

Move the Undecided

The final point I want to elevate from Brand is a Four Letter Word is the concept that you’re not trying to move the people who have firmly decided for – or against – your product. You’re trying to move those folks who haven’t made a firm decision on where to go. You’re trying to move the undecided. You should spend little time convincing those who have already decided to purchase your solution. You also shouldn’t fret about those who’ve already decided to do something else.

If you’re frustrated by the marketing and branding process, you may want to read Brand is a Four Letter Word for yourself.

Article: Size and Scale of SharePoint 2013 in the Desert

At the Microsoft SharePoint Conference 2012 being held in the desert in Las Vegas, NV, there’s a lot of talk about how SharePoint has grown up. At 11 years (77 in dog years), SharePoint’s grown older and wiser. Instead of being seen as a departmental solution to collaboration needs, it’s an enterprise-scale platform for creating content solutions.

Looks like Rain

New in SharePoint 2013 are a host of features that make multi-tenancy easier and push control further toward the users. The ability to configure search settings at a site level makes it easier for organizations to allow departments to customize their own search experiences. This is just one example of how SharePoint is making it easier for clients who are using shared hosting to customize their experience.

Microsoft’s own Office 365 environment will create a real option for organizations to do their SharePoint collaboration in the cloud.  It’s a big bet that organizations are willing – and able – to make the jump to the cloud for at least some of their collaboration needs.

Read more…

Secret SWAG

Next week at the Microsoft SharePoint Conference in Las Vegas, I’m starting a new giveaway – and I want you to be a part of it. I had a custom coin made. The coin shows Thor Projects on one side, and The SharePoint Shepherd on the other – more or less the two different sides of my world and personality. It looks like this:

This is intended to be a giveaway for folks who interact with me during presentations. If you ask a question, I want to give you one. I won’t explain what it is in the session. I plan to hand it over and keep moving on. Clearly, as I’ve said, it’s a coin. I found myself flipping a coin like this over and over in my hands on calls and while I was bored, and I decided that you should be able to get one as well.

Because you’re reading my blog, I want to extend a special offer to you (and to your friends you want to disclose the offer to) – if you come up to me and hand me a business card, I’ll hand you one of the coins. I’m only going to do this at the SharePoint Conference. After the conference you’ll have to interact during a presentation to get one – here’s why.

Anyone who has one of the coins can mail it back to me for a $25 discount off of any of the DVDs that we sell. That’s right, $25. Why would I do this? Well, I value people interacting in my sessions more than you know. It makes the presentations fun for me – and everyone else. I also value you because you are reading my blog and keeping connected. By the way, if you’re not signed up for the SharePoint Shepherd newsletter – you should do that. We offer some sort of a special offer every month through that list. I’ll make an even more impressive deal to that list in a month or two – so you’ll want to make sure you get one.

One other detail: coins are heavy. I’m only bringing a few with me to the conference, so you’ll want to hit me up early to get yours. My session – “Using TypeScript to Build apps for Office and SharePoint” – is Wednesday morning at 10:30. I follow that up with an AvePoint theatre presentation at 12:45 and then two book signings. I don’t expect that I’ll have any coins left by the time I’m done, so look for the Shepherd’s staff earlier in the week and get yours.

Thanks for your continued support

-Rob

Why JavaScript Makes Bad Developers

I’ve bumped into JavaScript from time to time throughout my professional career, and up until recently I’ve been able to keep a relatively safe distance from it. I always had a developer who I could assign to do the JavaScript stuff. It was generally a relatively small amount of the overall framework and it was easy enough to assign to a junior developer to work on. However, more recently it’s become a more “real” part of the overall development experience. Therefore, I felt like it finally became something that I had to get neck deep into. What I’ve found is that it’s changed a lot over the years and it’s picked up a lot of scars.

My goal for this post is to help development managers understand what they’re asking their developers to do, to talk about how the techniques for writing JavaScript teach bad habits, and to address how history is repeating itself.

Rewind: Object Oriented Programming and Agile

When I was first starting development, object oriented programming wasn’t an assumption. We were working with non-object languages like FORTRAN and COBOL. We were dealing with C – before the ++ was added. In fact, for years object oriented programming was treated with skepticism. I can vividly remember doing a full object architecture for an ecommerce site when the project manager/engagement manager came back to me freaking out that the performance was bad. By the way, bad was some pages taking 5 seconds to load. Anyway, he was all up in arms that we were going to have a slow site and that it would never be responsive. The discussion ended with me upset – knowing that it was trivial to address the issues – and him insisting that I prove that the code could perform. It took me about 2 hours to insert the caching and list-object generation into the framework and to reduce our page load times to sub-second. His skepticism was removed – and I turned off all the caching so that we could do the rest of our development and debugging.

The initial reports on object oriented programming were that it was a better system. The initial research seemed to show that switching to an object oriented approach improved developer productivity. However, the push-back on these studies was that the developers who were doing the object oriented development were better developers. Given that Fred Brooks, in his classic work The Mythical Man-Month, noted that productivity can vary by up to 10 times between two developers, the improvements in object oriented development could easily have been explained by better developers.

I should say that I don’t believe that a language causes better development practices; rather, I believe that it encourages them. Back in 2004 I wrote an article, “What does Object-Oriented Design Mean to You?”, talking about how the language doesn’t make you do the right things. However, it can encourage the right things. Kurt Lewin, a German-American psychologist, said that behavior is a function of both the person and the environment. That is, behavior can be explained by some interaction between the person and their environment. You can put a good developer in a bad situation and they may still succeed. You can put a bad developer in a good situation and they may succeed too – and the reverse of both is also true. So while I’m making the point that languages influence good or bad behavior, and later I’ll make the point that our habits are formed by our behaviors, I should say that our goal is to encourage the right behaviors to develop the right habits.

A similar argument about improvement was made in the early days of agile development practices. It was 6 years ago when I was investigating how agile was supposed to be done. (I read and reviewed numerous classical software development books and agile development books: Agile and Iterative Development: A Manager’s Guide, The Rational Unified Process Made Easy: A Practitioner’s Guide to the RUP, Peopleware, The Psychology of Computer Programming, Agile Software Development with Scrum, Agile Software Development, Dynamics of Software Development, and Software Craftsmanship: The New Imperative.) The arguments against agile fell into two basic categories. The first was that agile was just an excuse to not document things. Numerous teams wanted to try agile by following the steps without understanding the principles and did end up floundering and putting a bad taste in the mouths of large organizations. This led to the second argument – we don’t believe it works. This was placed in opposition to research that seemed to show that it did work. Teams wrote more code, had fewer defects, and the customers were happier with the end solutions. However, the argument was that the best developers worked on these projects and therefore they would have done better whether or not the methodology was better.

Let me flip this argument around and say that I’m convinced that there are some truly gifted developers who will be able to make any technology, methodology, or approach work. I believe that it’s absolutely possible for development in JavaScript to be done well by a select few developers. I’m just not sure that these magical developers are plentiful – nor do I believe they’re necessarily in your organization. (Present company excluded, of course.)

So by drawing parallels to object oriented programming and agile methodologies, I’m not saying that JavaScript is a better approach. In fact, I think you’ll find by the end of the post that I believe it’s not a step forward but rather a fairly large step backwards. I’m simply illuminating that I’m aware of history and that change is necessary to make improvements – however, it has to be the right change. I know where we’ve been in this craft we call software development, and I believe I know a step backwards when I see it.

Centralized, Decentralized, and Back

One of the subtle shifts that’s been happening for a long time is the move of processing power from a centralized authority to the client and back again. We started with “dumb” terminals. Whether you were working in an IBM world and knew the numbers 3270 (mainframe terminal) and 5250 (mini-computer terminal), or whether you were looking at a terminal starting with VT (Unix, VAX, etc.), you were staring at a very simple device that did little more than relay keystrokes and receive character updates. The shift came when PCs started landing on users’ desks. While they were being used to access the host computers, they started running applications locally – and those applications seemed more responsive and more in tune with the way that the user wanted to use the machine.

So we started to pull applications off the centralized processing platform on to the relatively immature PCs. We got ISAM databases like dBase and FoxPro. We started developing applications that were “trivial” from the point of view of the central processing host but invaluable from the user’s point of view. As the PCs got connected to local networks we started the client-server revolution. In this era we used the PCs to do the processing and interaction and had centralized data storage.

It wasn’t long, however, before we discovered that some operations are best done right where the data sits. Enter databases like Btrieve and the start of SQL databases. Ultimately the pendulum swung back from the client to the server as we moved more and more logic back to the server side for reliability, consistency, and stability. By the time the Internet was becoming popular we had moved most of our development back to core servers. We had intelligent interaction on PCs but all of the meaningful processing happened on the central server.

Internet browsers were created as a way to navigate information that was linked together. Over time users began demanding richer experiences from the web, and we started enhancing what we did on the client again. We developed plug-ins that operated like mini operating systems running in the browser. Flash is perhaps the most profound example of this. We moved back into a model with central storage and distributed processing.

For better or worse we oscillate between centralized processing and distributed processing. It’s the latest move – doing more processing on the client side without the need for specialized plugins – that has driven the demand for JavaScript. However, I’m ahead of myself. We have to talk about what JavaScript even is.

What is JavaScript?

JavaScript is a scripting language which was designed to be used for lightweight applications and scripting. It was designed as an alternative to the strongly typed, compiled, and “heavy” Java language. That was more than 15 years ago. Since then it’s evolved into a tool that folks are using to solve enterprise-class problems. As a language, JavaScript is supported in nearly every browser on the planet. It’s a scripting language that is dynamic. That is, JavaScript doesn’t rely on well-known types. In fact, at least one of the primitive types that we expect in languages – the integer – is completely missing from JavaScript.

Really, JavaScript is all about collections. A collection may be an “object”, meaning that it contains name-value pairs, with the name being what people would consider to be a member of a class. The trick here is that the value portion of this equation can be anything – including a function. In this way, you can have a collection of name-value pairs that act like we expect a regular strongly typed object to behave. Arrays, the other type of collection, are simply collections of nameless values. The final thing to realize is that a value may be another object – and therefore you can have deeply nested hierarchies of objects.

Thinking of JavaScript as just collections may seem like an oversimplification – but it’s a really helpful simplification, because you realize that you can dynamically add and remove items from a collection at any time, and thus it’s pretty different from a traditional typed language.
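
To make that concrete, here’s a quick sketch (the names are made up for illustration) of what “everything is a collection” looks like in practice:

```javascript
// An "object" is just a collection of name-value pairs.
// Values can be simple data, other objects, or even functions.
var shepherd = {
    name: "SharePoint Shepherd",   // a simple value
    tags: ["guidance", "tasks"],   // an array: a collection of unnamed values
    publisher: {                   // a nested object
        city: "Carmel",
        state: "IN"
    },
    describe: function () {        // a function stored as a value
        return this.name + " (" + this.tags.length + " tags)";
    }
};

// Members can be added or removed at any time.
shepherd.edition = 2013;           // add a new name-value pair on the fly
delete shepherd.publisher;         // remove one just as easily

console.log(shepherd.describe());  // "SharePoint Shepherd (2 tags)"
```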

Why JavaScript is Good

There are some really great things about JavaScript. Because there isn’t a rigid type system, you can do things in JavaScript that are difficult to do in a strongly typed language. You can slip an extra member into an object and use it deep inside the bowels of the code. You don’t have to refactor your methods from the top of the application all the way to the bottom to accommodate additional information – or even recompile. Conversely, without a definition of what you’re passing around, it’s difficult to know if you’re passing in the right things. Let’s take a look at a few of the things that make JavaScript a useful language.
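
Here’s a small, hypothetical sketch of that flexibility – an extra member gets attached at the top of a call chain and used at the bottom without touching any of the signatures in between:

```javascript
// Top of the application: attach an extra piece of information.
function handleRequest(order) {
    order.traceId = "abc-123";   // slipped in without changing any method signatures
    processOrder(order);
}

// Middle layers just pass the object along, unchanged.
function processOrder(order) {
    saveOrder(order);
}

// Deep in the bowels of the code, the extra member is simply used.
function saveOrder(order) {
    console.log("Saving order " + order.id + " trace=" + order.traceId);
}

handleRequest({ id: 42 });
```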

Cross Platform and Cross Browser

Perhaps the chief benefit of JavaScript is that it truly runs anywhere. Sure, Java — to which JavaScript has effectively no relation — claims that it runs everywhere but JavaScript actually does. It’s hard to find a web browser or operating system that doesn’t have a JavaScript interpreter. The interpreters have been getting better and better over the last 10-15 years. The result is that we have good interpreters of the language that are fast and mostly reliable.

The list of plugins that have tried to gain ubiquitous support and have failed is long and distinguished, including Flash, Java, Silverlight, etc. Having achieved the goal of ubiquitous availability is an impressive accomplishment indeed. As a development manager it means that you’re going to be able to build once – well, almost.

The unfortunate problem is that, much like with the HTML standards, browser teams interpreted JavaScript slightly differently. They added access to browser-specific features through the browser’s object model – features that would be supported on one browser or platform and not on another. As a result, time was spent detecting which browser you were in to determine the browser-specific way to approach the problem.
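
The classic example from that era was creating an AJAX request object differently depending on the browser – a rough sketch of the kind of branching code that was required:

```javascript
// Old-school detection code from before the libraries hid it for us.
function createXhr() {
    if (window.XMLHttpRequest) {
        // Standards-based browsers (and newer versions of Internet Explorer)
        return new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        // Older Internet Explorer exposed AJAX through ActiveX instead
        return new ActiveXObject("Microsoft.XMLHTTP");
    }
    throw new Error("AJAX is not supported in this browser");
}
```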

Libraries

The good news is that a set of JavaScript libraries began to emerge that allow the developer to ignore the browser-specific quirks. Chief among these libraries is jQuery. jQuery abstracted away many of the differences and simplified many of the common – but complicated – things that developers needed to do. jQuery handles everything from selecting items in the document object model (DOM) to making AJAX calls. All in all, it’s libraries like jQuery that dramatically improve the ability to develop with JavaScript.
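
As a rough illustration of what that abstraction buys you (the endpoint and element IDs here are hypothetical), the same kind of cross-browser AJAX and DOM work collapses to a few lines of jQuery:

```javascript
// jQuery handles the browser differences for DOM selection, events, and AJAX.
$(document).ready(function () {
    $("#refreshButton").click(function () {
        $.ajax({
            url: "/api/status",          // hypothetical endpoint
            dataType: "json"
        }).done(function (data) {
            $("#statusMessage").text(data.message).addClass("updated");
        });
    });
});
```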

With libraries for nearly everything that you might want to do, JavaScript’s functionality has been greatly enhanced. The problem is that you have to find, download, and learn these libraries to be effective with them. It means gathering up bits and pieces from different sources and trying to patch them together into a meaningful quilt of code. Sometimes that works well and other times not so well.

Why JavaScript is Bad

Ask most professional developers (as compared to hobbyists; see “What’s the Difference between a Hobbyist and a Professional“) about JavaScript and you’ll likely get non-positive responses. Most professional developers – that is, those that get paid – don’t like JavaScript because it’s hard to make a craft of developing when you’re limited by your language. Most professional developers have come to expect a set of tools to help them develop more reliable code and to create the code faster in the first place. JavaScript isn’t bad for tying a few loose things together; however, developing enterprise-scale applications is not what it was designed for.

Language and Library Richness

Unlike Java and .NET, JavaScript isn’t equipped out of the box with an array of libraries to call into. This means that you’re always on the search for a third-party library that may be helpful to you. Simple things, like converting a play position from a video file into a standard text string expressing hours, minutes, seconds, and frames, can take a hundred lines of code or so. (I just wrote this, so it isn’t an arbitrary number.) The simple – like leading zeros for numbers – isn’t simple. Without the concept of an integer, integer division means taking the modulus of 1 and subtracting it from the number to truncate it.
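
Here’s a rough sketch of the kind of plumbing I mean – it assumes a fixed 30 frames per second, which is a simplification, and the function names are my own:

```javascript
// Pad a number with leading zeros - there's no built-in format string to do this.
function pad2(value) {
    return (value < 10 ? "0" : "") + value;
}

// There is no integer type, so truncation means subtracting the modulus of 1
// (Math.floor works as well for non-negative numbers).
function truncate(value) {
    return value - (value % 1);
}

// Convert a play position in seconds to hh:mm:ss:ff, assuming 30 frames per second.
function toTimecode(positionSeconds) {
    var totalSeconds = truncate(positionSeconds);
    var hours = truncate(totalSeconds / 3600);
    var minutes = truncate((totalSeconds % 3600) / 60);
    var seconds = totalSeconds % 60;
    var frames = truncate((positionSeconds - totalSeconds) * 30);
    return pad2(hours) + ":" + pad2(minutes) + ":" + pad2(seconds) + ":" + pad2(frames);
}

console.log(toTimecode(3725.5));   // "01:02:05:15"
```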

Professional developers who are used to ignoring some of the low-level functionality will be continuously frustrated by the need to develop their own library of routines to do the things that they’ve taken for granted in other languages. Sure, I listed libraries above as a good thing for JavaScript – and they are. However, they’re filling the fundamental void that exists because the language isn’t built upon a library of useful functions.

The problem is that libraries and operating environments clash with one another from time to time, and because of this you find yourself working on getting code to work together. Consider a situation where two libraries require two different versions of jQuery – how do you make sure each of the libraries gets the jQuery that it needs?
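
One partial workaround – and it only goes so far – is jQuery’s own noConflict mechanism, which hands a specific version back under a private name (the element IDs below are hypothetical):

```javascript
// After loading jquery-1.4.4.min.js (the version a legacy library depends on):
var jq144 = jQuery.noConflict(true);   // capture 1.4.4 and restore the prior globals

// After loading jquery-1.8.2.min.js (the version the rest of the page uses):
jq144("#legacyWidget").show();    // the legacy code keeps its pinned copy
jQuery("#modernWidget").show();   // everything else uses the newer copy
```

It works for the simple case, but it doesn’t help when both libraries insist on finding jQuery under the same global name.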

Static Typing

I remember writing code in C and C++ eons ago. I remember that my development environment was a glorified text editor. I remember needing to put print statements in my code to see what was going on – and that the idea of interactive debugging was a dream. Today, because of strongly typed languages and better development environments, we’ve learned to rely on our ability to interactively debug and, more importantly, we’ve learned to rely on the feature that Visual Studio calls Intellisense.

Hit a period and you’re going to get a list of things that you can do with that class. You don’t have to remember the method name or property name. You can browse from the list of items and simply select the item you need. From a cognitive load/instruction perspective this dramatically reduces the amount of mental processing required and makes it both easier and quicker to work on code. (You can look at my article for TechRepublic “Squeeze more out of the construction phase of application development” for a historical perspective on improving developer productivity.) In my book reviews for Efficiency in Learning and The Adult Learner, I speak more about the impact of cognitive load on learning – the same applies to development. The higher the load the lower the developer productivity.

A quick sidebar to realize that the primary constraint for developing code is the amount of workload the brain is doing. Developers are running mental simulations of how the code will run; they’re tweaking the code in their heads before they type it in. (See Sources of Power for more on mental models.) The in-head compiler doesn’t care about method names; it cares about operations and actions. Having to remember the exact method name is an overhead that distracts the developer from their primary job of making the logic work. In this way our new development environments have made development quicker – but that requires that the development environment know what options there are. When you’re dynamically putting in your own name-value pairs the environment may not know what is and isn’t expected.

The Intellisense functionality is really an extension of compile-time checking – that is, a static analysis of the code to determine if there will be a problem. This isn’t possible in a truly dynamic language. As a result, not only do you not get Intellisense, you don’t get the protection of the compiler telling you that you’re making a mistake. We know that the sooner defects are discovered the less costly they are to resolve. If a defect is resolved by Intellisense the cost is really small. At compile time it’s larger but still small. When you discover the problem in a developer’s debugging session it’s getting large enough to be called something other than small. If you have a problem that only occurs sometimes and is data/execution-path dependent, it becomes expensive to locate. That means more development costs.
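
A tiny, hypothetical example of the class of defect a compiler would normally catch for free:

```javascript
var invoice = {
    subtotal: 100,
    taxRate: 0.07
};

// Typo: "taxRate" misspelled. There's no compile error and no warning - the
// expression silently evaluates to NaN because invoice.taxRte is undefined.
var total = invoice.subtotal * (1 + invoice.taxRte);

console.log(total);   // NaN - discovered only when this code path actually runs
```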

Code Structuring

Software development is still struggling as a profession to become professional. Exercises like the Software Engineering Body of Knowledge (SWEBOK) have attempted to catalog what we know about software development. As we attempt to dig our way out of the primordial soup of software development practices we’ve found that there are some things that are helpful. Like object oriented development mentioned above, we’ve discovered that there are techniques that simplify problems for our feeble human brains and therefore lead to better software. Some of the things we know aren’t effective in software development are the default behaviors in JavaScript.

A long time ago we learned that global variables are bad when used incorrectly. They lead to spaghetti code that is difficult to debug and difficult to maintain. In JavaScript, by default, variables are scoped at a global level. Without the concept of classes or modules, we’re left with the default that everything has to be a global variable – or a local variable inside of a function. In order to mitigate the impact of this, we’ve built patterns to isolate the scope of a variable inside of JavaScript – but those patterns are working around the default behavior of the language.

Further, without a formal concept of namespaces, modules, or classes, we find ourselves fighting to create structure where no structure is implied. It means that the default path for writing JavaScript is a bad path. Structuring code requires even more thought and focus because the language doesn’t encourage the isolation that leads to structured code – and better developer productivity.
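
The most common of those workaround patterns is the module pattern – wrapping code in an immediately invoked function so that variables get function scope instead of landing on the global object. A minimal sketch, with a made-up namespace:

```javascript
// Without this pattern, counter and increment would both become globals.
var MyCompany = MyCompany || {};

MyCompany.counterModule = (function () {
    var counter = 0;                  // private: scoped to this function, not global

    function increment() {            // private helper, invisible to the outside
        counter = counter + 1;
        return counter;
    }

    return {                          // the only members exposed to callers
        next: increment
    };
})();

console.log(MyCompany.counterModule.next());   // 1
console.log(typeof counter);                   // "undefined" - nothing leaked globally
```

The pattern works, but notice that it’s the developer imposing the structure – the language never asks for it.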

Clear Text Code

One of the other issues when talking about JavaScript is that the code is all clear-text. While this is good from a debugging perspective, it’s bad from an intellectual property perspective. It means that it’s trivially simple to look at someone else’s implementation of something in JavaScript and copy it. This reduces the competitive advantage for developing a solution to a problem and means that sensitive or proprietary techniques for processing aren’t great fits for JavaScript.

Having source code for someone else’s library may seem like a great thing in terms of helping developer productivity. However, from the standpoint of a corporation that’s building its business on the competitive advantage of the code – through better operations or through the sale of licenses to use the code – the idea that your code is just hanging out there for all to see is very concerning, and potentially a barrier to development.

In my practice as a software developer it’s the standard that the client will own the intellectual property that I develop; in other words, the code is theirs and they’re not going to share it. In some cases I’ve had clients who have allowed me to share this code with other clients – however, only in cases where the second client didn’t compete with the one who had paid for the work.

Development Time

When you get down to it, the most precious resource in any development project is the developer – or perhaps the architect. Developer productivity is absolutely essential to getting projects done – and on this score, JavaScript doesn’t do well. If you read the classics of software development like Peopleware, Code Complete, or The Mythical Man-Month, you’ll realize that we’ve been fighting for developer productivity for a long time. I’ve not seen any formalized research comparing developer productivity in a strongly typed language – like C# or Java – to JavaScript; however, in my informal conversations about it and in my experience with development teams, I find that developers are roughly 2 to 10 times faster at developing in a strongly typed language than in JavaScript. I’m willing to attribute some of this time to learning time; however, much of it has to do with the inability to build good tooling to support the development process.

Compositing

Many development scenarios today involve compositing. That is, taking pieces from different places and getting them to work together. As mentioned above, getting two libraries to exist in the same space can be challenging. If you load two different versions of the same library – jQuery for instance – how do you ensure that the most recent version is the one that is being used? Similarly, if the JavaScript is using selectors to target elements, what happens when you include two copies of the same code on the same page – as might happen on a portal?

While these challenges are not insurmountable, they’re problems for which there aren’t great – generically applicable – answers.

Client Performance

The cliché answer given by a software developer about what’s wrong with code is “It works on my machine.” That’s probably true. However, it’s also equally likely to be true that the developer has a faster, better machine than most of the users who will be using what is developed. Developers get faster machines to be able to do their job. What happens when their JavaScript takes a reasonably long time even on their computers? Eric Shupps was speaking about jQuery performance in his initial and follow-up posts on the topic. Basically, Eric was sharing that jQuery walks the document object model – and thus is sensitive to larger pages and slower machines. By moving work to JavaScript we decentralize the problem and therefore make it easier to scale; however, we also make performance sensitive to the remote machine.
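
There are defensive habits that help – this is a hedged sketch with hypothetical IDs – such as scoping selectors to a container and caching the matched set instead of walking the DOM over and over:

```javascript
// Approach to avoid on large pages: every call to $(".task-row") makes jQuery
// walk the document again looking for matches.
$(".task-row").addClass("compact");
$(".task-row").css("width", "100%");
$(".task-row").find(".due-date").show();

// Better: scope the selector to a container, run it once, and reuse the result.
var $taskRows = $("#taskList").find(".task-row");   // the DOM is walked a single time
$taskRows.addClass("compact");
$taskRows.css("width", "100%");
$taskRows.find(".due-date").show();
```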

If the JavaScript interpreter is broken then the page is broken. If the remote machine is underpowered then the site will appear to be slow. In most cases we’d prefer to have a great deal of control over our performance – and JavaScript is at the mercy of the users’ machine. The user-side problems can create a bad perception of our site.

Debugging

Perhaps the most frustrating thing about JavaScript has to be the debugging experience. It’s not quite back to the days of print statements to see what’s going on. Development tools in Firefox – Firebug – and in Internet Explorer – the Developer Tools – do allow you to interact with the JavaScript and see what is happening, but you do so disconnected from the authoring environment. Once you’ve determined the source of the problem you’ve got to go back into your development environment (Visual Studio?), make the change, and then reload the page. This is certainly not the most developer-friendly workflow – even if it can be used to accomplish the goal.

How JavaScript Makes Bad Developers

The title of this blog post may be slightly provocative. How can a language make someone a good or a bad developer? Well, the answer is really found in Kurt Lewin’s description of behavior being a function of person and environment. Many of us would probably agree that if someone behaves badly they would appear to be bad. And that’s really what I’m saying here. If someone exhibits bad development behaviors (global variables, etc.) then we would label them as a bad developer – and since JavaScript makes this the default behavior, how could we do anything but say that JavaScript makes bad developers? But let’s explore this more deeply.

Making the Bad Easy and the Good Hard

If we accept that bad behavior is a function of the environment then why does it matter? Well, it matters because we’re establishing “norms” about how development should be done, and establishing bad “norms” will create bad habits – and habits are hard to break. So if you’re used to thinking about neither how you’re structuring your data nor the scope of your variables, then even when you’re in another language you won’t give these things the appropriate amount of thought.

I once had a developer who had grown up doing ASP development. In those days it was VBScript that was being used to build pages, and just like JavaScript you could declare variables on the fly. That is, you could just start using a token and it would become a variable. The problem with this – like with JavaScript today – is that when you made a typo (we’re back before Intellisense here) you would accidentally declare a new variable and create problems for debugging. The ability to put ‘Option Explicit’ at the top of the file was added, which would require that you explicitly declare variables. My developer wouldn’t do it – and would call me for help.

After numerous rounds of telling him to use ‘Option Explicit’ because I had found a simple typo in his code that created a problem, I eventually had to tell him that if he called me over to debug his code again without Option Explicit at the top I’d have to fire him on the spot. This is not one of the brighter spots of my management career, but I got his attention.

The discussion that followed was that he felt like it just slowed him down and that it was a polish thing – the thing that you did to make the code “right” at the end. The problem is that it wasn’t a ceremony thing (ceremony is a word used in agile to talk about activities that don’t generate results); it was discipline to get him to think the right way about problems so that he could reduce defects. I wanted him to think about his variables and what they were being used for. I should say that JavaScript now has a similar concept: “use strict”.
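
For completeness, here’s roughly what the JavaScript equivalent of Option Explicit looks like:

```javascript
"use strict";

function calculateTotal(price, quantity) {
    var total = price * quantity;
    totl = total;          // typo: without strict mode this silently creates a
                           // global variable; with "use strict" it throws a
                           // ReferenceError the first time this code runs
    return total;
}

calculateTotal(10, 3);     // ReferenceError: totl is not defined
```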

The point here is that we establish what the norms are – and pattern our thinking based on what we expect out of a language. So if we expect all variables to be dynamically declared at a global scope, we’ll design that way – despite the research that says this is a bad idea.

You’ll note that here I’m blaming the system not the developer. Nothing is accomplished by saying that they’re bad developers. Fundamentally I believe they have the potential to be good developers, if not encumbered by the language or system. I mentioned briefly in my review of Diffusion of Innovation that blaming people means you can’t solve the problem – blaming the system means you have something you can fix.

Bad Behaviors lead to Bad Habits

If we take the developer out of the language and put them into a language that requires strongly typed variables and offers them local scope, won’t they just change because of the environment change? Yes – and no. In the clear-cut cases, sure. They’ll default to the specific knowledge they have, but in the less clear-cut cases those who are used to – and accept – global variables will leverage them more often.

Take for instance the case of a global logging class. It’s used just to log things in case there’s an error. In 2002 I wrote an article, “Focus on Functions”, for Developer.com. In the article I talk about method (function) signatures. My strong bias is to pass in everything that is needed by the function, with the smallest objects possible, and to make the methods static. I do this to minimize unintended consequences. (It drives my testing friends nuts because static methods are hard to test.) So I’m more likely to include a logging class as a parameter to my methods than someone who’s comfortable with a global scope.

I purposefully provided a case where the right answer may be a globally scoped variable (or, more appropriately, a property so it can self-initialize). However, what about passing around a SQL connection – or not? There’s greater testability when you can easily and consistently establish context.
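
In JavaScript terms, the difference in habit looks something like this hedged sketch (the logger and functions are made up for illustration):

```javascript
// Habit carried over from a global-scope world: reach for a global logger.
var logger = { log: function (msg) { console.log(msg); } };

function saveOrderGlobal(order) {
    logger.log("Saving order " + order.id);   // hidden dependency on a global
    // ... persistence logic ...
}

// Habit I try to encourage: pass in everything the function needs.
function saveOrder(order, log) {
    log("Saving order " + order.id);          // dependency is explicit and testable
    // ... persistence logic ...
}

// In a test, the explicit version is trivial to isolate:
var captured = [];
saveOrder({ id: 7 }, function (msg) { captured.push(msg); });
```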

The bad behavior that was learned through no fault of the developer will continue on into new languages simply because it has become the “norm” for how they develop.

An Argument against Being an Ostrich

I need to end with a call to continue to learn and grow. Whether JavaScript is the right answer for you – or not – exploring new techniques, tools, and approaches is a good thing. In another old article, “The Great Divide“, I talk about how people get stuck by not seeking out new and better ways of doing things. I’m not suggesting that you don’t do JavaScript development or that you ignore the changes that are happening in our marketplace. Instead, I’m advocating that you go in eyes wide open – and that you look for ways to minimize the pain. Right now, when it comes to JavaScript, the way I’m looking at this is through TypeScript.


Book Review-Humilitas: A Lost Key to Life, Love, and Leadership

If you’ve been reading my blog – or even just the book reviews – for a while, you may have realized that I’m on a voyage of self-discovery – and it’s a voyage that isn’t even close to its end. I’m constantly trying to improve my understanding of psychology, sociology, and leadership – among other skills. My backlog of reading on these (and other) topics can feel overwhelming to me at times. I bring this up because I can already hear some of the potential comments about reading the book Humilitas – “So now that you’ve read the book you probably know everything there is to know about humility.” Um, no. The truth is that this is the third book that I’ve read about humility. The other two, Humility and Humility: True Greatness, didn’t speak to me strongly enough to write a full book review on them. Not that they’re not good books – they just didn’t speak to me clearly.

I read these books because I realize that a lack of humility has been a barrier in my life. It’s like the pervasive smell that is in your clothes, in your hair, and in your world after going to a bonfire. Every time you believe you’ve rooted out the smell there’s another whiff of it. Humilitas ends with a quote from C.S. Lewis: “If anyone would like to acquire humility, I can, I think, tell him the first step. The first step is to realize that one is proud. And a biggish step, too. At least, nothing whatever can be done before it. If you think you are not conceited, it means you are very conceited indeed.” In short, the moment you believe you’ve obtained humility, you haven’t.

I remember years ago playing a game, Ultima IV: Quest of the Avatar, the goal of which was to accumulate eight virtues (honesty, compassion, valor, justice, honor, sacrifice, spirituality, and humility). I was asked a question in one of the dialogs: “Art thou the most humble person on the planet?” (My apologies if the quote isn’t exact.) I can remember pausing and thinking that this was an awful question (which means it was brilliant). I had already earned the humility portion of the avatar, so I was humble. Being at the time 13 years old or so, I answered yes. The game responded with “Thou hast lost an eighth” a few times, because I lost humility and honesty, which in turn caused me to lose several of the other eighths that required ‘truth’. It was profound because I had been working quite hard to complete the game and in one moment I had lost half of the progress (four eighths). For all the evils that people find in role-playing games – this was a very memorable moment for me about humility (and still not enough to teach it to me).

Since then I’ve been accused of arrogance more than once by a variety of people – and I’ll accept that it’s a struggle, particularly when someone challenges what I know. (Strangely enough, this happens more frequently than one would expect.) That’s why I was surprised to see the book say, very early on, that contemporary western culture often confuses conviction with arrogance. While by no means an excuse for the lack of caring in my responses at times, it softened the blow for me. If I know that the technology is configured wrong – that it won’t work or won’t work right – it may not be arrogance to share this belief, in a caring way.

One of the really difficult things for me has been trying to rapidly establish my credentials in front of new audiences and clients. Conventional wisdom says that you have to introduce yourself to a client with your certifications, achievements, and accolades in an environment where you’re selling. At a corporate level, look at how nearly every technology company has their “brand name clients” slide that lists who they’ve worked with. On a personal level nearly every conference I’ve spoken at suggests or requires a bio slide. It’s the “about the speaker” slide that is supposed to help the attendees realize why the speaker is someone to listen to – and by extension why the conference organizers did their job correctly by selecting such great speakers to speak to you.

In a lot of ways, I’m a slow learner. It’s been relatively recently – and at the specific suggestion of some friends (thank you) – that I’ve stopped doing the “about me” slide. I sometimes have a slide that shares my background – but it does so for the purposes of establishing how I think about the topic – not for the purposes of sharing my credentials. My audiences deserve to know where I’m coming from – but they don’t need to know about my certifications.

I’ve been “taught” this in many different ways. Back at the NSA Conference (click for my experiences) I attended a session by Connie Podesta where she said something simple and profound “the privilege of the platform.” Her talk was about how she made her business work. She was talking about her openness to learning, and how she was grateful for the privilege of speaking to another group of people. She never used the word humility – that I can remember – but she did demonstrate it.

Humilitas is the root of our word humble, and it is one word with two meanings: “to be humiliated” and “to be humble”. The book makes the point that being humble is not self-effacing. It’s not a solitary virtue. It’s not something that is forced on you. Humility is others-serving and a choice. Humility is more than modesty or dignified restraint (modestia is its root word).

I find it somewhat humorous that John Dickson spends so many words iterating and reiterating his perspective – that Jesus the Nazarene dramatically changed the course of human events – without asserting Christian beliefs. As a student of history he asserts that the change happened whether Jesus was the Christ, a prophet, or “just” a carpenter. It’s humorous to me because, without any belief required, there is historical evidence to indicate the impact of Jesus.

It seems like nearly everyone struggles with humility. Thomas Gilovich did a study of 1 million (million!) high school seniors and 70 percent thought they were “above average in leadership ability.” I don’t know these students, but I do know that at least 20% of them are wrong. It’s not just students who struggle with humility. Gilovich found that 94 percent of college professors believe they are doing a “better-than-average job”. Hmm, again, at least 44% of these professors have to be wrong – statistically speaking.

Dickson makes the point that humility is a useful virtue. It serves not only the community at large – because humility is defined as using your power to help others – but it also helps the individual. Humble individuals know that they don’t know everything, and so they’re always trying to learn from others, to understand someone else’s expertise.

Sometimes while presenting I have students in the classroom that want to demonstrate their knowledge by the questions that they ask. They ask about some specific situation that they already know the answer to. When I can’t answer their question they provide their own answer. The humble person doesn’t sit quietly and shy away from asking questions, rather they ask questions from the perspective of applying the talk to their situation – and to integrating the presentation into their way of thinking.

One last random thought, which comes by way of Amanda Gore. I was sent a link to one of her YouTube videos by a colleague. She was talking about a concept in Australia called tall poppies. Basically it’s a negative way of talking about those who have high merit. This is connected in that if you hold your head up high – deserved or not – you may have someone come try to cut your head off. Not that you shouldn’t own what you know but just that humility may be the right answer at times.

I invite you to join me on this journey by reading Humilitas.

Performance and Scalability in Today’s Age – Horizontal vs. Vertical Scaling

In early 2006 I wrote an article for TechRepublic. That article was titled “The Declining Importance of Performance Should Change Your Priorities.” Of all of the articles that I have ever written, this article has been the most criticized. People thought I was misguided. People thought I was crazy. People believed that I was disingenuous in my beliefs. What I didn’t say then is what I can share now: I believed – and believe – every word. The piece that the readers missed was the background I brought to writing the piece – and the awareness that you must have some testing. The title wasn’t “Stop Performance Testing” nor was it “Design Your Applications Poorly”; it was just about the declining importance.

In 2009, I wrote a series of articles for Developer.com on performance from a development perspective – to cover in detail my awareness of the need to do good performance design. The articles were: Performance Improvement: Understanding, Performance Improvement: Session State, Performance Improvement: Caching, and Performance Improvement: Bigger and Better. In these articles I was talking about strategies, techniques, and approaches for building high performance systems – and how to measure them. I wrote these articles from the fundamental point of view that scalability is mostly horizontal – that adding more servers is how you maintain performance and increase scalability.

Today, I’m struck by how we scale environments, so I want to take a step back to review these articles, explain performance management, and then move forward to how we scale in today’s world.

The first thing you have to know about the article from 2006 is that I was writing quite a bit back then. One year it was 75 articles on top of my “day” job in consulting. As a result, I took my writing topics from my consulting. At that point, I was at a client where there was so great an investment in performance testing that it was starving out important development needs. What I didn’t say is that the results of the detailed performance analysis had nearly zero correlation with how the system would really perform. That’s the reason I wrote the article: it was an attempt to soften the fervor for performance testing – not because I didn’t believe in knowing more about what you’re building. Rather, it was because I believed that things were changing – and I still believe they are. To this day I recommend performance, scalability, and load testing. I just do so with realistic expectations and realistic budgets.

[Note: I’ll use performance testing as shorthand for talking about performance, load, and scalability testing. I’m aware they are different but related topics; it’s just easier to simplify them to performance testing for conversational purposes – if we get into detail we can split one type of testing from another.]

It used to be that when we thought about performance (and scalability, which is inextricably linked) we thought about creating faster computers with more memory and faster hard disks – strike that, let’s use SSDs. We believed that we had to grow the computer up and make it faster. However, in today’s world we’ve discovered that we have to apply multiple computers to whatever problem we have. In effect we’ve moved from vertical scaling to horizontal scaling. I need to explain this, but before I go there I have to talk about the fundamentals of performance management. This is a conversation that Paul Culmsee and I have been having lately and for which he’s been writing blog articles.

Oversimplification of Performance Monitoring

Over the years I’ve written probably a dozen or so chapters on performance for various books. Honestly, I was very tired of it before I gave it up. Why? Well, it’s painfully simple and delightfully difficult. Simple in that there are really only four things you need to pay attention to: Memory, Disk, Processing, and Communication. It’s delightfully difficult because of the interplay between these items and the problem of setting up a meaningful baseline – but more on that in a moment.

Perfect Processing

When I get a chance to work with folks on performance (which is more frequent than most folks would expect), I make it simple. I look at two things first. I look at processing – CPU – because it’s easy. It’s the one everyone thinks about. I look for two things. Is there a single core in the CPU that’s completely saturated? If so, I’ve got a simple problem with either hardware events or single-threaded software. However, in today’s world of web requests the problem of single threading is much less likely than it was years ago. The second thing I look for is to see if all the cores are busy – that’s a different problem, but a solvable one. Whether the answer is making the code more efficient by doing code profiling and fixing the bad regions of code, or adding more cores to the server, doesn’t matter from our perspective at the moment.

I Forgot About Memory

The next thing I look at is memory. Why memory? I look at memory because memory can cover a great many sins. Give me enough memory and I’ll only use the disks twice. Once to read it into memory and once to write it back out when it has changed or when I want to reboot the computer. If you have too little memory it will leak into disk performance issues due to the fun of virtual memory.

Let me go on record here about virtual memory. Stop using virtual memory. Turn off your paging files in Windows. This is a concept that has outlived its usefulness. Before the year 2000, when memory was expensive, it was a necessary evil. It’s not appropriate any longer. Memory should be used for caching – making things faster. If your machine isn’t saying that it’s doing a lot of caching, you don’t have enough memory. (For SQL Server, look for SQL Buffer Manager: Page Life Expectancy – if it’s 300 or more … you’re probably OK.)

Dastardly Disks

So then, why don’t I start with disks – particularly since I’m about to tell you it’s the largest issue in computer performance today? The answer is that they’re hard to look at for numerous reasons. We’ve created so much complexity on top of hard disks. We have hard disk controllers that leverage RAID to create virtual disks from physical ones. We have SANs that take this concept and pull it outside of the computer and share it with other computers.

We have built file systems on top of the sectors of the disk. The sectors are allocated in groups called clusters. We sometimes read and write randomly. We sometimes read and write sequentially. Because of all of these structures we occasionally see “hot spots” where we’re exercising one of the physical disks too much. Oh, and then there’s the effect of caching and how it changes disk access patterns. So ultimately I ask about the IOPS and I ask about latency (average time per read and average time per write). However, the complexity is so great and the patterns so diverse that it’s sometimes hard to see the real problem amongst all the noise. (If you want more on computer disk performance, check out my previous blog post “Computer Hard Disk Performance – From the Ground Up.”)
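
For what it’s worth, the IOPS and latency numbers I ask about come from the PhysicalDisk counters. Here’s a rough C# sketch of pulling them – the two-second sample window is arbitrary, and a single snapshot is nowhere near a full picture of disk behavior.

using System;
using System.Diagnostics;
using System.Threading;

class DiskCheck
{
    static void Main()
    {
        // Latency (seconds per operation) and IOPS across all physical disks.
        var readLatency  = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Read",  "_Total");
        var writeLatency = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Write", "_Total");
        var transfers    = new PerformanceCounter("PhysicalDisk", "Disk Transfers/sec",  "_Total");

        // Prime the counters, then take a real sample a couple of seconds later.
        readLatency.NextValue(); writeLatency.NextValue(); transfers.NextValue();
        Thread.Sleep(2000);

        Console.WriteLine("Avg read latency:  {0:N1} ms", readLatency.NextValue() * 1000);
        Console.WriteLine("Avg write latency: {0:N1} ms", writeLatency.NextValue() * 1000);
        Console.WriteLine("IOPS:              {0:N0}", transfers.NextValue());
    }
}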

With experience you learn to notice what is wrong with disk performance numbers and what the root cause is, but explaining this to a SAN engineer, SAN company, or even a server manager often becomes a frustrating exercise. Gary Klein, in Sources of Power, talks about how experts make decisions, and the finding is that they decide based on complex mental models of how things work and upon expectancies. In the book Lost Knowledge, David DeLong addresses the difficulty of a highly experienced person teaching a less experienced person: the mental models of the expert are so well understood by the expert – and not by the novice – that communicating why something is an issue is difficult. Finally, in the book Efficiency in Learning, Ruth Clark, Frank Nguyen, and John Sweller discuss how complex schemas (their word for the mental models) can be trained – but the approaches they lay out are time consuming and complex. This is my subtle way of saying that if the person you’re speaking with gets it – great. If not, well, you are going to be frustrated – and so are they.

Constant Communications

The final performance area, communication, is the most challenging to understand and to test. Communications is the real struggle because it is always the bottleneck. Whether it’s the 300 baud modems that I had when I first started with a computer or the ten gigabit Ethernet switching networks that exist today – it’s never enough. It’s pretty common for me to walk into an environment where the network connectivity is assumed to be good enough. Firewalls slip in between servers and their database server. We put in 1Gbps links between servers – and even when we bond them together two or four at a time, we route them through switches that only have a 1Gbps backplane – meaning that they can only move 1Gbps worth of traffic anyway. There are many points of failure in the connections between servers and I think I’ve seen them all fail – including the network cable itself.

One of my “favorite” problems involved switch diversity. That’s where the servers are connected to two different switches to minimize the possibility of a single switch failing and disconnecting the server. We had bonded four one-gigabit connections, two on each switch, and we were seeing some very interesting (read: poor) performance. Upon further review it was discovered that the switches only had a single one-gigabit connection between them, so they couldn’t keep up with all the traffic that had to cross between the two switches. Oh, and just for fun, neither of the switches was connected to the core network with more than a one-gigabit connection either. We fixed the local connectivity problem to the switches by doing bonding. We solved the single point of failure problem with switch diversity – but we didn’t fix the upstream issues that prevented the switches from performing appropriately.

With the complexity of communication networks, it’s hard to know exactly where a performance problem is. You can look at how much the computer transmits across the network or you can monitor a switch, but there are so many pieces to measure that it may be hard to find the real bottleneck. I’m not saying that you shouldn’t take the effort to understand where the performance issues are, I’m just saying there are ways to quickly and easily find potential issues – and there are harder ways.

Communications are likely to always be the bottleneck. Even in my conversations today one of our biggest concerns is WAN performance and latency. We spend a great deal of time with client-side caching to minimize the amount of data that must be pulled across the WAN. We buy WAN accelerators that do transparent compression and block caching. It’s all to the goal of addressing the slowest part of the performance equation – the client connectivity. It’s almost always a part of the problem. (If you want a practical example, take a look at my blog post “SharePoint Search across the Globe.”)

Perhaps the most challenging part of working with communications as a bottleneck is the complexity. While we can dedicate a server to a task and therefore isolate the variables we’re trying to test, most communications resources are shared and because of this present the problem of needing to isolate the traffic we’re interested in – and it means that we’ll sometimes see performance issues which have nothing to do with our application.

One of the great design benefits of the Internet is its resiliency. Being designed from the ground up to accommodate failures (due to nuclear attacks or what-have-you) buys you a lot of automatic recovery. However, automatic recovery takes time – and when it’s happening frequently, it may take too much time. One frequent problem is packet loss. There’s a normal amount of packet loss on any network due to sun spots, quarks, neutrinos, or whatever. However, when packet loss even approaches 1% it can create very noticeable and real performance problems. This is particularly true of WAN connections. They’re the most susceptible to loss, and because of the latencies involved they can have the greatest impact on performance. Things like packet loss rarely show up in test cases but show up in real life.
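
A quick way to get a feel for loss and latency on a link is a crude probe like the C# sketch below. The host name is a placeholder, and an ICMP ping is no substitute for real network monitoring, but it illustrates the kind of numbers I’m talking about.

using System;
using System.Net.NetworkInformation;

class LossProbe
{
    static void Main()
    {
        const string host = "remote-office-gateway";   // placeholder - pick a host across the WAN
        const int samples = 100;
        int lost = 0;
        long totalMs = 0;

        using (Ping ping = new Ping())
        {
            for (int i = 0; i < samples; i++)
            {
                PingReply reply = ping.Send(host, 2000);   // two-second timeout
                if (reply.Status == IPStatus.Success)
                    totalMs += reply.RoundtripTime;
                else
                    lost++;
            }
        }

        Console.WriteLine("Packet loss: {0:P1}", (double)lost / samples);
        if (lost < samples)
            Console.WriteLine("Average round trip: {0} ms", totalMs / (samples - lost));
    }
}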

Bottlenecks

Here’s the problem with performance (scalability, and load) testing. When you’re looking for a performance problem, you’re looking for a bottleneck. A bottleneck is a rate limiter. It’s the thing that says you can’t go any faster because of me. It gets its name because a bottle’s neck limits the amount of fluid you can pour into or out of the bottle.

With finding bottlenecks, you’re looking for something that you won’t find until it becomes a problem. You have to make it a problem. You have to drive the system to the point where the rate limiting kicks in and figure out what caused it – that’s what performance testing is about. One would appropriately assume that we use performance testing to find bottlenecks. Well, no, at least not exactly. We use performance testing to find a single bottleneck. Remember a bottleneck is a rate limiter so the first bottleneck limits the system. Once you hit that rate you can’t test any more. You have to remove the bottleneck and then test again. Thus we see the second issue with performance testing.

The first issue, which I discussed casually, was complexity. The second issue is that we can only find performance problems one at a time. We only get to identify one bottleneck at a time. That means we have to make sure we’re picking the right approach to finding bottlenecks, so we find the right one. As it turns out, that’s harder than it first appears.

What Problem Are We Solving Again?

We’ve seen two issues with performance testing but I’ve not exposed the real problem yet. The real problem is in knowing how the users will use the system. I know. We’ve got use cases and test cases and requirements and all sorts of conversations about what the users will do – but what will they REALLY do? Even for systems where there are established patterns of use, the testing usually involves a change. We do testing because we want to know how the change in the code or the environment will impact the performance of the system. Those changes are likely to change behavior in unpredictable ways. (If you don’t agree read Diffusion of Innovations.) You won’t be able to know how a change in approach will impact the way that users actually use the system. Consider that Amazon.com knows that even small changes in performance of their site can have a dramatic impact on the number of shopping carts that are abandoned – that doesn’t really make any sense given the kinds of transactions that we’re talking about – but it’s reflected in the data.

I grew up in the telecommunications area in my early career. We were talking about SS7 – Signaling System 7. This is the protocol with which telephone switches communicate with one another to form up a call from one place to another. I became aware of this protocol when I was writing the program to read the switches – I was reading the log of what the messages were. The really crazy thing is that SS7 allowed for a small amount of data traffic across the voice network for switching. In 1984 Friedhelm Hillebrand and Bernard Ghillebaert realized that this small side-band data signaling area could be leveraged to allow users to send short messages to one another. This led to the development of the Short Message Service (SMS) that many mobile phone users use today. (It got a name change to texting somewhere along the way.) From my perspective there’s almost no possible way that the original designers would have been able to see that SS7 would be used by consumers to send “short” messages.

Perhaps you would like a less esoteric example. There’s something you may have heard of. It was originally designed to connect defense contractors and universities. It was designed to survive disaster. You may have guessed that the technology I am talking about is the Internet. Who could have known back then that we’d be using the Internet to order pizza, schedule haircuts, and generally solve any information or communication need?

My point with this is that if you can’t predict the use, then you can’t create a synthetic load – the artificial load that performance testing uses to identify the bottleneck. You can guess at what people will do, but it’s just that – a guess – and not even an educated guess. If you guess wrong about how the system is used, you’ll reach the wrong conclusion about what the bottleneck will be. In past articles I have talked about reflection and exception handling. Those are certainly two things that do perform more slowly – but what if you believed that the code that did the reflection was the most frequently used code, instead of the least frequently used code? You’re likely to see CPU as the key bottleneck and ignore the disk performance issues that become apparent with heavy reporting access.

So the key is to make sure that the load you use in the simulation is close to what users will actually do. If you don’t get the simulation right you may – or may not – see the real bottlenecks that prevent the system from meeting the real needs of the users. (This is one of the reasons why the DevOps movement is interesting. You can do a partial deployment to production and observe how things perform for real.)
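
To show what I mean by a synthetic load being a guess, here’s a bare-bones C# sketch. The URLs, the 80/20 page mix, the think times, and the user count are all assumptions – which is exactly the point.

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class SyntheticLoad
{
    // The URLs and the 80/20 mix below are guesses about user behavior -
    // which is exactly the weakness of synthetic load.
    static readonly string[] Pages = { "http://intranet/home", "http://intranet/reports" };

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            var users = Enumerable.Range(0, 50).Select(async userId =>
            {
                var rand = new Random(userId);            // one Random per simulated "user"
                for (int i = 0; i < 20; i++)
                {
                    // 80% of requests hit the home page, 20% hit reports - a guess.
                    string url = rand.NextDouble() < 0.8 ? Pages[0] : Pages[1];
                    try { await client.GetAsync(url); }
                    catch (HttpRequestException) { /* a real test would count failures */ }
                    await Task.Delay(rand.Next(500, 2000));   // crude think time
                }
            }).ToList();

            await Task.WhenAll(users);
        }
    }
}

A real load testing tool gives you ramp-up, counters, and reporting; the point here is only that the request mix is something you choose, not something you know.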

Baselining

Before I leave the topic of performance and performance testing, I have to include one more key part of performance management: baselining the system. That is, recording performance data when things are working as expected so that it’s possible to see which metric (or metrics) are out of bounds when a problem occurs. It is one thing to know that performance has dropped from 1 second per page to 3 seconds per page – and quite another to identify which performance counters are off. What often happens is that the system is running and then after a while there’s a problem. Rarely does anyone know how fast the system was performing when it was good – much less what the key performance counters were.

Establishing a baseline is invaluable if you do find a performance problem in the future – and in most cases you will run into a performance problem in the future. Interestingly, however, organizations rarely do a baseline, because a baseline requires some separation in time from the launch, and remembering to do something a month after a new solution goes live is hard – you’re on to the next urgent project, or you simply forget. Further, without periodic rebaselining of the system, you’ll never be able to see the dramatic change that happens right before the performance problem.

With baselining you’re creating a snapshot of “normal” – this is what normal is for the system at this moment. We leverage baselines to allow us to compare the abnormal condition against the known simplification of the normal condition. We can then look for values that are radically different than the normal we recorded – and work backwards to find the root cause of the discrepancy – the root cause of the performance issue. Consider performance dropping from 1 second per page to 3 seconds per page. Knowing a normal CPU range and an abnormal one can lead to an awareness that there’s much more CPU time in use. That might lead us to suspect new code changes are causing more processing, or perhaps there’s much more data in the system. Similarly, if we see our average hard disk request latency jump from 11ms to 23ms, we can look at the disk storage system to see if it is coping with a greater number of requests – or perhaps a drive has failed in a RAID array and hasn’t been noticed, so the system is coping with fewer drive arms.
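
As a rough illustration of what a baseline capture can look like, here’s a small C# sketch that appends a timestamped row of counter values to a CSV file. The counters and the file path are just examples; the idea is to schedule something like it and keep the history.

using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class BaselineSnapshot
{
    static void Main()
    {
        // A handful of the counters discussed above; extend with whatever matters to your system.
        PerformanceCounter[] counters =
        {
            new PerformanceCounter("Processor",    "% Processor Time",    "_Total"),
            new PerformanceCounter("Memory",       "Available MBytes",    string.Empty),
            new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Read",  "_Total"),
            new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Write", "_Total")
        };

        foreach (PerformanceCounter c in counters) c.NextValue();   // prime the rate counters
        Thread.Sleep(5000);

        // Append one timestamped row per run; schedule this and keep the history.
        string row = DateTime.Now.ToString("s");
        foreach (PerformanceCounter c in counters) row += "," + c.NextValue().ToString("F4");

        Directory.CreateDirectory(@"C:\PerfBaselines");              // illustrative path
        File.AppendAllText(@"C:\PerfBaselines\baseline.csv", row + Environment.NewLine);
    }
}

The value isn’t any single snapshot; it’s having several of them from when things were healthy to compare against when they aren’t.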

Vertical vs. Horizontal Scaling

There was a day when we were all chasing vertical scaling to deal with our scalability problems. We were looking for faster disks, more memory, and faster CPUs. We wanted to take the one box that we had and make it faster – fast enough to support the load we wanted to put on it. Somewhere along the way we became aware that no box – no matter how large – could survive the onslaught of a million web users. We decided – rightfully so – that we couldn’t build one monolithic box that would handle everything, but instead had to start building a small army of boxes willing to support the single cause of keeping a web site operational. However, as we’ve already stated, there are new problems that emerge as you try to scale horizontally.

First, it’s possible to scale horizontally inside of a computer – we do this all the time. Multiple CPUs with multiple cores and multiple disk arms all work towards a goal. We already know how this works – and what the problems are. At some point we hit a wall with processor speeds; we could only get the electrons to cycle so quickly. We had to start packing CPUs with more and more cores. Which leads us to the problem of latency and throughput.

The idea is that you can do anything instantly if you have enough throughput. However, that doesn’t take into account latency – the minimum amount of time a single operation takes. Consider the simple fact that nine women can have nine babies in nine months; however, no woman can have a baby in one month. The latency of the transaction – having a baby – is nine months. No amount of additional women (or prompting by parents) will shorten the nine months it takes to have one. Overall throughput can be increased by adding more women, but the inherent latency is just that – inherent. When we scale horizontally we confront this problem.

A 2GHz CPU may complete a thread of execution twice as fast (more or less) as a 1GHz CPU – however, a two-core 2GHz CPU won’t necessarily complete the same task any faster – unless the task can be split into multiple threads of execution. In that case we may see some – or dramatic – improvement in the performance. Our greatest enemy when considering horizontal scaling (inside a single computer or across multiple computers) is the problem of a single large task. The good news is that most of the challenges we face in computing today are not one single unbreakable task. Most tasks are inherently many smaller tasks – or can be reconstructed that way with a bit of work. Certainly when we consider the role of a web server we realize that it’s rarely one user consuming all the resources. Rather it’s the aggregation of all of the users that causes the concern.

The good news is that horizontal scaling is very effective for problems that are “death by a thousand cuts” – that is, problems which can be broken down, distributed, and processed in discrete chunks. In fact, breaking work down is the process that is most central to horizontal scaling.

Horizontal Scaling as Breaking Things Down

My favorite example of horizontal scaling is massively distributed systems. A project called SETI@Home leverages “spare” computing cycles from thousands of computers to process radio telescope data looking for intelligent life beyond our planet. It used to be that the SETI project (Search for Extra-Terrestrial Intelligence) used expensive supercomputers to process the data from the radio telescopes. The process was converted so that “normal” computers could process chunks of the data and return the results to centralized servers. There are appropriate safeguards built into the system to prevent folks from falsifying the data.

Putting on my developer hat for a moment, there are some parts of a problem that simply can’t be broken down, and far more where it simply isn’t cost effective or efficient to even try. Take, for instance, a recent project where I needed to build a list of the things to process. Building the list took roughly 30 seconds in production. The rest of the process, performing the migration, took roughly 40 hours if processed in a single thread. I didn’t have that much time for the migration. So we created a pool of ten threads that each pulled one work item at a time out of the work list and worked on it (with the appropriate locking). The main thread of the application simply waited for the ten threads to complete. It wasn’t done in 4 hours – but it was done in 8 hours, well within the required window – because I was able to break the task into small chunks, one item at a time.
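
The sketch below is roughly the shape of that migration loop, recast with tasks and a concurrent queue so the locking comes for free. The method names are made up for the example.

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class MigrationRunner
{
    static void Main()
    {
        // Build the work list once (the ~30 second step), then let the pool chew through it.
        ConcurrentQueue<string> workItems = new ConcurrentQueue<string>(LoadItemsToMigrate());

        // Ten workers, each pulling one item at a time; the queue handles the locking.
        Task[] workers = Enumerable.Range(0, 10).Select(_ => Task.Run(() =>
        {
            string item;
            while (workItems.TryDequeue(out item))
                MigrateItem(item);
        })).ToArray();

        // The main thread simply waits for the ten workers to complete.
        Task.WaitAll(workers);
    }

    // Placeholders for the real work - the names are made up for this sketch.
    static string[] LoadItemsToMigrate() { return new[] { "item1", "item2", "item3" }; }
    static void MigrateItem(string item) { Console.WriteLine("Migrating " + item); }
}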

Admittedly without some development help breaking the workload down isn’t going to be easy – but anytime I’m faced with what folks tell me is a vertical scaling problem – where we need one bigger box – I have to immediately think about how to scale it horizontally.

Call to Action

If you’re facing scalability or performance challenges… challenge your thinking about how the problem is put together and what you can do to break the problem down.

Honest Evaluation

It’s time for another installment of the Nine Keys to SharePoint Success. It’s been nearly nine months since I wrote the original blog post that defined the nine key things that your SharePoint implementation should have to be successful. This is key number five and the first in the area of cultural change for the organization. This post has been by far the hardest for me to write. We’re moving into areas which are a bit more sensitive to the organization – in part because they echo how we exist as humans.

The psychological behaviorists argued that introspection should be abandoned as a psychological technique in favor of things which were objective and measurable. While I support the idea that we cannot rely solely on introspection to help us understand ourselves, I also believe that we should use introspection to understand more about ourselves. Alcoholics Anonymous kicked off a 12 step program movement which has resonated around the globe as a model for treating all kinds of addictive behaviors. Step four is “Made a searching and fearless moral inventory of ourselves.” This is clearly introspection – as objective as our egos will allow. The irony is that, while the behaviorists wouldn’t like introspection, the 12 step technique succeeds in changing behavior in part through a heavy dose of introspection. Here, as in a 12 step program, our goal is to change behavior – that is, we want to ensure that we’re doing the behaviors that lead to better SharePoint outcomes.

Those who have been in and around 12 step programs will tell you that step 4, the introspection, is one of the most difficult steps in the program. Our ego actively protects us from admitting our faults. (As Daniel Gilbert said in Stumbling on Happiness, it’s our psychological immune system.) Every person has faults. I have faults and limitations. You have weaknesses and opportunities for growth. Your best friend struggles with something. Your mother sometimes fails. Your father doesn’t always succeed. Despite this we all fall into the trap of trying to hide our faults. We feel like we need to portray an image of perfection to the outside world. Those with a Christian church background may have heard Romans 3:23 – “for all have sinned and fall short of the glory of God.” You may notice that the word is “all,” not “some.” We all have our limitations. None of us is perfect. It’s just as true that we all want to hide these faults.

If you’re still with me, you may be asking how our personal hang-ups impact the organization. Well, we reflect (or project) our personal beliefs onto the organization. Jonathan Haidt in The Happiness Hypothesis wrote about our propensity to personalize input. Instead of accepting it as a statement, we internalize it as a character flaw. The key here is that a character flaw is much harder to solve than a simple weakness. I say simple weakness because weaknesses can be simple to resolve. I do not have perfect vision, and neither do three quarters of the population. We don’t think of ourselves as being fundamentally and irretrievably flawed because we can’t see perfectly. Neither does someone walking down the street say “Look at that poor person, they have to wear glasses.” If you’re one of the few people who don’t need vision correction, I’m sure you’ve got another equally solvable limitation.

When we talk about gaps in the organization – whether they’re simply structural observations about missing skills or an inventory of projects that have gone wrong – we have a strong tendency to make those problems about us. We believe that we failed in creating a corporate structure without gaps – as if that were possible. We believe that we should have somehow known exactly how and when each project would fail and should have prevented it. On the one hand we will violently defend our weakness, and on the other hand, internally, we’re all too happy to heap on the blame.

The sinister part of our resistance to acknowledge and accept our limitations is that this is the core of how we hold ourselves back in life. If your struggle is anger management you may find that you spend a great deal of time and money on drywall repair. If you’re bad at balancing a checkbook or paying your bills, you’ll spend more money on overdraft and late fees. An anger management class would be cheaper than constantly repairing drywall. Some assistance with a system for balancing a checkbook and paying bills on time would save much more than you spend.

At an organizational level we’re positively lousy at the same sorts of introspection that we struggle with personally. To admit that we have a flaw or a gap in the organization means that we’re somehow personally flawed as well. We personalize it: a defect means that someone isn’t doing their job – even if no one has that job to do.

Consider the situation where an organization is attempting to implement an Enterprise Content Management (ECM) system. They’ve never had an ECM system and they don’t have anyone in the organization who knows how to create the structure for the metadata for the files. There is no experience with creating an information architecture, and not surprisingly, when the solution is implemented, the poor information architecture results in lower adoption. Who is to “blame” in this situation? No one has the skill – it may be that no one even realized the skill was necessary. In a typical post mortem (use “post action review” if that’s more comfortable for you) of the project, it’s identified that information architecture was the problem – and yet someone will likely get tagged as the person who should have done it better.

This drive to identify a person to blame for a weakness leads the person in the hot seat to create excuses. What else can they do? Everyone is looking for a scapegoat and they really don’t want to be it. Like children crying out “Not it!” immediately after someone proposes an unpleasant role, we try to keep from being the center of the problem.

Blame and Fault Finding

There’s a radical difference between getting to the root of a problem – to figure out what happened or what went wrong – and trying to figure out who was wrong and who is at fault. Honest evaluation is about finding the root of the problems and an awareness of what the organization is good at – and is not good at.

There are numerous reasons, rooted in human nature and psychology, that lead us immediately to deflect blame from ourselves and onto other people. The simple truth is that if a system fails, then we’re at fault (we’re to blame) for the failure – a failure to realize that the system would fail. However, if the problem is another person, we feel we’re off the hook – because we’re not responsible for other people. Despite our lofty talk about finding problems with systems, we often revert to finding problems with people.

Consider for a moment the 5 Whys technique, which is used for root cause analysis. How could using a time-honored approach and asking five simple – one-word – questions lead to problems? The answer lies in the perceived tone of the question. Questions based on ‘Why’ are often accusatory – thereby leading to a defensive response containing deflection, excuses, and often confusion. Even when a why question is asked with a heart of curiosity, it can be heard in an accusatory tone. This is particularly evident when the person has experienced accusatory people in similar roles or has had a significant personal relationship with an accusatory bias (e.g., a parent or spouse). Business analysts – who are responsible for eliciting requirements and matching solutions to them – are often encouraged to never ask ‘Why’ questions.

As I mentioned in my review of Diffusion of Innovations, there’s a tendency, even when viewing innovations, to want to blame the people instead of the technology – as was the case in the design of car safety. By explaining away the problem as “idiots behind the wheel,” we failed to look at how we could design systems to improve the chances for success.

Ultimately, blaming or finding fault in a person doesn’t help, because every human is imperfect. Even if you remove that person from the system, the next human will make mistakes too – and maybe different mistakes that are more difficult to detect. There’s an assumption that humans will be perfect, instead of an assumption that people are necessarily flawed and that the systems we put around them should expect and accommodate this.

It isn’t that a particular person does – or does not – have flaws. We all have flaws. The better question, the one that leads to a better evaluation, is which components of the system are insufficient to ensure the success of a project. If you recognize a gap in requirements gathering, project management, information architecture, development, or infrastructure, why not supplement your team with the additional skills you need – up front – instead of suffering through a failing implementation? (The old saying that a stitch in time saves nine is appropriate here.)

Ownership and Acceptance

There are two key skills that will defuse the blame and fault finding. They will help evaporate the personalization. The first is simply owning a problem. This sounds counterintuitive – and it is – however, it’s amazingly effective at stopping a “witch hunt.” When you own the problem – when you admit that somewhere in your area of responsibility there is in fact a problem and you’re aware of it – you stop others from trying to prove to you that you have a problem.

It may sound like I’m saying that you should personalize the problem and own it. That is partially correct. I’m suggesting that you own it. You admit that it was something you could have done better. However, personalization is about making it about who you are – I’m not suggesting that. I’m suggesting that you can be imperfect and make a mistake without it being a character flaw. If I am late for a meeting it doesn’t mean that I can’t tell time.

The other skill is the one you need to be able to own the problems. That is acceptance. Accepting that you can make a mistake – and that all people make mistakes is incredibly freeing. When you accept this fundamental truth, you are more willing to own your part in causing bad outcomes – and you’re more willing to accept it from others.

Working at Getting Help

In an economy we have to make decisions about what we will and won’t do. We don’t get to make our decisions in a vacuum. We have to make our decisions on where to invest our efforts based on the greatest benefit. However, sometimes we make our investments based on what we know to invest in – rather than those things which are hard to invest in.

If you’re used to investing in supplementing your infrastructure skills with outside resources, chances are that you’ll continue to do this. You already know who to talk to. You know the vendors, the rates, and the issues. Making an investment in an area that you’ve never considered before – like information architecture – is challenging because you don’t know who to call nor do you know what to expect. What does it take to create an information architecture? What exercises will you go through? How much time will it take?

Interestingly if you’re familiar with infrastructure and even if you’ve outsourced it for years, you will have built up an awareness of what needs to happen. You know about the planning and the testing and the day-of activities. Because you’ve been a part of successful projects – run by outside parties – you’ve become more able to successfully implement something that may have previously been difficult. However, nothing is as easy as letting someone else manage it – unless allowing someone to manage the infrastructure part of the problem means that you won’t be able to manage some other part of the project.

Honest evaluation is – in part – a revisiting of the evaluations that you’ve made in the past. There was a time when the most cost-effective way to put together a flyer was to hire a printing company to create it for you; they owned specialized and expensive desktop publishing software. Today, with Microsoft Word and Microsoft Publisher, most of us wouldn’t think of asking a printer to create a simple flyer. Either you’ll ask a design firm for something important – or you’ll do it yourself if it’s simple. We used to send out to have color copies made; now most of our copiers – now called multi-function devices – can print in color. In my office, I have a sophisticated DVD duplicator which will print directly on the surface of the DVD after it’s burned. There was a time – not so long ago – when you would pay a specialized company to burn 500 DVDs and you would warehouse those that you didn’t sell. Today we print up batches of 20 or 50 and sell out of our inventory until we make more. Also, because of the DVD production, we have a professional paper cutter which is capable of cutting hundreds of sheets of paper simultaneously, so we don’t even send our DVD covers out to be cut.

Honest evaluation is knowing where the inflection point is between having had something outsourced and when it’s time to get the solution accomplished differently.

Pre-mortem

Most folks are familiar with after-incident reviews and post-project reviews called post mortems. Those reviews, however, happen after the event. The review may be helpful for the next event (or project) but it does little for the existing one – and, as was pointed out in Lost Knowledge, a “lack of absorptive capacity” may make it difficult for the knowledge to be used on the next project. So a useful exercise is a pre-mortem, as recommended in Sources of Power. This exercise requires that the participants accept that the project has failed at some future time and work to determine why. The goal is to turn over every stone that could have led to the failure of the project. The premise that the project MUST have failed for the purposes of the exercise, and that the goal is to identify the most likely candidates for the failure, often exposes many things which may not otherwise have been thought of. (Psychologically speaking, it’s difficult for humans to identify gaps in existing plans. Focusing on the idea that there was a failure forces you to look for them differently.)

Facts not Feelings

One of the other keys to getting to an honest evaluation is staying focused on the facts and not the feelings. It’s one thing to say that “I think we manage projects pretty well around here.” That’s a feeling. What does the data actually say? How many projects come in on-time and on-budget? If you’re like most organizations, not many projects come in on-time and on budget. However, knowing this and examining the reasons – which is frequently changing requirements or bad requirements – gives you a place to look when doing your pre-mortem. It might be easy to say that I’m speaking out of both sides of my mouth here – saying that post mortems aren’t valuable and then suggesting that you use the output of previous post mortems as feedback to the pre-mortem process. However, that’s not really what I’m saying.

What I’m saying is that you should absolutely do post-mortems – however, you should recognize that you may not be able to get the value out of them – unless you use them as feedback into a process of actively evaluating the next project.

Look Outside

Developers shouldn’t test their own code. Authors shouldn’t edit their own work. Why? Because the same cognitive processing that led to the fault will lead to not seeing the fault when you do any sort of review. The value of an outside perspective is that it gives you a different way to process your situation. In Thinking, Fast and Slow, Kahneman spends a great deal of time talking about the two “systems” going on in our heads and how valuable taking an outside view can be. He notes that embedded experts often fall into groupthink and fail to process what they know to be true because of the context of the question. He also discusses at length the problem of WYSIATI – What You See Is All There Is. This bias is based on the assumption that you have no blind spots – which, in truth, all of us have.

Pulling in outside resources brings in new views and perspectives that change what the group sees. Whether it’s bringing together non-competitive peers or bringing in an experienced consultant, the different perspectives will lead you to a different set of challenges – and different ways of evaluating whether your organization has the skills and talents necessary to be successful.

Putting it Together

Honest evaluation is by no means easy. It’s by no means automatic. However, without truly assessing your strengths and your weaknesses, how can you possibly expect that you’ll be successful at a SharePoint project which is so complex and difficult to get right?

Appearance: Run as Radio – Robert Bogue Makes Ten Mistakes with SharePoint!

I’m pleased to share that last week I got a chance to sit down with Richard Campbell face-to-face here in Indianapolis and record an episode of Run As Radio which was creatively titled “Robert Bogue Makes Ten Mistakes with SharePoint!“. Check it out and tell me what you think of the conversation. We got a chance to talk through the 10 most common non-SharePoint technical mistakes that people make – when setting up SharePoint. Oh, and we got off topic about things like load balancers and load/scalability testing.

Including TypeScript in a SharePoint Project

If you missed it, Microsoft announced a new language called TypeScript that is a superset of JavaScript and compiles down into regular JavaScript. The compiler is itself written in JavaScript and works everywhere. The big benefit of the language is type-checking at compile/edit time. If you want to learn more, go to http://www.TypeScriptLang.org/.

There is some rather good tooling and documentation, but one problem for me was making it work from inside my SharePoint projects after I installed TypeScript. The way that the SharePoint tools run, they do the deployment before the TypeScript compiler runs. That’s not exactly helpful; however, you can fix it. First, right-click your project in Visual Studio and unload it.

Next, right-click it again and edit the project file (TypeScriptFarmTest.csproj).

Then you need to modify your Project node to include a new InitialTargets attribute pointing to TypeScriptCompile:

<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="TypeScriptCompile">

Then you’ll need to insert a new Target node inside the Project node:

<Target Name="TypeScriptCompile" BeforeTargets="Build">
  <Message Text="Compiling TypeScript…" />
  <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript\0.8.0.0\tsc&quot; -target ES5 @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
</Target>

From here, save the file, close it, right-click the project, and select Reload Project. Now all your TypeScript will compile into JavaScript before the deployment happens – actually before anything else happens, because the Project node’s InitialTargets attribute tells Visual Studio and MSBuild to run this target first.

SharePoint Profile Memberships in 2010

There was a lot of talk about how the User Profile memberships in SharePoint 2007 worked. The net effect was that the memberships were stored in a profile property MemberOf – internally SPS-MemberOf. This was driven by a timer job whose name ended with ‘User Profile to SharePoint Full Synchronization’. However, this changed slightly in 2010 – and the change mattered. In SharePoint 2010 the memberships got their own Memberships collection property off the UserProfile object, a full collection for storing the values rather than a profile property.

This didn’t change the requirement that, to be listed, the user had to be a member of the Members group of the site – not the Owners group, only Members. So the list still has its issues from a usability perspective. The memberships for a user show up in the profile in two places: Memberships and Content. Memberships shows a vertical listing of sites. Content lets you navigate across a horizontal bar of sites and libraries.

In all honesty, I generally recommend that organizations replace the Memberships and Content functionality of a my site with a library in the user’s my site that contains links to the sites they have permissions to. I’ve done this various ways – including opt-in by the user from the personal menu – but no matter how it’s done, we invariably find that users understand the list better when they’re managing it themselves than when it’s being driven by their permissions to the sites. However, in this case, the client didn’t consider replacing these and they were up against their deadline for implementing the intranet.

The client was reporting that the memberships in their user profiles were out of date. People were seeing sites that no longer existed, and they were getting access denied when trying to reach some of the sites, either through the Memberships tab or through the Content tab. Upon digging, I found that they had split their farm from a single global 2007 environment into a regionally deployed 2010 environment, and the 2010 environment had migrated the global 2007 profile service. The net effect was that they inherited the memberships for all of the sites across the globe – but the User Profile Service was only updating memberships for the URLs that the local farm owned. So there were numerous memberships for non-local SharePoint sites that were no longer being updated.

I should say that there are plenty of longer-term answers to the problem of managing a single user profile and its memberships across a global organization, but for right now they decided they wanted to remove all of the memberships so the list wouldn’t be wrong. As it turns out, the code to do this is relatively simple:

// Requires references to Microsoft.SharePoint and Microsoft.Office.Server.UserProfiles.
SPServiceContext svcCtx = SPServiceContext.GetContext(
    SPServiceApplicationProxyGroup.Default, SPSiteSubscriptionIdentifier.Default);
UserProfileManager upm = new UserProfileManager(svcCtx);

// Walk every profile and clear out its memberships collection.
foreach (UserProfile user in upm)
{
    MembershipManager mm = user.Memberships;
    mm.DeleteAll();
}

Run the code above and all of the memberships will disappear. If you then run the synchronization timer job, it will re-add memberships for the sites in the local farm.

This took an amazingly large amount of time to track down given the relative simplicity of the final answer.
