Web Hacker Boot Camp

Book Review-Web Hacker Boot Camp

“What evil things lurk inside the hearts and minds of men…”

When I was still in high school – many moons ago – I remember a programming competition. We were supposed to create a program that calculated the length of a line in a triangle (or some such trivial problem). To test your application once you said it was done, the judges would often enter a negative number for the length of a line. On a whim we had put a trap in for that particular condition while building the program. It was this test that helped us complete the challenge.

Testing our code for bad values seems commonplace today. We test for values out of range, add validation controls, and do all sorts of things that lead us toward more robust solutions. However, despite all of our advances in validating user input, we as an industry still struggle with security issues in our applications.

While the news may be focused on the next new exploit of Windows – because that has mass appeal – we are still finding that our applications have their own flaws that can be exploited just as easily as a flaw in Windows. Actually, much of the time the mistakes in our applications are far easier to uncover.

Web Hacker Boot Camp is a journey through the mind of a hacker. It reveals how hackers do their job and what their techniques are. It stops short of telling you precisely what to do to make your application secure, but it certainly provides you with the information you need to think about to make the application secure.

There’s good content in this book if you’re interested in how to fortify your application against attack. The only downside is that you’ll have to look past some editing and layout issues to get the information out of the book.

If you want to discover how you could lose control of your web servers because of flaws in your application code, Web Hacker Boot Camp is definitely a book to get.

forge

Stupid ASP.NET Tricks (Don’t try these at home)

I’ve been struggling with ASP.NET lately and wanted to offer up two things not to do …

1) Try to dynamically load another instance of the same user control from within a user control — ASP.NET gets very confused and starts losing track of your controls in inexplicable ways.  All of a sudden the label in the partially defined class will unexpectedly be null.  (By the way, I was trying to draw “sublines” in a shopping cart — things like free accessories I wanted to stay with the parent line.)

2) Load dynamic controls in a repeater — For some reason still unknown to me, if you dynamically add controls to a repeater (such as user controls into table rows), the postback values never come back … and the controls aren’t represented in the control tree.  I could just be missing something, but I’ve decided it’s a bad idea…
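
For what it’s worth, the usual fix I’ve seen suggested since (a sketch, not something I’ve verified in every case — the repeater name and control are made up) is to re-create the dynamic controls on every request, early in the lifecycle, with stable IDs, so postback data has something to bind to:

```csharp
// Sketch: dynamically added controls only receive their postback values if
// they are re-created on EVERY request (including postbacks), before postback
// data is processed, and with the same IDs each time.
protected void Repeater1_ItemCreated(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item ||
        e.Item.ItemType == ListItemType.AlternatingItem)
    {
        TextBox qty = new TextBox();
        qty.ID = "qty";  // stable ID -- must match between requests
        e.Item.Controls.Add(qty);
    }
}
```

ItemCreated fires during postback processing as well as on the initial load, which is why it’s a safer hook for this than ItemDataBound, which only fires when you call DataBind.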

forge

Article: .NET Offers a “First Chance” to Squelch Performance-killing Hidden Exceptions

Processing exceptions in any language is expensive. The process of capturing an exception and rolling it into a package that can be processed by code requires a stack walk—basically looking back through the history of what has happened—and that requires a lot of processing time to complete. In .NET software development we sometimes talk about boxing and unboxing of variables and how that can consume a great deal of time in a program—and it can. However, the amount of time necessary to process an exception dwarfs the time necessary to box and unbox a variable. (If you want to know more about boxing and unboxing you can read about it in the article “Boxing and Unboxing of Value Types in C#” on CodeGuru.)
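
As a quick refresher (my sketch, not code from the article), boxing copies a value type onto the heap and unboxing copies it back out:

```csharp
int i = 42;
object boxed = i;     // boxing: the int is wrapped in a heap-allocated object
int j = (int)boxed;   // unboxing: the value is copied back out of the object
// Each round trip allocates -- cheap individually, costly in a tight loop.
```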

Despite the fact that exceptions are expensive to process, they are positively a great thing for debugging applications and making software more reliable. The challenge comes when exceptions are used incorrectly; specifically, when they are used to respond to normal conditions instead of exceptional conditions. Generally when this happens, the layers of software above the layer generating the exception start “eating” the exception—because nothing is wrong (more on eating exceptions in a moment)—and manually return to normal program flow. This, however, leads to subtle performance issues that are difficult to find. Running a profiler on the code can show where performance is being impacted; however, there’s a quicker and simpler way to find performance problems like these—or rule out exceptions as a potential cause of performance issues.

Using first chance exceptions in the debugger is a way to ferret out these hidden exceptions that your application is generating so that you can go back and prevent them from happening in the first place. The more exceptions you prevent, the faster your application will run. First chance exceptions are exposed via Visual Studio and can be leveraged in any language that compiles to the CLR.
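
To make the point concrete, here’s the classic pattern (my sketch, not code from the article) where an exception stands in for an ordinary “bad input” condition, alongside the exception-free alternative:

```csharp
string userInput = "not a number";  // hypothetical input

// Exception as control flow: every bad input raises a FormatException,
// which surfaces as a first chance exception in the debugger
// (Debug > Exceptions in Visual Studio).
int value;
try { value = int.Parse(userInput); }
catch (FormatException) { value = 0; }

// Exception-free: TryParse reports failure via its return value,
// with no exception and no stack walk.
if (!int.TryParse(userInput, out value))
    value = 0;
```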

http://www.devx.com/dotnet/Article/31414/

forge

Article: Not just a buzz word, virtualization technology can improve your app development

It used to be that we talked about DLL hell. This was a place where everyone who was evil enough to try to run too many applications on their computer ended up. One day you would install a new application and you would be instantly transported into DLL hell. You would try replacing one DLL after another. You would move DLLs to individual application directories. In the end you would end up feeling as if you were just shuffling the DLLs, trading one problem for another.

While we’ve made some progress from the days when DLL hell was inflicted on most developers at some time or another, we have not fundamentally resolved all of the issues that are caused by trying to take one machine and get it to serve multiple masters. A more current day challenge might be trying to run multiple versions of Windows on one hard drive, or Linux and Windows side by side on multiple partitions. No matter where we are in the technology lifecycle, we’ve seen how we can try to get one system to do too many things and have had to live with the challenges. This is where virtualization technology comes to the rescue.

No longer do you have to mix two applications that don’t get along just because you only have one PC on which to work. Virtualization technologies allow you to run multiple independent operating systems that can be completely isolated from other processes running on your PC.

http://www.techrepublic.com/article/not-just-a-buzz-word-virtualization-technology-can-improve-your-app-development/

forge

What is Architecture?

Arno Nel asked the question on his blog “What is Architecture?” This is my response…
I have a whole series on the various roles in software development on Developer.com, the software architect one is at http://www.developer.com/mgmt/article.php/3504496.
However, my thoughts on your statements are …
  1. On Architecture is 50% business, 50% tech — I’d say that architecture is one part art (elegance, simplicity), one part facilitated visioning (getting a common idea of what is going on), and one part conversion (converting requirements into design).  While it’s true that business and technology are both required, this point of view shortchanges the important aspects.  Knowing how to facilitate people coming together, knowing how to convert requirements into design, and knowing how to maintain simplicity in the face of complexity are much more important perspectives.
  2. On The Architect doesn’t decide whether or not to use DataGrids – I don’t know that I can agree with this statement in every case.  It depends upon what the solution is and what is necessary.  She may decide that DataGrids are critical to some reusability that is desired and could specify them.  However, in most cases, you’re right. In most cases there shouldn’t be any reason why an architect would specify the use of DataGrids or not.  Think of it this way: a traditional building architect doesn’t generally specify the supplier for a bolt – only how strong it must be.
  3. On The architect doesn’t care about coding standards – I heartily disagree here.  He doesn’t care WHAT is in the coding standards, but he does care that they exist.  In the same way, a building architect doesn’t care what kind of light fixtures are used in the building but does care that they all match (consistency) or coordinate (are complementary).
  4. On The architect doesn’t care about reading UML – Here too I heartily disagree.  I’ll grant that the architect won’t read all the UML; however, I believe the architect should do “drive bys” of what is being constructed.  That includes reviews of the UML and any other supporting documentation being used to construct the actual solution.  If the point is that they don’t care whether it’s UML or something else – I disagree again.  The architect’s time is overcommitted.  If the architect understands UML, then UML should be used to facilitate communication (reduce the cost) and to make communications more effective.
  5. On The architect doesn’t care about Agile/RUP/MSF etc. – Again, I disagree.  The architect needs to understand the process being used in order to insert himself into it in a way that is both timely and respectful of the process.  Choosing an approach he is familiar with is important.  Also, he may decide that to achieve the objectives one may work better than another.  For instance, an agile approach will work better with poorly understood requirements, RUP will work better with risky components to the project, etc.
  6. On The Architect translates IT to Business – While I agree that the statement is true, I believe it misses the fundamental truth that an architect guides a conversion process from raw data into a solution.  Also, I’d argue that the statement, even in its form here, should be reversed.  Architects help to translate business problems into IT solutions (not IT into business).
I think an interesting question is how does a solutions architect differ from a building architect?
forge

Article: Get out of the information technology reactionary rut

This document explores what being proactive is when it comes to IT, why it’s important, and how to get your group back on track when it strays.

We all start out trying to be proactive. We plan to control our lives. We make the plans and somewhere during the execution phase we get off track. We run into some unplanned snag or snarl. We slip into a reactionary mode to address the problem. We try to get back into our planned, proactive mode of operation but sometimes the next issue comes along and we’re off to react to it.

Although some areas of IT, a help desk for instance, would seem incapable of escaping the reactionary rut, the truth is that we can all influence our areas toward more proactive, and therefore less frustrating, modes of operation. Let’s explore what being proactive is, why it’s important, and how to get your group back on track.

Why is proactive better?

While I firmly believe that control is an illusion, it’s a useful one. Assuming a proactive stance to try to control, or at least influence, the future in a positive direction is effective at reducing the overall workload. That is in addition to reducing the stress caused by unforeseen circumstances. While being proactive, like anything else, can be taken to an extreme, in most cases proactive time spent preparing for a problem, developing an approach, or understanding the environment is immensely powerful in terms of its ability to save time in the future.

http://www.techrepublic.com/article/get-out-of-the-information-technology-reactionary-rut/

forge

Article: Code Access Security: When Role-based Security Isn’t Enough

Role-based security works pretty well in most situations, but as SharePoint developers learned long ago, it doesn’t work for everything. Now that .NET supports Web parts, even more developers will find they need a basic understanding of Microsoft’s Code Access Security.

Ask any typical .NET developer about Code Access Security (CAS) and you’ve got a good chance of hearing “Huh?” in response. Most developers haven’t run into CAS at all—let alone in a way that would cause them to develop a deep understanding of it.

Ask your typical SharePoint developer about CAS and they’re likely to begin to shudder uncontrollably. Why is that? Well, SharePoint developers have been dealing with CAS since the day that SharePoint was released. Unlike ASP.NET, which makes the assumption of full trust—effectively neutralizing any impact that CAS will have on a standard .NET application—SharePoint starts with a minimal trust, which means most code will need to have a CAS policy applied to it in order to work.
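
To give a flavor of what “a CAS policy applied to it” means in code (a sketch with a made-up class and file path, not SharePoint-specific guidance): under partial trust, code that touches a protected resource must have been granted the corresponding permission, and you can demand it declaratively:

```csharp
using System.Security.Permissions;

public class SettingsReader
{
    // Declarative demand: before the method runs, the CLR walks the stack to
    // verify every caller has FileIOPermission for this (hypothetical) path.
    // Under SharePoint's minimal trust, this fails with a SecurityException
    // unless a CAS policy grants the permission to the assembly.
    [FileIOPermission(SecurityAction.Demand, Read = @"C:\inetpub\settings.xml")]
    public string LoadSettings()
    {
        return System.IO.File.ReadAllText(@"C:\inetpub\settings.xml");
    }
}
```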

http://www.devx.com/security/Article/31259/

forge

Article: Microsoft Windows SharePoint Services or Microsoft Office SharePoint Portal Server?

Most organizations, when they initially start looking at SharePoint, are confused about what SharePoint is. They struggle to understand the features and what they need to solve their business problems. It doesn’t take long for the conversation to turn to a discussion of whether the organization needs to take on the additional cost of Microsoft Office SharePoint Portal Server (SPS), or whether the Windows SharePoint Services (WSS) already bundled with their operating system licenses is sufficient. Most of the time, the decision isn’t black and white. This article discusses the questions you should be asking before you make a decision.

http://mssharepoint.advisorguide.com/doc/17866 [Article removed]

forge

Article: Use your star as a catalyst for productivity by amplifying the halo effect

To get the most out of your star developers and other team members, you’ll want to amplify the halo effect. In this document you’ll learn how to identify the halo effect, understand its potential, amplify its effect, and convert it into a permanent change even after the star developer has left.

Adding a star developer or architect—from here on we’ll just call them a “star”—to the team does more than improve your organization’s productivity by the amount that star can do. A halo effect often occurs where the existing developers on your team begin to improve their effectiveness too. This halo effect can be either amplified so that nearly every person on the team becomes better, or it can be isolated so that few, if any, people are affected by it.

Understanding the halo effect

A friend of mine and I were talking before her wedding a few years back. It was important to her to be able to do ballroom dancing for her first dance with her new husband. She had been taking dance lessons, and she related that when she danced with her instructor, her dancing was much better than when she danced with her fiancé, and she didn’t understand why. When she asked her instructor about it, he told her that she was able to dance better because his leading was better: with strong leadership she was able to follow clearly. Her future husband was a good, but not professional, dancer, so his leading wasn’t as strong or assured. Although their first dance turned out fine, she learned firsthand how a “star” can artificially increase the effectiveness of her dancing.

http://www.techrepublic.com/article/use-your-star-as-a-catalyst-for-productivity-by-amplifying-the-halo-effect/

forge

Creating SharePoint Solutions Is About Designing for Happy Thoughts

Note: This was a full length article I wrote on speculation for a publisher who decided it wandered too much for them.  I’d really appreciate your feedback on whether you thought it was valuable or not.  Comments on the blog are disabled but you can email me your thoughts.  thanks — rlb
Playing the role of devil’s advocate is fun. You get to punch holes in even the most detailed and elaborate plans. You can ask fun questions like, “What happens if both your primary and secondary power sources fail?” Most developers and architects who have much experience are adept at playing the devil’s advocate role. It’s one that others have played in the past for us, and occasionally “Murphy” has intervened and demonstrated a few worst-case scenarios.

However, all of this planning for the worst possible case has a cost and, in many ways, a non-trivial cost. Instead of viewing what we can do and how things can solve problems, we get wrapped up in how someone might work around the system, create a problem, or get things out of whack. We begin to plan for the exception rather than planning for the rule.

SharePoint solutions inherently don’t do well with the point of view that everyone is going to try to break the system down, or make changes they aren’t supposed to make. SharePoint just doesn’t work well when you assume that the users are generally bad people. SharePoint is, instead, focused on enabling people to get things done and trusting that the people that you have given permission to will be responsible adults.

You need to understand the difference in mindset that facilitates successful, and inexpensive, SharePoint implementations as it compares to the way that most IT organizations are familiar with doing business. In this article we’ll break apart the problem for you and show you the attitude to take for success, and why to take it.

Traditional Software Development

When developing software for computers, we as an industry started with accounting packages and enterprise systems that were used to manage money and coordinate large groups of people. Computers were expensive, so they could be used only for those tasks which were considered mission critical. Because of the types of systems that were being built, we needed checks and balances, authorizations, and controls for who could do what and when. When you’re working with the ability to cut checks for millions of dollars, for instance, it’s a good idea to make sure someone’s authorized to cut that check.

The Internet and the rise of hackers (or crackers, if you prefer) has shed new light on the idea that you must be able to secure systems from the masses, even the masses that include your customers. The need for security on nearly all levels has become, unfortunately, a way of life. We have to accept that we must protect ourselves from the world at large.

The same way you might lock your doors in a bad or unfamiliar neighborhood to discourage a carjacking, in software development you make sure that all of your proverbial doors are locked, the ‘Ts’ are crossed, and the ‘Is’ are dotted. You certainly don’t want someone breaking into your system or accidentally doing something they’re not supposed to do.

Traditional software development is focused on managing risk, even small risks. Interfaces, even hidden ones, that might allow an individual to bypass some security or some business process are locked down so that no one can use them inappropriately. This happens whether the interface is critical to the system or not. Even if the interface simply logs a note to the account, it must be locked down, audited, and security tested.

Author’s Note: This is not an attempt to say that traditional development is wrong, invalid, or bad. It’s simply a statement of observation. When I do security application reviews, these are the things that I look for because that’s what the organization (and in some cases the government) wants to see.

SharePoint Development

In stark contrast to the kinds of financial-impact systems that started software development, and the hordes of “bad” people on the Internet, SharePoint is most frequently used by a group of people who know and trust one another. They are people who, even if they don’t work for the same team, often work for the same organization. More frequently than not, the people using SharePoint exist within the sphere of “a neck to choke” if they won’t play by the rules. Even if you can’t physically reach out and touch every person using the system, you generally know someone who can. You have some level of community responsibility with the people using the system.

There’s a higher degree of integrity offered by the closeness of the folks using a SharePoint solution, and a greater degree of community responsibility. SharePoint systems aren’t elaborate machines where each person doesn’t see the effects of his work. Instead, each member sees his part in the solution. In some cases, he feels a responsibility to be part of the solution, and not a part of the problem.

There is very little need for the controls, locks, and gates on which traditional development focuses. Sure, we all would like the ability to refine how the system works to enforce business rules; however, when developing for a small group, and especially a team, these controls are nice to have, rather than necessities.

Planning the Happy Path

Planning for the happy path is simple; however, it’s so simple that most IT professionals block it out of our minds. When spreadsheets came out on the market, they were rapidly adopted as a way for users to convert the data they had into the information they needed. Lotus 1-2-3 introduced a powerful macro language. Microsoft Excel extended this model (as Microsoft often does) and enhanced it into a true programming language. Even today many organizations rely in part on Excel to close the accounting records of their businesses. (We hope that the Sarbanes-Oxley Act is reducing this practice; however, by observation it is still all too popular.)

Microsoft Access spread like wildfire in the early 1990s. Users were suddenly able to take data sets too large to process in Excel and create decent looking reports, simple data entry screens, and – in general – simple solutions. Many organizations are still recovering from their addiction to Microsoft Access as well. They struggle with systems written by engineers who’ve long since moved on. They’re not sure precisely how the database works but they do know that it does work.

Evaluating Excel and Access, one can quickly surmise that neither tool is perfect. Certainly I wouldn’t be using words like “addiction” for a product without flaws. Excel’s lack of formal structure is a double-edged sword: the flexibility of the tool for solving a wide range of problems in turn increases the likelihood of a user mistake in data entry or an error in a formula. Excel lacks a good data entry interface, and has until recently been limited in the amount of data that it can manage.

Access has its own set of challenges. It has a tendency toward data corruption with large numbers of users. Performance can be slow, particularly across a WAN. The file-based mechanism of accessing the database means that some of these problems cannot be directly resolved. Most Access databases are developed by business users with a problem. Their organization’s IT department is overwhelmed, underfunded, and hard to reach. As a result, users have developed innumerable databases for their own use, for their department’s use, and in some cases for their enterprise’s use.

So why, if these applications have glaring problems, do they gain the kind of user acceptance that makes them so popular? The answer is that they can be made to solve a need. Excel is not perfect; the tool can be broken. It doesn’t have neat and clean data entry forms, and validating input is beyond most users. However, it allows quick and easy creation of solutions.

Herein lies the fundamental truth of SharePoint’s power. SharePoint may not do everything you want exactly the way that you want. However, with a little bit of manual work, some ingenuity, and a little bit of give, SharePoint can create very workable solutions to complex problems.

So, you can’t have item-level security. The solution is to create a library for each level of permission needed by individual groups of people for different types of documents. No one would say the approach is perfect, but it’s an approach that works.

This is the happy path. No one will ever get into an item and change a field that he’s not supposed to. You’re not worried about someone marking his task closed without the project manager having approved it. No, or little, consideration is given to the idea that someone on the team would change something they shouldn’t.

Of course, people do change things they shouldn’t, and there are occasionally problems with data being wrong; however, the number of times that people accidentally or purposefully change data that they shouldn’t is so small that building a complete system with all of the controls, locks, and gates just isn’t justified.

The happy path is just that: a happy path. It’s characterized by focusing on what you can get done. Potential problems are acknowledged, but their importance is minimized when compared against the benefits SharePoint can bring.

Falling Off the Happy Path

The ideal of everyone playing nicely is certainly the goal, but what about the situations when you can’t do that? What about when you must have field-level validation, which SharePoint doesn’t natively support? What if you have to secure individual data to prevent users from accessing things that they shouldn’t? The answer, however flippant it may sound, is, “Deal with what you must, mitigate what you can.”

In other words, if you absolutely have to do it, plan on finding a solution – one that will likely be an order of magnitude more expensive to create than doing it the “happy path” way. However, before you give up and hand over your checkbook, evaluate the real risk that you’re taking by not creating the kinds of restrictions that you’re considering.

Many times the requirement to provide field-level validation or to secure individual documents can be either mitigated or addressed by restructuring the problem.

Let’s explore this idea of field-level validation. Say the purpose is to restrict the assigned-to field to yourself or one of your direct reports. In other words, you should be limited to accepting tasks for yourself and those whose work you directly control. This makes sense, and could easily be written into a procedure manual. The probability that a user or manager would assign a task to someone outside of his group, even without the procedure, is low – why would they do something like that? The impact of the problem is likewise low; someone else will need to address the item, or it can be set to unassigned again. All in all, it’s probably not that important an item – certainly not one to put on the must-have list of enhancements for SharePoint.

Why shouldn’t it be on the must-have list? There is simply no way, short of creating your own database with its own web parts, its own administration, its own integration with Excel, and its own alerting system, to effectively trap every case where someone might try to input or update a record in a SharePoint list and force it through your validation. You can, as I describe in the article Master Advanced List Editing in SharePoint, change the user interface of the list edit page to perform the additional validation that you require; however, you cannot intercept the Excel datasheet view, so users can still enter data directly, bypassing your enhancements to the user interface. The net result is that adding these kinds of enhancements to a SharePoint site is substantially more expensive than simply creating procedures to support the somewhat open way that SharePoint approaches the problem.

One observation often made by clients who’ve deployed SharePoint is that the easy is very easy and the hard is very hard. There is little in between these two extremes.

Conclusion

Performing a cost-benefit evaluation on every feature that you want to implement may be impractical. However, being able to identify those features which may cost substantially more to implement than a nearly-as-good alternative is important to maximizing the return on your SharePoint implementation – or any implementation, for that matter. Consider the impact of embedding the business rule into your process rather than into the solution.