SharePoint Gets Search Analytics

Microsoft’s SharePoint Portal Server 2003 was sold into a large number of organizations based solely on the strength of the search tool. Organizations hungered for a way to find the data they had generated.

Structured data such as invoices, products, and shipments may have been easy to find in the applications designed for that data, but the growing mountain of documents seemed to make the unstructured information that you were looking for perpetually out of reach.

Search in SharePoint made significant progress in its ability to connect users with the unstructured information that they were seeking. But the effectiveness of searches depended upon the skill of the searcher and the alignment of the terms that the searcher used to the terms in the documents. The world of search analytics was still very foreign to most organizations. Thankfully, the next version of SharePoint Search with its focus on relevancy will also include reports that allow you to see the effectiveness of the searches users are executing.

Microsoft Office SharePoint Server 2007 is a part of the Office System and is set to debut sometime in late 2006 or early 2007. Microsoft Office SharePoint Server 2007 includes numerous enhancements designed to improve search relevance, Internet usage, content management scenarios, and many other features, which were shared this week at the SharePoint Conference in Bellevue, Wash.

In this article you’ll learn about the basics of search analytics, what you can do today to improve your search results, and what to expect in Microsoft Office SharePoint Server 2007.

http://www.intranetjournal.com/articles/200605/ij_05_18_06a.html

Move beyond mere source control to the problems source control doesn’t solve

If you’ve ever worked on a multi-person development project that didn’t use some sort of source control system, you’re probably painfully aware of what it feels like to lose some of your hard-earned work. Sure, you may be able to reproduce the work, but the nagging feeling that you’ve already made the investment to create it once remains.

Even if you’ve used source control systems you may have noticed that there are certain problems that a source control system doesn’t solve. Managing configuration files, consistency of file locations, standard coding practices, a check-in schedule, and a build process are all examples of application development problems that having a source control system with check-in and check-out won’t solve.

Article: When Less Is More: User Interface Reusability, Part 1

With increasing pressure on software development teams to deliver more, better, faster, it is no wonder that finding ways to capitalize on reusability is more important today than ever before. We all need to do more with less, and reuse is the panacea for doing more with less. Once the initial investment has been made, little or no effort must be consumed to use the work again.

While software development as an industry has focused on reuse through structured programming, object-oriented programming, service-oriented architecture, and several other techniques, little thought or effort has been applied to the process of finding opportunities for reuse in the user interface. More often than not, each application’s interface is seen as completely independent, having no need to be reused for any reason.

However, organizations of all sizes are finding needs to develop and reuse small components of applications in an effort to minimize costs, increase reliability, and allow for rapid changes to the platforms their technology lives on. This is why there is a need to create reusable user interface components. In this article, the first of two parts, we’ll explore why user interface reusability is important, how it’s been overlooked, and challenge you to develop a strategy for dealing with the need to make user interfaces reusable to reduce costs.

http://www.intranetjournal.com/articles/200605/ij_05_05_06a.html

Web Hacker Boot Camp

Book Review: Web Hacker Boot Camp

“What evil things lurk inside the hearts and minds of men… “

When I was still in high school – many moons ago – I remember a programming competition. We were supposed to create a program that calculated the length of a line in a triangle (or some such trivial problem). To test your application once you said it was done, the judges would often enter a negative number for the length of a line. On a whim, we had put in a trap for that particular condition while building the program. It was this test that helped us complete the challenge.

Putting tests for bad values into our code seems commonplace today. We test for values out of range, and we have validation controls and all sorts of things that lead us toward developing more robust solutions. However, despite all of our advances in the areas of validating user input, we as an industry still struggle with security issues in our applications.
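The negative-length trap from that contest generalizes to ordinary guard clauses. Here is a minimal Python sketch (the function and its formula are illustrative, not from any contest) showing the kind of input validation that makes a routine robust against a hostile tester:

```python
import math

def triangle_third_side(a, b, angle_deg):
    """Length of the third side by the law of cosines.

    Guard clauses reject the "bad values" a judge (or attacker) might try:
    negative side lengths and impossible angles.
    """
    if a <= 0 or b <= 0:
        raise ValueError("side lengths must be positive")
    if not 0 < angle_deg < 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(math.radians(angle_deg)))
```

The point isn’t the geometry; it’s that the function fails loudly and early on malformed input instead of silently returning nonsense.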

While the news may be focused on the next new exploit of Windows – because that has mass appeal – we are still finding that our applications have flaws of their own that can be exploited just as easily as any flaw in Windows. In fact, much of the time the mistakes in our applications are far easier to reveal.

Web Hacker Boot Camp is a journey through the mind of a hacker. It reveals how hackers do their job and what their techniques are. It stops short of telling you precisely what to do to make your application secure, but it certainly provides you with the information you need to think about to make the application secure.

There’s good content in this book if you’re interested in how to fortify your application against attack. The only downside is that you’ll have to look past some editing and layout issues to get the information out of the book.

If you want to discover how you could lose control of your web servers because of flaws in your application code, Web Hacker Boot Camp is definitely a book to get.

Stupid ASP.NET Tricks (Don’t try these at home)

I’ve been struggling with ASP.NET lately and wanted to offer up two things not to do …

1) Dynamically loading another instance of the same user control from within a user control. ASP.NET gets very confused and starts losing track of your controls in inexplicable ways. All of a sudden, the label in the partially defined class will unexpectedly be null. (By the way, I was trying to draw “sublines” in a shopping cart: things like free accessories that I wanted to stay with the parent line.)

2) Loading dynamic controls in a repeater. For some reason still unknown to me, if you dynamically add controls to a repeater (such as user controls into table rows), the postback values never come back … and the controls aren’t represented in the control tree. I could just be missing something, but I’ve decided it’s a bad idea…

Article: .NET Offers a “First Chance” to Squelch Performance-killing Hidden Exceptions

Processing exceptions in any language is expensive. The process of capturing an exception and rolling it into a package that can be processed by code requires a stack walk—basically looking back through the history of what has happened—and that requires a lot of processing time to complete. In .NET software development we sometimes talk about boxing and unboxing of variables and how that can consume a great deal of time in a program—and it can. However, the amount of time necessary to process an exception dwarfs the time necessary to box and unbox a variable. (If you want to know more about boxing and unboxing you can read about it in the article “Boxing and Unboxing of Value Types in C#” on CodeGuru.)

Despite the fact that exceptions are expensive to process, they are positively a great thing for debugging applications and making software more reliable. The challenge comes when exceptions are used incorrectly; specifically, when they are used to respond to normal conditions instead of exceptional conditions. Generally, when this happens, the layers of software above the layer generating the exception start “eating” the exception – because nothing is actually wrong (more on eating exceptions in a moment) – and manually return to normal program flow. This, however, leads to subtle performance issues that are difficult to find. Running a profiler on the code can show where performance is being impacted; however, there’s a quicker and simpler way to find performance problems like these, or to rule out exceptions as a potential cause of performance issues.

Using first chance exceptions in the debugger is a way to ferret out these hidden exceptions that your application is generating so that you can go back and prevent them from happening in the first place. The more exceptions you prevent, the faster your application will run. First chance exceptions are exposed via Visual Studio and can be leveraged in any language that compiles to the CLR.

http://www.devx.com/dotnet/Article/31414/

Article: Not just a buzz word, virtualization technology can improve your app development

It used to be that we talked about DLL hell. This was a place where everyone who was evil enough to try to run too many applications on their computer ended up. One day you would install a new application and you would be instantly transported into DLL hell. You would try replacing one DLL after another. You would move DLLs to individual application directories. In the end you would end up feeling as if you were just shuffling the DLLs, trading one problem for another.

While we’ve made some progress from the days when DLL hell was inflicted on most developers at some time or another, we have not fundamentally resolved all of the issues that are caused by trying to take one machine and get it to serve multiple masters. A more modern challenge might be trying to run multiple versions of Windows on one hard drive, or Linux and Windows side by side on multiple partitions. No matter where we are in the technology lifecycle, we’ve seen how we can try to get one system to do too many things and have had to live with the challenges. This is where virtualization technology comes to the rescue.

No longer do you have to mix two applications that don’t get along just because you only have one PC on which to work. Virtualization technologies allow you to run multiple, independent operating systems that can be completely isolated from the other processes running on your PC.

http://www.techrepublic.com/article/not-just-a-buzz-word-virtualization-technology-can-improve-your-app-development/

What is Architecture?

Arno Nel asked the question on his blog “What is Architecture?” This is my response…
I have a whole series on the various roles in software development on Developer.com, the software architect one is at http://www.developer.com/mgmt/article.php/3504496.
However, my thoughts on your statements are …
  1. On Architecture is 50% business, 50% tech – I’d say that architecture is one part art (elegance, simplicity), one part facilitated visioning (getting a common idea of what is going on), and one part conversion (converting requirements into design).  While it’s true that business and technology are both required, the important aspects are not well served by this point of view.  Knowing how to facilitate people coming together, knowing how to convert requirements into design, and knowing how to maintain simplicity in the face of complexity are much more important perspectives.
  2. On The Architected doesn’t decide w[h]ether or not to use DataGrids – I don’t know that I can agree with this statement in every case.  It depends upon what the solution is and what is necessary.  She may decide that DataGrids are critical to some reusability that is desired and could specify them.  However, in most cases, you’re right: there shouldn’t be any reason why an architect would specify the use of DataGrids or not.  Think of it this way: a traditional building architect doesn’t generally specify the supplier for a bolt – only how strong it must be.
  3. On The architect doesn[’]t care about coding standards – I heartily disagree here.  He doesn’t care WHAT is in the coding standards, but he does care that they exist.  In the same way, a building architect doesn’t care what kind of light fixtures are used in the building but does care that they all match (consistency) or coordinate (complementary).
  4. On The architect doesn[’]t care about reading UML – Here too, I heartily disagree.  I don’t dispute that the architect won’t read all the UML; however, I believe the architect should do “drive-bys” of what is being constructed.  That includes reviews of the UML and any other supporting documentation being used to construct the actual solution.  If the point is that they don’t care whether it’s UML or something else, I disagree again.  The architect’s time is overcommitted.  If the architect understands UML, then UML should be used to facilitate communication (reduce the cost) and to make communications more effective.
  5. On The architect doesn[’]t care about Agile/RUP/MSF etc. – Again, I disagree.  The architect needs to understand the process being used in order to insert themselves into it in a way that is both timely and respectful of the process.  Choosing an approach that they are familiar with is important.  Also, they may decide that one process will work better than another to achieve the objectives.  For instance, an agile approach will work better with poorly understood requirements; RUP will work better when the project has risky components; etc.
  6. On The Architect translates IT to Business – While I agree that the statement is true, I believe it misses the fundamental truth that an architect guides a conversion process from raw data into a solution.  Also, I’d argue that the statement, even in its form here, should be reversed: architects help to translate business problems into IT solutions (not IT into business).
I think an interesting question is how does a solutions architect differ from a building architect?

Article: Get out of the information technology reactionary rut

This document explores what being proactive is when it comes to IT, why it’s important, and how to get your group back on track when it strays.

We all start out trying to be proactive. We plan to control our lives. We make the plans and somewhere during the execution phase we get off track. We run into some unplanned snag or snarl. We slip into a reactionary mode to address the problem. We try to get back into our planned, proactive mode of operation but sometimes the next issue comes along and we’re off to react to it.

Although there are areas of IT that would seem incapable of escaping the reactionary rut (a help desk, for instance), the truth is that we can all influence our areas toward more proactive, and therefore less frustrating, modes of operation. Let’s explore what being proactive is, why it’s important, and how to get your group back on track.

Why is proactive better?

While I firmly believe that control is an illusion, it’s a useful one. Assuming a proactive stance to try to control, or at least influence, the future in a positive direction is effective at reducing the overall workload. That is in addition to reducing the stress caused by unforeseen circumstances. While being proactive, like anything else, can be taken to an extreme, in most cases proactive time spent preparing for a problem, developing an approach, or understanding the environment is immensely powerful in terms of its ability to save time in the future.

http://www.techrepublic.com/article/get-out-of-the-information-technology-reactionary-rut/

Article: Code Access Security: When Role-based Security Isn’t Enough

Role-based security works pretty well in most situations, but as SharePoint developers learned long ago, it doesn’t work for everything. Now that .NET supports Web Parts, even more developers will find they need a basic understanding of Microsoft’s Code Access Security.

Ask any typical .NET developer about Code Access Security (CAS) and you’ve got a good chance of hearing “Huh?” as the response. Most developers haven’t run into CAS at all, let alone in a way that would cause them to develop a deep understanding of it.

Ask your typical SharePoint developer about CAS and they’re likely to begin to shudder uncontrollably. Why is that? Well, SharePoint developers have been dealing with CAS since the day that SharePoint was released. Unlike ASP.NET, which makes the assumption of full trust—effectively neutralizing any impact that CAS will have on a standard .NET application—SharePoint starts with a minimal trust, which means most code will need to have a CAS policy applied to it in order to work.
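For context, the trust level is set in web.config. The excerpts below are illustrative (the exact policy file names and paths vary by version and install), but they show the difference in starting points:

```xml
<!-- Illustrative web.config excerpts, not from the article. -->
<!-- A standard ASP.NET site runs at Full trust by default,
     so CAS effectively never gets in the way: -->
<system.web>
  <trust level="Full" originUrl="" />
</system.web>

<!-- SharePoint ships configured for a minimal trust level instead,
     so custom code must be granted permissions explicitly: -->
<system.web>
  <trust level="WSS_Minimal" originUrl="" />
</system.web>
```

Raising the level to Full makes the pain go away but throws out SharePoint’s sandboxing; the better path is writing a CAS policy for just the permissions your code needs.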

http://www.devx.com/security/Article/31259/
