
Which SharePoint Site Template Do You Believe Will Be Created the Most?

We're putting the finishing touches on the SharePoint Shepherd's Guide for 2013, and I was trying to figure out the best way to point users to the right site template to use. As a part of that, I decided it would be fun to stack-rank the site templates by how often I expect them to be used, so that I know where to position them on the decision tree. Here's my order from most frequently to least frequently used:

  1. Team Site
  2. Project Site
  3. Publishing Site
  4. Publishing Site with Workflow
  5. Enterprise Wiki
  6. Blog
  7. Business Intelligence Center
  8. Community Site
  9. Records Center
  10. eDiscovery Center
  11. Document Center
  12. Community Portal
  13. Enterprise Search Center
  14. Basic Search Center
  15. My Site Host
  16. Developer Site
  17. Product Catalog
  18. Visio Process Repository

Do you agree with this order, or do you have your own?


SharePoint REST TypeScript Library

At the end of November I posted a blog post titled SharePoint, REST, TypeScript, and the Library where I talked about the TypeScript library I built to demonstrate the power of TypeScript as a tool for large scale JavaScript. JavaScript gets messy quickly, particularly when you have to make a series of calls and deal with deferred callbacks.

I've had a few dozen people ask for the code, and I've finally gotten around to publishing it on CodePlex at http://sprestts.codeplex.com. If you've offered to help and you don't see an invite from me in the next day or two, go ahead and send me a follow-up email and I'll get you added.


SharePoint Saturday Indianapolis 2013

Last Saturday (Jan 12, 2013) Indianapolis held a SharePoint Saturday. We were honored with…

  • 472 registrants
  • 302 attendees
  • 96 evaluations
  • 30 sessions
  • 10 sponsors
  • 1 good time

Our overall satisfaction score for the event was 4.53 – which I think is really, really good and it’s a testimony to the great steering committee that we had. I’d like to recognize them here.

I couldn’t have asked for a better team. They really made the event a pleasure.


SharePoint, REST, TypeScript, and the Library

For the Microsoft SharePoint Conference in Las Vegas I did a session on using TypeScript to develop for SharePoint and Office. In that session I demonstrated a non-trivial example of a library I built in TypeScript for accessing SharePoint's REST services. I did this because TypeScript is about building enterprise-scale applications – something that JavaScript doesn't do particularly well (to be as kind as possible). I got several requests from the other speakers at the conference and from attendees about what I intended to do with the library, and I'm happy to confirm that I'll be releasing the code as an open source community project. The intent is to create a library that can be used anywhere to build applications for SharePoint.

However, before I decide where to put the library, I need to know who would be willing to help with its enhancement. I don't want to create a project on CodePlex only to find that I'm the only developer – if that becomes the case I'll just host the project here on my site. I'm really looking for people who would be willing to contribute. I should be clear that much of the code won't be super-technical or highly architectural. A lot of it is just slogging out support for various interfaces and following the pattern I've already created. Not that there won't be fun problems to solve, but I also don't want to discourage folks who don't feel like they're architect-level developers yet. Before I go into more details, let's talk about the library itself.

What is TypeScript?

If you missed it, TypeScript is a superset of JavaScript that compiles down into regular JavaScript. The benefit is that it adds type safety through compile-time checking – and IntelliSense in Visual Studio. It's a great tool for those who are tired of debugging for hours only to realize they had a capitalization problem or a typo in a method name that they just couldn't see while looking at the code. I believe it's the future way to develop JavaScript: we'll develop in TypeScript and compile down into JavaScript as an intermediate language.
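To make that concrete, here's a tiny, hypothetical example (the names are made up for illustration, not taken from the library). In plain JavaScript the typo below would only surface at runtime; with TypeScript the compiler flags it immediately:

// Minimal, hypothetical illustration of TypeScript's compile-time checking.
interface ListItem {
    title: string;
    modified: Date;
}

function formatItem(item: ListItem): string {
    return item.title + " (" + item.modified.toDateString() + ")";
}

var ok = formatItem({ title: "Status Report", modified: new Date() });

// The next line would not compile: 'titel' is not a member of ListItem.
// In plain JavaScript the same typo silently yields "undefined" at runtime.
// var oops = formatItem({ titel: "Status Report", modified: new Date() });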

Who Can Use the Library?

Because the library is TypeScript-based, it compiles into regular JavaScript. That means anyone on a platform that uses JavaScript can use the library – whether that's the Office and SharePoint applications the library was built for, or a Windows 8 HTML/JS application. The one requirement is the use of jQuery as a base library. It's also likely that the library will ultimately require the datajs library that's being developed to make OData access easier. Otherwise, the library is intentionally easy to use.

What Does the Architecture Look Like?

The primary challenges for most developers getting involved with JavaScript are the lack of type safety and the increased dependence on asynchronous calls to keep work off the primary display thread. Building the library in TypeScript resolves the concern about type safety. The dependence on asynchronous calls is handled through the use of promises: jQuery includes a Deferred object, and this object implements JavaScript promises.

The short version is that a promise is a "publish-subscribe" object which allows you to subscribe to the events you're interested in and get notified when they complete (typically through anonymous methods). Once the event occurs, your code is run. The beauty of this is you can write code that looks something like:

Obj.DoSomethingAsync()
    .done(function () { alert('success'); })
    .fail(function () { alert('oops'); })
    .then(function () { alert('next?'); });

This will cause the right methods to be called when the async method completes. This approach keeps the async response code together with the code that started the async operation. The beauty of this is that the deferred object can be held, and if other code wants to subscribe to the event later it can. If the event has already completed, the subscribing code will run immediately. This allows you to do caching internally in your objects to minimize repeated calls.
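As a sketch of what that caching looks like in practice (hypothetical names, not the library's actual classes, and assuming jQuery with its TypeScript type definitions and SharePoint 2013's /_api/web REST endpoint), an object can hold onto the promise from the first request and hand the same promise to every later caller:

// Hypothetical sketch: cache a jQuery promise so repeated callers share one REST call.
class WebInfo {
    private titlePromise: JQueryPromise<any>;

    getTitle(): JQueryPromise<any> {
        if (!this.titlePromise) {
            // First caller triggers the request; later callers reuse the same promise.
            this.titlePromise = $.ajax({
                url: "/_api/web",
                headers: { "Accept": "application/json;odata=verbose" }
            }).then(function (data) { return data.d.Title; });
        }
        return this.titlePromise;
    }
}

// Both calls below are served by a single underlying request.
var web = new WebInfo();
web.getTitle().done(function (title) { alert(title); });
web.getTitle().done(function (title) { alert(title); });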

With this as the basis, the objects mirror the objects in the server-side object model. The methods are intended to mirror the way that you deal with the objects on the server. There are a few differences because of the way that JavaScript works when compared to .NET/C# – however, the differences are designed to be minimal. A developer who knows the SharePoint object model should be able to use IntelliSense in Visual Studio to see what's happening.

Why not the JavaScript Client Side Object Model?

A rather obvious question is, "Why would I write a totally new library to access the REST services when there's already a JavaScript client-side object model?" The short answer is that the JavaScript Client Side Object Model doesn't always work. It works just fine if you're in a SharePoint-hosted application. However, in situations where you don't have an app web, the SharePoint JavaScript library won't work. Even when it does work outside of the SharePoint-hosted app, it's quirky. I wanted one library that worked whether I was working on a remote server, in a Windows 8 application, or on the SharePoint server itself. I didn't want to have to do things fifteen different ways.

Developers Needed

At the beginning I mentioned that I wanted some other folks who would be willing to help me flesh out the scope of the model. That work is something that doesn’t require a huge amount of skill – it’s something that’s good exercise for developing against SharePoint remotely. If you’re relatively new to development – I absolutely can use your help here.

In addition, I'm looking for some folks who are willing to help me keep an eye on the architecture of the library as it grows. I've got a long background in building architectures and libraries – but JavaScript isn't my native world. I'd love someone who can help with the finer points of architecting enterprise-scale JavaScript – for instance, techniques for managing cross-domain calls.

If you feel like you can help (even if it's only a few hours a month), please email me. If you don't feel like you can help but you want to see the library to find out whether it would be useful to you, send me an email and I'll send you the code for your educational use. Yes, it's as simple as that: you send me an email and I send you a response. I want to track the number of folks who are interested so I can notify them where I end up posting the project.


Article: Size and Scale of SharePoint 2013 in the Desert

At the Microsoft SharePoint Conference 2012 being held in the desert in Las Vegas, NV, there’s a lot of talk about how SharePoint has grown up. At 11 years (77 in dog years), SharePoint’s grown older and wiser. Instead of being seen as a departmental solution to collaboration needs it’s an enterprise-scale platform for creating content solutions.

Looks like Rain

New in SharePoint 2013 are a host of features that make multi-tenancy easier and push control further toward the users. The ability to configure search settings at the site level makes it easier for organizations to allow departments to customize their own search experiences. This is just one example of how SharePoint is making it easier for clients who are using shared hosting to customize their experience.

Microsoft’s own Office 365 environment will create a real option for organizations to do their SharePoint collaboration in the cloud.  It’s a big bet that organizations are willing – and able – to make the jump to the cloud for at least some of their collaboration needs.

Read more…


Secret SWAG

Next week at the Microsoft SharePoint Conference in Las Vegas, I’m starting a new giveaway – and I want you to be a part of it. I had a custom coin made. The coin shows Thor Projects on one side, and The SharePoint Shepherd on the other – more or less the two different sides of my world and personality. It looks like this:

This is intended to be a giveaway for folks who interact with me during presentations. If you ask a question, I want to give you one. I won't explain what it is in the session; I plan to hand it over and keep moving. Clearly, as I've said, it's a coin. I found myself flipping a coin like this over and over in my hands on calls and while I was bored, and decided that you should be able to get one as well.

Because you're reading my blog, I want to extend a special offer to you (and to any friends you want to disclose the offer to): if you come up to me and hand me a business card, I'll hand you one of the coins. I'm only going to do this at the SharePoint Conference. After the conference you'll have to interact during a presentation to get one – here's why.

Anyone who has one of the coins can mail it back to me for a $25 discount off of any of the DVDs that we sell. That's right, $25. Why would I do this? Well, I value people interacting in my sessions more than you know. It makes the presentations fun for me – and everyone else. I also value you because you are reading my blog and keeping connected. By the way, if you're not signed up for the SharePoint Shepherd newsletter, you should do that. We offer some sort of special offer every month through that list. I'll make an even more impressive deal to that list in a month or two – so you'll want to make sure you get one.

One other detail: coins are heavy. I'm only bringing a few with me to the conference, so you'll want to hit me up early to get yours. My session, "Using TypeScript to Build apps for Office and SharePoint," is Wednesday morning at 10:30. I follow that up with an AvePoint theatre presentation at 12:45 and then two book signings. I don't expect that I'll have any coins left by the time I'm done, so look for the Shepherd's staff earlier in the week and get yours.

Thanks for your continued support

-Rob


Appearance: Run as Radio – Robert Bogue Makes Ten Mistakes with SharePoint!

I'm pleased to share that last week I got a chance to sit down with Richard Campbell face-to-face here in Indianapolis and record an episode of Run As Radio, creatively titled "Robert Bogue Makes Ten Mistakes with SharePoint!" Check it out and tell me what you think of the conversation. We got a chance to talk through the 10 most common non-SharePoint technical mistakes that people make when setting up SharePoint. Oh, and we got off topic about things like load balancers and load/scalability testing.


Including TypeScript in a SharePoint Project

If you missed it, Microsoft announced a new language that is a superset of JavaScript and compiles down into regular JavaScript. The compiler is written in JavaScript and works everywhere. The big benefit of the language is type-checking at compile/edit time. If you want to learn more, go to http://www.TypeScriptLang.org/.

There is some rather good tooling and documentation, but one problem for me was making TypeScript work from inside my SharePoint projects after I installed it. The way the SharePoint tools run, they do the deployment before the TypeScript compiler runs. That's not exactly helpful; however, you can fix it. First, you're going to need to right-click on your project and unload it.

Next, you need to right click it again and edit the project file (TypeScriptFarmTest.csproj)

Then you need to modify your Project node to include a new InitialTargets attribute pointing to TypeScriptCompile:

<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="TypeScriptCompile">

Then you'll need to insert a new Target node inside the Project node:

<Target Name="TypeScriptCompile" BeforeTargets="Build">
  <Message Text="Compiling TypeScript..." />
  <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript\0.8.0.0\tsc&quot; -target ES5 @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
</Target>

From here, save the file, close it, right-click the project, and select Reload Project. Now all your TypeScript will compile into JavaScript before the deployment happens – actually before anything else happens, because the InitialTargets attribute on the Project node tells Visual Studio and MSBuild to run this target first.


SharePoint Profile Memberships in 2010

There was a lot of talk about how the User Profile memberships in SharePoint 2007 worked. The net effect was that the memberships were stored in a profile property, MemberOf (internally SPS-MemberOf), and driven by a timer job whose name ended with 'User Profile to SharePoint Full Synchronization'. However, this changed slightly in 2010 – and the change mattered. In SharePoint 2010 the memberships got their own collection property off the UserProfile object; Memberships became a full collection of values rather than a simple profile property.

This didn't change the requirement that, to be listed, the user had to be a member of the site's Members group – not the Owners group, only Members. So the list still has its issues from a usability perspective. The memberships for a user show up in the profile in two places: Memberships and Content. Memberships shows a listing of sites vertically; Content lets you navigate across a horizontal bar of sites and libraries.

In all honesty, I generally recommend that organizations replace the Memberships and Content functionality of a My Site with a library in the user's My Site that contains links to the sites they have permissions in. I've done this various ways – including opt-in by the user from the personal menu – but no matter how it's done, we invariably find that users can actually understand it, whether they're managing it themselves or it's being driven by their permissions to the sites. However, in this case the company didn't consider replacing these, and they were up against their deadline for implementing the intranet.

The client was reporting that the memberships in their user profiles were out of date. People were seeing sites that no longer existed, and they were also getting access denied when trying to reach some of the sites through either the Memberships tab or the Content tab. Upon digging, I found that they had split their farm from a single global 2007 environment into a regionally deployed 2010 environment, and the 2010 environment had migrated the global 2007 profile service. The net effect was that they inherited the memberships for all of the sites across the globe – but the User Profile Service was only updating memberships for the URLs that the local farm owned. So there were numerous memberships for non-local SharePoint sites that were no longer being updated.

I should say that there are plenty of longer-term answers to the problem of managing a single user profile and memberships across a global organization, but for right now they decided they wanted to remove all of the memberships so the list wouldn't be wrong. As it turns out, the code to do this is relatively simple:

// Requires the Microsoft.SharePoint, Microsoft.SharePoint.Administration,
// and Microsoft.Office.Server.UserProfiles namespaces.
SPServiceContext svcCtx = SPServiceContext.GetContext(
    SPServiceApplicationProxyGroup.Default, SPSiteSubscriptionIdentifier.Default);
UserProfileManager upm = new UserProfileManager(svcCtx);
foreach (UserProfile user in upm)
{
    // Remove every membership record for the user; the synchronization timer job
    // will re-add memberships for sites in the local farm.
    MembershipManager mm = user.Memberships;
    mm.DeleteAll();
}

Run the code above and all memberships will disappear. If you then run the synchronization timer job, it will re-add the sites from the local farm.

This took an amazingly large amount of time to track down given the relative simplicity of the final answer.


SharePoint Search across the Globe

Several of my global clients have approached me over the last few weeks in some stage of planning or implementing a global search solution. So I wanted to take a few moments and talk through global search configuration options, including the general perceptions we have, the research on how users process options, the technology limitations – and the choices that result. The goal here is to be a primer for a conversation about how to create a search configuration that works for a global enterprise.

Single Relevance

There's only one way to get a single relevance ranking across all pieces of content: have the same farm do the indexing for every piece of content. Because relevance is based on many factors – including how popular various words are – the net effect is that if you want everything in exactly the right relevance order, you'll have to do all of the indexing from a single farm. (To address the obvious question from my SharePoint readers: neither FAST nor SharePoint 2013 resolves this problem.)

OK, so if you accept that to accomplish the utopian goal of having all search results for the entire globe in a single relevance-ordered list one farm is going to have to index everything, then you'll have to have one massive search index. This means you'll have to plan on bringing everything across the wire – and that's where the problems begin.

Search Indexing

In SharePoint (and this mostly applies to all search engines), the crawler component indexes all of the content by loading it locally (through a protocol handler in the case of SharePoint), breaking it into meaningful text (via an IFilter), and finally recording that into the search database. This is a very intensive process, and by its very nature it requires that all of the bits for a file travel across the network from the source server to the SharePoint server doing the indexing. Generally speaking this isn't an issue for local servers, because most local networks are fairly idle – there's not an abundance of traffic on them, so any additional traffic caused by indexing isn't that big of a deal. However, the story is very different for the wide area network.

On a WAN, most of the segments are significantly slower than their LAN counterparts. Consider that a typical LAN segment is 1 Gbps, while a typical WAN connection is at most measured in tens of megabits per second. Take a generous example of a 30 Mbps connection: the LAN is roughly 33 times faster. For smaller locations that might be running on 1.544 Mbps (T1) connections, the multiplier is much larger (~650). This level of difference is monumental. Also consider that most WAN connections run at around 80% utilization during the day.

Consider for a moment that if you want to bring every bit of information from a 500 GB database across a 1.544 Mbps connection, it will take about a month – not counting overhead or inefficiency in pulling the data across the wire. The problem with this is what happens when you need to do a full crawl or a content index reset.
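For a rough sense of the math, here's a back-of-the-envelope sketch using the numbers above (ignoring protocol overhead and competing traffic):

// Back-of-the-envelope transfer time: 500 GB over a 1.544 Mbps (T1) link.
var contentBits = 500 * 1024 * 1024 * 1024 * 8;  // 500 GB expressed in bits
var linkBitsPerSecond = 1.544 * 1000 * 1000;     // T1 line rate
var seconds = contentBits / linkBitsPerSecond;
var days = seconds / (60 * 60 * 24);
alert(days.toFixed(0));                          // roughly 32 days, about a month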

Normally, the indexing process is looking for new content and just reading and indexing that. That generally isn't a big deal. We read and consume much more content than we create, so maybe 1% of the information in the index changes in a given day – in practical terms it's really much less than this. Pulling one percent of the data across the wire isn't that hard. If you're doing incremental crawls every hour or so, you'll probably complete each one before the next kicks off. (Generally speaking, in my SharePoint environments incremental crawls take about 15 minutes each hour.) However, occasionally your search index becomes "corrupt". I don't mean that in a "the world is going to end" kind of way – just that an entry won't have the right information. In most cases you won't know that the data is wrong; it just won't be returned in search results. The answer to this is to periodically run a full crawl to recrawl all the content.

During the time that the full crawl is running, incremental crawls can't run. As a result, while the indexer is recrawling all of the content, some of the recently changed content isn't being indexed. Users will perceive the index to be out of date – because it will be. If it takes a month to do a complete crawl of the content, then the search index may be as much as a month out of date. Generally speaking, that's not going to be useful to users.

You will schedule full crawls on a periodic basis – sometimes monthly, sometimes quarterly. Very rarely, though, you'll have a search event that leads to you needing to reset the content index. In these cases the entire index is deleted and then a full crawl begins. This is worse than a regular full crawl, because the index won't just be out of date – it will be incomplete.

In short, the amount of data that has to be pulled across the wire to have a single search index is just not practical. It's a much lower data requirement to simply pass user queries along to regionally deployed servers and aggregate those results on one page.

One Global Deployment

Some organizations have addressed this concern with a single global deployment of SharePoint – and certainly this does resolve the issue of a single set of search results but at the expense of everyday performance for the remote regions. I’ve recommended single global deployments for some organizations because of their needs – and regional deployments for other situations. The assumption I’m making in this post is that your environment has regional farms to minimize latency between the users and their data.

Federated Search Web Parts

Out of the box there is a federated search web part. This web part passes the query for the page to a remote OpenSearch 1.0/1.1-compliant server and displays the results of the query. Out of the box it is configured to connect to Microsoft's Bing search engine, but you can connect it to other search engines as well – including other SharePoint farms in different regions of the globe. The good news is that this allows users to issue a single search and get back results from multiple sources; however, there are some technical limitations, some of which may be problematic.
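For context, a federated location in search administration is essentially a query template URL that the web part fills in with the user's terms. A sketch of what such a template might look like when pointed at another SharePoint farm's search RSS feed (the host name here is made up for the example):

http://emea-portal.contoso.com/_layouts/srchrss.aspx?k={searchTerms}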

Server Based Requests

While it's not technically required by the specification, the implementation that SharePoint includes has the federated search web parts processing the remote queries on the server – not on the client. That means the server must be able to connect to all of the locations that you want to use for federated search. In practical terms this may not be that difficult, but most folks frown on their servers having unfettered access to the Internet. As a result, having the servers run the federated searches may mean some firewall and/or proxy server changes.

The good news here is that federated search locations must be configured in the search service – so you'll know exactly which servers need to be reachable from the host SharePoint farm. The bad news is that if you're making requests to other farms in your environment, you'll need a way to pass the user's authentication from one server to another, and in internal situations that's handled by Kerberos.

Kerberos

All too many implementations I go into don't have Kerberos set up as an authentication protocol – or, more frequently, their clients are authenticating with NTLM rather than Kerberos for a variety of legitimate and illegitimate reasons. Let me start by saying that Kerberos, when correctly implemented, will help the performance of your site, so outside of the conversation about delegating authentication it's a good thing to implement in your environment.

Despite the relative fear that's in the market about setting up and using Kerberos, it is really as simple as setting SharePoint/IIS to use it (Negotiate), setting the service principal name (SPN) of the URL used to access the service on the service account, and setting the service account up for delegation. In truth, that's it. It's not magic – however, it is hard to debug. As a result, most people give up on setting it up. Fear of Kerberos and of what's required to set it up correctly falls into what I would consider an illegitimate reason.
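As a rough sketch of what that looks like (the URL and service account names below are invented for the example), the SPN piece is a single command run by a domain admin, and delegation is a setting on the service account itself:

rem Register the HTTP SPN for the URL users browse to, against the app pool account.
setspn -S HTTP/portal.contoso.com CONTOSO\svc-spAppPool

rem Then, in Active Directory Users and Computers, open CONTOSO\svc-spAppPool,
rem enable "Trust this user for delegation", and set the web application's
rem authentication provider to Negotiate (Kerberos) in Central Administration.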

There is a legitimate reason why you might not be able to use Kerberos. Kerberos is mutual authentication, and it requires that the workstation be trusted – which means it has to be a domain-joined PC. If you've got a large contingent of staff who don't have domain-joined machines, you'll find that Kerberos won't work for you.

Kerberos is required for one server to pass along the identity of a user to another server. This trusted delegation of a user's credentials isn't supported through NTLM (or NTLMv2). In our search case, SharePoint trims the search results to only those a user can see – and thus the remote servers being queried need the identity of the user making the request. This is a problem if the authentication is being done via NTLM, because that authentication can't be passed along – and as a result you won't get any results. So in order to use the out-of-the-box federated search web parts against another SharePoint farm, you must have Kerberos set up and configured correctly.

Roll Your Own

Of course, just because the out-of-the-box web parts use a server-side approach to querying the remote search engine – and therefore need Kerberos for security trimming to work – doesn't mean that you have to use the out-of-the-box web parts. It's certainly possible to write your own JavaScript-based web part that issues the query from the client side, so that the client transmits its own authentication to the remote server. However, as a practical matter this is more challenging than it first appears, because of the transformation of results through XSLT. In my experience, clients haven't opted to build their own federated web parts.
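For what it's worth, here's a sketch of what the client-side call might look like. The helper is hypothetical; it assumes the remote farm is SharePoint 2013 with its /_api/search REST endpoint, that the browser can authenticate to it directly, and that cross-domain requests are permitted. It also leaves aside the result rendering that the out-of-the-box parts handle with XSLT:

// Hypothetical sketch of a client-side federated query against a remote farm.
function queryRemoteFarm(remoteUrl: string, terms: string): JQueryPromise<any> {
    return $.ajax({
        url: remoteUrl + "/_api/search/query?querytext='" + encodeURIComponent(terms) + "'",
        headers: { "Accept": "application/json;odata=verbose" },
        xhrFields: { withCredentials: true }   // the browser sends the user's own credentials
    }).then(function (data) {
        // Rows of security-trimmed results returned by the remote farm.
        return data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
    });
}

// queryRemoteFarm("http://emea-portal.contoso.com", "expense report")
//     .done(function (rows) { alert(rows.length + " results"); });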

User Experience

From a user experience perspective, the first thing users will notice when using the federated search web parts is that the results are in different "buckets" – and they're unlikely to like this. As I said at the start of this post, there's not much that can be done to resolve this from a technical standpoint without creating larger issues of how "fresh" the index is. So while admittedly this isn't our preference from a user experience perspective, there aren't great answers for resolving it.

Before dismissing this entirely, I need to say that there are some folks who have decided to live with the fact that relevance won't be exactly right and are comingling the results, dealing with the set of issues that arise from that – including how to manage paging and what to do about faceted search refinement (the property-value selection typically on the left-hand side of the page). When you're pulling from multiple sources you have to aggregate these refiners and manage the paging yourself; this turns out to be a non-trivial exercise, and one that doesn't appear to improve the situation much.

Hick's Law

One of the most often misused "laws" in user experience design is Hick's Law. It states, basically, that given one longer list of items versus two smaller lists, a user will be able to find what they're looking for faster from the one list. (Sorry, this is a gross oversimplification; follow the link for more details.) The key is that this oversimplification ignores two facts. First, the user must understand the ordering of the results. Second, they must understand what they're looking for – that is, they have to know the exact language being used. In the case of search, neither of these two requirements is met: the ordering is non-obvious, and the exact title of the result is rarely known by the user who is searching.

What this means is that although intuitively we "know" that having all the results in a single list will be better, the research doesn't support this position. In fact, some of the research quoted by Barry Schwartz in The Paradox of Choice seems to indicate that meaningful partitioning can be very valuable in reducing anxiety and improving performance. I'm not advocating that you should break up search results that you can get together – rather, I'm saying that comingled results may be of less value than we perceive them to be.

Refiners and Paging

One of the challenges with the federated search user experience is that the facets are driven off of the primary results, so the results from other geographies won't show in the refiners list. Nor is there paging on the federated search web parts. As a result, the federated results web parts should be viewed as "teasers" that invite users to take the highly relevant results or to click over to the other geography to refine their searches. The federated search web part includes the concept of a "more…" link to reach the federated source's own results page. Ideally the look and feel – and global navigation – between the search locations will be similar, so as not to be a jarring experience for users.

Putting it Together

Having a single set of results may not be feasible from a technology standpoint today; however, with careful consideration of how users search and how they view search results, you can build easy-to-consume experiences for the user. Relying on a model where users have regional deployments for their search needs provides some geographic division between results, but it also minimizes the total number of places they need to go for search – and that can help users find what they're looking for quickly and easily.