
SharePoint 2010 and Access Services Place

I’m going to take a break in this post from the very technical detail that you usually need to know. Instead, I’m going to talk about one of the new features in the Office 2010 wave and the way it may impact the market. Of course, this is a bit of prognosticating on my part, but I think my perception is on firm ground. Let me start by explaining.

Back when Microsoft released Microsoft Access 1.0, I was working for Woods Industries as a network manager. At the time we were more interested in Borland’s Paradox for Windows. However, I was interested enough in both to perform a technical edit on the New Riders Publishing books Inside Paradox for Windows and Inside Microsoft Access. Thus my experience with Microsoft Access spans more than 15 years. Since my time at Woods I’ve run across Access dozens, if not hundreds, of times. I’ve seen it in use at small organizations and large organizations. I’ve seen it used as a way to transform data and as a complete solution. There are folks who have made their entire careers building Access databases. The platform is robust enough to create a career around. There used to be magazines and conferences dedicated to Microsoft Access development. I used to speak at Advisor Media conferences where Access was covered as a track. (I was speaking on SharePoint.) I’ve had many a conversation with folks who did only Access development at these conferences.

Access sits in a spot in the market where organizations (or departments) don’t need, or can’t afford, a fully custom solution. As much as .NET applications are, I believe, easier and quicker to build than other technologies, there’s still a great deal you need to know in order to build a scalable application. Access tends to be what people use when they need to get something done and don’t have another way to do it. That doesn’t make it bad. As long as you don’t let the scale of the application get too out of whack, it’s a cost-effective way to build things.

However, Access isn’t perfect. The speed of development comes with a cost. Access has well-known classic issues with corruption and difficulty with recoverability. There are solutions to these issues; however, most of the databases that get created are built by those who don’t know how to solve them. Access also has the inherent limitation of being a tool that requires the client application to be installed. In some cases it’s not possible to implement solutions where Access is required on the client. Access is the application that SharePoint resembles most from an adoption standpoint. Access’ initial adoption was viral. SharePoint’s adoption is viral. Both are tools that users and managers in organizations can use to create solutions.

Still, the relationship between Access 2007 and SharePoint 2007 isn’t that great. It’s possible for Access 2007 to consume SharePoint 2007 lists, but because of locking issues on the SharePoint side it’s not possible to store an Access .MDB database in SharePoint. It’s a love-hate relationship. I won’t quite go so far as to call out relationship issues like I did with InfoPath. However, the relationship between Access and SharePoint in 2007 isn’t the best.

With that background, I want to break down the folks that use Access into a set of categories so that we can see where Access Services fits – and where Access itself fits with SharePoint 2010. I see SharePoint cannibalizing some of Access’ core market. I see organizations implementing SharePoint where the issues of having a client installation aren’t acceptable. While SharePoint doesn’t have nearly the flexibility that Access has for customization without code, SharePoint is flexible enough for a broad array of applications which used to require Access.

I see two key sizes of organization that use Access. There’s the small business that doesn’t have a centralized IT department, or whose IT department is one or two people. Access tends to be used because it’s easy and is something that the IT people who work for the organization can deal with. They have to do everything, so they don’t have time to become an expert at development. They use Access to help the business when buying a package isn’t cost effective – and neither is hiring a professional programmer.

The other size of organization that I see using Access is the very large organization, where there is a centralized IT department of dozens, hundreds, or thousands of people. In these organizations Access is used because there are dozens of small projects that can’t get prioritized because they don’t have enough value to the organization. Not that they’re not useful. Not that they wouldn’t be valuable to the organization. Instead, the project’s value (and maybe cost) is too small to get scheduled. It also may be that there are factions in the organization that dislike or distrust the central IT department and therefore want to work on their own. Access is a perfect tool because they are likely to have it and it’s generally powerful enough to accomplish the goal.

Certainly there are mid-sized organizations that use Access but I see it in use more in large organizations and small organizations than in the mid-sized organization. Generally mid-sized organizations are trying to “grow up” to be big enterprises and start that awful adolescent phase where they are too big to accept the same level of risk they used to and too small to cope with the bureaucracy that a lower risk tolerance requires. As a result they believe that they have to develop everything “big” and Access temporarily takes a back seat to more traditional development languages like .NET.

In SharePoint 2010, one of the new services is Access Services. Through this service and the Access 2010 client, it’s possible to upload entire Access applications to SharePoint. This allows the application to be run from SharePoint. That means that the application – or part of the application – can be run by clients who don’t have the Access client installed on their desktop. This can dramatically increase the reach of Access applications.

Access Services isn’t without its limitations. Like InfoPath Forms Services there are some things that just don’t make sense or work that well in a web world. One buddy of mine quoted 82 limitations in Access Services – 82 things that don’t move from Access to Access Services. Certainly there are some things that you can do in Access that don’t make sense on the web (think special characters in field and table names). If you can, however, live with those limitations you can start to create a way for users to quickly work with data via the web.

There are a few key things that you should know about Access Services 2010. First, all of the design objects are stored in SharePoint, so if you lose the .ACCDB file it’s not a big deal; you can regenerate it from the ribbon in SharePoint. This applies to all of the design objects – including those which are designed for client-only use. It also applies to the data files that aren’t linked. The second thing to know is that not every object that you use from Access Services must be accessible from the web. If you have some reports that you need client features for, you can still have those – they just won’t be visible on the web. So if you have items that can’t be converted to run on the web (think of the 82 from above), you can keep them as client-only objects.

I should also mention that VBA code isn’t supported running on the server, but Access macros are. These aren’t the same old macros from years ago; they’re a brand new set with a brand new macro editor. The reason for this is one of the primary tenets of SharePoint – code from one user shouldn’t be able to impact another. Macros are made up of a set of trusted components which are specifically designed and tested to prevent side effects that might allow one user to access another’s information. As a result they can run on the server – where VBA code cannot. That means you’ll need to plan to build your logic with macros if you want it to run server side. Access also added data-level macros, which can eliminate the need to copy validation logic from one form to another. On the server these are implemented as QuickFlows – basically a workflow that can’t persist. This is one of the features added to SharePoint 2010.

One way to think about this is the same way we think about workflows in SharePoint Designer. They’re available for anyone (with permission) to create because they are declarative workflows – they stitch together a set of known components. They’re trusted because the components themselves are trusted. In the same way Access Macros components are trusted and thus can be used by anyone.

From my perspective there will be one key place where Access Services will really excel. That is for the Enterprise scenarios where a large organization has deployed SharePoint Enterprise. This should make sense given that SharePoint Enterprise is required to get Access Services and it’s generally the largest organizations that have made this investment. Because of the cost of the enterprise licensing it’s unlikely that smaller organizations will leverage Access Services.

In addition to being used as a tool to create solutions for business units and groups inside an organization, there’s one other key reason why Access Services may be used: Access Services will generate the RDL language used by SQL Server Reporting Services (SSRS). This means that you can use Access to quickly and easily report on SharePoint data – reporting is a huge hole in SharePoint. In order to do reports you either have to do a lot of work to generate reports in SSRS (including the use of third-party components), export the data to Excel and make the spreadsheet pretty, or attach Access to the SharePoint lists and build your reporting from there. In 2007, none of these solutions is very palatable.

In 2010 we have another enhancement that is designed to protect the system, but it also makes reporting harder. Query throttling prevents queries asking for too many records (administrator controlled, defaulting to 5,000 records) from being run during the day. There’s an administrator setting for when large queries can be run (i.e., the middle of the night). However, if you need a report before the end of your day, trying this from SharePoint directly may be difficult. Access Services works around this issue and allows you to build reports on large datasets. The net result is that you’re able to do reporting on larger data sets – and the report design experience is good.

So I believe a key area of use for Access in the Enterprise will be for the development of reporting even on applications not originally created in Access.

InfoPath contacted a data source but failed to receive data

I’m doing some testing on a form including some security testing. One of the things that I ran into was the following error (details shown):

The form uses Universal Data Connections (UDCX). The user I’m testing with doesn’t have write access to some of the reference lists. When I dug into the issue, I got tons of hits for Kerberos issues and some for the LSA loopback check issue. However, the issue I found was that my UDCX file included entries for updating as well as reading the data. This was the default UDCX file that InfoPath created – take a look:

<?xml version="1.0" encoding="UTF-8"?>
<?MicrosoftWindowsSharePointServices ContentTypeID="0x010100B4CBD48E029A4ad8B62CB0E41868F2B0"?>
<udc:DataSource MajorVersion="2" MinorVersion="0" xmlns:udc="http://schemas.microsoft.com/office/infopath/2006/udc">
<udc:Name>Sites</udc:Name>
<udc:Description>Format: UDC V2; Connection Type: SharePointList; Purpose: ReadOnly; Generated by Microsoft Office InfoPath 2007 on 2009-10-19 at 07:55:36 by DEMO\Administrator.</udc:Description>
<udc:Type MajorVersion="2" MinorVersion="0" Type="SharePointList">
<udc:SubType MajorVersion="0" MinorVersion="0" Type=""/>
</udc:Type>
<udc:ConnectionInfo Purpose="ReadOnly" AltDataSource="">
<udc:WsdlUrl/>
<udc:SelectCommand>
<udc:ListId>{7159DFE5-D236-472A-81B8-D031EAE14F59}</udc:ListId>
<udc:WebUrl>http://wss/sites/moc/</udc:WebUrl>
<udc:ConnectionString/>
<udc:ServiceUrl UseFormsServiceProxy="false"/>
<udc:SoapAction/>
<udc:Query/>
</udc:SelectCommand>
<udc:UpdateCommand>
<udc:ServiceUrl UseFormsServiceProxy="false"/>
<udc:SoapAction/>
<udc:Submit/>
<udc:FileName>Specify a filename or formula</udc:FileName>
<udc:FolderName AllowOverwrite=""/>
</udc:UpdateCommand>
<!--udc:Authentication><udc:SSO AppId='' CredentialType='' /></udc:Authentication-->
</udc:ConnectionInfo>
</udc:DataSource>

All I did to resolve the problem was to remove the <udc:UpdateCommand> element so that the UDC didn’t have an update option. That didn’t, in and of itself, resolve the issue; as soon as I re-uploaded and checked in the UDC files, they started working. The UDCs had not been checked in in my connections library. Doh!

Hyper-V, NVidia, Lenovo T61p, and how LinkedIn came to the rescue

It’s no secret that I do a fair number of presentations. It’s also no secret that I am a big SharePoint user. (In case you missed it, the Pope is Catholic too!) With SharePoint 2010 being 64-bit only, I really had two choices – Hyper-V or VMware Workstation. Since I have been an avid user of VMware for a while, that’s not a problem – until you consider that I’ll be presenting at some Microsoft-run conferences and that I’ve got a few projects with the product team. So I had to get Hyper-V working.

In my case that meant taking my Lenovo T61p and installing Windows Server 2008 R2 on it. That process went fine, right up to the point where I enabled Hyper-V. This turned my system into what I like to refer to as a flying brick. Hyper-V disables certain things, like suspend and hibernate, on machines that have the role enabled. What’s worse – much worse – is that there’s a design flaw (you might say the virtualization team and I disagree on this particular issue) where, if your driver allocates memory with write combining, the translation lookaside buffer on the processor has to be flushed. In practical terms, this surfaces as an issue with high-performance video cards (or high-performance video drivers) in machines running Hyper-V. The result is substantially reduced performance. The Microsoft recommendation? Run with the out-of-the-box VGA drivers. (See for yourself.) Where’s my problem?

Well, the fact that I do presentations means that I need to be able to control what video I send to the external monitor port – which the out-of-the-box drivers don’t support. The NVIDIA driver used the write-combining flag (as it should have). When I contacted the virtualization product team at Microsoft, the recommendation eventually became: use two machines – one machine that drives the monitor and makes a remote desktop connection into the other. It’s that advice that drove me to purchasing my Lenovo X200 Tablet (which I still love).

Of course, I wasn’t satisfied. The more I dug into the problem, the more stonewalling I got. I was once told to buy new hardware – of course, the hardware wasn’t actually available yet. So I decided to reach out and see if I could find someone at NVIDIA to help. If they could allow me to turn off the write combining when they allocated a page of memory, I could eliminate the performance problem. (I have to say again, they were doing the right thing; it’s Hyper-V that has the problem.) So I fired up LinkedIn and realized that I had two different second-degree connections to NVIDIA. I sent emails to both of the folks I knew who reportedly had connections to NVIDIA. Both of those connections ended up making it to NVIDIA. I won’t bore you with the details and the people, but I will say that ultimately NVIDIA made a fix to their driver for Hyper-V.

I can’t explain what it means to me to hear that a company like NVIDIA would go out of their way to help their customers – even when the problem they’re solving isn’t really one that they created. I had decided (because of some seriously negative interactions with ATI years ago) that I’d be buying all NVIDIA graphics cards. It’s not often that I get proof that the decisions I make are the right ones – but this definitely falls into that category.

I still use VMware Workstation more than I use Hyper-V. However, at least I don’t feel like stabbing a fork in my eye when I’m running Hyper-V on my laptop anymore.

SharePoint Saturday Indianapolis Jan 30th, 2010

We’ve made it official: we’re going to be having a SharePoint Saturday in Indianapolis on Jan 30th, 2010. It will be at the JA center on North Keystone and should be a great day. We’re planning three tracks with five sessions each. The tracks are going to be: IT Professional, Developer, and Business and End User. There are details on the site on how you can submit session proposals; we’re hoping to close speaker submissions soon.

I look forward to seeing you all there.

SharePoint Workflow OnTaskChanged Invoked Event Sender Parameter Is Null

Just when you think you’ve seen it all, you run into something that makes you challenge what you think you know. Today’s SharePoint workflow challenge was a bit of code that was behaving badly – but for reasons that didn’t make any sense. When I dug into it, I found that the sender parameter passed to the Invoked handler for my OnTaskChanged activity was null. In order to understand the importance of this fact, I have to do a quick review.

In simple workflows the design-time model and the run-time model are the same. So you can have an activity named Foo and you can reference it with Foo in your code. That’s easy. The problem is when you introduce a loop, a replicator, or anything else that might cause activities to be replicated – and therefore your run-time model and your design-time model no longer match. As soon as this happens you can’t use the activity name. You have to start referencing things based on the sender that your event receives as its first parameter.

I’m working on a parallel approval workflow which uses a custom sequence activity to hold the instance data for each instance of a replicator. To get to my instance information, I walk up the tree from my sender to find my specific sequence activity, which I know is the parent – and the activity that holds my instance data. This works great right up until I don’t get my sending activity in the sender parameter. Without it, I’m completely lost as to where I am in the workflow.

The solution is to add a code activity after the OnTaskChanged activity and use the ExecuteCode event on the code activity. That event does get its sender and can therefore locate itself in the hierarchy.
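A minimal sketch of that workaround might look like the following. The activity and handler names here (ApprovalSequence, locateSelf_ExecuteCode) are hypothetical; only the pattern – walking Parent from the sender that ExecuteCode receives – comes from the scenario above.

```csharp
using System;
using System.Workflow.ComponentModel;

public partial class ApprovalWorkflow
{
    // Handler wired to the CodeActivity placed immediately after OnTaskChanged.
    private void locateSelf_ExecuteCode(object sender, EventArgs e)
    {
        // Unlike the OnTaskChanged Invoked handler in this scenario,
        // ExecuteCode reliably receives its own activity as sender.
        Activity current = sender as Activity;

        // Walk up the runtime activity tree to the enclosing custom
        // sequence that holds this replicator instance's data.
        while (current != null && !(current is ApprovalSequence))
        {
            current = current.Parent;
        }

        ApprovalSequence instanceHolder = (ApprovalSequence)current;
        // instanceHolder now identifies which replicator instance we're in,
        // so its instance data (task id, task properties) can be used safely.
    }
}
```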

Replication, Workflow Serialization Problems and DependencyObject

I’ve been working on a workflow that needs to do parallel approval, and therefore I have a replicator in the workflow. One of the things that I did was to use the pattern of a custom sequence activity inside the replicator to hold the instance data. The custom sequence ends up with a custom property that is set by utilizing the ChildInitialized event of the replicator. You can see Tom Lake’s detailed explanation of this process in the forums: http://social.msdn.microsoft.com/Forums/en-US/windowsworkflowfoundation/thread/ca034011-d2f7-407b-90c5-d0303f753f50

I was using this to manage my set of tasks for the parallel approval. The problem is that I was trying to be “nice” and use dependency properties in a class. My SequenceActivity contained a reference to my class ApprovalTask. The ApprovalTask class contained the properties for the TaskId and TaskProperties (which are needed for managing a task). To be nice, I made these dependency properties, which meant that I had to derive ApprovalTask from DependencyObject. To make a long story short, there’s something wrong with deriving from DependencyObject that causes the serialization of the workflow to get screwed up. If you have an OnTaskChanged in your workflow (which makes sense if you want to see whether or not the task was approved), you’ll get an error similar to the following in the ULS:

Engine RunWorkflow: System.Workflow.Activities.EventDeliveryFailedException: Event "OnTaskChanged" on interface type "Microsoft.SharePoint.Workflow.ITaskService" for instance id "397f2b91-156c-4de2-b07b-6cae2fefd072" cannot be delivered. ---> System.ArgumentNullException: Value cannot be null.
   at System.Workflow.ComponentModel.DependencyObject.FixUpMetaProperties(DependencyObject originalObject)
   at System.Workflow.ComponentModel.DependencyObject.FixUpMetaProperties(DependencyObject originalObject)
   at System.Workflow.ComponentModel.Activity.FixUpMetaProperties(DependencyObject originalObject)
   at System.Workflow.ComponentModel.CompositeActivity.FixUpMetaProperties(DependencyObject originalObject)
   at System.Workflow.ComponentModel.Activity.Load(Stream stream, Activity outerActivity, IFormatter formatter)
   at System.Workflow.ComponentModel.Activity.Load(Stream stream, Activity outerActivity)
   at System.Workflow.Runtime.Hosting.WorkflowPersistenceService.RestoreFromDefaultSerializedForm(Byte[] activityBytes, Activity outerActivity)
   at Microsoft.SharePoint.Workflow.SPWinOePersistenceService.LoadWorkflowInstanceState(Guid instanceId)
   at System.Workflow.Runtime.WorkflowRuntime.InitializeExecutor(Guid instanceId, CreationContext context, WorkflowExecutor executor, WorkflowInstance workflowInstance)
   at System.Workflow.Runtime.WorkflowRuntime.Load(Guid key, CreationContext context, WorkflowInstance workflowInstance)
   at System.Workflow.Runtime.WorkflowRuntime.GetWorkflow(Guid instanceId)
   at System.Workflow.Activities.WorkflowMessageEventHandler.EventHandler(Object sender, ExternalDataEventArgs eventArgs)
   --- End of inner exception stack trace ---
   at System.Workflow.Activities.WorkflowMessageEventHandler.EventHandler(Object sender, ExternalDataEventArgs eventArgs)
   at Microsoft.SharePoint.Workflow.SPWinOETaskService.RaiseEvent(SPWinOeWorkflow workflow, SPWorkflowEvent workflowEvent, Object workItem, IPendingWork workHandler)
   at Microsoft.SharePoint.Workflow.SPWinOeHostServices.Send(SPWinOeWorkflow winoeworkflow, SPWorkflowEvent e)
   at Microsoft.SharePoint.Workflow.SPWinOeEngine.RunWorkflow(Guid trackingId, SPWorkflowHostService host, SPWorkflow workflow, Collection`1 events, TimeSpan timeOut)

The reason is that when ApprovalTask came back from serialization, its SPWorkflowTaskProperties property was blank. The solution? Don’t derive from DependencyObject, mark ApprovalTask as serializable, and convert the properties into regular properties. This made the problem go away.
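The fixed class ends up looking roughly like this – a sketch with the property types as described above; the exact shape of your task class will vary:

```csharp
using System;
using Microsoft.SharePoint.Workflow;

// No DependencyObject base class; plain serializable state instead.
// With this shape, SPWorkflowTaskProperties round-trips correctly when
// the workflow instance is persisted and reloaded.
[Serializable]
public class ApprovalTask
{
    public Guid TaskId { get; set; }
    public SPWorkflowTaskProperties TaskProperties { get; set; }
}
```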

Mentions: K2 SharePoint Governance Series Interview

I had the pleasure of being interviewed by my buddy Chris Geier at K2 for a new series he’s starting on SharePoint Governance. You can find his introduction to the series here and the download for the interview here. Check it out.

XML, XPath, and Namespaces

One of the problems that I had early on with XML was that I couldn’t figure out an easy way to handle namespaces when I was processing XML. SharePoint and other Microsoft technologies like InfoPath make extensive use of namespaces. For instance, if you right-click a field in InfoPath and select Copy XPath, you’ll get something in your clipboard that looks like /my:TestForm/my:Repeating/my:Message. Frankly, that’s not all that complex an XPath statement, except that it has a namespace in it – we don’t know what “my” refers to. That made it easier for me to transform the XPath statement into /*[local-name()='TestForm']/*[local-name()='Repeating']/*[local-name()='Message']. It’s frankly not that big a deal, except that it’s somewhat tedious. This allows me to call XmlDocument.SelectNodes(string xpath) instead of XmlDocument.SelectNodes(string xpath, XmlNamespaceManager nsmgr), which was good because I didn’t have a namespace manager and didn’t know how to create one easily. However, I ran into a new problem that my quick solution couldn’t get me around: I wanted to add a set of new nodes to an existing document, and in that case I couldn’t ignore namespaces any longer.
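As a quick illustration of the local-name() trick, here’s a sketch using a small stand-in document (the namespace URI is made up; InfoPath generates its own):

```csharp
using System;
using System.Xml;

class LocalNameDemo
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(
            "<my:TestForm xmlns:my='http://example.com/my'>" +
            "<my:Repeating><my:Message>Hello</my:Message></my:Repeating>" +
            "</my:TestForm>");

        // No namespace manager required: local-name() matches the bare
        // element name regardless of which prefix the document uses.
        XmlNodeList nodes = doc.SelectNodes(
            "/*[local-name()='TestForm']/*[local-name()='Repeating']/*[local-name()='Message']");
        Console.WriteLine(nodes.Count);        // 1
        Console.WriteLine(nodes[0].InnerText); // Hello
    }
}
```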

After a bit of gnashing of teeth, I realized that I can create an XmlNamespaceManager pretty easily by looking at the attributes of my document element (DocumentElement) node. Take a look at this code:

XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
foreach (XmlAttribute attr in doc.DocumentElement.Attributes)
{
    if (attr.Name.IndexOf("xmlns:") == 0)
    {
        string prefix = attr.Name.Replace("xmlns:", "");
        string nsUri = attr.Value;
        nsmgr.AddNamespace(prefix, nsUri);
    }
}

In this snippet I use the document element attributes to create my namespace manager — thus I can use the same namespace prefixes as are in use in the document itself. (Forgive me for using .Replace() rather than .Substring() – I ultimately decided it was more readable.) Taking this scenario a bit farther, with the namespace manager I can create a node of <my:Message /> with:

XmlElement ele = doc.CreateElement("my", "Message", nsmgr.LookupNamespace("my"));

This abstracts out what the namespace actually is so that I can just reference the existing prefixes. With the element created I can set its InnerText property and then use XmlNode.AppendChild() to append the new element into my document.
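Putting those last two steps together – continuing from the doc and nsmgr variables above, with the element names taken from the earlier XPath example:

```csharp
// Create the element in whatever namespace "my" maps to in this document.
XmlElement ele = doc.CreateElement("my", "Message", nsmgr.LookupNamespace("my"));
ele.InnerText = "A new message";

// Address the parent with the namespace manager so the prefix resolves
// the same way it does in the document, then append the new element.
XmlNode repeating = doc.SelectSingleNode("/my:TestForm/my:Repeating", nsmgr);
repeating.AppendChild(ele);
```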

Since it’s easy to create XmlNamespaceManager objects now… I might have to forget my *[local-name()='foo'] trick – although it’s still useful when I’m leveraging XML editing tools that don’t have namespace support.

MOSS Built In Site Column Table

I needed to get the site column IDs for some of the built-in site columns. While there are several places on the Internet where you can find the technique for pulling them out of the URL when editing a field, that was going to be a bit tedious for what I was doing, so I wrote a quick utility that dumps the site columns to a text file. The resulting XLS file is available here.

I hope that it helps if you’re looking for a specific field ID and you don’t want to URL-decode it.

Moving SharePoint Development Forward – p&p SPG

One of the things I like best about my work is that I get to work with some great people doing fun and interesting things. I can honestly say that in our small part of the universe we manage to push the ball forward. I’ve had the pleasure of speaking at the SharePoint Best Practices Conference, as well as other events, to try to share the things we’ve learned about how to develop scalable, maintainable, robust applications on the SharePoint platform. One of the things that I have the most fun with is helping the Microsoft patterns & practices group put together their SharePoint Guidance. In the first iteration we worked on basic collaborative applications. In the second iteration (this one) we took a look at web content management scenarios and line-of-business integration.

Certainly I don’t expect that everyone can use all of the work that was done here; however, I expect that it’s more than worth your time to peruse the materials. I know you’ll find at least one thing that you didn’t know or hadn’t thought of before. I know that because there were tons of things that I didn’t know about the platform – or hadn’t considered – that I learned.

Go check out the latest version of the SharePoint Guidance from p&p.
