HTTP 400, Kerberos, Bad Request, MaxTokenSize, TokenSz

We turned on Kerberos for a client this past weekend, and one of the gifts we got was that some of the users couldn’t log into the portal. Other users weren’t able to post a form to the server. They would get an HTTP 400 Bad Request.

Initially we thought the Kerberos ticket might be getting larger than MaxTokenSize (see KB327825). After chatting with my friend Laura Hunter, I was pointed to the TokenSz utility, which shows you the token size of the current user. With it I found that most users had token sizes of roughly 3K; one user had a token size of 8,194 bytes. Knowing that the token sizes were well under the 12K limit that MaxTokenSize defaults to, I had to do some more digging.

Ultimately, I found that HTTP.SYS has a smallish default buffer size for incoming requests, and large Kerberos tickets can exceed the space available in that buffer. Luckily, KB2020943 shows you the registry settings you can change to increase the buffer size of HTTP.SYS. A reboot is required after the change, but after that the users were able to log in. For our environment we felt like a MaxFieldLength of 25K was plenty of headroom for our needs.
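
For reference, here’s a minimal sketch of the registry change from KB2020943, run from an elevated PowerShell prompt on each web front end. The 25600-byte MaxFieldLength corresponds to the ~25K we settled on; the MaxRequestBytes figure is just an illustrative value you should size for your own environment.

# HTTP.SYS header limits are in bytes and are only read at boot, so a reboot is required.
$httpParams = "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters"
New-ItemProperty -Path $httpParams -Name MaxFieldLength -PropertyType DWord -Value 25600 -Force
New-ItemProperty -Path $httpParams -Name MaxRequestBytes -PropertyType DWord -Value 32768 -Force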

CPU Transistor Count Over Time

I recently had to generate a graph for Moore’s Law, which plots CPU transistor counts over time. I thought the end result looked interesting, so I wanted to share it:

The line is a bit more varied than most you see because I included AMD, Cyrix, and a few other CPU vendors. I also noticed that some of the data I was using included onboard cache in the transistor count and some didn’t. However, I think the graph is interesting because it shows how we’ve roughly doubled the transistors on a processor every 18 months to 2 years, as Moore predicted. Of course, we’ve stopped increasing clock speed and are now focused on driving more cores onto the chips – and therefore forcing developers to think about multi-threading their applications.

Content Organizer in SharePoint 2010

There are so many new features and enhancements in SharePoint 2010 that it’s hard to keep track of all of the great things going on. One of the interesting new enhancements is the Content Organizer. It started its life in SharePoint 2007 as the records routing available only to Records Centers. However, in SharePoint 2010 it’s been set free for use in any type of site. There are a few unique ways that it can be used – and a few quirks you should be aware of if you want to leverage it in your environment.

The Content Organizer feature is a site feature. Once it’s activated, a new library – called the Drop Off Library – is created and two new links are added to the Site Settings menu – Content Organizer Settings and Content Organizer Rules. The settings page allows you to restrict uploading to libraries that the Content Organizer has rules for, control whether content organizer rules can target other sites, and control whether additional folders are created once a specified number of items have been added to a folder, along with some other options. The content organizer rules are more interesting because they allow you to establish a set of rules for where the content should go.
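
If you’d rather script the activation, here’s a minimal sketch using the SharePoint 2010 cmdlets. It assumes the Content Organizer site feature’s internal name is DocumentRouting and uses a placeholder site URL you’d replace with your own.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Activate the Content Organizer feature (assumed internal name: DocumentRouting) on a site.
Enable-SPFeature -Identity "DocumentRouting" -Url "http://intranet/sites/records"
# Confirm that the Drop Off Library was provisioned.
$web = Get-SPWeb "http://intranet/sites/records"
$web.Lists | Where-Object { $_.Title -eq "Drop Off Library" } | Select-Object Title, ItemCount
$web.Dispose()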

To start with, you need to know that the Content Organizer works with content types. That means you’ll need to create content types for the types of content that you want to route. Technically, you could use the base Document content type, but that’s not going to be very interesting. Once you have selected a content type in a new content organizer rule, you can specify additional conditions. For instance, if you wanted to place expense reports for more than $10,000 in a different location (to drive a different workflow), you could specify that the total value is greater than $10,000 for the rule. The final part is to specify the target location and, potentially, an additional sub-folder based on the item’s attributes. This is the powerful part: you can automatically route documents into folders based on the metadata in the item.
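
If you want to create rules in code rather than through the UI, the object model exposes a rule class for this. Here’s a rough PowerShell sketch using the EcmDocumentRouterRule class from Microsoft.Office.Policy.dll; the site URL, the ‘Expense Report’ content type, and the ‘Expenses’ library are all hypothetical, and the extra conditions are left out to keep the sketch simple.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Policy")
$web = Get-SPWeb "http://intranet/sites/records"
# Route the (hypothetical) Expense Report content type into the Expenses library.
$rule = New-Object Microsoft.Office.RecordsManagement.RecordsRepository.EcmDocumentRouterRule($web)
$rule.Name = "Route expense reports"
$rule.ContentTypeString = "Expense Report"
$rule.ConditionsString = "<Conditions></Conditions>"   # column-based conditions would go here
$rule.TargetPath = $web.Lists["Expenses"].RootFolder.ServerRelativeUrl
$rule.RouteToExternalLocation = $false
$rule.Enabled = $true
$rule.Priority = "5"
$rule.Update()
$web.Dispose()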

This feature can be leveraged to automatically route forms to the correct location based on content type – and a field in the content type. When you develop your forms in Microsoft Office client applications, you can have fields from the document automatically populate columns of metadata in SharePoint – therefore giving the Content Organizer something to route on. You can learn more about how to create Office client templates that promote fields in my whitepaper written for SharePoint 2007, Managing Enterprise Metadata with Content Types. It shows you how to do this for Word documents. You can do similar things by promoting fields in InfoPath forms. By doing this, the user simply fills out the form and submits it. SharePoint does the work to move it into the right document library and folder.

One of the interesting side effects of this approach is that the documents aren’t moved immediately from the Drop Off Library to their respective targets. Somewhere along the way the process for providing the required metadata is skipped, so instead of an event receiver kicking off and moving the document, the system waits for a timer job that looks for items in the drop off libraries of the various sites and processes them – if possible. You can change the schedule – or force the Content Organizer Processing job to run – by going into Central Administration, selecting Monitoring, and then Content Organizer Processing. The default schedule of once a day may not be enough if you plan on heavily leveraging this feature.
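
If once a day isn’t often enough, you can run the processing job on demand – or tighten its schedule – from PowerShell. This is a sketch that locates the job by display name against a placeholder web application; verify the exact job name in your own farm.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Find the Content Organizer processing job for the web application and run it immediately.
$job = Get-SPTimerJob -WebApplication "http://intranet" |
    Where-Object { $_.DisplayName -like "*Content Organizer*" }
$job | Start-SPTimerJob
# Or change the schedule from daily to hourly.
$job | Set-SPTimerJob -Schedule "hourly between 0 and 0"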

You can also trigger processing of an individual document by going in and saving the document’s properties, so it is possible to get the Content Organizer to route a single document immediately.

If you want to create a simple test case, that’s easy enough to do:

  1. Create a new content type built upon the Document content type.
  2. Name the content type something like Office Document.
  3. Add the ‘Author’ site column to the content type.
  4. Activate the Content Organizer feature.
  5. Create a target library called ‘Documents’.
  6. Add the Office Document content type you created to the target library and to the Drop Off Library created by the Content Organizer.
  7. Remove the Document content type from the Drop Off Library so uploaded documents default to the Office Document type you created.
  8. Create a content organizer rule (Site Actions-Site Settings-Content Organizer Rules) that routes Office Documents to the Documents library and creates folders by Author with a format of just %2.

Now when you upload Word documents to the Drop Off Library, the Author field will be automatically populated for you from what was in the document. The Content Organizer will see the content, route it to the ‘Documents’ library, and create a sub-folder for the author’s name based on the rule you created.

Of course, you can create more complicated scenarios where you’re using Quick Parts to capture data in your Word document and routing based on that information, but this is a quick way to leverage the content organizer.

Don’t Deactivate that Site Template Solution

It’s open season for hunting down SharePoint 2010 bugs, and I’ve found a particularly ugly one. I created a site from a site template, and when I went into the solution gallery and deactivated the solution (after the site was created), the solution gallery began throwing an exception:

[NullReferenceException: Object reference not set to an instance of an object.]
Microsoft.SharePoint.WebPartPages.ListViewWebPart.PrepareContentTypeFilter(SPList list, Hashtable[] excludedTransformers) +176
Microsoft.SharePoint.WebPartPages.ListViewWebPart.GenerateDocConvScriptBlock(SPWeb web, SPList list) +482
Microsoft.SharePoint.WebPartPages.ListViewWebPart.OnPreRender(EventArgs e) +1957
Microsoft.SharePoint.WebPartPages.WebPartMobileAdapter.OnPreRender(EventArgs e) +78
System.Web.UI.Control.PreRenderRecursiveInternal() +11025422
System.Web.UI.Control.PreRenderRecursiveInternal() +223
System.Web.UI.Control.PreRenderRecursiveInternal() +223
System.Web.UI.Control.PreRenderRecursiveInternal() +223
System.Web.UI.Control.PreRenderRecursiveInternal() +223
System.Web.UI.Control.PreRenderRecursiveInternal() +223
System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3393

The specific steps are really simple…

  1. Create a blank site.
  2. Save the site as a template.
  3. Create a new site collection, but don’t select a template.
  4. Upload the template created in step #2 to the solution gallery and activate it.
  5. Create the site with the template.
  6. Go to the solution gallery and deactivate the solution.

If you want or need to get back to a partially useful state, you can run the following in the SharePoint Management Shell:

$site = Get-SPSite http://localhost/sites/mysitehere
$solutions = $site.Solutions
# Re-activate the deactivated solution; the argument is the solution's list item ID in the solution gallery (1 in this case).
$solutions.Add(1)

The moral of this story is: don’t deactivate the site template that you created a site from (at least not at the moment). In SharePoint 2007 you could remove a site template’s STP file once the site was created, because the STP file was essentially a macro that replayed the creation steps over the existing site definition. However, this isn’t the case with the new WSP format. The source of the items, including content types, is tracked – and may be removed if you remove the site template.

CS0016: Could not write to output file with SharePoint

I was at a client today and they were having all sorts of errors in SharePoint. One of them was that Central Administration pages were returning ‘unknown error’. When I turned on debugging I saw a server error in the application with these details…

CS0016: Could not write to output file ‘c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\242d0ff4\3604ddcb\en-US\App_GlobalResources.aaqxxojv.resources.dll’ — ‘The directory name is invalid.’

It was quick to find a Microsoft KB article (825791) for CS0016. It points to the fact that there may be a bad TMP or TEMP directory setting. We went into the service account’s profile and manually set the TMP and TEMP variables. However, the problem didn’t go away. After further digging we changed TMP and TEMP at the system level – because we discovered that someone had set the system values to an invalid directory. When we changed the system-level setting to a valid directory, the problem went away.
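
For what it’s worth, here’s a minimal sketch of checking and fixing the machine-level values from an elevated PowerShell prompt. C:\Windows\Temp is just an example of a directory that exists, and services may need a restart (or a reboot) before they see the new values.

# Inspect the machine-level TMP/TEMP values that services inherit.
[Environment]::GetEnvironmentVariable("TEMP", "Machine")
[Environment]::GetEnvironmentVariable("TMP", "Machine")
# Point them at a directory that actually exists, then restart IIS so the worker processes pick them up.
[Environment]::SetEnvironmentVariable("TEMP", "C:\Windows\Temp", "Machine")
[Environment]::SetEnvironmentVariable("TMP", "C:\Windows\Temp", "Machine")
iisreset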

XML Invalid Data and Byte Order Marker

While working on some InfoPath and workflow code I got bitten again by the Byte Order Marker, and I felt like I should document what’s going on. I was getting an exception: “The data at the root level is invalid. Line 1, position 1.” Here’s why:

The XML encoding that InfoPath uses is UTF-8, and InfoPath writes a byte order marker at the start of the file. When the raw bytes are decoded with UTF-8, that marker becomes the first character of the string. When XmlDocument sees this it’s confused – it expects the XML tag to appear at the very first character of the string you provide it. It’s simple to deal with – but frustrating to find. This code creates a new XmlDocument, extracts the file contents from SharePoint, and loads it into the document:

XmlDocument wfDoc = new XmlDocument();
Byte[] fileBytes = wfFile.OpenBinary();
string fileAsString = (new System.Text.UTF8Encoding()).GetString(fileBytes);
wfDoc.LoadXml(fileAsString.Substring(1)); // Skip the Byte Order Marker at the beginning of the file

A better approach is to hand off the stream to XmlDocument:

XmlDocument wfDoc = new XmlDocument();
using (Stream wfFileStrm = wfFile.OpenBinaryStream())
{
    wfDoc.Load(wfFileStrm);
}

This will load fine without stripping the Byte Order Marker – but in my case, this isn’t supported in SharePoint Sandbox code because the System.IO.Stream type isn’t allowed.

Slide Decks from SharePoint Pro 2010 Summit

Last week I had the pleasure of delivering a pre-conference session and three regular sessions at SharePoint Pro in Las Vegas. Several folks have asked to get copies of the slide decks, so I’ve uploaded them with links below:

I should say that there were plenty of demos, which obviously aren’t in the decks. However, the decks should be a useful reminder if you were there.

Creating Intelligent Content Types with Word, SharePoint Designer, and Visual Studio

In Office and SharePoint 2007 we had the capability of creating a content type that was intelligent – one that allowed you to specify fields in your document and have those promoted to columns in the SharePoint library the content type is used in. In fact, I wrote about this in my “Managing Enterprise Metadata with Content Types” whitepaper for Microsoft. However, in that whitepaper I only talked about creating the content type from the user interface – you couldn’t pick that content type up and move it from one site collection to another. Of course, in SharePoint 2010 you can use the Managed Metadata Service, but that requires SharePoint Server – and a connection between farms. What if you want to pick up a content type and move it from one farm to another that aren’t connected – or if you don’t have SharePoint Server?

The good news is that the tools in Visual Studio 2010 are much better than in Visual Studio 2008. I put together a video as a part of my work for SharePointPro Summit. The video shows how to create a new hire form and then turn that into a package that can be redeployed.

Take a look at how easy it is with the new tools to Create a Content type and package it.

Infrastructure Ripple Effect – The Story of Servers, Racks, and Power

A few months ago I decided that I needed a new server. (It was actually several months ago, but a few months ago I gave in and decided to buy it.) My server infrastructure was outdated. I did pick up a new server to put at the colocation center a year and a half ago. However, that server isn’t local to me, and I still need something local for file storage, DNS, DHCP, etc.

I also needed to have a Hyper-V host machine for some work I’m doing with Microsoft on the 10232A “Designing Applications and Solutions for Microsoft SharePoint 2010” class (i.e., the professional SharePoint development course). My Lenovo T61p laptop doesn’t work as a Hyper-V host – or at least hadn’t worked until recently. Ultimately, the fact that Hyper-V disables Suspend and Hibernate means it’s not a good fit for the laptop.

So I decided to buy a Dell R710 rackmount server. I added some processors, memory, disks, etc., so that it can be a complete virtualization platform here in my office. I also got the remote access card so I could check on the server while I’m not here. Anyway, that all seemed fine – I mean, I had other servers in my rack already… that was until it arrived.

I had conveniently forgotten that I have a telecommunications rack, which is only 24 inches deep, while servers (at least the professional servers) expect a 30-inch-deep rack. Setting the server into my old rack was semi-comical, as it stuck out both the front and back of the rack. So… I picked up a new rack – an IBM NetBay 25U that works pretty well in the space I have. The rack is way oversized for the 2U of server and the 1U of switch gear, but 25U was the right size at the right price.

When I ordered the server I picked dual power supplies to minimize points of failure. Of course, if you have two power supplies you might as well have two separate electrical circuits – so I paid the handyman to install a second 20-amp circuit in my server/storage room. (I even had him put it on the opposite leg of the power, just in case we had a single-leg power failure.) That led to a desire for two UPSs… and they might as well be rackmount, since I’ve got all that extra space in the rack. I’ve had good experiences with APC UPSs both personally and professionally, so I settled on the SUA2200RM2U – 2200 VA, or a full 20 amps of capacity.

That led me to try to install the APC PowerChute software on the Dell, which by now was running Hyper-V and several virtual machines. What I discovered is that the PowerChute software doesn’t support Hyper-V. A bit of research led me to Ben’s post about support for UPSs in Windows. That’s fine, but I really wanted automated testing and better support than is available out of Windows.

Of course, PowerChute Network Shutdown supports Hyper-V – or so I was told. So I bought the network cards for the UPSs and installed them. Getting them configured was real fun, because you have to use the APC serial cable – and you have to disconnect the USB connection before the serial port will work. I finally got them configured and discovered that, despite what an APC agent told me, the free PowerChute Network Shutdown software doesn’t support Hyper-V. Instead, there’s a specific Hyper-V version that does support Windows Hyper-V hosts – but of course it costs another $99. (At this point I’m not too concerned about the cost, but it is frustrating that it isn’t included with the network cards and that an agent had told me it wasn’t necessary.)

I still don’t have it all configured exactly as I want it. However, at least the fundamentals are in place. I’ve probably got one more purchase to make. My Linksys SGE2000 switch can have a secondary power supply (an RPS1000) attached to it. The RPS1000 would allow me to plug the switch’s built-in power supply into the first UPS and the RPS1000 into the second UPS. Thus the switch would keep running even if the first UPS had a problem. I wouldn’t do this except that, now that I have the UPSs on network cards, the switch becomes a point of failure in a power outage situation.

I am reminded that any change creates a ton of little ripples.

The Public Debut of Super Pig

At the SharePoint Conference 2009 they were handing out flying pigs – capes included. So my son and I developed a short storyboard, recruited a neighborhood friend, and put together a little short movie starring Super Pig (the flying pig given away at the SharePoint Conference). Take a look for yourself: https://thorprojects.com/wp-content/uploads/2015/07/SuperPig.wmv

I’ve had this put together for a while – and since everyone I’ve shown it to has liked it so much, I had to share it with the world.
