
Honest Evaluation

It's time for another installment of the Nine Keys to SharePoint Success. It's been nearly nine months since I wrote the original blog post that defined the nine key things your SharePoint implementation should have to be successful. This is key number five and the first in the area of cultural change for the organization. This post has been by far the hardest for me to write: we're moving into areas which are more sensitive to the organization – in part because they echo how we exist as humans.

The psychological behaviorists argued that introspection should be abandoned as a psychological technique in favor of things which were objective and measurable. While I agree that we cannot rely solely on introspection to understand ourselves, I also believe that we should use introspection to learn more about ourselves. Alcoholics Anonymous kicked off a 12-step movement which has resonated around the globe as a model for treating all kinds of addictive behaviors. Step four is "Made a searching and fearless moral inventory of ourselves." This is clearly introspection – as objective as our egos will allow. The irony is that the 12-step technique succeeds at changing behavior in part because of a heavy dose of the very introspection the behaviorists rejected. Here, as in a 12-step program, our goal is to change behavior – to ensure that we're doing the things that lead to better SharePoint outcomes.

Those who have been in and around 12-step programs will tell you that step four – the introspection – is one of the most difficult steps in the program. Our ego actively protects us from admitting our faults. (As Daniel Gilbert put it in Stumbling on Happiness, it's our psychological immune system.) Every person has faults. I have faults and limitations. You have weaknesses and opportunities for growth. Your best friend struggles with something. Your mother sometimes fails. Your father doesn't always succeed. Despite this, we all fall into the trap of trying to hide our faults. We feel like we need to portray an image of perfection to the outside world. Those with a Christian church background may have heard Romans 3:23, "for all have sinned and fall short of the glory of God." You may notice that the word is "all," not "some." We all have our limitations. None of us is perfect. It's just as true that we all want to hide these faults.

If you're still with me, you may be asking how our personal hang-ups impact the organization. Well, we reflect (or project) our personal beliefs into the organization. Jonathan Haidt in The Happiness Hypothesis wrote about our propensity to personalize input. Instead of accepting it as a statement, we internalize it as a character flaw. The key here is that a character flaw is much harder to solve than a simple weakness. I say simple weakness because weaknesses can be simple to resolve. I do not have perfect vision; neither do three quarters of the population. We don't think of ourselves as fundamentally and irretrievably flawed because we can't see perfectly. Neither does someone walking down the street say "Look at that poor person, they have to wear glasses." If you're one of the few people who don't need vision correction, I'm sure you've got another equally solvable limitation.

When we talk about gaps in the organization – whether they're simply structural observations about missing skills or an inventory of projects that have gone wrong – we have a strong tendency to make those problems about us. We believe that we failed in creating a corporate structure without gaps – as if that were possible. We believe that we should have somehow known exactly how and when each project would fail and should have prevented it. On the one hand we violently defend our weaknesses; on the other hand, internally, we're all too happy to heap on the blame.

The sinister part of our resistance to acknowledge and accept our limitations is that this is the core of how we hold ourselves back in life. If your struggle is anger management you may find that you spend a great deal of time and money on drywall repair. If you’re bad at balancing a checkbook or paying your bills, you’ll spend more money on overdraft and late fees. An anger management class would be cheaper than constantly repairing drywall. Some assistance with a system for balancing a checkbook and paying bills on time would save much more than you spend.

At an organizational level we’re positively lousy at the same sorts of introspection that we struggle with personally. To admit that we have a flaw or a gap in the organization means that we’re somehow personally flawed as well. We personalize that a defect means that someone isn’t doing their job – even if no one has the job to do.

Consider the situation where an organization is attempting to implement an Enterprise Content Management (ECM) system. They've never had an ECM system, and no one in the organization knows how to create the metadata structure for the files. There is no experience with creating an information architecture, and not surprisingly, when the solution is implemented the poor information architecture results in lower adoption. Who is to "blame" in this situation? No one has the skill – it may be that no one even realized the skill was necessary. In a typical post mortem of the project (use "after action review" if that's more comfortable for you), information architecture is identified as the problem – however, someone will likely get tagged as the person who should have done it better.

This intent on identifying a person to blame for a weakness leads the person in the hot seat to make excuses. What else can they do? Everyone is looking for a scapegoat, and they really don't want to be it. Like children crying out "Not it!" immediately after someone proposes an unpleasant role, we try to keep from being the center of the problem.

Blame and Fault Finding

There’s a radical difference between getting to the root of a problem – to figure out what happened or what went wrong – and trying to figure out who was wrong and who is at fault. Honest evaluation is about finding the root of the problems and an awareness of what the organization is good at – and is not good at.

There are numerous reasons, rooted in human nature and psychology, that lead us to immediately deflect blame from ourselves onto other people. The simple truth is that if a system fails, then we're at fault (we're to blame) for the failure – a failure to realize that the system would fail. However, if the problem is another person, we feel absolved – because we're not responsible for other people. Despite our lofty talk about finding problems with systems, we often revert to finding problems with people.

Consider for a moment the 5 Whys technique used for root cause analysis. How could a time-honored approach built on five simple, one-word questions lead to problems? The answer lies in the perceived tone of the question. Questions based on "Why" are often accusatory – thereby leading to a defensive response containing deflection, excuses, and often confusion. Even when a why question is asked from a heart of curiosity, it can be heard in an accusatory tone. This is particularly evident when the person has experienced accusatory people in similar roles or has had a significant personal relationship with an accusatory bias (e.g., a parent or spouse). Business analysts – who are responsible for eliciting requirements and matching solutions to them – are often encouraged to never ask "Why" questions.

As I mentioned in my review of Diffusion of Innovations, there's a tendency even in viewing innovations to want to blame the people instead of the technology – as was the case in the design of car safety. By explaining away the problem as "idiots behind the wheel," we fail to look at how we could design systems to improve the chances for success.

Ultimately, blaming or finding fault in a person doesn't help, because every human is imperfect. Even if you remove that person from the system, the next human will make mistakes too – and maybe different mistakes that are more difficult to detect. There's an implicit assumption that humans will be perfect, instead of assuming that people are necessarily flawed and that the systems we put around them should expect and accommodate this.

It isn't that a particular person does – or does not – have flaws. We all have flaws. The better question, the one that leads to a better evaluation, is which components of the system are insufficient to ensure the success of a project. If you recognize a gap in requirements gathering, project management, information architecture, development, or infrastructure – why not supplement your team with the additional skills you need up front instead of suffering through a failing implementation? (The old saying that a stitch in time saves nine is appropriate here.)

Ownership and Acceptance

There are two key skills that will defuse the blame and fault finding. They will help evaporate the personalization. The first is simply owning a problem. This sounds counterintuitive – and it is – however, it's amazingly effective at stopping a "witch hunt." When you own the problem – when you admit that somewhere in your area of responsibility there is in fact a problem and you're aware of it – you stop others from trying to prove to you that you have a problem.

It may sound like I’m saying that you should personalize the problem and own it. That is partially correct. I’m suggesting that you own it. You admit that it was something you could have done better. However, personalization is about making it about who you are – I’m not suggesting that. I’m suggesting that you can be imperfect and make a mistake without it being a character flaw. If I am late for a meeting it doesn’t mean that I can’t tell time.

The other skill is the one you need to be able to own the problems. That is acceptance. Accepting that you can make a mistake – and that all people make mistakes is incredibly freeing. When you accept this fundamental truth, you are more willing to own your part in causing bad outcomes – and you’re more willing to accept it from others.

Working at Getting Help

In an economy we have to make decisions about what we will and won't do. We don't get to make our decisions in a vacuum; we have to decide where to invest our efforts based on the greatest benefit. However, sometimes we make our investments based on what we know how to invest in – rather than the things which are hard to invest in.

If you’re used to investing in supplementing your infrastructure skills with outside resources, chances are that you’ll continue to do this. You already know who to talk to. You know the vendors, the rates, and the issues. Making an investment in an area that you’ve never considered before – like information architecture – is challenging because you don’t know who to call nor do you know what to expect. What does it take to create an information architecture? What exercises will you go through? How much time will it take?

Interestingly, if you're familiar with infrastructure – even if you've outsourced it for years – you will have built up an awareness of what needs to happen. You know about the planning, the testing, and the day-of activities. Because you've been part of successful projects – run by outside parties – you've become more able to successfully implement something that may previously have been difficult. However, nothing is as easy as letting someone else manage it – unless handing off the infrastructure part of the problem means that you won't be able to manage some other part of the project.

Honest evaluation is – in part – a revisiting of the evaluations that you've made in the past. There was a time when the most cost-effective way to put together a flyer was to hire a printing company, because they owned specialized and expensive desktop publishing software. Today, with Microsoft Word and Microsoft Publisher, most of us wouldn't think of asking a printer to create a simple flyer. Either you'll ask a design firm for something important – or you'll do it yourself if it's simple. We used to send out to have color copies made; now most of our copiers – now called multi-function devices – can print in color. In my office, I have a sophisticated DVD duplicator which will print directly on the surface of the DVD after it's burned. There was a time – not so long ago – when you would pay a specialized company to burn 500 DVDs and warehouse those that you didn't sell. Today we print up batches of 20 or 50 and sell out of our inventory until we make more. Also, because of the DVD production, we have a professional paper cutter capable of cutting hundreds of sheets of paper simultaneously, so we don't even send our DVD covers out to be cut.

Honest evaluation is knowing where the inflection point is between having had something outsourced and when it’s time to get the solution accomplished differently.

Pre-mortem

Most folks are familiar with after-incident reviews and post-project reviews called post mortems. Those reviews, however, happen after the event. The review may be helpful for the next event (or project), but it does little for the existing one – and as was pointed out in Lost Knowledge, the "lack of absorptive capacity" may make it difficult for the knowledge to be used on the next project. So a useful exercise is a pre-mortem, as recommended in Sources of Power. This exercise requires that the participants accept that the project has failed at some future time and try to determine why. The goal is to turn over every stone that could have led to the failure of the project. The perspective that it MUST have failed for the exercise, and that the goal is to identify the most likely candidates for the failure, often exposes many things which would not otherwise have been thought of. (Psychologically speaking, it's difficult for humans to identify gaps in existing plans. Focusing on the idea that there was a failure forces you to look for them differently.)

Facts not Feelings

One of the other keys to getting to an honest evaluation is staying focused on the facts, not the feelings. It's one thing to say "I think we manage projects pretty well around here." That's a feeling. What does the data actually say? How many projects come in on-time and on-budget? If you're like most organizations, not many. However, knowing this and examining the reasons – frequently changing requirements or bad requirements – gives you a place to look when doing your pre-mortem. It might be easy to say that I'm speaking out of both sides of my mouth here – saying that post mortems aren't valuable and then suggesting that you use the output of previous post mortems as feedback to the pre-mortem process. However, that's not really what I'm saying.

What I’m saying is that you should absolutely do post-mortems – however, you should recognize that you may not be able to get the value out of them – unless you use them as feedback into a process of actively evaluating the next project.

Look Outside

Developers shouldn't test their own code. Authors shouldn't edit their own work. Why? Because the same cognitive processing that led to the fault will keep you from seeing the fault during any sort of review. The value of an outside perspective is that it gives you a different way to process your situation. In Thinking, Fast and Slow, Kahneman spends a great deal of time on the two "systems" in our heads and how taking an outside view can be valuable: embedded experts often fall into groupthink and fail to process what they know to be true because of the context of the question. He also discusses at length the problem of WYSIATI – What You See Is All There Is. This bias is built on the assumption that you have no blind spots – which, in truth, all of us have.

Pulling in outside resources brings in new views and perspectives that change what the group sees. Whether it’s bringing in – or bringing together non-competitive peers – or bringing in an experienced consultant, the different perspectives will lead you to a different set of challenges – and different ways of evaluating whether your organization has the skills and talents necessary to be successful.

Putting it Together

Honest evaluation is by no means easy. It’s by no means automatic. However, without truly assessing your strengths and your weaknesses, how can you possibly expect that you’ll be successful at a SharePoint project which is so complex and difficult to get right?

Appearance: RunAs Radio – Robert Bogue Makes Ten Mistakes with SharePoint!

I'm pleased to share that last week I got a chance to sit down with Richard Campbell face-to-face here in Indianapolis and record an episode of RunAs Radio, creatively titled "Robert Bogue Makes Ten Mistakes with SharePoint!" Check it out and tell me what you think of the conversation. We got a chance to talk through the 10 most common non-SharePoint technical mistakes that people make when setting up SharePoint. Oh, and we got off topic about things like load balancers and load/scalability testing.

Including TypeScript in a SharePoint Project

If you missed it, Microsoft announced a new language, TypeScript, that is a superset of JavaScript and compiles down into regular JavaScript. The compiler is itself written in JavaScript and works everywhere. The big benefit of the language is type checking at compile/edit time. If you want to learn more, go to http://www.TypeScriptLang.org/.
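As a quick illustration of what that compile-time type checking buys you (the function and values here are my own examples, not part of any SharePoint API):

```typescript
// The type annotations below are checked at compile/edit time and
// erased entirely from the emitted JavaScript.
function listUrl(siteUrl: string, listName: string): string {
    return siteUrl + "/Lists/" + listName;
}

const url: string = listUrl("http://intranet", "Announcements");
console.log(url); // http://intranet/Lists/Announcements

// The next line would be rejected by the compiler before the code
// ever runs, because a number isn't assignable to a string:
// listUrl("http://intranet", 42);
```

Plain JavaScript would happily accept the bad call and fail at runtime; TypeScript surfaces the mistake while you're still in the editor.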

There is some rather good tooling and documentation, but one problem for me was making TypeScript work from inside my SharePoint projects after I installed it. The way the SharePoint tools run, they do the deployment before the TypeScript compiler runs. That's not exactly helpful; however, you can fix this. First, right-click on your project in Solution Explorer and unload it.

Next, right-click it again and choose to edit the project file (TypeScriptFarmTest.csproj in my case).

Then you need to modify your Project node to include a new InitialTargets attribute pointing to TypeScriptCompile:

<Project ToolsVersion="4.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" InitialTargets="TypeScriptCompile">

Then, inside this Project node, insert a new Target element:

<Target Name="TypeScriptCompile" BeforeTargets="Build">
  <Message Text="Compiling TypeScript..." />
  <Exec Command="&quot;$(PROGRAMFILES)\Microsoft SDKs\TypeScript\0.8.0.0\tsc&quot; -target ES5 @(TypeScriptCompile ->'&quot;%(fullpath)&quot;', ' ')" />
</Target>

From here, save the file, close it, then right-click the project and select Reload Project. Now all your TypeScript will compile into JavaScript before the deployment happens – actually before anything else happens, because the Project's InitialTargets attribute tells Visual Studio and MSBuild to run this target first.

SharePoint Profile Memberships in 2010

There was a lot of talk about how the User Profile memberships in SharePoint 2007 worked. The net effect was that the memberships were stored in a profile property MemberOf (internally SPS-MemberOf), driven by a timer job whose name ended with 'User Profile to SharePoint Full Synchronization'. However, this changed slightly in 2010 – and the change mattered. In SharePoint 2010 the memberships got their own Memberships collection property off the UserProfile object – a full collection for storing the values.

This didn't change the requirement that, to be listed, the user has to be a member of the site's Members group – not the Owners, only the Members. So the list still has its issues from a usability perspective. The memberships for a user show up in profiles in two places: Memberships and Content. Memberships shows a vertical listing of sites. Content lets you navigate across a horizontal bar of sites and libraries.

In all honesty, I generally recommend that organizations replace the Memberships and Content functionality of a My Site with a library in the user's My Site containing links to the sites they have permissions in. I've done this various ways – including opt-in by the user from the personal menu – but no matter how it's done, we invariably find that users understand the list better when they're managing it themselves than when it is being driven by their permissions to the sites. However, in this case the company didn't consider replacing these, and they were up against their deadline for implementing the intranet.

The client was reporting that the memberships in their user profiles were out of date. Sites that no longer existed were showing up, and users were getting access denied when trying to reach some of the sites through either the Memberships tab or the Content tab. Upon digging, I found that they had split their farm from a single global 2007 environment into regionally deployed 2010 environments, and the 2010 environment had migrated the global 2007 profile service. The net effect was that they inherited the memberships for all of the sites across the globe – but the user profile service was only updating memberships for the URLs that the local farm owned. So there were numerous memberships for non-local SharePoint sites that were no longer being updated.

I should say that there are plenty of longer-term answers to the problem of managing a single user profile and its memberships across a global organization, but for right now they decided they wanted to remove all of the memberships so the list wouldn't be wrong. As it turns out, the code to do this is relatively simple:

using Microsoft.Office.Server.UserProfiles;
using Microsoft.SharePoint;

// Connect to the User Profile Service for the default proxy group.
SPServiceContext svcCtx = SPServiceContext.GetContext(
    SPServiceApplicationProxyGroup.Default, SPSiteSubscriptionIdentifier.Default);
UserProfileManager upm = new UserProfileManager(svcCtx);

// Wipe the membership collection for every profile; the
// synchronization timer job will re-add memberships for
// the sites this farm owns.
foreach (UserProfile user in upm)
{
    MembershipManager mm = user.Memberships;
    mm.DeleteAll();
}

Run the code above and all memberships will disappear. If you then run the synchronization timer job, it will re-add the sites local to the farm.

This took an amazingly large amount of time to track down given the relative simplicity of the final answer.

SharePoint Search across the Globe

Several of my global clients have approached me over the last few weeks in some stage of planning or implementation of a global search solution. So I wanted to take a few moments and talk through global search configuration options, including the general perceptions we have, the research on how users process options, the technology limitations – and the options. The goal here is to be a primer for a conversation about how to create a search configuration that works for a global enterprise.

Single Relevance

There's only one way to get a single relevance ranking across all pieces of content – have the same farm do the indexing for every piece of content. Because relevance is based on many factors – including how popular various words are – the net effect is that if you want everything in exactly the right relevance order, you'll have to do all of the indexing from a single farm. (To address the obvious question from my SharePoint readers: neither FAST nor SharePoint 2013 resolves this problem.)

OK, so consider that in order to accomplish the utopian goal of having all search results for the entire globe in a single relevance-ordered list, one farm is going to have to index everything – one massive search index. This means that you'll have to plan on bringing everything across the wire – and that's where the problems begin.

Search Indexing

In SharePoint (and this mostly applies to all search engines), the crawler component indexes all of the content by loading it locally (through a protocol handler, in the case of SharePoint), breaking it into meaningful text (via an IFilter), and finally recording it into the search database. This is a very intensive process, and by its very nature it requires that all of the bits for a file travel across the network from the source server to the SharePoint server doing the indexing. Generally speaking this isn't an issue for local servers, because most local networks are fairly idle – there's not an abundance of traffic on them, so any additional traffic caused by indexing isn't that big of a deal. However, the story is very different on a wide area network.

On a WAN, most segments are significantly slower than their LAN counterparts. Consider that a typical LAN segment is 1 Gbps while a typical WAN connection is at most measured in megabits per second. Take a generous example of a 30 Mbps connection: the LAN is roughly 33 times faster. For smaller locations that might be running on 1.544 Mbps (T1) connections, the multiplier is much larger (~650). This level of difference is monumental. Also consider that most WAN connections are already at 80% utilization during the day.

Consider for a moment that if you want to bring every bit of a 500 GB database across a 1.544 Mbps connection, it will take about a month – not counting overhead or inefficiency. The problem is what happens when you need to do a full crawl, or when you need to reset the content index.
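The arithmetic behind that "about a month" estimate can be sketched quickly (sizes and link speeds are the ones from the text above):

```python
# Back-of-the-envelope estimate for dragging a full content database
# across a WAN link; ignores protocol overhead and competing traffic.
def transfer_days(size_gb, link_mbps):
    bits = size_gb * 8 * 10**9            # content size in bits
    seconds = bits / (link_mbps * 10**6)  # time at the link's full rate
    return seconds / 86400                # convert seconds to days

print(round(transfer_days(500, 1.544)))   # about 30 days -- roughly a month
```

At the 30 Mbps from the earlier example the same pull still takes about a day and a half, and that's before accounting for the 80% daytime utilization the link already carries.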

Normally, the indexing process looks for new content, reads just that, and indexes it. That generally isn't a big deal; we read and consume much more content than we create, so maybe 1% of the information in the index changes in a given day – in practical terms much less. Pulling one percent of the data across the wire isn't that hard. If you're doing incremental crawls every hour or so, you'll probably complete each one before the next kicks off. (Generally speaking, in my SharePoint environments incremental indexing takes about 15 minutes of every hour.) However, occasionally your search index becomes "corrupt." I don't mean that in a "the world is going to end" kind of way – just that an entry won't have the right information. In most cases you won't know that the data is wrong; the item just won't be returned in search results. The answer to this is to periodically run a full crawl to recrawl all the content.

During the time that the full crawl is running, incremental crawls can't run. As a result, while the indexer is recrawling all of the content, some recently changed content isn't being indexed. Users will perceive the index to be out of date – because it will be. If it takes a month to do a complete crawl of the content, then the search index may be as much as a month out of date. Generally speaking, that's not going to be useful to users.

You will schedule full crawls on a periodic basis – sometimes monthly, sometimes quarterly. Very rarely, however, you'll have a search event that leads to resetting the content index. In that case, the entire index is deleted and then a full crawl begins. This is worse than a regular full crawl because the index won't just be out of date – it will be incomplete.

In short, the amount of data that has to be pulled across the wire to have a single search index is just not practical. It requires far less data to pass user queries along to regionally deployed servers and aggregate the results on one page.

One Global Deployment

Some organizations have addressed this concern with a single global deployment of SharePoint – and certainly this does resolve the issue of a single set of search results but at the expense of everyday performance for the remote regions. I’ve recommended single global deployments for some organizations because of their needs – and regional deployments for other situations. The assumption I’m making in this post is that your environment has regional farms to minimize latency between the users and their data.

Federated Search Web Parts

Out of the box there is a federated search web part. It passes the page's query to a remote OpenSearch 1.0/1.1 compliant server and displays the results. Out of the box it is configured to connect to Microsoft's Bing search engine, but you can connect it to other search engines as well – including other SharePoint farms in different regions of the globe. The good news is that this allows users to issue a single search and get back results from multiple sources; however, there are some technical limitations, some of which may be problematic.
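For illustration, a federated location pointing at another SharePoint farm is defined by an OpenSearch query template along these lines – the server name here is hypothetical, and you should verify the exact endpoint against your own farm:

```
http://emea-portal/_layouts/srchrss.aspx?k={searchTerms}
```

The {searchTerms} token is the OpenSearch placeholder that SharePoint substitutes with the user's query before calling the remote farm's search RSS endpoint.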

Server Based Requests

While it's not technically required by the specification, in SharePoint's implementation the Federated Search Web Parts process the remote queries on the server – not on the client. That means the server must be able to connect to all of the locations that you want to use for federated search. In practical terms this may not be that difficult, but most folks frown on their servers having unfettered access to the Internet. As a result, having the servers run the federated searches may mean some firewall and/or proxy server changes.

The good news here is that federated search locations must be configured in the search service, so you'll know exactly which servers need to be reachable from the host SharePoint farm. The bad news is that if you're making requests to other farms in your environment, you'll need a way to pass user authentication from one server to another – and in internal situations that's handled by Kerberos.

Kerberos

All too many implementations I go into don't have Kerberos set up as an authentication protocol – or, more frequently, their clients are authenticating with NTLM rather than Kerberos for a variety of legitimate and illegitimate reasons. Let me start by saying that Kerberos, when correctly implemented, will help the performance of your site, so outside of the conversation about delegating authentication it's a good thing to implement in your environment.

Despite the relative fear in the market about setting up and using Kerberos, it really is as simple as setting SharePoint/IIS to use it (Negotiate), setting the service principal name (SPN) of the URL used to access the service on the service account, and setting the service account up for delegation. In truth, that's it. It's not magic – however, it is hard to debug, and as a result most people give up on setting it up. Fear of Kerberos and what's required to set it up correctly falls into what I would consider an illegitimate reason.
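The SPN part of that setup looks roughly like this from an elevated command prompt – the host name and service account here are hypothetical placeholders for your own:

```
rem Register the SPN for the URL users hit against the app pool account.
setspn -S HTTP/intranet.contoso.com CONTOSO\sp-apppool

rem List what's registered for the account to verify.
setspn -L CONTOSO\sp-apppool
```

The remaining step – delegation – is enabled on the service account's properties in Active Directory Users and Computers ("Trust this user for delegation").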

There is a legitimate reason why you wouldn’t be able to use Kerberos. Kerberos is mutual authentication. It requires that the workstation be trusted – which means that it has to be a domain joined PC. If you’ve got a large contingent of staff that don’t have domain joined machines, you’ll find that Kerberos won’t work for you.

Kerberos is required for one server to pass along the identity of a user to another server; this trusted delegation of a user's identity isn't supported by NTLM (or NTLMv2). In our search case, SharePoint trims the search results to only those a user can see – and thus the remote servers being queried need the identity of the user making the request. If authentication is done via NTLM, that identity can't be passed along – and as a result you won't get any results. So in order to use the out-of-the-box federated search web parts against another SharePoint farm, you must have Kerberos set up and configured correctly.

Roll Your Own

Of course, just because the out-of-the-box web parts use a server-side approach to querying the remote search engine – and therefore need Kerberos to work for security trimming – doesn't mean that you have to use them. It's certainly possible to write your own JavaScript-based web part that issues the query from the client side, so that the client transmits its own authentication to the remote server. However, as a practical matter this is more challenging than it first appears because of the transformation of results through XSLT. In my experience, clients haven't opted to build their own federated web parts.

User Experience

From a user experience perspective, the first thing users will notice when using the federated search web parts is that the results are in different “buckets” – and they're unlikely to like this. As noted at the start of this post, there's not much that can be done to resolve this problem from a technical perspective without creating larger issues around how “fresh” the index is. So while admittedly this isn't our preference from a user experience perspective, there aren't great answers for resolving it.

Before dismissing this entirely, I need to say that some folks have decided to live with the fact that relevance won't be exactly right and are comingling the results, dealing with the set of issues that arise from that – including how to manage paging and what to do about faceted search refinement, that is, the property-value selection typically on the left-hand side of the page. When you're pulling from multiple sources you have to aggregate these refiners and manage the paging yourself – this turns out to be a non-trivial exercise, and one that doesn't appear to improve the situation much.

Hick's Law

One of the most often misused “laws” in user experience design is Hick's Law. It states, basically, that given one longer list of items versus the same items split into two smaller lists, a user will be able to find what they're looking for faster in the single list. (Sorry, this is a gross oversimplification; follow the link for more details.) The key is that this oversimplification ignores two facts. First, the user must understand the ordering of the results. Second, they must understand what they're looking for – that is, they have to know the exact language being used. In the case of search, neither of these two requirements is met. The ordering is non-obvious, and the exact title of the result is rarely known by the user who's searching.

What this means is that although intuitively we “know” that having all the results in a single list will be better, the research doesn't support this position. In fact, some of the research quoted by Barry Schwartz in The Paradox of Choice seems to indicate that meaningful partitioning can be very valuable in reducing anxiety and improving performance. I'm not advocating that you break up search results that you can get together – rather, I'm saying that comingled results may be of less value than we perceive them to be.

Refiners and Paging

One of the challenges with the federated search user experience is that the facets will be driven off the primary results, so the results from other geographies won't show in the refiners list. Nor is there paging on the federated search web parts. As a result, the federated results web parts should be viewed as “teasers” inviting users to take the highly relevant results or to click over to the other geography to refine their searches. The federated search web part includes the concept of a “more…” link to reach the federated search results' source. Ideally the look and feel – and global navigation – between the search locations will be similar so as to not be a jarring experience for users.

Putting it Together

Having a single set of results may not be feasible from a technology standpoint today; however, with careful consideration of how users search and how they view search results, you can build easy-to-consume experiences for the user. Relying on a model where users have regional deployments for their search needs – which provides some geographic division between results but also minimizes the total number of places they need to go for search – can help users find what they're looking for quickly and easily.

Adding a Google Search Option in Your SharePoint 2010 Search Scopes Without Code

It turns out that integrating internet search providers into the search scopes drop-down list in SharePoint is relatively easy and doesn't require any code. Here's how you can do it.

Create the Scope

If you want to create a global scope, go into Central Administration, then Service Management for the Search Service, and finally Scopes on the left; or you can create the scope at the site collection level by selecting the Search Scopes option from Site Settings as shown:

Once there, you need to create a new scope and set it up as follows:

Note that you'll want to check the Search Dropdown checkbox to get the scope to show up in the search scopes drop-down, and that the target results page being referred to must actually be created. Also, you'll need to add a rule to the scope. Once you hit OK you'll be back at the list of scopes and there will be a link to add rules. You should add a rule for all content (because it's simple). The Add Rules page should look like this:

Once you hit OK you’ve got a fully functioning scope that’s ready for use – just as soon as the system gets around to it. While we’re waiting, let’s go setup the page.

Creating the Redirect Page

In my case, I am using a simple search center for this demo (SRCHLITE), so I went in, copied the default.aspx page, and removed the existing web parts. Then I added a Content Editor Web Part with some JavaScript to do a redirect to the Google search page. Here's the bit that's important: SharePoint automatically appends the search terms to the query string as k= (k is for keyword). Google needs the query to be q=, so we'll have our JavaScript change the k= to a q=. The script looks like this:

<script language="javascript">
    var queryString = window.location.search;
    if (queryString.indexOf('k=') != -1) {
        var fullUrl = 'http://www.google.com/search' + queryString.replace('k=', 'q=');
        window.location.replace(fullUrl);
    }
</script>

You'll notice that in the script I check to see if there's a k= in the query string and only redirect if there is one – this is so we can manage the page without being redirected. I wrapped this script up into a Content Editor Web Part (which you can get here). I added this web part to my page and it was ready to redirect me to Google.
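If you want to sanity-check the string handling outside the browser, the same swap can be done in plain JavaScript. This helper is just for illustration – the web part only needs the script above:

```javascript
// Stand-alone version of the k= to q= swap (illustration only):
function toGoogleQuery(queryString) {
  if (queryString.indexOf('k=') === -1) {
    return null; // no keyword in the query string: don't redirect
  }
  return 'http://www.google.com/search' + queryString.replace('k=', 'q=');
}

toGoogleQuery('?k=sharepoint'); // 'http://www.google.com/search?q=sharepoint'
toGoogleQuery('?foo=bar');      // null – the page can be edited without redirecting
```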

Processing Scopes

If I were patient (and I'm not) I could just wait for everything to be ready; however, I can also go back into Search management (through Central Administration) and, if “Scopes needing update” is shown, there is also a “Start update now” link you can click to force search to compile the scopes for use, as shown:

Once the search scope is compiled you can enable the scopes drop down list and use the link to Google.

Turning on Scopes for Most Pages

For the standard search box you can enable the scopes drop down list by going to Site Settings – Search Settings and changing the search dropdown mode to show scopes:

Once you hit OK, everywhere there is a search box in the site collection should now have a scopes drop-down list – except on a search page, which has its own settings.

Turning on Scopes for a Search Page

For a search page you can turn on the scopes drop-down by placing the page in edit mode, then selecting Edit Web Part from the web part control menu (upper right). Expand the Scopes Dropdown section and select ‘Show scopes dropdown’. This will cause the scopes to show.

Testing

To test the setup just go into search, select the Google search scope and enter a search term.

Working with Settings Pages in Windows 8 JavaScript Applications

Adding additional page fragments to be navigated to in the grid application is relatively straightforward: just select the Page Control new item and start modifying the code. However, while this works well for page fragments in the application, it doesn't work all that well for settings pages. In Windows 8 you're supposed to leverage the existing Settings charm – however, adding your settings pages can be a bit challenging because you're supposed to transparently save the user's preferences, which can be tricky given that the unload event won't fire on the settings pages. So in this blog post I'll cover the three main components of making a working settings page.

Registering the Settings

The first step is connecting your settings pages to the Settings charm in the first place. So in your default.js (presuming you’re using the grid app as your starting point) paste the following code immediately under the variable declarations (so it happens first):

// Register Settings Values
app.onsettings = function (e) {
    e.detail.applicationcommands = {
        "about": {
            href: "/pages/about/about.html",
            title: "About"
        },
        "settings": {
            href: "/pages/Settings/Settings.html",
            title: "Settings"
        }
    };
    WinJS.UI.SettingsFlyout.populateSettings(e);
};

This will register two pages on the settings menu – one for About and one for Settings. (Which I should probably call something different.)

Settings Page HTML

The next step is to convert the HTML in the Page Control .HTML file so that it can be used by settings. That should look something like this:

<div id="settingsContainer" data-win-control="WinJS.UI.SettingsFlyout" aria-label="About the application" data-win-options="{settingsCommandId:'settings',width:'narrow'}">
    <div class="win-ui-dark win-header">
        <button type="button" onclick="WinJS.UI.SettingsFlyout.show()" class="win-backbutton"></button>
        <div class="win-label">Server Settings</div>
    </div>
    <div class="win-content">
        <div class="win-settings-section">
            <form>
                <!-- Insert settings UI here -->
            </form>
        </div>
    </div>
</div>

The keys here are:

  • Provide a DIV attached to the WinJS.UI.SettingsFlyout control, including options for the command id and the preferred width (narrow or wide).
  • Provide a unique ID for the DIV so you can fetch it in code.
  • Inside an inner set of DIVs (classed with win-content and win-settings-section) include the controls and labels to capture your configuration.

Settings Page JavaScript

The Page Control JavaScript that was generated has a nice structure, but it also has some problems. Left as-is, the unload event will never be called. That's a problem, since the Windows 8 UI guidelines call for no Save button: when the user clicks outside the settings page, their settings are supposed to be saved automatically. We can make this happen by registering for the afterhide event on the WinJS.UI.SettingsFlyout container. However, this provides a relatively odd context when the event comes back in, so we need a way to work around that. Here's a sample of some of the code we need to make this all work:

var self = null;

WinJS.UI.Pages.define("/pages/Settings/Settings.html", {
    ready: function (element, options) {
        getSettings(); // read stored settings and apply them to the HTML
        document.getElementById("settingsContainer").winControl.addEventListener("afterhide", this.unload);
        self = this;
    },

    unload: function () { // doesn't appear to be called by the framework itself
        self.updateSettings();
    },

    updateSettings: function () {
        updateSettings(settings); // helper: copies values from the DOM into settings
        storeSettings(settings);  // helper: persists the settings object
    }
});

I've omitted some of the lines for brevity. In this case we're creating a ‘self’ variable to which we'll assign this so we can use it later – that's going to help fix our event handling concerns in a moment.

Inside the ready method/event we get our settings and apply them to the HTML – that's being done in getSettings(). Then we register our afterhide event, and finally we set our ‘self’ variable to ‘this’ so we'll have it later.

When the settings page is hidden it calls the unload function, which uses the ‘self’ variable we created earlier to call updateSettings. Again, the self variable is necessary because this within the context of an event handler is the DOM element that triggered the event – which isn't what we want. updateSettings uses helper functions to update the computer settings from the DOM and then store them.

As I said earlier, unload doesn't get called automatically, and you're supposed to be making your updates as the user makes them. That's fine for toggle buttons, but for text boxes there aren't good ways to capture the end of the text entry. You would normally expect to be able to use onblur, but this doesn't work if the user clicks off the settings page fragment, so we need something that will get the form after it's been pulled from the screen – thus the afterhide event.

The convention is to save settings without a Save button, so you'll want to save all the values from the screen to application settings as soon as the user exits. That is the job of the storeComputerSettings() method:

function storeComputerSettings(computerSettings) {
    var roaming = Windows.Storage.ApplicationData.current.roamingSettings;
    for (var prop in computerSettings) {
        roaming[prop] = computerSettings[prop];
    }
}

In my case I pass in an object with the appropriate properties defined. I just take those and stuff them into the roaming settings for the application. Once they're there I can come back and access them again – even on different devices. You'll note that this approach simply pushes everything in the object passed in into roamingSettings.
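Since roamingSettings is just a string-keyed property bag, the copy loop can be exercised with a plain object standing in for it (the WinRT object itself obviously isn't available outside an app; the property names below are illustrative):

```javascript
// Plain-object stand-in for Windows.Storage.ApplicationData.current.roamingSettings;
// the copy loop mirrors the one in storeComputerSettings above.
function copySettings(settings, roaming) {
  for (var prop in settings) {
    if (Object.prototype.hasOwnProperty.call(settings, prop)) {
      roaming[prop] = settings[prop];
    }
  }
  return roaming;
}

var bag = copySettings({ server: 'contoso', refreshMinutes: 15 }, {});
// bag.server === 'contoso'; bag.refreshMinutes === 15
```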

So with these pieces you've got a settings page that can store values when the user exits it – something that should be simple but was far more complicated than necessary.

Windows 8 JavaScript Notification Toast

I'm working my way through a set of topics for Windows 8 application development with HTML and JavaScript, and I decided that I wanted to display toast notifications for some things – basically I wanted to let the user know that their settings were saved. Ultimately I'll pull these, but for now they're also useful for debugging: I know that my background threads completed successfully without having to set a breakpoint. There are two key pieces to this process. First, the application needs to support notifications, and that's in package.appxmanifest on the Application UI tab:

You'll see a section for notifications – and an option to make the app Toast capable. Then you need some code to show a toast message. Here's something straightforward that you can adapt to suit your needs:

function sendLocalNotification(message) {
    // Start from the simplest toast template (a single line of text)
    var template = Windows.UI.Notifications.ToastTemplateType.toastText01;
    var contentXml = Windows.UI.Notifications.ToastNotificationManager.getTemplateContent(template);

    // Put the message into the template's text element
    var toastTextElements = contentXml.getElementsByTagName("text");
    toastTextElements[0].appendChild(contentXml.createTextNode(message));

    // Create the toast and show it
    var toast = new Windows.UI.Notifications.ToastNotification(contentXml);
    var toastNotifier = Windows.UI.Notifications.ToastNotificationManager.createToastNotifier();
    toastNotifier.show(toast);
}

Happy toasting.

Working with the Windows 8 Visual Studio JavaScript Grid App – Part 1 – Data and Navigation

I’ve been working on learning Windows 8 application development and I’ve been struggling to get my head wrapped around some of the pieces, so I wanted to document some of the key pieces that I’ve learned about the Grid App template so far. Let’s start with Data.

Data.js

The data model that's part of the template is crazy: it's static initialization inside of the /js/data.js file. I wanted to eliminate that and make it read from a data file. The idea is that I can get quite a bit of what needs to be localized into a single file that will be easy(ish) to localize. To do that I replaced the call generateSampleData().forEach(function (item) { list.push(item); }); with my own generateData() method. It's the second method in the following image:

OK, so in this case I'm deploying the data file in a /data folder and it is called menu.data. I load it by using the Windows.Storage.FileIO API. Once I have it I parse it with JSON.parse(), then I iterate over each item, grab the group object, stuff a link to it into the item, and add the rest of the URL for the background image. I push that to the list that was defined above. The helper function above generateData() just locates the group. Yeah, resolveGroupReference() is very similar; however, it expects that the groups are on the items already.
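In sketch form, the stitching looks like this. The names and the menu.data layout here are my assumptions, and the real generateData() also does the async file read with Windows.Storage.FileIO, which is omitted:

```javascript
// Sketch of the item/group stitching in generateData(): the group key on each
// item is replaced with the full group object, and the background image path
// is completed before the item is pushed onto the list.
function buildItems(fileText, list) {
  var data = JSON.parse(fileText); // strict JSON: keys must be double-quoted
  data.items.forEach(function (item) {
    // replace the group key on the item with the full group object
    item.group = data.groups.filter(function (g) {
      return g.key === item.group;
    })[0];
    // complete the background image URL
    item.backgroundImage = '/data/images/' + item.backgroundImage;
    list.push(item);
  });
  return list;
}
```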

The data file itself isn't that bad – except that you have to deal with the fact that JSON.parse() is really particular about what it gets. It has to have the keys quoted, it rejects single-quoted strings and trailing commas, etc. Take a look at my simple sample file:
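A file along these lines parses cleanly (the contents below are an illustrative reconstruction, not my actual menu.data):

```javascript
// Hypothetical menu.data contents – note the double-quoted keys and the
// added "page" value on the far right:
var text = '{ "groups": [ { "key": "1group", "title": "Group One" } ],' +
           '  "items":  [ { "group": "1group", "title": "Item One",' +
           '                "backgroundImage": "item1.png",' +
           '                "page": "/pages/itemOne/itemOne.html" } ] }';
var data = JSON.parse(text); // parses cleanly: strict JSON

// The same content in looser syntax throws a SyntaxError:
// JSON.parse("{ key: 'value' }"); // unquoted key, single-quoted string
```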

You'll see the quoted values, and if you look to the far right you'll see a new key/value for page, which is the URL of the page to load for the item. The out-of-the-box grid app expects every item to load the same detail page – and every group to load the same page. I didn't want that. I wanted each item to be able to load its own page, thus adding the URL to the object. You'll also notice that my object has a .groups and a .items – the code above in generateData() and getGroupObject() just stitches the groups into the items like the data expects.

Note: I know this is a stupid/silly way to do the groups now, I’ve just not gone back and fixed it since doing so was more of an architectural change than I was comfortable making. If you do this you’ll want to manage this slightly differently and stuff the data for groups into a different list rather than doing the silly mapping that the Grid app does.

The final thing is that the way the grid app is created it automatically sorts groups based on the key so I just prefixed the key to get the order I wanted. I could change the compare function, but I was lazy.
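The prefix trick works because the default Array sort compares the keys as strings, so a leading digit controls the order (a hypothetical illustration; it's fine for up to nine groups):

```javascript
// Group keys prefixed with digits sort lexicographically into the order I want:
var groupKeys = ['3contact', '1home', '2news'];
groupKeys.sort();
// groupKeys is now ['1home', '2news', '3contact']
```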

groupedItems.html

The application starts with default.html, but the control on default.html loads an Application.PageControlNavigator that loads HTML fragments into the page for you. The data-win-options sets the parameters for the PageControlNavigator so that its home is /pages/groupedItems/groupedItems.html. That page has a few key pieces which I'll explain after the screen shot.

OK, in this page the key is actually at the bottom. The last DIV tag is really a WinJS.UI.ListView – which the code will stitch together with the data model to make everything get displayed. The first two DIVs are templates that the ListView will use. One key thing to note is that the group header (in the header template) is defined as a button – and it has a button handler. This will be important as we talk about navigation.

groupedItems.js

The way that the navigation and fragment pages work is that you do a WinJS.UI.Pages.define and provide the URL of the page plus a collection of members to tie to the page. These are mostly event handlers and callbacks that can be used from outside the page. The ready property/method/delegate will be called by the framework when the fragment is loaded and ready to go. This starts by binding the ListView control to its templates and defining a handler, _itemInvoked:

Later this method calls this._initializeLayout(), which actually binds the data source:
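Stripped of the WinJS objects, the decision _initializeLayout makes can be sketched like this (this is my reading of the template, not the generated code verbatim):

```javascript
// Snapped view: the groups themselves are fed in as the items and grouping is
// turned off. Full view: the items are grouped under the group headers.
function chooseDataSources(isSnapped, groups, items) {
  if (isSnapped) {
    return { itemDataSource: groups, groupDataSource: null };
  }
  return { itemDataSource: items, groupDataSource: groups };
}
```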

This does something crazy. If the application is in a snapped state (think side-by-side – it's not full screen) then it loads the groups into the item template and blanks out the group data source. The effect of this is that the groups become items. In the normal case the headers are loaded as the groups and the items are loaded as items. This all got crazy when I started looking at _itemInvoked:

Looking at this code you can clearly see that sometimes it treats an item like a group and other times like an item. This doesn't make sense unless you see that they start treating groups like items in _initializeLayout. So in order to get the navigation to use the pages that are now part of my objects, I need to update the button event handler like so:

The commented out line is the original and the two lines that follow are me getting the group (instead of just the key) so I can get the page off of it. Next is the _itemInvoked:

Here you're seeing the behaviors I want. I get the item out of the groups and navigate to it by calling navigateToGroup – which I updated above – and for items I get the item, then navigate to the page attached to that data.

Whew!

Wow, untangling the pieces of this to get to a working structure where I can have my data initialized and be able to navigate was quite the challenge. Next up is starting to implement some of the other features. I'll let you know how that goes.

Book Review – Personality Types: Using the Enneagram for Self-Discovery

I'm no stranger to personality types. Whether it's doing impromptu Myers-Briggs Type Indicator analyses (guesses) for friends in the SPTechCon speaker room in Boston, or evaluating folks in terms of their time perspective (a la The Time Paradox), I enjoy personality typing tools as a way to seek a better understanding of the folks that I live and work with. I know that this “automatic typing” that I do makes some folks nervous; however, it's just one attempt on my part to communicate in ways and languages which will resonate with the other person.

When a friend suggested that I look at the Enneagram, I found the official web site and took their free version of the test. It came back for me as a type 1 – “The Reformer.” However, I wasn't sure what that meant exactly. That's where the book Personality Types: Using the Enneagram for Self-Discovery comes in. It explores the enneagram and how the system works – including the intricacies of the different types.

Fundamentally the system revolves around nine different personality types which fall into three categories. The categories are Instinct, Thinking, and Feeling. The idea is that every person struggles with one of these three and is prone to completely repressing the category, over-expressing it, or under-expressing it. For instance, my type, “The Reformer,” is likely to under-express instinct – Reformers are less likely to accept things the way they are.

The system is most frequently expressed as a circle, since the Nine type (The Peacemaker) is connected to the One type (The Reformer); however, that's difficult to express quickly, so I'll convert some of the data to tables. The categories (called triads in the book) and their under/out-of-touch/over expressions are in the following table:

Category (Triad) | Under | Out of Touch | Over
Instinct | 1 | 9 | 8
Thinking | 7 | 6 | 5
Feeling | 4 | 3 | 2

It would be easy to believe that's it – there's a great deal of detail about the personality types based on this information alone – however, this isn't the end. In fact, it's just the beginning, because each of the types has nine operating levels. That is, inside each personality type there's a level of operating effectiveness. Three are healthy levels of operating (One-Three), three are average (Four-Six), and three are unhealthy (Seven-Nine). Here's a matrix of the personality types and their nine levels of operating using the labels from the book:

Level | 1-Reformer | 2-Helper | 3-Motivator | 4-Individualist | 5-Investigator | 6-Loyalist | 7-Enthusiast | 8-Leader | 9-Peacemaker
One | Wise Realist | Disinterested Altruist | Authentic Person | Inspired Creator | Pioneering Visionary | Valiant Hero | Ecstatic Appreciator | Magnanimous Heart | Self-Possessed Guide
Two | Reasonable Person | Caring Person | Self-Assured Person | Self-Aware Intuitive | Perceptive Observer | Engaging Friend | Free-Spirited Optimist | Self-Confident Person | Receptive Person
Three | Principled Teacher | Nurturing Helper | Outstanding Paragon | Self-Revealing Individual | Focused Innovator | Committed Worker | Accomplished Generalist | Constructive Leader | Supportive Peacemaker
Four | Idealistic Reformer | Effusive Friend | Competitive Status-Seeker | Imaginative Aesthete | Studious Expert | Dutiful Loyalist | Experienced Sophisticate | Enterprising Adventurer | Accommodating Role-Player
Five | Orderly Person | Possessive “Intimate” | Image-Conscious Pragmatist | Self-Absorbed Romantic | Intense Conceptualizer | Ambivalent Pessimist | Hyperactive Extrovert | Dominating Power Broker | Disengaged Participant
Six | Judgmental Perfectionist | Self-Important “Saint” | Self-Promoting Narcissist | Self-Indulgent “Exception” | Provocative Cynic | Authoritarian Rebel | Excessive Hedonist | Confrontational Adversary | Resigned Fatalist
Seven | Intolerant Misanthrope | Self-Deceptive Manipulator | Dishonest Opportunist | Alienated Depressive | Isolated Nihilist | Overreacting Dependent | Impulsive Escapist | Ruthless Outlaw | Denying Doormat
Eight | Obsessive Hypocrite | Coercive Dominator | Malicious Deceiver | Emotionally Tormented Person | Terrified “Alien” | Paranoid Hysteric | Manic Compulsive | Omnipotent Megalomaniac | Dissociating Automaton
Nine | Punitive Avenger | Psychosomatic Victim | Vindictive Psychopath | Self-Destructive Person | Imploding Schizoid | Self-Defeating Masochist | Panic-Stricken “Hysteric” | Violent Destroyer | Self-Abandoning Ghost

Higher levels of functioning have embraced the struggles of their personality type. They've integrated their ego into healthy functioning rather than having it angrily demand that its needs be met and that past hurts be soothed. They've learned to heal their own brokenness. The lower a person slides in their healthiness, the more their ego takes the reins and the more self-centered, rather than selfless, they become.

Integration and Disintegration

The enneagram also has the concept of integration and disintegration: healthier individuals of a personality type can take on the healthy aspects of another personality type. For instance, a healthy One (Reformer) will take on the thinking and behaviors of a healthy Seven (Enthusiast). Similarly, an unhealthy personality may take on the unhealthy thoughts and behaviors of a different personality type. Again using Ones as an example, they disintegrate into Fours (Individualist). Take a look at the following table of integration and disintegration:

Personality Type | Disintegration | Integration
1-Reformer | 4 | 7
2-Helper | 8 | 4
3-Motivator | 9 | 6
4-Individualist | 2 | 1
5-Investigator | 7 | 8
6-Loyalist | 3 | 9
7-Enthusiast | 1 | 5
8-Leader | 5 | 2
9-Peacemaker | 6 | 3

Wings

Another concept is that of wings – that is, you'll also, to a lesser extent, be influenced by one of the personality types on either side of your primary type. A One may be influenced by a tendency to Nine or to Two. (In my case it's Two – Helper.) This influence is called a wing. Wings come in a range of strengths. By definition your primary personality type must be at least 51% of your personality, so the most a wing could influence you is 49%. However, there's a range here from very impactful (49%) to nearly negligible (technically 1%). The degree to which these wings play on a personality can explain some of the variability even within a personality type. To simplify this scale it might be useful to consider three categories of impact from a wing: High (49%-33%), Medium (32%-16%), and Low (15%-1%).

Simple and Complex

So at its heart the enneagram system contains nine basic personality types. Considering that the Myers-Briggs system has 16 potential variants, nine seems less fine-grained. However, when you consider the nine functioning levels for each of the nine types and then add three potential levels of impact for wings, you end up with 243 combinations – more than anyone could keep track of in their head. So at one level the system is relatively simple – at least less complex than other measurements. On the other hand, at the most detailed level the variation is sufficiently nuanced that you can get a good idea of the core makeup of a person.

The Value

So what's the real value of the enneagram? Well, as the book's title says, it's self-discovery. While it may be interesting to gain insight into others, the real value is gaining insight into yourself and your own thoughts and behaviors. Unique (as far as I'm aware) to the enneagram and the book is the discussion of how each personality type breaks down into lower levels of operating effectiveness. For my own situation, the prescription is to be wary of the possibility of becoming a judgmental perfectionist or worse (see the table above). The book has given me a map to follow to know when I'm descending into lower levels of effectiveness. What to do about the slide is simply a matter of thinking and behaving like the level above. If you're interested in being the best person you can be, you'll want to pick up Personality Types: Using the Enneagram for Self-Discovery.
