Outliers: The Story of Success

Book Review-Outliers

I’ve read Malcolm Gladwell’s other two works: The Tipping Point and Blink. I figured that Outliers would be interesting and entertaining. Why not? The other two books were. What I didn’t expect was that it would change the way I see myself and the world around me. In retrospect, that’s what Blink did. It changed my perception of perception – of how we make decisions and how the lines aren’t as clean as we would like them to be. Outliers was different, though, because the book was talking about me. It was talking about me as someone who doesn’t fit the norm, who doesn’t fall into the normal categories. Throughout my history I’ve been called odd, abnormal, and several other things… so I can identify with the idea of being an outlier.

The two main sections of the book are the opportunity and the legacy. I translate the legacy into hard work. Not hard work in terms of difficulty, but meaningful work over long periods of time. I won’t rehash the book’s main points, but I do want to explain it from my perspective. I should say that I don’t believe that I have done – or will do – extraordinary things. My world is comfortable. I’m blessed by the opportunity to work with good people. That being said – I’ve never followed the easy path.

So let’s start first with opportunity. I can remember having powdered milk on cereal. (I don’t recommend this.) I can remember drinking out of plastic butter “cups.” There are dozens of other memories that remind me that growing up we didn’t have an abundance of money. However, by the time I was in the fifth grade my mother had managed to get a Commodore 64 computer. I, of course, wanted to play games. Her response was that I could play anything I wrote. Of course, that broke down at some point, but the message was clear enough.

In high school, her work provided a computer for her at home. I used it to learn programming again. (It was C and assembly back then.) I had the opportunity to use the modem to connect to BBSs. The high school also had a program with the local community college that allowed high school students to attend classes there. This was advantageous for two reasons. First, it allowed you to earn high school credits, which in turn meant you didn’t need to take many classes at the high school. Second, you could start getting a college education while still in high school.

So opportunity is about the opportunity for hard work. During my tenure in high school I had one semester where I was carrying 10 credit hours at the community college and going to high school all day. I can remember taking my college homework into my high school speaking class and working on it while the other students were at the front of the room taking their turns. I also had a stretch where I was working 40 hours a week on top of high school.

I should point out that this was completely against child labor laws – but we manipulated the system so that it appeared I had only worked 20 hours and that my rate was doubled. I suspect that the co-op program coordinator knew, but I’m also sure that since the work was actually as a consultant/developer for a local computer company, everyone looked the other way. However, I kept my grades up. I didn’t give the teachers any problems. I got along well with my counselors and the principal. So no one had any reason to expose what was going on.

I should also say that during this time the high school had an “open attendance policy.” This meant that you weren’t considered for disciplinary action for not attending classes as long as your grades were acceptable. At one point I racked up 157 unexcused absences in a single semester. That doesn’t count the few that were excused and a handful more which were “school related” – which means that the principal asked me to talk to another kid, or there was some other reason I wasn’t in my normal class. This is a tad confusing until you realize there were 7 classes a day, so an absence is only an absence from a single period. I only missed 22 days’ worth of classes. However, that’s not really correct either. In truth I missed most of my economics class after the teacher – I want to say his last name was Ryan – had a heart attack. I’d find out if there was a test – if there wasn’t, I’d get the homework and not show up. The principal asked me about this once and I explained that the class wasn’t useful. She told me that the substitute was following the same lesson plan. I told her that the substitute was probably teaching the same breadth of material – just not to the same depth.

So high school was busy for me. I was sleeping about 4 hours a night – maybe 5 – back then. I had an alarm that, it was said, could wake the dead. I was on the 3rd floor of the house in the old maid’s quarters, and my parents would have to yell up at me to wake up because my alarm had been going off for 15 minutes. I was living in Bay City, Michigan at the time, which had some of the lowest property values in the nation – so the idea that we were living in a house that used to have maid’s quarters isn’t as impressive as it sounds.

I left high school half a year early and came back to Indiana to work. A part of that deal was supposed to be that I’d go to college – although that didn’t happen. I got a job working on a VAX writing C. I stayed with family in Terre Haute, IN and then Greenwood, IN for a few months before getting an apartment in Carmel, IN, where I was working – and where I live today. Fast forward a bit: I changed jobs and met a marketing manager at the company who introduced me to her husband, a product development specialist at New Riders Publishing. He asked me if I was interested in doing some tech editing of books. Tech editing, he explained, meant fact checking. It was following the steps to make sure things were right. It was trying things out.

Over the next SEVERAL years I did a lot of tech editing, some writing, and some development editing. (A summary of my book work is on my site.) I estimated that at one point I was editing 16,000 pages per year – in addition to my full-time job. It was busy, but perhaps not as busy as high school. I learned a ton of what I know about computers through the process of reading, trying, and verifying those books over the years.

Jumping forward again, my schedule today is – for the most part – getting up at 5:30, being in my office by 6 AM, and leaving my office between 6 PM and 6:30 PM for dinner and time with the family. I come back out for an hour after my son is in bed – to give my wife some quiet time to herself before we spend time together. My neighbor asked what time I got into my office, because he would go out to get his newspaper and the lights in my office would already be on. When I told him, he thought that it was pretty early. My wife has asked why I get up so early (particularly during the brief respites when I don’t feel overwhelmed). The honest answer is that I enjoy what I do – and, even more importantly, I love the time in the morning when I can do things like this blog post, when I can contemplate things and work on projects uninterrupted.

Back to the book… The book talks about how Bill Gates had opportunity – and hard work. The Beatles had opportunity – and hard work. There are less popular examples, but the pattern remains the same: opportunity + hard work => success. Of course, you’ll need to define for yourself what hard work is. The number recommended in the text is 10,000 hours of “practice.” I don’t know that I got 10,000 hours of practice in working on the tech editing of books – but I know I’m grateful for the opportunity.

If you’re trying to figure out how other people made their success, or if you’re trying to understand your own success, Outliers is a new perspective.

SharePoint 2007 Development Recipes

Book Review-SharePoint 2007 Development Recipes

Some folks like to cook from recipes. You get a predictable result and you know what to expect. My wife will attest that that’s not exactly the kind of guy I am. I’ve created meals that are good and a fair number of them that weren’t fit for the dog to eat – literally, the dog wouldn’t eat them. Still, I recognize the value of recipes. That’s why I think SharePoint 2007 Development Recipes: A Problem-Solution Approach is a good read if you’re trying to wrap your head around SharePoint.

One of the problems with typical computer books is that they’ll tell you WHY something works but not HOW to make it work or WHEN to use it. (My own personal rebellion against this is The SharePoint Shepherd’s Guide for End Users, which is all about HOW to do things.) That’s why I like this book’s style, which shows you how to do practical things. You can read the details of some interface on MSDN; you don’t need a book for that. What you need a book for is HOW you should use it.

Article: Performance Improvement – Caching

If you’re looking at performance and you want to get some quick wins, the obvious place to start is caching. Caching as a concept is focused exclusively on improving performance. It has been used in disk controllers, processors, and other hardware devices since nearly the beginning of computing. Various software methods have been devised to do caching as well. Fundamentally, caching has one limitation — managing updates — and several decisions to make. In this article, we’ll explore the basic options for caching and their impact on performance.

Cache Updates

Caching replaces slower operations—like making calls to a SQL server to instantiate an object—with faster operations like reading a serialized copy of the object from memory. This can dramatically improve the performance of reading the object; however, what happens when the object changes from time to time? Take, for instance, a common scenario where the user has a cart of products. It’s normal, and encouraged, for the user to change what’s in their shopping cart. However, if you’re displaying a quick summary of the items in the user’s cart on each page, it may not be something that you want to read from the database each time.

That’s where managing cache updates comes in: you have to decide how to manage these updates and still have a cache. At the highest level you have two different strategies. The first is a synchronized approach, where it’s important to maintain synchronization of the object at all times. The second is a lazy (or time-based) updating strategy, where having a completely up-to-date object is nice but not essential. Say, for instance, that a product’s description was updated to include a new award the product has won — that may not be essential to know about immediately in the application.
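
To make the lazy strategy concrete, here is a minimal sketch of a time-based cache. This is not code from the article; the TimedCache class and its GetOrAdd method are purely illustrative of the pattern.

using System;
using System.Collections.Generic;

// A minimal sketch of a lazy (time-based) cache; the names are illustrative only.
public class TimedCache<TKey, TValue>
{
    private class Entry { public TValue Value; public DateTime Expires; }

    private readonly Dictionary<TKey, Entry> entries = new Dictionary<TKey, Entry>();
    private readonly TimeSpan timeToLive;

    public TimedCache(TimeSpan timeToLive)
    {
        this.timeToLive = timeToLive;
    }

    public TValue GetOrAdd(TKey key, Func<TKey, TValue> load)
    {
        Entry entry;
        // Serve the cached copy while it is still considered fresh.
        if (entries.TryGetValue(key, out entry) && entry.Expires > DateTime.UtcNow)
        {
            return entry.Value;
        }

        // Otherwise pay the slow cost (e.g. a database call) and refresh the entry.
        TValue value = load(key);
        entries[key] = new Entry { Value = value, Expires = DateTime.UtcNow + timeToLive };
        return value;
    }
}

With something like this, the product description example above simply tolerates being stale for up to one time-to-live interval; the shopping cart example, by contrast, calls for the synchronized approach, where the cached entry is updated or invalidated whenever the cart changes.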

Read More at http://www.developer.com/design/article.php/3831821

Article: How to Leverage the Gravity of Your Intranet

Most of us have learned about the concept of gravity through our schooling in Newtonian physics – although most of us remember it as Sir Isaac Newton getting hit on the head by an apple. We know that objects draw other objects in. Despite having learned this, our daily observation confronts us with the fact that two objects sitting on a desk don’t appear to be zooming toward each other through this attraction.

This mirrors our challenge with understanding gravity as it applies to our Intranets. The concept is one that makes sense — but it’s difficult to see and get a tangible feel for. However, despite its elusive nature, gravity has a profound impact on our lives — and it can have a profound effect on your Intranet.

In this article we’ll talk about how gravity works on the scale of planets and galaxies so that we can see how we can make our intranets larger.

More

 

Article: Performance Improvement – Session State

In the first part of this series a discussion was presented on what performance is, and some of the techniques that can be used to improve or monitor performance in your application. In this article the focus is specifically on managing session state and the things that you can do to maintain performance in your application.

There are two key areas to understand in session state management. First, you need to understand the options you have for maintaining session state. Second, you have to consider the different kinds of information that need to be managed in session state and how the different needs for maintaining session state impact how you might choose to manage it.

Background

Just as we had to review some background concepts in order to understand the broad performance discussion, there are a few key concepts related to the communication between the client and the server. This includes the weight of the request in terms of the bytes transferred to the server and from the server back to the client. We’ll talk about the request/response weight as well as the benefits and weaknesses of various encryption techniques that may allow you to leverage the users’ machines for some session state management.

Read the rest at http://www.developer.com/design/article.php/3829586

What does an OutOfMemoryException in .NET (on 32 bit) really mean?

I can remember writing code in C on PCs years ago, and when I got an out-of-memory error I just blindly accepted that it meant there was literally no more memory for me to use. I realize now that this wasn’t really the case. It really meant “hey, I don’t have that much memory left that’s all together.” Looking back, allocating and deallocating memory had made the memory in the computer look like swiss cheese, where I was using some memory locations and not others. At some point, when I asked for a bit of memory that was larger than any of the holes I had, the whole thing came crashing down.

Today, in an era where we have virtual memory operating systems where quite literally our hard disk appears to be memory, it would seem that this wouldn’t be a problem. Sure, we would waste some amount of memory with gaps where we’ve deallocated objects, but surely if we’ve got a hard disk to fill we won’t run out of memory – right? Well, yes, but we run into another problem. Whether you have 512 MB of memory or 8 GB, when you’re running a 32-bit operating system no one process is going to get more than 2 GB of address space. (I’m purposefully ignoring PAE/AWE since it’s a whole different ball game that doesn’t apply to .NET developers.) How does that work? Well, the addressing of memory is still 32 bits. 32 bits gets you access to 4 GB of addresses. The operating system lops off the top 2 GB for its own use (basically, if the topmost bit of the 32 bits is a one, the address belongs to the system), so any given process has access to 2 GB of its own memory. (There’s a switch to change where this split occurs so that the process gets 3 GB and the operating system gets 1 GB, but it’s not supported in every situation.) So even if we have several TB of storage, any one 32-bit process can only see 2 GB of address space.
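
If you want to convince yourself of the arithmetic, here is a tiny console sketch (mine, not from any reference) that spells out the numbers and shows how to check the pointer size of the process you happen to be running in.

using System;

class AddressSpaceDemo
{
    static void Main()
    {
        // 32 bits of address space: 2^32 bytes = 4 GB of addresses in total.
        long totalAddressable = 1L << 32;              // 4,294,967,296 bytes

        // By default Windows keeps the upper half for the kernel,
        // leaving 2 GB of user-mode address space per 32-bit process.
        long defaultUserMode = totalAddressable / 2;   // 2,147,483,648 bytes

        Console.WriteLine("Total addressable: {0:N0} bytes", totalAddressable);
        Console.WriteLine("Default user-mode: {0:N0} bytes", defaultUserMode);

        // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process.
        Console.WriteLine("Pointer size in this process: {0} bytes", IntPtr.Size);
    }
}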

How does this all impact .NET getting an OutOfMemoryException? Well, the first problem that I used to have with C programs .NET tries to help solve. The garbage collector (GC), in addition to getting rid of objects that are no longer referenced, will pull together the objects that are in memory but spaced out. The easiest way to think of this (for me) is to think of any presentation that you’ve been in. When the room fills up it fills up relatively randomly, but there are generally spaces between people. (Memory actually fills up mostly sequentially, but the spaces remain.) When the room starts to get full, the presenter will eventually ask everyone to squeeze together to eliminate all of the spaces. This is what the garbage collector is doing; it’s squeezing the objects together. (For those of you wondering how, double dereferencing of pointers is the key here.)

However, just like in a presentation, there are occasionally stubborn people who refuse to move. In the garbage collector’s case, there are a set of objects that are pinned in memory – they can’t be moved. Why would that be? Well, generally it’s because something outside of .NET is holding a reference to that location. It knows that some memory buffer, resource handle, or something else exists at that location. Since the garbage collector can’t tell something outside of .NET that the object has moved, the object stays in its spot. The net effect is that you get a little bit of space around those objects which remains unallocated.

If .NET’s GC is cleaning up memory for us, shouldn’t we always stay underneath the 2 GB memory limit of a 32-bit process? Well, maybe yes and maybe no. There are lots of other issues with the garbage collector and how well (or poorly) it works that I won’t go into. For our discussion we need to know that the garbage collector is getting rid of objects and pushing objects together. We also need to know that this isn’t an instantaneous process. For objects without a finalizer (also known as a destructor), as soon as there are no more references to the object it can be discarded by the garbage collector. However, if an object has a finalizer, the finalizer has to be run first. There’s a single thread per process that runs finalization. Thus even if an object is ready to be finalized, and thus give up its memory, it may not be able to do so quickly enough, as Tess Ferrandez explains. The problem is that until the objects are finalized they can’t be removed, and if they can’t be removed, memory usage will keep creeping up. By the way, the use of the IDisposable interface and a call to GC.SuppressFinalize() in the Dispose() method can remove the need for the finalizer to run on your objects (even if one is defined) and therefore allow the GC to free your object from memory sooner. (Unfortunately, I’ve been in situations where even this wasn’t enough because the GC itself wasn’t running fast enough – but that was in .NET 1.1; the algorithms are much better now.)
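
For reference, the IDisposable-plus-SuppressFinalize pattern being described looks roughly like the sketch below. The NativeResourceHolder class is made up for illustration; the point is the GC.SuppressFinalize(this) call inside Dispose().

using System;

// A minimal sketch of the dispose pattern described above; illustrative only.
public class NativeResourceHolder : IDisposable
{
    private IntPtr handle;            // e.g. some unmanaged handle
    private bool disposed;

    public void Dispose()
    {
        Dispose(true);
        // Tells the GC it does not need to run the finalizer for this instance,
        // so the object can be collected without waiting on the single
        // finalizer thread.
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (disposed) return;
        // Release the unmanaged resource here (illustrative only).
        handle = IntPtr.Zero;
        disposed = true;
    }

    ~NativeResourceHolder()           // finalizer, only a safety net
    {
        Dispose(false);
    }
}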

It’s also important to know that the .NET object heaps aren’t the only thing consuming memory in the process. There are still a ton of COM objects that get used by .NET for the actual work. These COM objects allocate their own memory on their own heaps, outside of .NET’s visibility and control. These objects take up memory. They don’t always take up memory that you’re going to see in Task Manager. Task Manager shows, by default, the private working set. That is the bytes that the process is actively working on (please excuse my gross oversimplification of this). The real number to watch is virtual bytes – which you have to access from Performance Monitor’s Process object. This counter tells you how much address space has been allocated from the system by the process – in other words, how many addresses have been used up by the process. When this number reaches 2 GB and there’s another request for memory in .NET that it can’t fit into an existing allocation from the operating system – you get an OutOfMemoryException in .NET. (If you want to know much more about memory allocation in Windows, I recommend the book Microsoft Windows Internals. I’ve never had a memory question it couldn’t answer.)
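
If you would rather watch that number from code than from Performance Monitor, a sketch like the following works; the "Process" category and "Virtual Bytes" counter are the standard Windows counters mentioned above, while the rest of the class is purely illustrative.

using System;
using System.Diagnostics;

class VirtualBytesWatcher
{
    static void Main()
    {
        // The "Process" category's "Virtual Bytes" counter is the same number
        // Performance Monitor shows; the instance name is the process name.
        // (If several processes share a name, Windows appends #1, #2, etc.)
        string instance = Process.GetCurrentProcess().ProcessName;

        using (PerformanceCounter virtualBytes =
            new PerformanceCounter("Process", "Virtual Bytes", instance, true))
        {
            Console.WriteLine("Virtual bytes: {0:N0}", virtualBytes.RawValue);
        }
    }
}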

As a sidebar, the working set for W3WP (the IIS worker process) tends to get between 800-900 MB before it gives an out-of-memory exception. I’ve seen processes (with very good memory allocation) drive W3WP to over 1.3 GB before it finally gave up and threw an out-of-memory exception. Based on what you’ve seen above, what does this mean? It means that the working set for the IIS worker process represents roughly half of the virtual bytes that are allocated.

You may have 8 GB of physical RAM available and 1 TB of disk space. You may actually have tons of space in memory that’s become fragmented because of references to COM objects and other non-.NET code whose memory allocations can’t be moved. However, if you reach 2 GB of allocations (without the /3GB switch to allow the process 3 GB of address space) and ask for one more thing – it is game over.

So what do you do if you have this situation happen to you? There are a few things I’d recommend:

  1. If you’re using an object that implements IDisposable and you’re not calling the .Dispose() method – do it.
  2. If you’re using a disposable object and not doing the dispose via a using () { } block or a try { } catch { } finally { } block – do it (see the sketch after this list). If you get an exception inside a method that doesn’t use one of these two techniques, the .Dispose() method won’t get called; if you do have an exception with either of these in place, the .Dispose() will still get called.
  3. If you open any kind of resource, make sure you close it. Whether it’s a TCP/IP port, a SQL connection, or anything else, it’s going to get pinned in memory and need a finalizer … try to take care of that yourself.
  4. Get a dump of your process and look at what objects are in memory. You can do this without a ton of knowledge about debuggers. (Production Debugging for .NET Framework Applications will get you at least this far.)
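
To illustrate the second recommendation, here is a small sketch showing both forms; the SqlConnection is just an example of a disposable object, and either way Dispose() runs even when an exception is thrown.

using System;
using System.Data.SqlClient;

class DisposeExamples
{
    static void WithUsing(string connectionString)
    {
        // The using block calls Dispose() automatically, even on an exception.
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // ... do work ...
        }
    }

    static void WithTryFinally(string connectionString)
    {
        // Equivalent hand-written form: Dispose() runs in the finally block.
        SqlConnection connection = new SqlConnection(connectionString);
        try
        {
            connection.Open();
            // ... do work ...
        }
        finally
        {
            connection.Dispose();
        }
    }
}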

Custom XML Serialization of a .NET class

I love serialization — right up to the point where it breaks. I have always found that it’s difficult to get right when the out-of-the-box behavior breaks down. However, I may have changed my mind. I had to do some of my own serialization because some of the properties I was working with in my class didn’t serialize well. After a long and drawn-out look at the problem, here’s my input:

  1. Implement the IXmlSerializable interface. It contains three methods:
  2. GetSchema() has been obsoleted; just return null. There’s a suggestion that you should use an [XmlSchemaProvider] attribute on your class to indicate the method to be used to return the schema for your Xml serialization. My recommendation is to skip it — if you don’t have to validate your Xml (and I don’t know why you would), you don’t need this.
  3. WriteXml() writes the data to an XmlWriter. Use WriteAttributeString(string, string) to write out the attributes you need. You can also write out sub-elements, but using attributes is easy enough for non-complex types.
  4. If you need to write out a blob of data in the middle of your tag, you can use WriteCData() to write the contents of a string into the middle of your element.
  5. ReadXml() reads the serialized data from an XmlReader. Getting your content out is as simple as calling .MoveToContent() and using a set of indexer dereferences for attributes (i.e. reader["myAttributeName"]). Finally, if you want to read the inner contents you put into a CData section, you can call .ReadString().

That’s all there is to writing your custom Xml Serialization interface. This way you don’t have to worry about the dynamic assemblies.
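
Putting those pieces together, a class following this approach might look like the sketch below. The Product class and its fields are made up for illustration; only the three interface methods and the calls mentioned above matter.

using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

// A minimal sketch of the approach described above; Product is illustrative only.
public class Product : IXmlSerializable
{
    public string Name;
    public string Description;   // stored as CDATA inside the element

    public XmlSchema GetSchema()
    {
        return null;             // per the notes above, no schema needed
    }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteAttributeString("name", Name);
        writer.WriteCData(Description);
    }

    public void ReadXml(XmlReader reader)
    {
        reader.MoveToContent();
        Name = reader["name"];                // attribute via the indexer
        Description = reader.ReadString();    // the CDATA contents
    }
}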

System.Web.UI.Page.IsPostBack and Is This the First Request

Every once in a while I’m surprised by what I don’t know. I have been developing in ASP.NET for a while and I’ve known about Page.IsPostBack to determine whether this is the first request to the page (and thus I need to populate controls). However, I had never realized that there was a scenario where this doesn’t work: when you have another page post to your page. The property sees that it’s an HTTP POST request and says “Hey, it’s a postback!” — of course, in the scenario where it’s another page doing the posting, this doesn’t work so well. So I put together a simple function that uses the referrer to figure that out.

public static bool IsFirstRequestToPage()
{
    HttpRequest curReq = HttpContext.Current.Request;
    string referer = curReq.Headers["Referer"];

    if (string.IsNullOrEmpty(referer)) // not present, should be present on a postback
    {
        return true;
    }
    else
    {
        // Is the referrer the current page?
        Uri curPage = curReq.Url;
        Uri refPage = new Uri(referer);

        if (Uri.Compare(curPage, refPage,
            UriComponents.SchemeAndServer | UriComponents.Path,
            UriFormat.UriEscaped,
            StringComparison.InvariantCultureIgnoreCase) == 0)
        {
            // Same referrer (i.e. a postback)
            return false;
        }
        else
        {
            // Different referrer
            return true;
        }
    }
}

Of course, using the Referer header is imperfect because there are scenarios, and some browsers, where it won’t be transmitted – but in my case it works well enough.

Article: Performance Improvement – Understanding

One of my least favorite discussions in development is the discussion about performance. It’s one of my least favorite because it requires a ton of knowledge about how systems work, and either a ton of guesswork or some very detailed work with load testing. I generally say that the results you get out of any performance prediction exercise are bound to be wrong. The goal is to make them as little wrong as possible.

I’m going to try to lay out some general guidelines for performance improvement by building an understanding of what performance is, how to measure it, and finally how to solve common problems. This article covers the core concepts of the performance conversation. The second and third articles will cover session management and caching, because they have such a great impact on performance — and on which solutions you can use to improve it. The final article in the series will focus specifically on ways to improve performance.

No series of articles (or book, for that matter) could cover every possible situation that you can get into with computer system performance. My background includes nearly 20 years of work with systems, from a VAX running VMS to more current projects based on Microsoft products such as .NET, Microsoft SQL Server, and Microsoft SharePoint. The concepts in this article are applicable to any complex system; however, I use the Microsoft platform, including Windows, .NET, and SQL Server, for the examples.

http://www.developer.com/db/article.php/3827266

 

Note: This is part 1 of a 4-part series

Active Directory Cookbook

Book Review-Active Directory Cookbook

Many people don’t know this (or care), but when I was first awarded my Microsoft MVP award it was for Windows Networking (which was pretty quickly clarified to Windows Server Networking). At the time I was working on Windows Server books and MCSE study guides of various sorts. I had the pleasure of having Emily Freet as my first MVP Lead. She introduced me to another one of her MVPs, Laura Hunter. A while ago I got a copy of Laura’s Active Directory Cookbook, 3e; of course, being behind, I didn’t get much time to look through it – until this weekend.

One of the things that I’ve always liked about Laura’s knowledge of Active Directory, networking, etc., is that she’s always thought about the problem not just from the perspective of “how do I do this once” but “how do I make this repeatable.” And I don’t just mean from the perspective of writing a process or a procedure. She’s been involved in some VERY large and RAPIDLY changing environments, so she’s always had an awareness of the need to script things. That’s one of the things that makes the book so powerful. It’s not just going to show you how to create a user – but how to do it en masse … or via a process. Having been engaged more than once to create tools for working with AD, I can say that I honestly appreciate the work that goes into providing information about how to automate activities in AD.

If you’re looking to manage an environment over and over again … or you have clients that you work with that you want to be able to repeat your work … or you just need coverage of how to do common tasks in AD, I think you’ll find the cookbook has the answers you need.
