From Santa: Continuous Integration, Progress Bars, Build Scripts, Oh My!

Continuous Integration is a wonderful concept.  The basic idea is to get integration-level feedback as quickly as possible.  It’s done wonders for cutting down on the “it builds on my machine but not on anyone else’s” phenomenon.

A piece of software, the free version of CruiseControl.NET in this case, runs on a PC and checks the version control repository periodically (usually every 15 minutes or so).  Once it detects changes it grabs them and runs the build.  So when you forget to check in that COM library that a library your library depends on needs, you find out within 15 minutes or so instead of the next morning.  Nightly builds are nice but continuous builds are nicer.

The rules of thumb that I’ve seen for Progress Bars are:

  • If an action takes up to 3 seconds to complete show an hourglass/busy cursor.
  • If an action takes 10 seconds or more then it needs to have a Progress Bar.

Progress Bars are pretty much the first thing to go when deadlines are tight so the holidays present a wonderful time to go back into those UIs on top of long running processes and add a progress bar.

When you've got a mix of managed and native code (a mixed blessing?) Visual Studio’s otherwise stellar dependency checking may show signs of wear.  This is probably due to the cobbled together mixture of post-build scripts and external bat (or NAnt) scripts that represent your “evolutionary” build.  The holidays are a wonderful time to make the build process less “evolutionary” and more “cultured” so-to-speak.

COM Interop Dance: To C++ or C#

So there’s a managed class that I’m using from within native code via COM Interop.  The managed class has a property that’s a bit like a collection; it stores multiple elements and can retrieve an element by its position within the collection.

Each element has an integral ID and a string descriptor.

The problem is that I now need to access these elements from within native code in a certain order (integral ID ascending).

A few approaches come to mind:

  1. Do the sort in native code.  Create a structure that stores the ID/Desc pair OR get a little more fancy and use an STL class.  Store these in a collection (or array).  With the STL approach, writing the code to sort the collection is more elegant (e.g., easily adapted to different element types) but ultimately I’ll still need to write what is essentially a comparison function for the sort to use.
  2. Do the sort in managed code.  Since this is COM Interop, provide a method that returns a collection of the IDs in order then use the existing method to get the Desc for a given ID.  The collection can be accessed in native code via the smart pointer mscorlib::_ArrayListPtr.  This is probably easier to write but still doesn’t match what I think is conceptually the correct solution.
  3. Provide an enumerator over the collection of ID-Desc pairs.  This will require both managed and native code changes since there’s a little bit of plumbing to use a managed enumerator from C++ (no handy built-in foreach).  In pre-2.0 C# I’d say that this is the correct solution.
  4. Provide an iterator over the collection of ID-Desc pairs.  I think this is closest to the conceptually “ideal” solution.  Unfortunately I have no idea how to use iterators from native code.  And time is extremely short.

Since time is so extremely short I think I’ll go with option 2.  I spend more time in managed space (usually, though for the past month this hasn’t been true) so there’s less ramp up to making this work.
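
For what it’s worth, here’s a minimal sketch of what option 2 might look like on the managed side.  The class and member names are hypothetical stand-ins for the real ones; the only real point is returning an ArrayList of IDs sorted ascending so that native code can walk it via mscorlib::_ArrayListPtr and call the existing descriptor lookup for each ID.

using System.Collections;
using System.Runtime.InteropServices;

// Hypothetical stand-in for the real managed class; the names and the
// backing store are illustrative assumptions only.
[ComVisible(true)]
public class ElementCatalog
{
    // Collection-like property's backing store: integral ID -> string descriptor.
    private readonly Hashtable elements = new Hashtable();

    public void Add(int id, string description) { elements[id] = description; }

    // The lookup the native code already uses.
    public string GetDescription(int id) { return (string)elements[id]; }

    // Option 2: hand native code the IDs already sorted ascending.  Returning
    // an ArrayList keeps the interop surface simple (mscorlib::_ArrayListPtr
    // on the C++ side).
    public ArrayList GetIdsSortedAscending()
    {
        ArrayList ids = new ArrayList(elements.Keys);
        ids.Sort();   // boxed ints compare ascending by default
        return ids;
    }
}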

Design and the Nature of Things

While working on a project involving two applications an issue came up involving site customization.  It illustrates the importance of not fighting the nature of a given thing when faced with a design decision involving that thing.  This heuristic will usually save you a world of headache even if you can’t immediately think of circumstances in which it’ll prevent error.

There are 2 products (aka applications); when product A is installed, product B is also installed.  Unlike product A application B is often installed by itself.

Application B has a wonderful piece of software for site customization.  So wonderful in fact that it’s being drafted to perform its wonders in service of application A.

While going back and forth about the best way to accomplish this, someone suggested having the site customizer automatically detect the presence of application B and update both application A and application B so that the user doesn’t have to remember to run the site customizing software twice.

So far so good, right?  What’s the problem?  Being a purist at heart, I find that the idea of having a site customizer automatically update 2 separate products immediately runs afoul of my instincts.  But instincts are not evidence and unless you’ve really developed an intuition for these things they aren’t compelling arguments.  Put another way, until you develop an intuition for design many conceptual objections will tend to be unpersuasive.

This being a project in the real world, we quickly moved on to other problems.  Fortunately everyone was OK with not having the site customization software automatically update the 2 products.

Why fortunately?  Given a little time to think about it (in between long-running builds) it occurred to me: What if product B has already been independently updated at the time the site customizer is run?  It could end up overwriting newer data with an older version of the data.  One of the benefits of having product B as a separate product was that it allowed us to get functionality out in the field in a way that didn’t require updating every product in the suite.

Of course this objection could be handled by the site customizer; it could check to make sure that it isn’t overwriting a newer version of the data with an older version.  However, doing this well would require the data to carry along its own version information.  This is information that it currently doesn’t have.  This kind of version information often requires a system of its own to maintain since a file can go for many releases without any changes.  Without an automated system it’s yet another task to be forgotten during the often hectic process of releasing an upgraded version of software.

Versioning can be a pain but it’s not insurmountable.  Another objection is that it creates a dependency between the site customizer and the 2 specific products it was supposed to automatically update.  In computerese, the proposal raises the coupling between the site customizer and the customized applications.  Product B can no longer change its location (e.g., to support side-by-side installation) without breaking the site customizing program.

The cascade of issues resulting from a proposal that tends to fight against the nature of the underlying things is a design smell.

Another approach that provides the desired convenience without fighting against the nature of the products involved is to have a separate program/script/process that invokes the site customization program twice.

What’s the difference between putting this intelligence into the site customizer and having it invoked externally?  For one, putting it in a separate program provides an opportunity to make explicit to the user the fact that they’re customizing product A and product B.  Several months down the road the fact that customizing product A automatically customizes product B may be forgotten (and lead to the unintended data corruption mentioned above).

Recall that the site customizer, initially intended to customize product B, has already been drafted into customizing product A.  It won’t be long before it’s customizing product C, D, etc… 

Having a separate program invoke the customizer will tend to encourage a design that easily accommodates customizing more than 1 product.  This kind of parameterization (loose coupling) of the customizer will make it that much easier to apply to products C, D, E, etc…

All of these benefits could have been lost, or made much more expensive, had we ignored the conceptual incongruence of making a relatively general purpose customizer try to do too much.

Visual C++ 6 to Visual Studio 2008

I’ve been porting the last vestige of VC++ 6 to Visual Studio 2008.  Over the years the Microsoft C++ compiler has gotten more standards-compliant.  Oddities like:
for (int i = 0; i < SOMEVAL; i++)
{
    // ...
}

// i used after the loop: legal under VC++ 6's old scoping rules,
// an error under the conforming compiler.
if ( i > SOMEOTHERVAL )

no longer pass muster; ISO C++ thankfully limits the scope of i to the for statement in which it is declared.  I say thankfully because it’s so common to use i as a loop variable that promoting its scope to the enclosing block (the default behavior of older compilers) is asking for loops to step on each other’s loop counters.

Other things to keep an eye out for:

  • Turning off “treat wchar_t as a built-in type” if the existing 2008 solution/project has it disabled.
  • Mixing MBCS and Unicode.
  • Hardcoded paths to old versions of SDK header files (e.g., program files\visual studio\vc98\include\…)
  • Use of swprintf() without specifying a count for the maximum number of characters to write.

Design Patterns Quick Reference

A colleague recommended what appears to be a wonderful podcast for people interested in building software well: IEEE Software’s On Architecture.  It’s authored by Grady Booch (one of the heavyweights in the Object Oriented Programming/Design world), the podcasts are short and sweet (typically less than 10 minutes) and it’s even got a bit of a pedigree (being affiliated with the IEEE and all).

After listening to a few of the podcasts I was struck with a desire to revisit basic heuristics of the trade.  The classic work on design patterns was written by a group of authors colloquially known as the Gang of Four.  It’s an excellent reference but it isn’t exactly compact.

I stumbled onto a Design Patterns “cheat sheet”.  It’s even color coded by pattern type! (structural vs behavioral vs creational).  Kudos to Jason S. McDonald for his handiwork.

Installers, COM and Legacy

I’ve used a variety of installers over the past few years while doing Windows Client development.  InstallShield, Nullsoft Installer, Caphyon’s Advanced Installer and last, but not least, Visual Studio Setup/Deployment projects.

They’re all intended to shield the developer from the innards of Windows Installer (or the custom installer technology).  With varying degrees of success they make it easy to create the typical interaction between the user and the computer during an application install.  Or at least that’s the idea.

Try as they might, these installers can’t entirely shield you from the underlying components, tables, properties, binary files and obscenely verbose MSI logs that are the world of the Windows Installer.

One such case has bedeviled a recent project as it relates to COM components.  Whilst most of the codebase is managed code a few components (including a critical one) are still packaged as COM libraries.  Registration-free COM, as I describe in an earlier post, relieves us of the perils of system-wide, centrally registered libraries that get installed and reinstalled by separate, not-necessarily cooperating applications.

However, one of the libraries is a 3rd party library that gets installed (and uninstalled) with several different applications.  This is a problem because these applications are not always installed/uninstalled at the same time.  They’re also versioned independently.  In every case that I’ve seen installers will uninstall COM libraries when an application is uninstalled.  This breaks every other application on the machine that depends on that COM library.

It turns out that the way to prevent application A from breaking application B when application A is uninstalled (and the shared COM library is uninstalled along with it) is to mark the COM library as a Shared, Reference Counted DLL.

No matter the installer this feature is controlled by a setting taken directly from the guts of Windows Installer: msidbComponentAttributesSharedDllRefCount.  This bit flag in the Component table’s Attributes column determines whether or not Windows Installer will uninstall the DLL when the application is uninstalled.

The classical problem with using reference counting to track “liveness” (whether a component is still in use) is circular references.  If components A and B depend on each other then neither reference count can ever drop to zero, so neither can be removed entirely.  Fortunately this scenario hasn’t reared its head.  When it does I’ll be sure to post.

HOSTs, CNAMEs, SRVs Oh My!

DNS is a bit of an undiscovered country for me.  I’ve never had to set up a zone file.  As far as I know I’ve never even configured BIND (though it might have been configured on a linux distro by default).

Exchange 2007 introduced this wonderful thing called Autodiscover.  It is what it sounds like: a discovery service that Outlook 2007 clients can use to get information about Exchange.

It works out-of-the box (mostly) if you’re hosting Exchange yourself and your users are on a domain.

Getting it to work when you’re neither on a domain nor hosting Exchange yourself is a bit of a bear involving, potentially, CNAMEs, SSL certificates, HTTP redirection and, lastly, SRV records.

Outlook 2007 makes educated guesses about where it can access Autodiscover.  Unfortunately one of those guesses results in a persistent Security Warning if you’ve gone the CNAME route and your CNAME points to someone else’s domain (your mail hosting service’s).

The recommended way (after applying this hotfix) to deal with this is to create an SRV record, a kind of specialized DNS entry pointing to the real Autodiscover service.  If you’re lucky enough to be handling DNS on a Windows box then there’s a handy GUI that makes setting up the SRV record a snap.

If you’re unlucky enough to have to use a paper-thin Web-based interface on top of DNS zone files then you, like me, get the joy of experiencing an anachronistic, extremely sparse configuration syntax (think sendmail.conf) that will choke on the slightest grammatical variation (e.g., forgot to end the host, which is actually a domain, with a trailing “.”?  Clearly you weren’t thinking that would fly, even though in every other context a trailing “.” on a domain name will *break* the request…).
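
For reference, the Autodiscover SRV record itself is tiny.  A sketch in zone-file syntax, with example.com standing in for your domain and mail.hoster.example standing in for wherever your mail host actually answers Autodiscover requests (both are placeholders):

; _service._proto.name            TTL   class  SRV  priority  weight  port  target
_autodiscover._tcp.example.com.   3600  IN     SRV  0         0       443   mail.hoster.example.

Note the trailing dots on the fully qualified names – exactly the detail the web interface was so picky about.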

What’s more, since DNS is a decentralized confederation every change you make takes a while to make its way through the system.  And while you can trick nslookup into using the nameservers where you made the change (surely they’d be up to date, right?), it’s about the only software you can trick into doing that – making it useless for any end-to-end testing.

The Programmer’s Abridged Guide for Investigating Errors

Consider the following sources in order:

  1. You probably screwed up.  This is the most likely cause.
  2. The library author screwed up.  This is less likely than a personal screw up but more likely than subsequent sources of error.
  3. Microsoft screwed up.  This is even less likely than the library author screwing up but more likely than the last source of error.
  4. Intel (or AMD) screwed up.  This is the least likely source of error and is only to be considered when all else fails (or you’ve run out of time and need someone to blame!).

As with any guide, YMMV.  Best taken with a grain or two of salt :)

Why Extension Methods?

Extension Methods, introduced in C# 3.0, provide a way to add a method to an existing type without having access to the source code of that type.

Although Extension Methods were added to support LINQ (a MOST awesome innovation), they’re useful elsewhere too.  If you use generics (introduced in C# 2.0) to create strongly typed collections and find yourself storing these strongly typed collections as strings in a file, then Extension Methods provide a very natural way to extend the strongly typed collection.

For example, take List<MyCustomClass>.  If this gets stored in a file, say as name=value pairs, then it would be great to be able to parse the contents of that file simply by calling List<MyCustomClass>.Parse(fileContentsAsString).

Unfortunately, unless you’re at Microsoft (or Novell ala Mono) you probably don’t have the source code to the generic List type.  Even if you do have access to the source code you can’t anticipate all of the specific types users will create or the way they will represent them as strings.

Enter Extension Methods.  They provide a way to extend List<MyCustomClass> (and, more generally, any type including generic types like List<T>).  So you can add a Parse(string) method to go from string form to List<MyCustomClass> form.  You can also add a ToString() method to go from a List<MyCustomClass> to a string.
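
A minimal sketch of what those extension methods might look like.  MyCustomClass, its Name/Value properties and the name=value serialization format are all assumptions here; the point is just the shape of the extension class.  (I’ve named the second method ToFileString rather than ToString because an extension method can’t hide the ToString() every object already has.)

using System;
using System.Collections.Generic;
using System.Text;

public class MyCustomClass
{
    public string Name { get; set; }
    public string Value { get; set; }
}

public static class MyCustomClassListExtensions
{
    // Parses "name=value" lines into the list (invoked on an existing instance).
    public static void Parse(this List<MyCustomClass> list, string fileContents)
    {
        foreach (string line in fileContents.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
        {
            string[] parts = line.Split(new[] { '=' }, 2);
            list.Add(new MyCustomClass { Name = parts[0], Value = parts.Length > 1 ? parts[1] : string.Empty });
        }
    }

    // Round-trips the list back to the name=value representation.
    public static string ToFileString(this List<MyCustomClass> list)
    {
        var builder = new StringBuilder();
        foreach (MyCustomClass item in list)
            builder.AppendLine(item.Name + "=" + item.Value);
        return builder.ToString();
    }
}

Usage is then myList.Parse(fileContentsAsString) – which also illustrates the peculiarity mentioned below: the method has to be called on an instance rather than on the type itself.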

The only peculiarity I’ve found so far is that an Extension Method must be invoked on an instance of the type even though it’s defined as a static method of a static class.  Not sure why the language designers chose to go this way but I’m sure there’s guidance on this out there somewhere.

Triple Monitor LCD Desk Stand

I’ve gotta tip my hat to Ergotech.  They’ve managed to make a Triple LCD Desk Stand (model 100-D16-B03) that’s easy to put together, adjust, mount and dismount. 

I opted for the three-way horizontal configuration over the four-way two-tiered configuration to minimize eye strain.


VS2008 – Online Help finally useful!

After installing Visual Studio 2008 I usually turn off 2 help options: “Partial Matching” and “Online Help”.

“Partial Matching” because it returns way too many results.

“Online Help” because it slows down searches.

While searching for the syntax to check a VARIANT_BOOL, I decided to give Online Help another try.  Lo and behold, it returned something useful!  Looks like specifications showing up on the MSDN library might not be a bad thing after all…


Taskbar not Grouping Buttons


I can’t recall exactly when but at some point in the past year my Taskbar stopped grouping similar buttons.

In the past few months I’ve reinstalled XP at least once.  I also started docking my Taskbar against the left edge of the screen.  Being an efficiency junkie, I’d noticed that most monitors have more horizontal space than vertical; docking the Taskbar to the left seemed a natural response to this. :)

But undoing this micro-space-optimization didn’t correct the problem.  Similar Taskbar buttons were still not being grouped together.

For a while I turned on “open each explorer window in a separate process” to help debug a rogue shell extension; an in-process server that didn’t ever seem to want to unload.  Since that was fixed a while ago I disabled “open each explorer window in a separate process”.  Alas, this had no effect.  Similar Taskbar buttons were still crowding out the Taskbar instead of grouping.

This might not be a big deal for most people but I like to have a lot of windows open at once.  20 is a minimum, 30 – 40 is pretty typical.  Once the existing Taskbar space is exhausted the button size gets halved.  When docked to the left edge of the screen this makes it all but impossible to determine what any given Taskbar button represents (is that Windows Explorer opened to c:\documents and settings\<username>\Local Settings\Temp ?  Or c:\documents and settings\<username>\My Documents\someproj\somelogs.txt ?). 

Clearly this is monumentally important stuff.  Fortunately there’s a workaround.  It turns out that the Taskbar, by default, tries to figure out when it should group similar buttons based partly on estimates of how much space a button needs.  Perhaps this estimate doesn’t work well with my non-standard, docked-to-the-left-edge, Taskbar.  Serves me right for using a non-standard configuration.  Still, there’s a registry fix documented here that allows you to override this and force grouping whenever 3 or more instances of a Taskbar button appear.

Welcome back – Taskbar buttons, let’s group hug!

Image Processing in the Spatial Domain

Although there’s a rich set of techniques for processing images in the frequency domain, spatial domain techniques are in widespread use.

For one thing, they’re easier to implement.

Spatial processing techniques can be broadly classified as either point processing or mask processing.

Point processing modifies the value of each pixel based solely on that pixel’s own value (the value is sometimes called a gray level for 8-bit images).  Examples include (see the sketch after this list):

  • Negation - each pixel level is subtracted from the max pixel level.
  • Contrast Stretching - a function is used to enhance/intensify certain regions of gray level over others.
  • Thresholding – Pixels with values exceeding the threshold are transformed into the maximum value, all others are zeroed out.
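
Here’s a minimal sketch of a point operation (thresholding), assuming the image is an 8-bit grayscale picture stored as a flat byte array; the representation is an assumption, but it shows each output pixel depending only on the corresponding input pixel’s value.

public static class PointOps
{
    // Thresholding as a point operation: each output pixel depends only on
    // the value (gray level) of the corresponding input pixel.
    public static byte[] Threshold(byte[] pixels, byte threshold)
    {
        byte[] result = new byte[pixels.Length];
        for (int i = 0; i < pixels.Length; i++)
            result[i] = (byte)(pixels[i] > threshold ? 255 : 0);
        return result;
    }
}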

Mask processing modifies the value of each pixel based on the values of all pixels in the neighborhood surrounding the pixel.  The neighborhood is usually a square or rectangle of odd dimensions.

Image Averaging is a kind of mask processing whereby each pixel is replaced by a weighted average of its neighbors.  This kind of processing is used to reduce some kinds of noise.  The downside is that it tends to blur sharp edges; usually sharp edges represent features of interest.

Since border pixels can’t be surrounded entirely by the mask (aka window or filter) the only way to get a perfectly filtered image is to accept a slightly smaller image as output.  Unfortunately this is usually unacceptable so various methods for padding are employed.

Averaging tends to suppress features that are smaller than the size of the mask.
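
A minimal sketch of mask processing using a 3x3 averaging mask over an 8-bit grayscale image stored as a row-major width-by-height byte array (again, the representation is an assumption).  To sidestep the border question it only filters interior pixels and copies the border unchanged – one of the simpler ways to dodge padding.

public static class MaskOps
{
    // 3x3 box (averaging) filter: each interior pixel becomes the mean of its
    // 3x3 neighborhood.  Border pixels are copied as-is rather than padded.
    public static byte[] Average3x3(byte[] pixels, int width, int height)
    {
        byte[] result = (byte[])pixels.Clone();
        for (int y = 1; y < height - 1; y++)
        {
            for (int x = 1; x < width - 1; x++)
            {
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        sum += pixels[(y + dy) * width + (x + dx)];
                result[y * width + x] = (byte)(sum / 9);
            }
        }
        return result;
    }
}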

Order Statistic Filters are non-linear filters that determine the value of a pixel based on some order statistic about its neighboring pixels.  The median is an order statistic (middle-most in rank order).

The median filter tends to suppress salt-and-pepper like noise without the side effect of blurring sharp edges.

Convolution confusion

So suppose you have a function that you’d like to modify.  Maybe you don’t like the way the function treats certain inputs.  Maybe you wish the function emphasized some aspect of its input over other aspects.  Maybe you just don’t think the function is aesthetically pleasing.

So you modify the function.  One way to modify the function is to add a constant value at every point.  This is a linear modification.  Since the Fourier transform of a function is based on integrating sinusoidal basis functions, and linearity is preserved in integration, you can make this modification in either the time domain OR the frequency domain.

Suppose adding a constant to the function at every point doesn’t modify the function to your liking.  It’s a pretty broad brush change.  Instead of enhancing a particular aspect of the function it, at least in the time domain, merely shifts the whole function up or down.

Another way to modify the function would be to multiply it by a constant value.  This scaling will also survive the translation to the frequency domain because the Integral(a * f(x) dx) is equal to a * Integral(f(x) dx).  This modification is similar to the first modification (adding a constant) in that it’s not selective at all.  It’s very broad brush.

What you really want to be able to do is to modify the input function selectively.  That is, you’d like to use another function to modify the original function.

Enter convolution.  Through some rather heroic mathematics this wonderful operation allows you to use another function, call it h(x), to modify your original function, call it f(x), to produce your output function, call it g(x).

Convolution is a binary operation that can be implemented by multiplying the Fourier transforms of the 2 operands.

If f(x) and h(x) are the original function and the modifying function in the time domain then F(u) and H(u) are their corresponding Fourier transforms (we call this the frequency domain).

f(x) convolve h(x) can be performed by multiplying F(u) * H(u) then taking the inverse Fourier transform to produce g(x) – the modified version of the original input.  H(u) has many names, one of which is the transfer function.
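
The same operation can also be written directly in the time domain as a flipped, sliding, weighted sum; the Fourier route is just a faster way to compute the same result (modulo zero-padding details).  A minimal sketch of direct 1-D discrete convolution:

public static class Convolution
{
    // Direct ("full") discrete convolution: g[n] = sum over k of f[k] * h[n - k].
    public static double[] Convolve(double[] f, double[] h)
    {
        double[] g = new double[f.Length + h.Length - 1];
        for (int n = 0; n < g.Length; n++)
        {
            for (int k = 0; k < f.Length; k++)
            {
                int j = n - k;
                if (j >= 0 && j < h.Length)
                    g[n] += f[k] * h[j];
            }
        }
        return g;
    }
}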

Once we’re in the frequency domain, we can make H(u) pretty much whatever we want.  The simplest non-trivial function would be one that multiplies F(u) by 1 when the frequency is below a certain cutoff and by 0 when it’s above.  This is an ideal low pass filter.  By ideal it is meant that there is a sharp cutoff – the signal is passed entirely when inside the cutoff and not passed at all when outside.

A key point to keep in mind is that H(u) operates in the frequency domain.  That is, we’re modifying frequencies, which will ultimately produce a modification in the time domain.  I believe the art of using the Fourier Transform is figuring out how to modify the function in the frequency domain in a way that produces the desired result in the time domain.

So, for instance, we know that sharp edges in the time/spatial domain require high frequency components in the frequency domain.  To enhance sharp edges via convolution requires that we filter out low frequency components.  To smooth/blur an image (that is, reduce sharp edges) requires the exact opposite (filtering out high frequency components instead).

Sharing and Storing Visual Studio Help Favorites

Tucked away inside the wonderful Tools –> “Import and Export Settings” dialog is the ability to export your current Help Favorites.  You do have Visual Studio’s Help (aka MSDN) installed locally, right? :)  There are thousands of precious milliseconds being wasted each time you click on an MSDN online link and wait for the page to reload.  With the local help the load time is perceptually instantaneous.

Help Favorites are one of the many settings that you can include in a settings export.  If it’s the only setting you include it’s just like exporting a list of favorites (bookmarks for you old-timers) to a file.

I think I’ll also end up using this to store sets of related help.  After a while my help favorites list becomes too long to be useful as a quick reference.  As a rule of thumb once a list has more than a dozen or so elements it takes longer to find the desired item in the list than to search for it.  Saving the help favorites from time to time and having them automatically indexed by Windows Search combines the best of both worlds.

Notes on Fourier Series

Stanford’s course on the Fourier Transform has, for the first 6 lectures, been almost entirely about Fourier Series.

Fourier Series can be used to represent any periodic phenomenon.  (Phenomena are called functions in math and signals in engineering.)

Periodic phenomena are broadly classified as either periodic in space (e.g., a ring) or periodic in time (e.g., a wave).  Periodicity arises from symmetry inherent in some property of the periodic phenomenon.

Prof Osgood is a genius.  His enthusiasm is surpassed only by his insight into mathematics.

The Fourier series is based on linear combinations of sine and cosine.  These are in turn based on the unit circle.  I mention the relationship to the unit circle because the unit circle isn’t the only possible basis for trigonometric-style functions.  For example, the hyperbolic sine is based on a hyperbola.

Any periodic function can be expressed as a linear combination (a sum) of these trigonometric basis functions.  The hard part is figuring out the coefficients to apply to each linear component of the sum.

For mathematical convenience, the exponential form of the sine and cosine are often used when deriving the coefficients of the Fourier series for a particular function.  There’s a lot of calculus involved here but, at least up to lecture 7, it’s all pretty basic plug and chug integration and differentiation.  Despite that, it makes me yearn for a refresher in things like Integration by Parts.  Thank God for Wikipedia and Schaum’s Outlines!
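
For reference, the exponential form and its coefficient integral, for a function f with period T, look like this (in LaTeX notation):

f(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{2\pi i n t / T},
\qquad
c_n = \frac{1}{T} \int_{0}^{T} f(t) \, e^{-2\pi i n t / T} \, dt

The c_n are exactly the coefficients all that plug-and-chug integration is chasing.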

Apparently the key breakthrough in the theoretical justification for Fourier Series occurred when mathematicians gave up on trying to prove that the Fourier series converges exactly to the function being represented.  Instead, they were able to prove that the mean squared error (the difference between the value of the function and its Fourier series) of the Fourier series for a given function, in the limit, approaches zero.

Sine and Cosine are both continuous and smooth (infinitely differentiable).  Because of this edges, which are either jump discontinuities or sharp changes in the derivative, require higher and higher frequency components to express.  I visualize this as higher and higher frequency sinusoids “bunching up” together to produce a sharp change in the value of the function at any given point.  The more such points in a function (e.g., the more edges), the more these high frequency sinusoids are needed to represent the function.

I believe it takes an infinite number of such high frequency components to perfectly reproduce any discontinuous function since the sum of a finite number of continuous functions is a continuous function.

In lecture 6 Prof Osgood formally bridges Fourier Transforms to Fourier Series.  The Fourier Transform is a limiting case of Fourier Series.  Limiting in the sense that the phenomenon need not be periodic (which means, mathematically, that the period tends to infinity).

Median of 5 numbers in 6 comparisons

Back in 2003 while studying for the Algorithms portion of the PhD Qualifying Examination in Computer Science at Georgia State, I ran across an algorithm so useful it cried out for implementation.

So, dredged from the archives of usenet, the code is below.  It’s vanilla C w/ preprocessor macros but should easily port to any other language.

// Change the type of pixelValue to suit your needs.
typedef int pixelValue;

#define OPT_SORT(a,b) { if ((a)<(b)) OPT_SWAP((a),(b)); }
#define OPT_SWAP(a,b) { pixelValue temp=(a);(a)=(b);(b)=temp; }

/*-----------------------------------------------------------------------
 * Function : opt_med5()
 * In       : A pointer to an array containing 5 pixel values
 * Out      : The median pixel value of the input array
 * Job      : Fast computation of the median of 5 pixel values in
 *            6 comparisons.
 * Notice   : The input array is modified: partly sorted so that the
 *            middle element is the median.
 *            No bound checking to gain time, it is up to the caller
 *            to make sure arrays are allocated.
 *---------------------------------------------------------------------*/
pixelValue opt_med5(pixelValue *p)
{
    OPT_SORT(p[0], p[1]);          /* now p[0] >= p[1] */
    OPT_SORT(p[2], p[3]);          /* now p[2] >= p[3] */

    if ( p[0] < p[2] )             /* make the pair led by p[0] the larger pair */
    {
        OPT_SWAP(p[0], p[2]);
        OPT_SWAP(p[1], p[3]);
    }

    OPT_SORT(p[1], p[4]);          /* now p[1] >= p[4] */

    if ( p[1] > p[2] )
    {
        if ( p[2] > p[4] ) return p[2];
        else               return p[4];
    }
    else
    {
        if ( p[1] > p[3] ) return p[1];
        else               return p[3];
    }
}

#undef OPT_SWAP
#undef OPT_SORT



RAID-0 vs a Standalone Hard Drive


I’ve been looking for a way to speed up secondary storage access.  Over the past few months I’ve looked at Solid State Drives, faster RPM spin drives (e.g., Western Digital’s VelociRaptor), bigger on-board cache (16MB vs 8MB), support for Native Command Queuing via the Advanced Host Controller Interface (AHCI) and, lastly, RAID-0.

Solid State Drives would probably offer the fastest overall read times.  Since one of my goals has been to shorten the boot cycle this is clearly a win.  Unfortunately they’re still a bit too pricey.

On the spin drive front, faster RPMs (assuming I’m not interested in SCSI) usually means going for a drive that operates at either 7200 RPMs or 10,000 RPMs.  Western Digital’s VelociRaptor spins at 10,000 RPMs and blows away every other drive in its class in benchmarks.  It’s not too pricey either coming in at $160 for the 150GB model at the local Frys.

Native Command Queuing (NCQ) places requests in a queue (up to ~32 requests) and allows them to be re-ordered to minimize the amount of repositioning necessary by the drive heads to satisfy all of the requests.  Mainstream CPUs use a similar optimization but call it out-of-order execution.  NCQ provides the biggest benefit when the workload results in HD I/O requests that are widely distributed across the disk.  While I think this would probably increase overall throughput, even in a desktop scenario, I didn’t think it would provide nearly as much bang as RAID-0 striping.  Throw in regular fragmentation and the locality of reference principle and NCQ looks less compelling.

Which brings me to the scenario I opted to try (RAID-0).  Why RAID-0?  RAID-0 spreads the data across each of the drives in the volume.  In theory this should yield a linear increase in throughput since writes and reads can occur simultaneously, minus some overhead, across the drives.

Thanks to the wonderful Intel Matrix Storage technology RAID support is “baked into” the motherboard and natively supported by Windows Vista.  By natively supported I mean that Vista can be installed onto a RAID volume out-of-the-box (XP could do this but required the user to provide drivers during the setup process).

On my board, CTRL+I during the boot up sequence brings up the Intel Matrix Storage Manager Console.  This allows you to define RAID volumes.  My setup is as follows:

RAID-0 Volume 0 = 3 Western Digital 160GB Caviar Blue drives

RAID-0 Volume 1 = 2 160GB drives (1 Seagate, 1 WD)

OEM versions of the WD drives sell for $44 so this actually ends up being cheaper than a single 150GB VelociRaptor.  It also has more than 3 times the space (RAID-0).

The results?  See for yourself below.

Standalone System Drive (No RAID):

(benchmark screenshot)

Three-Way RAID-0 System Drive (32k stripe size):

(benchmark screenshot)

I tend to create a separate data partition to minimize contention with the system cache and program loading.  So instead of throwing out the old system and data drives I combined them into a second RAID-0 volume and use that for data.

Standalone Data Drive (No RAID):

(benchmark screenshot)

Two-Way RAID-0 Data Drive (128k stripe size):

(benchmark screenshot)

Unfortunately the axes on the screenshots were automatically scaled to different ranges.  The following table highlights the results.

                            Throughput (MB/s)   Access Time (ms)   Burst Rate (MB/s)   CPU Usage
Standalone System Drive     54.7                18.5               68.1                2.6
3-way RAID-0 System Drive   130.1               15.9               75.7                8.9
Improvement                 +75.4               +2.6               +7.6                -6.3

Standalone Data Drive       56.6                17.7               61.3                2.2
2-way RAID-0 Data Drive     70.5                18.6               61.0                3.1
Improvement                 +13.9               -0.9               -0.3                -0.9

For the system drive Three-Way RAID-0 is a clear winner except for CPU Usage but with 4 hyperthreaded cores who cares? :)

Two-Way RAID-0 improves throughput but barely pulls even with the standalone data drive.  This may be due in part to the differing stripe size (128k vs 32k), drive mismatch (the drives in the 3-way setup are identical, the drives in the 2-way setup are not) or other factors.

At 130 MB/s average throughput the Three-Way RAID-0 setup is pushing in the neighborhood of 1 Gb/s.  SATA II has a theoretical bandwidth of 3 Gb/s.  I wonder if a Nine-Way RAID-0 would be enough to nearly saturate the SATA II link while still being usable?…

As for data backup, I’m taking a few approaches: a nightly file-based backup to an external HD and Windows Live Sync to a laptop for certain key directories.  Windows Live Sync provides nearly real-time backup to another machine but each directory has to be configured manually and has a limit of 20,000 entries.  The theoretically less reliable Three-Way RAID-0 System Drive gets imaged nightly to an external hard drive.  Since it’s a system drive it’s set up to be rebuildable from installation media so a loss is less critical than it would be for items on the data drive.

Learning the Fourier Transform in the 21st Century

Stanford has put several of its courses on YouTube.  MIT has done the same.  All free.  Absolutely amazing.

Since YouTube works on phones that are not the iPhone (e.g., devices running Windows Mobile) these courses can be listened to from practically anywhere.

First up, the wonderful Fourier Transform.

Concurrency: So Easy it’s Hard

There’s a seemingly perpetual undercurrent of buzz in the software world about simplifying concurrency (aka multithreading, parallelism, etc…). 

Since software makers are in the business of making software, many of these solutions focus on simplifying the software involved in the implementation of concurrency.  For example, the next version of the .NET Framework, .NET 4.0, adds parallel extensions.  Old mainstays such as PVM and MPI are message-passing libraries for C (and Fortran) designed to simplify concurrency.  Java shipped with its own threading API.  There are many others that can be added to this list (e.g., Stackless Python).

I’d say that the problem of creating concurrency, that is, creating a system that depends on multiple sequences of instructions executing simultaneously, has been solved so well that it’s been solved too well.  It’s so easy to create concurrency that we can do it without realizing it (e.g., event driven programming).

Meanwhile, the difficulties associated with concurrency do not typically stem from lack of library (or syntax) support.  Concurrency is hard because it’s hard to model/design in a way that is both correct and maintainable.

This is not, per se, a problem of software libraries or syntax.  It’s a problem of comprehension and communicating comprehension.  The difficulty of modeling concurrent processes is to be expected given that our most basic modeling tool, the human brain, is not very good at executing multiple complex tasks simultaneously.  Sequential thinking, for complex tasks, is so rooted in our psychology that it can be used to our advantage: if you don’t want to be distracted by random thoughts then focus on a single thought because “you can only focus on a single thing at a time”.

The solution, in my opinion, is to do what we always do when the number of mental widgets necessary to successfully handle a task exceeds the ability of most individual brains; come up with concurrency focused abstractions that gloss over as much non-concurrency related detail as possible while remaining useful.

In this case, since the problem is design and communication of design, the focus should be on improved notation.  We need notation that makes shared state leap off the page.  Notation that makes it clear when our designs may result in deadlocks.  Notation that calls attention to results that depend on the order in which multiple threads of execution complete (race conditions).  Perhaps most importantly we need notation that is widely used so that the complex orchestration of instruction streams we call concurrency can not only be created but modified, adapted, extended, etc… as well.

A Quick Note on Correlation

The correlation coefficient of 2 variables measures the strength and direction of the linear relationship between the 2 variables.
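
For concreteness, the (Pearson) correlation coefficient computed from n paired samples of the 2 variables is (in LaTeX notation):

r_{xy} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
              {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2} \, \sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}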

Strength: Expressed visually by how “line-like” a plot of both variables will appear.  Lines are thin (infinitely thin in the most abstract sense).  Strength is indicated by how close the absolute value of the correlation coefficient is to 1.

Direction: Do the variables rise and fall together?  Or does one variable fall as the other rises?  This is indicated by the sign of the correlation coefficient (positive or negative respectively).

Converse considerations.  While a non-zero correlation coefficient implies some degree of linear dependence between the 2 variables, a correlation coefficient of 0 does not imply independence.

  • The relationship might be non-linear.  The correlation coefficient identifies linear relationships.

Why did I bother to write this note?

Given a linear relationship between 2 variables (or, put another way, a linear dependence of one variable on another) one variable might be used to predict the other.  If the variables in question represent measurements then this can be incredibly valuable because some measurements are much harder to perform than others.  Substituting a prediction based on a cheap measurement for an expensive actual measurement can yield cost savings.  Cost might be “measured” in dollars, time or some other way so this statistic can turn out to be valuable in many industries/disciplines.

Running Fedora 11 in Microsoft Virtual PC 2007

Aside from a few quirks during the install (e.g., the partitioning tool kept trying to format the install partition, which makes installation crash immediately after it succeeds), I’ve been pretty impressed with just how easy Fedora is to use.

For the most part the System menu looks like a clone of Windows’ Control Panel (for XP).  It even has an “Add/Remove Programs” application though, to be fair, that’s a pretty generic term.

To get the window manager to display properly I keep having to manually add the “vga=791” boot parameter at boot.  So I figured I’d edit it using the handy “Bootloader” application that’s in Fedora’s System menu.

It starts, asks me for the root password, then promptly crashes!  On the plus side, at least it crashed with a helpful stacktrace complaining of a missing module called “kudzu”.  I’ve never heard of kudzu but the GUI package manager (“Add/Remove Software”) was able to find it about 10 mins after it started updating its list of software.

Alas, it was not to be.  I tried starting the BootLoader app but was never greeted with a user interface for editing the bootloader.  Ah well, grub uses a menu.lst file for most configuration so I’ll have to edit that manually.

The saga continues…

Visual Studio Efficiency Tip

While working on code it’s sometimes helpful to be able to look at more than one section of code at a time. 

For example, when you’re creating a method that’s similar to an already existing method it helps to be able to see both the old and new methods at the same time. 

Another example is when you’re moving code from one method to another.  In this case, seeing both code sections cuts down on mistakes (less worry about things like “where did I just cut that block of code from…”).

One of my favorite ways to deal with this is to split the code editor window.  I stumbled on it while using Word to edit a document, tried it in Visual Studio and presto! It worked!

Being a keyboard shortcut-o-phile, I’ve set up a shortcut for jumping from one part of the split window to the other (Tools –> Options –> Environment –> Keyboard: Window.NextSplitPane => CTRL+W, CTRL+N).  Since the location of the splitter varies depending on which part of the code needs more space, I tend to establish the split with the mouse.

(screenshot: where to grab the splitter in the code editor)

To split the window with the mouse move the cursor to the area circled in green in the image above (above the scrollbar) then click-drag the splitter.  The result, shown below, will be an independently scrollable editor pane.

(screenshot: the split code editor)

Trees in SQL Databases

The Tree is a widely used data structure.  It’s found all over the place from industry (e.g., organizational charts are usually trees) to genealogy (e.g., family trees) to biology (e.g., phylogenetic trees).

Trees have been heavily studied by Computer Scientists since data stored in trees can often be located very quickly (relative to the total amount of data).  Trees can also be used as a design tool to identify highly parallelizable components of an algorithm.

SQL databases are designed around the relational model.  The benefits of SQL are many but if we’re to reap these benefits we’ve got to find a way to get the world’s data into a form congenial to the relational model.

Converting an existing data model into relational form usually entails identifying the entities and their relationships.  These are then mapped to tables, keys and foreign keys.

Trees are a little bit different in that they are typically modeled as an entity with a relationship to itself.

(figure: the FamilyMembers table – SSN as primary key, plus a nullable ParentSSN column)

The nodes of a tree exhibit a 1-to-many relationship.  Each child has a single parent; each parent can have 0 or more children.  In the relational model each node is represented as a row.  The parent of a node, if any, is stored in the ParentSSN field.  The SSN is the primary key for the table so each row is required to have one.

The ParentSSN field allows nulls because not every node has a parent (e.g., the root node has no parent).

While trees can be represented in SQL databases in this way, the hierarchical nature of the tree makes querying it via SQL somewhat more difficult than querying other kinds of data.
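
One common workaround is to pull the rows back with ordinary SQL and rebuild the tree in application code.  A minimal C# sketch, assuming rows shaped like the FamilyMembers table above (an SSN key plus a nullable ParentSSN) have already been read into memory:

using System.Collections.Generic;

public class FamilyMemberNode
{
    public string Ssn;                  // primary key
    public string ParentSsn;            // null for the root
    public List<FamilyMemberNode> Children = new List<FamilyMemberNode>();
}

public static class TreeBuilder
{
    // Wires up parent/child links and returns the root (the node with no parent).
    public static FamilyMemberNode Build(IEnumerable<FamilyMemberNode> rows)
    {
        var bySsn = new Dictionary<string, FamilyMemberNode>();
        foreach (FamilyMemberNode row in rows)
            bySsn[row.Ssn] = row;

        FamilyMemberNode root = null;
        foreach (FamilyMemberNode row in bySsn.Values)
        {
            if (row.ParentSsn == null)
                root = row;
            else
                bySsn[row.ParentSsn].Children.Add(row);
        }
        return root;
    }
}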

Oracle supports hierarchical queries via the CONNECT BY keyword.  SQL Server 2008 added the hierarchyid type for hierarchical data, and recursive common table expressions (available since SQL Server 2005) can also walk parent-child tables like this one.

LINQing away boilerplate collection transformations

Forcing a list of strings to upper case before LINQ:

List<string> list = new List<string> { "element1", "element2", "element3" };
List<string> listCopy = new List<string>();

foreach (string listElement in list)
    listCopy.Add(listElement.ToUpper());

foreach (string upperCaseListElement in listCopy)
{
    // ...
}

Since LINQ supports transforming sequences inline, this can be represented much more succinctly with the following:

foreach (string upperCaseElement in list.Select(el => el.ToUpper()))
{
    // ...
}

This expressivity is enabled by C# 3.0’s support for lambda expressions and the standard query operators.

Keeping Music on Your Desktop and Your Laptop No Matter Which One You Used to Buy It

I have to take note of an excellent utility buried in the Windows Live software stack for synchronizing folders.  It’s discussed in a SuperUser.com posting (Keeping folders synced between several machines - Super User).

Synchronization (or sync for short) means different things to different people so some clarification is in order.  Like many people I have multiple computers at home.  Sometimes I’ll buy music (mostly from amazon.com/mp3 these days) while using my laptop.  Sometimes I’ll buy it while using my desktop.

Unfortunately Vista doesn’t come with a built-in way to automatically keep these 2 music folders in sync.  It’s got plenty of functionality grouped under the rubric of synchronization but none of it lets you automatically keep two already existing folders in sync.

This problem is probably so common that I’ll give it a name: “Playlist Frustration Syndrome” (PFS).  The main symptom is awkward pauses as your desktop computer unsuccessfully tries to play music you bought while using your laptop.  Other symptoms include PC Rage, spontaneous swearing, manual copying and, lastly, disillusionment with the whole concept of multimedia convergence.

Fear not: Windows Live Sync (formerly FolderShare) is the cure for this and other ills.  I haven’t explored those other ills so won’t comment on them in this post.

Windows Live Sync supports 2 types of sharing: for keeping folders on a LAN in sync use the “create personal folders” option.  The personal folder option provides a way to map folder A on computer A with any other computer + folder combination.

In my case, for music this means mapping:

DESKTOP\music to LAPTOP\music

Since it uses Peer-to-Peer networking no information needs to be sent over the internet in this scenario.

They even wisely used Universal Plug-and-Play, so if you have a router that supports it (and it’s turned on) then any firewall settings are automatically taken care of.

This was definitely a smart acquisition on Microsoft’s part.

Software vs Terrorism

Ran across an article about software that’s improving intelligence analysis and has already saved lives.  Although interesting for many reasons I think it highlights the significance of a few trends in software development:

  1. Building software is an iterative process.  To get it right you need frequent input from the end-users.  This is a key distinction between agile methods and classic waterfall design.  According to the article, “Every other week for about two years, the engineers returned to Washington with a revised product, based on analysts’ requests.”
  2. Rapid application development.  You can’t revise a product every 2 weeks unless you’ve got tools capable of supporting that level of productivity.  I would be very surprised if the company’s flagship product didn’t have large components written in one of the more productive languages (e.g., Java, C#, Visual Basic, etc…).
  3. Software Engineers have to be able to work with end-users and subject matter experts to create better software.
  4. The user interface is important.  A key feature of the software is something that anyone who uses google takes for granted: the ability to search several databases conveniently.
  5. Lightweight categorization (aka tagging) can produce better results than up-front, deep hierarchies.  Tagging is easy for users to do (so they are more likely to do it) and flexibly accommodates changing categorization needs.  Instead of having a team of experts build a deep semantic hierarchy that’s out of date within a week, tagging lends itself to a bottom-up approach.  All the end user has to do is tag the information using whatever terminology makes sense to them.  Software is then used to identify clusters in the data.

I’m sure there are other lessons to be learned from the success of Palantir.  It’s great to see software developed using modern methodologies put to such positive use in such a short time.

Running Fedora 11 in Microsoft Virtual PC 2007 SP1

There are some wonderful open source network monitoring tools out there.  A few that I want to try only run in a unix/linux environment.  I don’t have any spare hardware nearby so I decided to install linux in a virtual machine.

Getting Fedora (RedHat’s free linux distribution) to install and run in a Microsoft Virtual PC 2007 SP1 Virtual Machine was somewhat less than painless.  On a Core 2 Duo Laptop it required the following steps:

  • Disable hardware virtualization in the BIOS.  Not sure why, Debian seemed to work with it enabled.
  • Append noapic nolapic noreplace-paravirt to the options passed to the kernel on boot.  This was done by pressing tab at boot and manually adding the options to the end of the kernel boot line.  This prevents several “unrecoverable error system will restart” and “kernel panic” messages on boot.
  • After install but before running the installed OS, append vga=791 to avoid unreadable/pixelated graphics.

Most of these steps are outlined in this very helpful post at Kartones Blog (though he was working with Debian).

Anyone who’s had to go through this pain is asked, “Why didn’t you use {VMWare, VirtualBox, Xen, etc…}?”  Virtual PC 2007 is free and was already on the local network.

The Virtues of Virtualization

Back in the bad old days, say 5+ years ago, when an enthusiast user wanted to run more than one operating system he/she would run a multi-boot setup. That is, the computer would boot into a boot manager that allowed the user to choose which operating system to run.

Although this worked it was kind of a pain to maintain. Recovery was dangerous as one operating system tended to overwrite the boot setup of any other operating system. It also, usually, required separate disk partitions. Deciding how much space to allocate for each operating system was a black art that always seemed to leave lots of space unusable because it had been allocated to one of the other operating systems.

Over time setup and maintenance became easier as operating systems increased their support for multiboot scenarios. Linux users are laughing at that statement since linux has always existed in a heterogeneous environment and has pretty much always had support for multiboot (via various boot loaders like grub or lilo). Windows may have been a little later to the party but boot.ini has been around since NT.

Even with improved OS support for playing nice with multiple boot scenarios there's something fundamentally unsatisfying about multiboot. One of the most common reasons to run more than one operating system on a single computer is to be able to test software on a different OS. A similar task is to be able to test a web app in multiple browsers on different OSes. Yet under multiboot each "test" requires a machine reboot. This is an incredibly inefficient way to test. Because it's so inefficient it inhibits the frequent "make small change-test change-repeat" cycle that is key to reducing bugs.

Enter virtualization. Instead of having a single computer run a single operating system at a time, virtualization allows a single computer to run multiple operating systems simultaneously. The first operating system that boots is called the host operating system. Subsequently run operating systems are called guest OSes.

Virtualization allows a single computer to run multiple operating systems simultaneously through the magic of software. The key piece of software is the virtual machine. It is exactly what it sounds like. It is a piece of software whose sole purpose is to fool an operating system into believing it is running on real hardware.

A non-virtual machine, e.g., a computer you might buy from Dell, has a CPU, chipset, memory, a hard disk and peripherals. All of these are physical devices. A virtual machine has the virtual equivalents of these; a virtual chipset, virtual video card, virtual hard disk, etc...

There are several vendors that make virtualization software. Each provides their own flavor of virtual hardware. That is, VMWare provides a different virtual video card than Microsoft's Virtual PC, which is different from the one provided by Sun's VirtualBox.

Virtualization is similar but not identical to emulation. Emulation has been around for years. You can take a quad core, 64bit machine and run software to emulate a Commodore 64. I'm not quite sure why people do this but am told it's all the rage in PC nostalgia. The difference between emulation and virtualization is that virtualization depends on a tighter integration between the virtual machine and the actual machine. An emulator runs in a process like any other; the emulation software runs in user mode, its threads are scheduled like any other user thread.

When a virtual machine is running an operating system, it is executing in the same privileged mode (with a few exceptions) as the host operating system. Programs running in the guest operating system can be pre-empted by the guest operating system. This improves performance and contributes to an illusion that also distinguishes a virtual machine from emulator software: the guest operating system need not be aware that it is running in a virtual machine.

Like physical machines, a virtual machine must have an operating system installed to profitably take advantage of the system. Unlike physical machines, the virtual machine doesn't need a physical hard disk; it uses a virtual hard disk created by the virtualization software. The flip side of this is that virtual machines can only use virtualized hardware. Most virtualization software provides support for virtualizing network adapters, USB, printers, serial ports and a few other commonly used peripherals. If the virtualization software doesn't provide a way to virtualize a given piece of hardware then that hardware probably can't be used in a virtual machine.

The road to LINQ, Part 1

Sometimes a program doesn't know what it needs until the user asks for it. When the primary purpose for a program's existence is to find things when the user asks for them then the program can be, generally, classified as a query tool.

Query tools are found all over the place. Practically every business in every industry has its own way of generating, describing, storing and querying data.

In years gone by the data might have been generated when someone filled out a form. The description of the data would be the form's labels and fields. Form storage was provided by file cabinets. Back in those days the query tool was the receptionist, file clerk or whoever happened to know how to track down the desired bit of information.

These days computer programs are the query tools. One problem common to pretty much every query tool is how to conduct its search in a way that will flexibly accommodate a wide variety of user requests. Put another way; how do I give you a program that lets you find what you're looking for without me having to modify the program every time you're looking for something else?

Fortunately an elegant solution to this problem was invented long ago. It's called SQL.

Unfortunately, SQL requires data to be stored in a very specific form. That form, the relational model, though powerful, takes some getting used to. So although an elegant solution to the problem of flexibly querying data has existed for a long time, even to this day much of that data is not in a form that can benefit from SQL.

Why is SQL elegant? In essence, it is elegant because it succinctly represents a higher level abstraction that can be applied to almost any kind of data. SQL operates at a level of abstraction closer to the way human beings formulate questions.

With SQL the user specifies what it is they're looking for. The significance of this may not be obvious to non-computer programmers but for programmers it's a radical notion. A program is a sequence of instructions that specify how to do something. Computer programmers have for decades been in the business of telling computers how to do some set of tasks.

For example, a college professor may want to know the names of all students that passed his most recent exam. A traditional program written to answer this question focuses almost entirely on the how. Pseudocode for that program might look like the following:

open grades file
loop over each grade entry
if the grade entry is above 70 copy the entry to the passed list
end loop
print every entry in the passed list
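
Translated into C#, the how-centric version might look something like the sketch below. GradeEntry and LoadGrades() are hypothetical stand-ins for whatever class and file-reading code you'd actually have; assume the usual System and System.Collections.Generic usings.

// GradeEntry and LoadGrades() are made up for illustration.
List<GradeEntry> grades = LoadGrades();               // open grades file
List<GradeEntry> passed = new List<GradeEntry>();

foreach (GradeEntry entry in grades)                  // loop over each grade entry
{
    if (entry.Grade > 70)                             // if the grade entry is above 70...
        passed.Add(entry);                            // ...copy the entry to the passed list
}

foreach (GradeEntry entry in passed)                  // print every entry in the passed list
    Console.WriteLine(entry.FirstName + " " + entry.LastName);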

While that's perfectly comprehensible to pretty much any programmer, it's pretty far removed from the way the professor himself might express his desire. In SQL this might be:

select firstname, lastname from grades where grade > 70

This is clearly much closer to what the professor had in mind. He wants the names of every student that passed his exam. A passing grade on the exam is one that is higher than 70.

The how is entirely missing from the SQL version. The cost of this convenience is that the grades have to be stored in relational form because SQL can only query data in relational form.

As the information age steamrolls on, the fact that most of the world's data is not in relational form becomes a growing problem. From the example above it's clear that writing the how is more difficult than writing the what, and more difficulty means more errors will occur.

Another problem inhibiting the use of SQL to tame the onslaught of data that characterizes the information age is that it represents a paradigm shift for generations of computer programmers. If you got a CS degree before 1995 there's a decent chance you made it through without ever writing a line of SQL.

Remotely debugging managed code

So, with hopes of debugging managed code, you're running the Visual Studio Remote Debugger Monitor (msvsmon.exe) on the remote machine. Unfortunately, it will not let you connect to it from Visual Studio running on your local machine. You've tried the following steps to no avail:
  • You're running it as an administrator (or member of the administrators group) on an XP-SP* machine.
  • Your firewall has an exception for msvsmon.exe.
  • msvsmon.exe is using the default authentication setting (Windows Authentication).
In other words, you've read the documentation and followed the steps contained therein.

Despite these valiant efforts every time you try to connect to the remote debugger monitor from Visual Studio (ctrl+alt+p, transport=default, host set to the remote machine's WINS name) a wonderfully unhelpful error dialog pops up informing you that it can't find the remote debugger monitor.

A quick way to work around this is to:
  1. Create an account on the remote machine with the same username and password as the account on your local machine that is running Visual Studio.
  2. Put the account into the Administrators group on the remote machine.
  3. Run msvsmon.exe (it can be on a remote share) using the account you just created. This can be done by right clicking msvsmon.exe, choosing "Run As" then changing the logon and password.
As far as I can tell, the problem is that msvsmon.exe will only let you connect to it (with the transport that supports debugging managed code) using the credentials of the user under which it is running. Although it allows you to debug other users' processes once you've connected, it will not let you connect unless you're running it as you.

Yet another reason to love Google Reader

Over the past year or so more and more of the reading I do on the web is being done through Google Reader.

This has been entirely unintentional. It just happens to be very handy to have a single destination that gets updated frequently throughout the day.

Part of this accidental dependence is the result of good design. Google Reader devotes most of its space to the intended content (news, blogs, headlines, etc...).

It makes liberal use of AJAX to make the UI incredibly responsive.

It even has keyboard shortcuts though those haven't grown on me yet.

They've also got several nice bundles of content that make it easy to find interesting articles.

It even plays nice with mobile phones that are not the iPhone. It is hard to overstate how amazing it is that Google Reader works with Pocket Internet Explorer 5! Nothing works with Pocket IE5 except sites that assiduously restrict themselves to whatever meager fraction of WAP-friendly HTML is grokked by Pocket IE5.

Not only does it play nice with Pocket IE5 it even makes other sites play nice with Pocket IE5 by stripping away all of the HTML goodness that PIE5 can't deal with.

I'm not sure I could go back to standing in line without Google Reader on Windows Mobile (or something like it).

Today I discovered that they didn't forget their "raison d'être". The convenient search bar at the top of the Reader homepage searches, among other things, any of the articles you've ever read in Google Reader! So it's a great way to track down that handy link about something or other you read a while ago but forgot about.

Achtung Baby! (Warnings are your friend)

I'm busy with a release this week so the posts will be slim.

As a release nears the time available to get anything done decreases and the temptation to ignore "little things" increases. Compiler warnings tend to get ranked lower on the "need to fix" scale when deadlines loom large.

A recent build gave me a cautionary tale about why compiler warnings are important even when time is short. While developing a class I realized that it needed a few more properties. So I added the properties to the class, added parameters for the properties to the constructor and completely forgot to set the properties in the constructor.

C# dutifully warned me that "parameterX is not being used". Unfortunately this streamed by in the output window and was lost amidst the flurry of other warnings that have crept into the source code over the past few years. This resulted in several minutes wasted scratching my head over why the property was never changed from its default value even though it was clearly being passed a new value in the constructor.
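
A contrived sketch of the mistake (the class and names are invented for illustration):

// Contrived illustration - the class and names are made up.
public class ReportSettings
{
    private int timeoutSeconds = 30;            // the default value

    public int TimeoutSeconds
    {
        get { return timeoutSeconds; }
    }

    // The parameter was added along with the property...
    public ReportSettings(int timeoutSeconds)
    {
        // ...but the assignment was forgotten, so the property silently keeps its default.
        // this.timeoutSeconds = timeoutSeconds;
    }
}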

Gotta take advantage of the freebies that life gives you. Compiler warnings are freebies - it's foolish to ignore them.

When you need to write lots of code in a short amount of time

I needed to port a fairly sizable VB6 app to WinForms/C# in a short amount of time (~2months). One of the tricks to writing this much code quickly was to:
  1. Port 3 - 5 menu elements that were representative of most of the menu elements.
  2. By the time you get to the 3rd menu element some common patterns emerge.
  3. Put those common patterns into a code snippet and reuse the code snippet for the remaining menu elements.
  4. Repeat steps 1 - 3 for each "class" of menu element (where a class includes all menu elements that execute similarly).

Exception handling in event handlers was low-hanging fruit. Sometimes an exception occurs at a location where I have information that can make the error message more useful. E.g., an IO exception that occurs in one part of a multistep process makes more sense if that process is identified in the error message.

So I made a code snippet with a try/catch (ApplicationException)/catch (Exception). If the exception can be made more useful then it gets wrapped in an ApplicationException-derived exception, otherwise it's just allowed to bubble up.

This code snippet would be the first thing I'd include in any new menu event handler.
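
The snippet boiled down to something like the following sketch. The handler lives in the form, and StepFailedException is a made-up ApplicationException-derived type standing in for whatever context-carrying exception makes sense.

// Made-up ApplicationException-derived type used for illustration.
public class StepFailedException : ApplicationException
{
    public StepFailedException(string message, Exception inner) : base(message, inner) { }
}

private void someMenuItem_Click(object sender, EventArgs e)
{
    try
    {
        // ... gather input from the dialog, call into the library, handle the output ...
    }
    catch (ApplicationException)
    {
        // Already carries useful context - just let it bubble up.
        throw;
    }
    catch (Exception ex)
    {
        // We know which step was running here, so wrap with that context.
        throw new StepFailedException("Exporting the results file failed.", ex);
    }
}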

Another trick was taking advantage of the DynamicInvoke() method of delegates. Most of the menu event handlers opened a dialog window, retrieved input from the user then passed that input to a method call in another library. The library would create a new output file as a result. There were 40 - 60 of these kinds of dialog boxes.

Instead of writing this input setup/marshalling and output handling separately for each one, it was easier to have each dialog package its input, along with a delegate to the underlying library call, and hand both to a single method. This method would execute the call (via DynamicInvoke()) and return the results to the caller.

Turns out that this "wrapper around DynamicInvoke()" ended up being a natural place for adding context to error messages. This context makes it much easier to debug when you receive a screenshot of the exception message from a client in the field.
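
The wrapper amounted to something like the sketch below; the names and the single description string are illustrative, not the actual code. Note that DynamicInvoke() wraps anything thrown by the target in a TargetInvocationException (from System.Reflection), which is the natural place to attach the context.

// Illustrative sketch - executes a library call packaged up by a dialog and adds context to failures.
public static object ExecuteLibraryCall(string description, Delegate libraryCall, params object[] args)
{
    try
    {
        // One method can execute any of the 40 - 60 different library calls.
        return libraryCall.DynamicInvoke(args);
    }
    catch (TargetInvocationException ex)
    {
        // Unwrap the real exception and add context for the error message.
        throw new ApplicationException(description + " failed.", ex.InnerException);
    }
}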

A final trick was to make use of Visual Inheritance. Since so many of the menu elements open a dialog window, get input from the user, execute one or more steps, etc., some of this functionality was combined into a base form from which all other dialogs derived. The base form took care of positioning the OK and Cancel buttons, showing a descriptive paragraph near the top of the form and incorporating progress into the status bar of the dialog window. Each of the 40 - 60 dialog windows then derived from the base form.
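
As a rough sketch (names invented, and assuming the usual System.Windows.Forms using), the base form looked conceptually like:

// Base dialog: hosts the common OK/Cancel buttons, description text and status bar.
public class TaskDialogBase : Form
{
    protected Button okButton = new Button();
    protected Button cancelButton = new Button();
    protected Label descriptionLabel = new Label();
    protected StatusBar statusBar = new StatusBar();

    public TaskDialogBase()
    {
        // ... add the controls, position OK/Cancel, set AcceptButton/CancelButton, etc. ...
    }

    // Derived dialogs report progress through the shared status bar.
    protected void SetStatus(string statusText)
    {
        statusBar.Text = statusText;
    }
}

// Each of the 40 - 60 dialogs then adds only what is specific to it.
public class ExportResultsDialog : TaskDialogBase
{
    // ... dialog-specific controls and the call into the library ...
}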

Calling methods on an ActiveX control from Managed Code/.NET

.NET 2.0 introduced BackgroundWorker. I have come to love and rely on this little gem to maintain UI responsiveness while a long running task executes in the background.

What happens if the long running task that needs to execute in the background must run in a Single Threaded Apartment? This might happen if you're doing image processing and the library that handles the heavy lifting is an ActiveX control.

Since it has a visual representation it must run in a Single Threaded Apartment. But BackgroundWorker does its background work on a thread that does not run in a Single Threaded Apartment.

To address this I wrote a shameless rip of BackgroundWorker. The only significant difference is that it does its background work on a thread that runs in a single threaded apartment.

Hopefully the .NET devs won't mind. Imitation is the sincerest form of flattery.

A colorized and formatted version of the code below for StaBackgroundWorker can be found here.
using System;
using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

/// <summary>
/// Similar to BackgroundWorker except that it does its work on a Single Threaded Apartment thread.
/// </summary>
public class StaBackgroundWorker
{
    public event System.ComponentModel.DoWorkEventHandler DoWork;
    public event System.ComponentModel.ProgressChangedEventHandler ProgressChanged;
    public event System.ComponentModel.RunWorkerCompletedEventHandler RunWorkerCompleted;

    // The control whose thread ProgressChanged and RunWorkerCompleted are marshalled back onto.
    private Control creatorControl;

    public StaBackgroundWorker(Control creatorControl)
    {
        this.creatorControl = creatorControl;
    }

    public void RunWorkerAsync()
    {
        RunWorkerAsync(null);
    }

    public void RunWorkerAsync(object userState)
    {
        // Unlike BackgroundWorker, spin up a dedicated thread and put it in an STA.
        Thread staThread = new Thread(new ParameterizedThreadStart(RunWorkerAsyncThreadFunc));
        staThread.SetApartmentState(ApartmentState.STA);
        staThread.Start(userState);
    }

    private void RunWorkerAsyncThreadFunc(object userState)
    {
        DoWorkEventArgs doWorkEventArgs = new DoWorkEventArgs(userState);
        Exception doWorkException = null;

        try
        {
            OnDoWork(doWorkEventArgs);
        }
        catch (Exception ex)
        {
            // Hold onto the exception so it can be reported via RunWorkerCompletedEventArgs.Error.
            doWorkException = ex;
        }

        RunWorkerCompletedEventArgs workerCompletedEventArgs =
            new RunWorkerCompletedEventArgs(doWorkEventArgs.Result, doWorkException, doWorkEventArgs.Cancel);

        // Marshal the completion callback onto the thread that created the control that created us.
        creatorControl.Invoke(new MethodInvoker(delegate() { OnRunWorkerCompleted(workerCompletedEventArgs); }));
    }

    protected virtual void OnDoWork(DoWorkEventArgs e)
    {
        if (DoWork != null)
            DoWork(this, e);
    }

    // Written from the UI thread (CancelAsync) and read from the worker thread, hence volatile.
    private volatile bool cancellationPending;

    public bool CancellationPending
    {
        get { return cancellationPending; }
    }

    public void CancelAsync()
    {
        cancellationPending = true;
    }

    public void ReportProgress(int percentComplete, object userState)
    {
        ProgressChangedEventArgs e = new ProgressChangedEventArgs(percentComplete, userState);

        // Marshal this call onto the thread that created the control that created us.
        creatorControl.Invoke(new MethodInvoker(delegate() { OnProgressChanged(e); }));
    }

    protected virtual void OnProgressChanged(ProgressChangedEventArgs e)
    {
        if (ProgressChanged != null)
            ProgressChanged(this, e);
    }

    protected virtual void OnRunWorkerCompleted(RunWorkerCompletedEventArgs e)
    {
        if (RunWorkerCompleted != null)
            RunWorkerCompleted(this, e);
    }
}
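
Usage is meant to mirror BackgroundWorker. A minimal sketch from inside a form's event handler (DoImageProcessing, ShowResult and input are stand-ins for your own code):

// "this" is the form/control whose thread the callbacks are marshalled back onto.
StaBackgroundWorker worker = new StaBackgroundWorker(this);

worker.DoWork += delegate(object sender, DoWorkEventArgs e)
{
    // Runs on the STA thread, so the ActiveX control can be used safely here.
    e.Result = DoImageProcessing(e.Argument);
};

worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
{
    // Back on the UI thread courtesy of Control.Invoke.
    if (e.Error != null)
        MessageBox.Show(e.Error.Message);
    else
        ShowResult(e.Result);
};

worker.RunWorkerAsync(input);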



Hackers, Sculptors, Instincts and Design

Although there are many things that a Hacker does well, design is not one of them.

The key question is, why not?

Partly it's because design doesn't lend itself to the Hacker style of working. For one, design is explicitly creational. The Hacker, on the other hand, enjoys probing an existing system for faults. The goal of design is to create systems that produce the desired output without those faults. Since the job is creating the system there's no system to probe (yet), so the Hacker's favorite activity has nothing on which to operate.

For another, design often involves choosing what to avoid. Put another way, sometimes good design is about choosing what not to do. Is it worth optimizing a given process or is it better to go with the naive implementation? The Hacker's instincts run so strongly counter to choosing not to do something that he will often not recognize that such a choice is at hand (let alone make that choice). The consequences of his "hack" live on in infamy until the hack is undone.

During the design process you inevitably have to make choices. Do you optimize for space or speed? Do you design for caller simplicity or callee simplicity?

When the Hacker encounters such a decision his instincts work against making the design tradeoff in exchange for a more coherent system. The Hacker specializes in discovering system faults and working around them no matter how much the workaround is at odds with the manifest intent in the design of the system. The choice, to the Hacker, appears to be a problem in need of fixing and he's got tried and true methods for fixing problems.

The Sculptor is less likely to see the choice as a problem requiring a workaround. To him it's a chance to make the right tradeoff for the particular problem the system is intended to solve. To the Sculptor it goes without saying that all systems have limitations. He's more interested in having the system do whatever it was intended to do well than in having it do anything that can be done.

For the Sculptor the discovery of a system fault is somewhat less pleasant than it is for the Hacker. The fault might mean somewhere along the line he made a bad design choice, and many design choices are very hard (or impossible) to undo once a system is in use. The discovery of a bad design choice is also unpleasant to the Sculptor because he enjoys making good design choices.

Sometimes it isn't obvious which tradeoff is the correct one. This is often a clue that the Sculptor doesn't understand some aspect of the problem domain. Ideally he'll be able to get a better understanding, whether by talking with the client, referring to existing documentation, Wikipedia, reference books, etc. Sometimes this isn't possible - the client may not be available, there may be no existing documentation, nothing else to go on. The Sculptor has to make a call. However, because the Sculptor has gotten into the habit of getting to know more about the underlying problem domain, there's a good chance that he's run across something in the problem domain that points to the right decision.

Sometimes the Sculptor has a pretty good idea of the relative costs of the choice but doesn't know how significant either decision is to the client. A choice that's many times more expensive can often be discarded because it turns out that the client is more than happy with either result. The longer the Sculptor has been sculpting the better he gets at recognizing these "decision moments".

For all these reasons, the Hacker's instincts work against good design. Where good design is found there isn't as much need for hacks. Changes to existing systems become less about probing, trial and error or making the system work in ways it wasn't intended, and more about finding the right pieces in the existing system to assemble or extend just-so to add the needed functionality.

As a code base increases in size good design can mean the difference between shipping a new, nearly bug free feature in 6 weeks instead of 6 months. It can mean the difference between a new employee becoming productive in a few weeks instead of a few months. The systems tend to survive and thrive long after the designer is no longer working on them.

Systems designed by Hackers, on the other hand, are plagued by bugs. Some easily repeatable, some seemingly non-deterministic. They're fixed quickly but seem to crop up again a few months later. The Hacker isn't (usually) intentionally creating a buggy system so that he has something to do; it's just that he isn't as good at design. He doesn't enjoy it nearly as much as he enjoys hacking and so puts commensurately less effort into design.

Systems designed by Hackers tend to accumulate hacks at several levels. At the level of the user interface there may be major discrepancies both internally and with respect to other, similar products. This is the manifestation of the Hacker's instinct to hack the system. He may have come across a user interface need that wasn't directly met by the available user interface widgets. So he baked something entirely from scratch. Ironically, the desire to hack the system often acts as a disincentive to taking advantage of the built-in functionality in a given system; why bother to figure out what's available when all you need is enough to start and you'll hack out the rest?

At the level of coding constructs, systems designed by Hackers will often have a lot of copy/paste re-use. Somewhat more experienced Hackers may make use of libraries, but their systems will often have strange quirks. Maybe they throw lots of exceptions during execution. There will be lots of exception handling, even in places where it doesn't make sense. The Hacker may think of this as "defensive programming" but in reality it's another manifestation of the Hacker instinct: try a bunch of things and hope something works. Oddly enough, the Hacker is often proud that he doesn't really understand exactly why his fix works, as long as it appears to work (most of the time).