Upgrading the Samsung Captivate to Android 2.2 (Froyo)

  1. Root the phone. This provides superuser/administrator access to the OS. Refer to this xda post for details.
  2. Install ClockworkMod’s ROM Manager. This can be downloaded from the Android Marketplace. It’s been so useful that I went ahead and bought it.
  3. Enable non-market apps on the phone (needed for Titanium Backup, used in the next step). Refer to this xda post for details.
  4. Install Titanium Backup. This program provides a way to restore your apps once you’ve upgraded to the new OS. I used the free version but it’s so useful that I’ll probably buy it.
  5. In ROM Manager, flash the most recent version of the recovery firmware (it’s the first option in the list of commands) to the internal SD card.
  6. Make a Nandroid backup. To do this start ROM Manager then choose “Recovery Mode”. This reboots the phone into recovery mode: a text interface with menus for executing commands. The volume up, volume down and power buttons navigate up, navigate down and select (respectively). I put the backup on the internal sd card (/sd on a stock Captivate) but it could just as well have gone on the external sd card (/sd/sd on a stock Captivate). Once the backup completes (about 20 mins on my device) and the phone reboots, connect it to your PC and copy off the update.zip file created as part of the backup. Rename this file something meaningful like “stock_captivate_nandroid_backup.zip” in case you need to revert the OS back to the one that ships with the phone.
  7. Make a Titanium Backup backup. Include system and apps. The backup can be stored on the internal or external sd card. Make a copy of the file on your PC for safekeeping.
  8. Download a ROM. I used designgears’ Cognition v2.3b8 since it says that it’s “intended for the USA Captivate”. The ROM will be a single zip file. Copy the ROM onto the internal SD card of the phone.
  9. Start ROM Manager. Choose “Install ROM from SD Card” then browse to the Cognition ROM copied from your PC onto the internal SD card.
  10. Sit back, relax, brew some tea and take a walk. It’ll be a while (took about half an hour for me). The phone will boot into recovery mode (I think it’s recovery mode, though it might be download mode; either way the whole process occurs in the text-based boot menu).

Once it’s all finished, set up your Gmail account first since other programs use this login information. Titanium Backup can be used to restore individual apps (change the filter to “uninstalled” to limit the display to uninstalled apps backed up by Titanium Backup). I’m not sure this is strictly necessary for purchased apps because the Android Marketplace seems to remember the apps I’ve bought.

Cognition includes AdFree (it can’t be uninstalled directly), which seemed to cause Pandora to stop streaming after a while. So I downloaded AdFree from the Android Marketplace, used its “revert” host files option, then uninstalled it.

Enjoy Froyo! Cognition’s ROM takes a little getting used to (e.g., the Applications screen scrolls top to bottom while the home screen scrolls left to right) but seems quite responsive and solid (no barrage of “force close” dialogs). Kudos to designgears of xda for getting us Captivate users access to flash!

Android and the Samsung Captivate

Things I like about this phone and its OS

  1. The screen is absolutely gorgeous. On those rare occasions where power isn’t a concern this screen is so beautiful that I sometimes crank up the brightness just to look at it in all its glory.
  2. The phone is incredibly slim with a stylish design. Even with a rubber protector/glove it easily fits into my pants pocket without looking like I’m smuggling bricks.
  3. The Android marketplace has really taken off. There are a lot of apps. My favorites so far are Google Maps/Nav, Google Listen, Pandora, Last.fm, SlingPlayer and TouchDown.
  4. Contacts. Firstly, linked contacts are awesome. No more having 5 entries for the same person from my Yahoo address book, Google address book, Facebook, Twitter and SIM contacts. The native Twitter and Facebook clients pull data from the sites and automatically associate it with the corresponding contact. Google smartly included the ability to manually link contacts, and even speeds up manual linking by displaying a list of close matches.
  5. The browser is excellent. I really like how the browser automatically lays out most web pages so that text is displayed in a single column that doesn’t spill over and require horizontal scrolling. It doesn’t work on every website but works on most.
  6. Access to the shell. This comes in handy when debugging network connectivity issues since it’s a full-featured Linux shell. It’s also a quick way to reboot the phone without having to power down. I like having access to the underlying OS without having to jailbreak the phone (a la the iPhone).

Things I don’t like about this phone and its OS

  1. It’s got perhaps the buggiest GPS I’ve ever seen. The current location jumps about like a firecracker in a soda can. It’ll blithely move my car from a bridge across Lake Washington to a few dozen feet out on the water – who knew I was driving a submersible?
  2. Poor integration with the Windows Live suite of services. There’s a Bing client and I think there are ways to integrate with messenger but that’s about it. This may not be a problem for most people but if you’ve got a lot of stuff (pictures mainly) in the Windows Live infrastructure then it can be a pain.
  3. No native sync. So when I buy music on my laptop I have to manually move it onto the phone and vice versa. The phone is so handy that most of my music purchases are on the phone. I’ll have to check out DropBox, or a similar service, to address this.
  4. Android’s music client is blissfully unaware of classical music. I don’t listen to a lot of classical music but while cruising Amazon’s MP3 store there was a special on “99 essential pieces of classical music” that I couldn’t resist. Unfortunately it shows up in the music client as 99 separate albums!
  5. The native exchange integration, at least as of Android 2.1, is minimal. No global address search. No sender photos. No notes. Fortunately TouchDown shores this up with a stellar exchange client.

Disabling the Blue LEDs on the Antec 900 PC case

The Antec 900 is a great case, especially if you're running 2 or more video cards.  It supports up to 4 intake fans (two 120mm in front, one optional 120mm on the side and one optional 120mm in the middle) and 2 exhaust fans (one 200mm monster/"Big Boy" fan on top and one 120mm rear fan).

Despite the number of fans the case runs relatively quiet for the amount of cooling it provides.  These fans are larger than the standard 80mm so they can provide much more airflow even though they're running at lower RPMs.

The only thing that I don't like about this case is that the 2 front fans come with extremely bright blue LEDs without an off switch. Not a great configuration if your PC is in your bedroom facing your bed. Normally I'd just put a book or piece of plastic in front of the LEDs but since these LEDs are attached to the intake fans that's not an option.

Fortunately these LEDs can be disabled pretty easily. Cutting one of the wires connected to each of the 3 LEDs, as pictured below, makes this otherwise excellent case suitable for bedroom use.

First Week at Microsoft and Escaping Virtualization

After driving across the country I'm finally starting to settle into my new job at Microsoft.  It's only been a week though so I'm still in the honeymoon phase.  Good god they use technology here, it's awesome!

A few firsts:
  • First time ever having the option of using software as a work/desk phone.  Office Live Communicator truly rocks!
  • First time ever logging onto a domain from a copier/scanner/fax machine.
  • First time successfully using PXE boot without suspecting the network was secretly running over a dial-up modem. It was, and is, blindingly fast (usually).

Escaping Virtualization

Since testing involves installing lots of builds I've gotten to work with several virtualization programs.  They all seem to default to a different command sequence to 'escape' back to the host OS:
  • Virtual PC uses right ALT. Not really my favorite as it takes my hand off the mouse.
  • VirtualBox uses right CTRL.  Ditto.
  • Hyper-V uses the truly draconian CTRL+ALT+LEFT ARROW. Yikes!

Refactoring with References and Pointers Tip

Inherited source code is a fact of life for any professional programmer.  Sometimes you inherit a 5000+ line C++ method chock full of pointers and pointers to pointers.

If you’re utilizing the copy-and-paste algorithm for refactoring large blocks of pointer and pointer-to-pointer riddled code then C++’s Pass by Reference support can make these changes much less painful.  Thankfully it supports References to Pointers!

So instead of taking a block similar to the following:

int *ptr = NULL;

if ( someCond )
    ptr = new int[len]; // we either allocate it here or leave it null

for (int i = 0; i < len; i++)
{
    if ( ptr == NULL )
    {
        ptr = new int[len];
    }
    else
    {
        delete [] ptr;
        ptr = new int[len + i];
    }

    int x = ptr[i] * ptr[i + 1];
    // ... several hundred lines of code
}

and converting the loop into something unreadable, error-prone and barely recognizable as the former loop body:

void SomeFunc(int** ptr, int len)
{
    for (int i = 0; i < len; i++)
    {
        if ( (*ptr) == NULL )
            (*ptr) = new int[len];
        else
        {
            delete [] (*ptr);
            (*ptr) = new int[len + i];
        }

        int x = (*ptr)[i] * (*ptr)[i + 1];
        // ... several hundred lines of code
    }
}

You can convert the loop into something easily recognizable as the former loop, like so:

void SomeFunc(int*& ptr, int len)
{
    for (int i = 0; i < len; i++)
    {
        if ( ptr == NULL )
        {
            ptr = new int[len];
        }
        else
        {
            delete [] ptr;
            ptr = new int[len + i];
        }

        int x = ptr[i] * ptr[i + 1];
    }
}

Intervals, Interval Boundaries and Pixels

Anyone working as a computer programmer has at some point had to precisely distinguish between intervals and interval boundaries.

Intervals are delimited by interval boundaries.  For instance, 1-2 denotes a single interval.  In this case the interval is delimited by 2 interval boundaries (1 and 2).

There is always 1 more interval boundary than there are intervals, since adjacent intervals share their boundaries.  For n contiguous intervals there are n + 1 boundaries.

A similar relationship can be seen in languages that index arrays/collections from 0.  The total number of elements (or count) in an array is 1 greater than the largest index.

Having recently worked on drawing lines on 2D surfaces with DirectX I’m beginning to think of pixels as intervals and vertices as interval boundaries.
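
A quick worked example: a scanline 4 pixels wide consists of the 4 pixel intervals 0-1, 1-2, 2-3 and 3-4, delimited by the 5 vertex boundaries 0, 1, 2, 3 and 4.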

RECT in GDI vs DirectX

In GDI the right and bottom edges of a RECT are adjacent to, but not included in, the region being specified by the RECT.  So the rightmost pixel in the region will be at RECT.right – 1.

DirectX also uses RECTs to designate rectangular regions.  Does DirectX follow the same convention as GDI with respect to the exclusivity of the right and bottom edges?  From some preliminary testing I’d say no – RECTs are inclusive along all edges in DirectX.  I’ll need to do more testing to be sure of this.
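
To make the GDI convention concrete, here’s a minimal sketch (the {0, 0, 100, 100} rectangle is just a made-up example):

#include <windows.h>

// GDI's convention: right and bottom are exclusive.  A RECT of
// {0, 0, 100, 100} covers pixels x = 0..99 and y = 0..99.
LONG WidthOf(const RECT& rc)
{
    return rc.right - rc.left;   // 100: no off-by-one correction needed
}

LONG RightmostPixel(const RECT& rc)
{
    return rc.right - 1;         // 99: the last pixel actually in the region
}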

Accessing the Internet from a VPN connection

So you’ve got VPN working and people are able to access LAN resources remotely via VPN.

But they’re not able to access the Internet via VPN.  This usually isn’t a problem since users can always reach the internet through their local connection.  After all, that’s how they’re getting to the VPN.

There are situations where they’d rather access the internet via the VPN connection.  For instance, maybe their internet access allows VPN connections but blocks access to their favorite news site.  Or their favorite search engine.

If you’re providing VPN access using a SOHO firewall/router combo device then there is a good chance that the device will not support providing internet access to its VPN clients.  Emphasis on “will not” since this is an optional restriction, mainly aimed at getting you to buy a router, rather than a technical limitation.  It’s usually worded along the lines of “this device won’t transmit packets received on an interface back out that interface”.

Fair enough, it *is* a routing function and businesses have every right to differentiate their products as they see fit.

If you can’t, or don’t want to, purchase a router then you can use a proxy server to provide internet access for remote VPN clients.  Proxy servers are cheap and setting them up is easy.

The major downside to this approach is that clients have to configure their applications to use the proxy server.  Many networked applications have support for this (e.g., browsers) but the configuration is slightly different for each application.  And users have to remember to turn it on and off as their connection changes.  But in a pinch this will do the trick.

Is this image lined up?

A problem frequently encountered in image processing is that of determining whether an image is oriented properly.  Sometimes this question is so difficult to answer that computer people, like their math people cousins, solve it by redefining the problem and solving the redefinition.

In this case, instead of answering the question “Is this image properly oriented?” we answer the question “Is this image aligned to some other image?”  We’ll assume that “some other image” *is* oriented properly.  So if we can line up our image with the canonical image then we’re good to go.

There are a few techniques that can be brought to bear.  One is a frequency-based technique that exploits the fact that the integral of the product of 2 functions is maximal when they’re perfectly aligned.  This technique, cross-correlation (a close cousin of convolution), is excellently visualized in this Wikipedia entry.

When it comes to images each image can be considered a 2 dimensional function (f(x, y) = z where x and y are the coordinates of any given pixel) defined over some finite interval.  One can visualize sliding a 2D function over another 2D function by imagining a multicolored blanket sliding over another blanket.  It can be moved in either or both of 2 directions: x and y.

The finite interval part is important.  In the case of an image the finite intervals are the dimensions of the image.  We assume the function is 0 everywhere else so that the product outside those dimensions is 0.

Since these are 2D functions, unlike the 1D functions depicted in the Wikipedia article, their product produces a surface.  The volume under this surface will be maximal when they are perfectly aligned (assuming the images are identical).

A coworker has just explained a different, spatial technique for determining image alignment (registration in the jargon of the trade).  In the spatial domain, if 2 images are identical and perfectly lined up then subtracting one from the other at each pixel location yields 0 at every location.  To keep positive and negative differences from canceling, the difference at each point is squared.  The sum of these squared differences will be zero when the images are perfectly aligned.

If the images are identical but not perfectly aligned you can figure out how to align them by sliding one around the other and examining the sum-of-squared differences (SSD).  This too can be plotted as a surface, the minimum of which represents the amount one image needs to be shifted in x and y to line up perfectly with the other.

There’s a lot assumption-wise that I’ve left out of the discussion.  The technique assumes that the image has regular features.  And that there are strong/sharp features (in frequency space these are represented by high frequencies) that will tend to dominate the SSD such that when the images are out of alignment the SSD will be large vs near zero when in alignment.  If the image were uniform noise then the SSD is likely to bounce around with no clear minimum as there’s no sharp edge content to anchor the sum.
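
To make the spatial technique concrete, here’s a minimal brute-force sketch (the function names and the row-major 8-bit grayscale layout are my own assumptions; error handling is elided):

#include <vector>
#include <cstdint>
#include <limits>

// f and g are same-size grayscale images stored row-major.
// Returns the sum of squared differences with g shifted by (dx, dy);
// pixels shifted outside the overlap are ignored.
double SsdAtOffset(const std::vector<uint8_t>& f, const std::vector<uint8_t>& g,
                   int w, int h, int dx, int dy)
{
    double sum = 0.0;
    for (int y = 0; y < h; y++)
    {
        int gy = y + dy;
        if (gy < 0 || gy >= h) continue;
        for (int x = 0; x < w; x++)
        {
            int gx = x + dx;
            if (gx < 0 || gx >= w) continue;
            double d = double(f[y * w + x]) - double(g[gy * w + gx]);
            sum += d * d;
        }
    }
    return sum;
}

// Brute-force search: the (dx, dy) minimizing the SSD is the shift that
// best lines g up with f.
void FindBestShift(const std::vector<uint8_t>& f, const std::vector<uint8_t>& g,
                   int w, int h, int maxShift, int& bestDx, int& bestDy)
{
    double best = std::numeric_limits<double>::max();
    for (int dy = -maxShift; dy <= maxShift; dy++)
        for (int dx = -maxShift; dx <= maxShift; dx++)
        {
            double ssd = SsdAtOffset(f, g, w, h, dx, dy);
            if (ssd < best) { best = ssd; bestDx = dx; bestDy = dy; }
        }
}

A real implementation would also normalize by the size of the overlap so that large shifts, which have fewer overlapping pixels, aren’t unfairly favored.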

Domains and Impersonation

What happens when your Windows Service tries to impersonate a local user while joined to a Domain? 

Does “.” still represent the local machine or does it represent the default domain?

To find out the answers to these questions I fired up Virtual PC 2007 and installed Windows Server 2003 R2.  Normally I’d have gone with Server 2008 but I suspect that the target environment is running 2003.

First read this Wikipedia article on Windows Domains, then follow this excellent tutorial for setting up Active Directory.  Why Active Directory (AD)?  AD is basically the primary database for Windows Domains, even though it’s technically a directory service rather than a traditional RDBMS.

So, to answer the original questions, Impersonation works just fine whether or not the computer is joined to a domain.  And “.” still means local machine.  Yippeee!
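
For reference, here’s a minimal native sketch of the experiment (the helper name is mine and error handling is elided; link against advapi32.lib):

#include <windows.h>

// Log on a *local* account from a domain-joined machine.  Passing "."
// as the domain tells LogonUser to use the local account database
// instead of the default domain.
bool ImpersonateLocalUser(const wchar_t* user, const wchar_t* password)
{
    HANDLE hToken = NULL;
    if (!LogonUserW(user,
                    L".",                       // "." == local machine
                    password,
                    LOGON32_LOGON_INTERACTIVE,
                    LOGON32_PROVIDER_DEFAULT,
                    &hToken))
        return false;

    BOOL ok = ImpersonateLoggedOnUser(hToken);  // thread now runs as 'user'
    CloseHandle(hToken);
    return ok != FALSE;                         // call RevertToSelf() when done
}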

Generalist’s Delight: Impersonation, UNC, NFS and Virtual Machines

Recently got a chance to exercise some of the technical muscles us generalists love to preen.

The basic problem: A windows service that writes files to a local directory needs to be able to write files to a directory on a Unix system.  Quickly (as in we have a day or two at most to get this to work).

Caveats

  1. The Windows service runs as LocalSystem, which cannot access resources over the network.  It has to (as far as I know) run as LocalSystem because it needs interactive desktop access.
  2. FTP and Samba are not available due to local policy.

But we can’t even SEE Unix directories

Since the remote host is running some flavor of Unix I expected we’d have to use NFS.  Our Windows builds don’t include NFS support. 

Fortunately Microsoft gives away a Windows add-on, Windows Services for Unix (SFU), that allows Windows to access NFS exports (after a little setup, that is).  In a non-NIS environment authentication is handled locally.  So the client system (the one mounting the NFS exported directory) needs its own copy of the user name, user id, password and group id of the account it will use to authenticate access to the Unix system.

In the Unix world that would have been the end of the story.  You’d pass the info on the command-line when you access the remote directory.  Fortunately there’s a wonderful GUI in SFU (PCNFSD) that lets you map Windows accounts to locally defined Unix user IDs.  Once mapped, NFS mounts can be accessed in the Windows-familiar UNC format (\\server\export) or NFS format (server:/export).

Impersonation

Now that our Windows machine can “see” the remote directory we’ve got to modify our service so that it can write to it.  From my web programming days I remembered that a process can impersonate another user.  In a web context this is usually done when the app server process (worker process) needs to do something on behalf of the currently connected user; something the account under which the worker process executes does not have the privs to do.

Since this is a .Net service, platform invoke is necessary to access the LogonUser() and related Win32 APIs.  Oddly enough System.Security.Principal has a class that wraps the impersonation API call but does not wrap the functionality necessary to acquire the security token required by impersonation.

But we don’t speak Unix here

We don’t have any systems running Unix but fortunately, as described in this previous post, I’ve still got a Virtual PC vhd of RedHat Fedora 11.  This will do for testing purposes.  It’s painfully slow, since I can’t seem to get Fedora to boot in Virtual PC 2007 (SP?) with hardware virtualization support enabled (heck, I was surprised that my laptop even supports hardware virtualization).  I’m sure a VMware appliance would run faster but I didn’t have one on hand and time was short.

After booting into Fedora I’m pleasantly surprised by all of the things they’ve copied from Windows.  I expected to have to play around with /etc/fstab then bounce nfsd manually but all of this configuration can be done via UI these days.  After creating a test user, exporting a directory in the test user’s home directory and noting the test user’s user ID and group ID we’re off to plug these into our PCNFSD account mapper and start testing.

But We Can’t Read Maps

Managed/.Net processes can apparently only see drives mapped by the user under which the process is running (maybe this limitation isn’t specific to managed processes?).  So using a persistent mapped drive, which SFU doesn’t support anyway, isn’t an option.  Fortunately UNC syntax works (albeit slowly on the very first access).

Tying it all together

An ls -al provides that feeling of satisfaction as the recently created file shows up in the listing.  In pretty colors no less.  Woohoo!

D3DX – easing the path to 2D Direct3D9

Here I’m thinking I have to create my own vertex data structure and set (or mask) the FVF (flexible vertex format) flags but it turns out that D3DX, that most wonderful of utility libraries on top of Direct3D, has predefined several useful structs.

In my case, I’m only interested in drawing 2D shapes so D3DXVECTOR2 fits the bill.  Always nice to find a data structure that fits with the philosophy of using the least complicated data structure that’ll get the job done.  It exposes x and y which is mostly all that I need.

2D drawing with D3DX is not unlike drawing with GDI primitives.  In GDI you select a pen into a device context, set properties of the pen then draw onto the device context with various primitives possibly changing properties as you go to achieve a different result (e.g., color, line width, etc..).

The D3DX analogue is a line (ID3DXLine).  This is created directly on a Direct3D device (which abstracts the underlying hardware in a way similar to the way a Device Context abstracts the underlying hardware in GDI).  Drawing is accomplished by setting various line properties then passing in an array of vertices representing the desired line segments.
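
A minimal sketch of that flow (assuming an already-initialized IDirect3DDevice9; error handling elided):

#include <d3d9.h>
#include <d3dx9.h>

void DrawPolyline(IDirect3DDevice9* pDevice)
{
    ID3DXLine* pLine = NULL;
    if (FAILED(D3DXCreateLine(pDevice, &pLine)))
        return;

    // Pen-like properties, set before drawing.
    pLine->SetWidth(2.0f);
    pLine->SetAntialias(TRUE);

    // 2D screen-space vertices; consecutive points form the segments.
    D3DXVECTOR2 pts[] = { D3DXVECTOR2(10.0f, 10.0f),
                          D3DXVECTOR2(100.0f, 50.0f),
                          D3DXVECTOR2(150.0f, 200.0f) };

    pLine->Begin();
    pLine->Draw(pts, 3, D3DCOLOR_XRGB(255, 0, 0));  // a red polyline
    pLine->End();

    pLine->Release();
}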

ActiveX control method name changing case out-of-the-blue?

I’m humming along working on an application when suddenly Visual Studio reports that it can’t find a certain property.

The property name, let’s call it SomeProperty, is a property of an ActiveX control that gets referenced from a Windows Forms 2.0 application.

So I do a clean or three assuming that an old AxInterop dll is lying around somewhere.  After the 2nd clean I blow away a few more library directories to guarantee that the control is both being created and imported from scratch.

Still no luck.  So I clean out my temp directory since Visual Studio uses the temp directory for some intermediate files and who knows, maybe a file somehow got locked or had its permissions changed.

Same error.  I pull up OleView to verify that SomeProperty really has become someProperty.  Sure enough, it shows up as someProperty.

It turns out that this is a known issue with the IDL compiler.  It uses the case of the first instance of an identifier that occurs in the IDL file for any subsequent occurrences of that identifier even when the identifier is later used in a totally different context!

In this case, I recently added a struct with a member name that happens to collide with a property of an interface defined later in the file.  Who knew that all identifiers needed to be globally unique? 

I’ll have to give the mapping thing mentioned in the KB article a try but for now I’ll leave a comment in the IDL and rename the struct member.

Designing for the future

One of the surest ways to create designs that aren’t easily reused is to try to anticipate the domain-specific data structures that will be needed and create them in advance.  Like so much design advice this requires some caution to apply.

The domain-specific part is crucial.  General purpose data structures are very reusable.  One general purpose data structure is a list.  A domain-specific data structure is a list of Employees.  How do you know when a strongly typed list of Employees is going to be reusable?  One clue is that you yourself find that you need it.  Another clue is that you keep running across code where devs create collections (or arrays) of Employees and manipulate them in some way.  If you don’t find yourself in need of a data structure and you can’t find evidence of other devs needing or using it then it is probably too early to create it.

Since most of us are terrible at predicting the future the odds of getting something like a domain-specific data structure right are low.  Since we’re more likely than not to get it wrong our prematurely created data structure might get in the way of discovering the right data structure.

I find myself most able to re-use components that are highly cohesive and loosely coupled.  High cohesion means that a given method does only a single thing at the level of abstraction appropriate to its context.  Loose coupling reduces the number of dependencies between a given method or object and other methods or objects.

In practice, loose coupling is mainly about choosing what to parameterize.  For any non-trivial method it’s usually too expensive to parameterize everything.  If a method can usefully perform its highly cohesive functionality with only its input parameters then that method is loosely coupled.  These are great candidates for public entry points because they don’t make many assumptions about surrounding conditions.  Not every method needs to be extremely loosely coupled but those that are tend to be more easily reused than those that aren’t.  Of course this has to be weighed against readability – a method that takes 500 parameters might be extremely reusable but it won’t be reused much because it’s too much work to use.

For example, a method that sorts a list of strings tends to be highly reusable when working with lists of strings.  A method that takes a list of strings, tokenizes them, executes them in a shell, collects the output and summarizes the execution results will tend to be less reusable except in a very specific context.

The second method is less reusable because it does so many things.  I’m less likely to be able to compose functionality from that method because it does things I don’t want it to do.  The fewer things a function does the less likely it is to do something I don’t want and, consequently, the more likely it is to be reused.

Waking up your PC over the Internet

To save power, particularly during the hot summer months, I’ve wanted to be able to wake up home PCs remotely.  I tend to use my home PCs to troubleshoot network connectivity issues while at work but other than that there’s really no need for the PCs to be on during the day.

Wake-on-LAN (WOL) has been around since the 90s and provides a way to wake up a PC remotely.

What does it take to make it work?

  1. WOL must be supported and enabled in the BIOS.
  2. WOL must be supported and enabled on the Network Interface Card (Device Manager –> Network Adapter properties –> Advanced).
  3. Power Management must be enabled for the NIC (Device Manager –> Network Adapter properties –> Power Management though this may vary from driver to driver).
  4. A program that will send the magic packet (see the sketch after this list).  The Wikipedia entry for Wake-on-LAN links to several programs that will do this.
  5. Forwarding a UDP port through the firewall to the LAN broadcast address.  In my case I forwarded UDP port 9 to 192.168.0.255.
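
The packet itself is simple: 6 bytes of 0xFF followed by the target’s MAC address repeated 16 times.  A minimal Winsock sketch (the broadcast address and port mirror my setup above; most error handling is elided):

#include <winsock2.h>
#include <cstdint>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

bool SendMagicPacket(const uint8_t mac[6])
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return false;

    // 6 bytes of 0xFF, then the MAC repeated 16 times.
    uint8_t packet[102];
    memset(packet, 0xFF, 6);
    for (int i = 0; i < 16; i++)
        memcpy(packet + 6 + i * 6, mac, 6);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    BOOL broadcast = TRUE;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST,
               (const char*)&broadcast, sizeof(broadcast));

    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9);                          // forwarded UDP port 9
    addr.sin_addr.s_addr = inet_addr("192.168.0.255"); // LAN broadcast address

    int sent = sendto(s, (const char*)packet, sizeof(packet), 0,
                      (const sockaddr*)&addr, sizeof(addr));

    closesocket(s);
    WSACleanup();
    return sent == sizeof(packet);
}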

On Windows there are other ways to remotely wake up a PC.  The “Network Device Class Power Management Reference Specification”, a Microsoft standard described here, defines 2 additional methods for devices that support it: a network wake-up frame and detection of a change in the network link state.  The NIC has to support pattern-match based WOL to take advantage of the Network Wake-Up Frame method.

Both the magic packet and the Network Wake-Up Frame can be delivered in 3 ways: NetBIOS broadcast, ARP resolution and unicast.

Since we intend to do this remotely we can rule out NetBIOS broadcast and ARP because the PCs are behind a firewall whose protection we want.  That leaves Unicast.  Windows registers the MAC address with the NIC driver as a pattern.  An IP packet with the MAC address and IP of the NIC will therefore serve as a Network Wake-Up Frame and bring the PC out of Sleep.

Since the PCs are behind a firewall 1 or more ports will need to be forwarded.  On my D-Link DIR-855 router forwarding port 9 to the broadcast address for the internal network (192.168.0.255 in my case) does the trick!

Debugging with Immediate Mode

Sometimes you want a quick way to start stepping through some code that you suspect has an error.

Depending on where the code lives simply getting to it can be a pain.  A function in a service that depends on a bunch of other libraries can be non-trivial to execute in an isolated context.  The 20mins or so of setup, over time, starts to add up.

In a web or client-server context getting to the code of interest may be even more convoluted.  It may require a client to step through several pages before triggering the request that results in execution of the code.  Put that code inside a web service and oy, we’re talking 20-30mins just to start stepping through a function.

Visual Studio’s Immediate mode can execute functions in the context of the current project.  Open the Immediate window (CTRL+D,W) then enter ?int temp=SomeStaticFunc(3);

which results in the value of temp being printed out.  If the function isn’t static you’ll need some other function that creates the desired instance (let’s call it p).  Then in the Immediate window ?int temp=p.SomeInstanceMethod(3); does the trick.

Of course there are other ways to accomplish the same result but Immediate mode is another tool in the toolset for quickly getting into a function of interest.

Reservations and Commitments

Most programs need memory to do their work.  Since programs don’t typically know where in memory they’re going to be loaded, they work with a virtual address space.

It’s virtual in the sense that it doesn’t correspond to any particular hardware implementation of memory.  Neither address lines nor flip-flops, the stuff of RAM that you buy in a store, spring into existence when your program refers to a location in memory.

Under the hood the operating system keeps track of where a virtual address is stored in physical memory.  On Windows the data structures that store this mapping are the page tables.  Virtual and physical memory are divided up into equal sized blocks called pages (on x86 these pages are 4KB long).

As programs start and stop, read and write and go about their business these pages are recycled (how conscientious of them!).

At any given moment a Virtual Address Space Page can be Free, Reserved or Committed.  The importance of distinguishing these states often appears when dealing with Big Data (e.g. high resolution imagery, video, audio, large volumes of text).

In many ways virtual pages are like tables at a restaurant.  Before a restaurant opens the tables may well be in a closet somewhere.  They’re free to be used but customers can’t use them just yet.  Trying to use these tables, folded and unprepared as they are, is likely to be an unpleasant experience liable to draw the ire of the restaurant owner.

Even before the tables are set and arranged a restaurant can take reservations for them.  Once reserved, a table can’t be reserved for someone else (unless the reservation is cancelled).  On the other hand, even though it’s reserved it’s not yet in use and might well still be folded away in a closet (since the restaurant hasn’t opened yet).

The committed state is analogous to a table that’s been placed and prepared.  It now takes up space in the building.  The customer can use the table for whatever purposes are allowed by the restaurant.
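
The mapping onto Windows APIs is direct; a minimal sketch with VirtualAlloc (error checks elided):

#include <windows.h>

int main()
{
    const SIZE_T size = 1 << 20;  // 1MB

    // Reserve: stake out address space without backing it with storage.
    // The table has a reservation but is still folded up in the closet;
    // touching this memory now would fault.
    void* base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);

    // Commit: back the first 4KB page with storage.  The table is now
    // set and placed, so the customer can use it.
    void* page = VirtualAlloc(base, 4096, MEM_COMMIT, PAGE_READWRITE);
    *(int*)page = 42;  // safe: this page is committed

    // Release: everything goes back to Free.
    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}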

Properties in C++?

In the never-ending quest to reduce the duplication of data I’ve been consolidating some data structures into either native structures or managed classes (mainly depending on where they’re modified the most).

While doing this I’ve run across a bit of syntax that I hadn’t seen before in native C++:

pSomeSmartPointer->SomeProperty;

I’m used to seeing properties in managed code and I suspect that the various managed C++ implementations (C++/CLI, managed extensions for C++) have something similar but this lovely expression compiled without complaint in plain old native C++.

What gives?  Native C++ doesn’t have properties, does it?

A little digging yielded the following:

  1. Microsoft C++ does in fact have properties.
  2. They’re defined via the __declspec keyword.
  3. The compiler translates them into the corresponding getXXX() or setXXX() methods.
  4. The #import directive takes advantage of this when importing type libraries exported from managed assemblies.  This translation from IDL to C++ gets stored in .tlh and .tli files.

See this link for an example of using properties in native C++.
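
And for the impatient, a minimal sketch of the syntax (Microsoft-specific, not standard C++):

// Reads and writes of Value compile into calls to GetValue()/SetValue().
struct Counter
{
    Counter() : m_value(0) {}

    __declspec(property(get = GetValue, put = SetValue)) int Value;

    int GetValue() const { return m_value; }
    void SetValue(int v) { m_value = v; }

private:
    int m_value;
};

int main()
{
    Counter c;
    c.Value = 5;         // the compiler rewrites this as c.SetValue(5)
    return c.Value + 1;  // ...and this as c.GetValue() + 1
}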

How the Hacker puts your system into debt

In the past I’ve posted on some of the differences between the hacker school of coding and the non-hacker school of coding.  I recently encountered another anecdote along the lines of this thread.

The Hacker needs to provide users with a way to switch between multiple identical windows in a form.  It’d be nice to minimize use of vertical space since monitors tend to have less of it available and this application is image oriented.

Sounds like a job for a ToolBar/ToolStrip right?  It’s got all sorts of built in functionality for adding and removing buttons, responding to user events, customizing the appearance, OS theme support, designer support, etc…  It’ll even support runtime repositioning in case there are users out there that prefer to dock it against a different edge of the form.

Given such an easy way to provide the required functionality (and then some) why doesn’t the Hacker use it?

  1. He doesn’t know that it exists.  One thing I’ve noticed is that coders from the Hacker school tend to have average to below-average recall.
  2. He thinks he can write a better ToolBar.  Part of being a Hacker is not having a good grasp of the big picture.  So his analysis of better tends to ignore things like designer support, accessibility, OS themes, System Preferences, System Events, Display Resolution, etc…
  3. It didn’t occur to him to use a built-in widget.  Hackers don’t think this way.  Oftentimes they were trained in an era where there weren’t very many built-in widgets.  Being Hackers they wouldn’t have used them even if they were available. 

Each of these in some respects arises from the natural desire and concomitant tendency to “subvert the system” that is a hallmark of most Hackers.

Why does this matter?

By writing his own toolbar the Hacker has just raised the technical debt both for himself and for any other team members that have to work with what he has built.

One effect of technical debt is malfunction.  Situations where the Hacker’s homemade toolbar is likely to break: multiple monitors, large DPI fonts, different versions of the Common Controls library, OS upgrades.  The homemade toolbar no longer functions as intended and/or no longer matches the native environment.  This debt costs time and money to track down and repair.  Often the Hacker is long gone or, if he’s still there, has completely forgotten the mechanics of his homemade toolbar.

Another effect of technical debt is interface mismatch.  Put another way, Hackers tend to produce software that doesn’t interface with other software well.  So not only does the Hacker make it harder for the team to do its job, he makes it harder for the company to take advantage of the work of others.

In any complex project there are going to be many instances where design decisions need to be made.  Some of these will be obvious, some less-so.  These will often provide a moment at which technical debt can be increased or avoided entirely.  Hackers, because of the reasons outlined in this and earlier posts, will almost always tend to choose the path that increases technical debt.

Do your system and your company a favor; protect it from Hackers.

Moving a project from Visual Studio 2008 to 2010

The project conversion wizard successfully converted the projects (with a few warnings).  The warnings were mainly about non-standard output locations (e.g., setting the targetpath to a common directory instead of using the default targetpath).

Caveats observed so far:

  • The C++ compiler apparently doesn’t like one of our multiline macros.  Since this is an old macro that’s no longer necessary, and I generally find macros troublesome, commenting it out addressed the issue.
  • The C++ compiler no longer supports targeting Windows versions earlier than Win 2000.  A few old source files explicitly defined _WIN32_WINNT at 0x400 (Win95/NT4!).  It’s probably a good thing that these had to be updated since we don’t target those platforms anymore.
  • SHGetSetFolderCustomSettings(), typically used for desktop.ini chicanery, is no longer supported.  It’s possible that this is showing up because we’ve corrected the _WIN32_WINNT preprocessor definition but since it was being used in a way that we no longer need (or want) this afforded an opportunity to do away with it.
  • The version of the Windows SDK that ships with Visual Studio 2010 doesn’t play nice with the DirectX August 2008 SDK.  So this had to be upgraded (to the February 2010 DirectX SDK).
  • The DirectX SDK no longer includes dxerr9.h/.lib, and the utility macros DXGetErrorMessage9() and DXGetErrorDescription9() dropped the 9 from their names.
  • The build system now detects when a project references a library that targets a more recent version of the framework.  Updating the target framework for the project fixes this.

As was the case with the migration from 200* to 2008 most of the difficulties arise in native/unmanaged code.

Moving Windows Forms with Subversion and TortoiseSVN

If you’re using a subversion client that doesn’t have Visual Studio integration (e.g., TortoiseSVN) then moving a Windows Form can be a pain.  Here’s one slightly-less-painful way:

  1. Right click the form in Visual Studio’s Solution Explorer and exclude it from the project.
  2. Open up Windows Explorer, browse to the directory containing the Form.cs, Form.Designer.cs and Form.Resx files.
  3. Turn on Folders in Windows Explorer.  This will cause the folder tree view to appear.
  4. Select all three files, then right-drag (yes, Windows Explorer supports dragging a selection with the right mouse button) the files to the destination folder.
  5. When you release the right mouse button choose “SVN Move”.
  6. Include the files in the Visual Studio project.  I usually do this by turning on “view all files” in Solution Explorer then using the context menu (right-click) to include the form.

That’s it.  Not too painful when you think about it but doing it this way keeps the linkage between the new location and the old location in subversion.

new() really is just an alloc and a function call

While reading Eric Lippert’s blog some time ago I came across his point that the constructor is just a function like any other function.

This was serendipitously reinforced for me while stepping into a constructor call.  Since Visual Studio ships the CRT source (and it’s been installed), stepping into a constructor call first takes you into new().

I’m pretty sure new’s OK so I immediately step out and find myself executing the constructor.

I guess Eric's right: an object constructor really is just another method.
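
A minimal sketch that spells the two steps out explicitly using placement new:

#include <new>      // placement new

struct Point
{
    int x, y;
    Point(int px, int py) : x(px), y(py) {}
};

int main()
{
    // What 'new Point(1, 2)' does under the hood:
    void* raw = operator new(sizeof(Point));  // 1. the allocation...
    Point* p = new (raw) Point(1, 2);         // 2. ...then the constructor,
                                              //    called like any function
    p->~Point();                              // the destructor is just a
                                              //    function call too
    operator delete(raw);
    return 0;
}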

Parallelism and the Virtue of Selfishness

When it comes to tracking down bugs in recently parallelized code, look for static objects in the call path.  VB.NET’s naming convention makes the danger easier to spot (in VB.NET static objects are declared Shared).

So far these have cropped up inside of basic data model objects (objects that correspond almost 1 to 1 to a real world object in the domain of your application).  Usually the static object is a performance optimization that Moore’s Law has made unnecessary.  Moore’s Law aside, object allocation in a managed context is pretty inexpensive.

In my case the culprits were a static Regex object and a static SoapFormatter.

When it comes to concurrency, not sharing (selfishness) is a virtue.
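
The same trap, translated into a minimal native sketch (my .NET culprits were a static Regex and a static SoapFormatter; the function here is a made-up stand-in):

#include <cstdio>
#include <string>

// Shared version: 'buffer' is shared by every caller, so two threads
// formatting at once can interleave their writes.
std::string FormatId(int id)
{
    static char buffer[32];  // one buffer for ALL threads (the bug)
    snprintf(buffer, sizeof(buffer), "ID-%04d", id);
    return buffer;           // may already contain another thread's data
}

// Selfish version: each call gets its own buffer, so there's nothing
// to race on.
std::string FormatIdSelfish(int id)
{
    char buffer[32];         // local to this call
    snprintf(buffer, sizeof(buffer), "ID-%04d", id);
    return buffer;
}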

Using Windows’ FOR command to replace Unix find -exec

Finding all directories within a directory recursively:

dir /a:d /s /b <dirname pattern>

e.g.,

dir /a:d /s /b tmpdir*

the /a:d flag restricts the results to directories, /s does a recursive search and /b prints the results in bare format.

Why bare format?  Because that makes it suitable as input to the FOR /F command.  Putting them together, you can delete all these directories with:

FOR /F "delims=" %i IN ('dir /a:d /s /b tmpdir*') DO rd /s /q "%i"

FOR’s /F allows, among other things, using the output of a command (the single-quoted command inside the parentheses) as input to a command that’s repeated (the part after the DO).  In this case we’re executing RD (remove directory) on each directory whose name starts with tmpdir.  (If you run this from a batch file rather than the command line, double the percent signs: %%i.)

Misaccounting for Programmer Productivity

When it comes to programmer productivity one metric, widely used in times past, is lines of code produced.  Since code is one of the more visible outputs of a programmer it seems reasonable, at least on the surface, to use this as a gauge of how much work these people you’re paying lots of money to are doing.

The problem with this is that it puts the metric on the wrong side of the accounting ledger.  One of the surest ways to improve the quality of software is to write less of it.  This seems counterintuitive but it’s a natural corollary of the concept of reuse. 

It’s a familiar principle in pretty much every other area of modern life.  We don’t expect construction workers to produce their own vegetables, raise their own cattle, sew their own clothes, make their own mattresses, etc…  because it’s much more efficient for them to focus on construction work and purchase those necessities from people that specialize in producing them.  That is, the construction worker does a better job focusing on construction and gets a better mattress from the mattress maker than if he tried to do both.  The mattress maker needs a place to make mattresses and gets a better quality building if he lets the construction worker build it.

The analogue in software is the use of libraries.  Of course not all libraries are created equal, but if you have the option of using something that’s been specifically built for a purpose largely ancillary to your own then you can reap major efficiency gains by taking advantage of it.

The sorting library that has every kind of sort from merge sort to quick sort to postman’s sort has probably done a much better job at implementing the sorts than you have time to replicate.  Especially if sorting is only a small part of the total piece of functionality someone is paying you to produce.

When considered in that light, the phrase “lines of code produced” seems incongruous.  It should be, to borrow a phrase from Dijkstra, “Lines of code spent” and accounted for as a cost.

A Shortcut a Day Keeps Efficiency Away

What follows is a totally fictional dialog.  It’s the kind of conversation that might happen several times a week on a software team as a release nears.

Hacker, speaking to co-worker, announces: “There’s no way to make these buttons invisible.  They’re initialized by an array that gets set before the screen is displayed. … So I’m just going to disable them instead.”

Co-worker: “But isn’t there a way to remove the buttons?”

Hacker: “There’s a way to add buttons.  … I think it’ll just be easier to disable them.”

In walks Subject Matter Expert (SME) who, after overhearing this conversation, points out: “There’s another screen that does that, it’s probably a good example to use.”

We have reached an inflection point: Does the Hacker go with the technique he figured out after spending quite some time tracking down the issue?  Or does he take the SME’s advice and use the approach used by other screens?

There are many things to be gleaned from the Hacker’s decision that directly bear on how well the software he builds will stand the test of time.

Abandoning an approach that you’ve invested in does not come naturally.  In fact, in other areas of life it’s what you’re not supposed to do.  It’s hard for us even to detect how attached we are to an approach and how that attachment influences our judgment or willingness to abandon it.  But when it comes to software design, like any other exploratory activity, being able to recognize a dead end quickly is crucial to delivering quality on time.

Even though it doesn’t come naturally I’ve found that it does become easier with practice.  Once you’ve done it a few times and, more importantly, reaped the rewards of a system that either functions better or is more easily maintained (or both), those benefits make it easier to swallow the next time.

Sharing a struct definition between managed and native code

By defining a struct in IDL the struct definition can be used in managed and native code.

When the IDL file is compiled the C++ bindings are automatically generated in a header file.  This allows you to use the struct definition from C++.

When the resulting type library is imported by the type library importer the struct gets created along with any other visible IDL type definitions (e.g., interfaces, coclasses, enums, etc…).  This creation involves translating the IDL into IL metadata (you can see the metadata with ildasm).

Troubleshooting:

  1. Make sure the native code builds a type library and that the type library is registered.
  2. Use OleView to see the types included in the type library.  The struct should show up in this list of types.
  3. Don’t typedef the struct in the IDL; although this will compile, the type library importer will ignore the resulting alias.  If you want the typedef for use in native code create it manually.
  4. Use the Object Browser or ILDASM to verify that the type library imported the struct.

Quick Hit: Speeding up Visual Studio builds for Solutions with Managed and Unmanaged projects

Unload the native projects (right click the native project in Solution Explorer –> unload).  Any subsequent builds will not build the native projects, which usually take much longer to build.

Visual Studio’s memory of which projects are unloaded in a solution appears to be stored locally, so it won’t be checked in when the solution is checked in (you’re not checking in .suo files, are you?).

Of course this isn’t helpful if you’re working on the native projects but greatly speeds things up when you’re only working on managed code.

Which Control Properties can be automatically saved and loaded with ApplicationSettings?

ApplicationSettings automatically persist and bind property values for a component under the following circumstances:

  1. The component implements IBindableComponent.
  2. The component has either:
    1. A PropertyChanged event for the specific property (e.g., TextChanged, ValueChanged, etc…)
    2. or the component implements INotifyPropertyChanged.

Since all controls inherit from System.Windows.Forms.Control, and Control implements IBindableComponent, for controls you really only need to worry about the 2nd requirement.  If the control has a property changed event for the specific property then it will probably be automatically persisted when connected to an ApplicationSetting property.

Application Resiliency for Standard Users

Windows Installer has a feature called “Application Resiliency” which, in a nutshell, attempts to make sure that an application’s components are in order before executing the application.

Application Resiliency checks are invoked on a few occasions:

  1. When the user clicks an installer shortcut (which is different from a regular shortcut though the difference isn’t entirely germane to this post).
  2. When the user starts the application via file association (e.g., double clicks on a file extension that’s handled by the app).

These checks are configured by registry entries.  The presence of a special key, called a Darwin Descriptor (and described here), causes Windows Installer to do the extra Application Resiliency checks, possibly resulting in an automatic repair.

So far so good.  I’m happy to have extra help keeping an application’s components intact.

What happens if the application was installed by an Administrator but the first resiliency check occurs in the context of a user without admin privs?  For example, the administrator installs it then leaves.  The first user to use the application is a user without admin privs.

Some of the resiliency checks require accessing the msi installer package.  This package, at least for one particular installer creating program, is by default left in a location specific to the user that performed the install.

Since non-admin users can’t read this default location, they should fail the resiliency check.

The twist in this particular case is that the user only fails the resiliency check when it is invoked by file extension (e.g., they double click on a file extension that is handled by the app).

A few solutions come to mind.

  1. Use Windows Installer in the “no registry” mode.  This will fix the problem but you give up a lot of functionality provided by Windows Installer.
  2. Manually disable Application Resiliency by deleting the “Darwin Descriptor” registry value after install.  This is bad for several reasons not the least of which is that it relies on an implementation detail that might change.  It’s also hard to do because the registry value appears to be read-only.
  3. Make sure that msi installer package gets installed into a directory that any user can read (e.g., CommonAppData).

1 and 2 are tempting but IMHO these are hackish approaches with too many drawbacks to justify their use.  3 seems to be the way to go but is not entirely without drawbacks: some systems may be so tightly configured that there are effectively no directories that can be read by any user.

The (software) Rockstar’s Muse

Had to share some great linkage about the “Rockstar’s Rockstar” type of developer.

I don’t think having one or more of these types around can magically turn an awful business plan into the next iPhone (or whatever business plan included the iPhone). 

Amazon, as discussed in “Done, and Gets Things Smart”, is a case in point, though I’d be surprised if there weren’t a few poking around in the beginning, if only to get the seemingly embryonic technology of the time working well enough to make a business.

However, they might just buy you enough time to turn a mediocre business plan into a stellar one.  They do this mainly in 2 ways:

  1. The software they produce and/or architectures they put into place result in far fewer fires to put out.
  2. The software they produce and/or architectures they put into place tend to interface with other software/architectures well.

How do these 2 points conspire to make Rockstar’s Rockstar developers make-or-break components of a software team?  I’ll expand on these in a later post.

Why are unused properties accidents waiting to happen?

When it comes to class design I’ve tended to shy away from using what database types would call “de-normalized” objects.

In the world of Object Oriented Design the analogue of a de-normalized table is an object that has lots of fields, most of which are null or unset.

These kinds of objects should be avoided because they depend on hidden knowledge.  They increase the amount of hidden knowledge necessary to work with a given system.

In a perfect world the purpose of an object would be obvious based on its name.  The same goes for its properties.  But when most of an object’s properties are unused at any given moment you usually have an object that can take on radically different roles based on which set of its properties is set (or not null).

How is someone new to the code going to know which role the object is in at the time they’re looking at it?  <== There’s that hidden knowledge!

Although it’s tempting in a pinch to pile on the properties/fields of an object, especially when it’s sitting right there and you really need a piece of data, I urge you to resist the temptation.  The befuddlement you’re foisting on the future may just land in the lap of your future self.

As always this rule of thumb should be taken with a healthy dose of skepticism.  Sometimes it’s necessary to de-normalize; either for performance reasons or because the design genuinely calls for it.