Can't pass structs between an MFC COM object and .NET/C#

A subset of MFC is designed for COM.  Much of this support is in the form of macros (e.g., OLECREATE, DISP_FUNCTION, etc...).  After struggling to pass a structure back and forth between a C# client and an MFC COM server I gave up and decided to try an ATL COM server.


ATL was designed for creating "small, fast COM components".  As it turns out, passing structures (and pretty much everything else) is much easier with ATL than it is with MFC COM.  ATL takes a different approach to supporting COM (mainly via templates).


It seems to me that MFC COM is only designed to work with VARIANT-compatible data types (perhaps because it was targeted towards interoperability with VB).  The MFC "Add a method" wizard won't even allow you to specify a parameter or return type that isn't VARIANT compatible.  This restriction is reinforced by the limited set of parameter and return types that can be specified via the DISP_FUNCTION (or DISP_FUNCTION_ID) macros.


The .NET Interop Marshaler has built-in support for some of these (e.g., dates) but does not support VARIANTs of type VT_RECORD (see this kb article).  To store a structure in a VARIANT requires a VARIANT of type VT_RECORD.  Ergo, you can't pass structures back and forth between C# and an MFC COM Server.


See this excellent article (CLR Inside Out: Introduction to COM Interop) detailing how to pass a structure back and forth between C# and a COM server written with ATL.


The IDL uses the same parameter types as the C++ implementation; e.g., if you have a function that takes a struct MyUDT *ptr then that method's signature in the IDL will take a struct MyUDT *ptr.  So, from C++ to IDL to C# we have:


C++ .h

STDMETHOD(MyFunc)(struct MyUDT *ptr);


C++ .cpp

STDMETHODIMP MyCOMServer::MyFunc(struct MyUDT *ptr) { ... }


IDL struct definition

same as in C++, e.g., struct MyUDT { int x; int y; };


IDL interface method declaration

[id(1), helpstring("method MyFunc")] HRESULT MyFunc(struct MyUDT *ptr);



C#

MyUDT udt = new MyUDT();

udt.x = 1;

udt.y = 2;

MyCOMServerClass sc = new MyCOMServerClass();

sc.MyFunc(ref udt);


If the structure is entirely made up of blittable types then you don't even have the overhead of copying; the Interop Marshaler will just pin it.  If not, then you will need to apply the [in, out] attribute to the parameter in the IDL so that the Interop Marshaler copies the unmanaged representation back to the managed representation when the method call ends (so that you can see any changes).


Any changes made to the struct in the unmanaged code will be visible in the managed code.


Booting DOS from a USB flash drive

I wanted to boot into DOS from a USB flash drive (Lexar JumpDrive) to run a memory tester (I believe it needs to run in real mode; Windows runs in protected mode, but I'm not sure on that). There are several pages out there describing a procedure that, unfortunately for me, requires the presence of a floppy drive. I don't have a floppy drive :( I also don't happen to have a copy of Windows 98 or MS-DOS lying around. And I really didn't want to have to install FreeDOS (or any version of DOS).

So, from my Windows XP machine, these are the steps I took to create a bootable USB flash drive.

  1. Download Virtual Floppy Drive. There is no installer. Extract it to a directory then double-click on "vfdwin.exe" to open the user interface (called VFD Control Panel).
  2. Create a virtual a: drive by doing the following:

    1. From the Driver tab leave the Start Type at "Manual".
    2. Click Install (if it's not grayed out). This installs the VFD driver.
    3. Click Start. This starts the driver.
    4. Switch to the Drive0 tab.
    5. Click "Change" to choose a drive letter then select A from the drop-down (if it's available). This is the drive letter windows will think is a floppy drive. Make sure Persistent/Global is checked.
    6. Click "Open" then browse to a directory.
    7. Type floppy.img then click OK. This is where the floppy image will be stored when you create it.
    8. Choose Media Type (I leave it at 3.5" 1.44MB but you may want to emulate a different sized floppy).
    9. Click "Create".

  3. Create a startup disk on the virtual floppy drive.

    1. Windows now thinks you have a floppy drive at a: (or whatever you selected).
    2. Using Windows Explorer (or My Computer), right-click the a: drive and choose Format.
    3. Make sure "Create an MS-DOS Startup Disk" is checked.
    4. Click Start.

  4. Copy whatever DOS utilities/.exe files you want onto the virtual floppy drive.

    1. You can do this with Windows Explorer (drag them onto the a: drive) or command prompt.
    2. Remember - you will only have 1.44MB to work with, so choose carefully.

  5. Save the virtual floppy drive image.

    1. Close any Windows Explorer or My Computer windows that currently have the a: drive open. Same goes for command prompts.
    2. In the VFD Control Panel switch to tab Drive0 if it's not still there.
    3. Click "Save".
    4. Check "Overwrite an existing file" then click "Save".

  6. Download the HP Drive Key Boot Utility. It works for many brands of flash drive, not just HP drives.
  7. Plug in the USB flash drive.
  8. Install the HP Drive Key Boot Utility.
  9. Start the HP Drive Key Boot Utility.
  10. Choose the letter currently assigned to your USB flash drive. Click Next.
  11. Leave "Create New or Replace Existing Configuration" selected. Click Next.
  12. Choose "Floppy Disk". Click Next.
  13. Choose "Image from file" then browse to the file you saved in step 1.7 (e.g., floppy.img). Click Next.

You should now have a USB flash drive that you can boot from.

Plug the key into whatever machine you want to test. Boot the machine, go into its BIOS settings screen (usually by pressing DEL or F2 or F10 during boot), disable all boot devices except for the USB key. You may have to try USB-FDD, USB-ZIP or USB-HDD to get it to work. Save the changes then reboot. You should be greeted with an A: prompt!

Setting network configuration from a command prompt

NetSh is an excellent command line tool that exposes pretty much every configuration option available from the UI.  It's also scriptable.


Its command structure is similar to that of the Cisco IOS shell on routers and switches.  You switch to the context of the device/port/setting you want to configure then execute a command to configure it.  E.g., to set an interface to a static IP, you would:


interface ip

set address "Local Area Connection" static


interface ip switches you into that context (configuring TCP/IP for interfaces).  set address is a command available in that context.


To view the existing configuration for all interfaces use the show config command from the interface ip context, e.g.:


interface ip

show config



More on migrating from Unix to Windows

Although the Windows XP command shell and standard executables aren't as powerful as standard Unix utilities, there are some features that provide similar functionality.  Some of the ones I've found helpful, along with their Unix equivalents, are listed below.


  • fgrep -> find.  Basic substring search in multiple files.  The text string has to be quoted.  See find /? for usage.
  • grep -> findstr.  Supports basic regular expressions.  If the pattern contains spaces use /C, e.g., /C:"^pattern with spaces$" .  See findstr /? for usage.
  • find -> haven't found a command line equivalent with as much power but some of the functionality can be accomplished with dir, FOR and Windows Explorer's search (ok that's not a command line equivalent but it's a way to find files by pattern).

    • dir has several options, like /S for recurse, /A:d for directories, etc...  see dir /? for usage.  dir takes a filename glob.  For more powerful searching pipe the output to find or findstr ala dir /a:d | find "pp" to find all directories in the current directory that contain pp in their name.
    • FOR has several options that approximate find.  e.g., /R for recurse.  See FOR /? for usage.

  • loops (e.g., for in bash or foreach in tcsh) -> FOR in the XP command shell with a few caveats.  Variables are denoted with % instead of $.  The % is needed for the formal parameter.  So bash's for i in a b c ; do echo $i ; done becomes for %i in (a b c) DO echo %i in the XP command shell.

    • You can execute a multi-line command by placing it within parentheses.
    • By default, all variables except the FOR variable are evaluated once.  So if the body of your FOR loop changes the value of a variable, the variable will only take the first value.  To change this, the command shell must be run with the /V:ON option, which changes the % to ! for variable expansion.  See cmd /? for usage.

  • exit codes -> you can test for ERRORLEVEL in IF statements.  See IF /? for usage.  Not every executable sets a non-zero error code to indicate an error but the built-in commands seem to.
  • `cmd` parsing -> FOR /f.  To get the year and month of each line of the dir statement you would use for /f "usebackq tokens=1,2,* delims=/" %a in (`dir`) do echo a=%a b=%b

    • See for /? for usage.
    • If your FOR variable is j and you specify tokens then additional variables, starting at j, will be allocated (e.g., %j, %k, %l, etc....)

  • shift -> shift.  Identical!  Of course, you're still using %0, %1, etc... instead of $0, $1, ...  shift is a handy way to get around the 10 positional parameters limit.
  • variables -> set varname=value.  Values needn't be quoted.  To evaluate varname use %varname%.  It will only take the first value that is assigned to it unless you turn on delayed expansion via cmd /V:on, in which case use !varname!
  • arithmetic -> bash's $((expr)) becomes set /a expr where expr includes the variable being assigned to and the operation being performed.  e.g.,


x=$((1 + 1))

echo $x


set /a x=1+1

echo %x%


Modifying an array passed from a .NET Client to a COM Server

I'm encountering a situation where it would be nice to pass an array from a managed client (in C#) to a COM object (in-proc server) and have the COM object change the values in the array.  In my situation it's an array of strings but the principles apply to an array of base types as well.


In this case the COM object is an MFC DLL.  There are several elements involved in setting up the communication; they are described below.



MFC COM, at least in Visual Studio 2003, uses a combination of wizard-generated IDL, macros and dynamic registration.  Since the focus of this post is COM Interop, I'll leave a discussion of MFC COM to another post (see Technical Note 38 for helpful pointers).


The dispatch interface in the IDL needs to specify that the array is a SAFEARRAY of BSTRs.  e.g:

[id(4)] HRESULT MyArrayFunc([in, out] SAFEARRAY(BSTR) *saArray);

Managed strings are, by default, marshaled as BSTRs by the interop marshaler (not necessarily the same as the platform invoke default marshaling behavior).

SAFEARRAYs are arrays that know their size and number of dimensions.  They're an improvement over old C style arrays, which don't know their length.  The downside is that you have to use a bunch of API calls to access elements of the array (most importantly SafeArrayGetElement() and SafeArrayGetUBound()).



MFC uses dispatch maps to associate a native method with the declared methods of an interface.  Each interface method needs to be included in the dispatch map (BEGIN_DISPATCH_MAP/END_DISPATCH_MAP in your C++ source) via a DISP_FUNCTION (or DISP_FUNCTION_ID) macro.


COM Interop is extremely sensitive to the values set by DISP_FUNCTION_ID.  If the return type or parameter types are incorrectly specified then there is a good chance that a type exception will be thrown when the method is invoked!


To pass an array of strings, the parameter type must be set to VTS_VARIANT, e.g.:

DISP_FUNCTION_ID(CMyClass, "MyArrayFunc", dispidStrArrayFunc, MyArrayFunc, VT_I4, VTS_VARIANT)

NOTE: setting a return type of VT_HRESULT does not work, use VT_I4 (long) instead.


Inside the header file, declare the prototype (make sure it's public; VS2003 wizards seem to put it in a protected: section):

HRESULT MyArrayFunc(VARIANT &vArray);


Then in the source file, implement ala:

HRESULT CMyClass::MyArrayFunc(VARIANT &vArray)
{
    // make sure it's a string array passed by reference
    if (V_VT(&vArray) != (VT_BYREF | VT_ARRAY | VT_BSTR))
        AfxThrowOleDispatchException(1001, "Type Mismatch in Parameter. Pass a string array by reference");

    // get the safearray from the variant
    SAFEARRAY **ppsa = V_ARRAYREF(&vArray);
    cout << "In MyArrayFunc()" << endl;
    cout << "dimensions=" << SafeArrayGetDim(*ppsa) << endl;

    // get lower and upper bounds
    long lLBound, lUBound;
    SafeArrayGetLBound(*ppsa, 1, &lLBound);
    SafeArrayGetUBound(*ppsa, 1, &lUBound);
    cout << "lower bound=" << lLBound << ", upper bound=" << lUBound << endl;

    // access each element
    for (long i = lLBound; i <= lUBound; i++)
    {
        BSTR bstrCurrent;
        SafeArrayGetElement(*ppsa, &i, &bstrCurrent); // returns a copy we must free
        cout << "vArray[" << i << "]=" << CW2A(bstrCurrent) << endl;
        SysFreeString(bstrCurrent);

        // replace the element; SafeArrayPutElement makes its own copy of the BSTR
        BSTR bstrNew = SysAllocString(L"replaced");
        HRESULT hr = SafeArrayPutElement(*ppsa, &i, bstrNew);
        SysFreeString(bstrNew);
        if (FAILED(hr))
            goto error;
    }

    return S_OK;

error:
    AfxThrowOleDispatchException(1003, "Unexpected Failure in MyArrayFunc method");
    return 0;
}



Once a reference to the COM server is added, an interop class will be created with the methods defined in the dispatch interface.  According to the documentation the type library importer is supposed to convert [in, out] SAFEARRAY(BSTR) *param to a ref string[].  Unfortunately that doesn't seem to be what happens; it gets imported as a System.Array.  This means there are a few extra steps to calling the unmanaged function and then reading the modified array.  e.g.,

MyCOMServer.MyClassClass mc = new MyClassClass();

String[] ar = new string[30];

for (int i=0; i < ar.Length; i++)

ar[i] = "str1";

Array a = (Array) ar;

mc.MyArrayFunc(ref a);

// NOTE: to get return value, have to cast back from the object passed in by reference...

string[] nAr = (string[]) a;





More on the Token Bucket flow limiting algorithm

So far, the Token Bucket algorithm is doing an excellent job of keeping the bandwidth rate near the desired rate (the r parameter in the algorithm).

However, it has taken a bit to get here. I chose a burst rate that was way too conservative. I ran across a suggested burst rate formula on Cisco's site: burst rate = r * 8 * 1.5, where r is the desired bit rate, 8 is selected because we're using bits (instead of bytes) and 1.5 represents seconds. They determined this value empirically.

My own measurements agree with their indication that Token Bucket will tend to underflow if the burst rate is too small.

At least so far it looks like we get the best results with a bucket that is initially 1/3rd full and an incrementer that adds tokens every 3 milliseconds.

Using multiple threads to process a list of items.

Another multithreading issue I've come across frequently is where I have a fixed size list of items that need to be processed in some way.  Maybe it's a list of IP addresses to check or a list of files to parse.  Either way, the list size is fixed and each item can be processed independently.


One way to deal with these situations is to create a small number of worker threads and have them execute the same loop on the list.  Each thread should be numbered (e.g., from 0 to n-1 where n is the number of threads).  Each thread needs to know its number and the total number of workers.  The loop for each thread then becomes:


for (int i=myNumber; i < size of the list; i += numberOfThreads)

   process list[i]


As far as I can tell this is an example of the isolation approach to concurrency.  The data is carved up into independently processable segments and each segment is handled by a separate thread.  It reminds me of the approach that an MPI-based program might take.


Since the list is fixed and the elements are independent there's no need for locking.


Of course, the master thread usually needs some way to figure out when everyone is done.  I tend to have the worker threads signal an event then have the master thread wait on all of the threads' events.  An alternative approach might be to use an InterlockedIncrement/InterlockedDecrement then have the master thread use InterlockedCompareExchange to continually poll until a counter falls back to zero.



A Concurrency Sub-Pattern

I often find that I need to have some work done periodically in the background for some length of time.  The frequency varies but basically a background thread needs to run in a loop, indefinitely, and wake up every so often to do the work.


Since I don't know in advance when I'm going to need to terminate the background thread, I've found that I can use a single Event to kill 2 birds with 1 stone.


Declare a stop event - make sure it's unsignaled by default.  In the Win32 world this is done via CreateEvent().


In the thread loop, instead of sleeping for a set period of time then doing the work, use a timed wait on the StopEvent.  If it times out, do the work, otherwise exit the loop.  Something like


while ( true )
{
   DWORD rc = WaitForSingleObject(stopEvent, SleepTime);

   if ( rc == WAIT_TIMEOUT )
      do work
   else
      break out of loop
}



When it's time to stop the background thread all you have to do is signal the stop event.  If it's signaled while the thread is doing its work then as soon as it's done the thread will stop.


If the application needs to know when the background thread has stopped, at least in the Windows world, all you need to do is poll GetExitCodeThread() (have your background thread func return 0 when it's done).  This has the advantage of not introducing any additional event objects (e.g., an "I'm done" event) which plays into my preference for using the existing framework as much as possible.  On the downside it does require some busy waiting (in between checks to GetExitCodeThread()).  And if your background thread gets "stuck" your foreground thread may get stuck waiting on it forever.


Another rule of thumb I've found useful is "Never Lock and Block()".  As long as you make sure that your background thread doesn't block then there's no need to worry about locking up the foreground thread waiting on the background thread to terminate.

Implementing the Token Bucket algorithm

Implementing this algorithm is relatively straightforward.  I run a thread in the background that periodically fills the token bucket.  This required a minor transformation of the algorithm; instead of adding a single token every 1/r seconds I added T tokens every S seconds where T = r * S.


So the background thread just loops ala:

while ( true )
{
   sleep for S seconds

   if ( tokens < maxTokens )
      tokens += T;
}
I've left off a few details (e.g., try/catch blocks) but that's the heart of the implementation.


In my case the bucket starts off full (since no data has been sent yet) but that may not be the case in other situations.

Limiting outgoing bandwidth

I needed a way to limit the amount of bandwidth sent by an application.  Ideally the bandwidth limiting would be done in a way that didn't introduce large gaps in transmission (also known as jitter).  After looking around Wikipedia I came across an algorithm that looked like the perfect fit; the token bucket algorithm.


The simplicity of the Token Bucket algorithm for limiting bandwidth is exceeded only by its elegance.  Instead of attacking the problem of sending too much or too little data head on, e.g., by sending more or less data based on what has already been sent, it uses a generative approach. 


Controlling rate of flow based on what has already been sent requires keeping track of what has been sent and determining how much history to keep.  Those are 2 parameters that may vary depending on the characteristics of the flow (bursty vs non-bursty).  In other words, determining how much data to send based solely on how much data has been sent isn't an easy problem.


On the other hand, generating data in proportion to the desired rate of flow is relatively straightforward.  As indicated in the token bucket algorithm, you basically want to generate a token every 1/r seconds where r is the rate of flow.  If an application is only allowed to send data equivalent to the generated amount in the bucket then the application will tend to send data at the desired rate of flow.


The other parameter in this algorithm is b, the maximum bucket size.  Again this was a parameter largely dictated by the data; we have a pretty good idea of the largest burst we'll ever need to send.  In the absence of any other information I suppose you could always use the Maximum Transmission Unit for whatever network your application is using.

Yahoo Music subscription + XBox 360

I've had an XBox 360 for a little over a year and have totally converted to Console gaming.  With the exception of a brief stint in Second Life, I haven't used my PC for gaming since I got the XBox 360.


Beyond being a great gaming system the 360 is bundled with loads of media streaming functionality.  This is perfect for displaying pictures, listening to music, watching movies, etc...,  since it's already near the TV (which is downstairs).


While playing around with Yahoo Music Jukebox (YMJ) I discovered that the 360 can play subscription tracks!!  I had been considering canceling the subscription because it only saves you 20 cents per song purchase AND it's inconvenient having to connect a laptop to the home stereo just to listen to music.  Lo and behold, with network music enabled in the Yahoo Music Jukebox the XBox 360 recognizes it as a music source.


Under the covers I suspect that YMJ is using the Windows Universal Plug and Play Device Host API.  If so, it's a perfect example of providing extra functionality via standard frameworks (not necessarily standard in the "committee" sense, but standard in the de facto sense) leading to greater compatibility.

Custom Actions on Vista with Visual Studio 2003

One handy feature of Visual Studio 2003 deployment projects is support for Custom Actions. These can be either executables or libraries that are invoked during an install (or uninstall).

There are several predefined Installation components, derived from System.Configuration.Install.Installer, for handling things like installing services. You can implement custom actions as Installation Components (by deriving from Installer, specifying a few attributes, etc...) to get built-in support from both the 2003 Deployment project and the .NET SDK installutil.exe program.

The problem I ran into on Vista was that the install was being cancelled by Vista's Data Execution Prevention (DEP). Changing from a DLL custom action to an EXE still resulted in the execution being cancelled (though not by DEP). It turns out that there is a bug in the way VS.Net 2003 deployment projects construct the MSI install package.

To fix this, an application manifest needs to be embedded in the custom action executable. This manifest specifies a requestedExecutionLevel (we're using "asInvoker") that is needed for the custom action executable to run during an install. Unfortunately, on VS.Net 2003, I haven't found an easy way to embed the application manifest in the custom action executable. One way of embedding this manifest in the executable is to open the executable in Visual Studio and manually import the manifest. An example, with steps, follows.

The manifest itself is taken from the "Vista Developer Story" (step 6).

To embed this manifest in the executable:

  1. Build the executable

  2. In visual studio use File - Open to open up the executable.

  3. Right click the executable and choose "Add Resource"

  4. Choose "Import"

  5. Enter RT_MANIFEST then click OK

  6. Change the ID property to 1

  7. Save the executable.

  8. Leave the executable open so that the deployment project can't overwrite the executable.

This procedure will need to be repeated each time the executable is rebuilt, to update the embedded resource.

Musings on transitioning from Unix to Windows

From 1996 until 2002 I primarily worked in Unix environments.  Mostly Solaris but also AIX and Linux.  There are a lot of things to be said for the Unix world; great text utilities (grep, Perl, sed, awk, etc..), a very powerful command line environment (bash, ksh).  When Java came along things started to improve on the UI front; NetBeans (and later Eclipse) was a world of improvement over Emacs or vi/vim.


One oddity was that even when the production/deployment environment was a flavor of Unix, almost all of the development was done on Windows.  One company standardized on what was then called Visual Age for Java (from IBM).  Visual Age was an amazing UI - loaded with tons of productivity enhancing features, a first class object browser, a first class forms designer, etc...  It even had rudimentary support for drag and drop programming via graphical components (JavaBeans) that represented basic programming constructs (loops, conditionals).


After finishing grad school (2006) I made a conscious decision to try to find employment in a Windows development shop.  Most of my work during grad school was done in C++ using Visual Studio and I had grown to like the level of integration I found in Visual Studio.  C# was gaining greater acceptance and, coming from Java, I was naturally attracted to it.  Over the years, a few things have stood out to me regarding Unix vs Windows development.

  1. Visual Studio is one of the best IDEs around.
  2. The database tools in Visual Studio make it possible to do round trip development entirely within the IDE.  Stored procedure debugger, Server Explorer, Database Design view all greatly reduce the amount of work involved in building database-centric apps.
  3. Documentation in the Microsoft world is incredibly robust.  Beyond syntax the documentation often includes tons of sample code and examples.  Sometimes a short code sample is much better at illustrating a simple concept than pages of well-formed BNF.  Don't get me wrong, syntax is important but I've come to appreciate the additional code that I find, in one place, in MSDN.


More about datasets

The convenience of being able to visually design the queries and DML statements, while wonderful, is not without its cost.  So far what I'm discovering is:

  • Datasets introduce another state layer between the data model and the database. 

    • They must therefore be initialized.
    • Concurrent access must be serialized/synchronized to prevent corruption.
    • Rolled back transactions need to be accompanied by RejectChanges().

  • Since a DataAdapter is usually necessary to manage each table/query, changing the table structure means changing/updating the corresponding DataAdapter.

    • The "Configure DataAdapter wizard" makes this pretty easy.

It would be nice to have the benefits of the visual query designer w/o having to create a DataSet... Dragging the SqlCommand onto the component/form surface perhaps?

Why strongly typed datasets are cool

Coming from a Java/JDBC background, I was reluctant to use strongly typed datasets.  For one, as far as I know, there wasn't anything comparably generic with the same level of IDE support.  This was around 2000-2001 so things have probably changed in the Java world.


Anyway, beyond being an unfamiliar piece of IDE functionality, I was concerned about memory and computational overhead.  Retrieving an entire table just to manipulate a few of its rows struck me as gratuitous.  In a web environment with large tables on the back end it wouldn't be too hard to use up all available memory in a few requests if each request filled a dataset from one of the larger tables.


I've been converted.  Strongly typed datasets + IDE design support are an awesome combination.  They're cool because:

  • They reduce typing and typos.  The database adapter wizard generates the select/insert/update and delete statements for you.
  • They have built in support for parameters.
  • Each of the DataAdapter commands is optional.  If you only want to insert or update, delete the other commands.
  • Strongly typed datasets take advantage of command completion (Intellisense); very handy for assigning values to rows in tables with lots of fields.
  • They simplify iterative database design.  Changing a column name, adding or dropping columns are a piece of cake; just refresh server explorer, update the XSD (via the XML schema designer), regenerate the dataset and possibly regenerate the DataAdapter and you're set.  Beyond that, the compiler will let you know of any values that no longer apply.  Without the visual database and dataset tools changes to the database design are a lot more work and error prone (e.g., forgetting to change an insert statement or parameter name).

My work flow so far is:

  1. Sketch out a rough design of the table structure on paper.  This could be done inside Database Diagrams but I'm accustomed to working on paper for this part of the design.
  2. Use server explorer to create the tables. 
  3. Generate a database diagram with the relevant tables.
  4. Create a single dataset for each related group of tables by dropping all of the tables on the Schema Designer.  Save to generate the dataset (make sure generate dataset is checked).

    1. I usually manually create the DataRelations but only if I'm going to find it useful to traverse the tables via GetParent() or GetChildRows()...

  5. Create a separate data adapter for each table within the dataset.  Only leave the "refresh dataset" option checked for tables that have an autonumber/identity column.

    1. I also uncheck "use optimistic concurrency" because I think concurrency should be handled at the app level but this is optional.

  6. Use the strongly typed NewRow() and AddNewRow() methods to create the rows.
  7. Fill() and Update() as needed.  Be sure to use the right data adapter to update the dataset.


When RDBMS theory hits practice

There's a project I'm working on that uses MSDE to store its data. I inherited a schema that was totally non-relational; pretty much all of the data model objects are stored in binary serialized IMAGE fields. While this probably made it much easier, initially, to save data, it rules out any meaningful querying via SQL.

So I've finally gotten the go ahead to "refactor" this monstrosity. With notions of Relations, Entities (both strong and weak) and Joins in mind I merrily produce a schema that's totally normalized, heavily query-able and makes for quite pretty ER diagrams.

What I neglected to consider was that part of the model relied on data in another part of the model but needed to survive mutations/deletions to that data. Well, I didn't fully consider it. A first cut at the schema wasn't foreign keyed to the data in the other part of the model but, as my boss pointed out, that wasn't sufficient. What good is an ID that refers to a row that no longer exists (even if the deletion didn't cascade)? It's a dangling reference. What I needed was to duplicate some of the data from the other part of the model.

I think the general principle here is that duplication of data can be necessary if some part of the model needs that data to survive mutations/deletions in the source. Probably applies to logging (where strings usually handle the duplication) but also to any derived data objects that need to exist apart from their source model components.

Adding static routes on windows

If a Windows machine has 2 network cards, each on a separate network, 1 or more static routes are necessary to tell Windows the correct network to use for a given packet.  On Windows XP static routes can be added with the route ADD command.


So if NIC1 was on the 192.168.1.0/24 network with a gateway of 192.168.1.1 and NIC2 was on 192.168.10.0/24 with a gateway of 192.168.10.1, 2 static routes would be needed:


route ADD 192.168.1.0 MASK 255.255.255.0 192.168.1.1 -p

route ADD 192.168.10.0 MASK 255.255.255.0 192.168.10.1 -p


The -p makes the routes persistent (e.g., will survive reboots).  This will direct any traffic destined for 192.168.1.* through NIC1 and traffic destined for 192.168.10.* through NIC2.


Without the static route windows will probably choose to send all traffic out the NIC with the highest (or lowest) MAC address.


Strongly typed WMI for .NET

Just discovered a way to use WMI via strongly typed objects.  mgmtclassgen.exe (comes with the .NET SDK) will take a WMI schema object and generate a class with the object's properties and methods!


mgmtclassgen Win32_NetworkAdapter /P Win32_NetworkAdapter.cs 


creates a Win32_NetworkAdapter.cs file that contains a Win32_NetworkAdapter class with all the properties and methods of the WMI object!


To find instances use the generated static Win32_NetworkAdapter.GetInstances() method, optionally passing a WQL condition string (e.g., GetInstances("AdapterType='Ethernet 802.3'")).