DLL exports, interop and name mangling

It's been a while since I had a need to use platform invoke to access native methods.  When it comes to Interop I've had to use COM Interop much more frequently than platform invoke.

 

However, while trolling the C# fora for questions to answer, I ran across one about platform invoke that looked interesting.  So to answer it I created a test solution, added a Win32 DLL project and a managed C# console app to call the DLL exports via platform invoke.

 

I kept getting the "EntryPointNotFoundException" whenever calling any of the DLL's exported functions.  Running dumpbin /exports test.dll revealed that the functions were being exported.  I assumed that the name mangled form was just an artifact of the way dumpbin displays function names.

 

The name-mangled form, which C++ compilers use in part to create the illusion of class functions being part of a single object, is not just a display artifact.  The name-mangled form really is the symbol table entry.  It should have clicked when I was able to access the entry point by its ordinal (e.g., EntryPoint="#3").  Alas, it didn't.

 

The solution is to decorate standalone exported functions with extern "C".  Dumpbin will then show the undecorated name in the list of exports.
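
For illustration, here's a minimal C# sketch of the two bindings involved; test.dll, the function and the ordinal are hypothetical:

using System.Runtime.InteropServices;

static class NativeMethods
{
    // Throws EntryPointNotFoundException if the DLL exported a
    // C++-mangled name like ?Add@@YAHHH@Z instead of plain "Add".
    [DllImport("test.dll")]
    public static extern int Add(int a, int b);

    // Binds by export ordinal, sidestepping the name entirely.
    [DllImport("test.dll", EntryPoint = "#3")]
    public static extern int AddByOrdinal(int a, int b);
}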

 

As is so often the case when working in either a new area or an area that I haven't worked in for a while, it takes effort to see what is right in front of you: to accept and, more importantly, to understand the error messages.  Dumpbin was literally showing me the name of the exported function but I wanted to believe that it was named what I wanted it to be named (the unmangled version).

Interop with 32-bit DLLs on 64-bit Windows

Visual Studio 2005 C# projects default to building for the "Any CPU" target.  If you're trying to P/Invoke a method defined in a 32-bit DLL while running on 64-bit Windows then this will cause a "BadImageFormatException".  I believe this is because "Any CPU" executables run under the 64-bit CLR on 64-bit Windows, and a 64-bit process can't load 32-bit native DLLs.

 

The solution is to change the target to x86.  Or to rebuild the 32-bit DLL as a 64-bit DLL, but I haven't gotten that far yet!

Forms and State

While working on an app with lots of Windows Forms I've been thinking about the best approach for storing the data in a form.

On the web, since HTTP is stateless, the form is implicitly separated from its state. You, or your framework, have to manage state: when a form is submitted its contents are sent from the client to the server, the server marshals this content into some data model, manipulates that data model, then rebuilds the state by creating an entirely new page. This balance of work shifts a little in the AJAX world. Instead of submitting the entire page to the server, the client submits asynchronous requests, the server sends back results, then the client updates the page (and, implicitly, the state of the app).

In client apps, such as a Windows Forms app, the form itself can store state. Since processing occurs in one place (the process running on the client's desktop), when the user changes which item is selected in a list this state is stored in the list widget. Despite this, my background from the web has resulted in a tendency to create separate data objects to store state then manually marshal data into and out of the form.

Having been "raised on the web" when Model-Controller-View was all the rage, I find this separation conceptually pleasing. UI changes were more frequent on the web than they are on the desktop so the separation dramatically reduced the amount of work involved in a page redesign. The web, being much more server-centric, also made heavy use of relational databases. Once you've worked out the relational data model then most of the work of designing the corresponding object model was already done.

In some cases the separation has helped. Having separate, non-visual data objects makes it easy to reuse these objects when an offline or batch mode app is needed. More generally, it makes reuse across apps easier since the data can be consumed directly without having to create phantom/hidden windows.

I also find the UI, even in client apps, to be more fluid than the data model. Sometimes it's easier to use radio buttons, sometimes it's easier to use a combobox. Sometimes, based on the user's choices, you want to swap out user controls. When the data model is separate from the UI, it's easier to experiment with different UI designs.

On the other hand, this separation comes with overhead: the overhead of both creating the data model and marshaling data into and out of the UI.

I don't know whether this cost is offset by the increased incidence of reuse and UI flexibility. Being able to reuse a section of data manipulation code, because it's manipulating data not the UI, does save time. Given a project with enough forms this time savings can be non-trivial.
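
As a sketch of the separation (and of the marshaling overhead), assuming a hypothetical form with a name field and an active flag:

using System.Windows.Forms;

// Non-visual data object: reusable from offline or batch-mode code.
class PersonModel
{
    public string Name;
    public bool IsActive;
}

class PersonForm : Form
{
    TextBox nameTextBox = new TextBox();
    CheckBox activeCheckBox = new CheckBox();

    // Marshal model -> UI.
    public void LoadFrom(PersonModel m)
    {
        nameTextBox.Text = m.Name;
        activeCheckBox.Checked = m.IsActive;
    }

    // Marshal UI -> model.
    public void SaveTo(PersonModel m)
    {
        m.Name = nameTextBox.Text;
        m.IsActive = activeCheckBox.Checked;
    }
}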

Generalizing Code Creation or "Helping Give Birth to Skynet"

Back in 2001 or so, when the .NET Framework was introduced, one of the features that struck me as a real innovation was the CodeDOM.

 

There's a good bit to it but the basic idea is that the .NET Framework has support for dynamically generating source code and then compiling it.  The CodeDOM, or Code Document Object Model, is a language-independent model of source code.  A graph, that general-purpose workhorse data structure for CS people, is used to represent source code in a language-independent manner.  The "protocol" or "rules" for the graph are the CodeDOM.

 

I'm running across a case where it seems like it might be useful: there's a COM enumeration that I want to use but the type library importer hasn't marked it with the FlagsAttribute.  Since I don't want to have to keep updating it by hand, I'm thinking I'll just reflect over it, construct a managed equivalent in a pre-build event for this project, and then use that equivalent.  Makes sense, right?
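
Here's a rough sketch of the CodeDOM side of that idea; the namespace, enum name and member are placeholders for what reflecting over the imported COM enum would supply:

using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class GenerateFlagsEnum
{
    static void Main()
    {
        // Build the language-independent graph: an enum decorated with [Flags].
        CodeTypeDeclaration en = new CodeTypeDeclaration("MyFlags");
        en.IsEnum = true;
        en.CustomAttributes.Add(new CodeAttributeDeclaration("System.FlagsAttribute"));

        // In the real pre-build step these members would come from
        // reflecting over the imported COM enumeration.
        CodeMemberField none = new CodeMemberField(typeof(int), "None");
        none.InitExpression = new CodePrimitiveExpression(0);
        en.Members.Add(none);

        CodeNamespace ns = new CodeNamespace("Interop.Generated");
        ns.Types.Add(en);
        CodeCompileUnit unit = new CodeCompileUnit();
        unit.Namespaces.Add(ns);

        // Render the graph as C# source for the pre-build event to emit.
        using (StreamWriter w = new StreamWriter("MyFlags.cs"))
            new CSharpCodeProvider().GenerateCodeFromCompileUnit(unit, w, new CodeGeneratorOptions());
    }
}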

 

Here's the rub: If procedural programs can produce general intelligence then, IMHO, something like the CodeDOM (and its attendant support) will be a critical step on the journey to real AI.  John Searle seemed pretty convinced that rules and words (what I'm loosely referring to as "procedural programs") are never sufficient to produce intelligence.  Perhaps he's right.  Until that's resolved, I'll work on shaking this feeling of contributing, ever so infinitesimally, to the Birth of Skynet.

 

Making a background transparent in Visual Studio 2005

Every developer gets to the point in a client-side project where you need to work with graphics.  My favorite tools are, in order of increasing power, the Visual Studio 2005 Image Editor, Microsoft Paint and Adobe Photoshop CS3 (mega kudos to Franklin Thompson, an amazing digital photographer and web guru, for giving me a Windows version of Adobe Creative Suite 3 - he's Mac only).

 

If I can get something done w/o having to switch windows or start up another app then I'm all for it.  In that vein, I wanted to make a minor modification to an icon.  I didn't have access to the original icon and had the darndest time getting the background to be transparent whenever I copied it into the Image Editor (32x32 256 colors image type).  Turns out you can have every pixel of a certain color considered transparent by:


  1. Make sure "Opaque Background" is unchecked (choose a selection tool then uncheck Image -> Draw Opaque).
  2. Switch to the eye dropper tool and right-click a pixel that matches the color you want to become transparent.
  3. Switch to the selection tool then select the entire image.
  4. Paste the selection into a new image.  You can also paste it into the same image after deleting it then recreating it (with Draw Opaque unchecked).

 

C++/ATL library works in debug but not in release

Ever run across a library that works when compiled in Debug mode but produces erratic results when compiled in Release mode? Not interested in running afoul of the redistribution limitations on the debug runtimes? Nothing like good old static linking to save the day. There are implications for statically linking the debug runtime library to your executable but at least one of these matters little in today's memory rich environment. If you've got separate libraries each statically linked to its own instance of the debug runtime library then I believe there are issues with propagation of hardware traps...

Tables from the Web to the Desktop

Good design tends to spread from wherever it originates to wherever it can be gainfully applied.  HTML tables strike me as an example of this.  Way back in 1996, when the web we know today was but a wee lad, I remember the introduction of browser support for HTML tables.  They instantly became one of many excuses for me to "opt out" of going to class (I was marginally attending the Univ of Penn at the time).  Playing with cellpadding and cellspacing produced visually interesting results (control over internal padding and inter-cell spacing turned out to be another instance of good design included in tables), but it became obvious immediately that the big benefit was precise control over layout.

 

Eventually CSS would come to dominate the positioning game but you still find plenty of HTML tables out there on the web. 

 

Anyway, Windows Forms 2.0 introduced the TableLayoutPanel, which behaves nearly identically to HTML tables.  Given that the overwhelming majority of display devices (and I include paper as a kind of display device) are 2-dimensional, it makes sense that, over time, graphically oriented layout would coalesce around a few "good tricks".  IMHO the best "good tricks" are often rule-based (rule-based in the HTML table sense, not in the CSS 'create as many rules as you like' sense).  I'm a fan of CSS - it provided much-needed solutions to many design (and management) problems - but that's a post for another day.
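
A quick sketch of the analogy, using a hypothetical two-column layout:

using System.Windows.Forms;

class TableDemoForm : Form
{
    public TableDemoForm()
    {
        TableLayoutPanel table = new TableLayoutPanel();
        table.Dock = DockStyle.Fill;
        table.ColumnCount = 2;
        table.RowCount = 2;

        // Percentage sizing plays the role of width="50%" on a table cell.
        table.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 50f));
        table.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 50f));

        // Controls are placed by (column, row), much like cells in an HTML table.
        Label nameLabel = new Label();
        nameLabel.Text = "Name";
        table.Controls.Add(nameLabel, 0, 0);
        table.Controls.Add(new TextBox(), 1, 0);

        Controls.Add(table);
    }
}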

 

It is good to see good design migrating from one modality (the web) to another (the smart client/desktop/OS).

Visual Studio 2005 Preprocessor #defines not being inherited

So I come across a situation where I'd like to modify a set of macros based on the project that has included a given header file.

 

Ugly background: migrating code from Visual Studio 6 to Visual Studio 2005 in pieces.  Some common pieces are still in VC6, whose compiler didn't have the extremely NIFTY variadic macros feature introduced in the VC++ compiler in VS2005.  Variadic macros = macros that accept a variable number of arguments.

 

Anyway, so I have defined my preprocessor #define at the project level but for some reason the code keeps running as though the #define isn't defined!  If I manually include a #define ... then the code picks it up (depending on where the manual #define is placed).  Am I running up against the "30 preprocessor defines via /D" compiler limitation?  No, a quick look at the command-line shows fewer than 10 in total.

 

Turns out that because this particular project was migrated from Visual Studio 2003, a wonderful $(NoInherit) tag was added to the compiler call for each file inside the .vcproj file.  Of course this tag isn't exposed through the UI.  Googling revealed that $(NoInherit) does exactly what it sounds like: it inhibits inheritance of the corresponding project-level property.  In this case, it was preventing my project-level #define from propagating to the compilation command (cl.exe).

 

ARGH!!!!!!

PERL, like riding a bicycle

I haven't written PERL for pay in several years but a popular wiki tool that I'm using (TWiki) uses it.  So on occasion I have to jump back into the PERL world.  A few quick hits that should help me (and you) get up to speed quickly when tracking down a problem.  I'm using indigoperl on win32 with several of the win32 ports of GNU tools (e.g., less)

 

Getting Help


Thank goodness for perldoc.  Execute it via perldoc <page> or perldoc <module>.  Useful perldoc pages:

  • perltoc - table of contents
  • perlintro - quick syntax reference (see perlsyn for more detail)
  • perlfunc - list of builtin functions
  • perldebtut - perl debugger tutorial.  Covers the basics more than well enough to troubleshoot most problems.

Useful Debugger Commands


Phrased in "Visual Studio" speak for those of us in the Win32 world now..


  • n - step over
  • s - step into
  • r - step out
  • v - display surrounding source code, type again to display more source code
  • . - show next statement
  • c - run until line (the line numbers are listed by v or l)
  • l - list line
  • p $var - prints $var (scalar).
  • x @var - prints @var in a list context
  • x \%var - pretty prints name/value pairs for a hash, also prints any object.

Turns out my particular problem was inside CharsetDetector::detect1().  It was dying when it encountered certain byte sequences.  The quick and dirty solution was to wrap it inside an eval { } (PERL's faked exception-handling mechanism; errors are stored in $@) and return an empty string on error.

 

Registration free COM for .Net and Native components

On XPSP2 it appears that manifests (assembly and application) need to be both embedded AND standalone.

 

The SideBySide assembly manager puts errors that occur during load into the System event log (on Windows XP).  The errors are listed in reverse order, so the last one is the most general and earlier ones are more specific.  This has proven invaluable in tracking down side-by-side assembly problems.  Apparently Windows Vista has an sxstrace tool that improves on this but I'm presently stuck on XP...

 

An error message along the lines of "Component identity does not match component requested" was, in my case, traced back to a missing "type=win32" attribute for the assemblyIdentity element.

 

Wonderful ISO mounting utility

There are lots of programs out there that enable creation and mounting of ISO images (and other image formats as well). One of my favorites, Alcohol, requires the installation of a "fake" SCSI driver that occasionally interacts poorly with other drivers.

I ran across a wonderful ISO mounting utility directly from Microsoft! It doesn't even add a driver to the bootup sequence, so it has no impact on startup time. It's not terribly polished - I've found that you can cause it to malfunction if you don't give it time to execute its operations - but it's free, extremely compact and comes directly from The Source (Microsoft). It's called the Virtual CDRom Control Panel (even though it doesn't really look like a control panel).

Registration Free COM for unmanaged components

By unmanaged components I mean COM servers built with unmanaged/native frameworks: ATL, MFC or (heaven forbid) raw Win32.

Getting Registration Free COM to work on Windows XP for unmanaged components is similar to getting Registration Free COM to work for managed components.  I'm finding that having the manifest as a standalone file helps a lot during debugging and for managing changes.  Visual Studio 2005 has options for setting a lot of these properties (e.g., assembly identity, dependency fragments, etc...) but I've found it easier to keep them in a single file (included in the project but NOT ending in .manifest because these files are merged no matter what).

The key insight for me was that assembly, in this context, doesn't mean the same thing as an assembly in the managed context.  An assembly here is a logical grouping of Portable Executable (PE) modules (.exes, .dlls, etc...).  It can contain 1 or more DLLs, and it's something that the developer creates to control the code that gets executed at runtime (in this case by controlling which module the COM runtime binds to the application).

It MUST have a different name than any of the DLLs (really PEs but you get the point) it contains.  As an example (below), I created an assembly named TestATLLib.X that contains 1 PE - TestATLLib.dll.  That assembly uses the following assembly manifest:


<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity
      type="win32"
      name="TestATLLib.X"
      version="1.0.0.0" />
  <file name="TestATLLib.dll">
    <comClass
        clsid="{EBB0B140-09E7-4C47-B6F1-FACD79FA0F55}"
        threadingModel="Apartment" />
    <typelib
        tlbid="{CC66CB00-53FB-41F3-A745-28D467ED7523}"
        version="1.0"
        helpdir="" />
  </file>
  <comInterfaceExternalProxyStub
      name="ITestSimpleObj"
      iid="{C066CE71-F97C-4CA6-8769-72114D3DDC71}"
      proxyStubClsid32="{00020424-0000-0000-C000-000000000046}"
      baseInterface="{00000000-0000-0000-C000-000000000046}"
      tlbid="{CC66CB00-53FB-41F3-A745-28D467ED7523}" />
</assembly>

Use project properties -> Manifest Tool -> Input and Output -> Additional Manifest Files to have this merged into the automatically generated manifest.  I also set an Output Manifest File ($(OutDir)\$(ProjectName).X.manifest) so that I can copy it to the application executable directory after building the application.  The standalone manifest file shouldn't be necessary but I believe there's a problem on Windows XP that prevents the binding process from using the embedded manifest for DLLs.

Now that the unmanaged COM object has been wrapped into an assembly, and that assembly has been named and versioned, registration-free COM requires that the application declare that it depends on the assembly.  This is done with another manifest file (this time an application manifest).  I tested it with a managed client because that's the easiest for me to make, but it should work from any client executable.

The application manifest below declares that TestClient needs assembly TestATLLib.X.  Whereas COM usually requires its DLLs to be registered, this test executable will run without the COM object being registered! 

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32"
                    name="TestClient"
                    version="1.0.0.0" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32"
                        name="TestATLLib.X"
                        version="1.0.0.0" />
    </dependentAssembly>
  </dependency>
</assembly>

Although you can embed the application manifest in the executable as a native win32 resource, I have tended to place it alongside the executable (it must have the same name as the executable with an additional .manifest extension - so TestClient.exe's application manifest is stored in TestClient.exe.manifest).

Registration Free COM for .net/managed components

.NET assemblies can be used from native code via COM.  Unfortunately COM, by default, uses the shared assembly model.  That is, everyone using a given version of a COM server uses the executable (or dll) stored in the same place.  So the last app to install the COM component determines which COM executable is used; if that app is uninstalled all the other apps that depended on that component are broken.

To get around this problem Registration Free COM was introduced.  Now that computers have so much memory, one of the main benefits of shared assemblies/shared DLLs is no longer relevant.  And the problem of DLL hell is much mitigated when every application has its own copy of the DLLs it uses.

The excellent article "Registration-Free Activation of .NET-Based Components: A Walkthrough" got me about 3/4ths of the way there but I didn't like having to manually compile a project so as to embed the .net component's manifest as a native win32 resource in the resulting executable.

Fortunately, another blog post "How to embed a manifest in an assembly: Let Me Count The Ways..." points out how to use the manifest tool (mt.exe) included in the 2.0 SDK and vs2005 to embed the resource after the assembly has been built (e.g., a post build event).

I got this working on a small test project.  The key components were as follows:

Native application manifest:

<?xml version="1.0" encoding="utf-8"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="CppClient" version="1.0.0.0" />
  <dependency>
    <dependentAssembly>
      <assemblyIdentity type="win32" name="RegFreeCSharpClassLibrary"
                        version="1.0.0.0" />
    </dependentAssembly>
  </dependency>
</assembly>

This should be added to the generated manifest via Project Properties -> Manifest Tool -> Input and Output -> Additional Manifest Files.  Be sure not to name it with the .manifest extension until you've verified that it works; Visual Studio 2005 always includes any files in the project that end with the .manifest extension.

NOTE:  Since I'm using Windows XP I couldn't get the native application manifest to work when it was embedded in the executable but it works as a standalone .exe.manifest file alongside the executable.

For the managed component that was being exposed via COM, the manifest was as follows:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <assemblyIdentity type="win32" name="RegFreeCSharpClassLibrary"
                    version="1.0.0.0" />
  <clrClass clsid="{684DFB01-D74F-41c7-95F9-6520C910BF18}"
            name="RegFreeCSharpClassLibrary.MyClass" />
</assembly>

This manifest needs to be converted into a binary win32 resource and embedded into the resulting executable via the manifest tool (mt.exe).  In a post-build event this can be done as follows:

mt.exe" -manifest "$(ProjectDir)$(TargetName).manifest"  –outputresource:"$(TargetPath)";#1


Visual Studio 2005 Class Designer

Now that Visual Studio 2008 has been around for a few months I figured it was time to post an update about Visual Studio 2005.  That's right, I've finally gotten around to doing most development in the Last Version of Visual Studio.  Be that as it may...

 

Things that I truly love about Visual Studio 2005:


  • Refactoring - Rename alone would have been plenty but Extract Method, Encapsulate Field and Extract Interface have proven to be enormously useful.  I especially like the Alt-Shift-F10 keyboard shortcut for automatically generating a method based on a name and parameters typed in source.  So if I'm at a point where I know I'll need a method that does something then I'll just type DoSomething(formalParam1, formalParam2); then hit Alt-Shift-F10 and the method is automatically generated!!!!

    • Refactoring + keyboard shortcuts have made it so that I can almost design methods entirely within Visual Studio.  During the early part of a project I still prefer to write pseudocode/PDL describing the steps but I've noticed that I do this less and less frequently now that Visual Studio supports auto-generating methods.

  • Class Designer.  I've only recently gotten acquainted with the Class Designer but I totally love it.  It's the OO analog of the Table Designer for databases.  It not only makes creating the in-memory data model a lot easier, it also makes it easy to figure out where data *should* live based on the existing data.  I find that representing certain properties as association and collection association links helps a LOT with figuring out where to put information.

    • I've taken to creating a class, adding the fields that I think I'll need then using refactor to encapsulate all those fields.  I then change the display for key properties to association links.  After a few classes have been created the central/key classes tend to emerge because they're connected to many other classes by association links.
    • Since refactoring works from the designer I tend to use both when laying out the initial data model AND when changing it.
    • The details window is where I spend most of my time; adding fields, methods and the like.  The keyboard shortcuts again come in handy here - using them I can create 5 - 10 fields without my fingers ever leaving the keyboard.  The details window evidences very good design.

  • Debugging.  The managed debugging assistants are nice but I especially like the improved display of non-scalar data types.  Hovering over a class brings up a context menu that allows browsing its properties and values!!!

    • The debugger also seems to be more stable.  Perhaps it's the use of the hosting process but the debugger crashes noticeably less often in VS2005 versus previous versions.

  • Anonymous methods.  OK this is more a 2.0 framework thing than a Visual Studio thing but I am TRULY loving anonymous methods.  Updating the UI from a background thread (I know, I know, I should be using background worker for this but I haven't gotten around to it yet) is so much easier when you don't have to create a new method at every point where you want to update the UI.  Just Form.Invoke(new MethodInvoker(delegate(){ Form.UpdateTheUI(); })) and the call is marshaled onto the UI creator thread (see the sketch after this list).  The syntax took a little getting used to but once I realized that these were anonymous METHODs (not anonymous delegates) then it made a lot more sense.
  • The new user settings/application settings model.  Again, more a 2.0 framework thing but the Visual Studio settings editor makes using this so simple that I can't believe I didn't start using it sooner.  All the little settings that were kind of a pain to deal with before VS2005 are much more manageable under 2005.  For starters, the automatically generated Settings classes make it easy to intercept/change defaults to reasonable values BEFORE they're used.  For another, you can define a new setting to back a property on the fly!
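
Here's a minimal sketch of that background-thread update pattern (the form, label and method names are hypothetical):

using System.Windows.Forms;

public class MainForm : Form
{
    Label statusLabel = new Label();

    // Called from a background thread.  Invoke marshals the anonymous
    // method onto the thread that created the form.
    void ReportProgress(string status)
    {
        this.Invoke(new MethodInvoker(delegate()
        {
            statusLabel.Text = status; // safe: runs on the UI thread
        }));
    }
}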

Why the AT&T Tilt/HTC Kaiser is awesome

So many reasons, so little time but here goes...

  1. Built-in GPS. This has prevented me from getting lost so many times in the past few months that I can't imagine living without it. This is especially helpful in downtown Atlanta where lots of streets are closed because of construction. Since I'm still not all that familiar with the streets a detour would normally mean me getting lost OR pulling over, pulling out the paper map and trying to get my bearings. Since I bought the Tilt I just start Windows Live Search, bring up a map and voilà! An alternate route becomes obvious. Downtown Atlanta IS mostly a grid but there are enough deviations that it's easy to go far out of the way while trying to find an alternate route.

    1. QuickGPS speeds up initialization.
    2. A suction cup PDA holder on the front windshield makes using this while driving much less dangerous :)
    3. GPS requires a clear view of the sky so no luck indoors ;(
  2. ActiveSync. I have no idea what iPhoners have been using but I probably rely more on the Tilt for meeting notices and reminders than my work laptop now. Calendar, Contacts, Email, Tasks and Notes all constantly available!
  3. 3G. This phone is so awesome that it even worked on a recent trip to South Korea! 3G isn't as fast as DSL but it definitely feels like high speed internet access. With a little hacking it's possible to share the Tilt's 3g internet connection with a laptop (both by USB AND by bluetooth!) - this is really handy in airports and other places that don't have free wifi.
  4. Bluetooth. I didn't have a single bluetooth device before getting this phone but it's so useful that I have since added bluetooth dongles to my home laptop, a bluetooth headset (sony dr-22) and plan on getting bluetooth speakers and a bluetooth receiver. If your music collection is entirely on a PC or other device, bluetooth is the missing "link". Windows Media Player even has built-in support for the advanced profile so I can pause and skip from the headset while listening to a playlist.
  5. Pocket IE. Despite all its shortcomings (and they are many), Pocket IE is wonderful for plain and simple text + graphics browsing. There are even some old plugins (e.g., flash) that still work. Just being able to browse the web from anywhere at anytime is a godsend if patience isn't exactly your strong point. I don't mind waiting in line any more since I can use the time to pull up google reader and catch up on internet browsing or send off a short response to an email. IT'S GREAT!!!!!
  6. Office Mobile, Adobe Reader LE - for obvious reasons.
  7. The HTC Audio Manager application. This is probably one of the best designed apps that I've seen for the Tilt. It takes advantage of the touch screen interface in all the right places. The buttons are large enough to use without fumbling. It is so well designed that it doesn't require a stylus to create a playlist on the fly or edit an existing playlist. In my opinion this app works so well because it compensates for the lack of screen space by using context. No screen has very many options (hence there is enough space to make the options available with large buttons or fonts) but when you click an option that requires more input the entire screen switches into that context. And they didn't forget to always provide a way to 'back out' of the current context. At first this was a little counter-intuitive to my desktop-centric expectations (e.g., to switch from one artist to another you touch the artist's name at the top of the screen - this sends you back to the full artist listing where you can select another artist) but it only takes a few minutes of getting used to.

    1. I have to comment on the playlist editor. It is, hands down, light years better than the Windows Media Player Mobile playlist editor. To change the order or delete tracks you switch into edit mode (a single, large button) at which point you can touch-drag tracks around or touch-hold-delete any track. To commit the changes you click done (another single, large button). If you're browsing tracks, touch-hold-add to playlist takes you to the list of playlists where a single click on the target playlist adds the track. And the next time you try to add to playlist the last selected playlist is selected by default!
    2. The Windows Media Player Mobile playlist editor requires the use of a stylus (unless you happen to have really long nails). Modifying a playlist is a real pain especially if you're listening to a playlist WHILE modifying it. On the plus side, it will display the album art if it's available, something the HTC Audio Manager doesn't do. And its playlist format is accessible to the desktop version of Windows Media Player. Still, given that I listen to music more often than not while on the go, I would readily trade playlist sync for a UI that I could use without a stylus!

  8. MicroSDHC support. Right now the biggest card I can find is 8GB but 16GB should be out soon. The manual suggests that it'll support as much as 32GB. 32GB on a mobile phone!!!
  9. The slideout keyboard. After getting used to this I don't even bother with the predictive input/text completion; it's more than fast enough for small to medium sized email messages.
  10. Voice Recorder. With a little hacking you can assign the voice recorder to the "Push To Talk" button. This came in handy in South Korea where an overeager-to-speak-english cab driver regaled me, my boss and a coworker with "Amazing Grace" and "Jingle Bells". Without the Tilt I couldn't have captured this hysterical moment for posterity ;)

Oh and for you Microsoft haters out there - I DO have Google Maps installed but as of the last update of Windows Live Search it is way easier to get directions with Windows Live Search than with Google Maps. Windows Live Search remembers previously entered addresses and allows you to choose from this list when setting the start and destination. This can be done in the span of a traffic light. Whereas with Google Maps I have to type the addresses - not possible for me unless the traffic light is really long. Windows Live Search uses an arrow for its current location which I find helpful in getting oriented while in slow moving traffic. Windows Live Search also makes it much easier to find information on nearby restaurants, gas stations, movie theaters, etc... because of its categories listing. This is another case where the ease of use is much greater because of large buttons + context; I can browse predefined categories by switching from the map context to the home context (via a big "home" button) then quickly browse to the category of interest. Google's search-oriented interface is too cumbersome for use while driving.

Split-Tunneling for PPTP vpn clients

As far as I can tell split-tunneling isn't supported by the Windows XP VPN client.  By split-tunneling I mean sending traffic destined for the private network through the encrypted tunnel while sending all other traffic over the VPN client's local gateway.  What this means is that remote users who connect via VPN can access private/company network resources but CAN'T access the internet (this setup has no router, just a PIX firewall).  This is a real bummer for users; they have to repeatedly disconnect then reconnect just to access the internet while they're accessing the company network.

 

Windows XP's VPN client bumps up the metric of the client's local default gateway (from 1 to 11 on my machine) and adds the new VPN pool address as the new default gateway.  This can be seen in the output of "route print" from a command prompt; there will be two 0.0.0.0 0.0.0.0 entries - the original default gateway (which now has a metric of 11) and the VPN pool default gateway (which has a metric of 1).  Since the route with the lowest metric wins, traffic only gets routed over the VPN tunnel.

 


However, if the VPN address pool overlaps the private network pool then Windows adds a network route (a route for all packets destined for a network) instead of a default route (0.0.0.0 0.0.0.0 = a route to use if nothing else matches).  So traffic destined for the company/private network gets routed properly and all other traffic uses the client's local gateway.  Be sure to disable the "use default gateway on remote network" TCP/IP option in the XP VPN connection settings, otherwise clients won't be able to access the internet.

 

 

A packet's trip back

Should you find yourself setting up Remote Access VPN for a Cisco PIX firewall (or any Cisco IOS based device) you may find it helpful to keep in mind something that kept me stymied for a few hours.  Packets have to be able to make the trip back.  So if you've got a NAT rule translating all outbound packets so that they look like they're coming from your external IP, then packets sent from the dial-up/VPN network won't be able to make the return trip.

 

In this setup, like most SOHO setups I imagine, the remote access VPN network is set up on the outside interface.  That's so that people can access it via their ISP.  Although TCP provides the illusion of a single connection, it's still based on packet switching, which means that what you've really got are packets going in 2 directions.  Something like:

 

Source to dest is: Remote ISP -> VPN Server at outside interface -> pix firewall -> internal network on inside interface.

Dest to Source SHOULD be: internal network on inside interface -> pix firewall -> outside interface -> remote ISP

 

but with NAT in place, the trip through the firewall changes the packets to make them look like they're coming from the outside interface's external IP.  Since the remote ISP is waiting for packets from the internal network, it ignores packets that look like they're from the outside interface's external IP.

 

The solution is to exempt outbound packets going to the vpn group from translation.

 

The other part of the solution is to allow packets from the vpn network (which is on the outside interface, a lower security interface) into the internal network.

The lowdown on VARIANTs

Interop between COM and C# (or any other managed language) makes heavy use of the VARIANT data type.  Being a lover of first principles, I finally found a first-principles-ish page describing VARIANTs.

 

In the VS2005 help it's under Win32 and COM Development -> Component Development -> Automation -> SDK Documentation -> Data Types, Structures and Enumerations -> VARIANT.  Apparently VARIANTs came into heavy use under Automation (formerly known as OLE Automation, subsequently known as ActiveX).  The basic idea is the programmatic manipulation of components (as opposed to visually manipulating them in their native application environment).

 

For a not-so-first-principles-ish, but still excellent, discussion of VARIANTs see the excellent, if old, series "Dr. GUI and COM Automation, Part 1" (and parts 2-4).
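
On the managed side of this boundary, a VARIANT surfaces as System.Object: the interop marshaler picks the VARIANT tag (VT_I4, VT_BSTR, ...) from the runtime type.  A small illustrative sketch (no real COM object involved, just the type mappings):

using System;

class VariantMapping
{
    static void Main()
    {
        // Passed to a COM method taking a VARIANT, each of these would be
        // marshaled with a different tag: int -> VT_I4, double -> VT_R8,
        // string -> VT_BSTR, bool -> VT_BOOL, DBNull.Value -> VT_NULL.
        object[] samples = { 42, 3.14, "hello", true, DBNull.Value };
        foreach (object v in samples)
            Console.WriteLine("{0}: {1}", v.GetType().Name, v);
    }
}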

Sometimes DRM can be a real pain

So I've finally upgraded my cell phone - from one I've had since 2000 to the (relatively) new AT&T Tilt.  It's manufactured by HTC out of Taiwan (they call it the HTC 8925 or, internally, the Kaiser).

 

With built-in GPS and a 400MHz ARM processor it's got enough horsepower to be an all-in-one device.  So far, so good; the GPS has been extremely helpful - in less than a month it's made finding alternate routes much easier.

 

After adding an 8GB microSDHC card the Tilt has more than enough space to store pretty much my whole music collection (downsampled to 160kbps per track).  I bought a new album from the soon-to-be-defunct Yahoo Music (might as well take advantage of that 20 cents off per track) and transferred the tracks to the Tilt.

 

The setup worked well until I flashed the device's ROM with a stock WM6.1 ROM from HTC.  The stock ROM is awesome BUT Windows Media DRM no longer recognizes the device.  Windows Media Player valiantly offered to retrieve the licenses over the 3G internet connection but, since there's no version of Yahoo Music Jukebox for Windows Mobile, it couldn't authenticate and acquire the licenses.

 

I get the idea that DRM allows for a plethora of business models; the subscription model would be a lot harder to do without it.  Still, I *purchased* the album.  Why do I have to burn it to CD, wasting a blank CD-R in the process, then re-rip it if I want to play it on other devices? (even if I "format" those other devices)  It isn't enough of a hurdle to prevent nefarious copying but it is enough of an annoyance to slow adoption of legitimate use.

 

Hopefully it won't be long before non-DRM is the standard for electronically distributed music.  Subscription tracks can always default to DRM but if I buy a track I really hope it won't be protected by default.

 

Searching through PDF and DOC files with TWiki KinoSearch and IndigoPERL

Wikis are great but something that has been bothering me is that I wasn't able to search through the contents of Adobe PDF or Microsoft Word DOC files.   IMHO, one of the biggest benefits of wikis is the ability to search.  If there are lots of documents that are unsearchable then there is a lot of information I can't find when I may need it.

 

TWiki runs well on Windows.  It's entirely written in PERL (in this case, IndigoPERL).  There are several TWiki extensions and addons for searching PDF and DOC files but I couldn't get them to work because they all depend on PERL modules that wouldn't install on IndigoPERL (via the CPAN shell).

 

After trial and error I've found that sometimes a module will fail to install from the CPAN shell (e.g., via the CPAN PERL module) but will install without problem manually.  In some cases you have to get an earlier version of the module and build it by hand; the most recent version won't pass its build tests.  The CPAN Search Site is an absolute godsend when it comes to manually installing CPAN modules.

 

Anyway, these are the steps I took to get the SearchEngineKinoSearch TWiki plugin working for TWiki (4.0.5) on Windows (server 2003R2) using IndigoPerl (5.8.6):


  1. Install VC++ from Visual Studio .NET 2003 into c:\vs2003 (or another directory name without spaces that is shorter than 9 characters).  Don't forget to include the Windows SDK tools.
  2. Find and download a copy of p2bat.pl.  Save it to your IndigoPERL bin directory.
  3. When manually installing PERL modules always use the visual studio 2003 command prompt (or run vcvars32.bat) so that IndigoPERL can find the VC++ compiler and nmake.  Manually installing is usually as simple as extracting the module into a directory, changing into that directory and running the following commands:

    1. perl Build.PL (for older modules replace Build.PL with Makefile.PL)
    2. Build (for older modules replace this with nmake)
    3. Build test (for older modules replace this with nmake test)
    4. Build install (for older modules replace this with nmake install)

  4. The PERL CPAN shell (perl -MCPAN -e shell) expects to find several GNU tools so install the windows version of the following GNU tools: gnuPG, grep, gzip, tar and unzip.
  5. KinoSearch relies on other programs to extract the text from PDF and DOC files so install xpdf and antiword.
  6. Manually install the following CPAN modules (the version is important): ExtUtils-CBuilder (version 0.21), Spreadsheet-ParseExcel (version 0.32), KinoSearch (version 0.161).  These can be downloaded from the CPAN Search Site.
  7. After manually extracting the SearchEngineKinoSearchAddOn.zip into the twiki dir, manually modify twiki/lib/TWiki/Contrib/SearchEngineKinoSearchAddOn/StringifierPlugins (DOC_antiword.pm and PDF.pm) as follows:

    1. In PDF.pm change system("pdftotext", $filename, $tmp_file, "-q") to system("c:\\bin\\pdftotext.exe", $filename, $tmp_file, "-q"); where c:\\bin\\pdftotext.exe is the fully qualified path of your pdftotext executable (this is a part of xpdf for windows).  The \\ are necessary since \ is a metacharacter in double quoted PERL strings.
    2. In DOC_antiword.pm replace "antiword" with the fully qualified path for your antiword.exe (e.g., c:\\bin\\antiword.exe).

  8. [optional] On my system I've had to modify twiki/lib/TWiki/Sandbox.pm by changing normalizeFileName() to return @result; instead of return join '/', @result; just to get TWiki to work.

 

Converting a DirectX Surface to a GDI+ Bitmap

DirectX surfaces stored in video memory can be rendered a lot faster than images rendered through GDI+ (the object-oriented Graphics Device Interface Windows API).  GDI+, on the other hand, has support for saving images to a lot more formats.

 

I wanted to save a DirectX surface as a JPEG.  There's a Direct3D extension function that supports this: D3DXSaveSurfaceToFile().  Unfortunately it only seemed to work with surfaces that are square (width = height).  So to save a non-square JPEG I needed to convert from an IDirect3DSurface9 to a GDI+ Bitmap class then use GDI+'s encoders to save it as a JPEG.

 

To do this, immediately after the call to IDirect3DDevice->Present() do the following:


  • Get the current render target via GetRenderTarget(0), then get the dimensions of the render target via GetDesc().
  • Create a buffer in system memory (not video memory) to hold a copy of the current display.  This is done with CreateOffscreenPlainSurface() w/ D3DPOOL_SYSTEMMEM.
  • Use GetRenderTargetData() to copy (blit) the current display to the offscreen buffer.
  • Get a pointer to the offscreen pixel buffer via LockRect().
  • Create a GDI+ Bitmap class using the constructor that takes width, height, pitch, pixel format and a pointer to the pixel buffer.

    • Use the pitch of the offscreen pixel buffer (get it via GetDesc()), not the pitch of the original surface.
    • The pixel format of the bitmap must match the pixel format of the surface - this was specified when the IDirect3DDevice was created.

  • Use the Bitmap->Save() method (defined in Image, which Bitmap inherits) to write the image out as a JPEG.
  • Unlock the rect via UnlockRect() and release any COM interface pointers that were either directly created via QueryInterface() or were indirectly created by calling an interface-returning function.

The GDI+ API docs have an excellent description of using the encoders along with a very handy GetEncoderClsid() function.

Copying one big drive/disk/folder to 2 or 3 smaller drives/disks

I needed to copy one big drive (~700GB) to 3 smaller drives.  Since this was going to take a while I didn't want to have to deal with any interactive input.  I basically wanted to copy files to the first drive until it was full then start copying to the 2nd until it was full and, lastly, to copy to the 3rd drive.

 

A batch file to do this follows.  It needs to run from an XP or later command shell with delayed expansion enabled (e.g., cmd /v:on) as well as command extensions enabled (the default on Windows Server 2003).

 

This isn't terribly well tested but it worked for my purposes.  If the script encounters a file too big to be stored on the first drive it tries copying it to the second; if it's too big for the second it tries copying it to the third.  If that fails then it prints a line in a copyspan_errors.txt file with the name of the file.

 

Also, it uses the archive attribute to keep track of what should and should not be copied - as long as no other programs are modifying this bit then it should be safe to stop and restart the script with the same arguments.  I ran it with "copyspan.bat d:\ f: g: h:" so that everything on the very large d:\ drive would be backed up to f:, g: and h:. 

 

The batch file's contents are:

 

@echo off

REM This batch must run from a command prompt that has
REM delayed expansion enabled (/V:on)

set SRC=%1
set DEST1=%2
set DEST2=%3
set DEST3=%4


REM set the archive bit on the source
REM echo %DATE% %TIME% Setting archive bit in %SRC%
attrib +A "%SRC%*" /S /D


echo %DATE% %TIME% Copying files
rem /M = only copy archive files then clear archive bit
set CPCMD=xcopy /M /F /H /K /Y

if exist copyspan_errors.txt del /q copyspan_errors.txt

set ATTR=
for /R "%SRC%" %%F IN (*) DO (
  for /F "tokens=1*" %%i in ('attrib "%%F"') DO set ATTR=%%i

  if "!ATTR!" EQU "A" (
    %CPCMD% "%%F" "%DEST1%%%~pF"

    if !ERRORLEVEL! NEQ 0 %CPCMD% "%%F" "%DEST2%%%~pF"

    if !ERRORLEVEL! NEQ 0 %CPCMD% "%%F" "%DEST3%%%~pF"

    if !ERRORLEVEL! NEQ 0 echo "%DATE% %TIME% ## error copying %%F" >> copyspan_errors.txt
  ) ELSE (
    echo Skipping already archived file "%%F"
  )
)

:end

echo %DATE% %TIME% Copy complete

Turning COM Errors and HRESULTs into .Net Exceptions

.NET will automatically convert an HRESULT into a specific type of Exception based on the HRESULT.  This is fine for the many built-in HRESULTs that correspond closely to Win32 API function calls.  In my case though, I wanted to turn an application-specific error that occurs in an ATL COM server (ATL 3.0 no less!) into a pretty Exception so that managed code would get a useful error message.

To do this:
  1. Create an empty error object via CreateErrorInfo( &pcerrInfo )
  2. Set the description via pcerrInfo->SetDescription().  CreateErrorInfo allocates a pointer to an object that implements ICreateErrorInfo.  SetDescription() is a method defined on ICreateErrorInfo.  Whatever gets set here will be assigned to the .NET Exception.Message property.
  3. Call SetErrorInfo(0, perrInfo) where perrInfo is a pointer to IErrorInfo.
    1. To get the perrInfo pointer, call pcerrInfo->QueryInterface(IID_IErrorInfo, (LPVOID*) &perrInfo);
    2. SetErrorInfo() sets the error object for the current thread (actually it clears the current error object then sets it to the new one).
  4. After Release()-ing both pointers, return an error HRESULT (e.g., return E_FAIL).
    1. Don't make any other COM calls before returning the HRESULT as these might overwrite the error object just set.
  5. In .NET this will show up as a COMException where COMException.Message = IErrorInfo->GetDescription().

Since this will be used a lot, I put it into a single function that takes a string.  In practice, I immediately return an error HRESULT after calling the function.


void CSimpleTestObj::AppSetErrorInfo(LPWSTR pszErr)
{
    ICreateErrorInfo *pcerrinfo;
    IErrorInfo *perrinfo;
    HRESULT hr;

    hr = CreateErrorInfo(&pcerrinfo); // create generic error object
    if (SUCCEEDED(hr))
    {
        // set the text - in .NET this will map to Exception.Message
        pcerrinfo->SetDescription(pszErr);

        // need to get IErrorInfo because this is what the .NET code will see
        hr = pcerrinfo->QueryInterface(IID_IErrorInfo, (LPVOID FAR*) &perrinfo);
        if (SUCCEEDED(hr))
        {
            // SetErrorInfo sets the error object for the current thread.
            // .NET will take it, make an exception, then set
            // ApplicationException.Message = IErrorInfo.GetDescription()
            SetErrorInfo(0, perrinfo);
            perrinfo->Release();
        }
        pcerrinfo->Release();
    }
}

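On the managed side, here's a sketch of what the caller sees, assuming the ATL method returns an error HRESULT after calling AppSetErrorInfo (the interface and GUID below are hypothetical stand-ins for what tlbimp would generate):

using System;
using System.Runtime.InteropServices;

// Hypothetical imported COM interface (normally generated by tlbimp).
[ComImport, Guid("00000000-0000-0000-0000-000000000001")]
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
interface ISimpleTestObj
{
    void SomeMethod();
}

class Caller
{
    static void Use(ISimpleTestObj comObject)
    {
        try
        {
            comObject.SomeMethod();
        }
        catch (COMException ex)
        {
            // ErrorCode holds the failing HRESULT (e.g., E_FAIL = 0x80004005);
            // Message holds the text passed to ICreateErrorInfo::SetDescription().
            Console.WriteLine("0x{0:X8}: {1}", ex.ErrorCode, ex.Message);
        }
    }
}
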
MDI, MenuStrips and Windows Forms 2.0

Working on an MDI app.  I wanted to have the main menu change based on the child window that's active.  Windows Forms provides a way to do this via menu merging.

 

Menu merging underwent some changes from WinForms 1.1 to WinForms 2.0.  A few tips:

 


  • set the MergeIndex of each child menu item to the desired index in the parent (0 is the first index)
  • for each child menu item:
    • if something is already at the desired index then apply the merge action:
      • if the merge action is insert, insert the child menu item above the item already there
      • if the merge action is append, put it after
      • ...
    • consider the next child in light of the new ordering

The last point bears explanation.  If the merge action is insert then the child item pushes down whatever was there, which increases that item's index.  So if you want the next child item to appear below the first child item in the merged menu then you will need to set its MergeIndex taking this into account.

e.g., child menu item1 has a merge index of 2 and a merge action of insert.

To stuff child menu items 1 and 2 into the parent menu, child menu item2 will need a merge index of 3.  This is the case because, after the first pass, child menu item1 pushed the previous occupant of position 2 down into position 3.  So child menu item2 needs to push that occupant (now in position 3) down.
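
In code, the example above looks something like this (a sketch; the items are hypothetical and the merge itself happens when the MDI child's MenuStrip is merged into the parent's):

using System.Windows.Forms;

class ChildMenuSetup
{
    // Configure two child items so they land at indices 2 and 3
    // of the merged parent menu.
    public static void Configure(ToolStripMenuItem childItem1,
                                 ToolStripMenuItem childItem2)
    {
        childItem1.MergeAction = MergeAction.Insert;
        childItem1.MergeIndex = 2;

        childItem2.MergeAction = MergeAction.Insert;
        childItem2.MergeIndex = 3; // 3, not 2: item1 pushed the old occupant down
    }
}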

C# Platform Invoke Interop tip

Visual Basic 6.0 includes a tool called "API Text Viewer" that automatically generates method prototypes for Win32 API methods.  These are generated for Visual Basic so they can't be used directly in C#, but converting from the VB syntax to the C# syntax is often easier than converting directly from the native prototype.

e.g., the native signature for GetUserObjectInformation() is:

BOOL GetUserObjectInformation(
  HANDLE hObj,
  int nIndex,
  PVOID pvInfo,
  DWORD nLength,
  LPDWORD lpnLengthNeeded
);

the VB prototype:

Public Declare Function GetUserObjectInformation Lib "user32" Alias "GetUserObjectInformationA" (ByVal hObj As Long, ByVal nIndex As Long, pvInfo As Any, ByVal nLength As Long, lpnLengthNeeded As Long) As Long

and the C# prototype, at least for nIndex=2 (which gets the name of the object):

[DllImport("user32.dll", SetLastError = true)]
public static extern bool GetUserObjectInformation(IntPtr hObj, int nIndex, StringBuilder pvInfo, int nLength, ref int lpnLengthNeeded);

A StringBuilder works here because when nIndex is 2 pvInfo will return a string.  Remember to initialize it with a max capacity equal to nLength (e.g., new StringBuilder(300, 300)).
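
A minimal usage sketch (assuming nIndex 2 is UOI_NAME and using a window station handle obtained from another P/Invoke):

using System;
using System.Runtime.InteropServices;
using System.Text;

class UserObjectInfoDemo
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr GetProcessWindowStation();

    [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern bool GetUserObjectInformation(IntPtr hObj, int nIndex,
        StringBuilder pvInfo, int nLength, ref int lpnLengthNeeded);

    static void Main()
    {
        const int UOI_NAME = 2; // gets the name of the object
        StringBuilder name = new StringBuilder(300, 300);
        int needed = 0;

        // nLength is in bytes; 300 is plenty for a window station name.
        if (GetUserObjectInformation(GetProcessWindowStation(), UOI_NAME,
                                     name, name.Capacity, ref needed))
            Console.WriteLine(name.ToString()); // e.g., "WinSta0"
    }
}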

Putting Panasonic SDR-H200 video on the web with ffmpeg

I recently got a Panasonic SDR-H200 camcorder.  It's got excellent quality video, fits perfectly in the palm of my hand and packs a ton of features.

 

I'm a total video newbie so when I tried transferring the video from the embedded hard drive to my computer so that I could post the video to the web I was very much dismayed to find that the files were stored in an unfamiliar format (.MOD).

 

Transferring digital video involves determining 3 things: the file format, the video encoding and the audio encoding.  The file format, sometimes called the container format, specifies how the data is organized inside the file (e.g., the video encoding, audio encoding, chunk size, etc...).

 

While I'm not sure what the .MOD format is, video recorded from the SDR-H200 is encoded in mpeg2.  Audio is encoded in AC3.

 

Ultimately I wanted to put the video on a website so that family members can watch it.  So I wanted to choose the file format, video encoding and audio encoding that were most widely supported.  A few Microsoft documentation links helped greatly in this regard.


First, download ffmpeg.  There is no installer, so just extract it to a directory.

In case you don't have mpeg2 codecs installed, download and install the klite codec pack.

ffmpeg is a command-line utility.  To use it, open a command prompt and change to the ffmpeg\bin directory.

To convert from .MOD to .wmv:

ffmpeg -i mov001.MOD -f asf -vcodec wmv1 -acodec wmav1 mov001.wmv

This will convert using default settings for bitrate; ffmpeg tries to convert in a way that preserves as much video and audio quality as possible.  However, you may find the quality less than stellar.  In this case, you'll want to increase the bitrate.  One way to do this is

ffmpeg -i mov001.MOD -f asf -vcodec wmv1 -acodec wmav1 -sameq mov001.wmv

This file will be pretty big since it forces ffmpeg to use the same quality as the input.

To determine which file formats, video codecs and audio codecs are installed on your machine:

ffmpeg -formats > formats.txt

This will dump all the formats into a file called formats.txt.  In the doc directory the General documentation has descriptions of each of the formats and codecs.  The names in the General documentation are not the same as the values passed to ffmpeg on the command line.  To get those values you'll need to look at the formats.txt file.

ffmpeg has a bunch of other features, all of which are accessed from command line switches.  See the ffmpeg docs in the doc directory.

Windows XP as a VPN server

Oddly enough, Windows XP (Professional and Home) can provide VPN access to a home LAN.  I ran across this feature while in a hospital that provided free WiFi access but only allowed outbound web and VPN traffic.  So to do anything other than browse the web (e.g., listen to the soon-to-be-defunct Yahoo Music Jukebox) I needed to VPN to my home LAN then use its network connectivity to run other apps.

 

XP's VPN server can be configured via the "add new connection" wizard in Network Connections.  Choose "Advanced" then "Accept Incoming Connections".  Don't worry about selecting a physical port for connections (e.g., printer, USB, etc...).  Once the new connection is created, change the properties to specify the range of IP addresses (I chose a range near the top of my subnet so that I'd know when someone was using a VPN connection).

 

If you're behind a NAT enabled router you will probably have to put the XP VPN server in the DMZ.  Apparently the more common tunneling protocol (PPTP) uses GRE which sits at the same layer as TCP and UDP in the network stack.  So there's no port to forward for GRE connections.  I'm fine with this as long as Windows Firewall is also running....

 

HTML Help for Windows Forms

I briefly tried using HTML Help Workshop to provide context sensitive help for a Windows Forms application but had no luck. Recently I've gotten a chance to take another look at it. So here are a few tips:


  1. Create a separate HTML page for each topic.

    • Another way to put this is to create a separate HTML page for everything that might require context sensitive help. E.g., every user interface widget (or at least those you want to provide with context sensitive help).
    • Use whatever HTML editor you prefer - I like the Visual Studio 2005 designer so I use that.

  2. Create an HTML Help project (.hhp) in the directory containing the HTML files.
  3. Manually add all of the HTML files to the project.
  4. Don't bother with automatically generating a table of contents since all sections within a file will link to the top of the file.
  5. Manually create a Table of Contents from the "Contents" tab.

    • There are 2 types of entries; headings and pages.
    • Heading entries are represented by a book and can contain child entries.
    • If you want the heading entry to display a topic (=HTML page) when clicked you have to edit it.
    • Edit an entry by selecting it then clicking the pencil button to open the "Table of Contents Entry" editor.
    • Under "Files/URLs and their information types" add the topic (e.g., MyFirstTopic.html) to the list.
    • Do the same thing for pages since you usually want a topic to display when a page is clicked from the TOC.

  6. Use relative Files/URLs so that moving the project and its html files to a different directory won't prevent compilation.
  7. Compile the project then click the glasses icon to view the .chm file.

Once the HTML Help project is done, compiling a new .chm file can be automated in a post-build event that runs HHC.exe from the HTML Help Workshop directory under Program Files.
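
To hook the compiled .chm up to context-sensitive help in the Windows Forms app itself, the HelpProvider component does the wiring; here's a small sketch (the file and topic names are hypothetical):

using System.Windows.Forms;

class HelpWiring
{
    public static void Attach(Control widget)
    {
        HelpProvider help = new HelpProvider();
        help.HelpNamespace = "MyApp.chm"; // the compiled HTML Help file

        // Pressing F1 while the widget has focus opens its topic page.
        help.SetHelpNavigator(widget, HelpNavigator.Topic);
        help.SetHelpKeyword(widget, "MyFirstTopic.html");
    }
}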

Happiness

Ran across a very interesting article, in the NYTimes magazine, about a line of inquiry into happiness, behavior and decisions.

 

The major psychological thrust was that we're not very good at predicting how we will feel after an anticipated event.  For example, we tend to underestimate how quickly we will adapt to that shiny new gadget (say, an LCD HDTV with native 1080p resolution).  In the other direction, we also tend to underestimate how quickly we will overcome adversity (e.g., a car accident).

 

The authors term this "impact bias"; impact because the subject is the duration and intensity of the feeling, bias because we tend to underestimate it.

 

The major economic thrust of the research is that we're not very good at predicting how we will behave during periods of anxiety, distress, etc...  The proponents broadly term these "hot states".  For example, mountain climbers that find themselves stuck w/o shelter in frigid temperatures may, under calmer circumstances, swear that they'll stop climbing the next time they're more than X feet away from shelter/food/camp, only to find themselves breaking this promise the next time they're in a similar circumstance.  Climbing up a mountain while the sun is setting is the hot state; calm circumstances are the "cool state", and the variation in anticipated behavior (what we think we will do in a hot state) is often wide.

 

Very interesting research.  Another thing the researchers (Daniel Gilbert of Harvard's psychology dept, George Loewenstein of Carnegie-Mellon's economics dept, and a few others) find is that the brain's adaptability doesn't seem to kick in until the intensity of the event is pretty high (either strongly positive or strongly negative).  So we underestimate how quickly we'll get over a car accident but overestimate how annoying a squeaky door or broken bed spring will be.

 

In one experiment the participants are allowed to choose from a set of pictures.  One group is told that their choices are final.  The other is told that they can exchange the photos a few weeks later.  The researchers find that the people who couldn't change photos were happier than those who could!  I believe they interpret this as an example of underestimating just how bothersome "buyer's remorse" or "missed opportunities" will feel.

Accessing .NET/C# objects from MFC COM

Turns out that accessing .NET objects from an MFC COM object is relatively straightforward.  The main steps are:
  1. Create a managed class.  It's best to explicitly implement an interface to avoid breakage due to versioning down the road.
  2. Attribute the managed class and its interface(s) with a Guid.  In the COM world GUIDs identify coclasses and interfaces.
  3. Register the managed dll with COM.  In visual studio this will take care of exporting the TLB and running regasm on the managed dll.
  4. To pass a class to an MFC COM object method, define the method as taking a pointer to IUnknown.  Every automation object inherits from IUnknown.
  5. In C++, get a reference to the interface via CoCreateInstance().
  6. Call methods/access properties on the interface as needed!

For example, suppose you have a C# class TestClass in TestClass.dll and you want to call TestClass.TestMethod() from an in-process COM server implemented using MFC.  In COM the only way to access the functionality of a class is through an interface.  So you'll need to create an interface containing the methods you want to call and make TestClass inherit from that interface.

C# source:


[Guid("......")]
public interface ITestInterface
{
    void TestMethod();
}

[Guid("......."),
 ClassInterface(ClassInterfaceType.None)] // prevent interop from creating a class interface for you
public class TestClass : ITestInterface
{
    public void TestMethod() { }
}

To make TestClass and ITestInterface from TestClass.dll visible to C++, the C++ source will need to import both the type library exported from the C#/Managed dll as well as mscorlib.tlb.  To do this, add the following import statements:

C++ .h header file:

#import "mscorlib.tlb"

#import "..\TestClass\bin\Debug\TestClass.tlb" //
created by tlbexp.exe or when visual studio project property "register for COM"
is true

To access TestClass.TestMethod() from C++ you'll need to get a reference to ITestInterface (which TestClass implements).  In COM, interfaces are the only way to access an object's functionality.

C++ .cpp source file (error checking skipped for clarity):

ITestInterface *tptr = NULL;
CoCreateInstance(CLSID_TestClass, NULL, CLSCTX_INPROC_SERVER, IID_ITestInterface, (void**)&tptr);
tptr->TestMethod();
tptr->Release(); // COM uses reference counting to figure out when to release memory

To pass TestClass to an MFC COM object method, you'll have to specify its parameter type as IUnknown* (aka LPUNKNOWN).

C++ MFC DISPATCH MAP entry:

DISP_FUNCTION_ID(CMyClass, "MyFunc", dispidMyFunc, MyFunc, VT_I4, VTS_UNKNOWN) // VTS_UNKNOWN -> IUnknown*

C++ method:

HRESULT CMyClass::MyFunc(IUnknown *pUnk)
{
    ITestInterface *pti = NULL; // type imported from TestClass.tlb
    pUnk->QueryInterface(IID_ITestInterface, (void**)&pti); // IID_ITestInterface also defined in TestClass.tlb
    pti->TestMethod(); // calls the managed method
    pti->Release();
    return S_OK;
}

This method can be called from C# as follows:

TestClass tc = new TestClass();
comObject.SomeFunc(tc); // assumes comObject is an RCW around the MFC COM object you're passing TestClass to