Introducing Windows Clipboard Viewer

Ever miss Android's built-in clipboard history?

Windows Clipboard Viewer is an open source project that brings Android's clipboard history to Windows.


Git repo: https://github.com/arnshea/WindowsClipboardViewer


Screenshots



Tech Epiphanies: A Spotify Listener Looks Longingly at Google Play Music

Uh-oh, Spotify. Google Play Music has some killer features you may want to copy:

  • Family-friendly mode (aka "Protect Jr's Ears" mode).
  • Easy mixing of streamed and owned tracks.
  • Start Song Radio.

As with most tech epiphanies, this one occurred while doing something else (managing offspring videos). YouTube Red ads feature prominently there. I suspected the offline features of Red might be helpful (a suspicion that has not yet been confirmed).

At the bottom of the Red ad it says "includes Google Play Music subscription" [question to future self: any value in dynamic ad element positioning? deal hunters might be more tempted if tucked away where others might not notice]. The first month is free so I decide to give it a spin.

Family-friendly mode (aka "Protect Jr's Ears" mode)

Is a family-friendly mode a big deal? Have a look at this Spotify community post. When tens of thousands of people bother to complain about the lack of a feature, they likely represent 10x their number, since most users won't post at all.

Google Play Music has it. And they didn't punt on user convenience: the system itself finds the edited/radio-friendly version of a song and plays that one instead when non-explicit mode is on.

Start Song Radio

All* the streaming services have a "start song radio" feature. It's an attempt to match the convenience of terrestrial radio (turn it on and music starts playing). They already know they've got the terrestrials beat on quality and selection (which is why DJs are coming back... but I digress).

But they all do it awkwardly: if you start listening to a song and then "start song radio" for that song, the song is interrupted and either starts over or another one starts. Google Play Music does it properly. It's a little thing, but the interruption is off-putting for audiophiles and civilians alike. Great to see it implemented so cleanly.

TensorFlow Tutorial on Windows 10

These are the steps I took to complete the TensorFlow tutorial on Windows 10 (build 14393).
  1. Download Python 3.5.3 64-bit (not 3.6, the current latest; the TensorFlow wheel below is built for CPython 3.5) from https://www.python.org/ftp/python/3.5.3/python-3.5.3-amd64.exe
    1. Use the custom install and make sure to specify install for all users.
    2. This should change the path to the Program Files directory instead of %LocalAppData%...
    3. Make sure 'add to PATH' is checked.
  2. Open an elevated command prompt and run the following commands (native python method):
    1. pip install -U setuptools
    2. pip install -U wheel
    3. pip install -U --no-cache-dir https://pypi.python.org/packages/ce/2c/6a1cf90746879c2d05df04efc86a8b1edd79d7b06323a5c8fa63f5520824/tensorflow-1.0.0-cp35-cp35m-win_amd64.whl
Python Interpreter in the Start Menu after Successful Installation
Validating the install produced several errors (e.g., OpKernel(....) for unknown op: BestSplit) during some session runs, but it still produced the expected 'Hello, TensorFlow!' output.

Many to Many Entity Framework 6.0 Relationships Without Extra Nesting

So you've got a many to many (m2m) relationship. And you'd like the Entity Framework to return both ends of this relationship using SQL that isn't overly nested/CASE-d/UnionAll-d.

Let's take the canonical m2m relationship of Authors to Books. An Author can write many Books and a Book can be written by many Authors.

Depending on how well the Entity Framework can figure out the multiplicity of each end, you may end up with SQL that's very inefficient.

A human would query this with LEFT OUTER JOINs.

But given the following C# code (LINQ to Entities, query syntax):


from a in AUTHORS
select new { a, a.BOOKS }

The generated SQL has too much nesting and aliasing:

SELECT 
"Project1"."C1" AS "C1", 
"Project1"."AUTHOR_ID" AS "AUTHOR_ID", 
"Project1"."FNAME" AS "FNAME", 
"Project1"."LNAME" AS "LNAME", 
"Project1"."C2" AS "C2", 
"Project1"."BOOK_ID" AS "BOOK_ID", 
"Project1"."NAME" AS "NAME", 
"Project1"."DESCRIPTION" AS "DESCRIPTION"
FROM ( SELECT 
 "Extent1"."AUTHOR_ID" AS "AUTHOR_ID", 
 "Extent1"."FNAME" AS "FNAME", 
 "Extent1"."LNAME" AS "LNAME", 
 1 AS "C1", 
 "Join1"."BOOK_ID1" AS "BOOK_ID", 
 "Join1"."NAME" AS "NAME", 
 "Join1"."DESCRIPTION" AS "DESCRIPTION", 
 CASE WHEN ("Join1"."BOOK_ID2" IS NULL) THEN NULL ELSE 1 END AS "C2"
 FROM  "SCOTT"."AUTHORS" "Extent1"
 LEFT OUTER JOIN  (SELECT "Extent2"."BOOK_ID" AS "BOOK_ID2", "Extent2"."AUTHOR_ID" AS "AUTHOR_ID", "Extent3"."BOOK_ID" AS "BOOK_ID1", "Extent3"."NAME" AS "NAME", "Extent3"."DESCRIPTION" AS "DESCRIPTION"
  FROM  "SCOTT"."AUTHOR_BOOKS" "Extent2"
  INNER JOIN "SCOTT"."BOOKS" "Extent3" ON "Extent3"."BOOK_ID" = "Extent2"."BOOK_ID" ) "Join1" ON "Extent1"."AUTHOR_ID" = "Join1"."AUTHOR_ID"
)  "Project1"
ORDER BY "Project1"."AUTHOR_ID" ASC, "Project1"."C2" ASC

This holds true even if eager loading (.Include("BOOKS")) is used. It would be nice to use a LEFT OUTER JOIN here without the extra nesting.

Modified C# to eliminate nesting:

from a in AUTHORS
from b in a.BOOKS.DefaultIfEmpty()
select new { a, b }

Eliminates the extra level of nesting in the resulting SQL:

SELECT 
1 AS "C1", 
"Extent1"."AUTHOR_ID" AS "AUTHOR_ID", 
"Extent1"."FNAME" AS "FNAME", 
"Extent1"."LNAME" AS "LNAME", 
"Join1"."BOOK_ID1" AS "BOOK_ID", 
"Join1"."NAME" AS "NAME", 
"Join1"."DESCRIPTION" AS "DESCRIPTION"
FROM  "SCOTT"."AUTHORS" "Extent1"
LEFT OUTER JOIN  (SELECT "Extent2"."BOOK_ID" AS "BOOK_ID2", "Extent2"."AUTHOR_ID" AS "AUTHOR_ID", "Extent3"."BOOK_ID" AS "BOOK_ID1", "Extent3"."NAME" AS "NAME", "Extent3"."DESCRIPTION" AS "DESCRIPTION"
 FROM  "SCOTT"."AUTHOR_BOOKS" "Extent2"
 INNER JOIN "SCOTT"."BOOKS" "Extent3" ON "Extent3"."BOOK_ID" = "Extent2"."BOOK_ID" ) "Join1" ON "Extent1"."AUTHOR_ID" = "Join1"."AUTHOR_ID"

This example is based on Entity Framework 6.0 with Oracle's ODP.NET Managed Entity Framework driver. That driver, in turn, depends on the ODP.NET Managed driver (strangely enough, at least at first glance, these are not the same thing).

Parser-Friendly Angular 1.3.* Formatting

Angular 1.3.* controllers are often initially formatted as follows:

angular.module('path.to.moduleName').controller('myCtrl', ['$scope', function($scope) {
  //...
}]);

This style has its advantages, particularly while learning Angular, in that it reinforces the various concepts (modules, apps, dependencies) that are important in the design of the framework.

But unfortunately this style tends to stymie many JavaScript parsers. One widely used parsing library is Ctags, and many editors rely on it. A sufficiently long controller in the aforementioned format will easily break the parser.

With a small change in formatting we can get the benefit of parser-driven tooling (e.g., Jump To Definition) while working in JavaScript. With such a free-form language I'll take all of the tool help I can get.


var myCtrlFunc = function($scope) {
    // ... code goes here
    function fun1() {
        //...
    }

    $scope.fun2 = function() {
        //...
    };
};

angular.module('path.to.module').controller('myCtrl', ['$scope', myCtrlFunc]);

What Is Windows Up To Right Now?

Years ago one relied on the hard drive light for reassurance during some particularly unresponsive episode on the PC. The reassuring hum of those spindles spinning formed a kind of non-deterministic progress indicator.

Spinning drives gave way to Solid State Drives (SSDs), which afford no reassuring hum. For a while the activity light remained, but nowadays it seems that even that has entered the dustbin of PC history. It sort of loses its usefulness if it's always on...

So, like pretty much every commercial concern, I turned to software. Windows has shipped with Resource Monitor (resmon) for several versions. It augments Task Manager's Performance tab (so much so that a button to open it was added to that tab in Windows 7).

For answering the general question "What is Windows doing right now?" I maximize the Overview tab on the right monitor. So far the Disk activity grid seems to yield the greatest insight into what's going on at any given instant.

For example, just watching the Disk activity grid while loading a web app revealed excessive logging settings leftover from previous debugging efforts. You can directly see iisexpress.exe writing to the trace logs.

Here are a few tips for getting the most out of resmon's Overview tab:

  1. Increase the height of the Disk grid. It should be at least double the height of the other grids.
  2. Widen the Disk grid's image and file columns. The file column should take up most of the space to account for very long pathnames.
  3. Sort the Disk grid by the Total (B/sec) column in descending order. This brings the files with the most disk activity to the top.
  4. Sort the CPU grid by Average CPU descending.
  5. Sort the Network grid by Total (B/sec) descending.
  6. Sort the Memory grid by Working Set (KB).

Single Page Applications And The Impending Concurrency Explosion

OK, "explosion" might be a little dramatic, but bear with me for a moment.

Going from a Multi Page Application (MPA, did I just coin a retronym?) to a Single Page Application (SPA) is like going from a single synchronization lock (coarse-grained locking) to multiple synchronization locks (fine-grained locking).

Anyone who has made that journey knows that it is fraught with peril. Even for a solo developer it can be difficult to implement without error. For a team of developers, with varying levels of application domain familiarity and developer expertise, it is far more daunting.


How is going from an MPA to an SPA like this concurrency problem?

With an MPA the full page submit is analogous to acquiring a single lock. The state of the app is updated in one fell swoop.

With an SPA the same amount of state will be distributed across separate async requests that fire, largely, in response to user interaction. Since the user drives the requests any ordering requirements must be enforced by the developer(s).
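
For example, here's a minimal sketch of an ordering requirement enforced by hand (fetchCustomer and fetchOrders are hypothetical stand-ins for promise-returning $http-style calls):

// The ordering (customer before orders) exists only because the developer
// encodes it; nothing in the framework enforces it.
fetchCustomer(customerId)
  .then(function (customer) {
    return fetchOrders(customer.accountId); // depends on the first response
  })
  .then(function (orders) {
    renderOrders(orders); // safe: both requests completed, in order
  });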

Each of these requests is like acquiring one of many locks (e.g., read-only consumers only need the read lock, which is shareable, whereas read-write consumers need both the read and write lock (not shareable)). On the user interface, independently queryable data can update different UI widgets in parallel*. This increases the perceived interactivity of the web app but comes at the cost of increased complexity.

It can be difficult to coordinate multiple-lock scenarios even when all of the developers working on the project are co-located. Part of this stems from the difficulty of modeling concurrency: there are many notations available and none is used by everyone.

Throw in developers distributed geographically, varying levels of expertise, even varying languages and the coordination required to prevent lock-related errors becomes quite substantial.

The theory of it is sound but in practice there are often subtle data dependencies that don't reveal themselves when all data is fetched and the page is rendered in one fell swoop. That is, the dependencies don't show up until teasing apart the data needed for the various dynamic portions of the page. In other words, the data access patterns of an SPA are going to be different than the data access patterns of an MPA.

Angular And Filtering Exactly

Angular ships with a set of filters for common output transformations (e.g., formatting a date, formatting currency, etc...).

The filter provider also exposes the 'filter' filter. This filter provides a way to query an array of objects based on property values. It's not as sophisticated as LINQ but it's better than nothing.

By default, the filter 'filter' does a substring match instead of an exact match.

For example, the following code snippet:


$filter('filter')([{name: 'one', male: false}, {name: 'two', male: 'trueisms'}, {name: 'three', male: true}], {male: true})

returns a list with 2 objects (male = 'trueisms' and male = true) instead of the 1 I expected. This is because the default filter returns every object whose male property contains 'true' as a substring.

To get exact matching behavior (and thereby return 1 object in the aforementioned example) pass true as the 3rd argument to the $filter('filter')(...) call.
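
A minimal sketch of both behaviors (assuming $filter has been injected):

var people = [{name: 'one', male: false},
              {name: 'two', male: 'trueisms'},
              {name: 'three', male: true}];

$filter('filter')(people, {male: true});        // substring match: 2 results
$filter('filter')(people, {male: true}, true);  // exact match: 1 result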


Chrome Tip - Searching Every/All Source Files At Once

To search through every source file loaded by the currently viewed web page (e.g., the HTML, JavaScript and CSS), open Chrome Dev Tools. From the Sources tab, right-click the topmost node in the tree, then choose "Search in all files".

Given that it's accessed via an element in a tree, one wonders whether it can be used to search through all source files under a given node. A quick test confirms that it can!

Asynchronous Validation In Angular 1.3.*

Problem Introduction

One of Angular 1.3.*'s more powerful features is its concept of directives. Directives are, in my opinion, what separates Angular from other frameworks, and they also contribute to its ease of adoption.

If you have worked with HTML at all then you've applied attributes to HTML elements to alter appearance and/or behavior in some way. Why not augment that with the ability to implement custom attributes?

Why Fix This Now?

The need for an application-domain-specific, server-based auto-complete/type-ahead text input widget crossed an internal, totally subjective threshold of mine. That threshold, N=3, is the point at which I'd like to start consolidating the 3 separate instances into a reusable component, because N=4 is rarely far behind N=3, whereas sometimes N=2 is a fluke...

From a framework agnostic perspective this involves providing the user with a textbox for input then, as they type, querying the server and presenting matching options.

How Can Angular Help?

Angular's facility for handling this is its validation pipeline. While it could be done synchronously, part of the rationale for an async-heavy framework like Angular is raising the level of interactivity of web applications.

Client-side UI frameworks have had support for this behavior for ages (e.g., a ComboBox).

By using the async validation pipeline the user is able to modify other input, resize the browser, etc...  while waiting for results to update. This increases the perceived interactivity of the application.

How Does Angular's Async Validation Work?

Angular's validation framework is exposed through the Form that is published onto $scope by name (e.g., $scope.myForm when the form tag's name attribute is myForm). The details are well described in the Developer Guide/Forms/Custom Validation section.

A Few Considerations To Keep In Mind

  1. Use the custom validation example as your starting point.
  2. The user should be given some indication of progress while the server-side query is executing.
    1. myForm.$pending.myCustomAsyncValidator is defined (e.g., angular.isDefined() returns true) while the operation is executing. This makes it an excellent candidate for an ng-show binding (see the sketch after this list).
  3. The promise returned by the custom validator is a little tricky. If the async call succeeds, the then-able promise executes any chained callbacks. But it is very likely that you need to inspect the result of the call to determine whether the call's input was valid.
    1. To do this, have your custom validator return $q.reject(reason) from the success callback, which will trigger rejection.
  4. If validation fails (e.g., your validator returns $q.reject(reason)) then the error is published to the form property's error object (e.g., myForm.myProperty.$error.myCustomAsyncValidator evaluates to true).
    1. myForm.$valid will also be false.
  5. If validation succeeds (e.g., your validator returns anything other than a rejected promise) then myForm.myProperty.$error.myCustomAsyncValidator evaluates to false.
    1. This doesn't mean that myForm.$valid will be true. All validators (both synchronous and asynchronous) must successfully resolve for myForm.$valid to be true.
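
A rough sketch of the progress check from consideration 2 (myForm and myCustomAsyncValidator are the illustrative names used throughout this section):

// True only while the async validator's promise is outstanding,
// so it works as a progress flag (e.g., bound via ng-show).
var validating = angular.isDefined($scope.myForm.$pending) &&
                 angular.isDefined($scope.myForm.$pending.myCustomAsyncValidator);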

Code Snippets


angular.module('myModule').directive('myCustomAsyncValidator', ['$q', function($q) {
  //  ... returns a definition object, most omitted for brevity
  return {
    link: function(scope, elm, attrs, ctrl) {
      ctrl.$asyncValidators.myCustomAsyncValidator = function(modelValue, viewValue) {
        
        // ...
        
        // promise executes the server call using viewValue as input, 
        // omitted for brevity
        
        return promise.then(function(data) {
          if (data.failsValidation) { 
            return $q.reject('input not valid');
          } else {
            return angular.copy(data);
          }
        }); // could also handle case where the server call fails to complete here...
      };
    }
  };
}]);

This directive is applied to the HTML input element as follows:

<input type="text" name="myProperty" ng-model="data.myModel.myProperty" my-custom-async-validator>


Why won't git ignore .vs/config/applicationhost.config?

In this case, I was trying to ignore anything in the .vs/ folder (mainly to ignore .vs/config/applicationHost.config, which is IIS Express's config file). This directory, created by Visual Studio 2015, contains machine-specific data (similar to why you don't want to store .suo files in the repository).

The pattern I went with (.vs/) was both what I came up with originally and the same pattern used by the gitignore template for Visual Studio.

The pattern was stored in .gitignore in the repository root directory.

But every time I ran git check-ignore --verbose .vs/config/applicationHost.config the file would not match the pattern.

Oddly enough, a different file in .vs/ matched the .gitignore pattern!

It turns out that .gitignore will not match files that have ever been tracked by git. This file was mistakenly checked in at some point in the past. To check .gitignore patterns without regard for checkin history, use the --no-index flag (e.g., git check-ignore --no-index --verbose .vs/config/applicationHost.config).

To stop tracking the file, use git rm --cached path/to/file.ext.

Using ASP.NET Membership with Oracle

So you’ve run the scripts to install the Oracle Providers for ASP.NET. You’ve verified that the scripts executed without error. But no matter what you try, the code does not seem to use the OracleMembershipProvider.

The first difference is that the SQL Membership Provider’s root object is Membership while Oracle’s is OracleMembershipProvider.

You’ll quickly notice that the convenience overloads of the Membership methods, implemented as extension methods, have not been carried over to the Oracle provider.

OracleMembershipProvider.CreateUser() - note that the password question and password answer fields cannot be null (even if they’re not used).

Make sure that the v4 provider dll has been installed into the Global Assembly Cache (via OraProvCfg).

OracleMembershipProvider does not have a static method for constructing new instances (use new OracleMembershipProvider()).

The default constructor does not initialize the object with the settings specified in the application config file. To read those settings call provider.Initialize() with the name you gave the oracle provider in app.config and a non-readonly NameValueCollection containing the membership settings.

But how do we get such a NameValueCollection? Using ConfigurationManager.AppSettings causes an exception indicating that the collection is read-only.

Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.PerUserRoamingAndLocal);

NameValueCollection appSettings = ((System.Web.Configuration.MembershipSection)config.GetSection("system.web/membership")).Providers["MyOracleMembershipProvider"].Parameters;

provider.Initialize("MyOracleMembershipProvider", appSettings);

Using the Visual Studio Diagnostics Hub with an Oracle back-end

Anyone else out there extremely stoked about the Diagnostic Tools that shipped with Visual Studio 2015? The real-time visual display provides a different kind of coverage than the call graphs and static call tables you usually get with profiling.

My excitement was somewhat tempered by the lack of support for Oracle/ODP.NET clients. Fortunately, since the Diagnostics Hub is based on IntelliTrace, and IntelliTrace is very extensible, it is possible to get support for Oracle clients in the Diagnostics and Performance Hub with a few simple steps:
  1. Find CollectionPlan.xml in your Visual Studio installation directory (on my installation it’s in Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\IntelliTrace\14.0.0\en).
  2. Add a module specification for Oracle.DataAccess.dll (<ModuleSpecification Id="oracle.dataaccess">Oracle.DataAccess.dll</ModuleSpecification>)
  3. Add DiagnosticsEventSpecifications as listed below (ildasm helps with this part) then exit and restart Visual Studio for the changes to take effect.

      <DiagnosticEventSpecification>
        <CategoryId>system.data</CategoryId>
        <SettingsName _locID="settingsName.OracleCommand.ExecuteReader2">ExecuteReader (Oracle)</SettingsName>
        <SettingsDescription _locID="settingsDescription.OracleCommand.ExecuteReader2">Command text was executed, building an OracleDataReader using one of the CommandBehavior values.</SettingsDescription>
        <Bindings>
          <Binding>
            <ModuleSpecificationId>oracle.dataaccess</ModuleSpecificationId>
            <TypeName>Oracle.DataAccess.Client.OracleCommand</TypeName>
            <MethodName>ExecuteDbDataReader</MethodName>
            <MethodId>Oracle.DataAccess.Client.OracleCommand.ExecuteDbDataReader(System.Data.CommandBehavior):System.Data.Common.DbDataReader</MethodId>
            <ShortDescription _locID="shortDescription.OracleCommand.ExecuteReader2">Execute Reader "{0}"</ShortDescription>
            <LongDescription _locID="longDescription.OracleCommand.ExecuteReader2">The command text "{0}" was executed on connection "{1}", building an OracleDataReader using one of the CommandBehavior values.</LongDescription>
            <DataQueries>
              <DataQuery index="0" maxSize="4096" type="String" name="Command Text" _locID="dataquery.OracleCommand.ExecuteReader2.CommandText2" _locAttrData="name" query="m_commandText"></DataQuery>
              <DataQuery index="0" maxSize="256" type="String" name="Connection String" _locID="dataquery.OracleCommand.ExecuteNonQuery.ConnectionString" _locAttrData="name" query="m_connection.m_conString"></DataQuery>              
            </DataQueries>
          </Binding>
        </Bindings>
      </DiagnosticEventSpecification>
      <DiagnosticEventSpecification>
        <CategoryId>system.data</CategoryId>
        <SettingsName _locID="settingsName.OracleCommand.ExecuteNonQuery">ExecuteNonQuery (Oracle)</SettingsName>
        <SettingsDescription _locID="settingsDescription.OracleCommand.ExecuteNonQuery">Command text was executed, returning the number of rows affected.</SettingsDescription>
        <Bindings>
          <Binding>
            <ModuleSpecificationId>oracle.dataaccess</ModuleSpecificationId>
            <TypeName>Oracle.DataAccess.Client.OracleCommand</TypeName>
            <MethodName>ExecuteNonQuery</MethodName>
            <MethodId>Oracle.DataAccess.Client.OracleCommand.ExecuteNonQuery():System.Int32</MethodId>
            <ShortDescription _locID="shortDescription.OracleCommand.ExecuteNonQuery">Execute NonQuery "{0}"</ShortDescription>
            <LongDescription _locID="longDescription.OracleCommand.ExecuteNonQuery">The command text "{0}" was executed on connection "{1}", returning the number of rows affected.</LongDescription>
            <DataQueries>
              <DataQuery index="0" maxSize="4096" type="String" name="Command Text" _locID="dataquery.OracleCommand.ExecuteNonQuery.CommandText2" _locAttrData="name" query="m_commandText"></DataQuery>
              <DataQuery index="0" maxSize="256" type="String" name="Connection String" _locID="dataquery.OracleCommand.ExecuteNonQuery.ConnectionString" _locAttrData="name" query="m_connection.m_conString"></DataQuery>
            </DataQueries>
          </Binding>
        </Bindings>
      </DiagnosticEventSpecification>  

The res:// Protocol And Phoning Home

Problem

An internal tool phones home.

Why does it phone home?

It’s a WPF app that defers most of the heavy lifting to a web app rendered through a Web Browser control.

Initial Approach

Easy enough – clone the website. But where to host it? The tool needs to run without network access.

Second step

Store the contents of the site in the executable itself. It’s a .NET WPF app and this seems like the sort of problem embedded resources were intended to solve.

Unfortunately the WPF web browser control, a light wrapper around IE’s ActiveX control, doesn’t know how to read from managed resources. It *does*, however, know how to read from native/win32/PE resources.

How do we store a native resource in a .NET executable?

.NET executables conform to the PE (Portable Executable) format that all Windows executables use. Native resources (native is a retronym in this case; before .NET this was the only kind of resource) are defined as part of the PE spec.

It turns out that Visual Studio automatically creates a native/win32 resource for console projects. This resource stores the icon that the shell uses to represent the executable. And it allows you to specify this file, a compiled resource file (*.res), instead of having it automatically generated.

But how do we get a compiled resource file?

The resource related tooling in Visual Studio for .NET projects assumes use of .NET resources. These are a totally different kettle of fish than native/win32 resources. The MSDN article “About Resource Files” explains how to create compiled resource files (spoiler alert: there’s a resource compiler).

But HTML content assumes a directory structure

Native resources predate the WWW by several years. There’s no support for hierarchically arranged files. There are resource types (e.g., RT_BITMAP, RT_FONT, etc…) but other than resource type the structure is entirely flat. This necessitated some manual mapping/flattening of the HTML.

The cloned website included JavaScript, CSS and image content. Most of the image content was due to the ExtJS JavaScript library. Other than the image content there were very few links; each of these was manually converted to a res:// link.

CSS allows references to resources (e.g., background images) via its url() function. Modern CSS libraries make heavy use of url(). It turns out that there were several dozen files referenced this way. Fortunately url() accepts both relative and absolute URLs. These were programmatically flattened, included in the compiled resource (.res) file, and the referencing url() calls updated with res:// links.

All of the files, including images (*.png), JavaScript and style sheets, were included in the compiled resource file as type RT_HTML (#23), yielding internal URLs in the form res://MyExecutableName.exe/#23/MappedFileName.ext.

Web API 2, DateTime and AngularStrap’s DatePickers

Suppose your application uses DateTime objects but only cares about the Date portion. With ASP.NET Web API 2, building on ASP.NET MVC support, you can take control over how DateTime objects are serialized via C# attributes or configuration.

But suppose you don’t want to change global configuration? In other words, you’d like to change serialization only for a particular page?

My first attempt at this was to use the HttpContent extensibility point by deriving a class specialized for the kind of content (in this case, JsonContent) as described in detail here.

But it struck me that this was overkill since I wasn’t intending to change any of the response headers and only wanted to change a single entry point in the application.

Web API 2 controllers can return an object that implements IHttpActionResult. By implementing this it is possible to control the response body content (json) and thereby control serialization for a specific request. ApiController defines several helper methods that create IHttpActionResult objects for various scenarios (e.g., Ok() returns an OkResult).

The method I needed was Json&lt;T&gt;(). It has an overload that accepts a JsonSerializerSettings object. Setting JsonSerializerSettings.DateFormatString to a pattern that matches the one expected on the client side means that the desired DateTime info successfully round-trips (even through other widgets, in this case AngularStrap’s datepicker).

AngularJS: To input type=number or input type=text?

Suppose you have objects that contain numeric properties and these objects are bound to HTML form elements.

HTML5 introduced support for input type="number", which preserves type fidelity (e.g., object.someProperty starts off as a number in the typeof === 'number' sense and remains that way after the user has entered a different number). Preserving type fidelity makes change detection (that is, detecting whether the user has changed the model) very simple: angular.equals() can be used directly.

There seem to be two general approaches:
  1. Bind the numeric properties to input type=text form elements then, on save, make sure to cast them back to numeric form (à la parseFloat()).
    1. This tends to complicate the save handler. The data has to be massaged back into numeric form before sending it back to the database.
    2. It also tends to complicate change detection, since the original values from the database are Numbers but Angular's binding does not preserve this type fidelity when bound to input type=text.
    3. It also complicates internal manipulation – e.g., if changing one property requires the recalculation of other properties then this logic will need to account for numbers that get converted into strings via angular binding.
    4. This approach is supported in pretty much every browser.
  2. Bind the numeric properties to input type=number form elements.
    1. This simplifies change detection (angular.equals()).
    2. This simplifies the save handlers.
    3. This simplifies internal manipulation since all manipulators can be certain that they’re dealing with Number variables.
    4. NULL valued numerics are still a problem as these may need to be detected and then defaulted to 0 (or some other numeric default).
    5. Browser support for input type=number is not good.
I would much prefer approach 2, which takes advantage of browser support for numerics and lets Angular preserve type fidelity throughout model binding (as sketched below). The potential cost of handling NULLs when data is fetched from the database is IMHO small compared to the extra cost of converting back and forth between numbers and strings. Unfortunately, if you have to support any version of IE below 12/Edge then this approach won't work. [8/2016 EDIT - thank you HTML5 Shim!]
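
A quick sketch of why type fidelity matters for change detection:

var fromServer = { qty: 2 };    // Number, fresh from the database
var fromText   = { qty: '2' };  // String, after a round trip through input type=text

angular.equals(fromServer, fromText);     // false - a spurious "change"
angular.equals(fromServer, { qty: 2 });   // true - the type survived (input type=number)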

Debugging JavaScript With Chrome’s Developer Tools

While going through this chrome developer tools doc to get a better handle on JavaScript debugging in Chrome, I ran across these:

  1. ESC toggles the console drawer.
  2. Chrome’s version of “step over library source” is to provide a way to blacklist files; e.g., blacklisting angular.js would prevent the debugger from stepping into angular.js during the debugging session.
  3. Chrome’s version of “first chance exceptions” is to provide a way to pause on caught exceptions (stop sign with a pause symbol on it, don’t forget to check “Pause on Caught Exceptions”).
  4. Right click a breakpoint to make it a conditional breakpoint.

Setting a Breakpoint when the DOM changes

Breakpoints can be set when an element’s DOM subtree changes, when one of its attributes changes, or when the element is removed from the DOM. These breakpoints can be set through the right-click menu on an element (Elements pane, element –> right click –> Break On) and show up in the DOM Breakpoints side panel.

Setting a Breakpoint when a JavaScript event is triggered

This can be done from the Sources panel, Event Listener Breakpoints side panel. All supported events are listed in tree form (e.g., mouse –> mouseout for the mouse out event).

Dealing with Minification

Minification, which reduces the amount of data downloaded by the browser, tends to make JavaScript much harder for humans to debug. The {} icon at the bottom of the source pane (Sources panel) is the ‘Pretty Print’ button; it attempts to de-minify the source.

Custom AngularJS Validation With Inline Formatting

Getting a little further along the AngularJS learning curve. Angular has a built-in validation framework (including support for async validators!), but to really take advantage of it, Angular directives need to be created.

The built-in validation provides a very convenient way to conditionally display error messages. There's support for minlength, maxlength, regex patterns and several other validators new in the HTML5 standard.

Given the following snippet of HTML:

<div ng-controller="ExampleController">
  <form name="form" class="css-form" novalidate>
    Name:
    <input type="text" ng-model="user.name" name="uName" required="" />
    <br />
    <div ng-show="form.$submitted || form.uName.$touched">
      <div ng-show="form.uName.$error.required">Tell us your name.</div>
    </div>

    E-mail:
    <input type="email" ng-model="user.email" name="uEmail" required="" />
    <br />
    <div ng-show="form.$submitted || form.uEmail.$touched">
      <span ng-show="form.uEmail.$error.required">Tell us your email.</span>
      <span ng-show="form.uEmail.$error.email">This is not a valid email.</span>
    </div>
  </form>
</div>
The error messages are conditionally shown based on the built-in required and email validators.


I was looking for a way to use something similar, e.g., form.myField.$error.phoneNumber, as the ng-show/ng-if condition but with application-specific validation.


To do this in Angular, a directive has to be created. This plunkr demonstrates a custom phoneNumber validator (along with inline formatting; that is, the phone number is normalized upon successful validation). This directive can be applied to a textbox bound to a phone number as follows:


<div>
  Phone Number1:
  <input
    type="text"
    ng-model="phoneNumber1"
    ng-model-options="{ updateOn: 'blur' }"
    name="phoneNumber1"
    minlength="10"
    maxlength="33"
    phone-number /><br /> <!-- note: we set maxlength to 33 to allow e.g., 123-456-7890 extension 1234567890 -->
  <span ng-show="form.phoneNumber1.$error.phoneNumber">The value is not a valid phone number!</span>
  <span ng-show="form.phoneNumber1.$error.minlength || form.phoneNumber1.$error.maxlength">
    There must be at least 10 digits in a phone number!</span>
</div>

In this snippet the error messages are conditioned on the custom-validator-supplied $error.phoneNumber. The textbox is decorated with the phone-number attribute, which Angular uses to apply the validator to this particular field.
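
For reference, here's a minimal sketch of what such a validator directive might look like (illustrative only; the plunkr linked above is the working version and also performs the inline formatting, which is omitted here):

angular.module('myModule').directive('phoneNumber', function() {
  return {
    require: 'ngModel',
    link: function(scope, elm, attrs, ctrl) {
      // Registered under the key that surfaces as $error.phoneNumber.
      ctrl.$validators.phoneNumber = function(modelValue, viewValue) {
        var digits = (viewValue || '').replace(/\D/g, '');
        return digits.length >= 10; // at least 10 digits, separators ignored
      };
    }
  };
});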

AngularJS ng-options: to track by or not

It's been a while, but I recently found myself working on a web project using AngularJS. It takes a little while to get the hang of 'thinking' in Angular, but keeping in mind that it's all about extending HTML makes bridging this conceptual gulf a little easier.

The Single Page Application (SPA) approach makes web development look more like traditional client development in that a lot of state is maintained and manipulated entirely on the client.

One of the core features of Angular is rendering multiple elements of an array (e.g., Orders, Customers, etc...). This can be done with the ng-repeat directive or the ng-options attribute on a select list.

With ng-options, if the model backing the selection (the model that stores the currently selected value) is a scalar (e.g., an integral ID, a unique string code, etc...), I've found that using the select as form of ng-options works best. For example, for a dropdown list of customers the model might be a unique customer ID. In this case:

ng-options="customer.customerId as customer.fullName for customer in customers"

allows angular to correctly identify the currently selected customer in the array of customers. This is especially important for models that store previously saved selections.

If, on the other hand, the model is an object (non-scalar), e.g., a customer, then I've found that the label for/track by form of ng-options allows angular to correctly identify the currently selected customer. In this case:

ng-options="customer.fullName for customer in customers track by customer.customerId"

This is handy when the model expects an object for the corresponding property in the underlying Data Transfer Object (DTO).
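
A minimal controller sketch showing the two model shapes (names are illustrative):

$scope.customers = [
  { customerId: 1, fullName: 'Ada Lovelace' },
  { customerId: 2, fullName: 'Charles Babbage' }
];

// Scalar model: pairs with the "select as" form above.
$scope.selectedCustomerId = 2;

// Object model: pairs with the "label for/track by" form above; track by
// matches on customerId, so a previously saved copy of the object is still
// identified as the current selection.
$scope.selectedCustomer = { customerId: 2, fullName: 'Charles Babbage' };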


From Startups to MegaCorp: Unlearning to fix everything

In the startup world when you discover an issue, whether it's a logic error or an environmental misconfiguration, being able to fix it quickly and, maybe, document the fix is a virtue.

A hard lesson that I'm learning is that in a different environment that which was a virtue in the startup world can actually be harmful. By fixing an environmental issue you can deprive the organization of the process correction necessary to handle it structurally.

This strikes me as analogous to the decision to *not* handle an error in code. Sometimes you intentionally don't handle an error, even though it is anticipated, because doing so would mask a more fundamental problem. "If this error occurs, something is deeply wrong in a way that can't be recovered from automatically" is a widely established pattern in software construction.

The hard part is realizing that this generalizes to the organizational level. By fixing an environmental issue that another team is responsible for, you may well deprive that team of a needed corrective. What happens when you're too busy working on problems that only your team can handle and the issue comes up again?

Expressivity vs Trickle-Down Layering

A basic architectural principle in software engineering is that dependencies across layers should flow in a single direction. For example, in a hypothetical 3-layer system composed of top, middle and bottom layers, the top layer depends on the middle layer and the middle layer depends on the bottom layer.

If the middle layer reaches back up into the top layer then you've created a circular dependency. Circular dependencies make a system harder to understand (and therefore maintain). One way in which they do this is their tendency to create chicken-and-egg problems. Put another way, they introduce another axis of dependency (order/time) that may not be obvious from inspecting the code. Chicken-and-egg problems aren't necessarily intractable (clearly *something* came first), but by adding another dimension they exponentially increase the size of the solution space, making for many more blind alleys in which to get lost.

Working on a large system built up over time by many different contributors has given me a much greater appreciation for the importance of this basic architectural principle. This is a bit of a hard-learned lesson because it tends against one of my favorite design patterns: the composite pattern. A layer built on the composite pattern can reference itself as well as lower layers. I'm playing a little loose with the terminology here, because layers generally refer to larger pieces of a system while the composite pattern is usually applied to smaller pieces, but bear with me.

Let's take a hypothetical system called Manuals. The Manuals system is maintained by the Manuals team. Manuals depend on Chapters. Chapters depend on Sentences. Sentences are the lowest level of the system - this is where the meat of the work is done. The higher layers (Chapters and Manuals) are primarily about organizing/ordering the work of Sentences.

A few months after the Manuals system ships, Developer X encounters it and, being fresh and full of ideas about design patterns, thinks it would be really useful if Chapters could be composites. That is, he'd like Chapters to be able to contain Sentences and Chapters. He thinks the system would be so much more expressive that way. And it would enable reuse at the Chapter level. Who doesn't love wringing more expressivity and reusability out of a system?

Fast forward 2 years. Half a dozen developers have rotated in and out of the project, each with varying levels of expertise, varying exposure to the original designs and different constituents to satisfy. None of the original developers is still working on it. The design docs, which were originally shared on the shiny, linky (e.g., non-duplicative) intranet, were lost long ago somewhere between conversions to Intranet v2, then v3 and now v4 (the current version). Even if they could be found they'd be horribly out of date, as the codebase is 10 times its original size. You're tasked with what should be a simple fix: change the wording of a boilerplate sentence that appears in nearly every Manual produced since the project originally shipped.

Being new to the project what do you do? You look at the manuals you can find. You see that each Manual appears to be made up of Chapters and each Chapter appears to be made up of Sentences. So you start your search in the Chapter directory and change every instance of the boilerplate sentence in each of the Chapters.

In a system that maintains the downward flow of dependencies (Manuals -> Chapters -> Sentences) there's a very good chance that this particular bug has been quashed: the boilerplate sentence is updated everywhere it will appear.

But what if Chapters were composites in the system, containing both Sentences and Chapters? Although originally all the Chapters were in a single directory, someone once accidentally chose a Chapter name that was already in use (without knowing it), so there was a major refactoring project that moved Chapters contained by other Chapters into other directories. A few months after that refactoring there was a big push to support the creation of Chapters by another team. The Manuals team just didn't have the bandwidth to handle the demand for Manuals. But the other team stored their artifacts in a different location. So to keep them productive the Manuals team added support for Chapter references - pointers to Chapters defined in other locations.

Now the job of changing a single boilerplate sentence everywhere it appears has gone from a grep/findstr in a single directory to a programming task in and of itself. One that crawls Chapter references and looks for boilerplate sentences in the pointed-to Chapters. And doesn't crash on stale references. And doesn't loop infinitely when circular Chapter references are found. All because the downward flow of dependencies was broken when the elegant composite pattern was introduced into the system.

Upgrading WiFi in a Lenovo T61 ThinkPad to a Windows 8.1 Compatible card via USB Drive

The stock WiFi card (an Intel 4965 AGN) on my ThinkPad crashes every 30 minutes under Windows 8.1 (netwlv64.sys every time).

It also seems not to like the 5GHz channel (from an Asus RT-N66U), though this was true under Win 7 as well. So I upgraded to an Intel 7260, which is performing flawlessly.

BEFORE Changing the WiFi card

  1. Download the Middleton BIOS ISO.
  2. Download Rufus from pendrivelinux.com.
  3. Follow the instructions at pendrivelinux to format the USB drive as a bootable DOS disk.
  4. Copy the files from the Middleton BIOS ISO to the USB drive (either mount it or use 7-Zip to extract the ISO contents).
  5. Make sure your battery is nearly fully charged (battery icon is green, not yellow, on the ThinkPad indicator).
  6. Disable the TPM chip in the BIOS.
  7. Suspend BitLocker if it's on.
  8. Boot from the USB drive.
  9. Run the flash utility (lcreflsh.bat).
  10. Wait for several minutes.

Changing the WiFi card

  1. Follow the instructions here for removing the WiFi card.
  2. Insert your replacement WiFi card.

TPM can then be re-enabled and BitLocker resumed after rebooting.

Upgrading to an Intel Core i7-4770k CPU: UEFI, USB 3 and Windows 8.1

Finally got around to upgrading from an Intel Core i7-920 to a Core i7-4770k. These are roughly 3 processor generations apart (the Nehalem and Haswell microarchitectures, respectively). As has been Intel's custom for as long as I've been building PCs, the switch necessitated a motherboard upgrade: the 920 uses socket LGA1366 while the 4770k requires LGA1150.

I decided to stick with an Asus (pronounced uh-soos) motherboard and picked up their Z87-Plus. The board being replaced, the P6T, was also an Asus board. Come to think of it, so was its predecessor (the A8N-sli). Before that it was an Intel board. Even though I don't do much overclocking I love the attention to detail that Asus puts into its products. And its website is well organized - always easy to find drivers. A few years back a quick comparison of their website to the website of their competitors (Gigabyte, MSI, etc...) sent me flying into their arms. They're not the least expensive boards but I've had good experience with them.

I've fallen in love with the front panel header connector they ship with their boards. You plug the front-panel connectors (power LED, reset switch, PC speaker, etc...) into the connector (pictured below), then plug the connector into the board. It makes swapping motherboards a lot easier because you don't have to reconnect the sometimes lilliputian shunts onto single pins.

Front Panel Connector
The processor itself is tiny. Back in the Pentium and Pentium 2 days the processor was huge. Size-wise, I recall installing one that was somewhere between an audio cassette and a VHS tape (closer to the latter than the former). Processors these days are not much bigger than a postage stamp, though their fans seem to have gotten larger.

CPU and stock fan
One of the reasons I went with the Z87-Plus is that it comes with a UEFI firmware config (aka BIOS). UEFI is the next generation of computer firmware, the successor to BIOS, with an emphasis on speed and security. UEFI includes a sophisticated menu system, so there's no longer a need for add-on board ROMs to daisy-chain prompts and increase the length of time it takes to boot. Beyond that, BIOS writers have richer libraries and greater access to machine resources. The UEFI BIOS on this thing blows me away - it's a mouse-driven, modern graphical user interface instead of the standard text-based interface that has been a staple of the BIOS for over 2 decades.
UEFI BIOS for Asus Z87-Plus
And it displays each of the settings that have been changed in a confirmation dialog before saving them!
UEFI BIOS Confirmation Dialog (apologies for the fuzzy picture)
Performance-wise this thing is a beast. Even though it only has 8 gigs of RAM, Windows 8.1 consistently boots in under 10 seconds. The motherboard itself completes POST so quickly that I've had to turn on a 5-second pause (another nifty BIOS option) so that I have a chance to enter the BIOS if necessary before POST completes. There are all sorts of optimizations this BIOS offers: you can turn off any (or all) of the USB or SATA ports. You can disable initialization of pretty much every connected device. There's something called "hardware turbo" mode that is so fast that I had to turn it off (again, because it was nearly impossible to enter the BIOS when POST completes in under a second).

As a PC hobbyist since the mid-90s, it amazes me how much easier it has become to build your own PC. I haven't cut myself on an add-on card or connector in going on 10 years. :)