Ninja Nichols

The discipline of programming

Passing ADFS Claims to Shibboleth

According to the official Shibboleth documentation, ingesting claims from ADFS into Shibboleth requires adding the following mapping rules to /etc/shibboleth/attribute-map.xml:

<!-- WS-Fed attributes -->
<Attribute nameFormat="" name="CommonName" id="cn"/>
<Attribute nameFormat="" name="EmailAddress" id="email"/>
<Attribute nameFormat="" name="UPN" id="userPrincipalName"/>
<Attribute nameFormat="" name="Group" id="group"/>

This is NOT correct. Maybe it worked with older versions, but it doesn’t work at all with Shibboleth 2.4 and ADFS 2.x.

After much digging I stumbled across some interesting entries in the Shibboleth daemon log file /var/log/shibboleth/shibd.log. After each login attempt there were a number of entries about unmapped SAML attributes:

2015-01-14 16:00:37 INFO Shibboleth.AttributeExtractor.XML [90]: skipping unmapped SAML 2.0 Attribute with Name:, Format:urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified

Helpfully that log message tells us the Name and Format of each unmapped attribute. Plugging those values in results in an attribute mapping like this instead:

<Attribute nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified" name="" id="group"/>
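For reference, here is a sketch of what the full set of mappings might look like. The claim-type URIs below are the common ADFS 2.x defaults, not something the log excerpt above confirms — always take the Name values from your own shibd.log:

```xml
<!-- ADFS 2.x attributes: nameFormat "unspecified", names are claim-type URIs
     (these URIs are assumptions based on default ADFS claim descriptions) -->
<Attribute nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"
           name="http://schemas.xmlsoap.org/claims/CommonName" id="cn"/>
<Attribute nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"
           name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" id="email"/>
<Attribute nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"
           name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" id="userPrincipalName"/>
<Attribute nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified"
           name="http://schemas.xmlsoap.org/claims/Group" id="group"/>
```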

Why do my custom datetime strings STILL depend on client locale?

C#’s built-in culture support is great. Just call ToString() on any DateTime object and it will be converted into a localized string with the appropriate formatting for the client’s locale. Take a look:

Locale     Culture   DateTime.ToString()
English    en        12/15/2014 10:48:51 PM
Arabic     ar        23/02/36 10:48:51 م
Chinese    zh        2014/12/15 22:48:51
Danish     da        15-12-2014 22:48:51
German     de        15.12.2014 22:48:51
Hungarian  hu        2014.12.15. 22:48:51

Not only is the order different, but we’ve got slashes, dashes and periods. Fortunately it’s all taken care of for us! But while that’s great for the front-end, having the back-end pass around these localized date strings is just asking for trouble.

Of course, the Right Way to format DateTime strings for back-end use is with the Round-trip format specifier (“O”), which creates an unambiguous, culture-invariant, time-zone-aware and ISO 8601-compliant string representation (which looks something like 2014-12-15T22:48:51.0000000-05:00).

But alas, not every application was written to emit/accept ISO 8601-formatted datetime strings. And so, dear reader, I now present one of the many pitfalls to doing custom format strings correctly. Here is the gist of some code I encountered recently:

WebService.ExternalCall(someData, timestamp.ToString("MM/dd/yyyy hh:mm:ss"));

Not too bad, really. There might be some issues if the app happens to run in a different timezone than the web service, but at least that custom format string will make sure those timestamps are always in the same format. Except it doesn’t. Soon we start finding entries like this: 12-15-2014 10:48:51. Dashes?! WTF, we clearly specified *slashes*!

Turns out the “/” character in a DateTime format string is actually just a placeholder for the localized date separator. It will be replaced by whatever the client set it to. Keep in mind that users are not locked into the regional defaults and are free to customize their own date/time settings.

Windows 7 Region and Language Settings
Don’t assume that your English-speaking users won’t go and change the time format to something unexpected.

To make “/” always be a literal slash character, we have to either escape it or explicitly override the client’s culture (the same goes for the time separator, “:”, although none of the default cultures use anything other than “:”):

// Option 1: quote the separators so they are treated as literals
timestamp.ToString("MM'/'dd'/'yyyy hh':'mm':'ss");

// Option 2: explicitly pass the invariant culture (System.Globalization)
timestamp.ToString("MM/dd/yyyy hh:mm:ss", DateTimeFormatInfo.InvariantInfo);

Seriously though, just use “O” if at all possible. It’s quick, easy and solves this problem and many others.
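To illustrate (a minimal sketch using the standard DateTimeOffset API; the sample timestamp is made up), “O” survives a full round trip with no culture involvement at all:

```csharp
using System;
using System.Globalization;

class RoundTripDemo
{
    static void Main()
    {
        var timestamp = new DateTimeOffset(2014, 12, 15, 22, 48, 51,
                                           TimeSpan.FromHours(-5));

        // "O" ignores the current culture entirely, so this string is
        // identical no matter where the code runs.
        string wire = timestamp.ToString("O");
        Console.WriteLine(wire);  // 2014-12-15T22:48:51.0000000-05:00

        // Parse it back without losing the UTC offset.
        var parsed = DateTimeOffset.ParseExact(wire, "O",
                                               CultureInfo.InvariantCulture);
        Console.WriteLine(parsed == timestamp);  // True
    }
}
```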


Rare Database Entries and Partial Indexes

Today I discovered that PostgreSQL supports partial indexes, so I’ll share a scenario in which a partial index has a number of space/speed advantages over a traditional index.

My application consists of many grid-like maps onto which users can place objects. Some objects don’t have to be placed right away; these appear in a staging area, which will become important later. The map creation process is collaborative and one user might make a change, save it and hand it off to another user to complete. Eventually when all the changes have been made, we attempt to finalize the modifications. Part of this process is to ensure that there are no objects left in the staging areas.

The staging area happens to have a special coordinate (-1, -1), so my query becomes:

SELECT object_pk FROM objects WHERE col=-1 AND row=-1;

It’s a simple query and there will typically only ever be a handful of results, but even so it has trouble scaling when there are hundreds of millions of objects in the objects table. We could try adding indexes to the “col” and “row” columns:

CREATE INDEX staging_col_idx ON objects USING btree(col);
CREATE INDEX staging_row_idx ON objects USING btree(row);

But since we only care about one specific lookup (-1, -1), nearly all of the index entries would be completely useless. Preliminary tests showed that it would result in several GBs of unnecessary index records and might slow down inserts/updates too.

This is where the partial index comes in. Partial indexes can be very useful in some specialized scenarios. They provide a way to avoid indexing common “uninteresting” values, thereby reducing the size of the index and speeding up queries that do use the index. Our partial index looks like this:

CREATE INDEX staging_idx ON objects USING btree(col, row)
WHERE col=-1 AND row=-1;

Only entries that satisfy the conditional will appear in the resulting index. I still get the same blazing fast lookups, but now my index will be a tiny fraction of the size and won’t need to be updated for the vast majority of insert or update operations on the table.
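One caveat worth knowing: the planner will only use a partial index when the query’s WHERE clause provably implies the index predicate. A quick sketch of how to verify this (the plan text shown is illustrative; exact output varies by PostgreSQL version and table statistics):

```sql
-- Matches the index predicate, so the partial index can be used:
EXPLAIN SELECT object_pk FROM objects WHERE col=-1 AND row=-1;
--   Index Scan using staging_idx on objects
--     Index Cond: ((col = '-1') AND (row = '-1'))

-- Does NOT imply the predicate, so this query cannot use staging_idx
-- and will fall back to some other plan (likely a sequential scan):
EXPLAIN SELECT object_pk FROM objects WHERE col=-1 AND row=0;
```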

Copy Unselectable Text from Windows Error Message

For many years now I’ve been able to search for solutions only after painstakingly transcribing the error messages from Windows dialog boxes like the one shown below.


Today I discovered the easy way to copy the message text. While the text in these dialogs stubbornly refuses to be selected, the whole error can, surprisingly, be copied by pressing Ctrl+C while the dialog has focus. Pasting into a text editor yields everything, including the title, message content and button text:

[Window Title]
replsync.dll - DLL Load Error

Cannot load resource dll: REPLRES.RLL

The specified module could not be found.


No more manual transcriptions! Life’s too short to waste keystrokes.

Unit Testable System.IO (C#)

It seems the hardest part of writing good unit tests is trying to mock out third-party dependencies.

A surprising number of the core Microsoft libraries like System.IO simply aren’t unit testable, so I was almost unreasonably happy when I stumbled upon System.IO.Abstractions, which describes itself as “Just like System.Web.Abstractions, but for System.IO. Yay for testable IO access!”.

From their documentation:

At the core of the library is IFileSystem and FileSystem. Instead of calling methods like File.ReadAllText directly, use IFileSystem.File.ReadAllText. We have exactly the same API, except that ours is injectable and testable.

Looks like it’s being actively maintained. Just use NuGet to add it to your project:

PM> Install-Package System.IO.Abstractions
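Here’s a sketch of how the injection plays out in practice. The ConfigReader class and file paths are made up for illustration; MockFileSystem and MockFileData come from the companion System.IO.Abstractions.TestingHelpers package:

```csharp
using System.Collections.Generic;
using System.IO.Abstractions;
using System.IO.Abstractions.TestingHelpers;

public class ConfigReader
{
    private readonly IFileSystem _fileSystem;

    // Production code injects the real thing: new ConfigReader(new FileSystem())
    public ConfigReader(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    public string ReadSetting(string path)
    {
        // Same shape as File.ReadAllText, but mockable.
        return _fileSystem.File.ReadAllText(path);
    }
}

// In a unit test, no disk access is needed at all:
//
// var fs = new MockFileSystem(new Dictionary<string, MockFileData>
// {
//     { @"c:\app\config.txt", new MockFileData("verbose=true") }
// });
// var reader = new ConfigReader(fs);
// Assert.AreEqual("verbose=true", reader.ReadSetting(@"c:\app\config.txt"));
```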

Why 256-bit Symmetric Keys are Enough

Recently Google announced that they are upgrading their SSL certificates from 1024-bit keys to 2048-bit keys. SSL uses asymmetric cryptography, which requires much larger keys than symmetric cryptography for equivalent security, and so increasingly sophisticated cracking hardware is forcing the move from 1024- to 2048-bit keys.

But what about our symmetric ciphers? Before AES we had 56-bit DES, which today is brute-forcible by students for undergraduate cryptography class assignments. Do we have to be concerned that AES only supports up to 256-bit keys? Is Moore’s Law going to necessitate a jump from 256-bit to 512-bit ciphers?

The answer is simply no.

In Applied Cryptography (pp. 157–8), Bruce Schneier argues there’s no reason to use anything larger than a 256-bit key for symmetric encryption. It’s also one of those rare arguments from the second law of thermodynamics that’s actually decent:

One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)

Given that k = 1.38×10^-16 erg/Kelvin, and that the ambient temperature of the universe is 3.2 Kelvin, an ideal computer running at 3.2K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.

Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn’t have the energy left over to perform any useful calculations with this counter.

But that’s just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.

These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
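The arithmetic is easy to double-check yourself; a quick sketch (in Python, using only the figures quoted above):

```python
import math

K_BOLTZMANN_ERG = 1.38e-16   # erg/Kelvin
T_UNIVERSE = 3.2             # Kelvin, cosmic background temperature

# Minimum energy to flip one bit on an ideal computer at 3.2K: ~4.4e-16 ergs
energy_per_bit = K_BOLTZMANN_ERG * T_UNIVERSE

# One year of the sun's output buys ~2.7e56 bit changes -> ~2^187
sun_annual_output = 1.21e41  # ergs
bit_flips = sun_annual_output / energy_per_bit
print(math.log2(bit_flips))            # roughly 187.5

# 32 years adds log2(32) = 5 doublings: 187 + 5 = 192
print(math.log2(bit_flips * 32))       # roughly 192.5

# A supernova's ~1e51 ergs: about 2^220 bit flips, which is what it
# takes to run a 219-bit counter through all of its states
supernova = 1e51  # ergs
print(math.log2(supernova / energy_per_bit))   # roughly 220.4
```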

PowerShell Variable Followed by Colon

I’ve got a little PowerShell script that writes some configuration settings. In it there’s a line that computes a certain URL for a given subdomain:

$url = "http://$subdomain.example.com:8080/"

Pretty standard stuff. But something interesting happens when you try to generalize it and make the domain itself variable:

$url = "http://$subdomain.$domain:8080/"

Instead of something like http://sub.example.com:8080/, I get http://sub./. What’s going on?

Turns out the colon has a special meaning in PowerShell: it’s used both to specify items on a PSDrive ($Env:Foo) and to associate a variable with a scope or namespace ($global:var). So when PowerShell tries to interpret $domain:8080 it sees one variable name, instead of a variable followed by a string.

How to fix it? Just make the intention clear using curly brackets:

$url = "http://${subdomain}.${domain}:8080/"
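You can see the difference right at the prompt (a sketch; the variable values here are made up):

```powershell
$subdomain = "sub"
$domain = "example.com"

# Without braces, PowerShell reads "$domain:8080" as one drive-qualified
# variable name; no such variable exists, so it expands to nothing:
"http://$subdomain.$domain:8080/"       # -> http://sub./

# Braces delimit the variable names explicitly:
"http://${subdomain}.${domain}:8080/"   # -> http://sub.example.com:8080/
```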

AD FS 2.0 Service Fails to Start


Symptoms:

  1. Instead of a login page, the user is presented with:
    There was a problem accessing the site. Try to browse to the site again.
    If the problem persists, contact the administrator of this site and provide the reference number to identify the problem.
    Reference number: 7aaab8f7-85ed-4910-9f4f-d105100cb604
  2. Going to Administrative Tools -> Services reveals that the AD FS 2.0 service is not started. Trying to start the service manually fails with an error.
  3. Event 220 appears in AD FS 2.0 Event Viewer logs:
    The Federation Service configuration could not be loaded correctly from the AD FS configuration database. 
    Additional Data 
    ADMIN0012: OperationFault
  4. Event 352 appears in AD FS 2.0 Event Viewer logs:
    A SQL operation in the AD FS configuration database with connection string Data Source=\\.\pipe\mssql$microsoft##ssee\sql\query;Initial Catalog=AdfsConfiguration;Integrated Security=True failed.
    Additional Data 
    Exception details: 
    A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)


Resolution: the named pipe in that connection string belongs to the Windows Internal Database (##SSEE) instance that hosts the AD FS configuration database, so the fix is to get that service running:

  1. Start the “Windows Internal Database” service.
  2. Now you can start the “AD FS 2.0 Windows Service”.