PhilipMat

Two Approaches to Searching Users in Active Directory

I’m sure there are more than two ways to perform searches against Active Directory; however, I wanted to highlight two approaches: DirectorySearcher and PrincipalSearcher.

The former, DirectorySearcher, comes from System.DirectoryServices and is the more “bare-metal” version of the two.

PrincipalSearcher, of System.DirectoryServices.AccountManagement provenance, follows more of a query-by-example pattern and is, I’d say, a higher-level abstraction of directory searching.

To use DirectorySearcher, namely through its Filter property, one requires a bit more advanced knowledge (or Googling skills) in order to decipher and employ the LDAP filter string format.

The payoff of using DirectorySearcher is the ability to construct complex queries, including compound expressions across various objects: "(&(objectCategory=person)(objectClass=contact)(|(sn=Smith)(sn=Johnson)))" would find all contacts with a surname of Smith or Johnson.
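
LDAP filters use prefix (Polish) notation: the operator (& for AND, | for OR, ! for NOT) comes before the terms it combines. To show how the compound filter above decomposes, here is a small Python sketch; the helper functions are my own, not part of any LDAP library:

```python
def ldap_and(*terms):
    """Combine filter terms with LDAP's prefix AND operator."""
    return "(&" + "".join(terms) + ")"

def ldap_or(*terms):
    """Combine filter terms with LDAP's prefix OR operator."""
    return "(|" + "".join(terms) + ")"

# Rebuild the compound filter from the example above.
filter_string = ldap_and(
    "(objectCategory=person)",
    "(objectClass=contact)",
    ldap_or("(sn=Smith)", "(sn=Johnson)"),
)
print(filter_string)
# (&(objectCategory=person)(objectClass=contact)(|(sn=Smith)(sn=Johnson)))
```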

However, for simple queries, the simplicity of PrincipalSearcher makes for easier to read code.

Consider the example of searching for all domain IDs (SAM account name) that begin with “john”:

var domain = "CORP";
var container = "DC=ad,DC=example,DC=com";

using(var context = new PrincipalContext(ContextType.Domain, domain, container)) {
    var principal = new UserPrincipal(context) {
        SamAccountName = "john*"
    };
    using(var searcher = new PrincipalSearcher(principal)) {
        PrincipalSearchResult<Principal> result = searcher.FindAll();
        result.Dump();
    }
}

Contrast with the same code using DirectorySearcher:

var ldapPath = "DC=corp,DC=ad,DC=example,DC=com";

using (var entry = new DirectoryEntry($"LDAP://{ldapPath}"))  {
    using(var searcher = new DirectorySearcher(entry)) {
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=john*))";
        SearchResultCollection result = searcher.FindAll();
        result.Dump();
    }
}

Should we want to find a user whose last name is “Smith”, the PrincipalSearcher case is as easy as setting the UserPrincipal’s Surname property, which is easily discoverable; with DirectorySearcher, one would have to research and find out that the property is called, a bit more cryptically, sn.

What was also interesting to me is that, perhaps owing to PrincipalSearcher formulating better criteria than I could, DirectorySearcher seems to be about 1.5-2x slower than the Principal version: whereas the latter returns, in my attempts, in about 500ms, the DirectorySearcher version takes 800-1,100ms for the same operation.

The type returned by the two methods is also another factor worth considering.

The SearchResult returned by the DirectorySearcher method is sparse, and all interaction is done through its Properties property, an implementation of System.Collections.DictionaryBase.
These properties are really LDAP properties, and to get information out of a search result one needs to know what they represent – for example, that “c” represents “country”, “sn” is “surname”, and “cn” is “common name”.
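
To give a sense of the translation involved, here is a small Python sketch of such a mapping; this is a partial, illustrative list I put together, not an exhaustive reference:

```python
# A few common LDAP attribute names and the friendlier concepts they map to
# (an illustrative subset, not an exhaustive list).
LDAP_ATTRIBUTES = {
    "c": "country",
    "cn": "common name",
    "sn": "surname",
    "givenName": "given (first) name",
    "sAMAccountName": "SAM account (domain logon) name",
    "mail": "e-mail address",
}

def describe(attribute):
    """Translate an LDAP attribute name into a human-readable label."""
    return LDAP_ATTRIBUTES.get(attribute, f"unknown attribute: {attribute}")

print(describe("sn"))  # surname
print(describe("c"))   # country
```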

In contrast, the UserPrincipal instances contained in the PrincipalSearchResult<T> have more straightforward properties: Surname, GivenName, etc., although they might not expose some of the properties stored in LDAP, for example the aforementioned c = countryName.

Due to its more straightforward nature, I will personally be employing PrincipalSearcher for simple search queries, hoping I never land in a case where I require the full power of DirectorySearcher.

However, if I do - I now know what to search for.

Loading Claims when Using Windows Authentication in ASP.NET Core 2.x

Much like almost everything else in ASP.NET Core, enabling Windows Authentication in ASP.NET Core is well documented and has superb step-by-step examples.

The Claims-based authorization system is documented just as well and the examples are well chosen.

Where I thought the documentation fell short was in marrying the two concepts; there is little explanation of how the claims are actually made available to be checked and asserted on.

If we were to inspect the Identity of a User, we would notice that it already has a substantial Claims collection. These claims are all seemingly associated with specific Windows user properties and, to me, have largely legible names yet indecipherable values, save perhaps for the .../name claim:

Type Value
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name HOME\philip
http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid S-1-5-21-616010284-1202357983-1921873989-1000
http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid S-1-1-0
http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid S-1-5-4
etc etc

In contrast, the Claims examples make use of such nicely named claims like "EmployeeNumber" or ClaimTypes.DateOfBirth, none of which can be found in the claims collection of our Windows user.

To load claims in ASP.NET Core 2.x we make use of one or more claims transformations: classes implementing IClaimsTransformation (called IClaimsTransformer in earlier versions) that get access to the ClaimsPrincipal and can construct new ones or add claims to the loaded one.

In the following example we’ll look at adding our own claims to the collection. To make it a bit more interesting, let’s assume we have a table in the database that stores the ids of the users who are administrators of our own application and we would like to add a flag in claims if a user logging in is part of this table.

Assuming we use these claims in combination with the Authorize attribute, likely checking for an "IsAdmin" claim via [Authorize(Policy = "IsAdmin")], we will be making the following changes to our application:

Packages required

If running against .NET Core 2.x, the Microsoft.AspNetCore.App meta-package is sufficient.

If running against .NET Framework 4.6+, we need to add:

  • Microsoft.AspNetCore.Authentication - provides a large host of authorization classes, policies, and convenience extension methods;
  • Microsoft.AspNetCore.Server.IISIntegration - adds support for IIS (and IIS Express) in further support of the authentication process.

Code changes

launchSettings.json

Enable Windows authentication for IIS. Also enable anonymous access if usage of [AllowAnonymous] attribute is needed:

{
  "iisSettings": {
    "windowsAuthentication": true,
    "anonymousAuthentication": true,
...

Startup.cs

Enable authentication by adding the following to the Configure(IApplicationBuilder app, ...) method:

app.UseAuthentication();

Add IIS authentication scheme in ConfigureServices:

services.AddAuthentication(IISDefaults.AuthenticationScheme);

We’ll be back here in a bit to register our claims loader.

ClaimsLoader.cs

Before we implement IClaimsTransformation, a couple of notes about it.

First, they run on each AuthenticateAsync call, which means for IIS Authentication they run only once and whatever claims we add to the collection are cached for as long as the user is logged in.
If we remove a logged in user from the list of administrators, they will continue to behave as such until they log in again.

Second, they run on each AuthenticateAsync call, so we will heed this warning from the documentation of TransformAsync:

Note: this will be run on each AuthenticateAsync call, so its safer to return a new ClaimsPrincipal if your transformation is not idempotent.

This is because if any call (tests?) causes AuthenticateAsync to be called twice, the same claim gets added twice to the collection, as pointed out in this article by Brock Allen.
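
The mechanics can be illustrated outside of ASP.NET entirely: if the transformation mutates the principal it is handed, and a subsequent AuthenticateAsync call hands it the same instance again, the claim accumulates; building a new principal from the existing claims leaves the original untouched. A Python sketch of the two behaviors, with lists standing in for claims collections (the function names are mine):

```python
def transform_in_place(claims):
    """Non-idempotent: mutates the claims collection it receives."""
    claims.append("IsAdmin")
    return claims

def transform_to_copy(claims):
    """Idempotent-friendly: builds a new collection, leaving the input
    alone (the analogue of returning a new ClaimsPrincipal)."""
    return claims + ["IsAdmin"]

# The same principal instance is handed to the transformation twice.
cached = ["name: HOME\\philip"]
transform_in_place(cached)
transform_in_place(cached)      # a second AuthenticateAsync call
print(cached.count("IsAdmin"))  # 2: the claim was added twice

cached = ["name: HOME\\philip"]
result = transform_to_copy(cached)
result = transform_to_copy(cached)  # second call starts from the original
print(result.count("IsAdmin"))      # 1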

using System.Security.Claims; // for ClaimsPrincipal
using System.Threading.Tasks; // for Task
using Microsoft.AspNetCore.Authentication; // for IClaimsTransformation

public class ClaimsLoader : IClaimsTransformation
{
    public const string IsAdminKey = "IsAdmin";
    private readonly UserContext _userContext;

    public ClaimsLoader(UserContext userContext)
    {
        _userContext = userContext;
    }

    public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = (ClaimsIdentity)principal.Identity;

        // create a new ClaimsIdentity copying the existing one
        var claimsIdentity = new ClaimsIdentity(
            identity.Claims,
            identity.AuthenticationType,
            identity.NameClaimType,
            identity.RoleClaimType);

        // check if our user is in the admin table
        // identity.Name is the domain-prefixed id, eg HOME\philip
        if (await _userContext.IsAdminAsync(identity.Name))
        {
            claimsIdentity.AddClaim(
                new Claim(IsAdminKey, "So say we all"));
        }

        // create a new ClaimsPrincipal in observation
        // of the documentation note
        return new ClaimsPrincipal(claimsIdentity);
    }
}

Startup.cs - adding policy

Now that we created our claims loader, let’s register it with the service collection and add a policy for it too:

services.AddTransient<IClaimsTransformation, ClaimsLoader>();

services.AddAuthorization(options =>
{
    options.AddPolicy(
        "IsAdmin",
        policy => policy.RequireClaim(ClaimsLoader.IsAdminKey));
});

At this point we can decorate our controllers or controller actions and employ the policy we just added:

[Authorize(Policy = "IsAdmin")]
public Task<IActionResult> AddUser() {
    ...
}

Variation

The example adds the "IsAdmin" claim only if the user is an admin.

If we wanted to add the claim regardless and rely on its value, the code changes as follows:

ClaimsLoader.cs - variation

bool isAdmin = await _userContext.IsAdminAsync(identity.Name);
claimsIdentity.AddClaim(new Claim(IsAdminKey, isAdmin ? "yes" : "no"));

Startup.cs - variation

services.AddAuthorization(options =>
{
    options.AddPolicy(
        "IsAdmin",
        policy => policy.RequireClaim(ClaimsLoader.IsAdminKey, "yes"));
});

or to add a JavaScript flavor to it ;)

services.AddAuthorization(options =>
{
    options.AddPolicy(
        "IsAdmin",
        policy => policy.RequireClaim(
            ClaimsLoader.IsAdminKey,
            "yes", "Yes", "true", "True", "1")); // ugh
});

Thirty Minutes Before Coding

Having noticed these missing in a good deal of projects, and the friction and occasional frustration that came with it, I have come up with a list of things I would like to have in place before I start writing even a single line of code.

This list attempts to address three questions:

  • What is this project and/or what purpose does it serve?
  • How does one run it or use it?
  • How can one contribute?

Those are answered, I believe, by having a few essential things in place:

  • A well-written README file - this is the first introduction to my project, so I will take extra care to make sure it’s clear, concise, and semantically and syntactically correct.
  • A script (Makefile, npm scripts) that helps run or use this project; if it’s a library, have documentation, with copious examples of usage.
    It can be part of the README - if not, the README should include a prominent link to this documentation.
  • Build script and Continuous Integration - the latter in particular is so easy to set up nowadays that there’s no good excuse not to.
    Include linters and style guide (e.g. pep8).
  • Guidelines on how to contribute:
    • call out expectations and requirements for pull requests (e.g. documentation, tests, other artifacts);
    • set up templates for submitting issues and requesting improvements;
    • set up a code of conduct, even if I’m the only contributor. It’s something to read when I get upset or frustrated (we’re all humans);
  • Publish scripts: nuget, npm, pypi all have their own preferred formats (optional but highly recommended if it’s a library);

With that in mind, here is what the first 30 minutes before I write any code look like:

T-30: Head over to GitHub, Bitbucket, or GitLab and create a new repo with a README.
Select the .gitignore appropriate to my project as well as the license.

T-29: Clone the repo on my computer. Open the README.

T-29: Take 5 minutes to document what the project does and what problem it solves. Add links to any relevant resources (e.g. blog posts, Stack Overflow).

T-24: Set up the directory structure.

T-23: Create a minimal build script. I use:

  • Cake for .Net projects;
  • scripts node in package.json for JavaScript projects;
  • for Python projects, I like using the requests library’s Makefile and Kenneth Reitz’s setup.py.

Add nodes/entries for linting (pep8, flake8, eslint) and unit testing (even if I don’t have any yet).

T-18: Go back to the README and add a section about how to run the script.

T-15: Add CI integration: Travis CI for Linux and AppVeyor for Windows projects; or both.
The build script should come in handy for this step.

T-10: Back to the README, add the AppVeyor and Travis badges so visitors know the current status of my build.

T-9: Add support for Snyk to help with vulnerability monitoring.

T-7: If the provider supports it, I spend a few minutes creating issue templates to help with reporting defects and suggesting improvements; if not, I document in the README the type of information needed for defects.

T-4: Consider adding contributing guidelines and adopting a Code of Conduct.

T-1: Add an ## Examples or ## Usage node to the README as a reminder to add more documentation once the code or interfaces get fleshed out.

T-0: Happy coding.

TL;DR / Checklist

  1. Repo with .gitignore, license, and README;
  2. Immediately document what the project does and what problem it solves;
  3. Set up folder structure;
  4. Create minimal build script;
  5. Add to README instructions on how to run the build script or the project via the build script;
  6. Add CI integration; add badges for CI status;
  7. Add configuration for Snyk to help detect vulnerabilities;
  8. Write issue templates or document how to report issues;
  9. Add contributing guidelines and Code of Conduct;
  10. Add Examples and Usage entries in README to fill in later.

Converting to Base64 in Powershell

There are a variety of ways to send a file to a Web end-point and encode it in the process. For example, using Invoke-RestMethod -InFile (docs):

C:\PS> Invoke-RestMethod -Uri http://example.com `
  -Method Post -ContentType 'multipart/form-data' `
  -InFile c:\temp\test.txt

However, if we want/need to include one or more files as part of a larger JSON payload, perhaps with other information for each file, we will need to convert the file(s) to Base64.

To do so, we’ll make use of .Net functionality, in particular the System.Convert.ToBase64String method and the System.Web.Script.Serialization.JavaScriptSerializer class (see note).

# define parameters
Param(
    [Parameter(Mandatory=$true, Position=0)]
    [string]$InputFile
)

$content = [System.IO.File]::ReadAllBytes($InputFile)
$base64String = [System.Convert]::ToBase64String($content)

# Load System.Web.Extensions
Add-Type -AssemblyName System.Web.Extensions
$jsonSerializer = New-Object System.Web.Script.Serialization.JavaScriptSerializer
$json = $jsonSerializer.Serialize(@{ content = $base64String })

Write-Output -InputObject $json

# writes: {"content":"dGVzdA0K"}

For a full, proper script, with multiple parameters (writing to a file, copying to clipboard) see this ConvertTo-Base64.ps1 gist.

Note: I chose System.Web.Extensions over the more common Json.Net because I didn’t want to have to download/nuget a dependency; JavaScriptSerializer proves sufficient for this task.
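
For comparison, here is the same transformation sketched in Python using only the standard library; the helper name and file path are mine:

```python
import base64
import json

def file_to_json_payload(path):
    """Read a file, Base64-encode its bytes, and wrap the result in JSON."""
    with open(path, "rb") as f:
        content = f.read()
    return json.dumps({"content": base64.b64encode(content).decode("ascii")})

# Encoding the bytes of a file containing "test" plus a CRLF line ending:
payload = json.dumps(
    {"content": base64.b64encode(b"test\r\n").decode("ascii")}
)
print(payload)  # {"content": "dGVzdA0K"}
```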

Versioning Assemblies with Cake and Git

Requirements

  1. Given an application, I want to be able to trace its binaries back to the source code “version” it was built from;
  2. As such, I want this identification ability to be automatically employed during the build process;
  3. I want to have easy ways of retrieving this information, as well as the version of code it was built from;
  4. All files built at the same time have the same version;
  5. I use git for VCS and Cake for build scripts;
  6. I don’t want trivial commits, such as those containing only the identifying information.

Proposed solution

  • Employ a way of versioning (duh) that is tied to individual files;
    Satisfies requirement 1;
  • Include the git branch information;
  • Include the commit id (SHA1) of the HEAD as a way of identifying the exact state of the code;
    Satisfies requirements 2 and 5;
  • Only built from committed code (no uncommitted files);
    Satisfies requirements 2, 3, and 4;
  • If files are modified for versioning purposes, roll back the changes at the end of the build, even if the build had errors;
    Satisfies requirement 6.

Technical Details

As far as the versioning goes, there are several version numbers associated with a .Net assembly:

  • Assembly version, important to the .Net loader - set with AssemblyVersion attribute: usually in Properties/AssemblyInfo.cs.
    Must be in major.minor.build.revision format, all numbers, or the compiler throws a CS7034 error:

    error CS7034: The specified version string does not conform to the required format - major[.minor[.build[.revision]]]

  • File version, a property of the file itself and inspectable in the Details section of the file properties dialog, is set using the AssemblyFileVersion attribute.
    There’s a warning, but not an error (unless we have <TreatWarningsAsErrors>true</TreatWarningsAsErrors>, which we should), if we don’t follow the same format as the assembly version:

    warning CS7035: The specified version string does not conform to the recommended format - major.minor.build.revision

  • Product version, another property visible in the file properties dialog, is set using the AssemblyInformationalVersion attribute and is the most permissive of the three, as it literally accepts any string, although we should set it to something reasonable and meaningful to whomever inspects it.

    Product Version with Emoji
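
The numeric shape the first two attributes require is easy to check mechanically. As an illustration, a small Python sketch (the regex and function are mine, not part of any build tooling):

```python
import re

# major[.minor[.build[.revision]]] -- one to four dot-separated numbers.
VERSION_FORMAT = re.compile(r"^\d+(\.\d+){0,3}$")

def is_valid_assembly_version(version):
    """Return True if the string fits the AssemblyVersion format."""
    return VERSION_FORMAT.fullmatch(version) is not None

print(is_valid_assembly_version("1.0.0.0"))           # True
print(is_valid_assembly_version("1.0.master-abc"))    # False
```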

This Stack Overflow answer, and the ones that follow, provides really good descriptions of each attribute, its limitations, and intended use.

Because we want to include the branch name and the commit id (SHA1), AssemblyInformationalVersion is the only one we can use.

We propose the following format: Major.minor.branch-sha1.
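
To make the proposed format concrete, here is a Python sketch of how such a string is assembled; the helper name and the commit id are made up for illustration:

```python
def informational_version(branch, sha, major=1, minor=0, sha_length=8):
    """Compose the proposed Major.minor.branch-sha1 product version,
    keeping only a short prefix of the commit id."""
    return f"{major}.{minor}.{branch}-{sha[:sha_length]}"

# A hypothetical commit id; only the first 8 characters are kept.
print(informational_version("master", "12fa582d9c3a4b5e6f708192a3b4c5d6"))
# 1.0.master-12fa582d
```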

The assembly version can be dynamically versioned by MSBuild using the format [assembly: AssemblyVersion("1.0.*")] as a way of providing supplemental information about the date and time of build - see the Remarks section of the AssemblyVersion docs.

Implementation

We’ll make use of the Cake.Git add-in and Cake’s ability to generate the assembly information using CreateAssemblyInfo method.

To simplify matters, we’ll split the AssemblyInformationalVersion attribute out of the Properties/AssemblyInfo.cs file and into its own Properties/AssemblyInfoVersion.cs. Its content is unimportant, but we’ll start with a value of:

[assembly: System.Reflection.AssemblyInformationalVersion("1.0.0.0")]

Next we’ll create a Task("Version") in our build.cake file that creates the AssemblyInfoVersion.cs file, we’ll make the Build task depend upon it, and we’ll revert the changes at the end of the build process.

#addin nuget:?package=Cake.Git

var configuration = Argument("configuration", "Debug");
var thisRepo = MakeAbsolute(Directory("./"));
var assemblyInfo = File("./TestAssemblyVersioning/Properties/AssemblyInfoVersion.cs");

Task("Version")
    .Does(() => 
{
    var branch = GitBranchCurrent(thisRepo);

    // The following is not the best approach
    // We should use LibGit2Sharp's ObjectDatabase.ShortenObjectId(),
    // but Cake.Git doesn't currently support it.
    var sha = branch.Tip.Sha.Substring(0, 8);

    // TODO: branch.FriendlyName produces a name too long when using gitflow,
    // e.g. "1.0.feature/MYPROJ-2732-title_of_story_or_defect-12fa582d".
    // There should be an attempt to extract maybe the issue identifier
    // so that we end up with something like "1.0.MYPROJ-2732-12fa582d"
    // or "1.0.f-title_of_story-12fa582d"
    CreateAssemblyInfo(assemblyInfo, new AssemblyInfoSettings {
        InformationalVersion = string.Format("1.0.{0}-{1}", branch.FriendlyName, sha)
    });
});

Task("Build")
    .IsDependentOn("Version")
    .IsDependentOn("Restore-NuGet-Packages")
    .Does(() =>
{
    if(IsRunningOnWindows())
    {
      MSBuild(sln, settings => settings.SetConfiguration(configuration));
    }
    else
    {
      XBuild(sln, settings => settings.SetConfiguration(configuration));
    }
})
.Finally(() =>
{
    // restore assembly.cs files
    GitCheckout(thisRepo, new FilePath[] { assemblyInfo });
});

That’s it. Now every time we build the project using our build script, the product version will reflect it accordingly:

Product Version with Git Info

Note 1: if we had multiple assemblies, like normal projects do, we would have a single AssemblyInfoVersion.cs, likely in the root of the project, and we would link that file into each project to ensure they all get the same product version:

<Compile Include="..\AssemblyInfoVersion.cs">
    <Link>Properties\AssemblyInfoVersion.cs</Link>
</Compile>

Note 2: it seems reasonable that we should perhaps check that all changes have been committed before the build; otherwise, the build would incorporate the changes on disk while still picking up the HEAD SHA1.