Notes from Daily Encounters with Technology
# Monday, September 15, 2014

It's always best to completely avoid using reflection, but unfortunately that's not always possible. Sometimes you need to use APIs which are not strongly typed. In such cases you should transition from reflection back to strongly typed code as soon as possible, both for performance and for code readability. In this blog post I'll describe a couple of techniques which are useful in situations like this.

ASP.NET MVC binders are a good example of an API forcing you into reflection, because they provide you with an instance of the Type class instead of a generic type parameter. In binders you often need to interact with a DAL (data access layer), therefore I decided to use a simplified DAL API for the demonstration:

// repository class for retrieving data
public class Repository
{
    public IQueryable<TEntity> GetItems<TEntity>() where TEntity : class
    {
        // empty result set instead of actual data
        return Enumerable.Empty<TEntity>().AsQueryable();
    }
}

// interface implemented by entities for exposing the key
public interface IIdentifiable<TKey>
{
    TKey Id { get; }
}

// sample entity
public class Entity : IIdentifiable<int>
{
    public int Id { get; set; }
}

Such a DAL is very easy to use from strongly typed code:

var entity = repository.GetItems<Entity>().SingleOrDefault(e => e.Id == key);

Of course, to do that you need to know the type of the entity you're trying to retrieve. If, instead of the actual type, you only have an instance of the Type class representing it, things suddenly get much more complicated. There's no other way than to use reflection:

var method = repository.GetType().GetMethod("GetItems");
var genericMethod = method.MakeGenericMethod(entityType);
var queryable = genericMethod.Invoke(repository, new object[0]);

The code is certainly not nice, but it wasn't that difficult to write, either. However, we're only halfway there. The next step requires using LINQ. Since we don't know the type of the key in advance either, we would need to create the lambda expression using reflection as well. We don't really want to do that, so we can take a different approach instead: we can write a method wrapping all our logic and use reflection only to call it:

public TEntity GetEntityById<TEntity, TKey>(Repository repository, TKey key)
    where TEntity : class, IIdentifiable<TKey>
{
    return repository.GetItems<TEntity>()
        .SingleOrDefault(e => EqualityComparer<TKey>.Default.Equals(e.Id, key));
}

var method = GetType().GetMethod("GetEntityById");
var genericMethod = method.MakeGenericMethod(entityType, key.GetType());
var entity = genericMethod.Invoke(this, new object[] { repository, key });

This approach allows us to minimize the amount of reflection code we need to write and simplifies it at the same time (we can design the method to have as simple a signature as possible).

There is a way to avoid even this minimal block of reflection code, by taking advantage of generic argument inference. To make it work, we need a parameter matching the generic argument of the method:

public TEntity GetEntityById<TEntity, TKey>(TEntity dummy, Repository repository, TKey key)
    where TEntity : class, IIdentifiable<TKey>
{
    return GetEntityById<TEntity, TKey>(repository, key);
}

Based on the dummy parameter's type, the compiler will now be able to infer both the TEntity and TKey types. As a side effect we need to create an instance of the type, which shouldn't be a problem as long as it has a default parameterless constructor:

dynamic dummyEntity = Activator.CreateInstance(entityType);
var entity = GetEntityById(dummyEntity, repository, key);

Notice that I needed to use dynamic for my dummy instance. I can't cast it to the correct type, so without dynamic the compiler would assume it is of type object and the code wouldn't even compile. By using dynamic you postpone the type checking until runtime, when the actual type is already known.

I haven't done any measurements to compare the performance of the two approaches: reflection versus dynamic with dummy object instantiation. You shouldn't ever need to use such code in tight loops or other performance-sensitive scenarios anyway, so feel free to use whichever one you like better.

Monday, September 15, 2014 6:41:37 AM (Central European Daylight Time, UTC+02:00)
Development | .NET | C#
# Monday, September 8, 2014

SlowCheetah is a very useful Visual Studio extension which builds upon Visual Studio’s built-in support for Web.config transformations. It adds similar XDT transformation file support to non-web projects. There is an important distinction between the two implementations, though. Thanks to SlowCheetah, transformations in non-web projects can be used during debugging, while the built-in solution for web projects only works for web deployment. During debugging, the unchanged default Web.config file is used regardless of the selected configuration.

There is already a feature request for SlowCheetah to add support for web projects as well, and you can vote for it, but until it’s implemented you’ll need to find a different solution if you want such functionality in your projects. I’ve recently implemented it in one of my solutions, based on a great blog post by Ralph Jansen. Although I’m posting my solution with minor modifications here, I encourage you to read the original article as well for some additional background.

These are the steps I’ve taken to make this work in a standard web project:

  1. Have the solution open in Visual Studio.
  2. Outside Visual Studio rename the configuration (Web.config) and transformation (Web.*.config) files to Web.template.config and Web.template.*.config, respectively, using the commands of your source control tool to keep the file history.
  3. Ignore Web.config file in your source control tool, because it will get regenerated every build and you won’t want to have it in source control any more.
  4. Delete the old transformation files (Web.*.config) from solution explorer in Visual Studio.
  5. Show all files in Visual Studio solution explorer and include the renamed files in solution.
  6. To nest the transformation files below Web.template.config, you can use File Nesting extension for Visual Studio.
  7. Save all files in Visual Studio and close it.
  8. Create a new file in the project directory, named <MyProjectName>.wpp.targets, with the following contents (my modifications compared to the original file from the aforementioned blog post include a changed transformation file naming policy and the same casing of Web.config as used by Visual Studio):
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!-- This target will run right before you run your app in Visual Studio -->
  <Target Name="UpdateWebConfigBeforeBuild" BeforeTargets="Build">
    <Message Text="Configuration: $(Configuration): Web.template.$(Configuration).config"/>
    <TransformXml Source="Web.template.config"
                  Transform="Web.template.$(Configuration).config"
                  Destination="Web.config" />
  </Target>

  <!-- Exclude the config template files from the created package -->
  <Target Name="ExcludeCustomConfigTransformFiles" BeforeTargets="ExcludeFilesFromPackage">
    <ItemGroup>
      <ExcludeFromPackageFiles Include="Web.template.config;Web.template.*.config"/>
    </ItemGroup>
    <Message Text="ExcludeFromPackageFiles: @(ExcludeFromPackageFiles)" Importance="high"/>
  </Target>

</Project>
After opening the solution in Visual Studio and rebuilding it, the Web.config file will be generated based on the transformation for the current configuration. Since Web.config is now the file that changes, it will of course be used when debugging the application, just like SlowCheetah works for non-web projects.

Monday, September 8, 2014 6:42:05 AM (Central European Daylight Time, UTC+02:00)
Development | ASP.NET | Software | VisualStudio
# Monday, September 1, 2014

Let’s take a closer look at the following static class:

public static class StaticClass
{
    public static readonly string Field = "Static value";

    static StaticClass()
    {
        Debug.WriteLine("Static constructor called.");
    }
}

Up until today I was convinced that, given such a class, the following couldn’t happen:

    StaticClass.Field: null

Obviously I was wrong: the static readonly field with an initializer is uninitialized. Of course this required closer examination.

We’ll start with the language specification. This is what it says about static field initialization:

The static field variable initializers of a class correspond to a sequence of assignments that are executed in the textual order in which they appear in the class declaration. If a static constructor exists in the class, execution of the static field initializers occurs immediately prior to executing that static constructor. Otherwise, the static field initializers are executed at an implementation-dependent time prior to the first use of a static field of that class.

The above class has a static constructor, therefore the following text about static constructors is also relevant (section 10.12):

The execution of a static constructor is triggered by the first of the following events to occur within an application domain:

  • An instance of the class type is created.
  • Any of the static members of the class type are referenced.

Still, no matter what you do, the field initializer should execute before the field is accessed for the first time, hence it should never have a null value. The debugger output above proves this wrong. We need some more code to get to the bottom of this:

static void Main(string[] args)
{
    Debug.WriteLine("Field value: " + StaticClass.Field);
}

Now let’s put one breakpoint before the only line of code is executed and another one after it. Once the first breakpoint is hit, we can take a look at StaticClass from the Immediate Window:

    base: object
    Field: null

Field is not initialized yet. This doesn’t affect the code execution, though. As soon as we run the program to the second breakpoint, we get the following in the output window:

Static constructor called.
Field value: Static value

Repeating the same call in Immediate Window now also gives the expected result:

    base: object
    Field: "Static value"

As it seems, looking up a field’s value in the debugger doesn’t have any side effects - no code gets executed, unlike when accessing properties. But that’s already a subject for another post about bad software development practices.
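To illustrate the difference with a contrived sketch (the class and member names here are made up): inspecting a field in the debugger just reads the stored value, but evaluating a property executes its getter, side effects included.

```csharp
public static class SideEffects
{
    private static int evaluationCount;

    // a field: the debugger just reads the stored value, no code runs
    public static readonly string Field = "Static value";

    // a property: every evaluation, including one in a debugger watch,
    // executes this getter and increments the counter
    public static string Property
    {
        get
        {
            evaluationCount++;
            return "Evaluated " + evaluationCount + " times";
        }
    }
}
```

Watching Property in the debugger would therefore change the program's state with every refresh, while watching Field never would.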

Monday, September 1, 2014 6:05:59 AM (Central European Daylight Time, UTC+02:00)
Development | C#
# Monday, August 25, 2014

Recently I set up multiple different configurations for my ASP.NET project with different .config file transformations for different environments. After I started switching between them on a regular basis, the following build error started showing up every time I changed to a different configuration:

...\obj\debug\web.config(40): error ASPCONFIG: It is an error to use a section 
registered as allowDefinition='MachineToApplication' beyond application level.
This error can be caused by a virtual directory not being configured 
as an application in IIS.

As you can see, the error points to the web.config file inside the obj folder. It always belonged to the configuration which I was building before switching to the current one. The issue could of course be resolved by manually deleting the offending file, but I soon got tired of this and decided to investigate further.

I quickly noticed that the problem was limited to the ASP.NET MVC project only. The other web project in the solution, hosting a WCF web service, also contained multiple configurations, but there were no build errors after switching between them. This was finally enough information to find that others had already written about it. It turned out that the problem would go away if I disabled the building of views by removing the following property from the project file:

<MvcBuildViews>true</MvcBuildViews>

By disabling this feature I would lose the build-time check of view validity against their current models, therefore I didn’t want to do it and had to find a different solution. In the end I settled on just deleting the problematic files inside the obj folder before each build, by adding the following command to the ASP.NET MVC project’s pre-build event:

del "$(ProjectDir)obj\*.*" /s /q

I could have deleted just the web.config files, but this would mean having a delete command for each configuration, and also remembering to add another command each time a new configuration was added. Since the files in the obj folder are regenerated each time anyway, this seemed a more pragmatic solution.
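For comparison, deleting only the generated web.config files would have required one pre-build command per configuration, something along these lines (the Staging configuration is a made-up example of an additional environment):

```shell
del "$(ProjectDir)obj\Debug\web.config" /q
del "$(ProjectDir)obj\Release\web.config" /q
del "$(ProjectDir)obj\Staging\web.config" /q
```

Every new configuration would silently reintroduce the build error until its matching delete command was added, which is exactly the maintenance burden the wildcard command avoids.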

Monday, August 25, 2014 6:48:13 AM (Central European Daylight Time, UTC+02:00)
Development | ASP.NET | Software | VisualStudio
# Monday, August 18, 2014

Static code analysis is a very useful feature of ReSharper. The generated warnings can help you fix bugs which might otherwise remain unnoticed. Look at the following code:

Loop control variable is never changed inside loop
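A minimal example of the kind of code that triggers this warning (the variable names are illustrative):

```csharp
var i = 0;
while (i < 10)
{
    Console.WriteLine(i);
    // i is never incremented, so the condition stays true forever
}
```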

Do you see the issue? You would definitely have a more difficult time noticing it if ReSharper didn’t warn you about it: Loop control variable is never changed inside loop. It might even remain unnoticed when surrounded by all the other code.

Unfortunately static analysis is not perfect and it might detect false positives - warning you about potential issues in code which in reality aren’t issues at all. Let’s take a look at another example:

Access to disposed closure
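The warning appears in code along these lines (a sketch; the TransactionScope type and the Save method are illustrative stand-ins):

```csharp
using (var scope = new TransactionScope())
{
    // the lambda captures scope, which is disposed when the using block exits
    var exception = ExceptionAssert.Throws<InvalidOperationException>(
        () => Save(scope));
}
```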

ReSharper’s warning in this case: Access to disposed closure. The reason: this code could fail if ExceptionAssert.Throws stored the lambda and executed it after the scope was already disposed. Looking at its code, we can see that this is not the case:

public static T Throws<T>(Action action)
    where T : Exception
{
    try
    {
        action();
    }
    catch (T exception)
    {
        return exception;
    }
    Assert.Fail("No {0} was thrown.", typeof(T).FullName);
    return null;
}

The lambda is not being stored and is executed before the method completes, i.e. this ReSharper warning is a false positive.

There are a couple of ways to tell ReSharper about it so that it doesn’t warn us any more. The simplest one is offered as a quick fix: the check can be disabled with a special comment:

Disabling a warning with a comment

Although this works, it’s not a perfect solution, since you need to add such a comment every time you call ExceptionAssert.Throws and pass it an IDisposable.

When we know that it’s always safe to pass an IDisposable to our method, we can tell ReSharper so using its annotation attributes:

public static T Throws<T>([InstantHandle] Action action)
    where T : Exception
{
    try
    {
        action();
    }
    catch (T exception)
    {
        return exception;
    }
    Assert.Fail("No {0} was thrown.", typeof(T).FullName);
    return null;
}

This way we got rid of the warning permanently and the comment is not needed any more:

Disabling a warning with code annotations

To use the annotations, you either need to reference ReSharper’s JetBrains.Annotations.dll assembly or define the annotation attribute classes directly in your code. Unfortunately there’s no official NuGet package for either, but there are plenty of unofficial options available. In my experience it doesn’t really matter which one you choose; they all seem to achieve their goal just fine: you don’t need to reference an assembly from a proprietary location in your project or put it in source control.
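If you decide to define the attributes directly in your code, a minimal version of the one used above could look like this (a sketch; ReSharper matches annotation attributes by namespace and class name, not by the assembly they come from):

```csharp
using System;

namespace JetBrains.Annotations
{
    // Tells ReSharper that the delegate argument is executed
    // before the method returns, so captured disposables are safe.
    [AttributeUsage(AttributeTargets.Parameter)]
    internal sealed class InstantHandleAttribute : Attribute
    {
    }
}
```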

The above of course only works when you have access to the source code. When you don’t, but you’d still like to improve ReSharper’s static code analysis, you can use external annotations. That’s how ReSharper’s authors have annotated the framework assemblies for you.

Monday, August 18, 2014 6:37:37 AM (Central European Daylight Time, UTC+02:00)
Development | C# | Software | ReSharper | VisualStudio
All Content © 2014, Damir Arh, M. Sc.