Notes from Daily Encounters with Technology
# Monday, December 22, 2014

We've all heard a lot about object-relational mapping (ORM), but in multitier applications there's another type of mapping we have to deal with: object-object mapping. If you're wondering what that is, mapping between domain model objects and data transfer objects (DTOs) is a common example.

In contrast to ORM, there are far fewer libraries available for object-object mapping. In the .NET space, one of the most popular ones is AutoMapper. In simple scenarios it can replace all of your trivial, error-prone mapping code:

public class SourceClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DestinationClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public DestinationClass MapManually(SourceClass source)
{
    return new DestinationClass
    {
        Id = source.Id,
        Name = source.Name
    };
}

public void InitAutoMapper()
{
    // called during application initialization
    Mapper.CreateMap<SourceClass, DestinationClass>();
}

public DestinationClass MapAutomatically(SourceClass source)
{
    return Mapper.Map<DestinationClass>(source);
}

The more properties the objects have, the more code you save. Of course, AutoMapper is not limited to cases where properties map one to one. Thanks to built-in conventions, it can handle more complex scenarios as well. When even that's not enough, mapping for individual properties can be configured manually. Just browsing through the documentation makes it obvious that there's no lack of configuration options. Just be careful that your AutoMapper configuration doesn't become more complex than your manual mapping code would be.
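
To illustrate, here's roughly what such a manual configuration could look like. This is just a hypothetical sketch: the FirstName, LastName and FullName properties aren't part of any code in this post.

public class PersonSource
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class PersonDestination
{
    public string FullName { get; set; }
}

public void InitAutoMapperWithCustomMember()
{
    // hypothetical mapping: FullName has no matching source property,
    // so its value is configured manually with ForMember
    Mapper.CreateMap<PersonSource, PersonDestination>()
        .ForMember(
            dest => dest.FullName,
            opt => opt.MapFrom(src => src.FirstName + " " + src.LastName));
}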

All that being said, based on the information so far, AutoMapper imposes a very serious limitation: both the configuration and the mapping function are static. This will cause problems when your mapping configuration depends on external object instances:

public class SourceClass
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<int> NestedIds { get; set; }
}

public class NestedClass
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class DestinationClass
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<NestedClass> Nesteds { get; set; }
}

public DestinationClass MapAutomatically(SourceClass source)
{
    using (var context = new TestDbContext())
    {
        Mapper.CreateMap<SourceClass, DestinationClass>()
            .ForMember(dest => dest.Nesteds, opt => opt.Ignore())
            .AfterMap((src, dest) =>
            {
                dest.Nesteds.Clear();
                foreach (var nested in context.NestedClasses
                    .Where(n => src.NestedIds.Contains(n.Id)))
                {
                    dest.Nesteds.Add(nested);
                }
            });

        return Mapper.Map<DestinationClass>(source);
    }
}

This sample assumes that the nested collection of objects is represented in the DTO by ids only. To map them back to objects, an Entity Framework DbContext instance is required. Not only that: it must be the same instance that is used for saving changes, otherwise SaveChanges() will throw an exception. Although I moved the mapper initialization into the mapping function, the above code still won't work in multithreaded scenarios because there is always just one common static mapping used by all threads. Hence, all threads will use the context from the thread that last initialized the mapping.

Fortunately, AutoMapper allows non-static mapper configurations as well. I don't think that's actually documented, but it's being used in AutoMapper's unit tests. With this technique, my mapping function can be fixed to work correctly in multithreaded scenarios, too (notice the use of ConfigurationStore and MappingEngine instead of the static Mapper class):

public DestinationClass MapAutomatically(SourceClass source)
{
    using (var context = new TestDbContext())
    {
        var config = new ConfigurationStore(
            new TypeMapFactory(), MapperRegistry.Mappers);
        config.CreateMap<SourceClass, DestinationClass>()
            .ForMember(dest => dest.Nesteds, opt => opt.Ignore())
            .AfterMap((src, dest) =>
            {
                dest.Nesteds.Clear();
                foreach (var nested in context.NestedClasses
                    .Where(n => src.NestedIds.Contains(n.Id)))
                {
                    dest.Nesteds.Add(nested);
                }
            });

        var mappingEngine = new MappingEngine(config);
        return mappingEngine.Map<DestinationClass>(source);
    }
}

As you can see, my mapping function is becoming more and more complicated, and it still needs further improvements before it will correctly map to entity objects in all cases. If you're interested in the subject of using AutoMapper with Entity Framework, you should read Roger Alsing's excellent blog post about it. He makes a great point: don't blindly stick to AutoMapper when it starts getting in the way; use it only when the resulting code is actually simpler to write and easier to understand.

Development | .NET | EF
# Monday, December 15, 2014

The Canon CanoScan LiDE 50 is a small, reliable flatbed scanner which has been serving my needs for years. Unfortunately, Canon dropped support for it long ago and never released a 64-bit driver for it. Thanks to an unofficial driver it's still possible to use it in the latest version of Windows, but it requires a bit of tinkering to circumvent the security measures.

Since I'm rediscovering the steps to make it work every time I connect it to a newly installed machine, I decided to document them here for future reference. Follow the instructions at your own risk. If it doesn't work for you, or worse, don't blame me or ask for help.

  1. Download the unofficial driver and unpack it in a folder on your machine.
  2. If you just follow the instructions inside the archive and run Setupsg.exe, it will quietly fail and do nothing because Windows will block the installation of unsigned drivers. To avoid that, you'll need to reboot Windows in advanced startup mode and disable driver signature enforcement in startup settings. You can find detailed instructions for that elsewhere, so I'm not going to repeat them here. When Windows reboots, Setupsg.exe will successfully install the driver. Once you disconnect and reconnect the scanner, Windows will detect it and complete the installation. Reboot Windows once again to re-enable driver verification.
  3. Although the device will seem to be already properly installed when you look in the device manager, all attempts to open the TWAIN interface from another application (e.g. IrfanView) will result in the following error: "The program can't start because rmslantc.dll is missing from your computer. Try reinstalling the program to fix this problem." If you check the driver details in the device properties window, you'll see where the file is on your computer, although the application obviously can't find it. To fix the problem, you'll need to add the folder containing the file to your %PATH% environment variable and restart the application. Check Aaron Kelley's blog post for details.

I suppose the above steps should also be helpful with other unsupported devices, as long as you manage to find a suitable driver. It feels nice when you don't have to replace your working hardware, just because the vendor decided not to support it any more.

Software | Windows
# Monday, December 8, 2014

I'm often surprised to see Microsoft Access used as a platform for applications which would arguably require a more advanced database engine with proper support for multiple simultaneous users. Although I've never really learned it, I can understand why its low barrier to entry makes it a great starting point for new, simple data-oriented solutions; and we all know how these tend to grow out of hand. Unfortunately, in spite of all precautions, an Access database can still become corrupt. I've had a chance to try out Cimaware Software's AccessFIX tool, which can be really useful when that happens.

In contrast to Microsoft's official troubleshooting guides, the tool is very simple to use. This is true even for setup: it doesn't require administrative privileges to install, which is a very useful feature in enterprise environments where users usually don't have administrative access to their computers.

Once you run the application, a wizard guides you through the process of fixing a corrupt file. As soon as you select the file to fix, it shows you a preview of the data it managed to read. The authors claim it can handle even files which no longer open in Access at all. Not having any such file at hand, I had to settle for testing only its undelete functionality. In my case it worked perfectly: both deleted records and deleted tables showed up completely. Unfortunately the latter had lost their original names, but the data was intact. Of course, this is only possible when the data hasn't already been overwritten in the file. As with any undelete operation, the probability of success is higher the sooner you stop using the file after the delete.

Deleted records in AccessFIX

If you're satisfied with the results, you can save all the data to a new fixed file. Only at this step will you need to buy the application. I like how this allows you to first make sure it will do what you need before actually paying for it. Not only that: if it still doesn't work for you after the purchase, they offer to manually restore the file for you or return the money. They seem really confident in their product and don't want you to risk anything when you decide to go for it.

Anyway, when saving the fixed file, make sure you check the option to save deleted records; otherwise you might wonder, as I did, why they are not in the resulting file. It's also worth noting that they are saved into a separate table, just like in the preview inside AccessFIX, and you'll need to move them back to the original table yourself afterwards.

Save fixed file in AccessFIX

Based on my experience, I can find no reason not to recommend AccessFIX if you're struggling with a corrupt Access database. Just download it and try it out; you have nothing to lose. You can also check out the other data recovery tools in their portfolio while you're there. They have applications for Excel, Word and digital photos, as well.

Personal | Reviews | Software | Office
# Monday, December 1, 2014

The topic of today's post is quite controversial - for a good reason. The general advice is to avoid using IoC containers in your test code altogether. If you manage to do that, you'll never need to change their configuration in between tests. Unfortunately, achieving that when IoC usage is being retrofitted into an existing application can be challenging. Under such circumstances it might make sense to settle for a suboptimal solution which requires the IoC container to be configured appropriately in tests as well.

Taking this approach will quickly result in having to reconfigure it for some of the tests. Let's take a look at some code to see why.

public PersonDto InsertPerson(PersonDto personDto)
{
    if (!_validator.Validate(personDto).IsValid)
    {
        throw new ArgumentException("personDto");
    }
    var personModel = _mapper.Map(personDto);
    _repository.Insert(personModel);
    _repository.Save();
    return _mapper.Map(personModel);
}

You're likely to see similar code in many LOB applications. The method has 3 dependencies: a validator, a mapper and a repository. Usually you'll want to initialize them using constructor injection:

private readonly IValidator<PersonDto> _validator;
private readonly IMapper<PersonDto, PersonModel> _mapper;
private readonly IRepository<PersonModel> _repository;

public PersonManagementService(
    IValidator<PersonDto> validator, 
    IMapper<PersonDto, PersonModel> mapper, 
    IRepository<PersonModel> repository)
{
    _validator = validator;
    _mapper = mapper;
    _repository = repository;
}

This makes it really easy to configure even without an IoC container.

[Test]
public void ValidPersonIsInsertedIntoRepository()
{
    var validator = new PersonValidator();
    var mapper = new PersonMapper();
    var repository = new PersonRepository();
    var service = new PersonManagementService(validator, mapper, repository);
    var original = new PersonDto();

    var inserted = service.InsertPerson(original);

    inserted.ShouldBeEquivalentTo(original);
}

As you can see, the test can simply instantiate each dependency and pass it to the service constructor. If any of the dependencies needs to be mocked, a different class implementing the required interface can be instantiated instead. For example, the validator is already tested elsewhere, so in the test above it could be mocked to easily make the validation pass or fail without having to achieve that by creating a suitable DTO:

var validator = new MockValidator<PersonDto>(isValid: true);
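
The MockValidator class itself isn't shown in this post. A minimal sketch of how it might be implemented could look like the following; the exact shape of the IValidator<T> interface and its result type are assumptions here:

public class MockValidator<T> : IValidator<T>
{
    private readonly bool _isValid;

    public MockValidator(bool isValid)
    {
        _isValid = isValid;
    }

    public ValidationResult Validate(T instance)
    {
        // always return the preconfigured outcome, ignoring the actual instance
        return new ValidationResult { IsValid = _isValid };
    }
}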

Now imagine that the InsertPerson method is in a class which doesn't give you full control over its instantiation, but you still want the dependencies to be injected into it in some way. Here's one way to do it:

private readonly IValidator<PersonDto> _validator = 
    NinjectKernel.Instance.Get<IValidator<PersonDto>>();
private readonly IMapper<PersonDto, PersonModel> _mapper = 
    NinjectKernel.Instance.Get<IMapper<PersonDto, PersonModel>>();
private readonly IRepository<PersonModel> _repository = 
    NinjectKernel.Instance.Get<IRepository<PersonModel>>();

Instead of being injected through the constructor, the dependencies are now initialized by a call to an IoC container singleton. In the startup code this container can be initialized as needed:

NinjectKernel.Instance.Bind<IValidator<PersonDto>>().To<PersonValidator>();
NinjectKernel.Instance.Bind<IMapper<PersonDto, PersonModel>>().To<PersonMapper>();
NinjectKernel.Instance.Bind<IRepository<PersonModel>>().To<PersonRepository>();
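
The NinjectKernel class used above isn't part of Ninject itself; it's just a thin singleton wrapper around a kernel. A minimal sketch of such a wrapper, assuming nothing more than a shared instance is needed, might look like this:

public static class NinjectKernel
{
    // single kernel instance shared by the whole application; the setter
    // makes it possible to replace the kernel, e.g. when reinitializing
    // bindings in test setup code
    public static IKernel Instance { get; set; }

    static NinjectKernel()
    {
        Instance = new StandardKernel();
    }
}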

Of course you can define different bindings in your test project, but since the container is a singleton, you don't have all that many options to vary these bindings between tests. The most obvious way would be to rebind a specific binding inside the test:

var validator = new MockValidator<PersonDto>(isValid: true);
NinjectKernel.Instance.Rebind<IValidator<PersonDto>>().ToConstant(validator);

Unfortunately, you can't just revert back to the previous binding, so this test will have a side effect: all tests run after it will use the new binding. To avoid that, you could initialize all the bindings before each test. Since there's no simple way to reset a kernel in Ninject, that leaves you with only two options:

  • Use only Rebind calls instead of Bind for initialization, and avoid loading modules because you can't load them twice into the same kernel.
  • Create a new singleton instance and initialize it from scratch (see the sketch below).
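
As an illustration of the second option, here's a sketch of what recreating the kernel before each test might look like with NUnit, assuming a NinjectKernel wrapper along the lines of the sketch above:

[SetUp]
public void ResetKernel()
{
    // throw away the old kernel and rebuild all the bindings from scratch
    NinjectKernel.Instance = new StandardKernel();
    NinjectKernel.Instance.Bind<IValidator<PersonDto>>().To<PersonValidator>();
    NinjectKernel.Instance.Bind<IMapper<PersonDto, PersonModel>>().To<PersonMapper>();
    NinjectKernel.Instance.Bind<IRepository<PersonModel>>().To<PersonRepository>();
}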

I recently found myself in a similar situation and didn't really like any of the options. After some investigation I found an alternative approach based on my previous post about object scoping. I took advantage of contextual binding support in Ninject:

var validator = new MockValidator<PersonDto>(isValid: true);

var testName = TestContext.CurrentContext.Test.FullName;
NinjectKernel.Instance.Bind<IValidator<PersonDto>>()
    .ToConstant(validator)
    .When(_ => TestContext.CurrentContext.Test.FullName == testName);

The above code also depends on NUnit's TestContext, which provides a unique full name for each test. This way the added binding is only going to be valid for the duration of that particular test. Once I got it working, I wrapped the code in a simple-to-use extension method:

public static IBindingInNamedWithOrOnSyntax<T> 
    WhenInCurrentTest<T>(this IBindingWhenSyntax<T> binding)
{
    var testName = TestContext.CurrentContext.Test.FullName;
    return binding.When(_ => 
        TestContext.CurrentContext.Test.FullName == testName);
}

With its help the intent of the test code becomes much more obvious:

var validator = new MockValidator<PersonDto>(isValid: true);
NinjectKernel.Instance.Bind<IValidator<PersonDto>>()
    .ToConstant(validator).WhenInCurrentTest();

I still prefer not having to use IoC containers in my tests, but at least I made it more bearable when I can't avoid it.

Development | .NET | Testing
# Monday, November 24, 2014

Encouraged by Scott Hanselman's Get Involved video, I started experimenting with DocPad. For someone who's been mostly focused on the .NET framework and Microsoft technologies, this turned out to be quite a step outside the comfort zone. Not in a bad way, though. Learning Node.js, CoffeeScript, and Embedded CoffeeScript (ECO) templates is an interesting challenge and a great way to broaden my horizons.

Recently I wanted to perform the simple task of grouping a list of pages by the month they were published in. In .NET I would use LINQ and be done with it in a matter of minutes. Doing this in my new technology stack took a bit more time. My first working solution was not something I'd want to show to anyone. However, after some refactoring and cleaning up, I ended up with code I even decided to blog about (mostly to serve as a reference while I continue to learn more).
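
Just for comparison, this is roughly the LINQ code I had in mind; the Post class and its properties are only assumptions for this sketch:

public class Post
{
    public DateTime Date { get; set; }
    public string Url { get; set; }
    public string Title { get; set; }
}

public IEnumerable<IGrouping<string, Post>> GroupPostsByMonth(IEnumerable<Post> posts)
{
    // group the already sorted posts by the month they were published in
    return posts.GroupBy(post => post.Date.ToString("MMMM yyyy"));
}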

My starting point in DocPad was the result of the following call:

posts = @getCollection("blogposts").toJSON()

Here's a sample of it:

var posts = 
[
  {
    "date": newDate("2014-11-10"),
    "url": "ScopeNinjectBindingsToIndividualTests.aspx",
    "title": "Scope Ninject Bindings to Individual Tests"
  },
  {
    "date": newDate("2014-11-03"),
    "url": "CustomizingComparisonsInFluentAssertionsWithOptions.aspx",
    "title": "Customizing Comparisons in FluentAssertions with Options"
  },
  {
    "date": newDate("2014-10-27"),
    "url": "LightMessageBusAStandaloneLightweightEventAggregator.aspx",
    "title": "LightMessageBus: A Standalone Lightweight Event Aggregator"
  }
];

It's a collection of page metadata, already sorted by date in descending order.

Surprisingly (at least to me), there's already a built-in reduce function available on JavaScript arrays; you just need to call it correctly:

postsByMonth = posts.reduce((previous, current, index, context) ->
  month = moment(current.date).format("MMMM YYYY")
  if previous[month]
    previous[month].push(current) 
  else
    previous[month] = [ current ]
  previous
{})

I started out with an empty object, passed in as the initial value for the accumulator. For each month, I added a property to it; each property contains an array of posts for that month. The function returns the modified object so that it can be passed back in on the next call. I'm using the moment.js library to convert the date into a string representing the month.

With just a little more effort this can be converted into a reusable grouping function:

arrayGroupBy = (array, aggregate) ->
  array.reduce((previous, current, index, context) ->
    group = aggregate(current)
    if previous[group]
      previous[group].push(current)
    else
      previous[group] = [ current ]
    previous
  {})

The aggregate function can be passed into it as a parameter:

postsByMonth = arrayGroupBy(posts, (post) ->
  moment(post.date).format("MMMM YYYY"))

Since I was going to use the date formatting elsewhere, I extracted it into a function as well, and changed the call accordingly:

dateToMonthAndYear = (date) -> moment(date).format("MMMM YYYY")
postsByMonth = arrayGroupBy(posts, (post) -> dateToMonthAndYear(post.date))

Now I was ready to use my code in the page template:

<ul>
<% posts = @getCollection("blogposts").toJSON() %>
<% aggregate = (post) => @dateToMonthAndYear(post.date) %>
<% postsByMonth = @arrayGroupBy(posts, aggregate) %>
<% for month in Object.keys(postsByMonth): %>
  <li><%= month %></li>
  <ul>
  <% for post in postsByMonth[month]: %>
    <li>
      <a href="<%= post.url %>">
        <%= post.title %>
      </a>
    </li>
  <% end %>
  </ul>
<% end %>
</ul>

Notice how I first iterated through the properties of postsByMonth by calling Object.keys(), and then through the posts by accessing the object's properties as items in an associative array.

For those familiar with DocPad, I should mention that I have declared dateToMonthAndYear and arrayGroupBy in the docpad.coffee configuration file; that's why I'm accessing them using the @ shortcut syntax for "this" in CoffeeScript. Here's the relevant part of my configuration file:

moment = require('moment')

docpadConfig = {
  templateData:
    dateToMonthAndYear: (date) -> moment(date).format("MMMM YYYY")
    arrayGroupBy: (array, aggregate) ->
      array.reduce((previous, current, index, context) ->
        group = aggregate(current)
        if previous[group]
          previous[group].push(current)
        else
          previous[group] = [ current ]
        previous
      {})
}

It's also worth mentioning that I've used the fat arrow when declaring the aggregate. This way "this" stays bound to its value in the template, making the dateToMonthAndYear function available.

Yes, I know there is a LINQ library available for JavaScript which I could use instead, but I think it's a bit of overkill in this case. And the end result wouldn't be all that different either, although a bit easier to understand for .NET developers:

moment = require('moment')
enumerable = require('linq')

docpadConfig = {
  templateData:
    dateToMonthAndYear: (date) -> moment(date).format("MMMM YYYY")
    arrayGroupBy: (array, aggregate) -> enumerable.
      from(array).
      groupBy((post) -> aggregate(post)).
      select((group) ->
        {
          key: group.key()
          values: group.toArray()
        }).
      toArray()
}

Since arrayGroupBy now returns an array instead of an object, the looping in the template would be implemented slightly differently as well:

<ul>
<% posts = @getCollection("blogposts").toJSON() %>
<% aggregate = (post) => @dateToMonthAndYear(post.date) %>
<% postsByMonth = @arrayGroupBy(posts, aggregate) %>
<% for month in postsByMonth: %>
  <li><%= month.key %></li>
  <ul>
  <% for post in month.values: %>
    <li>
      <a href="<%= post.url %>">
        <%= post.title %>
      </a>
    </li>
  <% end %>
  </ul>
<% end %>
</ul>

I don't know about you, but I decided not to use linq.js for now. I might change my mind if I need more complex array manipulation in my project.

Development | CoffeeScript | JavaScript | Software | DocPad