Notes from Daily Encounters with Technology
# Sunday, May 17, 2015

When presenting demos in Visual Studio in front of an audience, you should make sure all text is large enough for everyone to see, so that they can follow along. If you're only focusing on code, it's usually enough to just zoom the text (using the mouse wheel or the combobox in the bottom left corner of the editor). Productivity Power Tools for Visual Studio 2013 introduced a great feature called Presenter Mode, temporarily increasing the fonts both in the text editor and Visual Studio itself, without having to change the settings and thus affecting your working environment.

Unfortunately, Productivity Power Tools is not (yet) available for Visual Studio 2015, so for the time being we need to make do without it. Text zooming should work fine for code. For the rest of Visual Studio, ZoomIt might work for you, if you're not planning to show too much interaction. You could also try changing the display's DPI settings to make everything larger.

If neither of those options works for you, you can take advantage of color themes, because each theme has its own font settings. Switching between themes therefore effectively means switching between different preset settings. This makes it possible to use one theme for presenting and a different one for regular work, keeping the two sets of settings separate.

You should never use the dark theme for presenting because it makes the text much more difficult to read on projectors, which leaves you with two themes to choose from: light and blue. If you're used to working with one of them, stick with it and modify the other one for presenting.

To set it up, open the Options window and on the Environment > General page select the Color theme you want to dedicate to presentations (Light in my case). Click OK to make the switch, then reopen the window.

Switching the color theme

Now navigate to Environment > Fonts and Colors page. You'll need to change only two settings here (the fonts below should match the defaults in Productivity Power Tools Presenter Mode; feel free to set them to your liking):

  • First, select Text Editor in the Show settings for dropdown. In the Display items list, select Text, then set Font to Consolas and Size to 14.
  • Then select Environment Font in the Show settings for dropdown. Display items will only contain Plain Text; set its Font to Segoe UI and Size to 11.

Setting the fonts

Click OK again to commit the changes. Now switching between the two themes also switches between your own working font settings and the presentation font settings. Try it out!

Sunday, May 17, 2015 5:46:58 PM (Central European Daylight Time, UTC+02:00)
Software | VisualStudio
# Saturday, May 16, 2015

Most .NET developers are really spoiled as far as debugging goes. Most of the time we just start the application from Visual Studio and the debugger is already attached, ready to break the execution at breakpoints or when an exception is raised. If that's not possible, the Visual Studio debugger can easily be attached to an already running process, either on the local machine or on a remote machine with Remote Tools installed. It even works when the application is hosted in Azure.

However, none of the above options can be used when an application starts misbehaving in production (slow response times, seemingly random and non-reproducible exceptions, application crashes, etc.). When logging and instrumentation are not enough to resolve the problem, it's time to create a memory dump and analyze it in WinDbg. Due to its steep learning curve, using it for the first time is quite a scary thought. And since most of us only need it on rare occasions, enough time passes between uses that it's hardly any easier the next time. I've created this short cheat sheet to make things a bit easier for myself when I'm in such a situation again. I hope it will prove helpful to others as well.

Although it is possible to create a memory dump from Task Manager by right-clicking the process of interest and selecting the appropriate option, this is not the recommended way of doing it. Not only does it provide only basic functionality, it doesn't even create a useful memory dump for 32-bit processes running on 64-bit Windows.

Your best choice for creating memory dumps is ProcDump from the Windows Sysinternals suite. Just like the other tools from the suite, it can simply be copied to the machine and run without installing, making it even more suitable for use in a production environment. To create a full memory dump of a specific process at that point in time, call:

procdump -ma <pid> <filename>

Of course, you should replace <pid> with the process ID of your process (available in Task Manager) and <filename> with the desired name for the generated memory dump file (it's best to use the .dmp extension). The tool supports many command line options to tailor it to your needs. For example, the following call will create a memory dump when the process terminates:

procdump -t -ma <pid> <filename>

Although this is the least invasive way of obtaining run-time information for debugging, creating the dump can still take several seconds, depending on the size of the process. During this time the process is paused, so take this into consideration when using it in production. For processes consuming a lot of memory the generated file will also be quite large, so plan ahead where to save it and how to transfer it to your development machine, where you will be able to analyze it.
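ProcDump can also write dumps automatically when a trigger fires, which is often more practical in production than catching the right moment by hand. A few sketches of such calls (the thresholds below are arbitrary examples, not recommendations; check procdump -? for the exact set of options supported by your version):

```
:: write a full dump when the process hits an unhandled exception
procdump -ma -e <pid> <filename>

:: write a full dump when CPU usage stays above 80% for 10 consecutive seconds
procdump -ma -c 80 -s 10 <pid> <filename>

:: write 3 full dumps, 30 seconds apart
procdump -ma -s 30 -n 3 <pid> <filename>
```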

To analyze memory dumps you will need to install WinDbg on your development machine. It is distributed as part of the Windows SDK. Assuming you already have Visual Studio installed, there's no need to download and install the SDK in its entirety; it's enough to select only Debugging Tools for Windows in the setup dialog:

Windows SDK Setup Dialog

In case you didn't pay attention at the very beginning of the setup wizard: WinDbg gets installed into C:\Program Files (x86)\Windows Kits\8.1\, in the Debuggers subfolder, to be exact.

Once you start the correct version of WinDbg (either x86\windbg.exe or x64\windbg.exe, depending on whether you want to analyze a memory dump of a 32-bit or a 64-bit process), the first step is to load the memory dump (File > Open Crash Dump... or Ctrl+D). From here on, you'll proceed by typing commands.

To make any sense of the dump, you'll first need to set the symbol path:

.symfix C:\Symbols

This automatically points WinDbg at Microsoft's symbol store and at the same time creates a local cache for the symbols at the given path. If you need to troubleshoot the loading of symbols, you can enable verbose output before calling the above command:

!sym noisy

Since you're not always debugging the same version of the CLR that you have on your machine, you should do a verbose reload of the debugger modules, which will report any missing files:

.cordll -ve -u -l

If it complains about a missing mscordacwks.dll or SOS.dll, you will need to retrieve them from the machine where the memory dump was created and repeat the above command.

The next issue is usually missing symbols (.pdb files) for your own assemblies. You'll either need to add the folder containing them to the symbol path or copy them directly to the cache folder, as reported by the above command.
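For example, appending a folder with your own .pdb files to the existing symbol path and reloading the symbols looks like this (the path is, of course, just a placeholder for your own build output folder):

```
.sympath+ C:\MyApp\bin\Release
.reload
```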

Now you're finally ready to start the analysis. Depending on the type of the issue, you'll want to start with one of the SOS commands, for example:

!threads (list the managed threads)
!clrstack (show the managed call stack of the current thread)
!pe (print the last exception on the current thread)
!dumpheap -stat (show managed heap statistics)

That should be enough to get you going. For more information refer to the SOS debugging extension documentation, check Tess Ferrandez's blog, or search the internet.

Saturday, May 16, 2015 8:36:34 PM (Central European Daylight Time, UTC+02:00)
Development | .NET | Software | Windows
# Saturday, May 9, 2015

Effective development of diagnostic analyzers strongly depends on unit testing. Even if you're not a proponent of TDD or testing in general, you'll start to share my opinion as soon as you attempt to debug an analyzer for the first time. Debugging a diagnostic analyzer requires starting a second instance of Visual Studio, which hosts the debugged analyzer as an extension. This takes far too long to be useful during development, so using tests instead is a must.

Fortunately, a test project is automatically created for you by the template, which gets you going without having to understand all the details of how it actually works. You can base your tests on the two examples included in the generated test project:

//No diagnostics expected to show up
[TestMethod]
public void TestMethod1()
{
    var test = @"";

    VerifyCSharpDiagnostic(test);
}

//Diagnostic and CodeFix both triggered and checked for
[TestMethod]
public void TestMethod2()
{
    var test = @"
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Diagnostics;

    namespace ConsoleApplication1
    {
        class TypeName
        {   
        }
    }";
    var expected = new DiagnosticResult
    {
        Id = Analyzer9Analyzer.DiagnosticId,
        Message = String.Format(
            "Type name '{0}' contains lowercase letters", 
            "TypeName"),
        Severity = DiagnosticSeverity.Warning,
        Locations =
            new[] 
            {
                new DiagnosticResultLocation("Test0.cs", 11, 15)
            }
    };

    VerifyCSharpDiagnostic(test, expected);

    var fixtest = @"
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using System.Diagnostics;

    namespace ConsoleApplication1
    {
        class TYPENAME
        {   
        }
    }";
    VerifyCSharpFix(test, fixtest);
}

To create your own test class, derive it from DiagnosticVerifier or CodeFixVerifier, based on whether you will only test a diagnostic analyzer or a code fix as well. Make sure you instantiate the right analyzer, include some source code to try it out on, and you're ready to test:

[TestClass]
public class UnitTest : CodeFixVerifier
{

    //No diagnostics expected to show up
    [TestMethod]
    public void TestMethod1()
    {
        var test = @"
    using System.Text.RegularExpressions;

    namespace RegExSample
    {
        public class Class1
        {
            public void Foo()
            {
                Regex.Match("""", """");
            }
        }
    }";

        VerifyCSharpDiagnostic(test);
    }

    //Diagnostic triggered and checked for
    [TestMethod]
    public void TestMethod2()
    {
        var test = @"
    using System.Text.RegularExpressions;

    namespace RegExSample
    {
        public class Class1
        {
            public void Foo()
            {
                Regex.Match("""", ""["");
            }
        }
    }";
        var expected = new DiagnosticResult
        {
            Id = RegexAnalyzerAnalyzer.DiagnosticId,
            Message = String.Format(
                "Regular expression is invalid: {0}", 
                @"parsing ""["" - Unterminated [] set."),
            Severity = DiagnosticSeverity.Error,
            Locations =
                new[] 
                {
                    new DiagnosticResultLocation("Test0.cs", 10, 33)
                }
        };

        VerifyCSharpDiagnostic(test, expected);
    }

    protected override DiagnosticAnalyzer GetCSharpDiagnosticAnalyzer()
    {
        return new RegexAnalyzerAnalyzer();
    }
}

The only tricky part is getting the correct values for the expected diagnostic. In my experience, it's best to enter your best guess first, let the test fail because of the mismatched values, and then fix the expected properties one by one until the test passes. At least for the location, this is easier than trying to calculate it yourself.

The approach described above works fine as long as the code snippets for testing your analyzers require no assembly references. If they do, your analyzer will stop reporting the diagnostic (or misbehave in a different way, depending on how you have implemented it) because it won't find the expected symbols. With the current templates (RC) there is no simple way to add those references per test method or test class; you'll need to modify the plumbing code that comes with the templates.

If you take a closer look at that plumbing, you will soon drill down to the CreateProject method inside DiagnosticVerifier.Helper.cs:

/// <summary>
/// Create a project using the inputted strings as sources.
/// </summary>
/// <param name="sources">Classes in the form of strings</param>
/// <param name="language">The language the source code is in</param>
/// <returns>A Project created out of the Documents created from the source strings</returns>
private static Project CreateProject(string[] sources, string language = LanguageNames.CSharp)
{
    string fileNamePrefix = DefaultFilePathPrefix;
    string fileExt = language == LanguageNames.CSharp ? 
        CSharpDefaultFileExt : VisualBasicDefaultExt;

    var projectId = ProjectId.CreateNewId(debugName: TestProjectName);

    var solution = new AdhocWorkspace()
        .CurrentSolution
        .AddProject(projectId, TestProjectName, TestProjectName, language)
        .AddMetadataReference(projectId, CorlibReference)
        .AddMetadataReference(projectId, SystemCoreReference)
        .AddMetadataReference(projectId, CSharpSymbolsReference)
        .AddMetadataReference(projectId, CodeAnalysisReference);

    int count = 0;
    foreach (var source in sources)
    {
        var newFileName = fileNamePrefix + count + "." + fileExt;
        var documentId = DocumentId.CreateNewId(projectId, debugName: newFileName);
        solution = solution.AddDocument(documentId, newFileName, SourceText.From(source));
        count++;
    }
    return solution.GetProject(projectId);
}

Here a project is created from your sample source code and references are added before the compilation is done. By default only 4 assemblies are referenced: mscorlib.dll, System.Core.dll, Microsoft.CodeAnalysis.CSharp.dll, and Microsoft.CodeAnalysis.dll. Since the method is static, you'll need to add the additional references for all your tests directly to this method, unless you want to do some major refactoring of the code.

You can find existing MetadataReference instances initialized at the top of the class. I suggest you add your own right there beside them:

// create System.dll reference
private static readonly MetadataReference SystemReference = 
    MetadataReference.CreateFromAssembly(
        typeof(System.Text.RegularExpressions.Regex).Assembly);

Now you can include it in the project using the fluent API:

var solution = new AdhocWorkspace()
    .CurrentSolution
    .AddProject(projectId, TestProjectName, TestProjectName, language)
    .AddMetadataReference(projectId, CorlibReference)
    .AddMetadataReference(projectId, SystemCoreReference)
    .AddMetadataReference(projectId, CSharpSymbolsReference)
    .AddMetadataReference(projectId, CodeAnalysisReference)
    // include System.dll reference in the project
    .AddMetadataReference(projectId, SystemReference);

This should be enough to get you over the initial humps of unit testing your analyzers. I suggest you still try them out in Visual Studio at the end, but for most of the development time you should be able to manage without it. Doing it like this will save you a lot of time and make development a much more pleasant experience. Now go write that analyzer you've always wanted to have.

Saturday, May 9, 2015 8:26:19 PM (Central European Daylight Time, UTC+02:00)
Development | Roslyn | Testing
# Sunday, April 19, 2015

Diagnostic analyzers are a new extension point in Visual Studio 2015, enabled by the new .NET Compiler Platform (codename: Roslyn). They are custom pieces of code analyzing the source code; they run at compile time, but are also scheduled in the background inside Visual Studio whenever the source code changes. The results of the analysis are warnings and errors, treated the same way as the ones built into the C# (or Visual Basic) compiler.

In the past such code analysis could only be added to Visual Studio by high profile extensions, such as ReSharper, CodeRush, and JustCode, which had to reimplement the compiler themselves. Each of them included its own custom API for writing plugins. Such market fragmentation of course resulted in a pretty small number of available plugins.

Thanks to Roslyn, the price of entry for performing custom code analysis has dropped significantly, hence there are already many diagnostic analyzers available, both from Microsoft (FxCop Analyzers, Code Analysis for Azure) and from open source initiatives (.NET Analyzers, Code Cracker, C# Essentials), even though the final version of Visual Studio 2015 has not been released yet. I can imagine a future where library authors include diagnostic analyzers as guidance for their users, and larger development companies develop internal diagnostic analyzers to enforce their coding style.

I've already experimented with diagnostic analyzers in the first Visual Studio 2015 Preview, but since I'm going to speak about this subject at an upcoming local conference, I had to configure a working environment using the latest Visual Studio 2015 CTP 6. Although I knew exactly what I needed, it turned out to be quite a hassle to get everything up and running, so here's a complete list of the required software. Just keep in mind that it is very likely going to change with the next release of Visual Studio 2015.

  1. Obviously, you first need to install Visual Studio 2015 CTP 6.
  2. To add support for building VSIX packages (installers for Visual Studio extensions), you will also need to install the Microsoft Visual Studio 2015 SDK. The diagnostic analyzer template includes a VSIX project and will fail without this component installed. You will also have no simple way to debug your diagnostic analyzer without it.
  3. The .NET Compiler SDK Templates for CTP 6 Visual Studio extension adds the Diagnostic with Code Fix (NuGet + VSIX) project template to Visual Studio, which creates a simple working diagnostic analyzer to give you a jump start.
  4. Not strictly required, but highly recommended, is the .NET Compiler Platform Syntax Visualizer for CTP 6 Visual Studio extension, which gives insight into the syntax tree of the code you are currently editing, as the Roslyn compiler sees it. It will prove extremely useful when you start developing your own diagnostic analyzer.

.NET Compiler Platform Syntax Visualizer

Once you have installed everything, you're ready to create your first diagnostic analyzer project. Just open Visual Studio 2015, create a new project, and select the Visual C# > Extensibility > Diagnostic with Code Fix (NuGet + VSIX) template in the New Project dialog. There's a nice ReadMe.txt file included in the generated solution; just follow the instructions in its second paragraph to try out the generated code analyzer: make sure the VSIX project is set as the startup project and press F5 to start debugging.

This should open a new instance of Visual Studio 2015 with the code analyzer already deployed to it. Create a new class library project inside it and you should already see your code analyzer in action, suggesting that you make the type name all upper case:

Diagnostic analyzer template in action

Sunday, April 19, 2015 9:43:52 AM (Central European Daylight Time, UTC+02:00)
Development | Roslyn | Software | VisualStudio
# Sunday, April 5, 2015

Rich Helton: Learning NServiceBus Sagas

After I published the review of Learning NServiceBus - Second Edition, I was contacted by a Packt Publishing representative, asking whether I'd like to review another book about NServiceBus they had recently released: Learning NServiceBus Sagas by Rich Helton. Seeing it as a logical continuation of the book I had already read, I agreed to do it.

In spite of the introductory chapter, the reader should already be familiar with the concepts of an enterprise service bus and the patterns related to it, as well as with the basics of NServiceBus. The book is also not really focused on sagas at all; it gets distracted by many other technologies, such as ASP.NET MVC, WCF, Microsoft Azure, and others, without really telling much about any of them either.

The samples are very contrived and almost impossible to make sense of just by reading the book, because their crucial parts are not explained and can only be found in the accompanying download. Not even the basic functionality and structure of sagas is properly explained, much less any advanced concepts and usage scenarios. On the other hand, just to prove how simple it is to switch the transport from MSMQ to other queuing technologies, the process is described five times in different parts of the book, making NServiceBus use RabbitMQ, ActiveMQ (twice), Azure Storage Queues, and Azure Service Bus instead.

The content is not well structured; the author keeps jumping from one topic to another, often repeating himself, which makes the book very difficult to follow. On top of that, the text is occasionally even misleading; e.g., unit testing (much less TDD) is not about initially writing the code in a test and then copying it to its correct location. I can't really recommend the book to anyone. It's just a waste of time and money. You'll learn more by reading the official documentation and blog posts on the topic.

The book is available on Amazon and sold directly by the publisher.

Sunday, April 5, 2015 11:01:07 AM (Central European Daylight Time, UTC+02:00)
Development | .NET | Personal | Reviews
All Content © 2015, Damir Arh, M. Sc.