
2025-03-03

Where did the C# Programming Language Name Originate

In the late 1980s at Microsoft, there was an attempt to construct some extensions to the C++ programming language, which were internally named C#. The project failed, was cancelled, and the work was never used: the database the language used internally required more resources than the hardware available at the time could provide.

However, about a decade later, when Microsoft again set out to create a programming language, it recycled the name, which now belongs to one of the most popular programming languages. The name was meant to suggest C++++, if you view the # sign as four plus signs.

2021-01-24

Solving Mastermind

A question was posted online recently about the best strategy for winning the game of Mastermind. Mastermind is played between two players, a Code Maker and a Code Breaker. The Maker makes a code of colored pegs, and the Breaker has to guess the code. After each guess, the Maker gives feedback on how many pegs are the right color in the right place and how many are the right color in the wrong place, indicated by black and white pegs on the board. The Breaker then makes another guess.

The parameters of the game are how many possible colors there are, how many pegs are in the code, and whether the same color is allowed to be repeated in the code, as in (4 of 6, repeats).

Donald Knuth wrote a paper on optimal play for the Breaker and showed that the game with four pegs in the code and six possible colors, with repeats, can be solved in no more than five tries. Donald Knuth is a deity of Computer Science, having written The Art of Computer Programming. I wrote a program to implement Knuth's algorithm in C#. It also produces a table at the end showing perfect play.

In my program, I replace colors with numerals since the colors are arbitrary. I have placed the code on GitHub. You can try the suggested algorithm on this site.

The program uses a minimax algorithm: on each turn it finds the guess that minimizes the worst-case number of codes that could remain. Because of the way it works, it will sometimes play a guess that cannot itself be the answer, but that guarantees solving the code in the fewest tries. There are other algorithms that solve the game in a smaller average number of tries, at the cost of a possibly larger maximum.
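
As a sketch of that minimax step (my own paraphrase, not the code from the repository; the class and method names are invented for illustration): codes are strings of digits, Score computes the black/white feedback, and BestGuess picks the guess whose worst-case feedback class leaves the fewest candidates.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MastermindSketch
{
    // Feedback for a guess against a secret code: blacks = right digit in the
    // right place; whites = right digit in the wrong place.
    internal static (int blacks, int whites) Score(string secret, string guess)
    {
        int blacks = secret.Zip(guess, (s, g) => s == g ? 1 : 0).Sum();

        // Count matches per digit regardless of position; whites are the
        // positionless matches not already counted as blacks.
        int common = "123456".Sum(c =>
            Math.Min(secret.Count(x => x == c), guess.Count(x => x == c)));

        return (blacks, common - blacks);
    }

    // Minimax: pick the guess whose worst-case feedback class leaves the
    // fewest possible remaining codes.
    internal static string BestGuess(List<string> allCodes, List<string> possible)
    {
        string best = null;
        int bestWorst = int.MaxValue;
        foreach (string guess in allCodes)
        {
            int worst = possible.GroupBy(code => Score(code, guess))
                                .Max(g => g.Count());
            if (worst < bestWorst)
            {
                bestWorst = worst;
                best = guess;
            }
        }

        return best;
    }

    static void Main()
    {
        // All 6^4 = 1296 codes for the (4 pegs, 6 colors, repeats) game.
        var all = (from a in "123456"
                   from b in "123456"
                   from c in "123456"
                   from d in "123456"
                   select new string(new[] { a, b, c, d })).ToList();

        // Knuth showed the optimal opening guess has the pattern of "1122".
        Console.WriteLine(BestGuess(all, all));
    }
}
```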

2018-08-30

Using the New Features in the Latest Versions of C#

The current version of Visual Studio 2017 (15.8.2 the day this is posted) actually supports C# version 7.3. You can see the new features by looking at the C# feature list. However, by default, Visual Studio will use C# version 7.0. To use versions after 7.0, you will need to go to the project properties, select Build, then click the Advanced button. In the dialog is a setting for Language Version. Changing this to 7.3, for example, will enable the latest features.
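
Under the hood, that dialog just records the choice in the project file; the equivalent MSBuild property can be set by hand (shown here as a bare fragment of a .csproj):

```xml
<PropertyGroup>
  <LangVersion>7.3</LangVersion>
</PropertyGroup>
```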

You can use this same setting for turning off features. If you don't like the things they added in C# version 7, you can go back to 6, or even back to 3. They have been pretty good, however, at not screwing up the language with features added in later versions. I can't think of a feature where I went, "I wish they hadn't put that in the language." I think lambda expressions are overused by a lot of people, but there are places where they are appropriate. I also use "var" as little as possible, but there are places where var is necessary and useful. The usage of these features is a coding style issue, not a problem with the language itself.

You can see the features that might be coming in future versions of C# at this page. The biggest feature that is being discussed is non-nullable reference types. With these, you can specify that a specific reference type cannot ever be null. This will likely change how a lot of C# code gets written.
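
As a sketch of what this looks like, using the syntax that eventually shipped in C# 8 (after this post was written), where plain reference types become non-nullable and a trailing question mark opts back in:

```csharp
#nullable enable
using System;

class NullabilitySketch
{
    // With the feature enabled, the string parameter below would warn if it
    // lacked the ?; the compiler's flow analysis requires a null check
    // before a nullable reference is used.
    internal static string Describe(string? maybe)
    {
        return maybe ?? "was null";
    }

    static void Main()
    {
        Console.WriteLine(Describe(null));      // prints "was null"
        Console.WriteLine(Describe("present")); // prints "present"
    }
}
```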

2018-06-10

Using UserControls with Caliburn.Micro

It is common to want to create a reusable UserControl, to be placed into a WPF (Windows Presentation Foundation) screen. This can be done one of two ways:
  • ViewModel First
  • View First
The techniques below show how to do both of these schemes using Caliburn.Micro to perform the plumbing that connects them up. It took me quite a bit of research to figure out how to make these happen, particularly the View First scheme. Both techniques can be used to create a UserControl in a Window that was itself generated using the other technique. For example, a window created using the ViewModel First scheme can include a UserControl created using the View First scheme.

In the example code below, the main window View is called MainWindowView and has a ViewModel called MainWindowViewModel. The ViewModel First control has a ViewModel called ViewModelFirstTestControlViewModel, which is displayed with the View called ViewModelFirstTestControlView. The View First control has a View called ViewFirstTestControlView and a ViewModel called ViewFirstTestControlViewModel.

The ViewModel First scheme places a ContentControl into the MainWindowView, with an x:Name attribute. For example:

<ContentControl
 x:Name="ViewModelFirstTestControlViewModel" />

The MainWindowViewModel then has this code:

namespace TestSystem.ViewModels
{
 using Caliburn.Micro;
 
 /// <summary>A ViewModel for the main window.</summary>
 /// <seealso cref="T:Caliburn.Micro.PropertyChangedBase"/>
 public class MainWindowViewModel : PropertyChangedBase
 {
  /// <summary>Initializes a new instance of the <see cref="MainWindowViewModel"/> class.</summary>
  public MainWindowViewModel()
  {
   this.ViewModelFirstTestControlViewModel = new ViewModelFirstTestControlViewModel("ViewModel First Set Content");
  }
 
  /// <summary>Gets the ViewModelFirst test control view model.</summary>
  /// <value>The ViewModelFirst test control view model.</value>
  public ViewModelFirstTestControlViewModel ViewModelFirstTestControlViewModel
  {
   get;
   private set;
  }
 }
}

So the constructor of the MainWindowViewModel instantiates the ViewModel of the UserControl, passing any arguments to initialize the values in the control. A property with the same name as the x:Name of the ContentControl exposes that ViewModel to the ContentControl. When the ContentControl needs to display the ViewModel, Caliburn.Micro finds the appropriate View and displays that as the content of the ContentControl.

The content of the actual UserControl View in this example looks like this, but could be virtually anything you want:


<UserControl
 x:Class="TestSystem.Views.ViewModelFirstTestControlView"
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <StackPanel>
  <TextBlock
   Text="{Binding Path=Caption}" />
 </StackPanel>
</UserControl>

The ViewModel for the control in this example looks like this:

namespace TestSystem.ViewModels
{
 using Caliburn.Micro;
 
 /// <summary>A ViewModel for the ViewModelFirst test control. This class cannot be inherited.</summary>
 /// <seealso cref="T:Caliburn.Micro.PropertyChangedBase"/>
 public sealed class ViewModelFirstTestControlViewModel : PropertyChangedBase
 {
  /// <summary>The caption.</summary>
  private string caption = "Default ViewModel first caption";
 
  /// <summary>
  /// Initializes a new instance of the <see cref="ViewModelFirstTestControlViewModel"/> class.</summary>
  public ViewModelFirstTestControlViewModel()
  {
  }
 
  /// <summary>
  /// Initializes a new instance of the <see cref="ViewModelFirstTestControlViewModel"/> class.</summary>
  /// <param name="caption">The caption.</param>
  public ViewModelFirstTestControlViewModel(string caption)
  {
   this.caption = caption;
  }
 
  /// <summary>Gets or sets the caption.</summary>
  /// <value>The caption.</value>
  public string Caption
  {
   get
   {
    return this.caption;
   }
 
   set
   {
    if (value != this.caption)
    {
     this.caption = value;
     this.NotifyOfPropertyChange(() => this.Caption);
    }
   }
  }
 }
}

The main point about the code is that there is a constructor that takes any initial values to be set for the control. You may not actually need the default constructor.

Now, let's examine how to do virtually the same thing, but do it View First. In the MainWindowView, there is this code to place the control into the View:

<ctl:ViewFirstTestControlView
 cm:Bind.Model="TestSystem.ViewModels.ViewFirstTestControlViewModel"
 Caption="View First Set Content" />

For this XAML to work, two namespaces must be defined:

 xmlns:cm="http://www.caliburnproject.org"
 xmlns:ctl="clr-namespace:TestSystem.Views" 

The cm namespace comes from the Caliburn.Micro project. Many people use "cal" instead of "cm", but I've got a namespace for "calendrics" in some of my projects, so I use cm instead. The "ctl" namespace is where your views reside.

The cm:Bind.Model specifies the ViewModel for the control. The Caption passes in the initial value of the control.

This retrieves the View for the control. The View looks very similar to the ViewModel First View, with some additions:

<UserControl
 x:Class="TestSystem.Views.ViewFirstTestControlView"
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
 xmlns:vm="clr-namespace:TestSystem.ViewModels"
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <UserControl.Resources>
  <vm:ViewFirstTestControlViewModel
   x:Key="ViewFirstTestControlViewModel" />
 </UserControl.Resources>
 <StackPanel
  x:Name="root"
  DataContext="{StaticResource ViewFirstTestControlViewModel}">
  <TextBlock
   Text="{Binding Path=Caption}" />
 </StackPanel>
</UserControl>


The additions specify the ViewModel for the control as a resource and bind the DataContext of the first child control to that ViewModel. However, with View First, the thing you can't avoid is having code-behind. The code-behind for the UserControl looks like this:

namespace TestSystem.Views
{
 using System.Windows.Controls;
 
 using TestSystem.ViewModels;
 
 /// <summary>A view first test control view.</summary>
 /// <seealso cref="T:System.Windows.Controls.UserControl"/>
 /// <seealso cref="T:System.Windows.Markup.IComponentConnector"/>
 public partial class ViewFirstTestControlView : UserControl
 {
  /// <summary>The view model.</summary>
  private ViewFirstTestControlViewModel vm;
 
  /// <summary>Initializes a new instance of the <see cref="ViewFirstTestControlView"/> class.</summary>
  public ViewFirstTestControlView()
  {
   this.InitializeComponent();
   this.vm = (ViewFirstTestControlViewModel)this.root.DataContext;
  }
 
  /// <summary>Gets or sets the caption.</summary>
  /// <value>The caption.</value>
  public string Caption
  {
   get
   {
    return this.vm.Caption;
   }
 
   set
   {
    this.vm.Caption = value;
   }
  }
 }
}

The code-behind calls InitializeComponent(), then sets the vm field to the DataContext that was set in the view. This, in turn, is used to have the property of the control talk to the ViewModel. The ViewModel of the control looks like this:

namespace TestSystem.ViewModels
{
 using Caliburn.Micro;
 
 /// <summary>A ViewModel for the ViewFirst test control. This class cannot be inherited.</summary>
 /// <seealso cref="T:Caliburn.Micro.PropertyChangedBase"/>
 public sealed class ViewFirstTestControlViewModel : PropertyChangedBase
 {
  /// <summary>The text.</summary>
  private string caption = "Default View first caption";
 
  /// <summary>Gets or sets the text.</summary>
  /// <value>The text.</value>
  public string Caption
  {
   get
   {
    return this.caption;
   }
 
   set
   {
    if (value != this.caption)
    {
     this.caption = value;
     this.NotifyOfPropertyChange(() => this.Caption);
    }
   }
  }
 }
}

This is almost the same as the ViewModel of the ViewModel First control, except it does not need the constructors, since the property is changed from the MainWindowView. (It has a default constructor that does nothing.)

A zip file for the entire project is found here. Included are all the files, including the Caliburn.Micro bootstrapper that wires everything up.

If you know of more efficient ways of doing any of the things I've described, please let me know in the comments.

2017-11-21

WPF RibbonSplitButton Activates Twice

On a WPF (Windows Presentation Foundation) RibbonSplitButton there are two parts: a button at the top, and a down arrow. The down arrow causes a menu to appear. If you click one of the menu items, there is what I would consider a bug, but what Microsoft considers "By Design": the event code is triggered twice, once for the menu item and once for the button.

There is a way to handle the problem. Essentially, on the first trigger, you need to set the Handled property of the RoutedEventArgs to true. The solution posted on the Microsoft site requires an event handler in code-behind, which isn't compatible with the MVVM architecture. Here is how I handled it using Caliburn.Micro for a button in my application that is supposed to start Excel in one of two different ways. The button at the top executes it with #0, and the two menu items execute it with #1 and #0.

First, here is the XAML. The key part of this is to pass the $executionContext as an argument to the method. This gets the necessary property to where it can be modified.


<ribbon:RibbonSplitButton
 cal:Message.Attach="[Event Click]=[Excel(0, $executionContext)]"
 IsEnabled="{Binding CanExcel}"
 KeyTip="X"
 Label="{x:Static loc:ShellViewResources.Excel}"
 LargeImageSource="/Xoc.MayaCalendar.Windows;component/Assets/Images/Ribbon/ExcelLarge.png"
 SmallImageSource="/Xoc.MayaCalendar.Windows;component/Assets/Images/Ribbon/ExcelSmall.png">
 <ribbon:RibbonMenuItem
  Header="{x:Static loc:ShellViewResources.Excel}"
  ImageSource="/Xoc.MayaCalendar.Windows;component/Assets/Images/Ribbon/PrintSmall.png"
  cal:Message.Attach="[Event Click]=[Excel(1, $executionContext)]" />
 <ribbon:RibbonMenuItem
  Header="{x:Static loc:ShellViewResources.ExcelExample}"
  ImageSource="/Xoc.MayaCalendar.Windows;component/Assets/Images/Ribbon/PrintSmall.png"
  cal:Message.Attach="[Event Click]=[Excel(0, $executionContext)]" />
</ribbon:RibbonSplitButton>

The next part is to handle the event. In the Caliburn.Micro code, it starts with:


public void Excel(ContentLevel contentLevel, ActionExecutionContext executionContext)
{
 RoutedEventArgs routedEventArgs = (RoutedEventArgs)executionContext.EventArgs;
 routedEventArgs.Handled = true;
 // other code
}

Setting Handled to true on the first event prevents the second one from firing.

2017-06-01

C# Optimization of Switch Statement with Strings

C# does some interesting things when a switch statement compares a lot of strings. Suppose you have this:

   switch (input)
    {
        case "AAAA":
            Console.WriteLine("AAAA branch");
            break;

        case "BBBB":
            Console.WriteLine("BBBB branch");
            break;

        default:
            Console.WriteLine("default branch");
            break;
    }

    Console.WriteLine("Complete");


When you look at the IL (intermediate language) that this compiles into, it is essentially the same as a chain of if and else if statements. Converted back into C# code, it is as if you wrote this:

    if (input == "AAAA")
    {
        Console.WriteLine("AAAA branch");
    }
    else if (input == "BBBB")
    {
        Console.WriteLine("BBBB branch");
    }
    else
    {
        Console.WriteLine("default branch");
    }

    Console.WriteLine("Complete");

However, as you continue to add case statements, this becomes inefficient; all those string comparisons are expensive. At a certain point, as you add cases, the compiler switches to an entirely different technique: it creates a hash table of the strings. The IL looks like this, if converted back into C# code (assume there are more case statements):

    string s = input;

    switch (ComputeStringHash(s))
    {
        case 0x25bfaac5:
            if (s == "BBBB")
            {
                Console.WriteLine("BBBB branch");
                goto Label_0186;
            }

            break;

        case 0xff323f9:
            if (s == "AAAA")
            {
                Console.WriteLine("AAAA branch");
                goto Label_0186;
            }

            break;
    }
    Console.WriteLine("default branch");
Label_0186:
    Console.WriteLine("Complete");

The ComputeStringHash method is a pretty simple hash function that looks like this:

    internal static uint ComputeStringHash(string s)
    {
        uint num = 0;

        if (s != null)
        {
            num = 0x811c9dc5;
            for (int i = 0; i < s.Length; i++)
            {
                num = unchecked((s[i] ^ num) * 0x1000193);
            }
        }

        return num;
    }

This is a version of the FNV-1a hashing algorithm.

The change to hashing seems to occur at about eight string cases. The advantage is that there will be, on average, just one string comparison; the other comparisons are all on uint values. There is some overhead in computing the hash, which is why the compiler doesn't use it for a small number of case statements.

This actually becomes important when you are trying to write unit tests for the code. If you are trying to cover all of the branches, you will need input that hashes to 0xff323f9 but is not "AAAA" to get the goto Label_0186 branches covered. Your chances of stumbling on a string that hashes to the same value as your legitimate "AAAA" string without being "AAAA" are negligible unless you are specifically trying to produce a hash collision. This means that your code coverage will show branches as not covered, even though you test every case in the switch statement. This shows up as a failure in your branch coverage statistics (usually only around 60% covered), even though your unit tests are actually adequate.
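
To make this concrete, here is a sketch of what hunting for such a collision looks like. It reproduces the compiler's FNV-1a hash from above; the brute-force search is bounded, and with a limit this small it will normally come up empty, which is exactly the point about how hard these branches are to cover.

```csharp
using System;

class FnvCollision
{
    // The compiler's string hash, reproduced from the IL above (32-bit FNV-1a).
    internal static uint ComputeStringHash(string s)
    {
        uint num = 0;
        if (s != null)
        {
            num = 0x811c9dc5;
            for (int i = 0; i < s.Length; i++)
            {
                num = unchecked((s[i] ^ num) * 0x1000193);
            }
        }

        return num;
    }

    // Brute-force a string that collides with the target but isn't the target.
    // A real search needs on the order of 2^32 candidates, so a bounded loop
    // like this one will usually return null without finding anything.
    internal static string FindCollision(string target, int limit)
    {
        uint want = ComputeStringHash(target);
        for (int i = 0; i < limit; i++)
        {
            string candidate = "X" + i;
            if (candidate != target && ComputeStringHash(candidate) == want)
            {
                return candidate;
            }
        }

        return null;
    }

    static void Main()
    {
        Console.WriteLine("{0:x8}", ComputeStringHash("AAAA"));
        Console.WriteLine(FindCollision("AAAA", 1000000) ?? "no collision in range");
    }
}
```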

I have been working with the AxoCover and OpenCover programmers to try to get the coverage statistics for branches to be meaningful, but there may be no way to handle this correctly.

Addendum: The logic of the optimized switch statement is slightly more complicated than what is presented above. The C# compiler actually performs a binary search on the hash values rather than searching through them linearly, before getting to the string comparison. Producing hash collisions will raise your coverage to more than 90%, but will not go through all of the code for the binary search.

2017-04-19

Colorizing C# Code in Blogger

I went back and revised all of the C# code in this blog using the info from here. I then colorized the keywords using this web page. This made all of the code examples a little nicer. Some day I might do the same thing for other languages and XML.

2015-12-16

Using File.WriteAllText() with Encoding.UTF8 Writes Byte Order Mark (BOM) EF BB BF


First, a little background on ASCII, Unicode, and UTF-8. ASCII (American Standard Code for Information Interchange) is a 50-year-old standard, first adopted for teleprinters. It has 128 codes, and works rather well for representing English. As computers came to be used in other parts of the world, though, they needed some way to represent characters outside the ones available in ASCII. Various schemes were developed, but the one that has become the standard is Unicode.

Unicode represents each character as a numbered code point, allowing most characters in most languages to be represented. The first 128 code points have exactly the same values as ASCII, making Unicode a superset of ASCII. Unicode does not define a single way of representing its code points as bytes, though, and various methods are used. The most popular encoding scheme is called UTF-8.

UTF-8 has the advantage that if the text's characters are all in the ASCII range, the length in bytes is the same as ASCII. The encoding is only longer for characters outside the ASCII range.
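
This property is easy to see with Encoding.GetByteCount:

```csharp
using System;
using System.Text;

class Utf8Lengths
{
    static void Main()
    {
        Encoding utf8 = new UTF8Encoding(false);
        Console.WriteLine(utf8.GetByteCount("Hello World!")); // 12: ASCII range, one byte each
        Console.WriteLine(utf8.GetByteCount("©"));            // 2: outside ASCII, multi-byte
    }
}
```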

So, given all that, you might think that the following four lines of C# code should all output the same bytes:

File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!");
File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!", Encoding.Default);
File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!", Encoding.ASCII);
File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!", Encoding.UTF8);

Since the "Hello World!" text is all in the ASCII range, you would expect all four lines to write the same bytes. The first three lines do write the same thing, but the fourth line writes something different. Here is a hex dump of the output of the first three lines:

00000000  48 65 6C 6C 6F 20 57 6F 72 6C 64 21              Hello World!


Here is the hex dump of the Encoding.UTF8 file:

00000000  EF BB BF 48 65 6C 6C 6F 20 57 6F 72 6C 64 21     ...Hello World!


What are those first three bytes, EF BB BF? They are called the Byte Order Mark (BOM). They are supposed to indicate to a system reading the bytes how those bytes are to be interpreted.

When a value larger than one byte is encoded, say the decimal number 400 (binary 00000001 10010000), the bytes can be stored most-significant first, 00000001 10010000 (called Big Endian), or least-significant first, 10010000 00000001 (called Little Endian). Most computers today use Little Endian ordering. The Byte Order Mark puts a known sequence of bytes at the beginning of the text so the reading system can figure out how the bytes are ordered.

When a system reading Unicode text sees the Byte Order Mark, it is supposed to consume those bytes. However, if the system isn't expecting the BOM, then it displays what look like three garbage characters at the beginning of the text, such as "".

So if you want to write UTF-8 with the BOM, then you should use:

File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!", Encoding.UTF8);

On the other hand, if you don't want the BOM, then you should use:

File.WriteAllText(@"c:\temp\Sample.txt", "Hello World!");

They are not the same!
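
There is also a way to ask for UTF-8 explicitly and still skip the BOM: UTF8Encoding's constructor takes a flag that controls whether the preamble is written.

```csharp
using System;
using System.IO;
using System.Text;

class BomDemo
{
    static void Main()
    {
        string path = Path.Combine(Path.GetTempPath(), "Sample.txt");

        // false = do not emit the EF BB BF preamble.
        File.WriteAllText(path, "Hello World!", new UTF8Encoding(false));

        // 12 bytes: just the text, no BOM in front.
        Console.WriteLine(File.ReadAllBytes(path).Length);
    }
}
```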

Incidentally, the outputs of the four lines are all very different from each other if the text includes non-ASCII characters, but that is a whole other topic.

2015-12-04

Could Not Load the Assembly Because the Assembly Wasn't Signed

I was struggling for a few hours trying to get the Microsoft.Framework.Configuration code to run. It wouldn't because it kept complaining that:

FileLoadException was unhandled

An unhandled exception of type
'System.IO.FileLoadException' occurred in mscorlib.dll
Additional information: Could not load file or assembly
'Microsoft.Framework.Configuration.Abstractions,
Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or
one of its dependencies. A strongly-named assembly is
required. (Exception from HRESULT: 0x80131044)

The wasted hours came because I didn't scroll down in the dialog to see the last two lines of the error (for which I feel a little annoyed at myself). It was complaining that it couldn't load the assembly, and I kept trying to debug why the assembly wasn't being found. Except it was being found, just not loaded, because it wasn't signed. The group working on this library made a mistake and failed to sign the assembly. As soon as I saw the last two lines of the error, I immediately knew what was going on.

I've written about this exact problem in my book. Quoting from The Reddick C# Style Guide:

An assembly that has a strong name can only reference other assemblies with strong names.

And a little further in the book:

⊗    There may be cases where a reference is needed to a required library that has not been given a strong name, and there is no way of adding one. There is no other solution than not signing the assembly. If possible, though, work on acquiring a version of the library that has a strong name.

The simple solution to get the code working was to go into the Project Properties of my project and uncheck the "Sign the Assembly" check box on the signing tab. A more complex solution, since the library is open source, would be to get the sources and build it myself, signing the assembly. The best solution is to get the maintainers to sign the assembly so everyone doesn't run into this problem. Unchecking the "Sign the Assembly" check box has the drawback that it causes the Code Analysis CA2210 warning. It also leaves the assembly unsigned, which is a worse problem.

The reason you want to sign an assembly is that, as long as the private key is kept private, signing prevents someone from making unauthorized changes to the assembly. When a program links to the library, it does so in a way that if the bytes of the assembly are changed in any way without re-signing with the private key, .NET will not load the assembly. This prevents malware from being added to the library and passed off as legitimate.

2015-10-15

Printing the Copyright Symbol in a C# Console Application

This is a tiny little tip. I wanted to print the copyright symbol in a console application. The trick is to set the console output encoding to UTF8. If you don't, it prints a c instead of a ©. This applies to all characters outside the ASCII range.

Console.OutputEncoding = Encoding.UTF8;
Console.WriteLine("Copyright © 2015 Xoc Software");

2015-10-09

Installing StyleCop in Visual Studio

StyleCop is a tool that reports problems with the source code in C#. Up through Visual Studio 2013, it is provided as an extension to Visual Studio. For Visual Studio 2013 and earlier, StyleCop could be downloaded and installed from http://stylecop.codeplex.com/. The source code is available there as well.

For Visual Studio 2015, the way that extensions are installed changed. Because extensions for 2013 don't work the same way as in 2015, the 2013 version of StyleCop won't work in Visual Studio 2015. A person branched the StyleCop 2013 sources and produced a 2015 version. The sources are available at https://github.com/Visual-Stylecop/Visual-StyleCop; however, the package can be installed from the Visual Studio 2015 Extensions and Updates menu item. Search for "Visual StyleCop" in the online extensions and install the version there. This works essentially the same as in Visual Studio 2013, using the same parser for C# and the same interface into Visual Studio. Currently there are some issues with parsing the new syntax available in C# version 6, but they are getting resolved.

However Visual Studio 2015 has added a new feature: Custom Analyzers. These use the features of the Roslyn compiler to parse the source code. They also integrate into Visual Studio to provide an interface for fixing the reported problems. They can report problems at the time the code is written, not just after a compile. Custom Analyzers are clearly the way forward for tools like StyleCop.

A group of people have re-implemented StyleCop as a Visual Studio 2015 Analyzer. The source code is available at https://github.com/DotNetAnalyzers/StyleCopAnalyzers. The package can be installed from NuGet console by typing "install-package stylecop.analyzers -pre". Because, unlike previous versions of StyleCop, these analyzers don't have a configuration dialog, they must be configured in JSON code. Read the document at https://github.com/DotNetAnalyzers/StyleCopAnalyzers/blob/master/documentation/Configuration.md on how to configure the settings.

2015-09-08

New Features in C# Version 7

I previously commented on the features being developed for C# version 6, shipped with Visual Studio 2015. They are now working on C# version 7. The discussion can be followed at https://github.com/dotnet/roslyn/labels/Design%20Notes. Right now they are just talking about things, and nothing is set in stone.

Notable things being discussed:
  1. Tuples (#347)
  2. Pattern matching (#206)
  3. Records / algebraic data types (#206)
  4. Nullability tracking (#227)
  5. Async streams and disposal (#114, #261)
  6. Strongly typed access to wire formats (#3910)
  7. Method contracts (#119)
  8. Params IEnumerable (#36)
  9. Readonly parameters and locals (#115)
  10. Immutable types (#159)
  11. Object initializers for immutable objects (#229)
  12. Array slices  (#120)
  13. Local Functions (#259)
  14. Covariant returns (#357)
Some comments on these:
  • The tuples feature would allow a method to return multiple values, among other things. The Lua language has this ability, and I find it rather nice.
  • Pattern matching would have some syntax similar to a switch statement that could match a string to a pattern and execute some code.
  • Records would simplify having a simple type essentially only made of properties.
  • Nullability tracking would allow you to specify whether a reference variable would ever be allowed to be null. For example, you could make a string variable that could never be null. It sounds like this would be implemented as a language feature, not a CLR (common language runtime) feature, so under some circumstances it could be defeated. I'm unsure whether implementing this halfway is the right thing to do here.
  • Method contracts would move code contracts into the language. This would be welcome feature, as now they are implemented as method calls and a tool that patches the C# code.
  • Readonly parameters and locals. This would improve code quality. I'm not sure I like the keywords they are talking about, but the feature is good.
  • Immutable types. Another thing to improve code quality.
  • Array slices. There is one place in my code that this would improve performance by several orders of magnitude. They are saying that this would require CLR support, so is unlikely to make it into version 7.
  • Local functions. Syntactic sugar; I don't think it is really that helpful.
Some nice things. I do worry that they are making the language more complicated with each version. This increases the learning curve for new programmers.
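
To make the tuples item concrete, here is a sketch of the multiple-return syntax under discussion (this is the form that eventually shipped in C# 7; the proposal was still in flux when this was written):

```csharp
using System;

class TupleSketch
{
    // A method returning two values at once, without an out parameter or a
    // hand-written wrapper type.
    internal static (int min, int max) MinMax(int[] values)
    {
        int min = values[0], max = values[0];
        foreach (int v in values)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }

        return (min, max);
    }

    static void Main()
    {
        var (lo, hi) = MinMax(new[] { 3, 1, 4, 1, 5 });
        Console.WriteLine($"{lo} {hi}"); // prints "1 5"
    }
}
```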

2015-03-10

.NET Programming Tools

There are a set of tools that I use for .NET Programming. I am pretty conservative about buying tools, so there needs to be a lot of bang for the buck to make a tool worth paying for. Free is always good. Here is the set of tools that I use:

Visual Studio is, of course, the primary environment for developing .NET code. As mentioned in a previous blog post, Microsoft has now made Visual Studio Community Edition free for small companies. Before that I was buying the Professional Edition, which is essentially the same. I wish it had a few features that are in the more expensive editions, but it does almost everything I want.

Reflector is an essential tool. It allows decompiling a .NET assembly into its source code. This is useful in many ways. I've used this to see how something was implemented, seeing what the name of the actual resource is inside an assembly, checking .NET libraries for security issues, and translating C# into Visual Basic code, among other things. I can't imagine working without it.

StyleCop is a tool that complains about bad coding practices in C# code. It produces warnings about bad formatting problems, security problems, internationalization problems and so forth. It can complain about comments and formatting, which Code Analysis cannot. It can be extended. I have added several rules that are not in the standard set.

StyleCop+ is an addition to StyleCop that adds additional rules. The defaults, particularly capitalization, are not exactly compatible with my coding style, but everything is configurable. By changing a number of rules, this helps with formatting the code better.

Atomineer is a tool for reducing the tedium of documenting source code. It does a great job of putting the headers on every file, class, method, property, and everything else. It has rules for what it constructs. It's not perfect, but it is a great start for the documentation that needs to be put in place. The cost is well worth it. There's a free tool called GhostDoc, but it is not as good as Atomineer.

Resharper is a tool a lot of people swear by. I'm actually not its biggest fan. However, it does make certain operations much easier, such as re-sorting methods and properties.

Caliburn.Micro is a library that makes programming WPF and Silverlight much, much easier. It's free from NuGet. I've posted a number of articles about using it.

NLog is a logging tool for .NET. Free from NuGet. It allows you to log stuff to many different places by just changing the config file.
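To give a flavor of typical NLog usage, here is a minimal sketch (the class and messages are made up for illustration); where the output actually goes is controlled entirely by the NLog.config file:

```csharp
using NLog;

public class OrderProcessor
{
    // The usual NLog pattern: one static logger per class.
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    public void Process(int orderId)
    {
        Logger.Info("Processing order {0}", orderId);

        // Whether this line ends up in a file, the console, the event
        // log, or an email is decided by NLog.config, not by this code.
    }
}
```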

InstallShield Limited Edition is distributed free with Visual Studio. You can pay more for the full version. I find the limited version does what I need. In a few cases I have to clobber it into doing what I want. I created a program that modifies the config file when I do a release. I also need to do a network share back to the local machine to solve one problem with file locations. With a few workarounds, though, it does the job.

HTML Help Workshop is an ancient tool for creating help files. It hasn't changed in probably 20 years, and there still doesn't seem to be anything from Microsoft that's any better. Creating help files has always been way more trouble than it should be. Still, with this tool and Microsoft Expression Web, I can get the job done.

Microsoft Expression Web is a tool that is now abandoned by Microsoft. However, you can still find it on their web site for free. I only use it for working on help files, for which it works just fine.

GIMP is a free image manipulation program. I use it for creating icons, bitmaps, gifs, and jpegs for use in programs. It does everything that I used to use Photoshop for, but for a much better price. It takes a little getting used to, but works pretty well for what I need.

Team Foundation Server, hosted on VisualStudio.com, gives me source code control and bug tracking. I previously ran this on my own server, but I find it more convenient to let Microsoft run it, since I am sometimes on the road, and punching the proper holes in the firewall to allow remote access is difficult.

Code Contract Tools is a free download from NuGet that adds Code Contracts to .NET. This is a very useful tool for proving code is correct. It requires a lot of work to get set up right, but it's found a lot of very subtle problems in my code.

Productivity Power Tools is a free extension to Visual Studio that adds a number of features that I use. Available from NuGet.

StopOnFirstBuildError is a simple free tool that does just what it says. The moment it gets a build error, it stops the build. I am not interested in continuing the build past the first error. Available from NuGet.

XAML Styler is a free Visual Studio extension that reformats XAML code to be much nicer. Available from NuGet.

2014-12-19

MVC Not Finding HttpPost Method

I have an MVC web form that, when posted, was not finding the method decorated with the [HttpPost] attribute.

In the form's .cshtml page there was code like this:
@using (Html.BeginForm("Login", "Account"))
{
}

And code like this in the Account controller:

// Finding this
[AllowAnonymous]
public ActionResult Login(string returnUrl)
{
//...
}

// Wasn't finding this
[AllowAnonymous]
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<ActionResult> Login(LoginViewModel model, string returnUrl)
{
//...
}

When the form posted, it was going to the "GET" method instead of the "POST" method. After a bunch of debugging, I finally figured out that the reason is that I have a URL Rewrite rule on the web server that converts all URLs to lowercase, for Search Engine Optimization (SEO) and logging consistency. This can also be done in the Web.config file.
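A lowercase redirect rule of this kind looks roughly like the following in Web.config (this is a sketch using the IIS URL Rewrite module; the rule name is arbitrary and not my exact configuration):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="LowercaseUrls" stopProcessing="true">
        <match url="[A-Z]" ignoreCase="false" />
        <action type="Redirect" url="{ToLower:{URL}}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```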

Because the form URL is in mixed case, it wasn't finding the correct method. All it takes to fix the problem is to change the .cshtml form to use lowercase parameters, like this:

@using (Html.BeginForm("login", "account"))
{
}

2014-12-06

New Features in C#

There are two new features showing up in the next version of C#. Actually, there are more than two, but I find these interesting. They are still being thrashed out, but are nearing final form.

The first is the nameof() operator, that will give you the name of the thing in the parentheses. A typical example of using it is:

void f(string s)
{
    if (s == null)
    {
        throw new ArgumentNullException(nameof(s));
    } 
}

The second is string interpolation. This allows you to do

string foo = $"{name} works in {department} and has {employeeCount} employees";

rather than

string foo = string.Format("{0} works in {1} and has {2} employees", name,
    department, employeeCount);

The exact details are in the links.

Also interesting is the discussion about these as they work out the details. You can find the details of the design decisions on the CSharp Language Design Notes page. The process they are going through to design these features gives you insight into the features themselves. It is cool to see this being done in public rather than in some conference room in Redmond.

2014-11-19

Microsoft Open Sourcing .NET is the Right Move

Microsoft announced a few days ago that it was open sourcing .NET. I consider this a huge move in the right direction. One of the principles of .NET was to separate the code from the underlying hardware. You code to the framework, not to the machine. If the framework (and the CLR) is portable, then the code is portable. In theory, Microsoft could then just port the framework to Apple or Linux hardware, and the code would just work, possibly without even a recompile.

But that was just theory. In practice, Microsoft only ported the framework to different versions of Windows, including the Windows Phone and the Xbox, but not to any of its competitors' operating systems. This lost one of the huge potential advantages of .NET.

The Mono project tried to code around this by creating a compatible version on the other platforms, but they had to do a complete reimplementation. I have compared the code produced in one class in the Mono project to the code in the .NET Framework, and they are not at all the same. This tends to result in subtle differences in functionality between running on Windows with Microsoft's framework versus running on Mono. With the new open source available, Mono moves out of the fringes and becomes the main cross-platform version of the .NET Framework, as they incorporate Microsoft's code.

With the new announcement, Microsoft essentially makes porting possible without spending any resources to actually make it work on these other platforms. This is a win for Microsoft, but also for all of the programmers who have spent the last 13 years programming for .NET. Our investment in learning the ins and outs of .NET can now be used on a much wider set of machines. We don't have to learn a separate technology for programming for Android or the iPhone.

It is still unclear exactly how much Microsoft is going to open up. My biggest question is whether Microsoft will be open sourcing Windows Presentation Foundation (WPF). Mono had avoided working on this because it was a difficult project. The Mono developers estimated 30-60 person years to implement it in Mono. This cost would go down considerably if Microsoft open sources WPF. It still will take quite a bit of resources to port it, since WPF uses DirectX, whereas the ports would probably have to use OpenGL underneath it. However, the port would become a possible thing for Mono if they can just grab the source code.

Microsoft also simultaneously announced that Visual Studio would be free for classroom education and very small companies. The version they are giving away is essentially Visual Studio Professional, although with the name "Community Edition". Before this, they were giving away the Visual Studio Express version, but it had serious limitations, and wasn't really practical for developing real production code in many cases.

I would still like to see two things in the cheap/free versions of Visual Studio: 1) Code Coverage. The easiest way to tell whether your test suite has hit all the methods and properties in your code is Code Coverage. This currently only ships in the Premium edition (or better) of Visual Studio. 2) CodeLens. CodeLens provides information about your code directly in the Visual Studio interface. It is currently only available in the Visual Studio Ultimate edition. There may be other features I'd like to see move down, but these are the two that I could make immediate use of. I'm crossing my fingers and hoping they are in the Community and/or Professional Edition in Visual Studio 2015.

All of these moves are great moves for all of us who have invested in learning .NET, and I think this will eventually pay off for Microsoft as well. The .NET platform and the C# programming language are generally considered to be a well thought out easy-to-use environment for getting work done. Being able to leverage that across other platforms makes it a win for everyone. It's a bold move by Microsoft and its new CEO Satya Nadella.

2014-09-22

Breaking Change in Caliburn.Micro 2.0 for WPF Controls with Names

In Caliburn.Micro 1.x, this would work

<Button x:Name="OKButton"
 Content="OK"
 cal:Message.Attach="[Event Click]=[Action OK($source)];[Event PreviewMouseUp]=[Action Special($eventArgs)]" />

However, in Caliburn.Micro 2.0, the events will not get called correctly. The problem is that the x:Name handling overrides any Message.Attach handling. The way to solve the problem is to remove or rename the x:Name attribute if you use Message.Attach. Thus, this works:

<Button
 Content="OK"
 cal:Message.Attach="[Event Click]=[Action OK($source)];[Event PreviewMouseUp]=[Action Special($eventArgs)]" />

If you specify events through the Message.Attach code, you don't need the automatic binding of the x:Name attribute.

2014-04-30

Help.ShowHelp Fails Silently

I was trying the .NET Framework method for showing help, System.Windows.Forms.Help.ShowHelp(), and passed the name of a file that didn't exist. I was expecting it to throw an exception, but nothing happened. It just failed silently. Looking at the .NET Framework sources, the code just does:

SafeNativeMethods.HtmlHelp(...);

where HtmlHelp is defined as

[DllImport("hhctrl.ocx", CharSet=CharSet.Auto)]
public static extern int HtmlHelp(HandleRef hwndCaller,
 [MarshalAs(UnmanagedType.LPTStr)] string pszFile, int uCommand,
 [MarshalAs(UnmanagedType.LPStruct)] NativeMethods.HH_FTS_QUERY dwData);

You'll notice that it does nothing with the int return value. To get around this failure on Microsoft's part, you can do one of two things. The cheap technique is to check for the existence of the file with System.IO.File.Exists() before calling Help.ShowHelp(). However, this doesn't account for other problems with the help file, such as permissions or corruption. The other technique is to call the native method yourself, using the above PInvoke declaration, and check the return value. The return value is the handle of the help window on success, and zero on failure.
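A sketch of the second technique might look like this (the SafeHelp wrapper class and its exception choices are mine, not a framework API):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

internal static class SafeHelp
{
    // HH_DISPLAY_TOPIC asks HTML Help to show the default topic.
    private const uint HH_DISPLAY_TOPIC = 0;

    // Same native entry point the framework uses internally.
    [DllImport("hhctrl.ocx", CharSet = CharSet.Unicode)]
    private static extern IntPtr HtmlHelp(
        IntPtr hwndCaller, string pszFile, uint uCommand, IntPtr dwData);

    public static void ShowHelp(IntPtr owner, string helpFile)
    {
        // Catch the most common failure early, with a useful exception.
        if (!File.Exists(helpFile))
        {
            throw new FileNotFoundException("Help file not found.", helpFile);
        }

        // HtmlHelp returns the help window handle, or zero on failure.
        if (HtmlHelp(owner, helpFile, HH_DISPLAY_TOPIC, IntPtr.Zero) == IntPtr.Zero)
        {
            throw new InvalidOperationException("HTML Help could not open " + helpFile);
        }
    }
}
```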

Microsoft really needs to provide a help call that throws exceptions on failure. This call also lives in the Windows Forms library, which a WPF application really shouldn't have to reference.

2014-03-12

10 Steps for Implementing Code Contracts Static Analysis

I'm a fan of the Code Contracts from Microsoft Research. Adding them to an existing code base can be an exercise in frustration, though. If you turn on static analysis, it can generate thousands of warnings. I have some tips on how to attack those warnings, based on my experience.

Just like C#, I start counting at zero, and actually this list goes to 11.

0. Change the settings for contracts.

Perform Static Contract Checking in the Background. Turn on all the check boxes except Baseline. It will be slightly faster if you have SQLExpress installed on your computer. Add .\SQLExpress into the SQL Server field. Set the warning level to low. Set the Contract Reference Assembly to build. Change the Extra Static Checker Options to -maxWarnings 5000.

1. Reduce the problem.

In the assemblyinfo.cs file, add the following line:

[assembly: ContractVerification(false)]

This causes the static analyzer to ignore all of the warnings. Now go to the most basic class in your project and add ContractVerification(true) to the top of it. You can identify these basic classes because they have no dependencies on the other classes in the project. You may find that adding a Class Diagram to the project helps identify these basic classes.

An example of using ContractVerification on a particular class:

[ContractVerification(true)]
public class Foo
{
}

This will cause the static analyzer to only report warnings for that one class. Fix those warnings, then move the [ContractVerification(true)] to the next class you want to work on.

2. Work from the bottom up.

Work on classes that have no dependencies inside your project. Then after you have those cleaned up, work on the ones that only depend on the ones that you have already cleaned up until the entire assembly is complete. Again, adding a Class Diagram to the project may help finding the next class to work on.

3. For each class, add ContractPublicPropertyName attributes to your property backing fields.

For each field that is used to back a property, add the ContractPublicPropertyName attribute to the field showing what public property accesses the field.

[ContractPublicPropertyName("Count")]
private int count = 0;

public int Count
{
    get
    {
        return this.count;
    }
}

4. Add invariants.

Don't worry about other issues until the invariants are in place. I add an invariant section to the bottom of each class like the code below. (I sort my method names alphabetically and I like mine at the bottom of the code, hence the Zz. You can name the method anything you want, but be consistent throughout the project.)

/// <summary>Object invariant.</summary>
[ContractInvariantMethod]
private void ZzObjectInvariant()
{
    Contract.Invariant(this.count >= 0);
}

For each thing that will always be true after the constructor completes, add a contract to the ZzObjectInvariant method. You want the invariants in place first because they save you from needing contracts in each individual method or property.

5. Go through each constructor, method, and property set in the class and add contracts.

For each parameter to the method, add appropriate Contract.Requires<exception>() contracts. For example:

public int Foo(Bar bar, int increment)
{
    Contract.Requires<ArgumentNullException>(bar != null);
    Contract.Requires<ArgumentOutOfRangeException>(increment > 0);

    // more stuff
}

For properties, validate the set clause value.

public int Count
{
    get
    {
        return this.count;
    }

    set
    {
        Contract.Requires<ArgumentOutOfRangeException>(value >= 0);
        this.count = value;
    }
}

If there are two conditions, put them into separate contracts rather than using &&. For example:

public int Count
{
    get
    {
        return this.count;
    }

    set
    {
        // not Contract.Requires<ArgumentOutOfRangeException>(value >= 0 && value <= 100);
        Contract.Requires<ArgumentOutOfRangeException>(value >= 0);
        Contract.Requires<ArgumentOutOfRangeException>(value <= 100);
        this.count = value;
    }
}

If a member overrides an inherited member or implements an interface member, you will not be able to add Contract.Requires contracts to it. First see if you can add the Contract.Requires to the class or interface that you are overriding or implementing. If you can't, then add a Contract.Assume to the code. Adding Contract.Requires to an interface or abstract class requires creating a class that implements the interface or abstract class and decorating it with the ContractClassFor attribute. See sections 2.8 and 2.9 of the Code Contracts user manual.
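A sketch of that pattern, with made-up interface and member names:

```csharp
using System;
using System.Diagnostics.Contracts;

// Points at the class that holds the contracts for this interface.
[ContractClass(typeof(FooContract))]
public interface IFoo
{
    int Process(string input);
}

// Holds the contracts on behalf of IFoo; never instantiated or used directly.
[ContractClassFor(typeof(IFoo))]
internal abstract class FooContract : IFoo
{
    public int Process(string input)
    {
        Contract.Requires<ArgumentNullException>(input != null);
        Contract.Ensures(Contract.Result<int>() >= 0);
        return default(int);
    }
}
```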

Understand that, depending on your project's contract settings, these contracts may not be present at run time. Other warnings may report that you are trying to invoke a method or access a property of a null reference (the dreaded CA1062). Even though it is impossible to pass in a null while the contract is in place, Code Analysis can't count on the contract being present in the delivered code. You will need to add additional code that acts as if the contract doesn't exist and handles the condition properly. It's redundant and has some expense, but I like none of the other options.

You can throw an exception. This is similar to the legacy Custom Parameter Validation, except that the check happens after the contracts and isn't actually part of them, so there is no Contract.EndContractBlock().

public int Foo(Bar bar, int increment)
{
    Contract.Requires<ArgumentNullException>(bar != null);
    Contract.Requires<ArgumentOutOfRangeException>(increment > 0);

    if (bar == null)
    {
        throw new ArgumentNullException("bar");
    }

    // more stuff
}

You can also return an appropriate value, like this:

public int Foo(Bar bar, int increment)
{
    Contract.Requires<ArgumentNullException>(bar != null);
    Contract.Requires<ArgumentOutOfRangeException>(increment > 0);

    int result = 0;

    if (bar != null)
    {
        // more stuff that assigns result
    }

    return result;
}

In both cases, the contract makes it impossible for bar to be null, yet I handle it anyway as if the contract wasn't there. Some day they may make the coordination between the static analyzer and the compiler such that this isn't necessary. You would then have to have code contracts turned on in the shipping version, which you may not want.

6. Go through the methods and properties and add Contract.Ensures contracts.

Add Contract.Ensures calls to your constructors, methods, and properties. You will frequently need Contract.Result<T>() to examine return values. For example:

public int Foo(Bar bar, int increment)
{
    Contract.Requires<ArgumentNullException>(bar != null);
    Contract.Requires<ArgumentOutOfRangeException>(increment > 0);
    Contract.Ensures(Contract.Result<int>() > 0);

    // more stuff
}

7. Fix Bugs.

Compile the code. The static analyzer will complain about various things, such as calling methods on possibly null values. Fix those. This is really the point of the whole exercise. It will warn you about many subtle things that you might have initially thought impossible, but are actually real edge cases.

When you compile the project, it may make suggestions for Contract.Requires and Contract.Assume. Do not add these automatically. See if there is an invariant you can add that would handle the issue for all members. Also ask whether the code should be handling whatever the warning is suggesting. For example, if the warning suggests adding Contract.Requires<ArgumentNullException>(value != null), it may be legitimate to pass in null here even though you are calling a method on the value object; what you really need is an 'if (value != null)' in the code, not a contract. Determining whether null should be allowed requires intimate knowledge of the code.

8. Judiciously add Contract.Assume() calls.

Static analysis is hard. The analyzer has to figure out what the code is doing without running it. In some cases it can't, especially if you use a library that you don't control. Microsoft has been getting better about adding contracts to the .NET Framework, but support is not yet complete. For example, using the .NET Framework 4.0, I have this code:

PathGeometry pathGeometry = new PathGeometry();
pathGeometry.Figures.Add(figure);

The static analyzer complains that pathGeometry.Figures might be null. This should never be the case; there is a missing Contract.Ensures in the .NET Framework PathGeometry constructor stating that .Figures is not null. You can help the analyzer by adding a Contract.Assume() call, like this:

PathGeometry pathGeometry = new PathGeometry();
Contract.Assume(pathGeometry.Figures != null);
pathGeometry.Figures.Add(figure);

The warning about .Figures possibly being null now goes away.

Add Assumes for things that should be impossible in your code.

9. Don't use Contract.Assert.

If the analyzer reports that an Assume can be proven and can be turned into an Assert, just remove the line. The analyzer actually has that knowledge at that point, so you don't need an assert.

10. Don't give up.

When all the warnings are killed, move on to the next class: move the [ContractVerification(true)] and fix those warnings. When you actually have all the warnings fixed, go back to the assemblyinfo.cs, turn on verification for the entire assembly, and remove the attribute from the individual classes. Then try bumping the warning level up a notch in the project properties and fix some more.

Some of the warnings are a real puzzle as you try to persuade the static analyzer to certify your code. Adding more contracts, ensures, and assumes will make them go away.

2014-03-09

Listing SuppressMessage Justifications

If you perform Code Analysis on a project, it will complain about many things. Most are legitimate. A few are not. When they are not, you can make the warning go away using the SuppressMessage attribute. This attribute decorates the method, property, or whatever, and causes the Code Analysis tool to ignore that warning in that scope. For example:

[SuppressMessage(
    "Microsoft.Design",
    "CA1031:DoNotCatchGeneralExceptionTypes",
    Justification = "Want to catch all exceptions here.")]

If you use StyleCop on the project, it will require the SuppressMessage to have the Justification property filled in. If you don't, then StyleCop will give a warning about a missing justification, which is a warning about the code that you used to make a different warning go away. So I fill in all my Justification properties.

I wanted to look at the justifications I had used across an entire code base, to make them consistent and to see if any were no longer needed. As code gets refactored, the SuppressMessage attributes may no longer apply. The compiler, by the way, does not remove the SuppressMessage attributes; as long as the CODE_ANALYSIS symbol is defined, they get compiled into the executable code.

I created a little tool that examines a directory and performs reflection on all the .exe and .dll files in it. It extracts all of the SuppressMessage attributes and writes their details to the console output. It's actually a nice little example of using reflection.

Below is the code for the Program.cs file of a console application. This is a quick implementation, for my own purposes, not a commercial product, so probably could be better in many ways. For example, it only searches for dlls with a lower case extension, because I never have upper case ones. In a commercial project, I'd test for either. I could fix that in less time than it took to write this sentence (using .ToUpper()), but it doesn't solve any problems for my environment. Making code commercial grade as opposed to a tool takes time.

The code takes a directory path on the command line and documents the justifications for all the .NET assemblies in that path.

//--------------------------------------------------------------------------------------------------
// <copyright file="Program.cs" company="Xoc Software">
// Copyright © 2014 Xoc Software
// </copyright>
// <summary>Implements the program class</summary>
//--------------------------------------------------------------------------------------------------
namespace Xoc.Justification
{
    using System;
    using System.Diagnostics.CodeAnalysis;
    using System.Diagnostics.Contracts;
    using System.Globalization;
    using System.IO;
    using System.Reflection;

    /// <summary>The Justification Program.</summary>
    public static class Program
    {
        /// <summary>Main entry-point for this application.</summary>
        /// <param name="args">Array of command-line argument strings. args[0] must be path to examine.</param>
        public static void Main(string[] args)
        {
            if (args != null && args.Length > 0 && !string.IsNullOrEmpty(args[0]))
            {
                DirectoryInfo directoryInfo = new DirectoryInfo(args[0]);
                if (directoryInfo != null)
                {
                    Console.WriteLine("Finding Justifications in {0}", directoryInfo.FullName);
                    try
                    {
                        foreach (FileInfo file in directoryInfo.GetFiles())
                        {
                            switch (file.Name.ToUpper(CultureInfo.InvariantCulture))
                            {
                                // Ignore NLog. Add other DLLs to ignore here.
                                case "NLOG.DLL":
                                    break;
                                default:
                                    ProcessAssembly(file);

                                    break;
                            }
                        }
                    }
                    catch (IOException)
                    {
                        Console.WriteLine("Directory is invalid");
                    }
                }
            }
            else
            {
                Console.WriteLine("Syntax:\nXoc.Justification.exe <directory-path>");
            }
        }

        /// <summary>Process the assembly described by file.</summary>
        /// <param name="fileInfo">The FileInfo assembly for the assembly to document.</param>
        private static void ProcessAssembly(FileInfo fileInfo)
        {
            if (fileInfo.Extension == ".dll" || fileInfo.Extension == ".exe")
            {
                Assembly assembly = Assembly.LoadFrom(fileInfo.FullName);
                string assemblyName = assembly.GetName().Name;
                try
                {
                    foreach (Type type in assembly.GetTypes())
                    {
                        string typeName = type.Name;
                        BindingFlags flags =
                            BindingFlags.Static
                            | BindingFlags.Instance
                            | BindingFlags.Public
                            | BindingFlags.NonPublic;

                        var attributes = type.GetCustomAttributes<SuppressMessageAttribute>();
                        if (attributes != null)
                        {
                            foreach (SuppressMessageAttribute attribute in attributes)
                            {
                                if (attribute != null)
                                {
                                    AddToList(assemblyName, typeName, "type", attribute);
                                }
                            }
                        }

                        foreach (MemberInfo memberInfo in type.GetMembers(flags))
                        {
                            if (memberInfo != null)
                            {
                                EnumerateTypes(assemblyName, typeName, memberInfo);
                            }
                        }
                    }
                }
                catch (ReflectionTypeLoadException)
                {
                    Console.WriteLine("{0} not a .NET assembly.", assemblyName);
                }
            }
        }

        /// <summary>Adds to list.</summary>
        /// <param name="assembly">The assembly to document.</param>
        /// <param name="type">The type to document.</param>
        /// <param name="member">The member to document.</param>
        /// <param name="attribute">The SuppressMessage attribute.</param>
        private static void AddToList(
            string assembly,
            string type,
            string member,
            SuppressMessageAttribute attribute)
        {
            Contract.Requires<ArgumentNullException>(attribute != null);

            Console.WriteLine(
                "{0} | {1} | {2} | {3} | {4}",
                assembly,
                type,
                member,
                attribute.CheckId,
                attribute.Justification);
        }

        /// <summary>Enumerate types.</summary>
        /// <param name="assembly">The assembly to document.</param>
        /// <param name="type">The type to document.</param>
        /// <param name="memberInfo">Information describing the member.</param>
        private static void EnumerateTypes(string assembly, string type, MemberInfo memberInfo)
        {
            Contract.Requires<ArgumentNullException>(memberInfo != null);

            var attributes = memberInfo.GetCustomAttributes<SuppressMessageAttribute>();
            if (attributes != null)
            {
                foreach (SuppressMessageAttribute attribute in attributes)
                {
                    if (attribute != null)
                    {
                        AddToList(assembly, type, memberInfo.Name, attribute);
                    }
                }
            }
        }
    }
}

You can then add this as an external tool in the Visual Studio Tools menu. Set the argument to $(BinDir).

If you'd really like the complete project, let me know in the comments, and I'll do the extra work to publish it.