Sunday, December 28, 2014

Using Web API 2 with Entity Framework 6

I feel that the most difficult part of starting a new project is deciding what technologies to use.  Over the last several months I've been working with Web API 2 & EF 6 to create a REST service for use with a WPF application that I've been writing.  Here I'd like to detail the reasons I chose this technology combination and some of the lessons I've learned along the way.
  • The testing story is improved with EF6 DbSet<T>
  • Had used EF4 & EF5 on other projects
  • Web API supports OWIN (Open Web Interface for .NET), allowing it to be self-hosted and run locally without a web server
  • Web API is fairly straightforward in concept, being based on the HTTP verbs GET, POST, PUT, and DELETE
  • Both are Microsoft technologies with lots of information floating around the web
Looking back over the last few months, I've had very little trouble with Web API 2. It has been pleasant to work with. The majority of my issues have been with Entity Framework. This may be due to how I chose to use the technology, though I haven't done anything that radical.
The basic structure of my project followed the aggregate root concept. Our application is basically a giant calculator, taking several input parameters and returning a calculation. For example, when a project entity is loaded, I want to load all of its child entities; otherwise I don't have the data needed for a calculation.
(AR) Fluid - Polymorphic type
        -int Id
        -string Name
        -List<FluidPointBase> FluidPoints - Polymorphic list of items
(AR) Material
        -int Id
        -string Name
        -double Density
(AR) Project
        -int Id
        -Fluid Fluid
             -int Id
             -string Name
              -List<FluidPointBase> FluidPoints
         -Material Material
             -int Id
             -string Name
             -double Density
Testing Controllers
There is a heated debate on StackOverflow and elsewhere regarding the question of whether one should use the repository pattern or forgo it in favor of mocking the methods on the underlying ORM.  I was swayed by the argument that in EF it is possible to create testable code without a repository.  Following the patterns described below I was able to set up tests for my controllers.

  • Entity Framework 6 now has a very easy-to-mock implementation of DbSet<>; this technically makes it possible to test against EF without the Repository or UOW patterns. Those patterns take time to code and maintain, and add another layer to the application (see the sketch after this list).
  • Responses in Web API 2 should be of type IHttpActionResult; this allows for easier testing.
  • Fiddler - HTTP debugger
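To make that concrete, here is a minimal sketch of a controller test built on EF6's mockable DbSet<T>, using Moq and the standard EF6 test-double wiring. MyContext, MaterialsController, and its Get action are hypothetical stand-ins for the real types; only Material comes from the aggregate roots above.

 // Assumes MyContext exposes a virtual DbSet<Material> Materials property and
 // MaterialsController takes the context in its constructor and returns IHttpActionResult.
 var data = new List<Material>
 {
     new Material { Id = 1, Name = "Steel", Density = 7850 }
 }.AsQueryable();

 var mockSet = new Mock<DbSet<Material>>();
 mockSet.As<IQueryable<Material>>().Setup(m => m.Provider).Returns(data.Provider);
 mockSet.As<IQueryable<Material>>().Setup(m => m.Expression).Returns(data.Expression);
 mockSet.As<IQueryable<Material>>().Setup(m => m.ElementType).Returns(data.ElementType);
 mockSet.As<IQueryable<Material>>().Setup(m => m.GetEnumerator()).Returns(data.GetEnumerator());

 var mockContext = new Mock<MyContext>();
 mockContext.Setup(c => c.Materials).Returns(mockSet.Object);

 var controller = new MaterialsController(mockContext.Object);
 var result = controller.Get(1) as OkNegotiatedContentResult<Material>;  // System.Web.Http.Results

 Assert.IsNotNull(result);
 Assert.AreEqual("Steel", result.Content.Name);
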
Web API Considerations
I soon found that there are many things to consider when building a Web API service. 

  • What are REST best practices?
  • Does it need to be truly RESTful? If so, add HATEOAS (Hypermedia as the Engine of Application State)
    • Provide links when doing operations such as creating new content, updating content, and navigating paged data
    • Links can be in the form of url, rel, and method (get, post, put, delete)
    • HAL (Hypertext Application Language) & Collection+JSON are current standards
  • Will the API be public?
    • Versioning of Web APIs is important for consumer compatibility
  • CORS - Cross-Origin Resource Sharing
    • Modern browsers support this, and there is support for it in Web API 2
    • With CORS in place, there is no need to support JSONP
  • How to handle paging?
    • Use a class as a data envelope for paging (see the sketch after this list)
    • Alternatively, include pagination information in the header
  • Optimistic concurrency in Web API
  • How to handle polymorphic type serialization with JSON?
    • Polymorphic types seem to fly in the face of REST best practices; it's probably not a good idea to use them on a public API
    • They require configuring the JSON formatter to enable type name handling in the Web API configuration:
      config.Formatters.JsonFormatter.SerializerSettings.TypeNameHandling = Newtonsoft.Json.TypeNameHandling.All;
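As an illustration of the data envelope and link ideas above, here is a minimal sketch; the class shapes are my own invention, not from any of the standards mentioned.

 // Hypothetical envelope/link shapes for illustration only.
 public class Link
 {
     public string Href { get; set; }
     public string Rel { get; set; }      // e.g. "self", "next", "prev"
     public string Method { get; set; }   // get, post, put, delete
 }

 public class PagedResult<T>
 {
     public int Page { get; set; }
     public int PageSize { get; set; }
     public int TotalCount { get; set; }
     public List<Link> Links { get; set; }  // HATEOAS-style navigation links
     public List<T> Items { get; set; }
 }
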
Entity Framework Pain Points
Most of my gripes are with EF.  As I've worked on this project, EF has been an unending source of issues.  

  • No eager loading of polymorphic types
    • This means you either have to explicitly load each derived type, or rely on lazy loading.
  • Table per Type (TPT) inheritance breaks cascade delete
    • Makes you feel crazy when the fluent API is configured correctly and you keep getting reference constraint errors!
    • Work around it by executing SQL in the Seed method
  • The worst thing ever - NO GRAPH MERGING FOR DISCONNECTED ENTITIES
    • At first I manually wrote the logic to merge the graph; I got depressed writing books of code and thankfully found GraphDiff.
Without GraphDiff the update logic for one related graph in an entity would look something like this...

                // Update scalar/complex properties of parent
                TheContext.Entry(currentWell).CurrentValues.SetValues(entity);
                // Update related geometry items
                var geometryItemsInDb = currentWell.Geometries.ToList();
                foreach ( var geometryInDb in geometryItemsInDb)
                {
                    // Is the geometry item still there?
                    var geometry = entity.Geometries.SingleOrDefault(i => i.Id == geometryInDb.Id);
                    if (geometry != null)
                        // Yes: Update scalar/complex properties of child
                        TheContext.Entry(geometryInDb).CurrentValues.SetValues(geometry);
                    else
                        // No: Delete it
                        TheContext.WellGeometryItems.Remove(geometryInDb);
                }
                foreach ( var geometry in entity.Geometries)
                {
                    // Is the child NOT in DB?
                    if (geometryItemsInDb.All(i => i.Id != geometry.Id))
                        // Yes: Add it as a new child
                        currentWell.Geometries.Add(geometry);
                }
With GraphDiff updating an entire graph looks like this....
//Note: GraphDiffUpdateGraph is a virtual method for mocking...  Actual method with GraphDiff is UpdateGraph()
TfContext.GraphDiffUpdateGraph(entity, f => f.OwnedCollection(p => p.Geometries)
                                                         .OwnedCollection(p => p.SurveyPoints)
                                                         .OwnedCollection(p => p.Temperatures));
Decisions for my API

  • Chose to include HATEOAS in the API - though in retrospect this may have been a YAGNI violation.
  • Used polymorphic types; this increases the size of the JSON payload and is likely not a REST best practice
  • Used lazy loading in EF to avoid explicitly loading derived types
  • Wrapped DbContext methods such as Entry() and UpdateGraph() to make them mockable
  • Used an object for paging instead of passing the page information in the header

Conclusion
There are several things that are still unresolved in my mind. RESTful APIs by their very nature seem to be in conflict with the Aggregate Root concept.
Do I have an API where I hit the endpoint Project/1 and get back my Project object with all its data, or do I get back a Project with a link to a Fluid entity and a link to a Material entity? What about polymorphism? If I get rid of the polymorphism in the entities, doesn't that increase the number of controllers I'll need to serve the API?
Obviously there is still much to be learned...


Wednesday, December 3, 2014

Web API 2 - Using [FromUri] with GET Methods

Web API 2 has the default route specified as:

     webApiConfig.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
                );

However, consider the following two GET method signatures.

   public async Task<IHttpActionResult> GetAsync(int id);
   public async Task<IHttpActionResult> GetAsync(int page, int pageSize);

Web API first does route template matching, and then it selects the method.   Based on that, these GET methods would map like this...

api/controller                (matches route; can't find a method, as all require parameters)
api/controller/1              (matches route; gets the resource with Id = 1)
api/controller/1/10           (matches route; gets paged data)

What I really wanted was to be able to specify a route with query string parameters... In Web API, query string parameters get bound to parameters of the same name in the controller actions, so I could also call my GetAsync(int page, int pageSize) method as follows.

api/controller?page=1&pageSize=10

That's great but without default values, the route to the resource at api/controller is still going to return this message. "The requested resource does not support http method 'GET'."
The first thought that occurred to me was to make the page and pageSize parameters optional.

public async Task<IHttpActionResult> GetAsync(int page=1, int pageSize=10);

However, this makes the methods indistinguishable from one another. That led me to look for another way to pass multiple parameters to my Get method. This is where the [FromUri] attribute comes into play. Essentially it works much like the [FromBody] attribute used with Put & Post methods. Using [FromUri] specifies that we are going to use data passed in the Url to build up our object.


The result is the following code.  Inspired by this code... 
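Since the embedded snippet isn't reproduced here, the following is a minimal sketch of the approach; PagingParameters and ProjectsController are illustrative names, and the data access calls are stubs.

 public class PagingParameters
 {
     public PagingParameters()
     {
         // Defaults let a bare GET api/controller succeed without a query string.
         Page = 1;
         PageSize = 10;
     }

     public int Page { get; set; }
     public int PageSize { get; set; }
 }

 public class ProjectsController : ApiController
 {
     // GET api/projects/1
     public async Task<IHttpActionResult> GetAsync(int id)
     {
         var project = await FindProjectAsync(id);
         if (project == null) return NotFound();
         return Ok(project);
     }

     // GET api/projects?page=2&pageSize=25, and GET api/projects via the defaults.
     // The complex [FromUri] parameter keeps this overload distinguishable from GetAsync(int id).
     public async Task<IHttpActionResult> GetAsync([FromUri] PagingParameters paging)
     {
         var projects = await LoadPageAsync(paging.Page, paging.PageSize);
         return Ok(projects);
     }

     private Task<Project> FindProjectAsync(int id)
     {
         return Task.FromResult<Project>(null); // stub standing in for real data access
     }

     private Task<List<Project>> LoadPageAsync(int page, int pageSize)
     {
         return Task.FromResult(new List<Project>()); // stub standing in for real data access
     }
 }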

Monday, November 24, 2014

Detecting Binding Errors in WPF

The binding engine in WPF is magical. It makes possible the separation of the view from the view logic. The MVVM pattern is dependent on this magical binding engine. One not-so-great aspect of this is that binding errors fail silently in WPF. There are several ways to get binding errors - for example, changing a property name and forgetting to update the XAML binding expression.

WPF does log these data-binding errors to the output window.  The issue with that is it can sometimes be hard to find them amid all the other stuff being output by the debugger.  If you do manage to find one, it will look something like this....

System.Windows.Data Error: 4 : Cannot find source for binding with reference 'ElementName=DiamSlider'. BindingExpression:Path=Value; DataItem=null; target element is 'Slider' (Name='PipeDiamSlider'); target property is 'ToolTip' (type 'Object')

Another method for finding these errors is using a tool like Snoop or WPF Inspector. This works but it's tedious.

Recently I ran across a great, simple solution to this issue. Behind the scenes, WPF writes these errors out through the PresentationTraceSources class. This means that a simple trace listener can be created to show the output of the binding errors in a message box. This trace listener is registered in the application's main window .xaml.cs. It is a very simple idea, but it has been incredibly helpful in ridding our application of binding errors.
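A minimal sketch of such a listener; the class name is mine, and a MessageBox is just one way to surface the error.

 // Registered once in the main window's constructor:
 //   PresentationTraceSources.Refresh();
 //   PresentationTraceSources.DataBindingSource.Listeners.Add(new BindingErrorListener());
 //   PresentationTraceSources.DataBindingSource.Switch.Level = SourceLevels.Error;
 public class BindingErrorListener : TraceListener
 {
     public override void Write(string message) { }

     public override void WriteLine(string message)
     {
         // Surface the binding error so it can't fail silently.
         MessageBox.Show(message, "WPF Binding Error");
     }
 }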

I discovered this method on tech.pro in an article written by Michael Kuehl. Bea Stollnitz also has a great blog post that digs deeper into methods of detecting binding errors.

Friday, November 21, 2014

Helix 3D Toolkit - Well Viewer Part 2

In September I wrote a post about creating a well profile viewer using the Helix Toolkit. At work the requirements were extended to include showing a tube representing the work-string inside the well-bore, along with the ability to show a color gradient denoting a set of values. I've re-factored the original code from my first post and updated the Gist.

Our torque and drag model generates a list of calculated values at a regular interval of depth. These values are then mapped to the work-string inside the well-bore.  There are two interesting issues here.

The work-string is confined by the well profile in real life, so when modeling this, our work-string path will follow the path of our well. To code the new requirements, the base class created in the first post will be extended, and the reliance on the well profile path will introduce temporal coupling into the code.

The other interesting thing to note is that we don't have a one to one relationship with values coming from the torque and drag model being mapped to 3D points.  It might take 100 or 200 points to represent the 3D center-line of the well-bore, while the torque and drag model can return several thousand data points.   To get around this issue, the data needs to be interpolated by depth at each 3D point along the profile.
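A minimal sketch of that interpolation, assuming both lists are the same length and sorted by depth (the names are illustrative):

 public static double InterpolateByDepth(IList<double> depths, IList<double> values, double targetDepth)
 {
     // Clamp to the ends of the data set.
     if (targetDepth <= depths[0]) return values[0];
     if (targetDepth >= depths[depths.Count - 1]) return values[values.Count - 1];

     // Walk to the segment containing the target depth.
     int i = 1;
     while (depths[i] < targetDepth) i++;

     // Linear interpolation between the bracketing samples.
     double t = (targetDepth - depths[i - 1]) / (depths[i] - depths[i - 1]);
     return values[i - 1] + t * (values[i] - values[i - 1]);
 }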

Overview of Mapping Values as Texture Coordinates in Helix Toolkit
Helix Toolkit uses the concept of texture mapping to apply a skin over the 3D objects. The concepts are explained in detail on Wikipedia. For what we are doing, the concept can be simplified to this: the texture coordinate determines which value from a linear gradient brush is applied to the geometry. Values on a linear gradient brush range from 0 to 1, so the values being mapped will also need to be normalized into this range. Once we have a list of texture coordinates, one for each 3D point, we can simply bind to them from our pipe in XAML.
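For example, the normalization step might look like this (a sketch, not the code from the Gist):

 public static List<double> ToTextureCoordinates(IEnumerable<double> values)
 {
     var list = values.ToList();
     double min = list.Min();
     double max = list.Max();
     double range = max - min;

     // One 0..1 coordinate per 3D point; a flat data set maps to 0.
     return list.Select(v => range > 0 ? (v - min) / range : 0.0).ToList();
 }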


Here is the code.
It was requested that I provide a working example, so I've placed a somewhat simplified project on OneDrive that will help you get up and running.

Monday, November 17, 2014

Why I Attend a .Net User Group

When I started programming, I wished I had another avenue to meet and connect with those that shared a like passion for code.  Frankly, it's fairly difficult to strike up a conversation with most people about coding.  If you work for a large organization you may have plenty of workmates to chat with, however, many programmers are working alone for small organizations.  

User groups provide a great way to meet other passionate developers in the local community. The act of attending a user group can mark you as a developer with drive and passion. When I moved to the Houston area, I joined the NHDNUG (North Houston .NET User Group) and quickly started to reap the benefits of the information I received at the monthly meetings. One speaker talked about Software Craftsmanship and the tools of the trade, causing me to rethink how I approached programming. That talk also led me to NCrunch - a tool that I can hardly imagine not having.

I've since become a board member at our local user group. My role has been small, but I still enjoy the participation. My wife refers to our meetings as "Nerds Anonymous" :) If you've got a programming addiction, it's time to find a user group near you.

Saturday, November 8, 2014

Prism - Custom PopupWindowAction

One of the nice features in Prism is the interaction request pattern supported by the InteractionRequest class. I'll quickly provide a brief overview here. In your view model, you define properties of type InteractionRequest<INotification> and InteractionRequest<IConfirmation>. In your view, you bind to a Blend interaction trigger with an action of type PopupWindowAction. Raising the InteractionRequest from the view model will trigger the PopupWindowAction, showing your pop-up window. This is a neat way to show dialogs while still adhering to the MVVM pattern for testability.

If you do not set the WindowContent property of the PopupWindowAction class, then the DefaultConfirmationWindow or DefaultNotificationWindow will be displayed. These windows are very basic; in fact, they don't have text wrapping. In my application, I mainly use them to display messages to the user. Without text wrapping, messages shown in the DefaultNotificationWindow were truncated.

The WindowContent property can be used to show custom content declared in a user control. Here is a custom user control where I've defined a TextBlock with text wrapping. Note that in the .xaml.cs of the user control, I implement the IInteractionRequestAware interface. This provides the FinishInteraction and Notification properties, which the control will use.
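A sketch of that code-behind (the control name is illustrative):

 public partial class NotificationContentView : UserControl, IInteractionRequestAware
 {
     public NotificationContentView()
     {
         InitializeComponent();
     }

     // Set by the PopupWindowAction to the notification being shown;
     // the view binds to its Title and Content for the wrapped message text.
     public INotification Notification { get; set; }

     // Invoked to close the pop-up when the interaction is complete.
     public Action FinishInteraction { get; set; }
 }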
Often I favor having a window factory for more complex windows; however, for simple dialogs and interactions this works very well.

Tuesday, October 28, 2014

Exception Handling in Web API 2

When working with Web API 2, unhandled exceptions thrown by controllers are returned to the client as HTTP status code 500 (Internal Server Error). An exception filter can be used to replace this generic error with something a little more meaningful. Another benefit to using exception filters is that they centralize exception handling, which means that most of the time you will not need a try/catch statement in your controller methods. Exception filters will catch any exception thrown by a controller method that is not an HttpResponseException.

For debugging purposes I decided I wanted to return the entire exception message in my response. For release I have several exception filters targeted to specific exceptions that return meaningful messages to the client. 

To create an exception filter, derive from the System.Web.Http.Filters.ExceptionFilterAttribute class and override the OnException method.
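A minimal sketch of such a filter; the name is mine, and what you put in the response is up to you.

 public class DebugExceptionFilterAttribute : ExceptionFilterAttribute
 {
     public override void OnException(HttpActionExecutedContext context)
     {
         // Replace the generic 500 body with the full exception text (debugging only).
         context.Response = context.Request.CreateResponse(
             HttpStatusCode.InternalServerError,
             context.Exception.ToString());
     }
 }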

Exception filters can be applied as an attribute to the entire controller class, to individual methods, or registered globally in the WebApiConfig.  I registered mine with the debug compiler switch to ensure that this particular exception filter is not used when compiled for release.
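The registration might look like this, assuming the filter sketched above:

 public static void Register(HttpConfiguration config)
 {
 #if DEBUG
     // Full exception details never leave a release build.
     config.Filters.Add(new DebugExceptionFilterAttribute());
 #endif
 }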


One more thing to consider is the status code you're sending back with the exception. Sending back the correct status code is important to help the client determine what has gone wrong. Brockallen has a nice guide for which HTTP status code to use on his blog.

This takes care of the majority of errors that are a concern when building out a Web API solution; however, there are still exceptions that will not be caught by exception filters. That is where Web API Global Error Handling comes into play.

Saturday, October 18, 2014

C# Conventions & Best Practices - devdoc2013

Several months ago I ran across devdoc2013 over on David Anderson's blog. The subtitle of this paper sums it up: "Programming conventions, and general developer best practices for software developers writing C# with .NET." Good stuff!

Wednesday, October 15, 2014

Sorting ListCollectionView as Object Values Change

Recently, a new requirement popped up: sort a list of objects in a data-grid, bound to a ListCollectionView, as object values in the collection were changed. In our case, the value we wanted to sort on was a double.

At first I tried to implement this using a SortDescription, but I quickly realized this had one drawback. The SortDescription seems to apply an alphanumeric sort to the data column. Values (1, 2, 10, 200, 100) would be sorted as (1, 10, 100, 2, 200), which was precisely not what I wanted. To get a pure numeric sort it seemed I'd have to write my own logic.

Thankfully the ListCollectionView provides another property, CustomSort, which takes an IComparer; the MSDN docs mention that it's preferred to derive from Comparer<T> instead of implementing the interface directly. With that in mind I was able to create the following class.
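A sketch of such a comparer; RowItem and its double Value property are hypothetical stand-ins for the real row type.

 public class RowValueComparer : Comparer<RowItem>
 {
     public override int Compare(RowItem x, RowItem y)
     {
         // Pure numeric comparison: 1, 2, 10, 100, 200 sort in that order.
         return x.Value.CompareTo(y.Value);
     }
 }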


This custom sorter was then wired into my ListCollectionView like so.
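Something along these lines, assuming Rows is the bound collection:

 var view = (ListCollectionView)CollectionViewSource.GetDefaultView(Rows);
 view.CustomSort = new RowValueComparer();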
This provided the desired functionality of sorting each row into the correct position as the values were edited.
Row in edit mode is not sorted

Edit completed and row is sorted into place

While this worked great for what we were doing, I can't speak to how this would perform with extremely large sets of data.  

Sunday, October 5, 2014

XAML - Vertical & Horizontal GridSplitters

Often it's nice to allow the user to re-size segments of the screen. WPF makes this possible with the GridSplitter class. The MSDN documentation is pretty clear on its usage, but here is another example I created in XamlPadX.

Horizontal & Vertical GridSplitter 
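The original markup isn't reproduced here, but a minimal version looks something like this:

 <!-- Vertical splitter between two columns; horizontal splitter between two rows. -->
 <Grid>
   <Grid.ColumnDefinitions>
     <ColumnDefinition Width="*"/>
     <ColumnDefinition Width="Auto"/>
     <ColumnDefinition Width="*"/>
   </Grid.ColumnDefinitions>
   <Grid Grid.Column="0">
     <Grid.RowDefinitions>
       <RowDefinition Height="*"/>
       <RowDefinition Height="Auto"/>
       <RowDefinition Height="*"/>
     </Grid.RowDefinitions>
     <ListBox Grid.Row="0"/>
     <GridSplitter Grid.Row="1" Height="5" HorizontalAlignment="Stretch" ResizeBehavior="PreviousAndNext"/>
     <ListBox Grid.Row="2"/>
   </Grid>
   <GridSplitter Grid.Column="1" Width="5" VerticalAlignment="Stretch" HorizontalAlignment="Center" ResizeBehavior="PreviousAndNext"/>
   <ListBox Grid.Column="2"/>
 </Grid>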


Beware: if you have a parent container that is scrollable, you will have trouble getting the GridSplitter to work correctly. I made this mistake while working with PRISM. Initially, in the shell I wrapped my main content region with a ScrollViewer. It seemed like a good idea at the time and I promptly forgot I'd done so. While working in a module that loaded into the main content area, I noticed that my horizontal GridSplitter between two ListBox controls was not working correctly. When adding items to the top list box, the grid splitter would be pushed down the screen. Using WPF Inspector, I was able to trace the issue to the ScrollViewer in the shell.


Monday, September 22, 2014

Helix 3D Toolkit - Well Viewer Part 1

The last few years I've been working in the oil industry, programming small custom engineering applications.  One common requirement is to have a 3D plot of the well trajectory.  My first crack at this several years ago was rather crude, basically using a 3D to 2D orthographic transformation.  In fact the first rendition of it was done in an Excel VSTO application and plotted using an Excel chart. After migrating the application to WPF I plotted it using the SciChart chart package.  Though not beautiful, this worked and did enough to get us by while we worked on more technical aspects of our program.


3D well profile plotted with SciChart
I was aware of WPF's native 3D capability, but I never could find the time to dig into it and build a better 3D well survey plot. I stumbled on the Helix 3D Toolkit on NuGet several months ago, and just recently I found the time to dive in and replace our 3D plot with a more attractive and functional tool.

Helix wraps the core WPF 3D functionality to provide an extra level of sweetness and ease of use. It recently moved to GitHub and appears to be a project with growing activity. Documentation is nearly non-existent; however, the source code has a great suite of sample applications. Code samples are worth thousands of words :)

I decided to have a preview well plot and then allow the user to open up a new window which would contain a larger version of the plot and expose more controls to manipulate the 3D view-port.

3D well profile preview plotted with Helix 3D Toolkit

Window giving a larger view-port and more user controls

One of the nice things about the toolkit is that the controls provide a huge amount of functionality. For example, I was able to databind to a collection of 3D points representing my tube path. The HelixViewport3D class provided panning, zooming, rotation, copy-image-to-clipboard, etc. Honestly, after reading through several of the code samples, creating this plot was very simple.


It's not all sunshine and roses; I did run into one issue. For my preview plot I keep one instance of the view instantiated, and when a user clicks on a new well, a new WellSurveyPlotViewModel is created and the data context of the preview view is updated. When the user clicked the Open Well Viewer button, a window service created a new window, passing off the WellSurveyPlotViewModel as its data context. I could not get the view to zoom to extents with the new camera information. After reading the source code I arrived at the idea of using an attached property to reset the camera and call for a re-zoom. The following two lines, in an attached property hooked to a ReZoom boolean property in the preview view model, solved the issue.

  viewport.Camera = viewport.DefaultCamera;
  viewport.ZoomExtents();
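For completeness, here is a sketch of how those two lines might be wired into such an attached property (the class and property names are illustrative):

 public static class ViewportZoomBehavior
 {
     public static readonly DependencyProperty ReZoomProperty =
         DependencyProperty.RegisterAttached(
             "ReZoom", typeof(bool), typeof(ViewportZoomBehavior),
             new PropertyMetadata(false, OnReZoomChanged));

     public static bool GetReZoom(DependencyObject obj) { return (bool)obj.GetValue(ReZoomProperty); }
     public static void SetReZoom(DependencyObject obj, bool value) { obj.SetValue(ReZoomProperty, value); }

     private static void OnReZoomChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
     {
         var viewport = d as HelixViewport3D;
         if (viewport == null || !(bool)e.NewValue) return;

         // Reset the camera and re-zoom when the bound ReZoom flag flips to true.
         viewport.Camera = viewport.DefaultCamera;
         viewport.ZoomExtents();
     }
 }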

Overall this toolkit is great, using the MVVM pattern I was able to recreate our 3D plot using Helix in a weekend of several short coding sessions.

Saturday, September 20, 2014

PRISM & WPF Resource Dictionaries

The web is a great spot to get information, getting something useful out of the information can be another matter. At work, I've been spending a large amount of time working on a PRISM based modular WPF application. One of the things on my to-do list has been to figure out how to "correctly" structure resource dictionaries in a modular application.

The longer I work with WPF, the more I value the concept of blend-ability. One of the issues I found with several of the approaches to managing resource dictionaries is that they are compiled at run time, and at design time they aren't available. It's quite annoying to never know what anything is going to look like until the application runs.

After digging around on the net, I realized there was some contradicting advice. For example, one camp believes in cramming everything into one dictionary in the name of performance. The other camp believes in using a new resource dictionary for every style. I decided to go with organization over performance, knowing that if performance became an issue things could be restructured.

By piecing together blog posts, MSDN articles, and stackoverflow questions I arrived at a system that is working well.

Resources structure:
  • I found this blog post which outlines a folder structure for resource files and goes by the style-per-resource-dictionary rule. After adding a new resource dictionary file into the folder structure, a reference to that file is added to a merged dictionary referred to as the ResourceLibrary. The ResourceLibrary is then referenced by all the project modules.
  • In each PRISM module, a local resource dictionary can be merged into that module's App.xaml file for further module-specific styling.
Design Time Blendability: 
  • This SO post contains the key to a very simple solution to allow resources to be visible in Blend and VS at design time. Leave the App.xaml in each PRISM module. If you've already deleted it, add one back into the project. In the App.xaml, use a merged dictionary and a Pack Uri to reference the resource dictionary in your shared infrastructure project.
 <Application x:Class="KNE.Athena.ProjectModule.App"  
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">  
   <Application.Resources>  
     <ResourceDictionary>  
       <ResourceDictionary.MergedDictionaries>  
         <ResourceDictionary Source= "pack://application:,,,/KNE.Athena.Infrastructure;component/ResourceDictionaries/ResourceLibrary.xaml"/>  
       </ResourceDictionary.MergedDictionaries>  
       <Style TargetType="{x:Type Rectangle}"/>  
     </ResourceDictionary>  
   </Application.Resources>  
 </Application>  

Note the interesting line inside the merged dictionary: <Style TargetType="{x:Type Rectangle}"/>
I can't actually say for certain that it is still necessary, as I seem to have gotten this to work without it; however, it is there to fix a Microsoft bug...

Other Notes - BasedOn Styles need merged dictionary, and performance concerns:

After reorganizing my styles into the structure described in section one, I was surprised to find that styles that made use of the BasedOn property were not correctly inheriting the base styles. The answer for why this was occurring was, of course, on StackOverflow. When the base style and the derived style are not defined in the same .xaml file, one must first merge in the style from the other resource dictionary. Below you can see my AddButton is based on Button, and at the top of the file I merge in the button style.

 <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"  
           xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">  
   <ResourceDictionary.MergedDictionaries>  
     <ResourceDictionary Source="../BaseControlStyles/ButtonStyle.xaml"/>  
   </ResourceDictionary.MergedDictionaries>  
   <Style x:Key="AddButtonStyle" BasedOn="{StaticResource {x:Type Button}}" TargetType="{x:Type Button}">  
     <Setter Property="Content" Value="+" />  
     <Setter Property="Width" Value="25"/>  
     <Setter Property="Height" Value="25"/>  
     <Setter Property="ToolTip" Value="Add Item"/>  
   </Style>  
 </ResourceDictionary>  

Last, be aware that performance issues could arise. Apparently in WPF, each time a control references a resource dictionary, a new instance of the resource dictionary is created. Repeatedly parsing many lines of XAML can have a negative performance impact. This is another thing that I'm not sure if Microsoft has addressed; the blog post describing it is several years old. The fix outlined in that post is simple, but I have not yet found a need to implement it in my application.

Friday, September 19, 2014

Web API 2 Attribute Routing - Error 500

I ran into a 500 error last week while working on a project that uses Web API 2 for a REST-like web service. Upon looking up error 500, I found that it is returned when there is more than one possible route for a request. I went and looked at my convention-based route definition, which looked fine. Then I paid a visit to the controller, where I found that I had route attributes on my put and post methods that were defined as empty strings. Whoops!

 [Route("")]  

Lesson learned :) In Web API 2, if you mix route attributes with convention-based routing, the route attribute takes precedence over the convention-based route. If a route attribute is not present, the default convention-based route will be used. The route can't be duplicated!


Sunday, September 14, 2014

WPF DataGrid or ListView does not follow underlying ICollectionView selection

If you've ever used a Selector-derived control and an ICollectionView with the IsSynchronizedWithCurrentItem property set to true, you may have noticed that the UI can get out of sync with the actual selected item. This seems to be a bug in the underlying implementation of the Selector class.

I ran into this bug recently and was glad to find a solution on Tim Valentine's blog, coderelief.net. It involves using an attached property to keep the UI and the ICollectionView synced.

Sunday, August 31, 2014

Entity Framework - Do Not Delete LocalDb !

Entity Framework and LocalDb go together like bread and butter when developing a new application. However, if you have ever had a problem with your LocalDb instance, manually deleting the .mdf file is probably not the way to go.

I deleted one of my .mdf files the other day and thought it was strange that I could no longer create a new instance of the database using the default EF conventions.  A brief web search set me straight. This blog tells the whole story.  

The short version is that SQL Server maintains a reference to the .mdf files it has created; when you manually delete the file, SQL Server doesn't know the file is gone. So, don't manually delete the file.

If you, like me, have already deleted LocalDb and still want to use your database name, you're looking for the fix. The fix is in the referenced blog post, but for completeness I'll repeat what it says. Go download the SqlCmd utility and run the following command...

C:\>sqlcmd -S (localdb)\v11.0 -E -d master -Q "DROP DATABASE [myApp]"

You will be presented with an error like the following

Msg 5120, Level 16, State 101, Server User1-PC\LOCALDB#5725A8FF, Line 1
Unable to open the physical file "C:\Users\User1\myApp.mdf". Operating
system error 2: "2(The system cannot find the file specified.)".
File activation failure. The physical file name "C:\Users\User1\myApp_log.ldf"
may be incorrect.


The LocalDb database file will now be correctly unregistered.  If in the future you want to delete your database, use this method instead of a hard delete.  

Friday, August 29, 2014

Issue using WPF DataGrid ColumnHeader with a DataTemplate & ClipboardCopyMode="IncludeHeader"

While working on a project recently, I ran into a small hitch with the WPF data grid.  The software I spend most of my time working on relies heavily on units of measure.  A common requirement is to have units of measure displayed in the data grid column headers.  Something like this...


However, you can't directly bind to the Header property of the data grid in XAML.  There are probably several ways this could be worked around, but a common solution is to use a HeaderTemplate.  The XAML to create a data template for the data grid column header would look like this.

 <UserControl.Resources>  
     <DataTemplate x:Key="FlowRate" DataType="DataGridColumnHeader">  
       <TextBlock Text="{Binding Source={x:Static units:UnitsContext.CurrentSymbols}, Path=FlowUnit, StringFormat=Flow-Rate ({0})}" />  
     </DataTemplate>  
     <DataTemplate x:Key="Pressure" DataType="DataGridColumnHeader">  
       <TextBlock Text="{Binding Source={x:Static units:UnitsContext.CurrentSymbols}, Path=PressureUnit, StringFormat=Pressure ({0})}" />  
     </DataTemplate>  
   </UserControl.Resources>  

All is well and good until the next requirement comes along. Users need to be able to copy the data from the data grid into Excel, including the column headers. The data grid has a clipboard copy setting, ClipboardCopyMode="IncludeHeader", but if the column uses a HeaderTemplate, it will show up as an empty header in Excel. A HeaderTemplate could include almost anything, and it isn't guaranteed to be text. Therefore, I had to admit to myself that, while annoying, this data grid issue does make sense.

I arrived at a simple fix by using an attached property to move the text from the header template into the data grid's header.

  /// <summary>  
   /// WPF Data grid does not know what is in a header template, so it can't copy it to the clipboard when using ClipboardCopyMode="IncludeHeader".  
   /// This attached property works with a header template that includes one TextBlock. Text content from the template's TextBlock is copied to the  
   /// column header for the clipboard to pick up.  
   /// </summary>  
   public static class TemplatedDataGridHeaderText  
   {  
     private static readonly Type OwnerType = typeof(TemplatedDataGridHeaderText);  
     public static readonly DependencyProperty UseTextFromTemplateProperty = DependencyProperty.RegisterAttached("UseTextFromTemplate", typeof(bool), OwnerType, new PropertyMetadata(false, OnHeaderTextChanged));  
     public static bool GetUseTextFromTemplate(DependencyObject obj)  
     {  
       return (bool)obj.GetValue(UseTextFromTemplateProperty);  
     }  
     public static void SetUseTextFromTemplate(DependencyObject obj, bool value)  
     {  
       obj.SetValue(UseTextFromTemplateProperty, value);  
     }  
     private static void OnHeaderTextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)  
     {  
       var textColumn = d as DataGridTextColumn;  
       if (textColumn == null) return;  
       if (textColumn.HeaderTemplate == null) return;  
       var headerTemplateTextBlockText = textColumn.HeaderTemplate.LoadContent().GetValue(TextBlock.TextProperty).ToString();  
       textColumn.Header = headerTemplateTextBlockText;  
     }  
   }  

An alternative approach might be to directly set the header text through an attached property...

  /// <summary>  
   /// Allows binding a property to the header text. Works with the clipboard copy mode IncludeHeader.  
   /// </summary>  
   public static class DataGridHeaderTextAttachedProperty  
   {  
     private static readonly Type OwnerType = typeof(DataGridHeaderTextAttachedProperty);  
     public static readonly DependencyProperty HeaderTextProperty = DependencyProperty.RegisterAttached("HeaderText", typeof(string), OwnerType, new PropertyMetadata(OnHeaderTextChanged));  
     public static string GetHeaderText(DependencyObject obj)  
     {  
       return (string)obj.GetValue(HeaderTextProperty);  
     }  
     public static void SetHeaderText(DependencyObject obj, string value)  
     {  
       obj.SetValue(HeaderTextProperty, value);  
     }  
     private static void OnHeaderTextChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)  
     {  
       var textColumn = d as DataGridTextColumn;  
       if (textColumn == null) return;  
       textColumn.Header = GetHeaderText(textColumn);  
     }  
   }  

In XAML, the attached property can be used on each data grid column...

 <DataGrid ItemsSource="{Binding }" AutoGenerateColumns="False" IsReadOnly="True" VerticalScrollBarVisibility="Auto" VerticalAlignment="Stretch">  
     <DataGrid.Columns>  
       <DataGridTextColumn Binding="{Binding FlowRate.UserValue, StringFormat=N3}" HeaderTemplate="{StaticResource FlowRate}"  
                 attachedProperties:TemplatedDataGridHeaderText.UseTextFromTemplate="True"/>  
       <DataGridTextColumn Binding="{Binding Pressure.UserValue, StringFormat=N3}" HeaderTemplate="{StaticResource Pressure}"  
                 attachedProperties:TemplatedDataGridHeaderText.UseTextFromTemplate="True"/>  
     </DataGrid.Columns>  
   </DataGrid>  

Thus far, this has proved to be a simple solution for this particular data grid issue :)




Thursday, August 14, 2014

.NET Rocks!

This past year the .NET Rocks! Road Trip stopped in Houston, and I took the opportunity to go to the event and watch Carl & Richard do a live show. Speaking with several developers at the event, I was surprised to learn that it was the first time they'd heard of .NET Rocks! Keeping technical content entertaining and enlightening is not an easy task, so kudos to these guys for a great show.

With over 1000 shows and counting there is plenty of content.  On my daily commute I listen to the podcast, and often find myself diving deeper into stuff I hear about on the show.

If you haven't had the opportunity to listen, check out .NET Rocks! It's an entertaining way to stay up to date on all that's going on in the fast-paced world of programming.



Monday, July 28, 2014

Joe Rainsberger - Integration Tests Are a Scam

Just wanted to pass on this interesting video I came across on the web. The video, titled Integration Tests Are a Scam, was given by Joe Rainsberger in 2009. Joe suggests using smaller, more targeted tests as a first line of defense to guard against regressions caused by refactoring.

INFOQ - Integration Tests Are a Scam

Saturday, July 26, 2014

IOC - Newables & Injectables and the Abstract Factory

In my reading around the blogs and StackOverflow posts this week, I came across the topic of newables and injectables in IoC. If you're not familiar with the topic, the following two posts explain in detail the issue of these two types of objects in our code.

To new or not to new
How to write testable code

To summarize: newables should only ask for other newables, and injectables should only ask for other injectables, in their constructors. Newables can be aware of injectables (pass them through methods), but should not take instances of injectables in the constructor. If these rules are followed, life will be good, your system will be loosely coupled, and you'll be in a state of testing nirvana.

The reality is that very often you'll have a condition in the code that is unknown until run time. Say you have a class that coordinates some sort of work, and you end up with something like this.
public class WorkDoer : IWorkDoer
{
    public WorkDoer(ITaskRunner taskRunner, WorkObject workToBeDone) { }
}


In the above example, the rule of injectables and newables has been broken: an injectable (ITaskRunner) and a newable (WorkObject) both arrive through the constructor. Yes, it does complicate things, but not horribly. To deal with this in your code, an Abstract Factory will be needed to create WorkDoer instances. The Abstract Factory is great because it allows us to keep our code testable and loosely coupled. In the test harness, if you're using a mocking framework, you can simply have your abstract factory return a mock of whatever the factory creates.
Following the newable and injectable rule certainly makes life easier. It keeps each part of the code relatively simple to test in isolation from other parts of the code. However, in the case where an injectable must have an instance of a newable passed through the constructor, the Abstract Factory pattern will save the day.
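Here is a minimal sketch of such a factory for the WorkDoer above; IWorkDoerFactory is a made-up name for illustration.

 public interface IWorkDoerFactory
 {
     IWorkDoer Create(WorkObject workToBeDone);
 }

 public class WorkDoerFactory : IWorkDoerFactory
 {
     private readonly ITaskRunner _taskRunner; // injectable, supplied by the container

     public WorkDoerFactory(ITaskRunner taskRunner)
     {
         _taskRunner = taskRunner;
     }

     public IWorkDoer Create(WorkObject workToBeDone) // newable, supplied at run time
     {
         return new WorkDoer(_taskRunner, workToBeDone);
     }
 }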


Mark Seemann, author of Dependency Injection in .NET, has this post on SO, which is an answer to the issue discussed above. I point this out as Mark's SO answers have helped clarify many of my questions around IoC.

Monday, July 21, 2014

Linq Distinct - Override Equals and GetHashCode

Recently in our code base I ran across a portion of code that was selecting out a distinct set of objects. The Equals method had been overridden to provide equality based on values of the object, and was being used to compare object equality between objects in a list. If the objects proved to be equal, the first instance was transferred into a new list. There was a comment beside this code that said something like "Check into using LINQ Distinct for this"...

After setting up a unit test and overriding the Equals method, I noted that I could not get my tests to pass. I turned to the MSDN docs for help. The issue was that the GetHashCode method must also be overridden. The idea behind this is that when doing an equality comparison, GetHashCode and Equals are both used by GenericEqualityComparer<T>. GetHashCode is used to help determine if an object is possibly equal; if it is, then the Equals method will be called to determine absolute equality. If your object is mutable, getting a stable hash code from it can be impractical. In that case, returning 1 as the hash code is probably the best option, though it will affect look-up performance.

Once this was understood, there came the more nuanced issue of choosing whether to implement IEquatable<T>. If you don't implement this interface, methods in the BCL that use the GenericEqualityComparer<T> will still function, so what's the point? Once again the MSDN docs explain how things work.

"For a value type, you should always implement IEquatable(Of T) and override Object.Equals(Object) for better performance. Object.Equals boxes value types and relies on reflection to compare two values for equality. Both your implementation of Equals and your override of Object.Equals should return consistent results.
If you implement IEquatable(Of T), you should also implement IComparable(Of T) if instances of your type can be ordered or sorted. If your type implements IComparable(Of T), you should also always implement IEquatable(Of T)."


A few more things to think about... Don't forget to override the equality and inequality operators to keep object behavior consistent. If you're overriding Equals, you may want to think about sealing your class, to keep inherited classes from messing up an Equals implementation.
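An illustrative sketch pulling those pieces together (the Measurement class is made up):

 public sealed class Measurement : IEquatable<Measurement>
 {
     public Measurement(string name, double value)
     {
         Name = name;
         Value = value;
     }

     public string Name { get; private set; }
     public double Value { get; private set; }

     public bool Equals(Measurement other)
     {
         if (ReferenceEquals(other, null)) return false;
         if (ReferenceEquals(other, this)) return true;
         return Name == other.Name && Value.Equals(other.Value);
     }

     public override bool Equals(object obj)
     {
         return Equals(obj as Measurement);
     }

     public override int GetHashCode()
     {
         unchecked
         {
             int hash = Name == null ? 0 : Name.GetHashCode();
             return (hash * 397) ^ Value.GetHashCode();
         }
     }

     public static bool operator ==(Measurement left, Measurement right)
     {
         return ReferenceEquals(left, right) || (!ReferenceEquals(left, null) && left.Equals(right));
     }

     public static bool operator !=(Measurement left, Measurement right)
     {
         return !(left == right);
     }
 }

With this in place, LINQ's Distinct() behaves as expected, returning one instance per distinct name/value pair.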


Sunday, June 29, 2014

Switching from SVN to TFS - via Visual Studio Online

At work we've been using SVN for source control, and we were using www.codespaces.com as our hosting provider until they evaporated into thin air last week. Apparently, they had a weak password... a critical mistake. It was rather scary to realize that the code you thought was safe could so quickly be gone. Thankfully, I had recent pulls of all our SVN repositories on my laptop.

I viewed this tragedy as an opportunity to move over to Visual Studio Online, which is the cloud version of Team Foundation Server.  I'd been using this service for a few personal projects. It's free for teams of up to five and you can't beat free for a price. 

It became apparent rather quickly that there are several decisions that must be made when setting up a TFS team project. Do you use one team project for each VS solution that you want in source control? Which project management template do you use - Agile vs. Scrum? Honestly, after using SVN for so long it felt a little daunting. Not everything in SVN has a direct equivalent in the world of TFS.

As I started migrating code across from SVN to TFS, I tried to keep track of the decision points and lessons learned.  

Visual Studio Online 

TFS = ALM = More than Source Control
TFS is more than just source control. TFS is also an ALM (Application Life-cycle Management) system. Microsoft has put together a set of documentation on CodePlex designed to quickly bring you up to speed.

Branching
SVN convention is to use three directories (trunk, tags, and branches) to manage your branching workflow. TFS convention seems to be to use a Main folder in place of trunk. Branches derive from Main, much like in SVN; however, there is no equivalent concept of tags. TFS provides a label (which could be thought of as a weak tag), and it really just serves as a marker on the commit chain. On the upside, TFS provides a very cool feature called shelvesets, essentially allowing a commit to source control that doesn't go directly back into the branch of code you're working from.


One or Many - Team Projects 
My first attempt at setting things up in TFS mimicked what I did with SVN and I started creating team projects for every SVN repository that I had.  However, work planning often spans project boundaries, making a team project for every set of source code a less than ideal solution.  

The takeaway is this
    • Having one team project provides greater flexibility than having several
    • Team projects at this point in time cannot be renamed 
    • Within one team project, renaming and reorganizing are simple
    • Work areas provide separation for project planning
    • Within one team project, separate teams can be created as needed

External Dependencies
At first, it was not obvious to me how to manage several separate, sometimes interconnected projects in TFS. In SVN, every solution was committed to its own repository. If there was a dependency between projects, it was simple to use tags with an external reference to compose interdependent projects together.

In TFS the recommendation is to use binaries for external dependencies. There are two ways to do this: commit the dependency .dll's along with the source code, or use the fancy new method of a local NuGet server. Microsoft's ALM quick-start leans in favor of using a local NuGet server if possible. If you do go the route of including the dependencies in a folder within the project, make sure that folder gets checked in; .dll's are excluded from source control by default.



Removing a Team Project
After switching to one team project to manage all our code, I realized that I wanted to delete all the single team projects I had created. This, at this point, cannot be done through the online management portal. A simple command line tool can be used to safely remove old team projects.
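A minimal sketch, assuming the TFSDeleteProject tool that ships with Team Explorer and a made-up account and project name:

 TFSDeleteProject /collection:https://youraccount.visualstudio.com/DefaultCollection OldTeamProject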





Setting up Builds, Dropping to TFS server
TFS in Visual Studio Online has an integrated build server.  This has varying levels of usefulness depending on your project.  One of the build drop options is to drop it directly onto the TFS server.  This is a nice feature, which hopefully in the future will allow for some way to publicly expose the drops for external project stakeholders.

Information sourced from
Team Foundation Service updates - Oct 29

Scrum Vs. Agile
Previous to TFS, we managed our work backlog through emails, notes on scratch paper, and verbal banter. Needless to say, anything is probably an improvement on that. We'd tried several things, but in reality we found that we never used our tools, simply because they were so separate from VS, where we needed them. VS 2013 has very nice integration with Visual Studio Online, which has solved this issue for us.

When setting up a team project, you are asked to choose a process template. Several templates are presented, among them templates for Agile and Scrum. Not knowing a whole lot about either methodology, I ended up choosing the Scrum process template based on the following two blog posts.

Agile vs Scrum process templates - TFS
Scrum for Everyone

While we are not currently using the project management tools to their full potential, I'm sure that as we learn more about the methodology the tools will give us more value.

The Verdict
Microsoft is providing a great service at a great price. The holistic approach of providing project management tools along with source control makes sense. Currently there are a few weak points: SharePoint via Office 365 doesn't integrate well with Visual Studio Online, and there is no simple built-in way to provide external stakeholders with code drops from automated builds. However, I'm sure as the product matures MS will continue to integrate and innovate to keep providing value.

Saturday, June 14, 2014

Uncle Bob's Advice on Clean Code

Several years ago, I read Robert C. Martin's book, Clean Code: A Handbook of Agile Software Craftsmanship, and I was inspired to write cleaner, higher quality code.

Uncle Bob has created a video series that goes beyond the material covered in his book. Recently, in an effort to better grasp all of the SOLID principles, I started watching the video series from the beginning. The format is entertaining and the material is thought provoking.

The basis of episode 1 "Clean Code," (and for that matter, the entire series), is that code is written for humans. Code has to be maintained. After code is written, it is read over and over again during maintenance. While it may be subjective as to what clean code actually looks like, Uncle Bob makes many strong arguments about what makes code maintainable.

So far I've watched four episodes and I've been thoroughly entertained and inspired. I'm amazed at how much I have learned. The second episode talks about code, space, class, and function names. The third and fourth episodes dig into functions. Here is a short outline of episodes three and four to give you an idea of what the videos contain:

After a brief astronomy lesson, Uncle Bob starts appearing in varying places all over the world, talking about functions. A set of guidelines for writing functions is given. For example, functions shouldn't have more than three parameters, and functions shouldn't take a Boolean as a parameter. He gives solid arguments as to why that is. If the pointy-haired boss threw something like that into a requirements document, you'd probably be inclined to question the sanity of his reasoning. Thankfully, it's coming from Uncle Bob, whose time in the industry has served as a proving ground for his reasoning on clean code.

If you haven't ever read the book Clean Code, check it out.  If you have, check out the videos...They offer a deeper dive into the material in a very entertaining package.

Saturday, June 7, 2014

NuGet - Failed to initialize the PowerShell Host.


While loading AutoMapper from NuGet, I encountered the following error.

Failed to initialize the PowerShell host. If your PowerShell execution policy setting is set to AllSigned, open the Package Manager Console to initialize the host first.

A little searching yielded the following post.



The trick is to close Visual Studio, open PowerShell as an administrator, then change the execution policy to something less restrictive. Then re-open Visual Studio and re-attempt the NuGet install. I've noticed that if I don't leave PowerShell running when I do this, it doesn't work. Notice that the magic PowerShell command is Set-ExecutionPolicy Unrestricted.


I had to iterate on this a few times, and to get AutoMapper to install I had to go all the way down to Unrestricted.  After updating I changed it back to AllSigned...


Tuesday, June 3, 2014

Generating Help Documentation from XML Comments in .Net

Everyone seems to agree that using XML comments/documentation in code is the best way to keep code documents maintainable.

I've been working on a project, diligently filling in XML documentation on the public classes and methods in our API.  I knew I'd read plenty of articles on the web that described how documentation should be in one place and how a tool should be used to help generate documentation from the code base.  However, I had not dug into the actual implementation surrounding how to accomplish this utopia in a project.

After a few searches, the following became evident.
  • There is not a tool from Microsoft to transform the XML documents from code into something useful like a help file or website. 
  • However...
    • Microsoft used to have one called Sandcastle; development stopped a few years back.
    • Development continues as an open source project called Sandcastle Help File Builder
      • Sandcastle Help File Builder provides
        • VS integration.
        • Command line
        • GUI
  • Doxygen is a great option for several languages, including C#
    • Doxygen can technically be used with VB.Net, but currently the links to any working scripts to get that functioning seem to be broken.
    • Doxygen provides
      • Command line
      • GUI
With that information, I was able to make an informed decision as to what tool I'd use on my current project. I ended up going with Sandcastle.  Like most new things, it takes some getting used to.  The documentation for how to use Sandcastle Help File Builder is top notch, so I won't go into detail.  The integration with Visual Studio is great, as it makes all the configuration feel somewhat familiar.  If you're looking for a documentation tool for your project, I'd recommend checking it out.

General XML Comment References:
MSDN Magazine June 2002
MSDN Magazine May 2009
C# Programming Guide
XML Comment Cheat Sheet

Monday, May 26, 2014

AutoMapper - IOC vs. Static Mapper

I've been using AutoMapper for almost a year and I never noticed that it has an interface for the mapping engine cleverly named IMappingEngine.  I stumbled upon this post by Jimmy Bogard.  In the post Jimmy shows how to use AutoMapper with an IoC (Inversion of Control) container and more specifically, StructureMap.

I'm in two minds over this matter. Technically, a test written against code that uses the static AutoMapper Mapper class would be considered an integration test. You lose one tiny seam in your code... Additionally, code written in this manner hides its dependency on the external mapper. However, how big of a deal is it? When writing code that uses dependency injection, we are always monitoring our constructors for injection bloat, but that's a separate issue. On one hand, if I use the static over the injected, I have one less parameter in the class constructor. On the other, I hide a dependency and I can't write a mock to verify that AutoMapper was called.

I've tried out both approaches in my code, and for now I'm leaning toward injection for one reason alone: I'm not a big fan of writing a test only to find it failing because I forgot to configure AutoMapper in my unit test. Having it in the constructor forces the issue and reminds me to either write a mock for it or configure and inject the static mapping engine. Typically, when writing my unit tests I use Moq. However, sometimes mocking AutoMapper seems to make little sense, as the mapping is an integral part of the code. Interestingly, in the comments on Jimmy Bogard's post, there is a conversation about this very issue.

Which should you use?  The answer is, it depends...  What is important to you?  What are you trying to test?

In case you find yourself wanting to use your IoC container, here is an example from StackOverflow  on how to register the IMappingEngine for resolution in Unity. 

container.RegisterType<IMappingEngine>(new InjectionFactory(_ => Mapper.Engine));

This works with any static global configuration done using Mapper.CreateMap<>(). Just ensure that your configuration is called once before using the mapping engine.
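For completeness, here is a sketch of what a consumer looks like once the engine is injected; ProjectService and ProjectDto are hypothetical names.

 public class ProjectService
 {
     private readonly IMappingEngine _mapper;

     // The dependency on AutoMapper is explicit and can be mocked in unit tests.
     public ProjectService(IMappingEngine mapper)
     {
         _mapper = mapper;
     }

     public ProjectDto ToDto(Project project)
     {
         return _mapper.Map<Project, ProjectDto>(project);
     }
 }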

XAML Parse Error: Failed to Create a Type from the Text

Recently, while working on a data template in XAML, I encountered this error:  


After lots of searching and testing, I found the issue. It ended up being with the following line:
 xmlns:viewModels="clr-namespace:RZP.Project.ViewModels"

It was missing the assembly name and that caused the error.  
xmlns:viewModels="clr-namespace:RZP.Project.ViewModels;assembly=RZP.Project"

Wednesday, May 21, 2014

Moqing Prism's EventAggregator

Writing unit tests against the Prism EventAggregator using Moq should be easy, right? After all, there is the IEventAggregator interface, and all of our events are separate classes.

Below are two examples.  Example 1 verifies a certain event is published twice.
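Since the original snippet isn't reproduced here, this is a sketch of example 1 with made-up event and view model names; it relies on CompositePresentationEvent<T>.Publish being virtual so Moq can intercept it.

 // Assumed types: public class WellChangedEvent : CompositePresentationEvent<Well> { }
 // and a WellViewModel that publishes a WellChangedEvent each time ChangeWell() runs.
 var eventAggregator = new Mock<IEventAggregator>();
 var wellChangedEvent = new Mock<WellChangedEvent>();
 eventAggregator.Setup(ea => ea.GetEvent<WellChangedEvent>())
                .Returns(wellChangedEvent.Object);

 var viewModel = new WellViewModel(eventAggregator.Object);
 viewModel.ChangeWell();
 viewModel.ChangeWell();

 wellChangedEvent.Verify(e => e.Publish(It.IsAny<Well>()), Times.Exactly(2));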
That is fairly simple, and it works!   Great, so what is the problem?

Example 2 shows how to trigger an event using Moq. Imagine you have a subscription to an event, and the Action passed to the Subscribe() method is private. Using Moq, the event needs to be published, and the system under test then needs to be examined for a change in state.

The key to example two seems to be this: a real event must be used, not a mocked event. I was able to find two separate examples for Rhino Mocks and adapt them to Moq. RhinoMock:Exp1 RhinoMock:Exp2
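Adapted to Moq, example 2 looks roughly like this (same made-up types as above): the mocked aggregator hands back a real event, so the private subscription callback actually runs when the test publishes.

 var eventAggregator = new Mock<IEventAggregator>();
 var wellChangedEvent = new WellChangedEvent(); // a real event, not a mock
 eventAggregator.Setup(ea => ea.GetEvent<WellChangedEvent>())
                .Returns(wellChangedEvent);

 // The view model subscribes with a private handler in its constructor.
 var viewModel = new WellViewModel(eventAggregator.Object);

 // Publishing on the real event invokes the private subscription callback.
 wellChangedEvent.Publish(new Well { Name = "Test Well" });

 Assert.AreEqual("Test Well", viewModel.CurrentWellName);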

As a side note, there is a lot of information and misinformation surrounding Moq and IEventAggregator.

Stack Overflow: Moq Event Aggregator is it Possible?

Stack Overflow: Mocking Prism Event Aggregator using Moq for Unit Testing

Stack Overflow: How to verify Event Subscribed using Moq

CodePlex: Mocking Event Aggregator with MOQ

Note: As pointed out in the Stack Overflow post titled How to verify Event Subscribed using Moq, the only virtual Subscribe overload takes lots of parameters, which makes for a messy-looking test. Even then it's not working for me; I need to get the callback parameters right, as the parameters in the SO example make my test error.