Saturday, November 28, 2015

Visual Studio - Clean leaves file residue

Visual Studio's Clean Project and Clean Solution don't do what I thought they did. I had never really paid much attention to these simple functions: if the solution had problems, trigger a clean, and magically stuff in the bin folder was cleaned up and issues were fixed. I was under the impression that these functions simply triggered a delete on everything in bin. How wrong I was!
All the referenced compiled files are removed; however, other files and folders can still be hanging out in the build directories. After some digging on the net I came across the following build target, which can be added to the project file.

<Target Name="DeepClean"
        AfterTargets="Clean">
    <RemoveDir Directories="$(Outdir)"/>
</Target>
This will delete everything in the build directory, which more often than not is the desired action in my projects. Because the target declares AfterTargets="Clean", it runs automatically whenever a clean is triggered, whether from Visual Studio or from the msbuild command line.

Thursday, November 26, 2015

Life with Alexa

When Amazon announced their new smart assistant, Alexa, I was somewhat skeptical but decided to bite the bullet and buy one. At the introductory price of $100, I thought it worth the money even if it turned out to be just another WiFi-enabled speaker.

At this point we've had Alexa several months. The idea of interacting with the digital world through speech sounds incredible. However, I've found it an annoyance at best and, at worst, a downright obstacle to getting anything accomplished. Often Alexa gives replies that are totally unrelated to my questions. After speaking simple commands that are answered with "I'm sorry, I didn't understand the question," I feel frustrated. When Alexa gives long-winded replies to simple things, she's hard to interact with. I find myself scolding her: "Alexa No, Alexa Stop!"

Here are two concrete examples of things that I tried to do with Alexa.

Shopping Lists:
Brilliant idea. Why not use the built-in shopping list functionality? "Alexa, add carrots, apples, ham, cheese, milk, and eggs to my shopping list." She replies, "Okay, carrots, apples, ham, cheese, milk, and eggs added to your shopping list." Great! I head to the store, pull out my smartphone, and notice that my signal inside the store isn't very good. I open the Alexa app as my phone drops from LTE down to 3G. The app is taking forever to open. I start shopping. I check my phone. The app isn't open yet. After filling my cart I walk to the checkout line and try again. The signal being better, the app opens and I check my shopping list. It's full of stuff that I added before and never cleared out. I can't quite recall what's new on the list and what isn't, but I have enough groceries to last the week. I delete the entire shopping list and think "fail," but I'll try again next time.

Yelp:
I was notified that Alexa and Yelp have teamed up. Great! I'll try that as soon as I'm home :) I get home and ask, "Alexa, can you give me the Yelp information for the New London Cafe?" She replies, "Here is your flash briefing." Thinking that maybe I slurred my speech or didn't enunciate, I slowly and calmly repeat myself. Alexa again replies, "Here is your flash briefing." My wife is in the other room snickering. Once again I try: "Alexa, can you tell me about restaurants in the area?" She replies with a listing of restaurants, listing the New London Cafe first. Hopeful that she would now know that I didn't make up the name of the cafe, I start trying all sorts of things. The only thing that works is selecting the first Yelp listing that shows up in the app... I think that's how it's designed to be used.

This is my main gripe about Alexa. If commands and questions are not phrased exactly right, Alexa won't understand. You really do need to train yourself how to use Alexa; you have to learn the key words for everything you want to do. It is not like engaging in conversation with a person; it's very unnatural.

Alexa, we hope you get smarter!
The most useful thing we do with Alexa is say "Alexa, play Pandora," and our last played Pandora channel starts streaming. In our house Alexa is essentially a glorified WiFi speaker.


Thursday, November 5, 2015

Winning a Developer Express DXperience Subscription from .NET Rocks!

The other day at work I was mildly surprised when I received an email from none other than Carl Franklin himself. The title read ".NET Rocks Fan Club Winner!", so as to leave little doubt as to the contents of the email. Clever title! :)

I've started off using my Developer Express DXperience subscription by downloading CodeRush. It's a little different from ReSharper. Most of the functionality that I used in ReSharper is in CodeRush, and in most cases things work in a similar fashion. I've been poking at the control libraries that come with the subscription. It looks like they are a very robust set of controls, covering almost every .NET project type. I'm looking forward to finding a use for some of them in my side projects!

Winning is fun!

Wednesday, October 14, 2015

Starting a .NET User Group

In February of this year I took a new job and we moved to Duluth, MN to be closer to my wife's family. One of the things I'd noted while researching the area was the presence of a well-established meetup surrounding web development. I saw they had monthly meetings and a surprisingly large number of members for the size of the local area. I also noted the absence of any formal meetup centered around the .NET technology stack. Having served for a short period on the board of the North Houston .NET User Group, I felt that the community could benefit from a similar organization.

At work I'd mentioned to a coworker the desire to start a .NET user group in the local area. I was directed toward another coworker who also wanted to formalize a group around .NET. After a few months of throwing ideas back and forth, we finally decided that the best way to get it off the ground was to set a date and have a meeting.

I tasked myself with setting up a basic website, http://www.duluthdnug.org/, and putting our information out at http://www.meetup.com/Duluth-NET-User-Group/. It feels mildly redundant to do both, but we figured having a website couldn't hurt when it comes to obtaining sponsors. I did some research and found http://www.ineta.org/, which "maintained" a listing of .NET user groups. I say "maintained" because upon submitting our user group for listing, I was notified that INETA is dissolving. Bummer! The next step was to submit information about our group to all the companies I could find online that offer user group sponsorship: JetBrains, Apress, Elastic, etc. Our workplace was gracious enough to provide a conference room for the meetup. With all that in place, our group is as official as it gets!

We are headed into our third meeting next month. Our first meeting had 6 people, our second had 10, and total group membership now stands at 24. We look forward to growing the group over the next year!

Tuesday, October 6, 2015

Raspberry Pi Adventure - Setting up Icecast & Darkice

Recently I've had the privilege of helping a friend set up a Raspberry Pi for the purpose of streaming audio. I haven't dabbled in the Linux world in a while, but I was quickly drawn in by all the fun typing at the terminal.

Linux is very flexible, but not without challenges. I had been told that DarkIce and Icecast2 would give us the audio streaming capabilities we were after. The plan was simple: fire up the Raspberry Pi, terminal into it, install darkice and icecast2, configure them, and get some sleep.

My sleep routine was destroyed by this project :) I didn't fully comprehend how difficult it would be to get into the Raspberry Pi. I started by taking the SD card out, finding the file /etc/network/interfaces, and setting a static IP for the network card. Then I ran a network cable between the Pi and my PC's network card. Using PuTTY, I logged into the Pi. Then I realized that I needed an internet connection. :p Thankfully the Pi had a USB WiFi adapter. Diving back into /etc/network/interfaces, I followed the instructions here to set up the wireless connection.

Once I was on the wireless network, I logged into the router to determine the IP of my device. With that in hand I was able to use PuTTY once again to log back into the Pi. By this point I was craving a GUI. Typing sudo apt-get install xrdp gave me a Pi-side service I could use with a remote desktop client to log in and see the Raspbian desktop.

From that point I was able to install the apps I needed to get things working. So, once more: sudo apt-get install darkice icecast2. After starting darkice, I was confronted with an error.

DarkIce: AudioSource.cpp:122: trying to open ALSA DSP device without support compiled [hw:1,0] [0]
After some searching for this error, I was drawn to the following post. DarkIce needed to be compiled with ALSA DSP support. After carefully following the article, on the second try I was able to get everything to work. Note: if you are following this guide, you still have to use common sense. For example, package names may not be exactly the same as what is in the guide.

All said, after a few false starts it turns out it wasn't hard at all to set this up on the Pi. Just very time consuming :)





Sunday, July 19, 2015

Using Firebase in an Angular Application

I've done a fair amount with Firebase lately and I've learned a thing or two. Firebase is a non-relational, real-time database, which makes it perfect for applications that need some form of data stream. Essentially you take objects and shove them into storage. The most important thing to remember when working with a non-relational store is that Denormalizing Your Data is Normal.
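
To give a feel for what shoving objects into storage looks like, here is a minimal sketch using the 2015-era Firebase web SDK; the URL and the object's fields are placeholders, not my actual data.

// The URL is a placeholder for your Firebase instance.
var ref = new Firebase("https://your-app.firebaseio.com/games");
// push() stores the object under a new, chronologically ordered key.
ref.push({ name: "Chess", createdOn: Date.now() });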

In my application I also used Angular, so I pulled in the AngularFire library, which provides some great helpers. In my app I abstracted all the Firebase calls out into a service that is injected into my Angular controllers. Firebase is designed for high-IO situations, so in my application Firebase is really overkill; the app could have used a relational store... That said, Firebase has worked very well, and there are only a few concepts you need to grasp to begin using it in an application. One of the big advantages has been that a simple email-authentication-based security model is built into the Firebase API.

Firebase uses the concept of $priority, which is similar to the concept of an indexed field in SQL. A $priority can be a string or a number, and can then be used to query data later on. In my application, in most cases I wanted to retrieve data based on the logged-in user. I used the email address as the priority, and then was able to filter data by user. The following shows a query that implies orderByPriority.

var gameList = $firebaseArray(ref.startAt(email).endAt(email));
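
For context, this is roughly how a record's priority can be set with AngularFire; consider it a sketch under assumptions (the object fields and the email variable are hypothetical), not my exact code.

var games = $firebaseArray(ref);
// $add returns a promise resolving to the new record's Firebase reference.
games.$add({ name: "New game", createdOn: Date.now() }).then(function (newRef) {
    var record = games.$getRecord(newRef.key());
    record.$priority = email;  // the logged-in user's email
    games.$save(record);       // persists the priority change
});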

This shows that Firebase has a notion of queries, which is great. However, it pays to note that while there are several ways to order and limit the data returned, you can't combine logic to filter down subsets of data. The queries section of the documentation does a pretty thorough job of explaining. After retrieving your initial results, further filtering must be done client side. For example, below is a function that returns games in a date range.

function getGameListByCreatedOnDate(startDate, endDate){
    return $firebaseArray(ref.orderByChild("createdOn").startAt(startDate).endAt(endDate));
}

Because I've ordered the data by the child field 'createdOn', I can't apply an additional orderByPriority method that limits the results to just the logged-in user's games. In this case I need to perform that additional operation after the results are returned.
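
A minimal sketch of that extra client-side pass, assuming a hypothetical createdBy field holding the owning user's email on each game object:

function getMyGamesByCreatedOnDate(startDate, endDate, email) {
    var games = getGameListByCreatedOnDate(startDate, endDate);
    // $loaded resolves once the initial data has been downloaded from the server.
    return games.$loaded().then(function (loaded) {
        return loaded.filter(function (game) {
            return game.createdBy === email;  // hypothetical field
        });
    });
}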



Friday, June 19, 2015

Reclusive Scrum Poker - & a Faster Planning Meeting

At work we follow the agile methodology and typically play a weekly scrum poker game to estimate the relative cost of upcoming development work. The project manager was using a free online scrum poker game. Really, the game worked well except for two issues. The first was that we use JIRA for ticket tracking, so each story being estimated required each player to copy-paste the link into a new browser tab. Then the tab flipping started, leaving confused players out of sync with the story currently being estimated. The other issue was the time spent waiting for everyone in the room to finish reading the current story's acceptance criteria. The one guy in the room who was in the middle of fixing a bug after estimating the last story could really burn up time while everyone else waited for him to catch up. Ultimately, it felt like our planning meetings were becoming a big time burner; they were easily taking two hours.

An engineer on our team suggested we play the stories on our own time and then just discuss the pointing in the planning meeting. This way we wouldn't waste time waiting on each other. A brilliant idea, but it would require new software. That, however, was a problem we could solve. I decided it'd be a great way to learn some new technologies, so I volunteered to take it on as a side project. Our manager created a wish list of functionality for the new game, and I worked from that prioritized list. I'd also like to mention one of my coworkers at the time, Mr. Jaques, who volunteered to help with the UI layout and design.

Drawing inspiration from http://firepoker.io/#/, I started working on the project in my spare time. Over the next couple of months I developed an Angular-based scrum game that allows players to play each story in isolation, yet facilitates discussion in our planning meeting. It integrates with JIRA, allowing auto-creation of games based on JQL queries. It pulls JIRA stories to be estimated into an iframe and saves estimated story points back to JIRA. Players are organized into teams, and anyone can create teams and games.

- Technologies -
Firebase seemed like it'd be the simplest, quickest route to persist data. It has worked very well; I hope to do a post on lessons learned soon. The client-side code is all Angular, and Twitter Bootstrap was used for the UI. Integration with the JIRA API is accomplished by wrapping calls in a Node.js service built with Express. The client-side controllers are all tested using Mocha & Sinon-Chai.
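
As a flavor of the integration, here is a thin sketch of what one endpoint of such a wrapper could look like using Express and the request library; the route, JIRA host, and credential handling are placeholders, not the actual internal service.

var express = require('express');
var request = require('request');
var app = express();

// Proxy a JQL search so the Angular client never talks to JIRA directly.
app.get('/api/stories', function (req, res) {
    request({
        url: 'https://jira.example.com/rest/api/2/search',  // placeholder host
        qs: { jql: req.query.jql },                          // JQL passed from the client
        auth: { user: process.env.JIRA_USER, pass: process.env.JIRA_PASS },
        json: true
    }, function (err, response, body) {
        if (err) { return res.status(500).json({ error: err.message }); }
        res.json(body.issues);
    });
});

app.listen(3000);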

- Outcome -
Now our typical planning meetings last an hour and feel much more focused. All the story pointing is being done on our own time, so for now it's a success. For now it's been decided to keep the code internal, but at some point I would like to open source our version of the game.


Friday, May 22, 2015

Configuring a Dev Environment with PowerShell & Chocolatey

At work this month I was tasked with developing scripts to bootstrap Windows machines into our development environment. Many years back I was in charge of a few small networks, and I'd used PowerShell (PS) and Group Policy to control virtually every aspect of our machine setup and deployment. Drawing on that experience, I knew that I could use PS to set up and configure developer machines.

I was aware of Boxen for OS X, so I went searching and found Boxstarter, which is similar. While cool and all, I decided to focus on developing the scripts themselves rather than on how they'd be delivered to the machines.

To start in on the process, I first determined what needed to be installed and configured, and which steps would have dependencies.

1. Enable Windows Features - IIS 
2. Configure IIS 
3. Install Scale Out State Server 
4. Install URL-Rewrite 2 
5. Copy files from server to local drive 
6. Install developer tools (using Chocolatey) 
7. Clone Repositories

After determining the requirements, I started working on the scripting and that is when I learned all that I did not know.

- PowerShell Tools for Visual Studio -
Upon starting to script up the various steps, I realized that the PS ISE that comes with Windows lacks the niceties of other editors. I tried Sublime, Atom, and a few custom PS editors before I found PowerShell Tools for Visual Studio. This VS extension allows you to edit, debug, and even test (with Pester) your PS code. While not entirely bug free, I have to say this VS tool made the development experience much more pleasant.

VS screenshot showing Pester tests running

- Pester Tester -
The Pester framework brings mocking and BDD-style test assertions to your PowerShell scripts. The documentation is good, and the framework is simple enough that it doesn't take much time to learn. One interesting thing to note is that you can run your Pester tests in the VS test runner (using PS Tools for VS). At least, tests in .ps1 files work; tests for .psm1 files (PS modules) always showed as not run. You can get around this by simply running the PS command Invoke-Pester in the root directory of your project to run all the tests in the PS test runner.

While Pester is great because it helps lock down your code and protect against regression bugs, you still need to test your scripts by running them. I used Windows Azure virtual machines for this. I spooled up Windows 7, 8.1, & 10 machines and ran the script against all of them. That's how I discovered that some of the configuration code I was using for IIS was failing on Windows 7 but working on 8.1 & 10. I was able to tweak the code, spool up more virtual machines, run the script, and check configurations. Tedious as it is, I'm not sure I know of a way to get around this step.

- Chocolatey -
Chocolatey in concept is pretty awesome. It's a package manager for Windows, based on PS and NuGet. I made use of it in my project by first having Chocolatey install itself and then install all the apps that a dev typically needs. The Chocolatey gallery has most common applications, and the installers accept flags (e.g. choco install git -y) allowing for install customization.

Wednesday, April 29, 2015

Using npm to download bower packages

At work we are using npm packages along with require.js and the webpack module bundler in our gulp build chain. Overall this works quite nicely; however, as many folks do, we were also using Bower as the front-end package manager. While there is nothing wrong with this, Bower is just one more piece of the JavaScript jigsaw puzzle that was adding noise to our system.

My coworker had the idea to drop Bower from the chain and simply use npm in its place. The change is very simple. In our package.json file he added entries inside the dependencies object for each Bower package we depended on, including its repository path.

  "dependencies": {
    "Class.js": "git://github.com/arinet/Class.js", <-- bower package
    "angular": "=1.3.14", <-- npm package
    "angular-cookie": "git://github.com/ivpusic/angular-cookie"  <-- bower package
   }

This simple change has allowed us to clean up our build tasks and make the process just a little simpler.

Friday, April 17, 2015

Minneapolis - Tech Conference - MinneBar

Last weekend I attended the MinneBar (un)conference. I've been to other (un)conferences in the past, and they seemed quite diminutive in scale compared with MinneBar. I don't have an exact figure, but the head count was quite high. Overall it was a very good experience. As always, I felt the best thing I personally got out of the conference was a boost of motivation. Software development can be tedious and draining work; meeting with a group of like-minded, passionate people looking to learn something new is very encouraging.

If you're in the area, check out the MinneBar website and look into attending next year...

Monday, April 13, 2015

Easy Angular Performance Boost

We've been working on profiling and optimizing our Angular app at work. During that process I found an easy performance tweak that slipped into Angular somewhere around version 1.3.x. It's simply a setting provided by the $compileProvider.
myApp.config(['$compileProvider', function ($compileProvider) {
  $compileProvider.debugInfoEnabled(false);
}]);
Surprisingly, this provided a very visible performance increase in our application. It works by stopping Angular from attaching debug data and the ng-scope / ng-isolate-scope CSS classes to elements as it runs, meaning less work for the browser.
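
One handy escape hatch documented alongside this setting: tools like Protractor and Batarang depend on the debug info, so Angular provides a way to temporarily bring it back from the browser console without changing the config.

// Reloads the page with debug info re-enabled for the current session.
angular.reloadWithDebugInfo();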

If you're running Angular 1.3.x or later and you haven't set this already, check it out!

Wednesday, March 18, 2015

Jumping into JavaScript

In passing on .NET Rocks! and in blog posts, I've noticed folks whine and complain about JavaScript. I'm now starting to understand a little of what that's about :) Now that I'm using JavaScript as my daily language, I too have stumbled on several of the language features that make it mildly difficult for a developer coming from another language to truly appreciate it. However, I feel that to use JS effectively you have to embrace it, flaws and all, and learn to have fun with the language. I've read Douglas Crockford's JavaScript: The Good Parts, but honestly I'll probably have to reread it several times before it all sinks in.

One of the things that I've noted about JavaScript is the huge number of libraries available to work with. This is both a blessing and a curse. On .NET Rocks! they sometimes talk about a tribe of JS frameworks: several frameworks that work well together. I like the concept of a witch's brew, with several JS frameworks stirred together and a few JS incantations thrown in for good measure. At first it all seems a little magical.

To illustrate my point, below is a table showing a few of the changes I've made in switching over to JS in my day job.

                      .NET - C#            JS
Testing               MS-Test & Moq        Karma, Mocha, Sinon, Sinon-Chai, Chai-as-Promised, & CoffeeScript
IDE                   Visual Studio        WebStorm, Atom, & Sublime - pick your poison, I use all three for different things
Build                 MS-Build             Gulp & Webpack
Package management    NuGet & Chocolatey   Bower & npm (Node Package Manager)
Frameworks            WPF                  Angular
Declarative UI code   XAML                 HTML5 & CSS

Many of the core concepts of front-end development are shared: MVC, MVVM, MV*, DI. Good stuff that doesn't need to be re-learned. The tooling and frameworks, though, are another matter. The upside of the current JS world is the enormous amount of flexibility; the downside is the complexity and the learning curve to become productive. Honestly, .NET keeps programmers fairly constrained to a world of standard libraries, strongly typed code, and compile-time error checks. Stepping outside that world can be a little scary.

Microsoft has embraced the current JS development landscape and is bringing more JS tooling into Visual Studio 2015. I'm hopeful that the new tooling in VS 2015 will make it the development platform of choice over Sublime, Atom, or WebStorm. Microsoft is really good at reducing friction in the development experience; here's hoping they get it right for JavaScript!

Sunday, March 15, 2015

New Job

At the beginning of February I started working at ARI in Duluth, MN. Due to the job change I haven't had much time to blog. I hope to make up for that in the near future. My time has instead been dedicated to learning lots of new things as I spool up at work.

Overall, I enjoyed my previous position, where I worked mainly with .NET doing front-end work with C# and XAML. I was part of a very small team, creating lots of small custom desktop apps. However, to grow we often need to leave our comfort zone. In my new position I'm working as part of a mid-sized development team on a very large web application. Parts of it are written in ASP.NET using Web Forms, while new portions of the application are written using AngularJS. I was given a choice of what I wanted to focus on when I started, and I jumped at the chance to work on front-end web development.

After a little over a month on the job, I'm enjoying the challenge and learning lots along the way. In the next few months I hope to share some of the things I've noticed in switching from full-time .NET development to JavaScript.

Saturday, March 14, 2015

f.lux

There are more nights than I care to admit where I find myself burning the midnight oil... eyes watering from staring at my computer screen and endless amounts of code. f.lux is a handy utility that changes the color temperature of the screen with sunset and sunrise. It took a while to get used to the screen dimming down, but after a few weeks of use it was unnoticeable. That is, until I switched to a different computer that didn't have f.lux installed and noticed that my eyes were hurting in the evenings.

Check it out:  https://justgetflux.com/


Saturday, January 31, 2015

Is Test Driven Development Dead?

While catching up on some blog reading over the holidays, I ran across this interesting conversation between Kent Beck, Martin Fowler & David Heinemeier Hansson regarding TDD. I soon found myself watching several hours of Google Hangouts, soaking in the wisdom.

After watching it, all I have to say is that I still see TDD as a very valuable tool. I try to use it when I can, but there are times I can't get it to fit certain situations. I don't agree with the concept of test-induced design damage. I'd have to say that every time I've written a test for a portion of code, it has caused me to think about the design, and most of the time I learn something I didn't know before.

Kent posted an interesting response on Facebook. Worth the read :)

Saturday, January 24, 2015

Using INotifyDataErrorInfo & Data Annotations [Required] Attribute in WPF on Objects

INotifyDataErrorInfo in combination with Data Annotations works great for error validation in WPF when using .NET 4.5. A typical example of data annotations shows them being used on properties with primitive types such as integer or string. When binding in WPF, one would bind a control such as a text box directly to the primitive property. Error validation would occur on that binding.

     [Required]  
     public string Name  
     {  
       get { return _name; }  
       set { SetProperty(ref _name, value); }  
     }  

      <TextBlock Text="{Binding Name}"/>  

Recently, I was working with some code that had a [Required] attribute on an object.  In the view-model the binding was to a nested property on that object. 

     [Required]  
     public MaterialModel LinerMaterial  
     {  
       get { return _linerMaterial; }  
       set { SetProperty(ref _linerMaterial, value); }  
     }  

    <TextBlock Text="{Binding LinerMaterial.Name}"/>  

Validation was not working! Validation attributes only work directly against the object to which they are applied. I could have created a separate primitive property, "MaterialName", and set it when a material was selected in the view model. However, I decided to go with a slightly different solution. I wrapped my text block in a stack panel, setting the data context of the stack panel to the MaterialModel object.


 <TextBlock HorizontalAlignment="Left" Text="Liner Material:"/>  
 <StackPanel DataContext="{Binding LinerMaterial}">  
      <TextBlock Text="{Binding .Name}"/>  
 </StackPanel>  

Doing this causes validation to be raised on the stack panel instead of the TextBlock.
Error validation of the object showing on the stack panel
This could be considered a bit of a hack, but on the view model I was working on, it saved me from having to add three extra primitive properties and additional logic. Also, I feel it conveys the intent better: the view model doesn't require a material name, it requires a material.

Friday, January 16, 2015

Entity Framework - Configurable Database Connection String in ClickOnce Applications

On a recent project, I used Web API 2 and OWIN self-hosting along with Entity Framework and LocalDB to allow our web service to run on a local machine. When the application was conceived, it was designed to run with LocalDB. As the application evolved, we were asked whether it could be connected to a shared database.

The architecture looks something like this: 

WPF Shell
  >> OWIN Self-Host Web API 2
       >> Web API 2 Controller Project
            >> Entity Framework 6 DAL

Entity Framework Code First has several methods of getting a connection string for the DbContext to use.
  • Connection by convention
    • Uses the DbContext name to create the database
  • Connection by convention with specified database name
    • Creates the database with the name given in the DbContext constructor
  • Connection with full connection string
    • The whole connection string can be passed into the DbContext constructor
  • Connection string in app.config/web.config 
    • If the connection string name matches the DbContext name, or a string passed to the DbContext constructor, it will be used.
The technical challenge is to get the connection string from the WPF application, where it's entered by the user, to the Web API controller. The app.config of the compiled WPF shell contains the connection string that will be used by EF. If this connection string is modified, EF will use the new value as it initializes when the application is restarted.

While researching different ways of approaching this on the web, I noticed that several articles referenced the fact that an Entity Framework connection string contains metadata, and used the EntityConnectionStringBuilder class. With Code First this is unnecessary. Also, many examples online use the SqlConnectionStringBuilder to piece together a connection string from several user inputs. I opted to expose the entire connection string as-is; if someone wants to change it, they'd better know what they are doing :) I do make use of SqlConnectionStringBuilder, assigning its ConnectionString property so it throws errors if the connection string contains unknown sections, etc. This provides a basic level of validation for the connection string.

The user interface is mildly unattractive and simple :)



If an empty database is created beforehand, the Test Connection button will provide validation that the connection worked. If a valid connection string is supplied for a database that does not yet exist, the test connection functionality isn't so useful. However, in either case EF will create the correct schema, or the database and schema, upon application restart.

This method has a few drawbacks. Every time the application is updated via the ClickOnce installer, the user will have to reset the connection string, because the connection string is an application setting, not a user setting. Another small annoyance is that the application must be restarted for the new connection string to be read and piped over to the OWIN self-host project. I would love to see a better way of doing this; suggestions are welcome.

Saturday, January 3, 2015

MVVM Light RelayCommand - Broken & Fixed

If you've done much in the MVVM space, you're probably familiar with the MVVM Light Toolkit created by Laurent Bugnion. I've done a few projects recently using MVVM Light, and I noticed that the RelayCommand in WPF was not behaving as it normally did: it was not magically handling the CanExecuteChanged event to update the status of the RelayCommand. I decided to take the time to research the problem. It didn't take long to find the following, posted by Laurent on CodePlex for work item 7659:

"WPF is the only XAML framework that uses the CommandManager to automagically raise the CanExecuteChanged event on ICommands. I never liked that approach, because of the "magic" part, but this is a "feature" of WPF and of course I have to support it. No question here.

In V5, I moved to portable class library for all the newest versions of the XAML frameworks, including WPF4.5. Unfortunately, there is no CommandManager in PCL, and I have to admit that I didn't realize that at first sight. So of course now the automagical part doesn't work anymore. Again, so sorry about that."


This seems to affect V5 - V5.01, with the fix coming in V5.02. Not long after, he posted the fix, which is simply a change of namespace for the RelayCommand when using it with WPF:

"Change the namespace of the RelayCommand class from GalaSoft.MvvmLight.Command to GalaSoft.MvvmLight.CommandWpf."

This does indeed fix the issue :)