Sunday, August 26, 2018

A .Net Core Proxy for ArcGIS Server Resources

Allow me to first set the stage... For the past year and a half, I've been working on a large GIS-centered web application. ESRI technology powers our GIS stack, with ArcGIS Server hosting the service endpoints. The application is an ASP.Net MVC application that houses an Angular SPA. The MVC application hosts the ESRI resource proxy, enabling secure communication with the server.

Opening the code for the .Net version of the proxy on GitHub, several things become clear. The code is intimidating at 1250 lines. It's packaged in a .ashx file, with several methods running beyond a hundred lines of code. There are methods to serialize objects to JSON, as well as methods to retrieve values from JSON. Below is an example.
    private string getJsonValue(string text, string key) {
        int i = text.IndexOf(key);
        String value = "";
        if (i > -1) {
            value = text.Substring(text.IndexOf(':', i) + 1).Trim();

            value = value.Length > 0 && value[0] == '"' ?
                // Get the rest of a quoted string
                value.Substring(1, Math.Max(0, value.IndexOf('"', 1) - 1)) :
                // Get a string up to the closest comma, bracket, or brace
                // (indexOf_HighFlag is another helper in the same file)
                value.Substring(0, Math.Min(Math.Min(
                    indexOf_HighFlag(value, ","),
                    indexOf_HighFlag(value, "]")),
                    indexOf_HighFlag(value, "}")));
        }
        return value.Replace("\\/", "/");
    }

While there is no doubt that this is a marvel of ingenuity and genius, its glory days were likely over 10 years ago. You don't have to read far before realizing that some very basic refactoring would do this proxy a world of good.

When I realized I'd need to use the proxy, I pulled the code off GitHub, installed it, configured it, and forgot about it. For a day or two... We had some issues in our environment that the proxy couldn't handle. In OAuth2 flows, the proxy assumed the token endpoint would allow anonymous traffic, so when ArcGIS Server used IWA to secure the token endpoint, the proxy did not pass credentials. That's a generally reasonable assumption, but it wasn't true in our environment. I forked the code and set about fixing it.

In the process I discovered several other issues, one of which was the code calling an authorization endpoint with incorrect parameters. Unit tests are not present in the code base, and reading the issues section on GitHub makes one lose confidence in all 1250 lines of code.

Now that I've presented a picture of the code quality, let's get into the more bizarre part. ESRI licenses its products for large sums of money, and if you want to secure your data while still exposing it to a client application, you're likely going to need a proxy.

ESRI has taken the position on GitHub that the proxy is a community-supported effort. Furthermore, they have stated that they will not be creating a proxy for .Net Core. I guess there is no point to customer support when you have a monopoly.

The .Net Core proxy includes a built-in memory cache. If you are running behind a load balancer, fork the project and replace the cache provider, or submit a PR to make the cache provider configurable. :) The code base contains unit tests, and uses the new HttpClientFactory in .Net Core.

If you are moving your GIS application to .Net Core and are in need of a proxy, try the ArcGIS Server resource proxy for .Net Core.

Thursday, August 9, 2018

The Auth0 Access Token Paradox


Identity management is hard. Auth0 makes it much easier and helps you secure your applications and services. One recurring scenario is calling APIs while logged in as a user and needing some information about that user in the API being called.

Auth0 has put together a nice article about why you should use access tokens to secure APIs.
No arguing with that…

They provide some guidance around the fact that an access token doesn’t contain user info. The following is a quote.

“Note that the token does not contain any information about the user itself besides their ID (sub claim), it only contains authorization information about which actions the application is allowed to perform at the API (scope claim).
In many cases, you might find it useful to retrieve additional user information at the API, so the token is also valid for call the /userinfo API, which returns the user's profile information.”

That seems reasonable enough: when I need user info, call the Auth0 /userinfo endpoint and the problem is solved. After implementing this in a development environment, it probably seems slick (aside from the extra network traffic and the general grossness of it all). However, after rolling it out to a UAT environment, you might find your testers complaining of transient failures in calls that require user info.
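For reference, a minimal sketch of what that /userinfo call looks like. The tenant domain and the helper name are placeholders, and building the request is split out from sending it so the shape of the call is easy to see.

```javascript
// Sketch of calling Auth0's /userinfo endpoint with a bearer access
// token. The domain is a placeholder for your Auth0 tenant.
function buildUserInfoRequest(domain, accessToken) {
  return {
    url: `https://${domain}/userinfo`,
    options: {
      method: "GET",
      headers: { Authorization: `Bearer ${accessToken}` },
    },
  };
}

// Hypothetical usage from an API that has the user's access token:
// const { url, options } = buildUserInfoRequest("your-tenant.auth0.com", token);
// const profile = await fetch(url, options).then(r => r.json());
```

Every one of those fetches counts against the endpoint's rate limit, which is what bites you below.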

A scenario much like the one described above played out at work recently. After first consulting the debugger and realizing that requests were being rate limited, I revisited the Auth0 API documentation.

What I found is more than a little confusing. It looks like the legacy user info endpoint supported a reasonable hit rate, while the new one supports a fraction of that. The difference is a firehose vs. a leaky faucet.

To summarize: Use access tokens, they are cool. User info, that’s valuable stuff, if you want it you should be willing to wait for it.

This makes me wonder what people are doing to engineer around this. A queue perhaps? Somewhere there is a site with a banner that reads “Please wait! We use Auth0 and your user info is being requested. You’re 150th in line, your wait time is roughly 15 or 20 minutes.”

There is a way around this. However, it only seems appropriate in situations where you don't want to wait in line.

Auth0 provides the ability to modify what data is included in the access token through node.js middleware that runs on their platform. The process is thoroughly documented in the Auth0 API documentation, so I'll link to it rather than boring you with the details.

I'm not a huge fan of this solution, as every time you need another chunk of user info you end up sticking it in the access token. Access tokens shouldn't be gigantic, and this approach doesn't scale well. In our scenario we need just the user's email address, so we can get away with it.
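As a sketch of what that middleware looks like: an Auth0 rule receives the user and context and can write custom claims onto the access token. Auth0 requires custom claims to be namespaced with a URL; the namespace and function name below are placeholders, not the exact code we shipped.

```javascript
// Sketch of an Auth0 rule that copies the user's email into the
// access token as a namespaced custom claim. Runs on Auth0's servers
// during the token issuance flow.
function addEmailToAccessToken(user, context, callback) {
  const namespace = "https://example.com/";
  context.accessToken[namespace + "email"] = user.email;
  return callback(null, user, context);
}
```

The API can then read the claim straight off the validated token instead of calling /userinfo at all.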

I'd love to hear from the folks at Auth0 regarding this...

Update 09/10/2018: I've heard back from Auth0. The documentation is missing a few key pieces of information. The user info endpoint allows 5 requests per minute per user id. Also, the intent is that callers of the API cache the results. Makes sense, just not well set forth in the documentation.

Sunday, July 29, 2018

Using Swashbuckle or Swagger-UI with Auth0

Auth0 is a great solution for authentication. Swagger-UI is great for kicking the tires on your API. If you're using .Net, you can pull in Swashbuckle, which is a .Net wrapper of Swagger. The sad part is that currently Swagger-UI 3.17.6 doesn't play well with Auth0. After spending more than a few hours trying to configure OAuth2 via Swashbuckle, I realized that the underlying code doesn't support passing an audience parameter.

Our API needs user information. To test it from Swagger-UI, we needed to be able to execute an Implicit Grant flow and then use the access token from that flow in subsequent calls via the authorization header.

Given the currently somewhat crippled capability of Swagger-UI, and the need to still get things done, I settled on a pragmatic but not all that clever solution.

I decided to override the version of Swagger-UI that comes packaged with Swashbuckle, and in doing so add in a little code to accomplish what I wanted. In the image below you can see an additional button in the UI, Get Auth Token. This button hits the API endpoint, which redirects to Auth0. The user logs in and is redirected back to the Swagger-UI endpoint. The token is in the URL; it is extracted and shown in a prompt for the user to copy to the clipboard. The user must then hit the Authorize button and paste the token from the clipboard into the dialog, at which point they are logged in.
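The extraction step is small. A sketch of it, assuming the usual implicit-flow redirect where the token arrives in the URL's hash fragment (the function name is mine, not from the Swagger-UI source):

```javascript
// Pulls the access token out of the hash fragment Auth0 appends when
// it redirects back to the Swagger-UI page, e.g.
// "#access_token=abc123&token_type=Bearer&expires_in=7200".
function extractAccessToken(hash) {
  const params = new URLSearchParams(hash.replace(/^#/, ""));
  return params.get("access_token"); // null if no token is present
}

// In the modified index page, something like:
// const token = extractAccessToken(window.location.hash);
// if (token) window.prompt("Copy your token:", token);
```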

While this may sound terrible, it's a good deal easier than logging into another application and pulling an access token out using Fiddler...

The secret to this working is that Swashbuckle allows you to specify a new index file. Download the Swagger-UI source from GitHub and keep the following files. Set the index file's build action to Embedded Resource in Visual Studio.

Replace the body of the code in index with the code body of the index file from the gist above. If you're using Swashbuckle, override the default index with your modified file by setting the IndexStream in the config.

c.IndexStream = () => GetType().GetTypeInfo().Assembly.GetManifestResourceStream("Project.API.Swagger.index.html");

If you find yourself using Swagger and Auth0, you might find yourself doing something similar. :)

Saturday, January 27, 2018

URL Redirect Tools for Development

Many developers are familiar with Fiddler from Telerik as a tool for creating browser redirects. However, if you need a lighter-weight solution, a browser plugin may be a better fit. For the last few years I've hardly opened Fiddler, as simple browser plugins have been meeting my needs.

The first redirect browser plugin I used was Trumpet in the Chrome browser. You can still find Trumpet in the Chrome store.

Trumpet works great; however, the UI is not super polished, and some aspects of it are not intuitive. After a minute or two of tweaking the settings you'll get the desired redirect results. Hopefully...

Firefox 57 is a great browser, and I've started to spend more time in its dev tools. That being the case, I found myself in need of URL redirection. That's when I found REDIRECTOR. Same concept, but with a better UI; it feels more polished and friendly. And it's available in both Chrome and Firefox.

When creating a new redirect, I noticed and appreciated the validation that appears in REDIRECTOR. It helps provide clarity, showing that your pattern match does indeed work before you hit save.

It's the little things... If you don't need all the power of Fiddler, give a redirect extension a try.

Monday, January 15, 2018

Messaging Pattern - With RxJS

While doing WPF development, one of the go-to patterns is the messenger, or pub/sub. The power of this pattern is that individual components in a system can remain decoupled yet communicate with each other. Reactive Extensions for JavaScript (RxJS) make it easy to create a messenger for a modern client-side application framework such as Angular.

The code flow is simple to describe. Two components use a common messenger service. In one component, a subscriber is registered against the service to listen for messages. In the other component the service is used to send a message. Both components rely on the service, but do not have any knowledge of each other.

Here is an example that can be adapted to your needs.

There are a few things to be aware of. If you do not use a message class, you may fall into the trap of trying to message data received from an AJAX call directly. Your code will then fail the instanceof check, as you haven't actually created an object instance. This is one reason to follow the pattern of using a separate class for each type of message, instantiating the message and attaching the data.

For some applications it may work better to break the types of messages into categories. Each category can then have a method for sending and a Subject object to subscribe on. This reduces the amount of messaging noise on each channel.
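The flow described above can be sketched as follows. A tiny stand-in Subject is used so the example runs without dependencies; with RxJS you would `import { Subject } from 'rxjs'` and use it the same way. The class and message names are illustrative, not from the original gist.

```javascript
// Minimal stand-in for RxJS's Subject, so this sketch is dependency-free.
class Subject {
  constructor() { this.observers = []; }
  subscribe(fn) {
    this.observers.push(fn);
    return {
      unsubscribe: () => { this.observers = this.observers.filter(o => o !== fn); },
    };
  }
  next(message) { this.observers.forEach(fn => fn(message)); }
}

// One class per message type, so the payload is a real instance and
// subscribers can filter with instanceof.
class UserSelectedMessage {
  constructor(user) { this.user = user; }
}

// The shared messenger service both components depend on.
class MessengerService {
  constructor() { this.channel = new Subject(); }
  send(message) { this.channel.next(message); }
  subscribe(handler) { return this.channel.subscribe(handler); }
}

// Component A subscribes; Component B sends. Neither knows the other.
const messenger = new MessengerService();
const received = [];
messenger.subscribe(msg => {
  if (msg instanceof UserSelectedMessage) received.push(msg.user);
});
messenger.send(new UserSelectedMessage({ name: "Ada" }));
```

In Angular you'd register MessengerService as a singleton provider and inject it into both components.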

Saturday, November 18, 2017

Keep a Changelog

Over the years, developers have utilized several methods to keep users updated on the changes arriving with each application release. One approach is to email users upon each update, which is intrusive and annoying. Another is to dump the git commits into a changelog; git commits aren't for user consumption :) The best approach is to make the changelog an integral part of your application: the single source of truth that's ever present within the application.

Most developers are familiar with Markdown, as most of our repositories contain a README file. Markdown is great at using minimal syntax to describe document layout. This makes it an ideal format for changelogs as well.

In a recent project we decided to follow the guiding principles from Keep a Changelog. There are a variety of libraries on NuGet that can convert Markdown to HTML; I utilized CommonMark. With a few lines of code I added an easy-to-follow changelog to my project.

Here is the core code. Note: the changelog.txt should be a .md file, but GitHub converts markdown to HTML, making it render in this post as HTML. In your code, this file would be a .md file, not .txt.

Saturday, November 4, 2017

Entity Framework Core - InMemory Db Names in Tests

The new in-memory database provider that's available for EF Core is awesome. If you aren't familiar with it, it lives in the "Microsoft.EntityFrameworkCore.InMemory" NuGet package. One of the quirks of using this provider is that it requires a unique database name for each test; this ensures that you don't have state bleed over between tests. The creation of the in-memory database usually looks something like this.
 var options = new DbContextOptionsBuilder()
                .UseInMemoryDatabase(databaseName: "Unique_db_name_here")
                .Options;
At first, when writing tests, I was utilizing a magic string as in the example above. I came back to some unit tests where I'd used unique names such as "test1", "test2", etc. After deleting a test and seeing my magic strings get out of sequence, it became apparent that this was going to be a maintainability nightmare. Hmmm, did I use "test6" yet?

Unit test methods already have unique names, so why not use the test name? With magic strings this approach would be terrible, but with the C# 6.0 nameof operator it works well, e.g. .UseInMemoryDatabase(databaseName: nameof(Can_add_entity)) inside a test method named Can_add_entity.

This keeps the in memory names unique and keeps refactoring simple.