Injecting all the WebAPI Things

Dependency Injection is all the rage and has become a must for just about any application, particularly if you are using Entity Framework, as it allows for Request-level DbContext instance management. Let me explain.

Think of the DbContext as a pipe into the database. I often see code like this in applications I review for West Monroe:

using (var context = new SomeContext())
{
    // some data access code here
}

This is bad because each time you do this you create another pipe into your database. If your code is complex, it may contain multiple service calls, each of which opens a new context. This can quickly overwhelm the database by draining the connection pool and creating performance problems.

The reason Dependency Injection helps with this is that it allows us to inject instances at varying levels of scope, and in the case of the web the ideal scope is the Request.

The present gold standard for DI (Dependency Injection) is the Autofac library. It is easily downloadable from NuGet and integrates with just about everything, including Web API; it even comes with prebuilt classes to perform the injection.

You will need the NuGet package: Autofac.WebApi2

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

This is from my Global.asax, in Application_Start. A few things to point out:

  1. RegisterApiControllers scans the executing assembly (our Api project) and performs convention-based registration (basically it maps anything ending with Controller).
  2. Web API operates off the DependencyResolver pattern. Rather than overriding the controller factory (as we do in MVC), this lets us more properly contain our type resolution logic. In this case AutofacWebApiDependencyResolver is provided by the integration package.
  3. My general approach to DI is to use an Autofac Module for each project. These modules specify the injection mappings for types defined in those assemblies. It just keeps things separated in a very nice way.
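As an illustration, one of those modules might look like the following sketch; the registrations here are assumptions for the sake of example, not the actual project code:

```csharp
using Autofac;

// Hypothetical module for the Services project; the concrete
// mappings below are illustrative only.
public class ServiceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        // Map each service interface to its implementation.
        builder.RegisterType<UserService>().As<IUserService>();
        builder.RegisterType<AuthorizationService>().As<IAuthorizationService>();

        // Concrete types with no interface can be registered directly.
        builder.RegisterType<PasswordService>().AsSelf();
    }
}
```

Because each project owns its module, the web project's Global.asax only needs to call RegisterModule once per assembly and never has to know the individual mappings.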

Once you have this in place you can receive injections through the constructor of each controller. Here is an example of my UserController constructor.

public UserController(IUserService userService, PasswordService passwordService, IAuthorizationService authorizationService)
{
    UserService = userService;
    PasswordService = passwordService;
    AuthorizationService = authorizationService;
}

In the past, I would normally use Property Injection to fulfill my dependencies. Recently, however, I have shifted to using the constructor, as it makes unit testing easier.

This code should just work, given your dependency mappings and the code above in the Global.asax.
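The unit-testing benefit is easy to see: with constructor injection a test can hand the controller stand-ins directly, no container required. A rough sketch, under assumed interfaces (the fakes below do not exist in the project):

```csharp
// Hypothetical fakes implementing the assumed service interfaces;
// a mocking library would serve the same purpose.
public class FakeUserService : IUserService { /* stub only the members the test touches */ }
public class FakeAuthorizationService : IAuthorizationService { /* stubbed members */ }

public class UserControllerTests
{
    public void GetUser_ReturnsExpectedUser()
    {
        // Construct directly; no DI container needed in tests.
        var controller = new UserController(
            new FakeUserService(),
            new PasswordService(),
            new FakeAuthorizationService());

        // ... act on the controller and assert on the result ...
    }
}
```

With Property Injection the test would have to remember to set each property after construction; the constructor makes missing dependencies a compile error instead.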

One of the points of emphasis I have made here is that DI can help with scoping our DbContext to the request level. I do this in my ModelsModule, since the context is defined in my Models project and not the web project. Here is the code:

public class ModelsModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        builder.RegisterType<GiftListContext>()
            .As<IContext>()
            .InstancePerRequest();
    }
}

That is it. The key here is the built-in method InstancePerRequest. I also chose to use IContext to define the public pieces of my DbContext, but you can omit this and it will still work. I cannot express how nice it is to have this type of flexibility and to let a reliable third-party mechanism control this part of the process.
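For reference, the IContext abstraction might be shaped something like this; the member list is an assumption, since only the pattern matters:

```csharp
using System.Data.Entity;

// Hypothetical interface exposing only the public surface of the
// DbContext. Consumers depend on this abstraction rather than the
// concrete context, which keeps them testable and hides EF details.
public interface IContext
{
    IDbSet<User> Users { get; }   // assumed entity set
    int SaveChanges();
}
```

The concrete GiftListContext then implements IContext, and Autofac hands consumers the same request-scoped instance whichever way they ask for it.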

The final bit I want to talk about is authorization. I generally tend to use ActionFilterAttribute types to authorize the user and establish a CurrentUser property so that my calls can reliably run knowing what user is making the request. I won't be going into how to set this up as it varies project to project.

Here is an updated version of the Global.asax to support injecting dependencies into filters.

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
builder.RegisterWebApiFilterProvider(config);
builder.RegisterType<UserAuthenticateFilter>().PropertiesAutowired();
builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

One critical note is that Web API (due to the way it is designed) creates filters as singletons. So ANY dependency that is not in singleton scope will cause a scope resolution failure; the workaround is to use a Service Locator pattern to resolve the type. Here is an example:

var authorizationService = (IAuthorizationService)message.GetDependencyScope().GetService(typeof(IAuthorizationService));

This is the big advantage of the DependencyResolver approach that Web API uses: it allows us to resolve types without having to create something custom.
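Put together, a filter resolving a request-scoped service might look like this sketch (the filter body is an illustration, not the project's actual implementation):

```csharp
using System.Net.Http;
using System.Web.Http;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Hypothetical authorization filter. Because Web API caches filter
// instances as singletons, request-scoped services are resolved per
// request via the dependency scope instead of the constructor.
public class UserAuthenticateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        HttpRequestMessage message = actionContext.Request;

        // Resolve from the current request's dependency scope so the
        // service (and its request-scoped DbContext) are not shared
        // across requests.
        var authorizationService = (IAuthorizationService)message
            .GetDependencyScope()
            .GetService(typeof(IAuthorizationService));

        // ... use authorizationService to validate the caller ...
    }
}
```

Note the filter itself can still be registered with PropertiesAutowired for any singleton-safe dependencies, as in the Global.asax above.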

Hopefully this clears some of this up re: Dependency Injection.

Exploring Azure with Code First

For my upcoming talk at Codemash I decided to create a brand new API to use as the source data for demos. The API features data from Starcraft 2, mostly around unit information for now.

Initially, I wanted to use .NET Core with Entity Framework Code First for this, but after a few hours it was clear that there is still not enough maturity in this space to let me organize the solution the way I want (a separate library project for all EF models and the context). That being the case, I decided to work with the .NET Framework 4.6.1.

I was able to easily set up my context and models using Code First. I enabled migrations and built the database up. I considered this part difficult, as StarCraft features a lot of interesting relationships between units for each of the three races. In the end I was able to set up the database with what I wanted. In addition to annotated relationships in each of the models, I had to include some context-based customizations; below is my class for the Context:

public class StarcraftContext : DbContext, IContext
{
    public IDbSet<Unit> Units { get; set; }
    public IDbSet<UnitDependency> UnitDependencies { get; set; }
    public IDbSet<UnitProduction> UnitProductions { get; set; }

    public StarcraftContext() : base("StarcraftConnectionString")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UnitProduction>()
            .HasKey(x => x.UnitId);

        modelBuilder.Entity<Unit>()
            .HasRequired(x => x.Production)
            .WithRequiredPrincipal(x => x.Unit)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<UnitProduction>()
            .HasRequired<Unit>(x => x.ProducedBy)
            .WithMany(x => x.UnitsProduced)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<UnitDependency>()
            .HasRequired(x => x.TargetUnit)
            .WithMany(x => x.Dependencies)
            .WillCascadeOnDelete(false);
    }
}

Essentially, I wanted each Unit to exist in a single table with other tables indicating their dependencies and where they can be produced from. The model is still not complete as I have not found the best way to represent the Tech Lab (Terran) requirement or the Archon (Protoss) production but it will only be minor tweaks.
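For context, entity classes consistent with that fluent configuration might be shaped roughly like this; the scalar properties beyond those referenced in OnModelCreating are assumptions:

```csharp
using System.Collections.Generic;

// Illustrative entity shapes matching the OnModelCreating mappings;
// not the actual project models.
public class Unit
{
    public int Id { get; set; }
    public string Name { get; set; }

    // 1:1 with UnitProduction; Unit is the principal end.
    public virtual UnitProduction Production { get; set; }

    // Inverse ends of the two many-sides configured above.
    public virtual ICollection<UnitProduction> UnitsProduced { get; set; }
    public virtual ICollection<UnitDependency> Dependencies { get; set; }
}

public class UnitProduction
{
    public int UnitId { get; set; }            // PK, shared with Unit
    public virtual Unit Unit { get; set; }

    public int ProducedById { get; set; }
    public virtual Unit ProducedBy { get; set; }
}

public class UnitDependency
{
    public int Id { get; set; }
    public int TargetUnitId { get; set; }
    public virtual Unit TargetUnit { get; set; }
}
```

The WillCascadeOnDelete(false) calls matter here: with multiple required relationships back to Unit, cascading deletes would otherwise create multiple cascade paths, which SQL Server rejects.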

Once I had this created, along with API calls to return all units or units by faction, I was ready to deploy to Azure. That is where the real difficulty came in.

Azure App Service

Not to date myself too much, but my preference for things like this has always been the Mobile Service apps on Azure. Those are now merged with Web apps to create the App Service, so this was the first time I set up an instance fresh.

Deployment was made difficult by a sporadic error in Visual Studio telling me that a key already exists each time I tried to publish. The only consistent way I found to overcome this was to recreate the solution, which happened a few times given my struggles with getting the Code First Migrations to run when deployed to Azure (more on that in a bit). I now believe it had to do with me rewinding the git HEAD as I undid my changes.

Reading this tutorial (https://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) and a few others, the suggestion was to create the publish profile such that Execute Code First Migrations (runs on application start) would force the migrations to run. And indeed this is true; however, there was one thing that this and other tutorials failed to mention.

When migrations are enabled, Visual Studio (or rather the EF tool run as part of the Enable-Migrations cmdlet) creates a class called Configuration, which configures the migrations. Funny enough, its constructor contains a single line setting AutomaticMigrationsEnabled; below is my modified version:

public Configuration()
{
#if DEBUG
    AutomaticMigrationsEnabled = false;
#else
    AutomaticMigrationsEnabled = true;
#endif
}

Interestingly (and this is why it is not mentioned or handled for you when you select the checkbox), this property must be set to true. Since I am not sure what effect this will have when I am working locally, I chose to use a preprocessor directive to check the build configuration. When I published with this change and hit the API, the database migrations ran and the database was set up as expected.

With this change I was able to deploy the starting point of my Starcraft API to Azure and can begin using the Unit list in demos, as well as using it to build a site which allows me to learn Angular 2. More on that later. Cheers.

My First MVP Summit

So back in October I received word that I had been made a Microsoft MVP for my contributions to the Xamarin community. This was a huge honor and something that I have been working towards my whole life; it also meant that I was invited to the Microsoft MVP Summit at the Microsoft Campus in Redmond, WA. So this whole week was spent in sessions with Microsoft employees learning about the current state of .NET and where Microsoft is taking things. I can't share specifics, but I can share some high-level things that might be useful.

.NET Core
.NET Core was Microsoft's attempt to break up .NET and mitigate the huge DLL loading problem that many apps saw, in particular those taking advantage of microservices. There are certainly some big changes coming, and many announcements will be made at Microsoft Connect in the future. But .NET Core will continue to play a huge role, and as Microsoft continues to expand Docker support, that will be a win for all.

Xamarin
The focus on Xamarin at MVP Summit was immense and you can tell that Microsoft is really all in behind this company; there are a ton of things that are changing and will come out over the next 6 months. I am very excited for it, mainly because many of these things will address issues that I have had with Xamarin for quite some time. Also, it is always great to meet up with Miguel and talk to him. I am really impressed by their continued dedication to allowing .NET developers to use C# on iOS.

Obviously there is a lot more, but it really felt like drinking from a fire hose. I have a ton of notes, and I hope to address these topics more in-depth in the future, once Microsoft makes certain announcements and I don't have to worry about NDA. The one thing they did say is that much of this stuff is open source, so if you can find it, you can learn about it; they have quite the GitHub presence.

For now, I am looking forward to getting back to Chicago and being back with my fiancee as we move into more serious territory with regard to wedding planning. Cheers all. It was a blast and I could totally see myself living in Seattle in the future.

Thank you DevUp

I have had the great fortune to be selected to speak at St. Louis Days of .NET (now called DevUp) four times in a row. I am always impressed by the organization of the event and the staff; plus, who doesn't love getting a few days at a casino.

This year I gave one normal talk, my UI Testing with Xamarin talk, and a new workshop focused on Xamarin.Android. For the most part, the UI Testing talk went as expected. I feel like it's beginning to lose its luster, and I always feel like I come off as preachy during it; it's hard not to when talking about testing. Nevertheless I do get positive comments and it does engage the audience, so that's good. However, I think this will be the last event I use it at; I am moving more towards development and DevOps-centric topics, especially as my experience with Bitrise and HockeyApp grows.

As for the Xamarin.Android workshop, it was a classic case of being too greedy. I am very grateful the audience hung in there, and a few even got it working. But, in the end, I was upset that I didn't take into account actually running the code on Windows, as well as there being people who were unfamiliar with mobile programming in general. Overall, I made it work and it seemed well received. Next time I give it, I will try for something smaller than a Spotify preview player.

As I prepare to head back to Chicago tomorrow, I reflect that it was one of my best conference experiences in some time and a great way to close out the year. I look forward to unveiling the updated version of my Xamarin.Forms Intro talk at Codemash in January.

An Amazing Month

Normally I do not like to use this blog to talk about my personal life, but I wanted to talk about the month of September because of its significance. The month was very trying and stressful at work; our team took on a mobile project that involved us rewriting, in 1.5 months, a mobile application that a previous team took a year to write. I am happy to report that we succeeded thanks to the heroic efforts of every member of the team. We have now entered a period of hardening the application.

In addition to this, I am proud to say that on September 7, 2016 I proposed to my girlfriend of 3yrs at the same User Group we had our first “date”. Yes, you heard that right, I proposed to my girlfriend at a User Group (the WMP office specifically). She laughed and cried and said yes, and it was the unique proposal she always dreamed of; definitely found a good one.

Later in the month, I received another surprise, I was selected to present on Xamarin.Forms at Codemash. Codemash, for those who do not know, has become one of the largest conferences in the US and being selected is extremely difficult; this was my 5th attempt. I am very honored by this charge and am going to be spending the time ahead of the event making sure I prepare fully for this experience.

I also learned that I was promoted to the elite PAM level at West Monroe. PAM stands for Principal, Architect, and Manager: essentially the three paths the general consultant track divides into; you pick based on your preferences. For me, I am going the Principal route, basically a Tech Lead and tech expert. After my experiences at Centare, which made me wonder if I had lost a step, having this success at West Monroe was gratifying and inspiring. I really can't thank WMP enough for the chance to be successful. I would also be remiss if I didn't thank the great people I have gotten to work with; this promotion is as much a reflection of them as it is of me.

Then on the final day in September (ok Oct 1) I was informed I had become a Microsoft MVP, fulfilling a lifelong dream that, honestly, I never thought I would attain. I know that I am very active in local communities, but I know people who are much more active so I always thought the bar was out of reach. I am very humbled to be counted among the many other outstanding individuals who hold this honor. I promise to do my very best to uphold the sterling reputation MVPs have; also I am looking forward to the Summit.

Frankly, it would be hard to ever have a month like that again and I am still getting used to many of the changes (being engaged, being an MVP, and being a Principal) but I will get there. It really was an amazing month and I look forward to the next month.

Using Android Studio with Xamarin Studio

Let me be clear: I do not have much respect for the visual design tools that Xamarin creates. There is not anything particularly wrong with them; they just lack refinement, most likely because Xamarin is too busy keeping their bindings updated to keep pace with changes. This is something I have come to accept on the iOS side, and so when I do create visual layouts for Storyboards or Xib files I use Xcode exclusively, despite the existence of an editor in Xamarin Studio.

Recently, West Monroe engaged in a rare Android project and I got a chance to see how Xamarin Studio worked with Android. Regrettably, I found the tooling to be even further behind Android Studio than the iOS side is behind Xcode. Forget even hoping for a sane layout if you used any controls from the support libraries. The few times I got something to render, it was so far off that I ended up just running on the device. While this gave me accuracy, it slowed my dev time down considerably. I happened to complain about this in one of the Xamarin Support Slack channels I frequent, when a fellow dev offered some advice that showed me how to use Android Studio to do the design work. I knew I had to share this.

1) Assuming you have a pre-existing Xamarin Studio solution, create an empty Android Studio project somewhere away from the Xamarin project. I am using a directory called Test, while my main code resides in my Projects directory.

2) Once you have set up the Android Studio project, open the build.gradle file under the Gradle Scripts section. Hint: you want the one marked Module: app.

Within this file, under the android section, add the following block:

sourceSets {
    main {
        res.srcDirs = ['/path/to/Xamarin/Resources/Directory']
    }
}


3) The project may reload on its own (you may have to use Sync Now); if it doesn't, restart Android Studio.

4) Open a layout file. Note you WILL have to use the .xml extension, as Android Studio will not recognize .axml (not sure why Xamarin uses this extension). You now have full access to the designer. You can freely change themes and use the Android Studio designer to lay out and preview your layouts.

Note: you may experience rendering problems or missing controls. I did, so I added the following to my dependencies section:

compile 'com.android.support:appcompat-v7:23.4.0',
        'com.android.support:design:24.2.0',
        'com.android.support:cardview-v7:22.2.0',
        'com.android.support:recyclerview-v7:22.2.0'


Hopefully in the future Xamarin releases a better Android designer; I wouldn't count on it, though. I am still having some problems getting this to work fully, but the designer has already saved me about an hour of design work.

Understanding Workflows

In the previous post, I explained the basics of getting Bitrise up and running with a very basic build. I explained the various default steps that are included and the basics of triggers. Now I want to go more in-depth on workflows and triggers.

What is a workflow?

At the very heart of every workflow is the YAML file, bitrise.yml. You can access this file through the Manage Workflow option. This is, in essence, your Bitrise configuration. You can freely distribute this file if you want a client to set up the same build configuration elsewhere; we actually just did this for a client we finished an engagement with.

Here is a chunk from the workflow we created previously:

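The snippet itself did not survive formatting; a minimal illustrative bitrise.yml of roughly the right shape (the step names and layout here are assumptions, not the actual file) looks like:

```yaml
format_version: 1.1.0
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git

workflows:
  primary:
    steps:
    - activate-ssh-key: {}
    - git-clone: {}
    - certificate-and-profile-installer: {}
    - xamarin-user-management: {}
    - xamarin-builder:
        inputs:
        - xamarin_project: $BITRISE_PROJECT_PATH
        - xamarin_configuration: $BITRISE_XAMARIN_CONFIGURATION
    - deploy-to-bitrise-io: {}
```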

If you read through this and consider the current configuration, you can see that the parts map easily to the web components you have been interacting with. I want to call attention to default_step_lib_source.

One of the great things about the way Bitrise is organized is the way steps are used. When each step is executed, its source is downloaded from a Git repo (the default is listed above). This being the case, you can add your own steps, or customizations of existing steps, from your own repository; this is also what allows Bitrise to run locally via its CLI implementation. My plan is to go through how to create your own custom steps in a later article; however, this is still something I am learning.

At the end of the day, a workflow really is just a collection of these steps downloaded on command and executed against the code stored on the virtual machine.

Building a Workflow

Bitrise does a great job of giving you a working workflow out of the box. The default one does what you expect and can serve well for continuously integrating your code. In general, the start of a workflow must do the obvious things, such as authenticate itself, get the code, and then perform any other operations ahead of building it; for example, with a Xamarin project you are going to need the User Management step. However, once you get beyond the basic case, you might want to consider a few of the features that Bitrise offers.

The first of these is the idea of subworkflows. With subworkflows, larger workflows can be composed of smaller ones; it's like coding, we want to reuse things where appropriate. And if something changes in the build process, it is better to change it in one place than across many.

To show you what I mean, click the + icon next to primary in the Manage Workflow view. Name the new workflow whatever you like (I will name mine prepare). Once this is complete you will see the new workflow listed next to primary.


By default, Bitrise copies the steps in the currently selected workflow into the new one, so this will come prepopulated and look the same as primary.

For this exercise, go back to primary and delete all of the steps (select each one individually and use the trash can icon) with the exception of Certificates and profile installer, Xamarin Builder (assuming your repo is a Xamarin project) and Deploy to Bitrise.io.

Hit Save, then use the interface to rename the primary workflow to CI. To do this, simply click on the workflow name in the tab; Bitrise will allow you to rename it inline.


Before we go further, I want to point out a naming convention that I like to use: main workflows (those that will be targeted by triggers) are capitalized; subworkflows (those that will not be targeted by triggers) use lowercase. This is my personal convention; you can use it if you like or develop your own. Consistency is the most important thing.

Go now to the prepare workflow and remove the steps that CI retained. Return to the CI workflow and create a pre-workflow phase by adding a subworkflow to run before the Main steps (add prepare).


I recognize, and so does the Bitrise team, that this looks awkward, as the very first step in ANY workflow (sub or otherwise) is a preparation step (there is also a cleanup at the end). These steps are both smart enough to detect the presence of pre- and post-workflow phases and WILL NOT run until those workflows complete.

Hit Save and you are just about ready. Head over to the Triggers configuration and clean up any extraneous triggers created by Bitrise (by default a trigger is created for each workflow name, and that name is assumed to match a remote branch in your Git repo). For now you can add a specific branch name from your repo or stick with * to match all. The important thing is that this trigger kicks off the CI workflow (so specify CI for the trigger's workflow).


About Pull Requests and Order of Operations

While adding your triggers you may have noticed a couple of things: 1) triggers can be added in any order you desire, and 2) when creating a trigger you have the option to select whether a pull request triggers the workflow. Let's talk about these.

Bitrise triggers are order based, which is to say the first pattern matched is the one that fires. So in this case, having * first would prevent any other workflow from ever firing; * matches everything, so the matcher would never get past it.

Many teams utilize pull requests (West Monroe certainly does) as a way to perform reviews of code entering the code base. At West Monroe, we often do these when code enters feature branches, allowing senior developers to review the code and ensure that nothing heinous is being committed. When you check this box, these pull requests will trigger a build as well. This is a very useful feature, as it allows the build server to validate that the code will be valid if allowed into the code base (including running the unit tests). This greatly speeds up our process and allows reviewers to focus on the code without having to worry about whether it builds.

How does West Monroe use workflows?

At West Monroe our workflows are triggered with this configuration, though it varies project to project:

  • task/* – triggers CI – used to validate a remote task branch being developed for a feature. This is optional and only used on certain projects
  • feature/* – triggers CI (pull requests build as well) – validates the feature and generates a developer build that can be downloaded by local QA staff (not distributed to the client)
  • bugfix/* – triggers CI – similar to task; identifies code which is applied to fix a known bug found in a stable version of the code
  • QA – triggers QA – kicked off by a merge into the QA branch, generally of one or more features. This process generates a build that the client receives for QA
  • stable – triggers Stable – kicked off by a merge into the stable branch of one or more approved feature branches (code that has passed QA testing). This generates a build for client stakeholders, and the team demos from this code at the conclusion of each sprint
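In bitrise.yml terms, a trigger map along these lines could express that configuration; the sketch below is illustrative, not our actual file:

```yaml
trigger_map:
- pattern: task/*
  is_pull_request_allowed: false
  workflow: CI
- pattern: feature/*
  is_pull_request_allowed: true    # PR builds enabled for features
  workflow: CI
- pattern: bugfix/*
  is_pull_request_allowed: false
  workflow: CI
- pattern: QA
  workflow: QA
- pattern: stable
  workflow: Stable
```

Note the ordering: the specific patterns come first, consistent with the first-match-wins behavior described above.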

By utilizing workflows and subworkflows, West Monroe is able to manage an automated build and distribution process driven by the fundamentals of Scrum and Agile. The client receives builds throughout the sprint and provides validation and feedback that guides the team. At the end, the client receives a stable build which can be shown to stakeholders to yield additional feedback. Because only Done items reach this build, the client is able to dictate the features in each build.

App vs Workflow Environment Variables

The final point for this entry is in regard to variables. When you set up your Bitrise project, the process forces you to define environment variables depending on your target platform; for Xamarin they include BITRISE_PROJECT_PATH, BITRISE_XAMARIN_CONFIGURATION, and BITRISE_XAMARIN_PLATFORM. But you can define more, and I often do, especially when you start including other integrations outside the norm (more on this next time). What is important is to understand scoping.

My general practice is that the default values for my variables are based on what I need in CI (my lowest-level build). Other workflows (QA, stable, master, etc.) need different values fed to the build process (for example, a different Xamarin configuration to force my code to build a certain way). In cases like these you can use Manage env. vars to override your existing variables.
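In the yml this override amounts to workflow-level envs shadowing the app-level defaults. An illustrative sketch, with assumed values:

```yaml
app:
  envs:
  - BITRISE_XAMARIN_CONFIGURATION: Debug     # default, used by CI

workflows:
  QA:
    envs:
    - BITRISE_XAMARIN_CONFIGURATION: Release # override for QA builds
```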


Regrettably (and Bitrise knows this), the interface for this is awful. I recommend copying the name of the variable you wish to override before entering it, since you will have to specify it exactly to get the override. Remember, even in subworkflows it is the local environment variables that take precedence. This being the case, the best way to avoid confusion is to NEVER define environment variables for subworkflows; it will cause you pain and create confusion.

You can also define static values in your steps directly; this makes sense in some cases, but it's something I try to avoid. It should be noted that Bitrise does not, as far as I know, have a secure way to store credentials, so for now the best approach is to limit access to the build server.

So that is it for workflows, hopefully that helps explain them. Until next time.