New Project: The Gift List

My family has many traditions around Christmas, and with the expansion that my youngest brother's marriage and my own will bring, we are seeing more traditions emerge. One of these involves the growing number of people we buy gifts for, which has made the present system untenable.

By our tradition, each family member releases a Christmas list of sorts just before Thanksgiving, enabling other family members to purchase gifts for that person. Traditionally we have used email, and it's a nightmare; I, for example, have to buy for over 10 people. For someone like me who is used to a lot of email it's not a huge deal, but the rest of my family is not me, and it's not uncommon for people to get emails from others detailing their buying plans. So I thought: what can I do?

The idea has been germinating for quite a few years and, I realize, there is probably something like this already out there, but I wanted to build something, if only to keep my web backend skills sharp and to play with new technologies. The idea is The Gift List.

The essence of the Gift List is simple. As a user, you create a list and add items to it. You can invite others to view the list, and they can indicate their buying plans for individual items or add items they are buying to prevent duplication. Now, you may be thinking, “well, if I can see what others are getting, what is the point?” And that is where the kicker comes in.

You see, when you create a list as the Owner, you can see the items you added, but never their status. When you join a list as an associated user, you can see and do everything. In this way, everyone but you can see what is being bought. Now, you could still be a dick and invite yourself to your own list to see everything, but there really is no way to prevent that, and in my mind the positives of getting this out of email outweigh that drawback.

Currently, I am just about done with the API and am working through the invite process. I hope to deploy some time before February and then start working on a simple mobile app. If it all goes well, we should be able to use it for my Father's birthday in April as a sort of dry run. My other hope is that I can convince my middle brother Sean Farrell (www.brandclay.com) to get engaged as a designer and maybe, sometime next year, we'll make this a polished app with Facebook integration. For right now though, it's just about getting something out the door.

Codemash and Xamarin.Forms

Codemash is one of the largest tech conferences in North America and an event I had always wanted to speak at. I was finally successful: they accepted my Intro to Xamarin.Forms talk, which I gave on Thursday (1/12). It went very well and generated a lot of interest. I thought it would be good to summarize the finer points here:

The mobile market has a lot of interesting dynamics that make it different from past platform races, such as the browser wars or the OS wars. The reason is that, while Android holds an 80% (or higher, depending on who you ask) market share, iOS controls roughly 60% of the profits. So it's a situation where there is no one clear favorite, and you end up needing to support both.

Xamarin is good for this because its code-sharing model makes it easy to share business logic between the two platforms in a single language. This is something many in the community are working on, but, in my view, Xamarin has the best option at present. However, Xamarin is not a silver bullet and does suffer from several issues:

  • Because the bindings are written using Mono, and Mono sits on top of iOS/Dalvik, we have to recognize that there are two memory management systems at work. To be clear, objects are not duplicated; rather, the disposal of the objects held by iOS (using ARC – Automatic Reference Counting) is initiated by the generational garbage collector in Mono. Synchronization is therefore key, and developers need to be aware of this to avoid strong reference cycles.
  • While Xamarin does offer the ability to bind Objective-C (no Swift yet) and Java libraries so they can be used, this is an extra step, and we have seen that such bindings can be cumbersome and difficult to maintain. That said, Xamarin developers get access to the wealth of packages available via NuGet, and with Microsoft progressively moving toward a standard for .NET we will likely see more projects become cross-platform compliant in the future.
  • In any application, the UI always seems to take the most work; it was true with desktop apps, it is true with web apps, and it is especially true with mobile apps. Even if you use Xamarin, you still have to create a unique UI for each platform.

With respect to the last bullet, this is the reason Xamarin created the Forms framework. It enables us to write a single UI definition which the framework then renders appropriately on each platform, giving us a native look. This is where it differs from tools such as Cordova/PhoneGap, which employ one web-based UI for all platforms. Effectively, Forms enables the creation of multiple native UIs at the same time.
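To make that concrete, here is a minimal sketch of what a single Forms UI definition can look like; the page and its contents are hypothetical, but the pattern is standard Xamarin.Forms:

```csharp
using Xamarin.Forms;

// One page definition in shared code; Forms renders it with native
// controls on iOS, Android, and Windows -- no per-platform UI code.
public class GreetingPage : ContentPage
{
    public GreetingPage()
    {
        Content = new StackLayout
        {
            Padding = new Thickness(20),
            Children =
            {
                new Label { Text = "Hello, Forms!", FontSize = 24 },
                new Button { Text = "Tap me" }
            }
        };
    }
}
```

The Label here becomes a UILabel on iOS and a TextView on Android, which is what gives the app its native look despite the single definition.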

It is important to understand that Forms is not intended to replace traditional mobile development and should generally be used for apps whose UI can be generalized: POCs, enterprise data apps, and data entry apps, for example. If your app will require a lot of UI customization, it might be better to adopt traditional Xamarin instead of Forms. My rule of thumb tends to be:

If you are spending most of your time in traditional Xamarin, you should be using traditional Xamarin

What this means is that we often use things like Custom Renderers to take over UI customization in Forms. These renderers are written in our platform-specific projects because they are invoked when the framework needs to render a control on the current platform. If you find yourself writing a lot of these, I advise you to weigh whether you are getting true benefit from Forms.
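As a sketch of what one of these looks like, here is a hypothetical iOS renderer for a made-up MyEntry control (the control and renderer names are mine, not from a real project):

```csharp
using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;

// Tells Forms which renderer to invoke for MyEntry on this platform.
[assembly: ExportRenderer(typeof(MyEntry), typeof(MyEntryRenderer))]

// Shared project: a hypothetical subclass of the built-in Entry.
public class MyEntry : Entry
{
}

// iOS project: takes over rendering for MyEntry.
public class MyEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged(e);

        if (Control != null)
        {
            // Control is the native UITextField; platform-specific
            // customization (UIKit APIs) lives here.
            Control.BorderStyle = UIKit.UITextBorderStyle.None;
        }
    }
}
```

The point of the rule of thumb above: every renderer like this is platform-specific code, so if your project is mostly renderers, you are effectively doing traditional Xamarin with extra steps.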

At the end of the day, Forms is a tool and it is incumbent on us as developers to determine the appropriate utility of that tool. But, we are seeing people using the tool in interesting ways, one of the most impressive of which is the Hybrid App.

Because Forms makes it quick to create a page for every supported platform, some have started to use it for only certain pages in their application, such as a settings screen: pages that may be complicated due to the volume of content, but do not feature enough customized UI that developing separate interfaces for each platform would yield a positive gain.

I plan to give the full talk again at the .NET Mobile User Group in Chicago in the near future. If you are interested, please don't hesitate to ask me about it or about Forms in general.

Injecting all the WebAPI Things

Dependency Injection is all the rage and has become a must for just about any application, in particular if you are using Entity Framework, as it allows for request-level DbContext instance management. Let me explain.

Think of the DbContext as a pipe into the database. I often see code like this in applications I review for West Monroe:

using (var context = new SomeContext())
{
    // some data access code here
}

This is bad because each time you use this pattern you create another pipe into your database. And if your code is complex, it may contain multiple service calls, each of which opens a context. This can quickly overwhelm the database, as it can drain the connection pool and create performance problems.

The reason Dependency Injection helps with this is that it allows us to inject instances at varying levels of scope, and in the case of the web the ideal scope is the request.

The present gold standard for DI (Dependency Injection) in .NET is the Autofac library. It is easily downloadable from NuGet and integrates with almost everything, including Web API; it even comes with prebuilt classes to perform the injection.

You will need the NuGet package Autofac.WebApi2:

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

This is my Global.asax in Application_Start. A few things to point out:

  1. RegisterApiControllers scans the executing assembly (our API project) and performs convention-based registration (basically, it maps anything ending with Controller).
  2. Web API operates off the DependencyResolver pattern. Rather than overloading the controller factory (what we do in MVC), this lets us contain our type resolution logic more cleanly. In this case, AutofacWebApiDependencyResolver is provided by the integration package.
  3. My general approach to DI is to use an Autofac Module for each project. These modules specify the injection mappings for the types defined in those assemblies. It just keeps things separated in a very nice way.
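As a sketch of what one of these per-project modules might look like, here is a hypothetical ServiceModule; the concrete service types are assumptions, though the interfaces match the ones the controllers below consume:

```csharp
using Autofac;

// One module per assembly keeps each project's mappings in one place.
public class ServiceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        // Hypothetical mappings for the service-layer assembly.
        builder.RegisterType<UserService>().As<IUserService>();
        builder.RegisterType<AuthorizationService>().As<IAuthorizationService>();
    }
}
```

Because the module lives in the same assembly as the types it registers, the web project never needs a direct reference to the concrete implementations.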

Once you have this in place, you can receive injections through the constructor of each controller. Here is an example of my UserController constructor.

public UserController(IUserService userService, PasswordService passwordService, IAuthorizationService authorizationService)
{
    UserService = userService;
    PasswordService = passwordService;
    AuthorizationService = authorizationService;
}

In the past, I would normally use property injection to fulfill my dependencies. Recently, however, I have shifted to using the constructor, as it makes unit testing easier.

This code should just work with your dependency mappings and the code above in the Global.asax.

One of the points of emphasis I have made here is the fact that DI can help with scoping our DbContext to the request level. I do this in my ModelsModule since the context is defined in my Models project and not the web project. Here is the code:

public class ModelsModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        builder.RegisterType<GiftListContext>()
            .As<IContext>()
            .InstancePerRequest();
    }
}

That is it. The key here is the built-in method InstancePerRequest. I also chose to use an IContext interface to define the public pieces of my DbContext, but you can omit this and it will still work. I cannot express how nice it is to have this kind of flexibility and to let a reliable third-party mechanism control this part of the process.

The final bit I want to talk about is authorization. I generally tend to use ActionFilterAttribute types to authorize the user and establish a CurrentUser property so that my calls can run reliably, knowing which user is making the request. I won't be going into how to set this up, as it varies project to project.

Here is an updated version of the Global.asax to support injecting dependencies into filters.

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
builder.RegisterWebApiFilterProvider(config);
builder.RegisterType<UserAuthenticateFilter>().PropertiesAutowired();
builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

One critical note: Web API (due to the way it is designed) creates filters as singletons. So ANY dependency that is not in singleton scope will cause a scope resolution failure. The workaround is to use a ServiceLocator pattern to resolve the type. Here is an example:

var authorizationService = (IAuthorizationService)message.GetDependencyScope().GetService(typeof(IAuthorizationService));

This is the big advantage of the DependencyResolver approach that Web API uses: it allows us to resolve types without having to create something custom.
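Putting that resolution line in context, here is a sketch of what such a filter could look like; the filter body is an assumption, but GetDependencyScope is the standard Web API mechanism for per-request resolution:

```csharp
using System.Web.Http;            // for the GetDependencyScope extension
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

// Sketch: the filter instance itself is cached by Web API, so any
// request-scoped dependency must be resolved per request from the
// message's dependency scope rather than injected at construction.
public class UserAuthenticateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var message = actionContext.Request;

        var authorizationService = (IAuthorizationService)message
            .GetDependencyScope()
            .GetService(typeof(IAuthorizationService));

        // ... use authorizationService to validate the caller and
        // establish CurrentUser for the controller action ...
    }
}
```

Because the scope comes from the current HttpRequestMessage, the resolved service participates in the same InstancePerRequest lifetime as everything else in that request.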

Hopefully this clears some of this up regarding Dependency Injection.

Exploring Azure with Code First

For my upcoming talk at Codemash I decided to create a brand new API to use as the source of demo data. The API features data from Starcraft 2, mostly unit information for now.

Initially, I wanted to use .NET Core with Entity Framework Code First for this, but after a few hours it was clear that there is still not enough maturity in this space to let me organize the solution the way I want (a separate library project for all EF models and the context). That being the case, I decided to work with the .NET Framework 4.6.1.

Setting up my context and models with Code First was straightforward; I enabled migrations and built the database up. The modeling itself was the difficult part, as StarCraft features a lot of interesting relationships between the units of each of the three races. In the end I was able to set up the database the way I wanted. In addition to annotated relationships in each of the models, I had to include some context-based customizations; below is my class for the Context:

public class StarcraftContext : DbContext, IContext
{
    public IDbSet<Unit> Units { get; set; }
    public IDbSet<UnitDependency> UnitDependencies { get; set; }
    public IDbSet<UnitProduction> UnitProductions { get; set; }

    public StarcraftContext() : base("StarcraftConnectionString")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UnitProduction>()
            .HasKey(x => x.UnitId);

        modelBuilder.Entity<Unit>()
            .HasRequired(x => x.Production)
            .WithRequiredPrincipal(x => x.Unit)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<UnitProduction>()
            .HasRequired<Unit>(x => x.ProducedBy)
            .WithMany(x => x.UnitsProduced)
            .WillCascadeOnDelete(false);

        modelBuilder.Entity<UnitDependency>()
            .HasRequired(x => x.TargetUnit)
            .WithMany(x => x.Dependencies)
            .WillCascadeOnDelete(false);
    }
}

Essentially, I wanted each Unit to exist in a single table, with other tables indicating its dependencies and where it can be produced. The model is still not complete, as I have not found the best way to represent the Tech Lab (Terran) requirement or Archon (Protoss) production, but those should be only minor tweaks.
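For reference, here is a sketch of the entity classes that mapping implies; these are my reconstruction from the OnModelCreating code above (the scalar properties are assumptions), not the actual model classes:

```csharp
using System.Collections.Generic;

public class Unit
{
    public int Id { get; set; }
    public string Name { get; set; }       // hypothetical scalar fields
    public string Faction { get; set; }

    // 1:1 with UnitProduction (shared primary key).
    public virtual UnitProduction Production { get; set; }

    // Units that list this unit as their producer.
    public virtual ICollection<UnitProduction> UnitsProduced { get; set; }

    // Dependencies targeting this unit.
    public virtual ICollection<UnitDependency> Dependencies { get; set; }
}

public class UnitProduction
{
    public int UnitId { get; set; }        // PK, shared with Unit
    public virtual Unit Unit { get; set; }

    public int ProducedById { get; set; }
    public virtual Unit ProducedBy { get; set; }
}

public class UnitDependency
{
    public int Id { get; set; }
    public int TargetUnitId { get; set; }
    public virtual Unit TargetUnit { get; set; }
}
```

Each navigation property here lines up with one of the fluent mappings in the Context: Production/Unit with the required 1:1, ProducedBy/UnitsProduced with the required-to-many, and TargetUnit/Dependencies with the dependency relationship.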

Once I had this, I created API calls to return all units, or units by faction, and was ready to deploy to Azure. That is where the real difficulty came in.

Azure App Service

Not to date myself too much, but my preference for things like this has always been the Mobile Service apps on Azure. Those have since been merged with Web Apps to create the App Service, so this was the first time I set up an instance fresh.

Deployment was made difficult by a sporadic error in Visual Studio telling me that a key already exists each time I tried to publish. The only consistent way I found to overcome this was to recreate the solution, which happened a few times given my struggles getting the Code First Migrations to run once deployed to Azure (more on that in a bit). I now believe it had to do with me rewinding the git HEAD as I undid my changes.

Reading this tutorial (https://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) and a few others, the suggestion was to configure the publish profile so that Execute Code First Migrations (runs on application start) would force the migrations to run. And indeed this is true; however, there was one thing missing, one thing that this and other tutorials failed to mention.

When migrations are enabled, Visual Studio (or rather the EF tooling run as part of the Enable-Migrations cmdlet) creates a class called Configuration, which configures the migrations. Funny enough, in its constructor there is a single line setting AutomaticMigrationsEnabled to false. Here is my modified version:

public Configuration()
{
#if DEBUG
    AutomaticMigrationsEnabled = false;
#else
    AutomaticMigrationsEnabled = true;
#endif
}

Interestingly, and this is what the tutorials fail to mention (and what selecting the checkbox does not handle for you), this property must be set to true. Since I am not sure what effect this will have when working locally, I chose to use a preprocessor directive to check the build configuration. When I published with this change and hit the API, the migrations ran and the database was set up as expected.

With this change I was able to deploy the starting point of my Starcraft API to Azure, and I can begin using the unit list in demos as well as using it to build a site that lets me learn Angular 2. More on that later. Cheers.

My First MVP Summit

So back in October I received word that I had been made a Microsoft MVP for my contributions to the Xamarin community. This was a huge honor and something I have been working toward my whole life; it also meant I was invited to the Microsoft MVP Summit at the Microsoft campus in Redmond, WA. This whole week was spent in sessions with Microsoft employees learning about the current state of .NET and where Microsoft is taking things. I can't share specifics, but I can share some high-level things that might be useful.

.NET Core
.NET Core was Microsoft's attempt to break up .NET and mitigate the huge DLL-loading problem that many apps saw, in particular those built on microservices. There are certainly some big changes coming, and many announcements will be made at Microsoft Connect. But .NET Core will continue to play a huge role, and as Microsoft expands Docker support that will be a win for all.

Xamarin
The focus on Xamarin at the MVP Summit was immense, and you can tell that Microsoft is really all in behind this company; there are a ton of things changing that will come out over the next six months. I am very excited, mainly because many of these things will address issues that I have had with Xamarin for quite some time. Also, it is always great to meet up with Miguel and talk to him. I am really impressed by their continued dedication to letting .NET developers use C# on iOS.

Obviously there is a lot more, but it really felt like drinking from a fire hose. I have a ton of notes, and I hope to address these topics in more depth once Microsoft makes certain announcements and I don't have to worry about the NDA. The one thing they did say is that much of this work is open source, so if you can find it, you can learn about it; they have quite the GitHub presence.

For now, I am looking forward to getting back to Chicago and being back with my fiancee as we move into more serious territory with regard to wedding planning. Cheers all. It was a blast and I could totally see myself living in Seattle in the future.

Thank you DevUp

I have had the great fortune to be selected to speak at St. Louis Days of .NET (now called DevUp) four times in a row. I am always impressed by the organization of the event and the staff; plus, who doesn't love getting a few days at a casino?

This year I gave one normal talk, my UI Testing with Xamarin talk, and a new workshop focused on Xamarin.Android. For the most part, the UI Testing talk went as expected. I feel like it's beginning to lose its luster, and I always feel like I come off as preachy during it; it's hard not to when talking about testing. Nevertheless, I do get positive comments and it does engage the audience, so that's good. However, I think this will be the last event I use it at; I am moving toward development and DevOps-centric topics, especially as my experience with Bitrise and HockeyApp grows.

As for the Xamarin.Android workshop, it was a classic case of being too greedy. I am very grateful the audience hung in there, and a few even got it working. But, in the end, I was upset that I didn't take into account actually running the code on Windows, as well as there being people unfamiliar with mobile programming in general. Overall, I made it work and it seemed well received. Next time I give it, I will try for something smaller than a Spotify preview player.

As I prepare to head back to Chicago tomorrow, I reflect that it was one of my best conference experiences in some time and a great way to close out the year. I look forward to unveiling the updated version of my Xamarin.Forms Intro talk at Codemash in January.

An Amazing Month

Normally I do not like to use this blog to talk about my personal life, but I wanted to talk about the month of September because of its significance. The month was very trying and stressful at work: our team took on a mobile project that involved rewriting in 1.5 months an application that a previous team took a year to write. I am happy to report that we succeeded, thanks to the heroic efforts of every member of the team. We have now entered a period of hardening the application.

In addition, I am proud to say that on September 7, 2016 I proposed to my girlfriend of three years at the same User Group where we had our first “date”. Yes, you heard that right: I proposed to my girlfriend at a User Group (the WMP office, specifically). She laughed and cried and said yes, and it was the unique proposal she always dreamed of; I definitely found a good one.

Later in the month I received another surprise: I was selected to present on Xamarin.Forms at Codemash. Codemash, for those who do not know, has become one of the largest conferences in the US, and being selected is extremely difficult; this was my 5th attempt. I am very honored by this charge and will spend the time ahead of the event making sure I prepare fully.

I also learned that I was promoted to the elite PAM level at West Monroe. PAM stands for Principal, Architect, and Manager: essentially the three paths the general consultant track divides into; you pick based on your preferences. For me, I am going the Principal route, basically a tech lead and technical expert. After my experiences at Centare, which made me wonder if I had lost a step, having this success at West Monroe was gratifying and inspiring. I really can't thank WMP enough for the chance to be successful. I would also be remiss if I didn't thank the great people I have gotten to work with; this promotion is as much a reflection of them as it is of me.

Then on the final day of September (ok, Oct 1) I was informed I had become a Microsoft MVP, fulfilling a lifelong dream that, honestly, I never thought I would attain. I know I am very active in local communities, but I know people who are much more active, so I always thought the bar was out of reach. I am very humbled to be counted among the many other outstanding individuals who hold this honor. I promise to do my very best to uphold the sterling reputation MVPs have; also, I am looking forward to the Summit.

Frankly, it would be hard to ever have a month like that again, and I am still getting used to many of the changes (being engaged, being an MVP, and being a Principal), but I will get there. It really was an amazing month, and I look forward to the next one.