Mobile Center vs Bitrise

At West Monroe, one of the biggest pain points we had when developing mobile applications was the lack of an effective DevOps process. That all changed in April of last year when we adopted Bitrise; it has been nothing but smiles, for the most part, on this end.

One of our early attempts was to use the build engine in Visual Studio Online to carry out builds, but this came with the requirement to host a Mac build agent on-prem. To be frank, this was a clusterfuck. During the pilot project I spent as many hours over the first three weeks trying to keep the agent running as I did on actual development work. Eventually we terminated the pilot and went without a build process entirely; our quality suffered. It was toward the end of this project that we brought Bitrise into the fold.

I will save you reading the whole article: Bitrise beats Mobile Center right now. Bearing in mind that Mobile Center is still in Preview, this isn't surprising, as it lacks maturity (they just added VSTS support a few days ago). But the direction looks good, and it does have managed Mac build agents, which is a huge plus over Visual Studio Online.

From my early experiences using it, Mobile Center is still very much a work in progress, and I am communicating a ton of feedback to the team; I even got early access to the VSTS connector by complaining on Twitter. But the end result is the same: Bitrise is leaps and bounds ahead of Mobile Center at the current time. Will Mobile Center get there? Possibly. One of the great strengths of Bitrise is its Step library, which contains both curated and independently submitted Steps; we use a lot of these on more complex projects.

By contrast, Mobile Center offers little flexibility: there is only a single build path that can be leveraged and little customization. Also, while Bitrise's dashboard lets you see in-process and past builds, Mobile Center focuses on showing ALL remote branches and bubbling those performing build actions to the top; I find this busy.

However, I see a lot of runway for Mobile Center because of what it offers beyond a simple build engine: integrated analytics, crash reporting, and distribution. There is no need to use HockeyApp as a separate app. Having everything centralized makes a lot of sense, but if the build processes are not feature rich it won't be worth it; personally, given the number of tasks available in VSO, it is hard to believe this won't change soon.

So, for right now, I would not consider using Mobile Center for anything but the simplest of projects, where the support needs are limited and the scope diminished. It's not ready to join the big boys... yet.


Adding DevOps for GiftList

One of my aims with my GiftList project is to build sensible CI/CD support processes to enable easy deployments as I continue to develop the API. Another is to move away from pure mobile development and return to my roots as a backend developer, while maintaining my strong focus on solid development techniques and processes.

Setting up the build

The first step was to set up the build so I could confirm that I didn't miss a build error with a given check-in. VSO makes this incredibly easy using the Build Solution task; just remember to have it follow a NuGet Restore task.

I always recommend using individual build configurations for each build type; it makes it easier to be selective as you move through the process. I used to recommend this as a way to change your connection string at each stage, but that is falling out of practice since the cloud can hold your credentials and do the replacement for you. Such an approach maintains the "wall" between development and production, so the credentials for production are known only to the cloud and select individuals.
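As a minimal sketch of why this works (the connection string name here is hypothetical): application code reads the connection string by name, and Azure App Service silently substitutes the value configured in the portal at runtime, so production credentials never live in source control.

using System.Configuration;

public static class ConnectionFactory
{
    public static string GetConnectionString()
    {
        // Locally this resolves from web.config; in Azure, the App Service
        // setting with the same name overrides it at runtime.
        return ConfigurationManager
            .ConnectionStrings["GiftListConnectionString"]
            .ConnectionString;
    }
}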

Adding the Unit Tests

Adding the unit tests (128 as of this writing) took me a month after I had completed a majority of the API code. But honestly, this worked out well, as going back and adding the unit tests uncovered a few bugs that I would have otherwise missed.

In terms of organization, I have always been a big fan of the "given an instance … verify …" pattern, where you attempt to construct logical sentences describing what is being tested with the class name and method name. Example:

given an instance of itemcontroller call getitems verify bad request is returned if id is empty guid

The first portion describes the general action being tested: calls to GetItems on the ItemController. The second portion indicates the test case being explored: a bad request is returned if the id GUID is empty. This approach enables good organization when you have hundreds of tests. It also helps prevent accidental duplication.
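Here is a minimal sketch of the pattern using NUnit; the controller, stub service, and result type are hypothetical illustrations, not the actual GiftList code:

using System;
using NUnit.Framework;

[TestFixture]
public class GivenAnInstanceOfItemControllerCallGetItems
{
    [Test]
    public void VerifyBadRequestIsReturnedIfIdIsEmptyGuid()
    {
        // Arrange: a hypothetical controller with a stubbed service
        var controller = new ItemController(new StubItemService());

        // Act
        var result = controller.GetItems(Guid.Empty);

        // Assert: the fixture name plus the test name read as a sentence
        Assert.IsInstanceOf<BadRequestResult>(result);
    }
}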

Now, once I added the tests, I needed to instruct VSO to actually run them as part of the build. To do this, you need to tell the task WHERE the tests are located. Tests CAN be added as part of your main projects, but this means the test code gets added to all versions of the assembly (you could remove it with a preprocessor directive), which would inflate the overall size of the application. My preference is to add new assemblies with a .Test suffix. I then instruct my other build profiles NOT to build these beyond development.

Once you do this, configuring the VSO tasks is rather easy. Here is a screenshot of mine:

[Screenshot: Test task configuration]

Remember, the default setting in Visual Studio is that the DLL name matches the project name, so this works here. The ** in the search pattern means the search is recursive.

Truth is, even with this your tests will not run if you are not using MSTest. In my case I was using NUnit, and I needed to ensure I had the test adapter NuGet package referenced (there are two versions, one for NUnit 2.x and one for 3.x; use the correct one for your version).

Now I have 128/128 tests passing.

Let’s deploy it

So this proved to be the most difficult part for me, but it should be easy for you now. First, make sure you are publishing your artifacts using the Publish Artifacts task. Documentation here is kind of sparse, but what you want to do is publish your output directory (mine is /bin) to a staging directory which we can use for a release.

[Screenshot: Publish Artifacts task configuration]

Here is what my task looks like. The key here is the Artifact Type, which I have set to Server to indicate the artifact is stored on the build server. The artifact name will represent the directory; remember it for the next step.

Ok, so it used to be that if you also wanted to deploy when you ran your builds, you would code that as part of the build steps. While that works, it is not a good strategy, since it makes build processes less reusable. Seeing this, Microsoft has now added a second section called Releases. Releases contain your release workflows, and they are independent of your Build workflows.

I am still exploring this environment, but it appears to be full featured and allows signoff for builds. For my example, I just want to deploy to my Dev environment whenever the build changes. All this takes is the Deploy Azure App Service task. Here is my configuration:

[Screenshot: Deploy Azure App Service task configuration]

Most of this is simply selectable, very easy. Once you authorize the task it can pull your subscriptions, and from those it can get your App Service instances. Really the hardest part was the Package or Folder bit, which worked great once I got the Publish step working. Here is an example of what the … button brings up if your Publish is working properly.

[Screenshot: Package or Folder selection dialog]

Very easy. Notice that the lowest folder matches the Artifact Name from the Publish step.

All in all, it was very easy to set this up. Excluding the time to add all of my unit tests, it probably took me two hours tops to create this process end to end while doing other things. Hats off to Microsoft; I can't wait to see what is coming next.

Indy.Code()

I had the chance this year to be a speaker at the inaugural Indy.Code() event in Indianapolis, Indiana. I was accepted to present two talks with a mobile focus: Intro to iOS (with Xamarin) and Intro to Android (with Xamarin). My goal with each was to focus less on Xamarin and more on the foundational concepts of the individual mobile platforms: iOS and Android.

For iOS I spent a lot of time focusing on the use of Xcode as the means to build the UI for iOS apps, and also on how Xamarin registers its types with iOS so they can be used. I tend to stay away from the Xamarin iOS Designer due to the performance differences between it and Xcode; I also do not like how certain design aspects of iOS are represented there (Auto Layout in particular).

Beyond this, I covered how Storyboards have changed the game when designing iOS applications and how they integrate with TableViews, Navigation, and overall design. I then dipped the audience's toes into the vast topic that is Auto Layout.

I will be presenting this topic again at CodeStock in May.

For Android, a topic with which I have much more experience relative to iOS, I had a much harder time breaking things down, due to the sheer volume of material and the changes that have taken place in the platform since its inception. I started off talking about Activities and how we can use attributes in Xamarin instead of registering them directly in the Manifest file.
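As a minimal sketch of what I mean (the activity label and layout resource are illustrative), the [Activity] attribute causes the corresponding manifest entry to be generated at build time:

using Android.App;
using Android.OS;

[Activity(Label = "Gift List", MainLauncher = true)]
public class MainActivity : Activity
{
    protected override void OnCreate(Bundle savedInstanceState)
    {
        base.OnCreate(savedInstanceState);

        // Hypothetical layout resource; Xamarin emits the matching
        // <activity> element into AndroidManifest.xml during the build,
        // so no manual manifest registration is needed.
        SetContentView(Resource.Layout.Main);
    }
}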

I spent much of the time helping the audience understand the various features Android offers to deal with the fragmentation of the platform, in particular the density classifications and relative pixel definitions that enable you to get 90% of the way there; I even had an example showing how different layouts can be defined for various traits of the platform: layout, density, and SDK level. Beyond that, I showed an example where we used the RecyclerView and Fragments to construct an Album information lookup app.

Both presentations were well received, and the event was very well run. I am hoping to submit again next year if they hold it. However, as I have previously noted, I am starting to move away from my mobile focus and return to backend development so I can leverage my Azure knowledge and AWS certifications moving forward.

Indy.Code() took place March 29 – 31. Apologies for the delay.

New Project: The Gift List

My family has many traditions around Christmas time, and with the anticipated expansion that my youngest brother's marriage and my own will bring, we are seeing more traditions emerge. One of these involves the increasing number of people we buy gifts for, which has caused the present system to become untenable.

By our tradition, each family member releases a Christmas list of sorts just before Thanksgiving, enabling other family members to purchase gifts for that person for Christmas. Traditionally we have used email, and it's a nightmare, as I, for example, have to buy for over 10 people. For someone like me, who is used to a lot of email, it's not a huge deal, but the rest of my family is not me, and it's not uncommon for people to get emails from others detailing their buying plans. So I thought, what can I do?

The idea has been germinating for quite a few years and, I realize, there is probably something like this already out there, but I wanted to build something, if only to keep my web backend skills sharp and play with new technologies. The idea is The Gift List.

The essence of the Gift List is simple. As a user you create a list and add items to it. You can invite others to view the list and they can indicate their buying plans for individual items or add items they are buying to prevent duplication. Now, you may be thinking, “well if I can see what others are getting, what is the point?” And that is where the kicker comes in.

You see, when you create a list as the Owner, you can see the items that you added, but never their status. When you join the list as an associated user, you can see and do everything. In this way, everyone but you can see what is being bought. Now, you could still be a dick and invite yourself to your own list to see everything, but there really is no way to prevent that, and the positives of getting this out of email, in my mind, outweigh that drawback.
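A minimal sketch of that visibility rule (all the types and names here are hypothetical, not the actual GiftList code):

using System;

public class ListItemView
{
    public string Name { get; set; }
    public string BuyingStatus { get; set; }
}

public static class ListItemVisibility
{
    // Maps an item to what the requesting user is allowed to see.
    public static ListItemView ToView(string name, string status, Guid requestingUserId, Guid ownerId)
    {
        return new ListItemView
        {
            Name = name,
            // The list owner never sees an item's buying status;
            // invited users see everything.
            BuyingStatus = requestingUserId == ownerId ? null : status
        };
    }
}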

Currently, I am just about done with the API, working through the invite process. I hope to deploy some time before February and start working on a simple mobile app that can be used with it. If all goes well, we should be able to use it for my father's birthday in April as a sort of dry run. My other hope is that I can convince my middle brother Sean Farrell (www.brandclay.com) to get engaged as a designer and maybe, sometime next year, we'll make this a polished app with Facebook integration. For right now, though, it's just about getting something out the door.

Codemash and Xamarin.Forms

Codemash is one of the largest tech conferences in North America and an event I had always wanted to speak at. I was finally successful, as they accepted my Intro to Xamarin.Forms talk, which I gave on Thursday (1/12). It went very well and generated a lot of interest. I thought it would be good to summarize the finer points here:

The mobile market has a lot of interesting characteristics that make it different from past platform races, such as the browser wars or the OS wars. While Android holds an 80% (or higher, depending on who you ask) marketshare, iOS controls roughly 60% of the profits. So it's a situation where there is not one clear favorite, and you end up needing to support both.

Xamarin is good for this because its code-sharing model makes it easy to share business logic between the two platforms in a unified language. Code sharing is something many in the community are working on, but, in my view, Xamarin has the best option at the present time. However, Xamarin is not a silver bullet and does suffer from several issues:

  • Because the bindings are written using Mono, and Mono sits on top of iOS/Dalvik, we have to recognize that we have duplicate memory management systems at work. To be clear, objects are not duplicated; rather, the disposal of objects held in iOS (using ARC – Automatic Reference Counting) is initiated by the generational garbage collector in Mono. Synchronization is therefore key, and developers need to be aware of this so as to avoid strong reference cycles (see the sketch after this list).
  • While Xamarin does offer the ability to bind Objective-C (no Swift yet) and Java libraries so they can be used from Xamarin, this is an extra step. And we have seen that such bindings can be cumbersome and difficult to maintain. That being said, when using Xamarin developers get access to the wealth of packages available via NuGet, and with Microsoft progressively moving toward the idea of a standard for .NET we will likely see more projects become cross-platform compliant in the future.
  • In any application the UI always seems to take the most work; it was true with desktop apps, it is true with web apps, and it is especially true with mobile apps. Even if you use Xamarin, you still have to create a unique UI for each platform.
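On the first bullet, here is a minimal sketch of the kind of discipline required in Xamarin.iOS (the controller and button are illustrative): subscribing an instance method to a native control's event creates a reference back to the managed controller, so we break it explicitly.

using System;
using UIKit;

public class DetailViewController : UIViewController
{
    private UIButton _saveButton;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();
        _saveButton = new UIButton(UIButtonType.System);
        // The subscription references this controller from the native button.
        _saveButton.TouchUpInside += OnSaveTapped;
        View.AddSubview(_saveButton);
    }

    public override void ViewDidDisappear(bool animated)
    {
        base.ViewDidDisappear(animated);
        // Unsubscribe so neither ARC nor the Mono GC is left holding
        // the other side of the cycle.
        _saveButton.TouchUpInside -= OnSaveTapped;
    }

    private void OnSaveTapped(object sender, EventArgs e) { /* ... */ }
}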

With respect to the last bullet, this is the reason Xamarin created the Forms framework. It enables us to write a single UI definition which the framework then renders appropriately on each platform, giving us a nice native look. This is where it differs from tools such as Ionic and Cordova/PhoneGap, which employ a one-UI-for-all approach. Effectively, using Forms enables the creation of multiple UIs at the same time.
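As a minimal sketch of what a single shared definition looks like (the page and its contents are illustrative):

using Xamarin.Forms;

public class GreetingPage : ContentPage
{
    public GreetingPage()
    {
        // One definition: Forms renders Label and Button as UILabel and
        // UIButton on iOS, and as the native TextView and Button widgets
        // on Android.
        Content = new StackLayout
        {
            Padding = 20,
            Children =
            {
                new Label { Text = "Hello from a shared UI" },
                new Button { Text = "Tap me" }
            }
        };
    }
}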

It is important to understand that Forms is not intended to replace traditional mobile development and should generally be used for apps that can be generalized: POCs, enterprise data apps, and data entry apps, for example. If your app will require a lot of UI customization, it might be better to adopt traditional Xamarin instead of Forms. My rule of thumb tends to be:

If you are spending most of your time in traditional Xamarin, you should be using traditional Xamarin

What this means is that we often utilize things like Custom Renderers to take over UI customizations within Forms. These renderers are written within our platform-specific projects, because they are invoked when the framework needs to know how to render a control on the current platform. If you find yourself writing these a lot, I advise that you weigh whether you are getting true benefit from Forms.
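Here is a minimal sketch of an iOS custom renderer; the RoundedEntry control and the styling are hypothetical, and in a real app the renderer would live in the iOS platform project:

using Xamarin.Forms;
using Xamarin.Forms.Platform.iOS;
using UIKit;

[assembly: ExportRenderer(typeof(RoundedEntry), typeof(RoundedEntryRenderer))]

// In a real app RoundedEntry lives in the shared project; it is defined
// here only to keep the sketch self-contained.
public class RoundedEntry : Entry
{
}

public class RoundedEntryRenderer : EntryRenderer
{
    protected override void OnElementChanged(ElementChangedEventArgs<Entry> e)
    {
        base.OnElementChanged(e);

        if (Control != null)
        {
            // Take over native styling that Forms does not expose directly.
            Control.Layer.CornerRadius = 8;
            Control.BorderStyle = UITextBorderStyle.None;
        }
    }
}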

At the end of the day, Forms is a tool, and it is incumbent on us as developers to determine the appropriate utility of that tool. But we are seeing people use the tool in interesting ways, one of the most impressive of which is the Hybrid App.

Because Forms enables us to quickly create a page for all supported platforms, some have started to use it for only certain pages in their application, such as a settings screen: typically screens that may be complicated due to the volume of content on the page but do not feature customized UI, so building separate interfaces for each platform would not yield a positive gain.

I plan to give the full talk again at the .NET Mobile User Group in Chicago in the near future. If you are interested, please don't hesitate to ask me about it or Forms in general.

Injecting all the WebAPI Things

Dependency Injection is all the rage and has become a must for just about any application, particularly if you are using Entity Framework, as it allows for request-level DbContext instance management. Let me explain.

Think of the DbContext as a pipe into the database. I often see code like this in applications I review for West Monroe:

using (var context = new SomeContext())
{
    // some data access code here
}

This is bad because each usage creates another pipe into your database. And if your code is complex, it may contain multiple service calls that each open a context. This can quickly overwhelm the database, as it can drain the connection pool and create performance problems.

The reason Dependency Injection helps with this is that it allows us to inject instances at varying levels of scope, and in the case of the web the most ideal scope level is the Request.
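As a minimal sketch of the alternative (the service, the Items set, and the Item type are hypothetical names), the service receives the request-scoped context instead of newing one up:

using System;
using System.Linq;

// Hypothetical service; IContext and Item would come from the Models project.
public class ItemService
{
    private readonly IContext _context;

    // One context per request is injected; the service never opens
    // its own pipe into the database.
    public ItemService(IContext context)
    {
        _context = context;
    }

    public Item GetItem(Guid id)
    {
        return _context.Items.FirstOrDefault(i => i.Id == id);
    }
}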

The present gold standard for DI (Dependency Injection) in this space is the Autofac library. It is easily downloadable from NuGet and integrates with almost everything, including Web API; it even comes with prebuilt classes to perform the injection.

You will need the NuGet package: Autofac.WebApi2

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

// Register the controllers and each project's module with the container
var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());
builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

// Hand the built container to Web API's dependency resolver
var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

This code lives in Application_Start in my Global.asax. A few things to point out:

  1. The RegisterApiControllers call scans the executing assembly (our API project) and performs convention-based registration (basically it maps anything ending with Controller).
  2. Web API operates off the DependencyResolver pattern. Rather than overloading the Controller factory (what we do in MVC), this enables us to more properly contain our type resolution logic. In this case AutofacWebApiDependencyResolver is provided by the Autofac.Integration.WebApi namespace.
  3. My general approach to DI is to use an Autofac Module for each project. These modules specify the injection mappings for types defined in those assemblies (see the sketch below). It just keeps things separated in a very nice way.
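As a minimal sketch of such a per-project module (the service types shown are hypothetical):

using Autofac;

public class ServiceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        // Map each interface to its implementation from this assembly.
        builder.RegisterType<UserService>().As<IUserService>();
        builder.RegisterType<AuthorizationService>().As<IAuthorizationService>();
    }
}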

Once you have this in place, you can receive injections through the constructor of each controller. Here is an example of my UserController constructor.

public UserController(IUserService userService, PasswordService passwordService, IAuthorizationService authorizationService)
{
    UserService = userService;
    PasswordService = passwordService;
    AuthorizationService = authorizationService;
}

In the past I would normally use Property Injection to fulfill my dependency properties. Recently, however, I have begun to shift to using the constructor, as it makes unit testing easier.

This code should just work with your dependency mappings and the code above in the Global.asax.

One of the points of emphasis I have made here is the fact that DI can help with scoping our DbContext to the request level. I do this in my ModelsModule since the context is defined in my Models project and not the web project. Here is the code:

public class ModelsModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        base.Load(builder);

        builder.RegisterType<GiftListContext>()
            .As<IContext>()
            .InstancePerRequest();
    }
}

That is it. The key here is the built-in method InstancePerRequest. I also chose to use an IContext interface to define the public pieces of my DbContext, but you can omit this and it will still work. I cannot express how nice it is to have this type of flexibility and to let a reliable third-party mechanism control this part of the process.
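For reference, here is a minimal sketch of what such an IContext abstraction might expose; the sets shown are hypothetical:

using System.Data.Entity;

public interface IContext
{
    IDbSet<User> Users { get; set; }
    IDbSet<Item> Items { get; set; }

    int SaveChanges();
}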

The final bit I want to talk about is authorization. I generally tend to use ActionFilterAttribute types to authorize the user and establish a CurrentUser property, so that my calls can reliably run knowing what user is making the request. I won't go into how to set this up, as it varies project to project.

Here is an updated version of the Global.asax to support injecting dependencies into filters.

GlobalConfiguration.Configure(WebApiConfig.Register);
var config = GlobalConfiguration.Configuration;

var builder = new ContainerBuilder();
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

// Allow Autofac to participate in Web API filter resolution
builder.RegisterWebApiFilterProvider(config);
builder.RegisterType<UserAuthenticateFilter>().PropertiesAutowired();

builder.RegisterModule(new ModelsModule());
builder.RegisterModule(new ServiceModule());

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

One critical note: Web API (due to the way it is designed) creates filters as singletons. So ANY dependency that is not in singleton scope will cause a scope resolution failure. The workaround is to utilize a ServiceLocator pattern to resolve the type. Here is an example:

var authorizationService = (IAuthorizationService)message.GetDependencyScope().GetService(typeof(IAuthorizationService));
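To put that line in context, here is a minimal sketch of the filter; the filter and service names come from the registration above, but the surrounding body is illustrative:

using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class UserAuthenticateFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext actionContext)
    {
        var message = actionContext.Request;

        // Resolve per-request dependencies from the current scope rather
        // than having them injected into the singleton filter instance.
        var authorizationService = (IAuthorizationService)message
            .GetDependencyScope()
            .GetService(typeof(IAuthorizationService));

        // ... authorize the user and establish CurrentUser ...
    }
}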

This is the big advantage of the DependencyResolver approach that Web API uses: it allows us to resolve types without having to create something custom.

Hopefully this clears some of this up re: Dependency Injection.

Exploring Azure with Code First

For my upcoming talk at Codemash I decided to create a brand new API to use as the source data for demos. The API features data from Starcraft 2, mostly around unit information for now.

Initially, I wanted to use .NET Core with Entity Framework Code First for this, but after a few hours it was clear that there is still not enough maturity in this space to let me organize the solution the way I want (a separate library project for all EF models and the context). That being the case, I decided to work with .NET Framework 4.6.1.

I was able to easily set up my Context and models using Code First. I enabled migrations and built the database up. I considered this part difficult, as StarCraft features a lot of interesting relationships between units for each of the three races. In the end I was able to set up the database the way I wanted. In addition to annotated relationships in each of the models, I had to include some context-based customizations; below is my Context class:

public class StarcraftContext : DbContext, IContext
{
    public IDbSet<Unit> Units { get; set; }
    public IDbSet<UnitDependency> UnitDependencies { get; set; }
    public IDbSet<UnitProduction> UnitProductions { get; set; }

    public StarcraftContext() : base("StarcraftConnectionString")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // UnitProduction shares its primary key with Unit (one-to-one)
        modelBuilder.Entity<UnitProduction>()
            .HasKey(x => x.UnitId);

        // Each Unit has a required production record; Unit is the
        // principal end of the one-to-one relationship
        modelBuilder.Entity<Unit>()
            .HasRequired(x => x.Production)
            .WithRequiredPrincipal(x => x.Unit)
            .WillCascadeOnDelete(false);

        // A unit is produced by another unit (e.g. a Barracks produces
        // Marines), giving a one-to-many back onto Unit
        modelBuilder.Entity<UnitProduction>()
            .HasRequired<Unit>(x => x.ProducedBy)
            .WithMany(x => x.UnitsProduced)
            .WillCascadeOnDelete(false);

        // Dependencies point at a target unit, again many-to-one
        modelBuilder.Entity<UnitDependency>()
            .HasRequired(x => x.TargetUnit)
            .WithMany(x => x.Dependencies)
            .WillCascadeOnDelete(false);
    }
}

Essentially, I wanted each unit to exist in a single table, with other tables indicating its dependencies and where it can be produced from. The model is still not complete, as I have not found the best way to represent the Tech Lab (Terran) requirement or the Archon (Protoss) production, but those will only be minor tweaks.

Once I had this created, along with API calls to return all units or units by faction, I was ready to deploy to Azure. That is where the real difficulty came in.

Azure App Service

Not to date myself too much, but my preference for things like this had always been the Mobile Service apps on Azure. Those have now been merged with Web Apps to create the App Service, and this was the first time I set up an instance fresh.

Deployment was made difficult by a sporadic error in Visual Studio telling me that a key already exists each time I tried to publish. I found the only consistent way to overcome this was to recreate the solution, which happened a few times given the struggles with getting the Code First Migrations to run when deployed to Azure (more on that in a bit). I now believe it had to do with me rewinding the git HEAD as I undid my changes.

Reading this tutorial (https://www.asp.net/mvc/overview/getting-started/getting-started-with-ef-using-mvc/migrations-and-deployment-with-the-entity-framework-in-an-asp-net-mvc-application) and a few others, the suggestion was to create the publish profile such that Execute Code First Migrations (runs on application start) would force the migrations to run. And indeed this is true; however, there was one thing missing, one thing that this and the other tutorials failed to mention.

When migrations are enabled, Visual Studio (or rather the EF tool run as part of the Enable-Migrations cmdlet) creates a class called Configuration, which configures the migrations. Its constructor contains a single line setting AutomaticMigrationsEnabled to false; here is my updated version:

public Configuration()
{
#if DEBUG
    AutomaticMigrationsEnabled = false;
#else
    AutomaticMigrationsEnabled = true;
#endif
}

Interestingly, and this is what is not mentioned or handled for you (even with the checkbox selected), this property must be set to true for the migrations to run on Azure. Since I am not sure what effect this would have when working locally, I chose to use a preprocessor directive to check the compile type. When I published with this change and hit the API, the database migrations ran and the database was set up as expected.

With this change I was able to deploy the starting point of my Starcraft API to Azure and can begin using the unit list in demos, as well as using it to build a site that allows me to learn Angular 2. More on that later. Cheers.