The Android Experiment

Mobile development is something I have always wanted to try, but I never owned a smartphone and so had no real drive to develop mobile applications. That changed recently when I purchased a Motorola Droid and immediately began playing with the Android OS and writing applications for various purposes, mainly conference management.  I have always hated going to a conference and receiving a piece of paper, or having to check a website, to know what sessions happened when.  So I set out to create an Android app to help me manage the sessions available at the CodeMash conference I attend annually.

Setting up the Environment
The first step for me is always to set up an IDE, and as with most non-Microsoft products and technologies, the IDE in use will be Eclipse. I am not a huge fan of Eclipse, mostly because I have been spoiled by Visual Studio, but it is a very capable development tool.  Aside from Eclipse you will also need the Android 2.0 SDK, which provides the emulator necessary for testing your app.  I found this link helpful in walking me through the setup process: http://developer.android.com/guide/developing/eclipse-adt.html

Understanding Android Applications
The first thing to understand about Android apps is that they are centered around an Activity.  An application can have many activities of differing types.  In general you will use a vanilla Activity, but other activity types, such as ListActivity, are also available. The primary method of the Activity class is onCreate, which you override to perform tasks associated with app startup.  When you create an Android project in Eclipse, you will see your onCreate method defined as such:

@Override
public void onCreate(Bundle savedInstanceState)
{
     super.onCreate(savedInstanceState);
     setContentView(R.layout.main);
}

The value being passed into setContentView is actually an integer that maps to an element of a special generated class called “R”, which contains references to the elements defined in the layout.  I will elaborate more on this in the next section.

Understanding the Layout

Layouts in Android can be defined in code if desired, but Android also supports defining the layout in XML.  By default, Eclipse will create a layout file called main in a directory called “layout” inside a special folder called “res”.  Now look above at what is being passed to setContentView.  A special class called R is maintained by Android which contains references to elements in the view.  This makes it easier to reference the items in the layout.
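For example, if the layout declares a TextView, you can look it up through R from within the Activity. This is a minimal sketch; the id `title` is a hypothetical example, not from the original layout:

```java
// Inside an Activity, after setContentView(R.layout.main) has run:
// R.id.title exists because the layout declared android:id="@+id/title"
TextView title = (TextView) findViewById(R.id.title);
title.setText("CodeMash Sessions");
```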

The next thing to understand is that everything shown to the user, or used to control the structure of the UI, is a “view”.  TextView, for example, is used to show text; Button is a view used to represent a button; ListView is a view that displays a list of items.  Views can have IDs, which are added to R and allow them to be easily referenced in code.  LinearLayout is also a view.  The important thing to remember when defining your layout in XML is that every element must define a layout_width and a layout_height.  If you fail to do this, your application will crash on startup when the content view is loaded.
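A minimal main.xml illustrating this rule might look like the following sketch; the id names are hypothetical, and note that every element declares both layout_width and layout_height:

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView
        android:id="@+id/title"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="CodeMash Sessions" />
    <ListView
        android:id="@+id/sessionList"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />
</LinearLayout>
```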

Understanding ListView Population

I want to talk about this because it caused me a significant amount of pain when I was learning it, though looking back I am confused as to why it did, but that is normal I suppose :).  The first thing to understand, if you are coming from the .NET world, is to throw what you know away, since Android prefers to make populating a list as complicated as possible.

First you need a ListView on the page, as you might expect.  Next, if your activity does not derive from ListActivity, you will need to create an adapter class.  The following is the definition of the adapter I use in my CodeMash app:

public class CustomListAdapter<T> extends BaseAdapter
{
     private final Activity _activity;
     private final List<T> _data;

     public CustomListAdapter(Activity activity, List<T> data, ListView listView)
     {
          _activity = activity;
          _data = data;
          listView.setAdapter(this);
     }

     @Override
     public int getCount()
     {
          return _data.size();
     }

     @Override
     public T getItem(int position)
     {
          return _data.get(position);
     }

     @Override
     public long getItemId(int position)
     {
          return position;
     }

     @Override
     public View getView(int index, View renderer, ViewGroup parent)
     {
          // the adapter itself is not a Context, so the stored Activity
          // reference is used to construct the row view
          TextView view = new TextView(_activity);
          view.setText(getItem(index).toString());
          return view;
     }
}

This example uses generics for the internal list that will be bound to the ListView, though that is not required.  There are a few things the class must do to work:

  1. Provide a constructor that takes a reference to the Activity that created it, the list you are going to bind to the ListView, and a reference to the ListView itself.
  2. Most importantly, store the references to the Activity and the data internally within the class, as they will be needed throughout.  You must also call the setAdapter method on the ListView reference, passing “this” as the parameter.

You will notice that inheriting from the abstract class BaseAdapter requires the implementation of several methods: getCount, getItem, getItemId, and getView.  Here is an explanation of each:

  • getCount()
    • This function is responsible for returning the size of the underlying data set
  • getItem(int position)
    • This function is responsible for returning an object from the underlying data set at a given position
  • getItemId(int position)
    • This function is responsible for returning an identifier for the object at a given position.  In most of my examples I just return the position that was passed in; it depends on whether I have a key to return.
  • getView(int index, View renderer, ViewGroup parent)
    • This is the most important function in the adapter; it lets you define the item row, that is, how each item in the data set is represented in the ListView.  Most often you will do this programmatically, though I have been working with XML inflation, which shows promise for removing UI-building code from the logic, and I think that helps forge good separation.
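As a sketch of that inflation approach, getView can inflate a row layout from XML instead of building views in code. This assumes the adapter stores its Activity in a field such as _activity, and the resource names session_row and sessionTitle are hypothetical:

```java
@Override
public View getView(int index, View convertView, ViewGroup parent)
{
     // reuse a recycled row when Android supplies one; inflate otherwise
     if (convertView == null)
     {
          LayoutInflater inflater = _activity.getLayoutInflater();
          convertView = inflater.inflate(R.layout.session_row, parent, false);
     }

     TextView title = (TextView) convertView.findViewById(R.id.sessionTitle);
     title.setText(getItem(index).toString());
     return convertView;
}
```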

So the basic idea with the adapter is to pass in the data, the activity, and the ListView you intend to use.  You then make a single call to setAdapter, and the adapter reads your data, creating a view to represent each row in the ListView via the getView method required by the BaseAdapter abstract class.
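Putting it together, the wiring from the Activity side might look like the following sketch, assuming the generic form of the adapter and a ListView with the hypothetical id sessionList; the session strings are placeholder data:

```java
@Override
public void onCreate(Bundle savedInstanceState)
{
     super.onCreate(savedInstanceState);
     setContentView(R.layout.main);

     // hypothetical data; the real app would load CodeMash sessions
     List<String> sessions = Arrays.asList(
          "Intro to Android", "ASP .NET MVC Deep Dive");

     ListView listView = (ListView) findViewById(R.id.sessionList);

     // the adapter stores the data and calls listView.setAdapter(this) itself
     new CustomListAdapter<String>(this, sessions, listView);
}
```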

Conclusion:

My general feeling toward developing on Android is a mixture of annoyance and sheer fun.  On the one hand it is nifty to develop applications for a mobile platform, and Android has a wide assortment of features to help you.  On the other hand, I have never been a fan of Java; it feels like a hack next to C#, and the absence of many of C#'s syntax features makes some development difficult.  Some common tasks seem more difficult than they should be, but that could be because, coming from Silverlight and XAML, I have a higher expectation for describing layouts through markup.

Overall, I do like developing in Java because of how close it is to C# and .NET, and once you get used to the way it handles things like events and generics, it becomes less about the language and more about the platform, which is the idea.


Separation of Responsibilities with MVC

Recently I have been working on a new pattern with ASP .NET MVC, based mainly on what I have learned from MVVM and other patterns.  That said, I don't know if I can really call it new, mainly because it is an amalgamation of patterns working together to keep the architecture clean.

Step 1: The Controller Architecture
As with any system, enforcement of DRY (Don't Repeat Yourself) is essential.  This can be achieved any number of ways, though the most common is to abstract what is repeated into a separate layer.  To that end we introduce the BaseController class.

public abstract class BaseController : Controller
{
     protected ServiceController CurrentServiceController
     {
          get; private set;
     }

     public void InitalizeServiceController()
     {
          CurrentServiceController =
               ServiceController.CreateNewServiceController();
     }
     
     public void DestroyServiceController()
     {
          CurrentServiceController.Dispose();
     }
}

Some things to note here: this class inherits from Controller, which is provided by Microsoft and is the standard class from which all ASP .NET MVC controllers derive.  The two public methods we will get to later.

Step 2: Creating the Controller

ASP .NET MVC has many points of extensibility, and we will take advantage of two of them.  The first is how we “build” a controller.  Whenever a request is made, MVC must determine which controller to instantiate based on the name found in the request (http://www.yourmvcwebsite.com/Series/Edit/1).  Once it has this, it will attempt to call the action.  The reason we need to hook into this process is our data layer.  This is a very simple instance of the Dependency Injection pattern, though we are choosing not to use a DI framework such as Ninject or StructureMap because our needs are simple.  The goal is to set the context reference at creation and then dispose of it at release.  We can achieve this using what is already available, simply leveraging basic inheritance.

public class AnimeControllerFactory : DefaultControllerFactory
{
     public override IController CreateController(
          RequestContext requestContext,
          string controllerName
     )
     {
          IController controller = base.
               CreateController(requestContext, controllerName);

          // cast once with "as" instead of testing and casting separately
          BaseController baseController = controller as BaseController;
          if (baseController != null)
          {
               baseController.InitalizeServiceController();
          }

          return controller;
     }

     public override void ReleaseController(IController controller)
     {
          BaseController baseController = controller as BaseController;
          if (baseController != null)
          {
               baseController.DestroyServiceController();
          }

          base.ReleaseController(controller);
     }
}

We could have used Reflection here to determine the type name and then used Activator to get an instance, but why do that when MVC already does it for us?  Thus we call the overridden method, which returns an instance implementing IController; from there we can do some simple casting to get to BaseController.  We then call our methods, which take care of the initialization process while the factory remains unaware of what is actually happening.  This is key: if we were simply to call new() we would be coupling the two projects together, and we don't want that in our ControllerFactory.
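One step not shown above is telling MVC to use this factory. As a sketch of how that wiring might look (this registration line is my assumption, using the standard ControllerBuilder API rather than anything stated in the post):

```csharp
protected void Application_Start()
{
     RegisterRoutes(RouteTable.Routes);

     // swap in the custom factory so the Initalize/Destroy hooks run
     ControllerBuilder.Current.SetControllerFactory(
          new AnimeControllerFactory());
}
```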

Step 3: The Data Access Layer

One of my goals with this architecture was to totally separate generated entities from DTOs.  To that end we create a “service” layer which performs the translation of a DTO to and from a model.  (Note: I tend to call my generated classes models and my DTOs entities.)  The following is an excerpt from the ServiceController class, which acts as a “service store” for all services in the application and provides access to them for all controllers via BaseController.

public class ServiceController : IDisposable
{
     private ServiceController()
     {
          CurrentEntityContext = new AnimeEntityContext();
     }
     
     private AnimeEntityContext CurrentEntityContext { get; set; }
     public static ServiceController CreateNewServiceController()
     {
          return new ServiceController();
     }

     #region Service References
     private SeriesService _seriesService;
     public SeriesService SeriesService
     {
          get
          {
               if (_seriesService == null)
                    _seriesService = ServiceFactory.
                         GetServiceReference<SeriesService>(CurrentEntityContext);

               return _seriesService;
          }
     }
     #endregion

     public void Dispose()
     {
          CurrentEntityContext.Dispose();
     }
}


As you can see, we create a private property in this class for the entity context (we are using Entity Framework, for the curious).  To reduce the repetitive and monotonous nature of instantiating each service class, I created a base class, ServiceBase, and then a factory class (ServiceFactory) which generically takes care of setting the underlying context for those services.

public static class ServiceFactory
{
     public static T GetServiceReference<T>(AnimeEntityContext context)
          where T : ServiceBase, new()
     {
          T returnObject = new T();
          returnObject.SetEntityContext(context);

          return returnObject;
     }
}


Step 4: The Model Binder

For the unaware, ASP .NET MVC permits a special type of parameter passing which analyzes a form collection and assigns properties by name to an object of a given type. This instance is then passed to the action.  A previous post discussed how to do this; please visit http://www.jfarrell.net/2009/10/experiments-with-asp-net-mvc-model.html

Step 5: Validation of Models

One of the most monotonous tasks of any web application is validation.  As developers we can't always understand why users do what they do, but we do know we have to guard against it.  How many times have you made several pages that touch the same object, forcing you to violate DRY, and paid for it later when something changed and you forgot one of the spots?

One of the newer ways of enforcing validation is by annotating the objects, that is, using attributes to describe what kind of values a property can hold.  You will find these in the System.ComponentModel.DataAnnotations namespace (look for v3.5) as part of MVC 2.  The following is an example of decorating a class to enforce validation.

public class SeriesEntity : IEntity
{
     // simple data references
     public int SeriesId { get; set; }

     [Required(ErrorMessage = "Series name is required")]
     public string Name { get; set; }
        
     public bool IsActive { get; set; }

     // complex data references
     public IList Seasons { get; set; }
     
     [CollectionLength(1, ErrorMessage = "At least one Genre must be selected")]
     public IList Genres { get; set; }
     public StudioEntity Studio { get; set; }
}
Note: CollectionLength is a non-standard validation attribute

Validation takes place in the model binder, so we return to the custom model binder from the referenced blog post. The OnPropertyValidating method is called for each property the binder is asked to bind; we need the property's value first, and the previous post explains how to accomplish that.

protected override bool OnPropertyValidating(
     ControllerContext controllerContext,
     ModelBindingContext bindingContext,
     PropertyDescriptor propertyDescriptor,
     object value)
{
     var validationAttributes = bindingContext.Model.GetType().
          GetProperty(propertyDescriptor.Name).GetCustomAttributes(false).
          OfType<ValidationAttribute>();
     foreach (var validationAttribute in validationAttributes)
     {
          bool result = validationAttribute.IsValid(bindingContext.Model.
               GetType().GetProperty(propertyDescriptor.Name).
               GetValue(bindingContext.Model, null));

          if (!result)
          {
               bindingContext.ModelState.AddModelError(
                    propertyDescriptor.Name, validationAttribute.ErrorMessage);
               return false;
          }
     }
     return true;
}

Something I noticed working with the standard binder is that it tends to ignore properties containing complex types.  That is, if you had an array of type Genre, so Genre[], it would not bind because of Genre.  If you instead made it int[] and gave it an array of GenreId values, it would work just fine.  But I wanted to take things one step further and have my entity come back mostly complete, without extra properties existing just to support the binder.  Further, I wanted to be able to validate such properties, hence my custom validation attribute CollectionLength.


public class CollectionLengthAttribute : ValidationAttribute
{
     private int MinLength = 0;
     private int MaxLength = int.MaxValue;

     public CollectionLengthAttribute(int minlength)
     {
          MinLength = minlength;
          MaxLength = int.MaxValue;
     }

     public CollectionLengthAttribute(int minlength, int maxlength)
     {
          MinLength = minlength;
          MaxLength = maxlength;
     }

     public override bool IsValid(object value)
     {
          if (value != null)
          {
               var collection = value as IList;
               if (collection != null)
               {
                    // inclusive minimum, so CollectionLength(1) means
                    // "at least one item"
                    return collection.Count >= MinLength
                         && collection.Count <= MaxLength;
               }
          }
          return false;
     }
}

Conclusion:

The idea behind this pattern is to separate concerns in a thread-safe way, to unify the context being used, and to remain highly testable.  In addition, we want to reduce dependencies and the use of reflection; reflection is an expensive operation that we don't want to perform more than we have to.  By using DefaultControllerFactory we are able to rely on what already exists.  This alone reduces the number of foreign dependencies (interfaces, base classes) we need to get our code to work.  We have two assets used for this purpose: IEntity and BaseController.

The next piece is how to structure the controllers themselves and how to get the data from the DTOs to the views in a structured way, such that type safety is preserved and we avoid duplication as much as possible.  That post will follow early next week.

Reflecting on my first tour of duty in New York

So as I lie here in a hotel in Queens, New York, I have had a chance to reflect a little on the experience I have gained over the last few months working for the client out on Long Island.  There were many high points and a few low points, mistakes made and successes gained.  All in all, I really couldn't have asked for things to have gone better.  I wanted to take some time and share with you some of the things I learned and experienced.

1) The virtues of planning
As programmers we often hear stories about staying out late or working long hours to finish a project.  I can recall many people in college pulling all-nighters to finish projects, and the constant stream of rumors from the West Coast that programmers out there often sleep at the office.  I have never been a fan of this, and I usually feel that when you find yourself in such a situation it is most often because you did not plan accordingly.

With this project, I made every attempt to plan. I planned what I was going to do, and what I was going to concentrate on and think about for the future. I thought about what I would train my colleagues on for a given day. In the end it really paid off; I worked very few long hours and accomplished a great deal.  The application was built in accordance with known practices.  In fact, a colleague was sent to a Microsoft technology seminar where practices for developing an application were discussed. Upon his return he shared with us what he learned, and essentially described the application's architecture as I had intended it.  This was all thanks to proper planning, as well as the invaluable lessons I learned from working with a host of people at RCM.

2) Communication is vital and can be redundant
I have always had a hard time remembering things, so I write stuff down.  But when I write things down I become too detailed, and I miss what is being said while I am writing.  Thus I am a person who likes to understand small, detailed chunks while maintaining a high-level conceptual understanding of what I am doing.  I am also not someone who can understand everything via reading; I am a very kinesthetic learner.  So sometimes I had to ask questions repeatedly to make sure I understood things, especially when it came to the sometimes complex business rules of my client.

To help with this I built a great working relationship with all of the members of the team by being myself.  By the end, I felt like a member of the company, not a consultant, and I was able to speak candidly around them while always respecting who was in charge.  There were times I had to take charge of a situation and explain on the fly what could and could not be done.  I remember once having to explain that adding values to a table was great, but unless the application knew what the values meant you could not add functionality this way, at least with the architecture that was put into place.

I credit this great level of communication for the high satisfaction the client has with what was produced and what they learned from the experience.  At the start of the project a couple of people knew some .NET; others had only ever done AS400 programming. By the end, they were able to speak about various abstract concepts they had come across in their own research or that I had spoken about during training. They were able to understand why I had made the decisions I had, and how to help each other continue with the design concepts already in place.

3) Don't accept tools and frameworks blindly
If you read and follow this blog you know that we decided to use the Coolite UI framework for this project.  Coolite is a set of controls built around the ExtJS framework, designed to help developers quickly create rich and visually appealing websites.  With the lack of a designer on the team this framework was very appealing, and it was chosen as the backbone for the application; this was all before I arrived on the scene.

Initially, people were ready to go with MVC and Linq2Sql, so we started down that path. But I quickly noticed that this application was going to be complicated and would require an advanced architecture to support the requirements.  I have no doubt that such an architecture is possible in MVC, and I really would have loved to do this application in MVC.  However, given that I have far more experience with Webforms, and that Coolite worked better with the event model offered by Webforms, I convinced the client to switch to Webforms.  I also tried to convince them to move away from Linq2Sql in favor of the Kinetic Framework, which we have used at EIS for numerous large web projects.

In the end, because the application was designed so modularly, it really could support a change in ORM without major rework. But the reason given to me was that since the client wanted to use SQL stored procedures for the operations, Linq2Sql seemed the best choice to them, so I lost that argument. By the end, however, I spoke with the client again about this decision, and he agreed that given the chance he would have revised it and used Linq in place of the stored procedures.

All in all the experience was exceptionally positive for both myself and the client. The application that was created was visually appealing and rich in functionality, and through all the QA sessions very few bugs were found, none of them critical.  Of course, as the application moves to the official QA team I expect that problems will be found, but with the excellent communication regarding project requirements I feel these problems will not be serious.  In addition, I was able to impress upon those I was training the value not just of .NET and its languages, but also of the ability to think abstractly enough to work in an object-oriented environment.  I look forward to my return in two weeks to assist the client with a second .NET project while approval for phase two of the current project is decided.

Experiments with ASP .NET MVC Model Binding

I am really becoming a fan of the ASP .NET MVC framework; it has a lot of flexibility and lends itself well to the design patterns I currently use.  I also like the emerging MVVM pattern that has become widely used among RIA apps.  It functions well with MVC and Silverlight because it gives us the flexibility to use ORMs like Entity Framework and Linq2Sql without having to battle the lost-context problem.

I decided this weekend to sit down with MVC and play around with a feature that I really dig: model binding.  For the uninitiated, model binding in MVC offers developers a way to specify an entity as the argument to an action and, based on data sent over in the request, have the object created for you.  The out-of-the-box binder works great and will cover the vast majority of cases.  I decided to take this one step further: I wanted to be able to fully bind my proxy classes (essentially the ViewModels) without throw-away properties used just for this purpose, and without creating an inheritance hierarchy to allow subclassed proxy objects to be used for the binding, which would offer essentially the same thing as the first option.  I wanted this to all happen inside the model binder, so it would be totally transparent.

My result was pretty good, but not perfect.  It relies on a dependency and is therefore not 100% transparent and standalone.  This is because of what we are doing: we need an abstract way to set the property we need.  This could be done via an enforced rule that constructors for proxy entities must follow, or by having your entities inherit from a common interface to guarantee that you have such a setter.  I chose to go with the latter and defined IEntity as such:

public interface IEntity
{
     int ID { get; set; }
     string Name { get; set; }
}

For our example, the Name setter will not be used, but it is there because it really makes sense that an Entity should provide a name for its object.

So, very briefly, the view model pattern relies on the creation of entity classes which are translated from generated entities.  This ensures that extraneous information is not passed to the client, which can happen frequently when you use generated code in an all-or-nothing scenario.  Furthermore, you can reuse these objects within each other and thus gain greater code reuse, which usually translates to greater maintainability.  In this example we are going to focus on my SeriesEntity class, defined as such:

public class SeriesEntity : IEntity
{
     // simple data references
     public int SeriesId { get; set; }
     public string Name { get; set; }
     public bool IsActive { get; set; }

     // complex data references
     public IList Seasons { get; set; }
     public IList Genres { get; set; }
     public StudioEntity Studio { get; set; }
     
     #region IEntity Members
     public int ID
     {
          get
          {
               return SeriesId;
          }
          set
          {
               SeriesId = value;
          }
     }
     #endregion
}

The use of IList here is crucial to the solution I am going to propose, though you could make this any collection type, with the exception of an array.

One of the really neat things about MVC is that in addition to allowing you to override the default binder, you can also specify a binder for a specific type; all you need to do is define the binders dictionary in Global.asax.

protected void Application_Start()
{
     RegisterRoutes(RouteTable.Routes);
     RegisterModelBinders(ModelBinders.Binders);
}

private void RegisterModelBinders(
     ModelBinderDictionary binderDictionary)
{
     binderDictionary.DefaultBinder =
          new Binders.AnimeModelBinder();
     binderDictionary.Add(
          typeof(SeriesEntity), new SeriesModelBinder());
}

I am showing this as an example of what you can do with binding at a type level.  We will be using the DefaultModelBinder base class, so let's start with that class.  First, it must inherit from DefaultModelBinder.  This class provides a variety of virtual methods you can override; the first one we will look at is BindProperty.  This is my implementation:

protected override void BindProperty(
     ControllerContext controllerContext,
     ModelBindingContext bindingContext,
     PropertyDescriptor propertyDescriptor)
{
     // need name of the property we are trying to bind
     var propertyName = propertyDescriptor.Name;
 
     // find the property on the object we are binding
     var property = bindingContext.Model.GetType()
          .GetProperty(propertyName);
            
     if (property != null)
          property.SetValue(bindingContext.Model,
          GetObjectValueFromProperty(property,
               bindingContext.ValueProvider[propertyName]),
               null);
}

The BindProperty method is called each time a new property is discovered that could potentially be bound.  You can control this by specifying Exclude when defining the parameter list for your action, for example:


public ActionResult Create([Bind(Exclude = "SeriesId,Seasons")]SeriesEntity series)

Notice the use of Exclude here, which ensures the binder will never be asked to bind SeriesId or Seasons.  While our binder can handle those properties being absent, excluding them lessens its workload.

So let's walk through the binder.  First we call the GetObjectValueFromProperty function, which essentially determines whether we need to do anything special with the values; if we don't, we simply call Convert.ChangeType:

return Convert.ChangeType(result.AttemptedValue, property.PropertyType);

I love this method, as it calls the underlying parsing logic to convert the value from (in this case) a string to whatever I desire.  It will fail, of course, if the value is not as expected, but that would happen with the normal binder as well.  So at this point we can handle simple one-to-one binding for value types.  Let's add a means to work with complex objects.
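As a quick standalone illustration (a separate sketch, not part of the binder itself), here is Convert.ChangeType dispatching to the underlying parse logic:

```csharp
using System;

class ChangeTypeDemo
{
    static void Main()
    {
        // ChangeType routes to the target type's parsing logic,
        // so a form-posted string becomes a typed value.
        int id = (int)Convert.ChangeType("42", typeof(int));
        bool flag = (bool)Convert.ChangeType("true", typeof(bool));

        Console.WriteLine(id);   // 42
        Console.WriteLine(flag); // True
    }
}
```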

At this point we need to introduce a dependency: a way to construct our proxy entities.  Since these are generally custom to the application, this is where IEntity comes into play.  Thus we create a method that generates an instance of a type and, through IEntity, sets its ID:

private object GetInstanceWithId(int entityId, Type type)
{
     // Invoke the parameterless constructor, then set the ID
     // through the IEntity abstraction.
     var constructor = type.GetConstructor(Type.EmptyTypes);
     var value = constructor.Invoke(null);

     ((IEntity) value).ID = entityId;
     return value;
}

Looking at this code there is one flaw and one major area for improvement.  The flaw is that the cast is unsafe: we have no idea whether the type passed in can actually be converted to IEntity.  The improvement is that we are hardcoding IEntity rather than letting it be provided.  Neither is that big of a deal, since most developers working with this binder would never see this code, but it is a hard dependency.  The best way to guard against the unsafe cast is to validate, before calling the function, that the property type CAN be cast to IEntity, for example:

private bool CanConvertToType(Type type, Type conversionType)
{
     var interfaces = type.GetInterfaces();
     return interfaces.Length > 0 &&
          interfaces.Select(t => t.Name)
               .Contains(conversionType.Name);
}

Simply pass the property's type and a reference to the type we want to check the cast against, and this method will tell you whether the cast is safe, though it only works for interfaces.
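To make the behavior concrete, here is a minimal, self-contained sketch (the IEntity and SeriesEntity shapes are assumptions based on this article, not the actual classes):

```csharp
using System;
using System.Linq;

interface IEntity { int ID { get; set; } }

// Hypothetical entity implementing the abstraction.
class SeriesEntity : IEntity { public int ID { get; set; } }

class Demo
{
    // Same logic as the binder's helper: true if the type
    // implements an interface with the given name.
    public static bool CanConvertToType(Type type, Type conversionType)
    {
        var interfaces = type.GetInterfaces();
        return interfaces.Length > 0 &&
               interfaces.Select(t => t.Name).Contains(conversionType.Name);
    }

    static void Main()
    {
        Console.WriteLine(CanConvertToType(typeof(SeriesEntity), typeof(IEntity))); // True
        Console.WriteLine(CanConvertToType(typeof(int), typeof(IEntity)));          // False
    }
}
```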

The final piece of this is binding a collection of values.  One of the things we can do with HTML is send a comma-delimited list of values from the view by naming multiple controls with the same name.  This forms the basis for our collection binding.  The reason we use IList for our entities is so we can perform the binding; as I came to find out, creating generic arrays is something .NET does not allow.  So the first step is to determine whether the property we are looking at IS in fact a collection; we can reuse our CanConvertToType function and pass typeof(ICollection) as the conversionType parameter.

The next step is determining the actual type held by the collection, so we can decide whether we can create objects to store in it.  A simple call gets this information:

Type elementType = property.PropertyType.GetGenericArguments()[0];

Because this is a generic, there will always be at least one argument, and because it's an IList we know the first argument is the contained type.  Once we confirm this type can be used with IEntity, we can break apart the incoming value and create the IList reference we are going to return.  These two lines handle those tasks:

var valueArray = result.AttemptedValue.Split(',');
// typeof(List<>) is the open generic type; MakeGenericType closes
// it over the element type discovered above.
var valueList = (IList)Activator.CreateInstance(
     typeof(List<>).MakeGenericType(elementType));

This code is fairly self-explanatory, but to reiterate: create a string array from the values coming in from the form submit, then use the Activator class to create an instance of List<T> that we cast to an IList.

Finally, we iterate over the string array, and call our GetInstanceWithId function to generate an instance of the underlying type with the ID property set via the abstraction provided by IEntity.
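Putting those pieces together, the collection branch might look like the following self-contained sketch (the Genre class and the method boundaries are illustrative assumptions, not the article's exact code):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

interface IEntity { int ID { get; set; } }

// Hypothetical contained type for the IList<T> property.
class Genre : IEntity { public int ID { get; set; } }

class BinderSketch
{
    // Same idea as the article's helper: a proxy with only its ID set.
    static object GetInstanceWithId(int entityId, Type type)
    {
        var value = Activator.CreateInstance(type);
        ((IEntity)value).ID = entityId;
        return value;
    }

    // "1,2,3" from the form becomes a List<Genre> of ID-only proxies.
    public static IList BindCollection(string attemptedValue, Type propertyType)
    {
        Type elementType = propertyType.GetGenericArguments()[0];
        var valueList = (IList)Activator.CreateInstance(
            typeof(List<>).MakeGenericType(elementType));

        foreach (var rawId in attemptedValue.Split(','))
            valueList.Add(GetInstanceWithId(int.Parse(rawId), elementType));

        return valueList;
    }

    static void Main()
    {
        var genres = BindCollection("1,2,3", typeof(IList<Genre>));
        Console.WriteLine(genres.Count);          // 3
        Console.WriteLine(((Genre)genres[1]).ID); // 2
    }
}
```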

So there you have it: with some very clean reflection we can build an entire proxy entity via a custom model binder, which sets us up for validation and then actions within the controller method.  This also centralizes how we set the values for our objects, helping with maintainability.  Finally, if the case arises that a totally different binder is needed for a certain type, MVC does allow type-level model binder configuration, which makes MVC use a particular binder for a particular type.  I wonder if you can specify an interface type and thereby declare a model binder for a whole set of entities.

I hope to add validation to this example for the next entry.

http://cid-18f9d8896416d2fb.skydrive.live.com/embedicon.aspx/BlogFiles/AMCode.zip

The MVVM pattern and some developer reflection

It is not uncommon in the world of web programming to see a new acronym floating around every other week.  I saw MVVM for the first time a few months ago, but I put off really learning what it was; I was focused on other things, and since all the conversation pertained to its use with SilverLight, which I was not focused on at the time, I put it on the list of things to look at later.

Then I attended the ASP .NET MVC Firestarter in New York City, where they talked about the ViewModel pattern being used for MVC applications.  I immediately saw the benefits of this pattern with respect to the Microsoft Entity Framework, as it took the hassle out of the lost/unloaded context problem I had to circumvent whenever using the framework.  I was able to create a full set of objects that translated the generated entities into classes that could be efficiently used by the View; in addition, I found added benefits in having these classes inherit from each other.  And since every OO programmer continually strives to reuse as much as possible without creating too much complexity, this was a win in my opinion.

What I did not realize at the time was that this is the essence of the MVVM pattern that has been gaining popularity.  It hit me recently while I was thinking about the application I am developing for the client out here in New York: I had architected the application to follow this pattern without even realizing it.  As is the case whenever you realize such a thing, you start wanting to refactor the bits you did not design according to the pattern, without taking too much time in doing so.  I have a number of cases I am correcting where we use anonymous objects but could be using these “proxy” objects, as I call them, though they really are ViewModels.  I am even starting to create facades for other objects that are composites of properties within the object.

What I did was have Linq2Sql generate a set of classes based on the tables we are using; each of these classes is prefixed with either a t (table) or a vw (view).  The ActiveRecord pattern is used to make these calls into the model, creating a clean separation that allows data-access calls to be refactored without changing calling code.  At the start, we were returning the data from these calls to the View via anonymous objects (the reasons for this architecture were mostly due to using the Coolite ExtJS ASP .NET UI framework).  However, as we began to develop the controllers which handled the brunt of the more involved functionality, I began creating classes for use by those controllers.  It became almost instantly apparent that these classes could also be used in the Read calls, and now I am rewriting old code to use these objects.  The value of this was recently demonstrated when the client asked for a formatting change that required a minimal edit, but whose effect was application-wide.

The one thing I would like to do differently next time is to make the static methods used to call into the model hang off the view model objects instead; this way there is no visible usage of the generated model classes.  This is how my current personal application is set up, though it uses a service-based pattern for data access, which is the recommended approach when using a context-driven ORM like Linq2Sql or EF.  I have handled the cases in which I want to share my context across many functions (for transacted scenarios) by creating internal functions that accept an existing context as a parameter.  Since such a function is only visible within the assembly it is defined in, calls in the view cannot see it, hiding the use of a context reference entirely from the view.

The idea of moving the static methods off the generated classes and onto the view models themselves is that, like the service pattern, the translation is shared and not implemented all over the place.  This means that if you want to change a property that comes back, you do it in one spot.  For custom queries, you can still define the custom translation in a custom view model.  The point is that you gain better control over your translation, which helps with maintenance and code cleanliness.  One of the ways I am doing this now is by creating constructors that build the ViewModel objects for me.  If I can find the time to perform this refactor, it is something I will do.  The other concern is that I am training other developers on this application, so I need to be careful about changing patterns on them.
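That constructor-based translation can be sketched as follows (the tSeries class and its property names are hypothetical, mirroring the t/vw prefix convention described above):

```csharp
using System;

// Hypothetical Linq2Sql-generated table class (hence the t prefix).
class tSeries
{
    public int SeriesId { get; set; }
    public string Name { get; set; }
}

// The ViewModel's constructor owns the translation, so the mapping
// from generated class to view class lives in exactly one place.
class SeriesViewModel
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    public SeriesViewModel(tSeries entity)
    {
        Id = entity.SeriesId;
        Name = entity.Name;
    }
}

class Demo
{
    static void Main()
    {
        var vm = new SeriesViewModel(new tSeries { SeriesId = 7, Name = "Trigun" });
        Console.WriteLine(vm.Name); // Trigun
    }
}
```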

Experiments with MVC 2

This weekend I once again sat down to work with my existing MVC application and see what I could find and how I could better improve the pattern and my overall understanding of the framework.  I found a few gotchas and came up with some tips to help improve development.

  • Unit Testing
  • I was disappointed to find that as of right now, due to breaking changes, updated unit test project templates will be needed to use your favorite unit testing framework with MVC 2.  This includes my favorite, MbUnit v3.  This does not mean that you cannot use the framework; you will just have to do the setup manually.
  • Validation
    • One of the things I am most excited about moving forward is the new way to define validations on the server side.  Using the DataAnnotations library, located curiously in the System.ComponentModel parent namespace, you can easily define many of the most common validation routines, from Required to Range.  This validation ties in tightly with the MVC framework.
    • One of the downsides to this, however, is that it does not generate client side validation via JavaScript.  Thus requests must always be validated on the server, which can be expensive for a busy site.
    • In addition, you cannot validate complex types (objects, arrays, etc) via this method.  I tried with an array that was assigned via ModelBinding, and the validation routine is not even called.  However, based on what I read regarding .NET 4, this functionality will be available via the CustomValidator attribute.  I really must comment on how it is strange that you can define your own custom validation attribute, but if it is attached to a complex type, you get nothing.

That all aside, I am really impressed with how well this ViewModel pattern works with a deferred execution system like Entity Framework.  Since my EF entities never cross into the web layer, only their “dumb” proxy counterparts, the separation is really very clean and easy to maintain.  At this point the main problem is the distinct Contexts that are open each session, but I plan to address that next week with Ninject.

The MVC View Model pattern

The Microsoft ASP .NET MVC Framework is Microsoft's implementation of the MVC pattern, which was popularized for web applications chiefly by Ruby on Rails.  The purpose of the framework is to give developers a choice besides ASP .NET webforms, which, until the advent of the MVC framework, had been the only way to develop a web application with ASP .NET.

As with any new framework, especially on the web, having a pattern is essential.  Without one, developing a clean application with a codebase that properly leverages code reuse is very difficult.  One of the popular patterns for MVC is the ViewModel pattern.  In this entry I will walk through developing an application that uses this pattern and explain why we make the decisions we do along the way:

In the project I developed, I created a web project to house my MVC code, which has the following folder structure: [image]

As you can see I have mutated the default project and removed the Models folder and replaced it with the ViewModels folder.  In this folder are the classes that I will be passing to my strongly typed views.

I have created two other projects for this solution that are both class libraries: AM.DataAccess and AM.Models.

This pattern follows some of the advice I got from @sbohlen at the ASP .NET MVC Firestarter I attended on 10/3.  His advice was that it is OK to use an ORM like the ADO .NET Entity Framework to generate your entities, but you should not pass them to the View.  The reasoning makes sense: the generated entities come with a lot of “extra” functionality, which is great in data-access scenarios but really does not make sense in the view.

This is why the ADO .NET Entities are in their own separate project.  The Service classes in the DataAccess project are responsible for the translation of these generated entities to the entities that will be used by the view models.

Below is an example of this translation:
[image]

This is simply a query that talks to the database via the EntityContext, takes the results, and makes a List of these SeriesEntity objects.  This list is then passed to my Controller.  Once the controller receives it, it creates the necessary ViewModel class to be passed to the View; this is an example of such a piece of code:
[image]
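Since the screenshots above do not reproduce here, this is roughly what that translation looks like (a sketch; the entity shapes and names are assumptions, not the original code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical EF-generated entity and its view-friendly counterpart.
class Series { public int SeriesId { get; set; } public string Name { get; set; } }
class SeriesEntity { public int Id { get; set; } public string Name { get; set; } }

class SeriesService
{
    // Stand-in for the query against the EntityContext.
    static IEnumerable<Series> QuerySeries()
    {
        return new[] { new Series { SeriesId = 1, Name = "Cowboy Bebop" } };
    }

    // Translate the generated entities into the proxies the view models use.
    public static List<SeriesEntity> GetSeries()
    {
        return QuerySeries()
            .Select(s => new SeriesEntity { Id = s.SeriesId, Name = s.Name })
            .ToList();
    }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(SeriesService.GetSeries()[0].Name); // Cowboy Bebop
    }
}
```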

The advantage of this model is the isolation of the generated entities from the entities designed for the web.  This can reduce the amount of data sent across the wire, decreasing payload size.  In addition, these entities are more readily serialized into JSON results by MVC.  The key thing to keep in mind is the separation of the Web layer from the Model layer; the two should never cross.

However, a disadvantage of using the Entity Framework in this way is the management of the context object (EntityContext in the example code).  The way I have this set up is as a lazy-loaded property in a base class.  The base class also has a destructor which disposes of the EntityContext.  This setup does not permit sharing the context across the request, which means you have as many EntityContext references as you have Service instances.  Generally the best way to handle this problem is to use something like StructureMap or Ninject for Dependency Injection; however, that conversation is outside the scope of this post.

The main goal of this pattern is to provide a very clean and lightweight means of getting data from a database and passing it to the view while maintaining type safety and leveraging code reuse.  For example, the ViewModel classes can inherit from each other, like so.  First, this is the base class for all the ViewModel classes in use within this application:
[image]

So this base class defines a couple of properties that will be in use on just about every page.  (Side note: the validation in use here is called FluentValidation.)  Now let's look at the next level, the SeriesViewModel class:
[image]

So this defines all the information we need for the Details view.  Now we can inherit from this ViewModel to provide the view with additional information for, say, the Edit/Create view.
[image]

(Note: CheckboxItem is a custom class that I created.)  Notice what we now have available in the View: two properties that tie into existing MVC HtmlHelper methods to provide us with select boxes or a set of checkboxes.
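Since the class screenshots do not reproduce here, the inheritance chain described above might look like this (all property names are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;

// Base class with properties nearly every page needs.
class BaseViewModel
{
    public string PageTitle { get; set; }
}

// Everything the Details view needs.
class SeriesViewModel : BaseViewModel
{
    public string Name { get; set; }
}

// Edit/Create adds lookup data on top of what Details already has.
class SeriesEditViewModel : SeriesViewModel
{
    public List<string> AvailableGenres { get; set; }
}

class Demo
{
    static void Main()
    {
        var vm = new SeriesEditViewModel
        {
            PageTitle = "Edit Series", // inherited from BaseViewModel
            Name = "Berserk",
            AvailableGenres = new List<string> { "Action", "Drama" }
        };
        Console.WriteLine(vm.PageTitle); // Edit Series
    }
}
```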

In conclusion, the ViewModel pattern is very popular among MVC developers because it reduces clutter in the controllers and views, which is the essence of MVC: reduction of code and simplification of responsibilities.  Furthermore, using ViewModels allows you to share and reuse code across multiple views and can truly make your View agnostic of the controller.  This was one of the main weaknesses of the MVP pattern used by ASP .NET webforms: the controller (code-behind) was very tightly tied to the ASPX view.

My Trip to the New York ASP .NET MVC Firestarter

Being in New York has its many perks; one is the awesome big city an hour's train ride away.  This city and its surrounding area offer one of the biggest technology markets in the world, and it is no surprise that Microsoft has a tremendous presence here.  I was informed of this event by my colleague here in New York, Anis.  Upon seeing it I immediately registered, not just for its content but also its location: the Grand Hyatt in New York City, next to Grand Central station.  Any excuse I can find to go into the city is a valid one in my book.

I want to thank Peter Laudati, Stephen Bohlen, and Sara Chipps for taking the time to put on this wonderful event, while a lot of the information for me was review, it was still very helpful to get other people’s perspective on the technology as well as meet local members of the New York technology community.

I really enjoyed this session because it got me into thinking mode and finally made a design pattern click in my head for MVC.  I have to thank Stephen for this, as he pointed out something I was doing which I now see is a bit of an overstep in terms of the responsibility of the controller.  I had assumed that I should use EF or L2SQL entity classes as the models passed to my strongly typed views.  Stephen asked: why pass these objects, which contain so much extra information, to the view?  Make simple versions, he said, and pass those.

At first I didn't agree with this point, as I thought it would lead to class clutter.  But as I started to think about it I realized this is already what I am doing on the project out here in New York, where we use Linq2Sql, unfortunately.  When you look at the application, the L2SQL entities NEVER leave the Model layer except as anonymous objects or as simplified entity objects (realizing this made me recognize some areas that should be refactored starting Monday).  By using this approach with MVC we open the ability to TRULY leverage the idea of the Universal Model, where a single View can be used by n controllers to display data.

Think about a maintenance screen for satellite data.  Generally you just have a Name field and then your normal Create or Update logic.  You could design a Web-level model class with an interface that has the desired methods on it and simply work with that in the View, passing it off as necessary without ever knowing what you are actually working with.  In turn this would use just one view, saving you the headache of many satellite maintenance screens.

That aside, the majority of what was discussed was review, though the conversations were interesting and I got to know a few of the local technologists.  I am thinking I may look into visiting the New York .NET User Group.

Understanding Coolite ComboBox Mode

This week I came across a very valuable piece of information concerning Coolite combo boxes.  A ComboBox has an attribute called Mode, and what it is set to greatly affects the overall behavior of the control; it can create extra queries where you may not expect them.  I find that understanding this attribute can help you develop more efficient pages using the Coolite ComboBox.

  • Default – The default behavior for this control is based on how it initially receives its data.  If it is connected to a Store of any kind, it will act according to the rules of Remote; if no store is provided, it will act according to the rules of Local.
  • Remote – Whenever the dropdown is expanded, a query is made to the data store to get the latest data, which is bound according to the DisplayField and ValueField.  This is where the second query comes in; even setting AutoLoad to true on the Store will cause this to happen.  Be wary of this second query: if the query to get the data is heavy, it could easily create a bottleneck.
  • Local – Once the data is loaded, the control does not fetch new data unless load is called on the Store.  This has the greatest chance of working with potentially stale data (depending on your model), but it is also the most efficient, as it allows you to control when loads happen.  Remember, you can easily tell your Store to load by calling its load() method via JavaScript.

So what is the best approach?  Well, as my college mentor Dr. George Nezlek used to say, "it depends."  I would say for the majority of cases you are better off explicitly defining the Mode as Local and controlling the loading of the store through JavaScript and event handlers.  However, if the user is going to spend a lot of time on the page and the data is critical and very likely to change, Remote may be the better option.

I would assume there is a way to define a poll time for the dropdown query; I have not explored that, but it is something I would look at for the latter case.

Hope this was helpful.  I discovered this and, as usual, found no documentation from Coolite explaining it, so I thought it would be useful to talk about it here.

Experiments with RIA Services

Unfortunately I have been so busy as of late that I have not had the time I would like to dedicate to exploring .NET RIA Services.  I have been quite impressed by what I have seen on video and heard from colleagues (one colleague is already planning to use it in an enterprise project he is writing).

This weekend I decided to sit down and at least understand the mechanics of transmitting data to and from the server and how to extend existing classes to increase functionality.  What I found is that RIA Services has a bit of a learning curve while you get your head around the model, but once you do, it works very nicely.  My plan was to demonstrate basic binding with an entity generated by the Entity Framework, then move on to include a single related entity, and finally a many-to-many relationship.  To start I brought in 4 classes from my AnimeManager database.  As expected, EF picked up the many-to-many and hid the relational table, leaving me with the following: [image]

Let me state that THIS DOES NOT WORK!!  RIA Services DOES NOT support a many-to-many relationship defined this way.  It took me a bit of scouring, but I finally came across this blog entry which explains things: Creating Apps with RIA Services (part 3)

Essentially the key is HOW you add the entities, because you WANT the relational tables.  So in this case I deleted the existing EDMX and added this as follows:

  1. Added Studios, Series, SeriesGenres
  2. Allowed the generation of the models
  3. Used Update Model from Database to add the Genres table
  4. Removed the generated relationship between Genres and Series
  5. Recreated this relationship manually by relating Genres to SeriesGenres via this dialog: [image]
  6. IMPORTANT!!!! THE NAME OF THE RELATIONSHIP MUST MATCH THE FOREIGN KEY NAME FROM THE DATABASE
  7. After adding this you will still get an error about the Association type; use Mapping Details to ensure that this relationship maps to your Many type (SeriesGenres in this case)
  8. Build; everything should go through without fail.  Once it does you can proceed to the next portion: creating the domain service.

RIA Services creates service classes that handle the interaction between the underlying WCF web services and the data extracted via the model layer.  To create one, open the Add New Item dialog and select Domain Service Class: [image]

The following dialog is subsequently displayed: [image]

A service class can support multiple entities, though convention generally dictates that each service represent one entity.  For the sake of this example I am going to include all three; you will see why in a moment, as it has to do with the many-to-many relationship.

The important thing here is that you check “Generate associated classes for metadata”.

Following this screen VS will update your solution and add two files: SeriesServices.cs and SeriesService.metadata.cs.  It is alright to modify these files as they are only generated once.

By using the SilverLight Business Application template, there is something else going on behind the scenes.  VS looks for these metadata files and uses them to project your classes out to your SilverLight application so they can be used transparently there, with the same namespace I might add.  I find it helpful to include this generated file in the solution; otherwise your code looks like it's broken until you compile: [image]

Notice the file AnimeManager6.Web.g.cs; this is a generated file which contains a representation of the classes in Silverlight.  The one key thing to remember is that your service class (SeriesService in this case) is generated as a context class instead (SeriesContext in this case).  So the code to bind a data grid would look something like this:

protected override void OnNavigatedTo(NavigationEventArgs e)
{
     SeriesContext sc = new SeriesContext();
     sc.Load(sc.GetSeriesQuery());
     myDataGrid.ItemsSource = sc.Series;
}

Note that while it doesn't appear as such, this is an asynchronous call.  You will find GetSeriesQuery auto-generated from the GetSeries method in your SeriesService class.  The result is returned, in this case, through the Series property of the context; if it were a list of a different type, it would go to the corresponding property.  This means that attempting to step through this code in real time is useless; more often than not you will get an empty set from Series.

As this object stands right now, attempting to bind to either our Studio reference or our Genres collection would not yield anything.  You have to include these data points via Include when you query the context and, in the case of Studio, mark it with the [Include] attribute to get VS to project it to the SilverLight class representations.

So let's first take care of the Studio reference: open up your metadata file and add the [Include] attribute to the reference: [image]

You will note that we are also applying it to the SeriesGenres property; more on this in a second.  With this change in place, update your XAML to use the Studio navigation property and access its Name property, i.e. {Binding Studio.Name}

Now, this is all well and good, and you may be thinking about extending the projected version of Series in the SilverLight project to include custom properties; don't, as this doesn't work.  I am not entirely sure why, but I will show you how to extend the class in what is, in my mind, a better way anyway.

The best way to add these custom properties is by extending the generated models themselves.  In this case, this is the code I used to create a property for Series which lists out its Genres in a comma delimited list:

public partial class Series
{
    // [DataMember] ensures this server-side property is serialized
    // down to the Silverlight client along with the entity.
    [DataMember]
    public string GenreDisplay
    {
        get
        {
            if (SeriesGenres.Count == 0)
                return "No Genres";

            var sb = new StringBuilder();
            foreach (var genre in SeriesGenres)
            {
                sb.Append(genre.Genre.Name);
                sb.Append(", ");
            }

            // Trim the trailing ", " separator.
            sb.Length -= 2;
            return sb.ToString();
        }
    }
}


The key thing here is the [DataMember] attribute, which causes the property to be included in the serialization down to the client.  By doing this, it is automatically available for use in XAML.  Bear in mind that since this is done purely on the server before the object is sent down, it relies only on the EntityContext being open or Include being used when making the query.  It does not require the properties themselves to be marked with the [Include] attribute.

This gives you flexibility in the design of the class.  It is my opinion that the generated code is sealed for a reason, and modifying the metadata file is likely not a good idea, as it may be something you want to regenerate over time.  The other effect of using the [DataMember] attribute is that Visual Studio will project the property out to the Silverlight project automatically.

I hope this helps out; getting this working took me some time and drove me nuts.