Fluent NHibernate and a New Project

This week marked the end of my stint on one of the biggest projects we have had in the last year.  Things went very well, and as the remaining members of the team begin to move into the final beta phase, I am being reassigned as a billable consultant to a company in Holland, MI.  The upside of this assignment is that I get to work with PHP, which is one of the first languages I did serious programming in and was my forte in college.  Getting the opportunity to work with it for the first time since I became strictly .NET is going to be fun, challenging, and exciting; it's an opportunity that I am very much looking forward to.  The only downside is that my current 10-minute commute now becomes an hour-long commute, as I must be onsite in Holland, but I think it's well worth it.

One of the other happenings is that I have finally started to take a serious look at NHibernate as something that RCM may move to from our current open source custom framework (The Kinetic Framework); it's part of the normal aging process for a framework.  NHibernate is highly touted and looks to be a good alternative to the still-not-ready-for-prime-time Entity Framework.  However, one of the long-standing problems with NHibernate is configuration, and that really is true for most ORMs that exist today.  Microsoft has generally released good tools built into Visual Studio to aid programmers in configuring the ORM products it releases; this is not the case for NHibernate.  To counter this, Fluent NHibernate was created, which allows you to “fluently” define mappings for entity objects.

The main advantage behind this notion of fluent programming is that we take procedures normally relegated to external configuration and bring them under the control of the compiler, allowing for easier compile-time checking.  An example of this is Microsoft LINQ, where developers are able to query databases (for example) using standard type-safe C# code as opposed to passing a string off to a black box (the database) and magically getting a rectangular result set.

This is the same idea with Fluent NHibernate, which removes the XML configuration files and allows you to define the configuration using C#.  Below is an example:

public class SeriesMap : ClassMap<Series>
{
     public SeriesMap()
     {
          Id(x => x.SeriesId);
          Map(x => x.Name).WithLengthOf(100).Not.Nullable();
          References(x => x.Studio).TheColumnNameIs("StudioId");
          HasMany(x => x.Seasons).WithKeyColumn("SeasonId");
          HasManyToMany(x => x.Genres)
              .WithTableName("SeriesGenres")
              .WithParentKeyColumn("SeriesId")
              .WithChildKeyColumn("GenreId");
     }
}

This being the mapping file, it would help to show you what the entity itself looks like:

public class Series
{
     public virtual int SeriesId { get; set; }
     public virtual string Name { get; set; }
     public virtual IList<Season> Seasons { get; set; }
     public virtual Studio Studio { get; set; }
     public virtual IList<Genre> Genres { get; set; }
}

Note that we define the properties we want NHibernate to “fill” as virtual.  Also take note of the correlation between the property names in the entity and the calls made in the mapping class.  The links below provide more of a reference to the various calls available from the mapping class:

http://wiki.fluentnhibernate.org/show/HomePage & http://fluentnhibernate.org/

Once you have these classes in place you can use the standard NHibernate calls, either via the Criteria API or HQL (Hibernate Query Language), to extract your data.  Also have a look at the Linq to NHibernate project.  While it is still in its infancy, it has great potential.  I think Linq and NHibernate are actually made for each other, as the DataContext pattern that Microsoft employs to track changes matches very well with the repository pattern employed by NHibernate; I look forward to some great stuff in the future from the meshing of these two technologies.

Context Driven JavaScript Programming

First let me say that while I will be focusing on JQuery for the framework of choice, this can really be accomplished with most of the frameworks available.

The idea of context driven JavaScript programming is to actually use JavaScript to do what you are intending and make things easier.  Many programmers often assume that JavaScript can do very little and thus create tremendous tasks for themselves, when in fact the code is very simple if they would simply employ the concept of context and utilize what JavaScript is giving them.  Let's look at an example of some bad JavaScript:

var selectBox = null;
window.onload = LoadingFunction;

function LoadingFunction() {
    selectBox = document.getElementById('selectGender');
    selectBox.onchange = HandleChange;
}

function HandleChange() {
    var value = selectBox.options[selectBox.selectedIndex].value;
    alert(value);
}

First thing I want to point out is that some of you may ask: why do I need to define my handlers in JavaScript when I can just define them in the markup and pass what I need?  I would say, yes, that does work, but it's a mixing of your presentation and interaction layers and will get you into trouble as you work on larger, more complex JavaScript-driven processes.  But let's turn our attention to this code.  It seems alright, but let's rewrite it using context:

window.onload = function() {
    document.getElementById('selectGender').onchange = function() {
        alert(this.options[this.selectedIndex].value);
    };
};

This is quite a few lines fewer than what we were using.  You see, the use of the word this within an anonymous function assigned as an event handler gives you a reference to the object firing the event, which allows you to easily use the same event handler over and over.  Using anonymous functions in this way also allows you to utilize scope convergence and include scope from outside the function; with the static functions from the first example you would not have this ability.

window.onload = function() {
    var name = prompt("What is your name?");
    document.getElementById('selectGender').onchange = function() {
        alert(name + " is " + this.options[this.selectedIndex].value);
    };
};

Notice how name is declared outside the anonymous function but used inside without anything special.  If you were to do this with the first sample of code you would need to make name a global variable or find a way to pass it to the function; using context, it is passed automatically.  Now keep in mind this is a very simple example.  One of the places where I use this principle heavily, and where it shows its worth, is client side binding.

The scope of these functions is “saved” to the runtime of JavaScript and thus can be referenced later on when an instance of the event is called.  I invite you to read more about my experiments with JQuery as this comes in huge when we start talking about data driven programming as it eliminates the need to store lots of data in hidden fields for client side operations.

As our final step with this code, we will introduce JQuery and see how it simplifies this code:

$(document).ready(function() {
    var name = prompt("What is your name?", "Fred");
    $("select").change(function() {
        alert(name + " is " + $(this).val());
    });
});

As you can see, the code is not that different from the previous example, but I do want to point out a few things.  First is $("select"); in this example we only have one select box, so this query will return just that one instance and bind the event handler to its change event.  However, if we had more than one, it would do this for each one.  This is one of the things about JQuery that makes it so great: being able to apply context-specific event handlers en masse this way.  You will also notice the simplified syntax for getting the value of the select box using the val() function.  Finally, the one weakness with the second example of code is the lack of graceful degradation.  In the case of JQuery, if the select box were somehow removed without removing the JavaScript, the page would still load error free; this is not the case with the JQuery-less code, where you would need to employ a check to make sure you actually have an object to bind the event to, for example:

window.onload = function() {
    var name = prompt("What is your name?");
    var elem = document.getElementById('selectGender');
    if (elem != null) {
        elem.onchange = function() {
            alert(name + " is " + this.options[this.selectedIndex].value);
        };
    }
};

Context driven programming has many advantages, and yet few use it heavily or anywhere near the appropriate level.  Why?  Perhaps the biggest reason is laziness, both in the desire to understand and in the will to do it right rather than quick and dirty.  In the same way that it takes discipline to separate the layers of our web app and ensure that no SQL strings are ever found in a code-behind, so too does it take discipline to program unobtrusive JavaScript.  Truthfully, unobtrusive JavaScript is very difficult without a framework, as you see in the code above for the final example, but frameworks such as JQuery render this point moot and really show the laziness of the person programming, or perhaps their stubbornness about changing the way they program.

Additionally, most developers seem to scoff at the idea of a user browsing the site without JavaScript, and in some cases it's worked into the project contract that the user is assumed to have JavaScript.  I find this silly, when really a simple toggle box that shows if JavaScript is not enabled performs the task of steering off users who don't have JavaScript quite admirably.  This is certainly better than the alternative.
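One hedged sketch of such a toggle, under the assumption of a warning element that is visible by default in the markup (the id noJsWarning is hypothetical): when scripting runs we simply hide it, and users without JavaScript see the message untouched.

```javascript
// A warning element such as <div id="noJsWarning">Please enable JavaScript.</div>
// sits visible in the markup.  If scripting is available this hides it; users
// without JavaScript simply see the message.  The id here is hypothetical.
function hideNoJsWarning(doc) {
    var warning = doc.getElementById('noJsWarning');
    if (warning != null) {              // degrade gracefully if the div is absent
        warning.style.display = 'none';
    }
    return warning;
}

// Wired up in the page as: window.onload = function() { hideNoJsWarning(document); };
```

The document object is passed in only so the sketch stays self-contained; on a real page you would pass the browser's document.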

Finally, most designers, if you ask them, are driven nuts by having their HTML littered with event handlers.  Handlers do not belong there; they are not part of the presentation, they are part of the interaction.  Like entity objects and data access layers, they should be separated into their own layer so you can emphasize code reuse.

The last point I have heard made against this style of programming is maintainability.  Some appear to feel that wiring up your events like I have shown above limits your ability to adapt if element names were to change.  There is certainly some truth to this, but context driven programming counters it with its emphasis on the Principle of Localization.  When you are coding your handlers correctly, the brunt of the code that could change is concentrated in a single block, negating the need to scroll around making sure you found all the references.  Furthermore, a framework like JQuery is very flexible in how it can query for elements, and understanding the ways you can do this is essential.  Simply knowing the basic selectors is not enough; in fact I have observed that if you start to break away from attaching simply to ids, the code becomes more abstract and easier to change.  The downside is that because JQuery is designed to degrade gracefully, you will not see any errors when you are running, so if your unit tests are not properly designed you may miss a bug.
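As a small sketch of the Principle of Localization (the selector strings and property names here are hypothetical examples, not from the original), the selectors a page depends on can be gathered into one object so a markup change means editing a single block:

```javascript
// Every selector the script uses lives in this one object; when the markup
// changes, only this block changes.  The strings below are hypothetical.
var Selectors = {
    genderSelect: "select[id$=selectGender]", // attribute-based, survives ASP .NET naming containers
    saveButtons:  "input.saveAction",         // class-based, not tied to one id
    dataRows:     "#dataTable > tbody > tr"
};

// Handlers then reference the map rather than raw strings, e.g.:
// $(Selectors.genderSelect).change(function() { alert($(this).val()); });
```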

The final thing I would like to say is that JQuery and unobtrusive JavaScript are designed to work no matter how your elements are arranged or how many there are.  I have seen cases where programmers try interesting methods to mask the elements being passed, all because they fear an element name changing; it's a curious approach because it tends to result in ugly code that does not operate using context.  Such an approach is very Web 1.0-ish and quickly shows its limits when you start doing any kind of client side binding, which is very common in Web 2.0 apps.

One of the most important concepts Web 2.0 is bringing us is that there is a dire need to understand how to architect the JavaScript in our applications.  Most good developers understand the importance of properly architecting server side code (think of the number of patterns that exist), but there is a lack of understanding of how to architect client side code.  Understanding what you can make a function and what you should not will be critical, along with understanding core JavaScript concepts like prototyping and JQuery concepts like extending.

To conclude, as the web continues to move forward in terms of the expectations for web site interaction, JavaScript is becoming the front line fighter in creating the memorable experience people want.  The age of static JavaScript programming is coming to an end and is being replaced by context driven programming, where developers leverage what the language gives them to make their code cleaner and more coherent.  Though certain concerns exist about this new methodology, most are simply borne from either worrying too much about the odd case or a lack of desire to change on the part of the programmer.  With a framework like JQuery, programming your JavaScript to be unobtrusive is easy and quick; further, understanding how to leverage context in your code gives you an easy way to reference the elements you care about without tying your interaction to your design.  This makes changes easier and makes your code better and easier to understand for the next person.

My first development experience with Surface

Microsoft has been generating a lot of buzz with Surface and this concept of the Natural User Interface (NUI).  To briefly describe it, you have to think back to the way computer applications were originally designed, with command line interfaces (CLI).  Operating systems, such as Windows, brought the graphical user interface (GUI) to fame.  GUIs have been in heavy use for the better part of two decades; however, with GUIs there is still a level of indirection between the user and the system.

Enter NUI, which is what Microsoft is billing as the next popular form of application interface design.  NUI strives to make interaction very similar to how we interact with everyday objects; this is the goal.

Recently I had a chance to actually use a Surface and then, at CodeMash, actually learn how existing WPF code can be easily transformed into Surface ready WPF.  After about a month of speaking with my contacts at Microsoft I was finally given the go ahead to download the exclusive Surface SDK and this evening I had the opportunity to program my first application and play with the simulator.

Not being graphically inclined myself, creating these ultra rich and vibrant applications is not easy; mostly from the graphical perspective, as writing code does not change much.  But it really is enticing playing with the simulator and just seeing what is possible and thinking of the many uses for these types of applications.  So I wrote a simple application that utilizes the fly out keyboard with the Surface Textbox and then says ‘Hello’ to the person who enters their name.

Overall, developing on the Surface is very simple and straightforward.  Having existing knowledge of WPF will certainly help, but is not required.  I will keep a running update of my continuing ventures in Surface.

Data Driven Programming with JQuery – Part 2

In the first part of this series, we showed how to use JQuery to select various elements on an HTML page with amazing efficiency and effectiveness.  We also demonstrated the use of the JQuery data cache and the ChainJS plugin for executing HTML templates against JSON result data.  In this new section we will build on the lessons learned and discuss some key concepts to take away.

So let's first summarize what we are doing and the decisions we have made.  We have decided to create a service based system that will call upon various web services requesting data.  This data will be transmitted back using JavaScript Object Notation and then bound to an HTML template for each item in the result.  For this process we will use ChainJS to perform equivalent work on each iteration.

Using this approach we realized a couple of things: 1) we generated good code with excellent verbosity and clarity, and 2) we moved HTML generation from JavaScript to the view, where it belongs.  This second point is critical in modern JavaScript application design.  In the past it was acceptable to generate HTML in script blocks as a means of creating repetitive or dynamic content.  However, we now have new options when it comes to addressing these scenarios.  1) If the piece of HTML is repetitive it can be displayed via templates and hidden content blocks.  This will increase the amount that is INITIALLY sent to the browser, but having the HTML there allows you to more easily perform client side operations, which makes the page more utilitarian.

Your other option, which I like to use if the action requires heavy processing, is a lazy load.  This works by calling a service on the server which returns pure HTML; this HTML can then be used on the client.  The advantage of this system is that it trains the user to understand that this operation takes time, and by combining it with a system providing instant feedback, the user's patience for this feature can be increased.
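A minimal sketch of that lazy load, assuming a hypothetical panel id and service URL; the jQuery function is passed in as a parameter only to keep the sketch self-contained:

```javascript
// Show an immediate "working" indicator, then replace it with server-rendered
// HTML.  "#detailPanel" and the service URL are hypothetical names.
function lazyLoadPanel($) {
    var $panel = $("#detailPanel");
    $panel.html("Loading...");                  // instant feedback for the user
    $panel.load("/Services/RenderDetail.ashx"); // swapped for pure HTML from the server
}

// On a real page: lazyLoadPanel(jQuery);
```

Nothing is built in script here; the server renders the HTML and the client only places it.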

In both of these approaches, you will notice that at no time did we mention generating or building the HTML in the script itself.  This is because such a practice is highly unmaintainable and results in high coupling between the Interaction and Presentation/View layers.  The methods above both either retrieve or generate their HTML in other places.  For the second method, for example, one of the common tricks in ASP .NET programming is to render a user control as a string in a web service and return its HTML, or to use the JQuery load() function to bring existing HTML into the model.

So now that we have talked about why it is important to think about what you want to accomplish and explore the options available, let's look at another aspect of data driven programming: obfuscation.

Obfuscation becomes critical in client data driven applications because we have to remember that JavaScript is client side and fully visible to the client, be they benign or malignant.  One of the ways I obfuscate my code is by making intensive use of JavaScript closures; let me demonstrate:

$("#rpter").items( jsonResult ).chain(function() {
    var data = this.item();
    $(".aLink", $(this)).click(function() {
        CallWebService( data.PersonId, function(success) {
            // handle the success callback
        });
    });
});

You will notice that at no time do we ever store PersonId on the page, yet we can reference this code over and over again from different instances of the link and get a sort of late binding for the data.  This is because of the way JavaScript implements “scope convergence”.  In this example, two different scopes are combined via the use of an anonymous function (we saw an example of this in part 1), and the data is persisted to the JavaScript runtime for later use.  This sort of obfuscation makes it much more difficult for a would-be attacker to mutate data prior to it being sent to the server.

This brings me to the final piece of discussion in data driven programming: security.  One of the great things about ASP .NET and its view state model is that it is able to pick up mutations to the DOM with respect to controls and throw an error to prevent the data from posting.  However, as we all know, view state information can make pages heavier and decrease performance.  Data driven programming circumvents these performance concerns, but at the cost of opening up the system more.

Generally speaking, data driven applications rely heavily on services and JavaScript and thus expose more of what can be done to the client.  Using a tool like FireBug, an external user can call and receive data from web services in any form they want; they can even wire up their own events on the page to call the services.  Because of this, it becomes crucial to secure your services and ensure that a request to them is from a legitimate user and for a legitimate cause.  Most services are capable of reading existing session information and can access values stored in this manner.  But just as you clean and validate user-inputted data, even more so do you need to clean the user data here, in addition to ensuring proper credentials.  Consider the following scenario:

You implement a web service that provides a variety of commands for both authenticated and unauthenticated users.  One of these methods allows you to return a list of the users on the system, and another takes a username and changes the password to a new password.  An attacker sniffing around the unauthenticated part of your site sees this reference and explores the service.  Upon noticing the GetUsers method, he adds an HTML link to your page (using FireBug) and wires up a call to this service.  He is able to get the list of users and finds the admin user; he then creates another button with which he changes the admin password to something he can use.  He then logs into your system with full admin rights.  What can he do?

This is a very real scenario and something that enterprise applications must be prepared to handle.  Obviously, the first problem with this setup is the lack of division amongst services.  The first level of obfuscation is preventing users from seeing the entire directory structure or where certain resources might be located.  By separating out the service methods depending on context you can hide the methods you don't want certain people to see.  But the most important thing is to validate your users.  For instance, since clearly only a certain set of users should be able to get a list of all users, you should check that the requesting user has those rights.  Changing the password should require the original password as well as a validation check that you are allowed to change this password.  There are other things you should check for, but those are a couple of the more obvious ones.

In conclusion, the game has changed again; people expect sites to do more and expect them to do it faster and with more style than ever before.  Using the concepts of data driven programming in conjunction with a top notch JavaScript framework and back end programming infrastructure, you can quickly and easily create these sorts of applications.  JavaScript has become a very important cog in the way we create web applications.  Modern end user machines tend to be very powerful; we can leverage the power of these machines by moving certain operations from the server to the client, lessening the server's work and relegating it to operations more important than simply serving out repetitive bits of HTML.

Data Driven Programming with JQuery – Part 1

Data Driven programming is centered around the idea that data is central to user interaction and that an application should work only with data on the client side.  In this manner, the application adheres strictly to the less is more principle of Ajax development, in that we send only what the server needs and we return only what the client needs.  You can think of it as a push/pull model where our client will push only what it has to up to the server, and then pull only what it ABSOLUTELY needs to display the information on the client.

Because of their strict adherence to the “less is more” principle of Ajax development, as well as their heavy reliance on JavaScript Object Notation (JSON) as a transmission protocol, data driven applications tend to be much lighter and perform better than traditional web applications.

To properly develop these types of applications, a framework such as JQuery is essential, and this methodology also makes use of a number of features of JavaScript/JQuery.  JQuery selectors, in particular, are crucial to designing a lighter application.  By having a thorough understanding of these selectors we are able to write less code which is more concise.  To briefly review, the basic JQuery selectors allow us to select elements by ID, class, and tag name, like so:

$("#someId");      // returns the element with id="someId"
$(".someClass");   // returns all elements with class="someClass"
$("span");         // returns all <span> elements on the page
$("span input");   // returns all <input> elements within a <span>
$("span > input"); // returns all <input> elements that are direct children of a <span>

These work very well as basic selectors and really become the foundation of any application.  However, when you begin to get into cases where JQuery is working on a generated HTML layout, such as with ASP .NET, you quickly find yourself having to make concessions on the cleanliness and conciseness of your HTML to assist JQuery with targeting elements whose ID is constantly changing.  For example, when I first started programming with JQuery I tried all sorts of ways to simplify targeting an element inside multiple Naming Containers.  Now I have a solid understanding of Attribute and Filter based selection, like so:

$("input[maxlength]");  // returns all <input> elements with a maxlength attribute
$("input[id=someId]");  // returns all <input> elements with id="someId"
$("input[id$=someId]"); // returns all <input> elements whose id ends with "someId"
$("input[id^=someId]"); // returns all <input> elements whose id starts with "someId"
$("input[id*=someId]"); // returns all <input> elements whose id contains "someId"

This type of selection works VERY well with ASP .NET: because the NamingContainer always places the ID the developer uses at the end, we can make good use of the $= syntax to select these elements.  The next set of selectors are Filters, which are based heavily on CSS3; their primary use is to “filter” the set of all HTML elements on a page down to a subset based on criteria and state.  The aim with filters in general is to give CSS developers more control over how elements look and act depending on their state.  Here are a few of them:

$(":text");    // returns all input elements with type="text"
$(":visible"); // returns all elements that are visible
$(":radio");   // returns all input elements with type="radio"
$(":checked"); // returns all elements that are checked

Using JQuery selectors, in particular those outside the realm of the basic selectors, can greatly simplify the code you have to write and amplify the ways you have to target elements.  But why is understanding this so important?

When we design software we often talk about coupling and cohesion, mostly relating to class maintainability.  The same principles apply here.  Designs will change and, in the past, this often meant that the JavaScript had to change with them.  However, with advanced frameworks like JQuery, it doesn't have to, as you can now properly separate your design from your interaction and thus allow changes to be made that have little or no impact on your interaction layer.

So now that we have covered JQuery selectors, there is one final piece of technology to talk about: JSON.  In the past, many developers used XML as the primary protocol for transmitting data back to the server, and while JavaScript is very good at parsing XML (mainly because of its use in parsing the DOM), it is a very heavy protocol and contains a lot of bloat that we do not need.  JSON is a much simpler protocol that utilizes the serialization of JavaScript objects and transmits a string back to JavaScript, which is then evaluated into simple JavaScript objects.  Most of the JQuery widgets utilize JSON as the means for supplying the data we will display, because of its clean format and immense flexibility.
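To make the comparison concrete, here is a sketch of a JSON payload and how it becomes a plain JavaScript object (the field names are hypothetical):

```javascript
// The same record that would need an opening and closing tag for every field
// in XML is just a serialized object literal in JSON.
var json = '{"PersonId":7,"FirstName":"Fred","LastName":"Jones","Age":34}';

// Early code often eval'd the string; JSON.parse is the safe equivalent where
// available, producing an ordinary object whose fields you access directly.
var person = JSON.parse(json);
```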

As an example of data driven programming I am going to show how one might construct a filterable grid that permits adding new data.  Something to notice is the lack of hidden fields; we store all state data using JavaScript closures, which I will explain later.  The first part is to understand the nature of templates, since each row in the grid will share the same HTML.  Here is what the main table could look like:

<table id="dataTable">
    <thead>
         <tr>
              <th></th>
              <th>First Name</th>
              <th>Last Name</th>
              <th>Age</th>
         </tr>
    </thead>
    <tbody>
          <tr>
             <td class="rowIndex"></td>
             <td class="firstname"></td>
             <td class="lastname"></td>
             <td class="age"></td>             
          </tr>
    </tbody>
</table>

Because this is a table, to make it easier for us to target specific sections, I am using the <thead> and <tbody> tags.  Our goal here is to take the row template HTML and store it somewhere that we can easily reuse it.  That place is the JQuery data cache, as shown below:

var $row = $("#dataTable > tbody");
if ($row.data("rowTemplate") == null) {
     $row.data("rowTemplate", $row.html());
}

Using JQuery, each element on the page can be given a data cache to store relevant data in.  This data is stored in the JavaScript engine and is tracked by JQuery; you will find no reference to this data anywhere in the HTML source.  When I say each element, I mean “each element”: you could store the original data presented in each row and read it back without fear of the user modifying it with a tool such as FireBug, knowing the data is unique to that element.  In this case I am storing the template HTML on the table body because contextually it is the table body that cares about the HTML.  In addition, we could be adding/removing rows, so storing the HTML on the rows themselves would not be a good idea.
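A sketch of that per-row idea, assuming a hypothetical cache key named rowData; the jQuery object wrapping a row is passed in, and only the first, pristine copy of the record is ever stored:

```javascript
// Stash a row's original JSON record in the jQuery data cache so later edits
// can be compared against the pristine copy, with no hidden fields on the page.
// "rowData" is a hypothetical key; $row is a jQuery object wrapping one row.
function cacheRowData($row, record) {
    if ($row.data("rowData") == null) {   // only store the first, pristine copy
        $row.data("rowData", record);
    }
    return $row.data("rowData");
}
```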

In modern JavaScript it is considered bad practice to build lengthy blocks of HTML inside JavaScript, primarily for maintainability reasons, but also because of bloat and added coupling.  Remember, we want to separate the HTML from the JavaScript as much as we can; that is why we are targeting a block of HTML in the page and storing it with JavaScript.  This way the block can be updated without much change to the script.

The following code uses the ChainJS plugin to create a row, based on the template, for each item in the array; the template is defined by the HTML contained within the targeted block:

var $body = $("#dataTable > tbody");
$body.empty().html($body.data("rowTemplate"))
    .items( jsonResult ).chain(function() {
    var data = this.item();
});


Note: I always precede any stored reference to a JQuery object with a "$".  This is merely my convention; it is not anything official and does not leverage anything special.

So what is happening here?  We take a JQuery reference to the tbody element (this may already exist from when you checked the cache) and call empty(), which clears all the HTML from within the targeted node; we write the code this way to permit easy rebinding as we filter or reorder the results.  Next we access the data cache for the tbody and place the HTML we stored earlier back into it.  This is the beauty of the technique: we can now very easily and cleanly reuse this HTML as much as we like.

The next call starts the chaining process: we pass a variable (jsonResult) to the items function, which expects an array of JSON objects.  This prepares the set for display, and our call to chain finishes the process.  chain takes an optional anonymous function called a builder.  In many ways this builder is similar to the ItemDataBound and RowDataBound events in ASP .NET, and it operates the same way.  Finally we call item() within the builder to get a reference to the current item being bound; note also that $(this) references the current container (in this case the row being built).

One of the interesting things about ChainJS is that you get some primitive automatic binding of your data based on class name; recall the HTML excerpt above:

<tbody>
          <tr>
             <td class="rowIndex"></td>
             <td class="firstname"></td>
             <td class="lastname"></td>
             <td class="age"></td>             
          </tr>
</tbody>

In this case, if the JSON objects in the set contain properties firstname, lastname, and age, then the values of those fields are placed in the elements with the corresponding class names BEFORE the builder function is executed.  The builder function also closes over the variables in scope around it, thus the following can be done:

var index = 0;
$row.empty().html($row.data("rowTemplate")).items(
    jsonResult ).chain(function() {

    var data = this.item();
    $(this).find(".rowIndex").text(index++);
});

In this case you see that even though index is declared outside the anonymous function, we have the ability to read and update it.
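This works because of closure: the anonymous builder captures the variables in the scope where it was created, so index lives on between calls without any hidden field.  Stripped of JQuery, the mechanism looks like this:

```javascript
// A closure: the inner function keeps a live reference to `index`
// even after makeCounter has returned.
function makeCounter() {
    var index = 0;
    return function () {
        return index++;   // reads and updates the captured variable
    };
}

var next = makeCounter();
next(); // 0
next(); // 1
next(); // 2 -- state survives between calls with no global or hidden field
```

This is exactly how the grid stores its state without hidden form fields: the state lives in variables captured by the functions that need it.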

This should give you an idea of how data centric principles relate to application design.  Remember, by leveraging new tools and the advanced features of modern frameworks like JQuery, you can avoid working directly with HTML by utilizing templates and the JQuery data cache.  Finally, you gain a layer of obfuscation that hides your code's purpose, as well as its data, from the transparent web.

In the next posting, we'll talk about how to utilize this strategy to clean up event binding code, as well as the use of a service oriented approach and the positives and negatives that come with it.


CodeMash 2009 – In Review

I had the chance to attend CodeMash 2009 in Sandusky, OH at the Kalahari Resort.  Apart from being truly impressed with such a fantastic venue, I had the chance to attend a number of remarkable sessions with some great speakers and enjoy great conversation with a number of influencers in the industry.

Among the topics I had the chance to take in were Microsoft Surface, Test Driven Development, Groovy and Grails, Soft Skills, and Water Sliding.  I mention the last one because the hotel has an awesome and tremendously large indoor waterpark and I had the chance with some downtime on Wednesday to take it all in, complete with at least 10 water slides, many of which are at least 5 stories tall. 

But back to technology: I really did enjoy the Surface presentation the most out of all the great presentations I attended.  It was interesting to see how the NUI style of interface is developing, and how computer interfaces have continued to evolve ever closer to the ultimate goal of interacting like real objects: from old Command Line Interface systems, to the great Graphical User Interfaces of today, to the Natural User Interfaces (NUI) of tomorrow.  I was especially impressed with how easy it is to convert normal WPF XAML to Surface XAML; it really is no more than including a new namespace and preceding control names with "Surface".  Special thanks to Jennifer Marsman (@jennifermarsman).

But perhaps the session that opened my eyes the most was the very first one I attended, in which we spoke about Test Driven Development and the idea of exploring how a system COULD be laid out through mocking: starting with no real code, just interface definitions, and gaining an understanding of how you could architect the system to determine what makes sense and what does not.  I really wish this was something we could do in the future at RCM; I think it might help many design a more concise set of rules around their code and build the principles and habits essential to TDD.  Special thanks to Phil Japikse (@skimedic).

But for all the great sessions, CodeMash 2009 would not have been complete without the people and the many friends I have made in the tech community over the past year.  I got the chance at a hands-on lecture about Dependency Injection (using Ninject) from the master himself: Nate Kohari (@nkohari).  I got to sit down and eat with some of the greatest minds and the most awesome people, such as Leon from Telligent (@fallenrogue), with whom I shared JavaScript horror stories and talked about Prototype and its future in the industry given the rapid adoption of JQuery.  Everyone was nice; even when I asked them to teach me to play poker, they were nice enough not to take all of my money, or to laugh when I had to play Rock Band on the easiest settings because I am terrible, LOL.

In the end, it was a great time with great people and I learned a great deal.  I can't wait to get back to RCM and start to spread these ideas and explore the new technologies and techniques I have been exposed to.  Finally, I express my sincerest and most profound gratitude to the innumerable volunteers who made it so awesome.  Thank you, and we will see you next year.

JQuery Form Validation Plugin

This weekend I sat down intent on finishing a portion of my media management application and testing different strategies for client side validation of data using the JQuery Validation Plugin (link).

My initial impressions of this plugin were not very positive, though a lot of that was the learning curve associated with grasping the rather fragmented documentation in certain areas.  What I was looking for from a plugin was the ability to easily define standard validation requirements (required, minlength, equality, range), as well as the flexibility to define my own (an asynchronous uniqueness check against a database, for example).

Below is some code from my application demonstrating the definitions of required and minlength rules:

$("form").validate({
     rules: {
          seriesName: {
               required: true,
               minlength: 2
          }
     },
     messages: {
          seriesName: {
               required: "Field is required",
               minlength: "Name must be at least 2 characters"
          }
     },
     errorElement: "div"
});

The idea here is that the parent key names (seriesName in this case) relate to the form elements themselves.  The documentation on what can be done is skimpy and takes a fair amount of deduction to figure out.  If you look here you can get an idea of the types of validation rules that exist.  But if you're like me, you want to see this in code to see how it's used, and for that there is a good example provided here; just do a view source to see the code.

Most of that is focused on what rules you can define for various elements; the next step is how you communicate to the user what is wrong, and how much flexibility you have in doing so.  Based on the provided examples and my own experience, I was able to change the type of container the error messages are displayed with, as well as its class, so I could style and display them in any manner I desired.  There is a whole host of options for controlling this behavior, listed here.

In the example I provided above, I took most of the defaults and simply changed the error element type to a div instead of label, which is the default element.

So at this point, we have a relatively simple call to a JavaScript method that gives us rule based validation without coding a lot of redundant JavaScript to perform it.  Validate will also internally check whether the selected form is valid and suppress the submit if it is not.
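Under the hood, a rule based validator is essentially a table of named predicate functions consulted for each field.  Here is a stripped-down sketch of that idea in plain JavaScript; the real plugin is far richer than this, and the function names here are mine, not the plugin's:

```javascript
// Minimal rule-based validation: each rule is a named predicate.
var validators = {
    required:  function (value)      { return value != null && value !== ""; },
    minlength: function (value, min) { return String(value).length >= min; }
};

// Check one field's value against its rule set; return the failing messages.
function validateField(value, rules, messages) {
    var errors = [];
    for (var name in rules) {
        if (!validators[name](value, rules[name])) {
            errors.push(messages[name]);
        }
    }
    return errors;
}

// Mirrors the seriesName configuration above.
var errors = validateField("A",
    { required: true, minlength: 2 },
    { required: "Field is required",
      minlength: "Name must be at least 2 characters" });
// errors: ["Name must be at least 2 characters"]
```

The rules/messages objects passed to validate() are simply keyed into a table like this, which is why adding your own rule later is just a matter of registering one more named function.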

The final component I was looking for was a way to implement custom validation and I was very impressed with how the Validation plugin handles this.  First some code:

$.validator.addMethod("IsUniqueName", function(value, element) {
     var id = Number($("#seriesId").val());
     var result = $.ajax({
          async: false,
          type: "GET",
          url: GetApplicationPath("#hidAppPath")
                    + "/Series/AjaxCheckSeriesName",
          data: { seriesName: value, seriesId: id },
          cache: false,
          dataType: "json"
     });
 
     if (result.responseText == "false")
          return false;
     else
          return true;
            
}, "Series name must be unique");

I found this very ingenious, and admittedly confusing the first time I read the documentation (though it was 4am and I had just finished Wave 50 Hardcore on Security in GoW2), but it is actually rather simple.  Notice above that we have the keys required and minlength; these are simply built in rule definitions, and using addMethod we can register our own.

In the case above, I am creating a new rule called IsUniqueName, whose check function makes a call to a server routine that verifies the provided name is unique.  The final parameter is the message to be displayed in the event the expected and actual values do not match up.  To specify the expected value, we modify our existing call to validate as such:

rules: {
     seriesName: {
          required: true,
          minlength: 2,
          IsUniqueName: true
     }
},

Notice the new addition here: we are saying that for the rule IsUniqueName we expect a value of true from the callback.  Notice also that I am using a blocking (synchronous) Ajax call to determine this, mainly because an asynchronous callback would not be able to return the result directly to the handler.  This is a weakness of doing this kind of check inline, instead of having the user request it.
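The blocking-call constraint is easy to see in plain JavaScript: a validator function must return its verdict immediately, so a purely asynchronous check cannot feed it directly.  A small sketch (the names here are illustrative, not from the plugin):

```javascript
// A synchronous check can hand its result straight back to the caller,
// which is what a validator rule needs...
function isUniqueSync(name, existing) {
    return existing.indexOf(name) === -1;
}

// ...but an asynchronous check can only deliver its result later, via a
// callback, after the validator function has already returned.
function isUniqueAsync(name, existing, callback) {
    setTimeout(function () {
        callback(existing.indexOf(name) === -1);
    }, 0);
}

var names = ["Bleach", "Naruto"];
isUniqueSync("Trigun", names);   // true -- usable as a rule's return value
```

This is why the rule above sets async: false on the $.ajax call: it trades UI responsiveness for the ability to return the server's answer inline.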

Overall, I found working with this plugin a little confusing at first, and the documentation seemed scattered.  However, the more I stayed with it the easier it became.  This plugin offers the ability to easily do standard validation as well as custom validation with a minimal amount of custom code.  This is important, as validation tends to involve a lot of redundancy, so a tool such as this makes the process much less error prone.  I should point out that the total LOC was 39 lines for everything I am doing.  The amount of time this will save in future projects is well worth learning this tool and finding out all it has to offer, which is actually quite a bit.

Below are the links I referenced in this article:


Programming Decorator + Notification Pattern

Recently, I was asked to rework the code which handles our company's internal expense tracking for projects.  Before I get into the solution, let me first describe the workflow and some details along with it:

  1. An expense entry is submitted for a valid project, so long as it falls within the budgetary guidelines for the project
  2. The entry is then saved as an Unapproved expense to a corresponding table within the database. If the save occurs on a date that makes the expense late, an email is sent out as well
  3. The expense is then reviewed by the project manager and approved or rejected.  When approved, it is moved from the Unapproved table to the Approved table

Pretty simple for the most part.  The one thing that irked me was that, because the workflow uses two tables, it used two different entities.  That would mean I would have to know which entity I wanted to work with, and that is where I started to consider the Decorator pattern.  The reason is that I don't feel I should care about which entity I am updating; that code already exists, I just need to determine whether I can call it.  For the most part the two entities share many common properties, just with different names, so they can easily be united under an interface.  In addition to the union of properties between the two, I also exposed Save and Delete routines.  This way I could create my decorator containing a single reference to the underlying entity via the abstract type:

public class ExpenseEntry : IExpenseEntity, INotifyPropertyChanged
{
     private IExpenseEntity _entity;
}


This is the definition of the Decorator class, which I chose to call ExpenseEntry.  Notice how it implements our IExpenseEntity interface, which guarantees that interacting with the decorator is the same as interacting with the underlying entities.  Remember, the decorator contains properties that are designed to talk to the underlying object.

One of the requirements is that we want to know when certain properties change, as a change may or may not throw the underlying object into an invalid state depending on context.  Business rules state that only administrators can modify an entry once it has been approved, and the class contains a simple boolean flag to specify whether it is being used in such a context.  However, within a given week both an approved and an unapproved expense can exist, so we need to make sure we leave approved expenses alone and only notify the user of a failure when they actually try to modify one.  This is where the Notification pattern comes in: we want an easy and clean way to notify the parent that something changed and respond appropriately.  Here is the code in the Decorator for handling the event:

public event PropertyChangedEventHandler PropertyChanged;

// Generic so one handler covers properties of any type; the event
// only fires when the value actually changed.
protected void OnPropertyChanged<T>(T oldValue, T newValue)
{
  PropertyChangedEventHandler handler = PropertyChanged;
  if (handler != null && !oldValue.Equals(newValue))
  {
    handler(this, new PropertyChangedEventArgs(string.Empty));
  }
}


The important thing to see here is that if the old and new values match, we do NOT fire the event, as there would be no need to.  Notice we are using generics so that we can use this handler with a variety of data types; the properties we will be monitoring vary in type, and our goal is a single point of monitoring.

If the event is triggered, the handler simply flips a boolean member variable (_dirty) to true.  This variable comes into play when the context is examined as Save is called; an exception is raised if the state is not valid for saving.
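As an aside, these change-then-flag mechanics are not C# specific.  Here is the same idea sketched in JavaScript, with illustrative names (setAmount, approved) that are mine, not from the actual codebase:

```javascript
// Sketch of the notification idea: a setter that reacts only when the
// value actually changes, flipping a dirty flag on the decorator.
function ExpenseEntry(entity) {
    this._entity = entity;   // the wrapped (underlying) entity
    this._dirty = false;
    this.approved = false;
}

ExpenseEntry.prototype.setAmount = function (newValue) {
    var oldValue = this._entity.amount;
    if (oldValue !== newValue) {     // matching values: no notification
        this._entity.amount = newValue;
        this._dirty = true;          // the "property changed" reaction
    }
};

ExpenseEntry.prototype.save = function () {
    // context check at save time, mirroring the post's business rule
    if (this._dirty && this.approved) {
        throw new Error("Approved expenses cannot be modified");
    }
    // ...otherwise delegate to the underlying entity's save here...
};

var entry = new ExpenseEntry({ amount: 100 });
entry.setAmount(100);  // same value: _dirty stays false
entry.setAmount(250);  // real change: _dirty becomes true
```

The key property is the same as in the C# version: setting a property to its existing value is a non-event, so approved expenses that are merely touched, not changed, never trigger a failure.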

This is nothing new to many developers in the world, nor is it necessarily anything spectacular.  But I am slowly beginning to see the fruits of my interaction with the other intelligent people here manifest themselves, through more pattern thinking in the questions I ask and when I ask them.  Initially this solution used a generic event handler with some more complex logic.  I have since simplified it, but I recall commenting to a co-worker that before I came here I would never have dreamed of writing code like this, and now it feels so natural.  Evolution of the mind truly is a wonderful thing to behold.


ASP .NET MVC Authentication Strategies

During the weekend I decided to try out a few strategies for protecting content against anonymous users.  Along the way I got the chance to explore creating custom attributes and to gain a better understanding of how routing really works.  Ultimately I came up with a very decent solution, though not the most desirable one in my mind.

First Attempt: Custom Attribute
For the first attempt I took the standard ActionFilterAttribute and derived from it to create the following attribute class:

public class IsAuthenticatedAttribute : ActionFilterAttribute
{
     public override void OnActionExecuting(ActionExecutingContext filterContext)
     {
          if (!HttpContext.Current.User.Identity.IsAuthenticated)
          {
               string redirectPath = FormsAuthentication.LoginUrl;
               HttpContext.Current.Response.Redirect(redirectPath, true);
          }
     }
}

This is a very rudimentary example of deriving from ActionFilterAttribute; I have seen examples where a custom attribute is used for logging and other reporting features.  What happens here is that we override the OnActionExecuting method from the base class, so this code executes before the method it decorates.  As you can see, if the user is not authenticated we redirect to the login url specified in the web.config.

This is not a bad strategy, and it is the one I am using at present within my application.  However, it was not the approach I was looking for, mainly because I still have to remember to put the attribute on the methods.  What I really want is a single place that enforces the requirement that a user must be logged in for certain actions to take place.  Furthermore, I would like to be able to split a controller's functionality between its admin and non-admin actions.

Second Attempt: Routing

Based on my requirements, I naturally decided to look at routing as a means to prevent access to a particular path (a namespace, in effect).  To begin, I created a directory under Controllers (an Area, as the term is known in the MVC world) called Admin, and a controller called Series there to complement the Series controller in the parent directory.

My first attempt was to institute a specific route for the admin actions and apply a constraint to the route that requires the user to be logged in to access it.  Below is the code I created:

routes.MapRoute(
     "Admin",
     "Admin/{controller}/{action}/{id}",
     new { controller = "Series", action = "Index", id = 0 },
     new {
          isLoggedIn = new AuthenticatedPathConstraint()
     },
     new string[] { 
          "AnimeManager.Controllers.Admin"
     }
);

The fourth parameter to MapRoute is a listing of the constraints to apply to this route.  Constraints are classes that implement IRouteConstraint and are passed in via an arbitrary property name in an anonymous class:

public class AuthenticatedPathConstraint : IRouteConstraint
{
     #region IRouteConstraint Members
     public bool Match(HttpContextBase httpContext, Route route,
          string parameterName, RouteValueDictionary values,
          RouteDirection routeDirection)
     {
          if (routeDirection != RouteDirection.UrlGeneration)
          {
               if (!httpContext.User.Identity.IsAuthenticated)
               {
                    httpContext.Response.Redirect(
                         FormsAuthentication.LoginUrl);
                    return false;
                }
                return true;
          }
           return true;
     }
     #endregion
}

The next parameter is a set of strings representing namespaces to search.  When the MVC framework looks for controllers it, by default, recurses through the Controllers directory and builds a string list of all controller names.  Thus if you have controllers with the same name, even in different namespaces, you will get a server error due to the inherent ambiguity.  The strings provided in the fifth parameter of MapRoute serve to restrict the namespaces where MVC will look for potential controllers.

Now, I want to return to the constraint to explain a small but critical piece.  The comparison against the RouteDirection enum is very important: calling ActionLink from the Html helper invokes the routing mechanism, just as accessing a particular Url does.  Because of the way we are doing the redirection, we want to allow UrlGeneration to occur regardless of authentication state.  To clarify: without this check, generating the link would itself trigger the redirect, and the page would never load, redirecting endlessly.

To properly use this feature, we use the GenerateLink Html helper extension method as such:

<%= Html.GenerateLink(
     "test", "Admin", "Index", "Series",
     new RouteValueDictionary(new { id = 12 }), null)
%>

Using GenerateLink we can describe exactly which route we want to use to generate the final link.  Remember, any time we call ActionLink we invoke the underlying routing system to generate the final URL; the same is true for GenerateLink, except it allows us to tell the routing table which entry we wish to use.  Because of this, I decided to keep a Default route map, place all subsequent routes after it, and refer to them by name as needed.

Conclusion

Both strategies do a good job of minimizing the amount of code needed to protect functionality, and of consolidating the logic that determines authentication status into a single place.  I personally prefer the level of control I get using attributes over routing; granted, the routing approach still needs a bit more work and research, but I think it could be a very acceptable method in the future.

Hacking Linq for Set Based Eager Loading with Entities

First I will show the result:

// Fetch the Series set and force enumeration.
var series = Series.GetAllSeries(em).ToList();

// Load each Series' SeriesGenres collection.
series.Select(s => { s.SeriesGenres.Load(); return true; }).ToList();

// For each Series, load the Genre referenced by each SeriesGenre.
series.Select(s => {
    s.SeriesGenres.Select(sg => {
        sg.GenreReference.Load();
        return true;
    }).ToList();
    return true;
}).ToList();

So what is going on here, and why am I doing it? The reason is that I am trying to keep my context within the controller rather than persisting it to my View. In this case, however, we are dealing with a set of Series rather than a single Series, and we want to load all related types without resorting to foreach statements, since that is exactly what Linq is here to help us avoid. Now, on to what the code is doing.

The first line is straightforward: it makes a call into our Series model and returns an IEnumerable, which we force to enumerate by calling .ToList().

Moving to the second line, understand that our goal is now to Load the SeriesGenres collection of each Series in the set. This is what Select is for: it applies an operation to each instance in a set. Normally, Select returns an IEnumerable<T>, where T is the inferred type of the value being returned. So this code effectively says: "For each Series s in series, call s.SeriesGenres.Load()." However, Load returns void, and Select cannot infer void for T (an IEnumerable<void> would not be especially useful), so we have to return an explicit type it can infer. C# lambdas do allow statement bodies within { }, so we can simply return true from the operation.

An important point: this expression will NOT execute without the call to .ToList() at the end. Why? Because Linq queries use deferred execution; nothing runs until the sequence is actually enumerated. Since we need the loading to happen immediately, in preparation for the next Linq query, we force enumeration by calling .ToList(), which yields a List that we simply discard. One can think of a call to .ToList() as effectively switching context from Linq to Entities to Linq to Objects.
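Both tricks can be demonstrated with plain Linq to Objects, independent of the Entity Framework. The sketch below (names are hypothetical) shows that a statement lambda gives Select a concrete return type, and that the side effect only runs when .ToList() forces enumeration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class DeferredDemo
{
    static void Main()
    {
        var loaded = new List<int>();
        var numbers = new[] { 1, 2, 3 };

        // Nothing happens yet: the query is only defined, not executed,
        // so the side effect in the statement lambda has not run.
        var query = numbers.Select(n => { loaded.Add(n); return true; });
        Console.WriteLine(loaded.Count);  // 0

        // ToList() enumerates the query, running the side effect per item.
        query.ToList();
        Console.WriteLine(loaded.Count);  // 3
    }
}
```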

Moving to the final query:

series.Select(s => {
    s.SeriesGenres.Select(sg => {
        sg.GenreReference.Load();
        return true;
    }).ToList();
    return true;
}).ToList();

We see the same practices at work here as above, only applied at multiple levels. Again notice the use of .ToList() to force enumeration of the set at each level, and the use of multi-statement lambdas to give Select an explicit return type it can infer.

So the main point of this blurb is to show the trouble one has to go through to eagerly load reference types in the Entity Framework in a set-based fashion. Microsoft's recommended practice for the Entity Framework seems to be simply keeping the context open for the duration of each life cycle. Thankfully, they are promising that lazy loading will be more automated in the next release.