NYC ALT .NET Meeting – NoRM and MongoDB

I recently had the opportunity to attend the ALT .NET meeting in New York City and catch John Zablocki's presentation on NoRM and MongoDB.  As someone who was educated on, and has worked his entire career with, only relational databases, it was something of a mind-bending experience: data is stored ad hoc within the "database," with no real relational model to speak of.

The reasons for this shift in thinking vary, but the main one is the increased need to support horizontal scaling. Traditionally we have relied on vertical scaling, whereby a bigger machine is brought in as needs increase. In today's computing world, where we are starting to see limits on processor speed, this is simply not feasible for large, high-volume sites such as Amazon, Facebook, Twitter, and Google.  Horizontal scaling is based on the principle that you can get better performance by increasing the number of machines hosting the service, even if the individual machines themselves are standard consumer boxes.  However, as we begin to leverage this distributed model, the limitations of the RDBMS become apparent, hence the need for something different.

Database people will often talk about ACID as the RDBMS's means of ensuring data consistency. ACID refers to the properties of transactions within the database: Atomicity, Consistency, Isolation, Durability.  But when we move into distributed systems, these become much harder to provide without compromising performance.  In 2000, Eric Brewer introduced the CAP theorem.  CAP stands for Consistency, Availability, and Partition tolerance, and it postulates that a distributed system can only ever provide two of these guarantees at once.  Most RDBMS systems sit on the consistency side of this trade-off, while MongoDB and a number of other NoSQL systems lean toward availability with "eventual" consistency: the system does not guarantee immediate consistency, but it will eventually become consistent.  This is a rather hard concept to get your head around, and I am not here to sell you on the idea; I leave that to members of the NoSQL community.

MongoDB is a document-store NoSQL database. Its databases are composed of collections ("tables"), which are actually sets of schema-less JSON-style documents; yes, this database essentially stores things as JavaScript objects (technically BSON, a binary JSON format). The documents have no set format; they just exist. Each is automatically given a unique key (its primary key) as a means of identification, and documents can contain references to other documents.
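For instance, a stored document might look something like this (a purely hypothetical sample; the field names are invented, but the auto-generated _id key is exactly how MongoDB identifies each document):

```javascript
{
    "_id": ObjectId("4c2209f9f3924d31102bd84a"),  // auto-generated unique key
    "name": "Jane Doe",                           // no schema: any fields may appear
    "tags": ["consultant", ".NET"],
    "employer": { "city": "Grand Rapids" }        // documents can nest or reference others
}
```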

I am not going to go into a whole in-depth discussion of MongoDB or its technical details, but it is definitely something worth checking out.  I was quite impressed by the concepts being pitched by the NoSQL community and how they could be used.  The even better news is that drivers do exist to allow .NET integration and use of LINQ syntax, though be careful: many NoSQL systems are built on Linux and can only be used in such an environment.  MongoDB, however, is one of the ones supported on Windows.  I admit it is a bit strange using JavaScript as the primary means of storing and querying data, but I am still fascinated and looking forward to checking it out.
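To give a flavor of the .NET side, here is a minimal sketch of what talking to MongoDB through the NoRM driver might look like. I am writing this from memory, so treat the API names (Mongo.Create, GetCollection, AsQueryable) and the connection string format as assumptions to verify against the actual library:

```csharp
using System.Linq;
using Norm; // the NoRM driver; API names assumed from memory

public class SpeakerDoc
{
    public ObjectId Id { get; set; }   // maps to the document's _id key
    public string Name { get; set; }
    public string Twitter { get; set; }
}

public class NoRmDemo
{
    public static void Run()
    {
        // Connect to a local MongoDB instance and grab a typed collection.
        using (var mongo = Mongo.Create("mongodb://localhost/demoDb"))
        {
            var speakers = mongo.GetCollection<SpeakerDoc>();
            speakers.Insert(new SpeakerDoc { Name = "Jane Doe", Twitter = "@janedoe" });

            // LINQ syntax over the collection.
            var js = speakers.AsQueryable()
                             .Where(s => s.Name.StartsWith("J"))
                             .ToList();
        }
    }
}
```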

Official link to MongoDB project: http://www.mongodb.org/

What I learned in New York

I think it's finally at a point where I can talk about this with some level of safety.  For the past year, I have been working in New York on a top-secret project for a large company. Initially this started as a three-month gig, but it quickly turned into something much bigger as the scope of the project became clear, and I was placed in the position of Lead Developer.

The application is a typical n-tier application with both a web and a web service component.  What made the application so difficult was the insane number of rules the system had to enforce.  The company is a conglomerate, very similar in fact to RCM, and with that come the differing ways things are done within each of the divisions or, in this case, brands.  This meant that a great deal of research had to go into the system to make sure it upheld the rules for all divisions while also forcing them to adhere to standardization, one of the long-term goals for the company.  Having to deal with this not only taught me patience but also brought an iron truth to the forefront: these processes could change at a moment's notice, and if we didn't design the application for change we would be in a world of hurt.

One of my greatest interests is software development practices and the study of patterns. I am often amazed at how many developers I meet, many with greater experience than I, who seem to know nothing about design and patterns. These patterns saved my skin many times, from keeping things structured to making them more reusable and extensible.  Some of the notable patterns we used were the State, Strategy, and Template Method patterns.

Perhaps the first big example of how these patterns helped was in the client's main central process, a process central to the entire company.  It was initially designed to have four separate implementations; one was relatively simple and dealt only with the items themselves, while the others also had to ensure that an inventory was maintained.  What I ended up creating was a two-tiered abstraction which housed the components common to each level, sketched below.  After discussions with the client we came up with a set of guidelines these processes would follow; you can guess what happened next.  It was at this time the design showed its merit by making the change easy, though in all honesty I was guilty of coupling to an assumption I was told was "set in stone."  The design showed its merit yet again when another portion of the project, a backend process, needed to use it in a different way; rather than needing significant time to create the module, my colleague was able to do it in just under one day.
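As a purely hypothetical sketch (the real domain is confidential, so every name here is invented), the two-tiered abstraction looked roughly like this: a Template Method base class carrying the steps common to every variant, with an intermediate class adding the inventory concerns shared by the other implementations.

```csharp
// Hypothetical reconstruction; all names invented for illustration.
public class Item { /* domain fields elided */ }

public abstract class ItemProcess
{
    // Template Method: the skeleton is fixed, the steps vary per brand.
    public void Run(Item item)
    {
        Validate(item);
        Apply(item);
        Complete(item);
    }

    protected abstract void Validate(Item item);
    protected abstract void Apply(Item item);
    protected virtual void Complete(Item item) { /* default: nothing extra */ }
}

// Second tier: variants that must also keep an inventory in sync.
public abstract class InventoriedItemProcess : ItemProcess
{
    protected override void Complete(Item item)
    {
        AdjustInventory(item);
    }

    protected abstract void AdjustInventory(Item item);
}
```

The simple variant derives from the base class directly, while the inventory-aware variants derive from the second tier.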

That was perhaps the biggest lesson for me: patience.  I was told before I took the gig that I would be expected to train AS400 programmers in .NET.  I initially had three students; today, a year later, I am proud to say that two of them have become excellent programmers and are fully supporting the web piece.  However, the biggest challenge ended up being the mobile portion of the project.

You see, I had never done any serious mobile programming before, and what little I had done was on Android.  However, as the mobile piece showed signs of stalling, my client took a gamble, showed faith in me, and asked me to take it over and see it through.  Upon inspection of the code base it was a clear case of "you gotta rewrite this."  Basically, it was totally abusing system resources, which you might get away with in a desktop Windows app, but certainly not in a mobile app.  This was causing the application to randomly crash without any sort of error, which had led the company to develop a second application to watchdog the app and restart it when it died.

The problem with the application was structure, or the lack thereof.  I will never understand why developers do not understand that a house must have a solid foundation to stand; the same is true of software.  I think more often than not the answer is lack of knowledge; I have seen so many 9-to-5 programmers I am just sickened.  I view being a developer as a lifestyle more than a job.  I have to keep myself sharp or I will become a dinosaur.  I know programmers who work their eight hours, then go home and don't touch a computer until the next day, and yes, it does show.  Thus began my crusade to rewrite the application.

What I learned here was that, having successfully implemented the web application, I had credibility and was able to give my client assurances that I would keep this rewrite under control and it would not run away.  After two months, I am proud to say that we have completed our first pass and all features have been implemented.  I basically ended up writing a full-on mobile framework that allows for workflows, which is what guides their processes.  In the end this separated things so nicely that we were even able to introduce sub-workflows, all of it based on the State pattern.
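To make the idea concrete, here is a rough, entirely hypothetical sketch of what a State-pattern workflow engine of this sort can look like (the client's actual framework is confidential, so every name is invented): the current state decides what to do and what comes next, and a sub-workflow is simply a state that runs a nested workflow.

```csharp
using System.Collections.Generic;

// Hypothetical sketch; all names invented.
public class WorkflowContext
{
    private readonly Dictionary<string, object> _data = new Dictionary<string, object>();

    // Shared data the states read and write as the user moves through the process.
    public Dictionary<string, object> Data { get { return _data; } }
}

public interface IWorkflowState
{
    void Enter(WorkflowContext context);          // show this step's screen, gather input
    IWorkflowState Next(WorkflowContext context); // pick the following state; null = done
}

public class Workflow
{
    private readonly IWorkflowState _start;

    public Workflow(IWorkflowState start) { _start = start; }

    public void Run(WorkflowContext context)
    {
        for (var state = _start; state != null; state = state.Next(context))
        {
            state.Enter(context);
        }
    }
}

// A sub-workflow is just a state whose Enter runs a nested workflow.
public class SubWorkflowState : IWorkflowState
{
    private readonly Workflow _inner;
    private readonly IWorkflowState _next;

    public SubWorkflowState(Workflow inner, IWorkflowState next)
    {
        _inner = inner;
        _next = next;
    }

    public void Enter(WorkflowContext context) { _inner.Run(context); }
    public IWorkflowState Next(WorkflowContext context) { return _next; }
}
```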

So after one year in New York, what is the biggest and most important thing I have learned? I can do it. It has given me the self-confidence to know that I have the ability to be a great consultant and that I am able to take on a leadership role and guide a project technically.  The thing I have had to be most careful about is overconfidence.  I would be lying if I said I didn't consider moving to New York, but if you wait long enough reality sets in, and being away from Michigan for so long has really made me appreciate where I come from.  Much as I like the big city, right now I am still a small-town guy, though Grand Rapids is hardly a small town.

The reality is, I have learned more than I can say, and I am sure not everything was good.  I know I made mistakes, and in the future I must work to avoid and learn from those mistakes.  I must continue to grow.  And I must keep the people who keep my feet on the ground as close to me as possible.

Persisting OAuth using DotNetOpenAuth

To continue my series on OAuth, I am now moving to one of the most critical aspects of any OAuth application: persistence. There is little value in OAuth unless you can automatically authenticate the user without asking them to re-allow access to their profile every time.  This is the core feature of Twitter and Facebook apps on mobile devices, since you don't want to re-authorize the application on every use.  I will admit that getting this to work with DotNetOpenAuth was surprisingly difficult and in the end required what I consider a bit of a hack.  But let's walk through the basics of the approach.

The OAuth Dance
The first thing to understand is the OAuth dance, whereby we gain the user's trust. This is the process of authorization, and it must be completed to confirm the user is okay with us accessing the service to which they belong.  In our example we are using foursquare. Once this authorization takes place, you receive an access token and access secret from the service. Storing these allows you to make authenticated requests now and in the future without asking for authorization again. This article does not deal with the expiration of these tokens.

In DotNetOpenAuth we use a WebConsumer to carry out the requests against the OAuth service. This requires the service endpoints plus a ConsumerSecret and ConsumerKey, which are provided by the service to identify the application. You can then use these to obtain the access token noted above.

Our application
Disclaimer: This application is a proof of concept and does not follow proper coding standards; do not use this code in a production setting.

So our application uses a local SQL Express database to store a simple username and password with the related foursquare data stored in a separate table linked by a foreign key. An Entity Framework layer sits on top of this and allows us to extract these values based on the login and make authorized requests to the foursquare service without the user needing to do anything.

The chunk of code which allows this to happen is here:

[image: code screenshot]

The most critical piece (line 11 of the screenshot) is the TokenManager.  You see, DotNetOpenAuth has this concept of a TokenManager, which holds the token/secret combinations used to access the user's authorized data.  Mine is a very ugly implementation and is not something you would use in production.
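For reference, here is a minimal sketch of what a database-backed token manager might look like. The IConsumerTokenManager members are as I remember them from the DotNetOpenAuth 3.x samples, and the AppEntities context and FoursquareTokens table are invented stand-ins for the Entity Framework model described above:

```csharp
using System.Linq;
using DotNetOpenAuth.OAuth.ChannelElements;
using DotNetOpenAuth.OAuth.Messages;

// Hedged sketch: persists token/secret pairs through our EF model.
public class DatabaseTokenManager : IConsumerTokenManager
{
    public string ConsumerKey { get { return "your-consumer-key"; } }
    public string ConsumerSecret { get { return "your-consumer-secret"; } }

    public string GetTokenSecret(string token)
    {
        using (var db = new AppEntities()) // invented EF context name
        {
            return db.FoursquareTokens.First(t => t.Token == token).Secret;
        }
    }

    public void StoreNewRequestToken(UnauthorizedTokenRequest request,
                                     ITokenSecretContainingMessage response)
    {
        // Persist the short-lived request token/secret so the callback can find it.
    }

    public void ExpireRequestTokenAndStoreNewAccessToken(string consumerKey,
        string requestToken, string accessToken, string accessTokenSecret)
    {
        // Swap the request token for the long-lived access token/secret,
        // tying it to the logged-in user's row.
    }

    public TokenType GetTokenType(string token)
    {
        // Naive: a real implementation would look the token up.
        return TokenType.AccessToken;
    }
}
```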

To clarify, there are two sets of credentials involved: one identifying the application (the ConsumerKey and ConsumerSecret) and one proving the user's authorization to the service (the AccessToken and AccessSecret).

For the application, if the user does not exist in the database, they are given a message stating the login is invalid. If they have a user account but have not yet provided foursquare credentials, we redirect them to foursquare for authorization.  This part was especially hairy.  I use a second page to handle the callback, though the page is never visible; it just performs a redirect.  The idea is that we want to save the access information so we can use it when the user logs in again.
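In spirit, the callback page's code-behind amounts to the sketch below. ProcessUserAuthorization is the DotNetOpenAuth call as I recall it; the two helper methods are invented placeholders for the plumbing described above:

```csharp
using System;
using DotNetOpenAuth.OAuth;

public partial class FoursquareCallback : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Must use the same provider description and token manager that started the dance.
        WebConsumer consumer = CreateConsumer();

        var authorization = consumer.ProcessUserAuthorization();
        if (authorization != null)
        {
            // Persist the access token against the logged-in user; the matching
            // secret was handed to our token manager during the exchange.
            SaveFoursquareToken(User.Identity.Name, authorization.AccessToken);
            Response.Redirect("~/Default.aspx");
        }
    }

    private WebConsumer CreateConsumer()
    {
        // Invented helper: builds the WebConsumer from the foursquare
        // ServiceProviderDescription and the database-backed token manager.
        throw new NotImplementedException();
    }

    private void SaveFoursquareToken(string login, string accessToken)
    {
        // Invented helper: writes the token to the foursquare table linked to this user.
        throw new NotImplementedException();
    }
}
```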

Remember, our application is a web application, so multiple users will hit the same interface.  This differs from a device, where the device itself is associated with the person.  We can, of course, use cookies to circumvent the login, but that is not a matter discussed in this article.

With the foursquare information saved, we redirect back to our main application and present the user with the interface for accessing the various services provided by foursquare.  The following code chunk is an example of calling the user's check-in history and displaying it in a Repeater:

[image: code screenshot]
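In spirit, the call looks like the sketch below. PrepareAuthorizedRequest is the DotNetOpenAuth method as I recall it, and the foursquare v1 endpoint URL and XML element names are from memory, so double-check them against the API docs:

```csharp
using System.IO;
using System.Linq;
using System.Xml.Linq;
using DotNetOpenAuth.Messaging;
using DotNetOpenAuth.OAuth;

public static class CheckinHistory
{
    // consumer and accessToken come from the persistence layer described above.
    public static object GetCheckins(WebConsumer consumer, string accessToken)
    {
        // Endpoint URL assumed from memory.
        var endpoint = new MessageReceivingEndpoint(
            "http://api.foursquare.com/v1/history.xml",
            HttpDeliveryMethods.AuthorizationHeaderRequest | HttpDeliveryMethods.GetRequest);

        var request = consumer.PrepareAuthorizedRequest(endpoint, accessToken);

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            var doc = XDocument.Parse(reader.ReadToEnd());

            // LINQ to XML: shape each <checkin> into something a Repeater can bind to.
            return (from c in doc.Descendants("checkin")
                    select new
                    {
                        Venue = (string)c.Element("venue").Element("name"),
                        Created = (string)c.Element("created")
                    }).ToList();
        }
    }
}
```

The resulting list is then handed to the Repeater's DataSource, followed by a DataBind call.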

Ahh, the beauty of LINQ to XML and how deliciously simple it makes parsing XML.  Granted, foursquare also supports JSON, which I intend to look at later.

To conclude, OAuth is quite interesting in the way it handles authorization. This still needs some fine-tuning before I can take it to the level needed to support the application I am planning to write; for one thing, I would like to encapsulate a lot of the piping for parsing data and the plumbing for the connection and storage of credentials.

Download Source Code
http://cid-630ed6f198ebc3a4.office.live.com/embedicon.aspx/Public/FourSquareFullAuth.zip

Working with OAuth and .NET

I recently had an idea for a mashup that would use foursquare.  I looked at this as an opportunity to understand how I might leverage OAuth in the future as a single sign-on solution.

To start with, I knew that I would need a library to handle the OAuth communication. A Google search turned up the OAuth community code site, which maintains a listing of popular libraries for various platforms. Among those for .NET are DotNetOpenAuth and oAuth for .NET.  I decided to go with oAuth for .NET first.  It seemed like a solid library but for one minor drawback: it uses DI.  Dependency Injection is not a bad thing, but it's not something I should need to know about just to use a library.

So I decided to check out DotNetOpenAuth, which I came to find out is the same library used by Stack Overflow.  After a fair amount of testing I got a working example that achieves my first milestone: getting the token for authorization.  To start with, however, you will need to get a Key and Secret from foursquare, which identify your application to the service. You can get that information here.

So the first step is understanding how to get the token that proves the app is authorized to access the account.

[image: code screenshot]
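As a hedged sketch (member names follow the DotNetOpenAuth samples as I remember them; the button handler name is invented), the code in the screenshot amounts to something like this:

```csharp
protected void AuthorizeFoursquare_Click(object sender, EventArgs e)
{
    // InMemoryTokenManager ships with the DotNetOpenAuth samples; fine for a
    // spike like this, not for production. providerDescription is shown below.
    var tokenManager = new InMemoryTokenManager("your-consumer-key", "your-consumer-secret");
    var consumer = new WebConsumer(providerDescription, tokenManager);

    // Builds the request-token message and redirects the browser to the
    // foursquare authorization page.
    consumer.Channel.Send(consumer.PrepareRequestUserAuthorization());
}
```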

This uses the DotNetOpenAuth WebConsumer class to set up our call into the foursquare OAuth service.  The service addressing is defined within the provider definition, shown below.

[image: code screenshot]
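The provider definition would be something along these lines; the type names follow the DotNetOpenAuth samples, and the foursquare v1 endpoint URLs are from memory, so confirm them against the foursquare OAuth page:

```csharp
using DotNetOpenAuth.Messaging;
using DotNetOpenAuth.OAuth;
using DotNetOpenAuth.OAuth.ChannelElements;

private static readonly ServiceProviderDescription providerDescription =
    new ServiceProviderDescription
    {
        RequestTokenEndpoint = new MessageReceivingEndpoint(
            "http://foursquare.com/oauth/request_token", HttpDeliveryMethods.PostRequest),
        UserAuthorizationEndpoint = new MessageReceivingEndpoint(
            "http://foursquare.com/oauth/authorize", HttpDeliveryMethods.GetRequest),
        AccessTokenEndpoint = new MessageReceivingEndpoint(
            "http://foursquare.com/oauth/access_token", HttpDeliveryMethods.PostRequest),
        TamperProtectionElements = new ITamperProtectionChannelBindingElement[]
        {
            new HmacSha1SigningBindingElement()
        },
        ProtocolVersion = ProtocolVersion.V10a
    };
```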

Looking at this code you can see what we are doing: basically pointing at where to get the OAuth tokens from the foursquare service.  The information for foursquare is here, though it is pretty self-explanatory.

The call to Send on the Channel property causes a redirect to the foursquare auth page, where the user can enter their credentials and allow the app to access their data. One thing I found curious was that my token kept changing whenever I ran my test, which makes me wonder whether I will be able to store a token in a cookie and thus refrain from authorizing every time; authorizing on every run seems counterintuitive.  (As it turns out, the request token is one-time use; it is the access token that you can store and reuse.)

Thus, after the user clicks Allow, foursquare redirects back to the page the request originated from, unless a callback URI is defined.  The best way to understand how to use the library is this example from the foursquare HowTo: http://tinyurl.com/2b9x66a.

At this point I am very confident in my understanding of the OAuth workflow. My next steps will be developing an understanding of how I can store the token and then use it repeatedly without reauthorizing, and actually pulling data from my foursquare account.

Thanks to the DotNetOpenAuth team; the Twitter interaction sample was also very helpful for understanding the library and how it can be used.

Developing a Silverlight Maps app for Windows Phone 7

I decided to play around with Windows Phone 7 today.  While I have been present for and listened to many people talk about the framework, I had not personally had a chance to play with it.  I set a small goal of getting the Bing Maps Silverlight control to work.  It took some work, but eventually I got it.  I also decided to take a different approach and create a video using Camtasia Studio.

The video is available here: http://www.youtube.com/watch?v=v0YbfXA3KHI

New York City Web Camps

While I was at the WCF Firestarter during the weekend of June 19th, my friend Peter Laudati (http://www.twitter.com/jrzyshr) asked if I would be willing to use my .NET web expertise and help out at the Microsoft Web Camps the following weekend.  And so I caught the 7:49 Ronkonkoma and arrived at the Microsoft Offices in Manhattan at 9:20am.  I was greeted immediately by Jon Galloway and spent the remainder of the day in the Palace conference room assisting people with tutorials and projects, mostly involving MVC.

I always enjoy these sorts of experiences because it's a chance to engage others in intelligent discussion about technology.  I don't get the raging Apple fanboys, like my brother, who love their company but don't understand any empirical reasons for why they love it.  We talked about a lot of things, from the debate between Web Forms and MVC to the future of mass transit in the United States.  I also got the chance to help some really awesome people play with Microsoft's new technologies and practices.  My only regret, kind of, is that I didn't bring my laptop; though I didn't bring it because I would have been forced to bring a real keyboard with me, and the absence of the 'O' key is a real nuisance.

But in the end I had a great time and the projects the teams created were pretty nice, given the time constraints. It also made me realize that at some point I need to do an actual production project in MVC; I clearly understand the theory well enough.  One of my personal projects uses MVC, but I currently have no client projects using it; something I need to rectify.

NYC WCF Firestarter

I had the chance to attend the WCF Firestarter event put on by Microsoft in New York City. It was similar in many ways to the Silverlight event I attended a couple of weeks back.  We got to hear from MVPs Miguel Castro and Don Demsak, and from Peter Laudati, a Microsoft lead evangelist for the New York/New Jersey area.  Needless to say, these are some brilliant people, and boy were the sessions chock-full of useful information.

One of the big reasons for me attending, outside of my love for community events and a deep interest in WCF, is the current client project I am working on: we are using WCF in conjunction with the Compact Framework, or as I call it, Microsoft's bastard framework child.  Ever wonder why Microsoft fell behind in the smartphone race? Try programming with CF against modern standards; it is not fun.  I am really hoping CF4 changes all of that, since Apple has shown there is now a tremendous desire among general consumers for smartphones, but that discussion is for another day.  I will say that since Windows Phone 7 will use Silverlight as the basis for its interfaces (which is beyond awesome), I can be almost certain that CF will get wsHttpBinding in addition to the current, and only, choice: basicHttpBinding, the least feature-rich binding.  But more discussion for a later date.

Perhaps the coolest thing I got to see in action for the first time was the use of transactions over these services, and I picked up some ideas for best practices, though I think Miguel might take things a bit too far; as I say, to each their own as long as it works.  In the end, the stuff that Peter, Miguel, and Don showed us just totally blew my mind. It was unfortunate that I attended this event partially sick and without an 'o' on my keyboard (which was especially difficult when they talked about OData).  I really enjoy the topic of OData; in fact my colleague, Chris Woodruff, is doing a national tour (or at least the East Coast and Chicago) giving workshops on using it.  At this point I am not sure I will be able to attend, though it would be nice to see a familiar face; it has been a while.

Really, the biggest thing I took from the entire day was the Transactions and Synchronization feature Miguel showed off. The worst news I got is that since CF only supports basicHttpBinding, we are not in good shape in terms of speed.  We do have performance tests scheduled for this week, but I like to have a contingency should things go astray.

Anyway, I will likely be back in Manhattan next weekend to help out with the Microsoft Web Camp.  As for now, I need to get some rest to prevent a relapse. I am quite certain my client will kill me if that happens 🙂

Using Delegates for the Command pattern

The Command pattern is one of my all-time favorite patterns.  I have used it numerous times with great success. Recently, while rearchitecting an application I am reworking for a client, I needed a means to encapsulate logic that calls a web service and either catches the corresponding error or processes the result if the call succeeds.

The principal problem is that these calls must support the ability to retry should they fail. The previous developer used loops in each function to meet this requirement, flagrantly violating DRY.  My approach was a permutation of the Command pattern: pass a command object into an Executor class, which treats every command generically the same way, simply calling a command method that invokes the particular web service method through a layer of indirection.  Here is my first rough implementation of this idea:

```csharp
public T Execute<T>(ICommandRetry command) where T : ResultBase
{
    int attempts = 0;

    do
    {
        try
        {
            return Execute<T>((ICommand)command);
        }
        catch (CommunicationException)
        {
            var retry = FireOnRetryEvent();
            if (!retry) break;
            attempts++;
        }
    } while (attempts < command.RetryAttempts);

    throw new Exception("Unable to complete communication");
}
```

In addition, I wanted to make sure I could control the return type of the call so I could maintain type safety; hence the generics.  In a later version of this code, I made the RetryAttempts property virtual and defaulted it to 0, which yields exactly one attempt.  Derivative command objects can override this as necessary.
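Since the original command contracts appear only in screenshots, here is a hedged reconstruction of what they plausibly looked like based on the description above; everything beyond the names already shown (ICommand, ICommandRetry, ResultBase, RetryAttempts) is a guess:

```csharp
// Hedged reconstruction; details beyond the names used above are guesses.
public class ResultBase { /* shared result fields elided */ }

public interface ICommand
{
    ResultBase Execute(); // invokes the underlying web service call
}

public interface ICommandRetry : ICommand
{
    int RetryAttempts { get; }
}

public abstract class CommandBase : ICommandRetry
{
    // Default of 0 yields exactly one attempt; derived commands override as needed.
    public virtual int RetryAttempts { get { return 0; } }

    public abstract ResultBase Execute();
}
```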

For the most part this worked, except that it yielded three classes for each web service operation (input, result, and command classes).  That is simply too much ceremony and, given the team that will support the application, would be onerous and error-prone.  Hence I decided to look for another way, and I found it using lambda expressions.

There are three basic built-in delegate types in .NET to which lambda expressions are commonly assigned: Predicate<T>, which is used for boolean tests; Action<T>, which simply calls a function with no return value; and Func<T, TResult>, which is the same as an Action save that it supports a return value.
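For example:

```csharp
Predicate<int> isEven = i => i % 2 == 0;          // boolean test
Action<string> print = s => Console.WriteLine(s); // no return value
Func<int, int> square = i => i * i;               // returns a value
```

An analysis of the previous code yielded the stipulation that all service calls will return a class with ResultBase as its parent.  Hence we arrived at the following code: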

```csharp
static void Main(string[] args)
{
    // PrintResult is an assumed name for the ResultBase-derived return type.
    var function = new Func<PrintInput, PrintResult>(i => (new Server()).Print(i));
    var result = Command.Execute(function, new PrintInput { Name = "Jason" });
    Console.WriteLine(result.Message);
    Console.Read();
}
```

This is the structure of Command.Execute:

```csharp
public static TResult Execute<TInput, TResult>(Func<TInput, TResult> theFunction,
                                               TInput input)
    where TResult : ResultBase
    where TInput : InputBase
{
    return theFunction.Invoke(input);
}
```

So what we have done is use a lambda to replace the original command object; it operates exactly the same way. (Note: this is a different solution, reached at a later date.)  This cleans up our code base and reduces the ceremony we need to go through; no more creating command objects.  However, we really don't like exposing the Func syntax in the main code: if our developers are not as fluent in C# as we are, this can cause a great deal of confusion. In my case, the developers left to support this application will not be as fluent and need certain things stored away where they can be easily copied.  For this, the best pattern is the Factory pattern.  Hence we change our code to the following:

```csharp
static void Main(string[] args)
{
    var command = CommandFactory.GetPrintCommand();
    var result = Command.Execute(command, new PrintInput { Name = "Jason" });
    Console.WriteLine(result.Message);
    Console.Read();
}
```

Not really much has changed except the addition of the CommandFactory static class.  Here is the implementation:

```csharp
public static class CommandFactory
{
    private static Server _server = new Server();

    // Returns the Print method as a method group; PrintResult is an assumed name.
    public static Func<PrintInput, PrintResult> GetPrintCommand()
    {
        return _server.Print;
    }
}
```

This actually works perfectly for my case.  You can see that we were able to remove the crazy lambda that would have caused confusion and replace it with a much simpler expression and, in addition, give it a named function.  This helps give our code meaning.  The code also reads much more easily with the typing inferred rather than explicitly stated.

New York City Silverlight Firestarter

This weekend (Saturday, June 5, 2010) I attended the New York City Silverlight Firestarter.  I had the opportunity to attend the MVC Firestarter in New York last August and subsequently made contact with Steve Bohlen, who helped introduce me to the technical community in New York.  I find that these sorts of events, for me, are hit or miss.  For the most part the content consists of examples I have seen and features I am already aware of, though you also tend to find nuggets of things you didn't know, and this event was no different.

Perhaps the best thing that came out of this talk was clarification of the MVVM pattern.  I attended a session at CodeMash 2010 on MVVM and was able to grasp the basic idea, but I could never understand what the purpose of the ViewModel was, and a controller concept seemed to be missing.  Thanks to the presentation today by Todd Snyder of Infragistics, the concept was clarified for me.  I now see the pattern as mostly a client-side pattern, not a pattern for the solution as a whole.  In fact, the acronym might be more accurate as MVVC (Model View ViewController).

Of course, whenever I hear Silverlight being talked about, I am interested to see how people are using and organizing applications that leverage RIA Services.  I love the concept of RIA Services, but I think the technology needs more maturation before it is truly ready for primetime.  Not to worry; in the meantime we have standard WCF, which works pretty darn well.

But the main reason I go to these events is the people. I just love to meet and talk with people about technology.  In particular, today I was seeking advice on a problem I was having with my proof of concept of a new architecture for an application I am developing for a client. After much discussion I think I settled on something. I do have questions about its performance, but the idea is sound and accomplishes many of the goals I was seeking to achieve.

June 19th is the WCF Firestarter. Very much looking forward to that.

Building Rich Forms with jQuery and Plugins

One of the things I have been focusing on is building live forms for gathering large amounts of data in a rich fashion. For the personal project I am working on, there is a large amount of data that must be collected, and putting it all on one page would make for a difficult user interface. So my goal became to turn this into a wizard, which makes the most sense.  To do this, I decided to use the formwizard plugin.

Understand that this plugin takes a form whose various "pages" are separated by div elements with a class of step; it is also recommended that you apply an id attribute to these div sections.  The setup for the plugin is below:

   1: $("#innerContent form").formwizard({

   2:     formPluginEnabled: true,

   3:     historyEnabled: true,

   4:     validationEnabled: true,

   5:     focusFirstInput: true,

   6:     showBackOnFirstStep: true,

   7:     next: "#mynextButton",

   8:     back: "#mybackButton",

   9:     onShow: function (step) {

  10:         stepChange(step);

  11:     }

  12: },

  13: { /* Validation Configuration - None needed, MS takes care of it */ },

  14: {

  15:     success: function (data) {

  16:         $("#innerContent form").formwizard("show", "#" + data.CurrentStep);

  17:         $("button").removeClass("ui-state-hover");

  18:     }

  19: });

A few things I would like to point out.  First is the onShow event.  This is a custom event I added because the existing, similar event was not quite what I wanted.  I will provide my updated version of the plugin at the bottom; I have communicated the change to the plugin's author, and he is considering it for the next version.

This application uses ASP.NET MVC 2 and leverages the client-side validation generation done by .NET DataAnnotations, formerly of the DynamicData library.  Here is the Speaker class, complete with one validation attribute:

```csharp
public class Speaker : ModelBase
{
    [DisplayName("First Name")]
    [Required(ErrorMessage = "First Name is required")]
    public string FirstName { get; set; }

    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [DisplayName("Twitter")]
    public string TwitterHandle { get; set; }

    [DisplayName("Blog URL")]
    public string BlogUrl { get; set; }

    [DisplayName("Bio")]
    public string Bio { get; set; }

    public string DisplayName
    {
        get { return string.Format("{0}, {1}", LastName, FirstName); }
    }

    public override string[] ToArray()
    {
        return new string[]
        {
            DisplayName, TwitterHandle, BlogUrl
        };
    }
}
```

The formwizard plugin does support validating each step before moving to the next.  You must have the jQuery Validate library for this to work and set validationEnabled to true in the wizard setup.  The goal here is to let MVC 2 generate the validation and then seamlessly use it.

To achieve this, I needed to find the MicrosoftMVCJQueryValidation.js file because, for whatever reason, it does not ship as part of the MVC 2 templates found in Visual Studio 2010; I am told it ships as part of another package.  It is available at the bottom for download.  Once you have the file, include the following line on your page to apply the validation:

```html
<script src="Scripts/MicrosoftMVCJQueryValidation.js" type="text/javascript"></script>
```

This looks for forms on the page and matches their inputs with metadata supplied by the DataAnnotations to generate the validation setup script.  However, I ran into a bit of a gotcha with this, which I will get to later.

Now, if you were just going to move from page to page with no real effects and just show the inputs, you would be set at this point.  But what's the fun in that? I wanted to use a modal to add Speakers to a grid, and I wanted this modal to be validated independently of the formwizard.  It was clear that I would have to manually invoke the validation, because I didn't want the validation rules in the modal to prevent moving to the next screen.

First, this requires the inputs within the modal to be in a separate form. This is the markup:

   1:  <% Html.BeginForm(string.Empty, string.Empty, FormMethod.Post, new { @id = "speakerForm" }); %>
   2:  <div class="modal" id="speakerModal">
   3:      <% Html.RenderPartial("SpeakerModify", new Speaker()); %>
   4:      <p class="footer">
   5:          <%= Html.ActionLink("Add Speaker", "StoreSpeaker",
   6:                   new { }, new { @class="fg-button ui-state-default ui-corner-all", @id="speakerSubmit" }) %>
   7:      </p>
   8:  </div>
   9:  <% Html.EndForm(); %>


Now, I learned something very interesting here, though it makes sense after the fact: you MUST use Html.BeginForm to create a form that MVC will attempt to tie jQuery validation to.  I spent hours trying to understand why my validation did not work, and it was because I initially went with a plain <form> tag.  I had to put debugger statements in the MicrosoftMVCJQueryValidation.js file to fully understand this.

For formwizard you will need this to be outside the main form (which having its own form requires anyway). However, some may know that the jQuery Dialog component is actually moved to the end of the markup during its setup, so you need to add some additional code, like so:

   1: $("#speakerModal").dialog({

   2:     autoOpen: false,

   3:     modal: true,

   4:     title: "Modify Speaker",

   5:     width: 500,

   6:     height: 'auto',

   7:     resizable: false,

   8:     draggable: false,

   9:     open: function () {

  10:         $("#FirstName").focus();

  11:     },

  12:     close: function () {

  13:         $(this).find(":input").val("");

  14:     }

  15: }).parent().appendTo($("#speakerForm"));

Basically, the dialog div starts inside the form, and the process of creating the dialog moves it out, so we move it back into the form after it has been created. Otherwise the validation linking fails when the MVC onReady event fires; this is the reason the form above has an id specified.

Finally, we want to submit the form data to the backend controller action and use the standard model binder to create the object, which can be applied to our model for saving later. Here is the code that checks that the data is valid and initiates a POST to the controller method:

   1: $("#speakerSubmit").click(function () {

   2:         // ensure the form is valid

   3:         if ($("#speakerForm").valid()) {

   4:             var href = $(this).attr('href');

   5:             var formData = $("#speakerModal :input").serializeObject();

   6:  

   7:             $.post(href, formData, function (success) {

   8:                 $("#speakerModal").dialog("close");

   9:                 //UpdateGrid(success.Data);

  10:             });

  11:         }

  12:  

  13:         return false;

  14:     });

This code utilizes the serializeObject plugin which, given a set of jQuery form inputs, creates a JSON object that can be passed to the various Ajax-related jQuery functions.  Also, notice how we are using the href attribute from the link to specify where the data is to be posted; this way we can easily change the URL in the View via the HTML helper ActionLink function.  It just makes more sense than hardcoding the URL in the JavaScript itself.

A word of caution about jQuery validate and jQuery selection: in order for the validation to work, you MUST select the form tag itself.  jQuery does some special logic with this selection; a selected form exposes its various labels and inputs as an array and looks different from a traditional selection of a div or other element. It is this difference that allows jQuery validate to work properly, and it is the reason the id attribute is specified on the Speaker form.

Finally we have the relatively simple controller action:

```csharp
[HttpPost]
public ActionResult StoreSpeaker(Speaker speaker)
{
    CurrentConference.Speakers.Add(speaker);
    return Json(new SubmitActionReturnViewModel
    {
        IsValid = true,
        Data = speaker.ToArray()
    });
}
```

This simply returns a JSON object to be used by the view.  Our next step is to take the returned data, close the dialog, and update the table to show the new speaker.  This uses a collection approach, waiting to save everything until the end.

Using MVC 2, the DataAnnotations model provides an easy way to generate this validation from a central place and makes it usable on both the client and the server.  This is perhaps my favorite feature in ASP.NET MVC.  I think it's incredibly useful and a great way to help developers continue to enforce DRY (Don't Repeat Yourself) in an area where we previously had to do everything on the server.

http://cid-630ed6f198ebc3a4.skydrive.live.com/embedicon.aspx/Public/Scripts.zip