Temporary Content Focus Shift

Starting Thursday, April 1, 2010 and running until Saturday, April 17, 2010, I will be shifting the focus of this blog to travel. I am taking my first vacation in nearly two years, to Japan. During this time I intend to post daily updates of my travels through the country, complete with photos. I hope you will continue to read my blog during and after my trip, and that I can provide valuable insight into a different culture.

Introducing Windows Azure

Recently I decided to break the mold and look into something other than WCF RIA Services, mainly because every developer on the planet seems to be doing just that.  As fascinating as I find it, the concept is simple enough that gaining a stronger understanding would not be hard.  So I decided to change gears and look at something with just as much potential: Windows Azure.

So why? Working in New York on what is becoming a very complex project, I have begun to notice that my knowledge of scaling strategies could use some work, and that is exactly what Windows Azure is about: scaling, among other things.  After doing some initial reading, I tend to think of Azure as a way to gain near infinite scalability without the cost or complex designs that generally come with a highly scalable deployment model.  I decided to poke around and gain more insight into Azure, in particular SQL Azure, and get a rough idea of what it could offer.

So, I won't bore you with how to create and deploy a "Hello World" Azure app; that tutorial can be found here.  Essentially, here is what appears to be happening.  When you download and install the Windows Azure Toolkit you get a Cloud Project template.  Using this template prompts you to create "roles", among which is a Web Role. You can roughly think of a Web Role as an instance of a web server running in the cloud.  This means you can dynamically configure how many you want, easily scaling up to match demand without increasing hardware cost. You just gotta pay the man for the bandwidth and processor time you use; very cheap in comparison.  Note: this is an oversimplification and I am aware of it.
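
To make that scaling knob concrete, here is a minimal sketch of the kind of ServiceConfiguration.cscfg file the Cloud Project template generates; the service and role names here are assumptions of mine, and raising the Instances count is how you ask the fabric for more web server instances:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="HelloAzure"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- One Role element per role defined in the Cloud Project -->
  <Role name="WebRole1">
    <!-- Raise this count to scale out; no hardware purchase required -->
    <Instances count="4" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>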

So some links that are helpful:

Perhaps the coolest bit of Windows Azure is the idea of "Database as a Service", brought forth via SQL Azure.  Based on what I am reading, it has the ability to dissolve the common bottleneck that occurs in many applications, where the database is a central point and therefore a choke point.  Using "the Cloud" you can have as many SQL Azure instances running as you like, and they even take care of backup, migration, maintenance, and replication tasks by themselves; all you do is pay for the storage.  I love this aspect, and if my assumptions about it are correct it may prove useful not just to my current client but to many others to come.

The final piece I have been looking at is AppFabric.  Based on what I have seen so far, it appears to give applications a "cloud runtime", complete with configuration settings and the like. I am the least sure about this piece and look forward to more exploration in the future.

To this end I am attending Azure Bootcamp in Southfield, MI next week ahead of my vacation to Japan.

New York City Code Camp

This is coming a bit late, I realize. The Code Camp happened on March 6 and I had planned to have this entry up by March 7, but due to a variety of circumstances it just didn't happen.

Ironically, this was my first big speaking engagement, and I happened to be on business in New York for seven months and chose to present at a camp in a city known for its high-end talent.  Frankly, I was nervous about being blown out of the water by how smart the people there were.  But you know, despite what people tell you, New Yorkers are some of the nicest people I have ever met, so long as they are not driving a car.

My presentation was on Coolite, a set of ASP .NET web controls that encapsulates ExtJS.  You are seeing this trend with a lot of vendors: they are adopting popular JavaScript frameworks and using them as the core of their web control libraries.  In my presentation I did the obligatory History of RIA slides that you will see in any presentation that mentions RIAs, and I talked about how to set up and configure Coolite for operation.  Then I gave some examples.

The big thing I wanted to communicate with the examples was the ease with which Coolite gives you access to rich components that let you create very consistent and clean websites without the aid of a designer.  Consistency is vital in professional rich internet applications, where the user experience will likely determine whether or not users come back; you simply can't expect the user to look around and figure out your page.  The trick is to design the pages similarly, so the user internalizes the pattern when he or she enters the site and begins interacting.

I also discussed some of the strengths and weaknesses of Coolite.  It gives you great modular consistency (you can download new themes) while presenting a lower learning curve for existing ASP .NET developers.  However, its near total lack of documentation (aside from an Examples site) is very disconcerting.  There are also a number of small bugs that tend to show themselves when you really get into using it.

The weaknesses aside, the Coolite framework is a great tool for small to medium projects; throw in the fact that it's free and it is something that could really help you.  However, if you are on a larger project I would recommend looking at something like Telerik, due to the increased maturity of their components and the level of documentation they provide, which can be a huge timesaver on a larger project.

One other note: the samples did use Coolite and I didn't feel like putting a 20MB zip here, so you'll be able to download the PPT, but you'll have to get the source from SVN.  Here are the links:

http://cid-630ed6f198ebc3a4.skydrive.live.com/embedicon.aspx/Public/CoolitePresentation.pptx

To get the source code, please execute the following command in an SVN client:

svn checkout http://cooliteexamples.googlecode.com/svn/trunk/ cooliteexamples-read-only

Thanks again for checking it out.

VB .NET Gotcha and a word on compilation as testing

I am not really a big fan of VB .NET, though clients seem to have this notion that it is somewhat easier to learn than C#.  Where they get this I am not sure; frankly, I totally disagree, because the way typing is handled in VB .NET can make for some very tricky gotchas.  Notably, when attempting to pass around interfaces, you are never told until runtime whether or not the types match and the function can actually be called.  But the following is an even more annoying problem that I ran into recently, and it helps me explain why compilation can be considered a form of testing.  This was the function prior to the change:

Public Shared Function GetFacilityManagementUrl(ByVal facilityId As Integer) As String
    Return String.Format("{0}Pages/Facility/Manage.aspx?{1}={2}", GetBaseUrl(), _
        Core.Constants.FACILITY_ID_PARAM, facilityId)
End Function

So this function was being called throughout the application like so:

ProcessController.GetFacilityManagementUrl(CurrentFacilityId)

Following the refactor, the function looks like this:

Public Shared Function GetFacilityManagementUrl() As String
    Return String.Format("{0}Pages/Facility/Manage.aspx", GetBaseUrl())
End Function

What I always do when I make this kind of change is compile immediately, expecting the build to break.  In this case the code does not break.  Do you know why?

In VB .NET, parentheses are used for both function calls and indexing, and a parameterless function can be called without any parentheses at all.  Strings are naturally character arrays, exposing a default indexed Chars property, so VB sees my function returning a String and interprets GetFacilityManagementUrl(CurrentFacilityId) as GetFacilityManagementUrl()(CurrentFacilityId), indexing the returned string at a particular character position.  Would a unit test have caught this? Of course.  But without having to kick off a test tool I was able to see the problem immediately, because compilation succeeded when I expected it to fail.

Always remember: when you make a change, you should try to break things first.  In cases like this, compilation can be a good first step before you run your actual unit tests.  I generally view tests as checking whether the behavior is correct, while compilation determines whether the behavior has the opportunity to be correct.

Starting with RIA Services

I am planning to speak at the New York City Code Camp on March 6, 2010; my topic will be "Developing Rich ASP .NET Web Applications with Coolite".  To challenge myself before then, I decided to experiment with Silverlight RIA Services by creating a small Silverlight application to get the data into the database.  I have some initial thoughts on RIA Services, having played with it in the past and now, so I felt it would be good to write a blog entry to share my thoughts on the idea and how it works, and to ask some questions.

Let me first state that I think RIA Services is a great idea and a step in the right direction.  No matter what tools we create or how much we abstract, application requirements continue to increase.  As we get better at using our tools, we find ourselves able to focus more on the applications at hand, and when that happens we inevitably think of bigger and better things.  It is a vicious cycle: as we get more time, we use it to make things more complex.  Being a developer is fun 🙂

If you have ever built an advanced web application that leverages the principles fundamental to Web 2.0, you know that it always takes a lot of code and tends to lead to undesirable coupling.  To counter this, Microsoft is working on a new idea for its Silverlight platform that automates much of the communication between the front end application and the server.  Its purpose is to open the door for Silverlight to be used in Line of Business applications.

RIA Services is centered on the idea of Domain Context objects, which are client-side representations of decorated backend Domain Service classes.  When the application is built, Visual Studio uses its internal T4 template generation engine to generate code within the Silverlight project.  This generated code takes care of connecting to the server and fetching data via WCF RIA Services.
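
As a rough illustration, here is a minimal sketch of what a decorated Domain Service might look like; the class, entity, and method names are assumptions of mine, and the exact namespaces have shifted between the preview releases:

// Requires System.Linq, System.ComponentModel.DataAnnotations, and the
// RIA Services server assembly (namespaces varied across previews).

// Entity exposed by the service; RIA Services requires a key member.
public class Product
{
     [Key]
     public int Id { get; set; }
     public string Name { get; set; }
}

// [EnableClientAccess] marks the service for client code generation; the
// build then emits a matching Domain Context in the Silverlight project,
// exposing something like GetProductsQuery().
[EnableClientAccess]
public class CatalogDomainService : DomainService
{
     public IQueryable<Product> GetProducts()
     {
          return new[] { new Product { Id = 1, Name = "Sample" } }.AsQueryable();
     }
}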

Now, I cannot tell you how many times this has been attempted by someone somewhere; every month, it seems, a new framework comes out that promises to change the way we develop the web, and it never does.  RIA Services has a lot of momentum, but it is not easy to get started.  There are a lot of gotchas, mainly because it is still so new and has a lot of things in flux.  .NET 4 promises the first finalized release version, integrated into the .NET framework.

I earnestly look forward to additional development in this area, as I think it is very important.  With the, pardon me, failure of MS Ajax, Microsoft made a great move in accepting jQuery as the JavaScript framework of choice for ASP .NET web apps; now we see them developing a framework to help RIA development in the sandbox.  We can hope for a better outcome for RIA Services.

Understanding Co and Contra Variance in .NET 4

Of all the features coming in .NET 4, perhaps the most exciting to me is covariance and contravariance.  Why does this excite me more than EF and the other new features? Because for those of us who leverage generics and polymorphism, the updates to variance are a godsend. Consider the following example:

This is the abstract class we are going to demonstrate against:

public abstract class Vehicle
{
     public abstract int NumberOfWheels { get; }
     public abstract void Turn(string direction);
     public string Name { get; set; }

     public virtual void Drive()
     {
          Console.WriteLine("Driving Vehicle");
     }
}

We create the following concrete classes derived from the Vehicle abstract class:

public class Car : Vehicle
{
     public override int NumberOfWheels
     {
          get { return 4; }
     }

     public override void Turn(string direction)
     {
          Console.WriteLine("Turning " + direction + " on " + NumberOfWheels + " wheels");
     }

     public override void Drive()
     {
          Console.WriteLine("Driving the Car");
     }
}

public class Truck : Vehicle
{
     public override int NumberOfWheels
     {
          get { return 6; }
     }

     public override void Turn(string direction)
     {
          Console.WriteLine("Truck is Turning " + direction + " on " + NumberOfWheels + " wheels");
     }

     public override void Drive()
     {
          Console.WriteLine("Driving the Truck");
     }
}

Now we will define an extension method to operate on both of these types; we can do this by operating on the abstract Vehicle type:

public static void DriveAll(this IEnumerable<Vehicle> listing)
{
     foreach (Vehicle v in listing)
     {
           v.Drive();
     }
}

So given all this, you might expect this bit of code to compile in C#:

static void Main(string[] args)
{
     List<Car> carList = new List<Car>()
                                    {
                                        new Car() { Name = "Nissan Sentra" },
                                        new Car() { Name = "Volkswagen Passat" },
                                        new Car() { Name = "Toyota Camry" }
                                    };

     carList.DriveAll();
     Console.Read();
}

You would be wrong: this code does not compile, because .NET 3.5 does not support covariance and contravariance.  The compiler will ONLY analyze the outer type, so it sees a List<Car> being passed where an IEnumerable<Vehicle> is expected and never looks at the inner type, which would let it know that since Car inherits from Vehicle this is fine.  To get around this we employ an interface trick that lets us control the inner type and "trick" .NET.  First, we need an interface:

public interface IVehicle
{
     int NumberOfWheels { get; }
     void Turn(string direction);
     void Drive();
}

Next we update the Vehicle definition to implement the IVehicle interface like so:

public abstract class Vehicle : IVehicle
{
     // ...
}

By doing this we are able to constrain the inner type and control it.  The key thing to understand is that extension method resolution is attempting to find a compatible signature to attach to; with this approach we can give it one, while still keeping the flexibility of working with all types inheriting Vehicle.  This is our updated signature:

public static void DriveAll<T>(this IEnumerable<T> listing) where T : IVehicle
{
     foreach (T v in listing)
     {
          v.Drive();
     }
}

To understand what is happening here: we are effectively taking control of the inner type through a generic constraint.  By doing this we provide the signature that .NET 3.5 needs in order to apply the extension, and since we are using a constraint we maintain the flexibility and requirements for the internal type.

However, in .NET 4, where we have covariance and contravariance, we don't need the interface approach through a generic method.  The inspection of the extension method will encompass an analysis of the internal type to determine the inheritance.  This is already done in 3.5 on the outer type (a List<Vehicle> can be passed where an IEnumerable<Vehicle> is expected); in .NET 4 it is done on the internal type as well.
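
As a sketch of what this buys us: under .NET 4, IEnumerable<T> is declared covariantly as IEnumerable<out T>, so the original Vehicle-based extension method works against a List<Car> with no constraint trick at all:

public static class VehicleExtensions
{
     // .NET 4: IEnumerable<T> is declared as IEnumerable<out T> (covariant),
     // so an IEnumerable<Car> converts implicitly to IEnumerable<Vehicle>.
     public static void DriveAll(this IEnumerable<Vehicle> listing)
     {
          foreach (Vehicle v in listing)
          {
               v.Drive();
          }
     }
}

// Elsewhere:
List<Car> carList = new List<Car> { new Car() { Name = "Nissan Sentra" } };
carList.DriveAll();   // compiles under .NET 4, fails under .NET 3.5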

How Can Extension Methods Help You?

I am a huge proponent of extension methods in .NET (and other languages).  I believe they are a gateway to making complex code easier to follow and understand, and they also help maintain the object oriented paradigm that .NET is built on. They are also key to the notion of Fluent APIs, which has been permeating the programming space for the past year or so.

The idea of a Fluent API is an API that reads very much like an English sentence, which is considered a tremendous advancement in readable code. For example:

string value = "500";
int myValue = value.AsInt();

This is a code snippet of a sort you will see a lot in my projects, as I tend to work with a library I have created which contains many extension methods for formatting and parsing values. Notice how the code flows as you read it and how concise it is, with the parsing logic centralized.  Traditionally you might have seen code such as this:

string value = "a500";
int myValue = int.MinValue;

int.TryParse(value, out myValue);   // myValue is now 0, not int.MinValue
myValue = Convert.ToInt32(value);   // throws a FormatException

Using TryParse, a standard method common to all primitive value types in .NET, this code does not throw an exception, and myValue holds zero at the operation's end.  I don't like this because I would expect TryParse to leave the value of myValue unchanged if the parse fails; it does not.  As for Convert, it basically emulates what would happen if you called int.Parse on this value: it throws a FormatException.

In addition to the obvious problem of repeating this code anywhere you need to parse an int from a string, I find it very procedural.  There is no question that all programming, no matter how hard we try, is going to be procedural at some level; however, my goal is always to minimize such code and keep things as object oriented as possible.  After all, that is what OO languages are designed for and, not surprisingly, very good at.
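
For reference, here is a hedged sketch of what an AsInt extension like the one above might look like; this is one possible implementation, not the actual code from my library, and the fallback value is an assumption:

public static class ParseExtensions
{
     // Centralizes string-to-int parsing so callers read fluently as
     // value.AsInt(). Falls back to int.MinValue on a failed parse
     // (the real library may choose a different policy).
     public static int AsInt(this string value)
     {
          int result;
          return int.TryParse(value, out result) ? result : int.MinValue;
     }
}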

Extension methods are also exceptionally useful for custom formatting and output for certain types. For example, let's say you want to output a DateTime object in standard notation for the United States, MM/dd/yyyy.  Simply calling ToString("d") easily accomplishes this. But littering your code with that call could turn into a horrendous maintenance problem if your boss decides it ought to be dd/MM/yyyy (which is common in Europe); use an extension method, and your code could look like this:

DateTime dt = new DateTime(1983, 1, 13);
Console.WriteLine(dt.AsStandardFormatString());

Using this approach you have a standard way of outputting a DateTime, perhaps stored in a different project, that you can easily update, and perhaps make culture aware, all without your application ever being the wiser.
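
A minimal sketch of such an extension (AsStandardFormatString is my own helper name, and the exact format string is an assumption):

public static class DateTimeExtensions
{
     // The display format lives in exactly one place; switching to
     // dd/MM/yyyy later means changing a single line.
     public static string AsStandardFormatString(this DateTime value)
     {
          return value.ToString("MM/dd/yyyy");
     }
}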

So, while extension methods are very useful for type conversion libraries and for standardizing formats and output for types, they have another great use: providing reusable ways to effect changes on objects, collections in particular, while maintaining very clean interfaces and emphasizing "change as a unit of work".  I am sure someone has published something relating to this notion, but let me elaborate a little further.

The idea of change as a unit of work derives from the Single Responsibility Principle and the Open Closed Principle laid out by Robert Martin.  The idea is that everything has one responsibility and should carry out that responsibility effectively and thoroughly, from the construction of methods to the definition of classes; the amalgamation of these constructs comprises the application, and only by working together can they do so.  Consider the following piece of code, which comes from a rather complex process I am working on at work this week.

var scheduleList = dayList.GroupByShiperId().ConvertToSchedule();

(For those concerned, yes I did mutate this code so that it does not reflect what was actually written).

The idea here is to take a very complicated process, break it down into smaller units, and then create functions to carry out those units of work; any change to the information is a unit of work.  Each piece is easily testable and, thanks to extension methods, very clean from an API standpoint. But extension methods are only a small part of this process; understanding the interfaces and classes available to you is critical as well, as it can help you move code to where it should be.  The method in question could easily have been a 200 line function that would have been a nightmare for anyone to maintain.  Instead it is five functions, all under 12 lines, working together to produce a result.
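
To make the shape of this concrete, here is a hypothetical sketch of the kind of small extensions composed in the one-liner above; the Day and Schedule types and the grouping key are inventions of mine for illustration and do not reflect the actual project code:

// Requires System, System.Collections.Generic, and System.Linq.
public class Day
{
     public int ShiperId { get; set; }
     public DateTime Date { get; set; }
}

public class Schedule
{
     public int ShiperId { get; set; }
     public List<Day> Days { get; set; }
}

public static class ScheduleExtensions
{
     // One unit of work: organize the days by shipper.
     public static IEnumerable<IGrouping<int, Day>> GroupByShiperId(this IEnumerable<Day> days)
     {
          return days.GroupBy(d => d.ShiperId);
     }

     // A second unit of work: turn each group into a schedule.
     public static List<Schedule> ConvertToSchedule(this IEnumerable<IGrouping<int, Day>> groups)
     {
          return groups.Select(g => new Schedule { ShiperId = g.Key, Days = g.ToList() })
                       .ToList();
     }
}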

Using this approach you can be less procedural in your code, leverage the object oriented paradigm of the .NET framework, and help the people who come after you understand what is happening. Together with good function naming, you can create similarly clean, reusable APIs for your projects and further leverage the concept of change as a unit of work.

Thoughts on CodeMash 2010

As CodeMash 2010 comes to a close, I would like to take some time to reflect on the conference.  This year was, in my opinion, not quite as good as last year, but still a lot of fun.  I think more people this year decided to forgo the sessions and focus on the networking and social interactions that really define what CodeMash is. To me, CodeMash has ALWAYS been about the people: the people you meet and the people you get to talk to and learn from.  That said, I didn't feel the sessions were quite as good as last year, so we will look forward to next year.

One thing that did happen was my first ever Open Spaces session, where I spoke on developing for Android. Considering I decided I wanted to talk about it only that morning and did no formal preparation, it went very well.  I had about 20 people around me and we just talked.  I explained some of the things I have discovered in developing CodeMashDroid and really piqued the interest of a lot of people.  I want to speak next year, but for right now I am preparing to submit my first abstract to the New York City Code Camp, which would give me a chance to really push RCM's name for web development in the New York area.  But I digress. Speaking of CodeMashDroid, I was amazed at how many people had Droids at the conference. It really is nice to see Android making good headway against AT&T.

I was also not the only one to develop a mobile application for the Droid; an entire company decided to get involved in the development. I missed the emails about posting information on the app so others would know where to find it, and in addition I targeted only Android 2.0+, so I limited myself to those with Droids or newer; but it is good to take away lessons from such an experience.  I also learned about some of the shortcomings in my test methodology and in the emulator's capabilities.

Overall, CodeMash was a great experience and I look forward to the same experience next year.  I will be posting my pictures to Facebook later this week as well as hopefully getting time to write up a few blog entries elaborating on the topics I spoke about in my Open Spaces discussion.

Introducing CodeMashDroid

I am pleased to announce the release of a new app in the Android app store, which I have entitled CodeMashDroid.  This app was developed in response to the request from the CodeMash organizers that attendees develop a solution for managing the CodeMash schedule using a REST API exposing all conference information.  In addition, I had just purchased a Motorola Droid and was curious about entering the world of modern mobile development.  So I set out to create such an application.

As I developed the application, I began to think about the fact that each time I went to a conference we were handed a piece of paper with the schedule, or people would use their laptops to look up times.  With the prevalence of cell phones, especially in the geek community, the smart handset could be used to manage this information and alert people if they are missing something.  Think of it as a specialized calendar, or even better, based on one suggestion, allow integration with Outlook, gCal, and iCal.  From this feedback and train of thought, the idea for a conference managing application was born.  CodeMashDroid is a proof of concept app for this idea, and should it prove successful the project will officially kick into gear after CodeMash, once we have collected feedback.

To download this app, please look for it in the Android app store; it is free and we welcome any feedback as we decide how to move forward. Please leave comments in the app store feedback section; we will be monitoring them and will work the features into planning as we move forward.  We also welcome naming suggestions, as I have no idea what I am going to call this yet 🙂

Thanks

2009 Year in Review

It's amazing how much has changed in my life this decade, even in the last year. We all know how bad 2009 was for a lot of people, and at the start there was much to wonder about.  For me, 2009 was a year of redemption, a chance to make my mark and advance in my industry.  When 2008 ended I had narrowly avoided being laid off from my company, and now, as we ring in 2010, I feel that I can breathe easy with a fair amount of job security.  Many other things occurred, so I thought I would publish a year in review posting for this pivotal year.

Job Questions
As December of 2008 came around, I already had a feeling that I would be included in the layoffs.  I hadn't been on a billable project in quite some time, and though the work I had done was valuable from a process standpoint, as a consultant it really comes down to billable hours. I steeled myself for the outcome, which came to fruition.  But all was not lost: I continued to work even after I was told I didn't have to, because I wanted to finish the final feature I was working on as a token of thanks to the company that gave me my start.  As I started looking around, I garnered much interest from the tech community, and my transition to a new company seemed assured when I caught a most fortunate break: I received a phone call from our head manager in Grand Rapids, who asked me if I would like my job back.  I was taken aback and asked for some time, as I was considering other offers.  But the more I thought about it, the more I wanted to be at RCM; I felt it was the best place to learn and grow.

So I had a conversation in which I asked the reasons behind the decision to let me go; it was because they had not put me on billable work, on account of the good work I was doing to improve process.  I told them that if I were to come back, I would not like to simply be laid off again for the same reasons; something had to change.  And so it did.  From the get-go they were true to their word and threw work at me, and I thrived.  I worked on one of the biggest projects the company had that year, and it became the most successful, a testament to good planning and a well organized team.

Then came another challenge: we had a client who wanted something done, but only PHP could be used due to the existing technologies. This time I pulled out skills I had not used (heavily, anyway) since converting to a .NET programmer and created a well architected, easily extensible enhancement to an existing software product, this time working remotely in more of a leadership role for my portion of the project. Again I thrived.  My boss, who had always believed in me, was thrilled; I was starting to really carve out a niche for myself.  Then it came: the need for someone to go to New York to work on a huge project for another division of RCM, one under the direct supervision of the RCM brass.  After a long interview process, I was chosen.  And so, as I close 2009, I look back on the great breaks I had, knowing full well that my positioning had a lot more to do with getting them than luck.

Preparations for Japan
With questions about my employment swirling at the end of 2008, my Japan trip for 2010 was also placed in doubt.  Once the employment situation stabilized, I began taking steps to prepare for the trip.  The first step was to refresh my knowledge of the Japanese language through study; to aid in this I purchased Rosetta Stone levels 1, 2, and 3 and immediately began my training.  The remote work in New York helped bolster my accounts and allowed me to purchase a new camera, something I had deemed necessary for the trip.  And now, as we enter the new year, I am making final preparations for plane tickets and ryokan, hostel, and hotel reservations following CodeMash.  In addition, I am preparing for the final purchase, a netbook to help with blogging the details of my journey.  Truly, 2009 was a great year that saw me take great strides in preparing for this trip.  The final detail that solidified my desire to go was learning that my host brother and sister would be graduating from chuugaku (middle school) and shougaku (elementary school), and thus I would be in country and able to attend a chuugaku entrance ceremony and a koukou (high school) entrance ceremony.  This is very exciting.

Final Reflections
I generally think that each year should bring knowledge and new experiences from which you grow as a person, and that you should be proactive in seeking those experiences and new challenges to increase your knowledge.  I feel that in the past year I have done this and grown as a person.  I hope to continue to increase my experience and knowledge in 2010 by continuing to challenge myself professionally and by growing from the experiences I will have in Japan.  Happy New Year everyone; have a happy 2010.