Back for a Bit

So I was at work today. I am currently on the bench, meaning not actually assigned to a project, though I am supporting both Spout and the Columbus Dispatch project. And I noticed my actual website, http://www.jfarrell.net, seems to have lost its domain. I think this is a good point to recreate it anew. I am not really concerned with losing the domain, but I want to be able to create a new site, so for a bit I will be using this to blog until my Nusoft one is ready.

So let's talk about LINQ, which has become a favorite subject of mine, mainly for the expressiveness it allows code to have. While it is obviously still incomplete, Microsoft has made great progress on making life easier for the programmer. Anyone who has heard me talk about it has probably seen examples such as:

var query = from s in db.Series
            select s;

For the uninitiated, this is an example using LINQ to SQL, although it could really refer to any supported LINQ context. In this case db is a generated reference to our database via a set of classes created to represent the database in a clear and strongly typed fashion. I won't spend a lot of time explaining this; I'll save it for another post. Basically, think of it this way: db.Series is a collection of objects, and I am selecting each object and naming it 's'. I then use the 'var' keyword to infer the type of the result based on the evaluation of the right-hand side.

Simply put, var has no declared type; the compiler determines the type through inference from the right-hand side of the query.
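To illustrate, the two declarations below are equivalent; the explicit type is my guess at what the compiler infers for this particular query against the same db context as above:

var inferred = from s in db.Series
               select s;

IQueryable<Series> explicitQuery = from s in db.Series
                                   select s;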

So enough introduction. How is this useful, and what can it do?

In theory, we can use LINQ to generate the vast majority of the tedious grunt work performed in the business layer of an application. LINQ will generate the classes to represent our tables and properties to reference the fields, and will "connect the dots" for our one-to-many relationships. There is no many-to-many support at this point, but I'll show you how to get around that.
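To make that concrete, here is a hypothetical sketch of the kind of entity classes the designer generates; the names mirror my schema, but the real generated file is longer and more elaborate, so treat this as the shape rather than the actual output:

using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Series")]
public partial class Series
{
    [Column(IsPrimaryKey = true)]
    public int seriesId;

    [Column]
    public string name;

    // The one-to-many "dots" get connected for us: each Series
    // exposes its child Seasons as an EntitySet.
    private EntitySet<Season> _seasons = new EntitySet<Season>();

    [Association(Storage = "_seasons", OtherKey = "seriesId")]
    public EntitySet<Season> Seasons
    {
        get { return _seasons; }
        set { _seasons.Assign(value); }
    }
}

[Table(Name = "Seasons")]
public partial class Season
{
    [Column(IsPrimaryKey = true)]
    public int seasonId;

    [Column]
    public int seriesId;
}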

Now you might be thinking this is too good to be true, that surely I'll have to do some work. And you would be correct. Whether by design or because LINQ is incomplete, it appears sparse when set against something like the Rails framework. For example, as far as I have found, counting low-level hierarchical data from the top level cannot be done without creating a query. In Rails, you can add a couple of statements to the relevant models and this information is provided. In LINQ, this was the solution I came up with:

var query = from result in (
                from s in db.Series
                select new
                {
                    SeriesId = s.seriesId,
                    Name = s.name,
                    EpCount =
                    (
                        from ep in db.Episodes
                        join se in db.Seasons
                            on ep.seasonId equals se.seasonId into joinedSeasons
                        from js in joinedSeasons
                        where js.seriesId == s.seriesId
                        select js
                    ).Count()
                }
            )
            orderby result.EpCount descending
            select result;


So let me give some background on this data. This is from my AnimeManager program, which I am redesigning using LINQ; the previous version used RoR. All the main data is organized as follows: Series have Seasons, and Seasons have Episodes. This code returns an anonymously typed object containing a specific series' id and name, and how many episodes are stored across its seasons.

While it may look complex, when broken down it is actually very simple to understand. One thing to note is the way I am using "joined tables". When you're writing LINQ queries, it is very much like writing normal program code in the way you think. You can use parentheses to denote what should be evaluated in what order, and know that the contents of the parentheses will normally return a queryable object, which supports various methods. You can think of these as table objects, though they are more like anonymous tables. You can see the use of the Count method, which I use to total the episodes in the subquery.

The way we write and think about LINQ is mostly backwards from the way we write most queries, mainly due to LINQ's roots in XQuery. The key thing to remember about LINQ queries is the ordering, and to understand exactly what you have at each step. So far, the IntelliSense in Orcas Beta 2 has been very helpful and is a good way to gain an understanding of which variables you have access to at any given time.

I plan to post more about LINQ throughout the week. Toodles!

Mongrel and a New Ruby

Word has reached me recently, primarily through RadRails, of Mongrel, which apparently hopes to become the de facto standard for hosting Rails applications. This being the case, yours truly decided to give it a shot.

According to most of what I read, the best way to acquire Mongrel is to download the gem for it; easy enough, we just need to run gem install mongrel_service. As I run Windows, services are generally the best way for me to manage application servers; I do the same thing with Cold Fusion, Apache, Postgres, and Oracle. The only services that run constantly are IIS (because I can't seem to turn it off and have it stay off) and MySQL, which I keep up for the Anime Database. So, back to the story: I ran that command and, surprise, my gem install appears to be broken. This I suspected; I had similar problems when I was trying to do MySQL work through pure Ruby.

No big deal, I thought. I'll just download a new version of Ruby and have it overwrite the existing files; it's been a while since I changed versions. So I headed out to the Ruby download site and downloaded the Windows One-Click installer. I ran the installer, but it failed, saying it was not allowed to overwrite existing versions and telling me to blow away the directory. I considered my options. Yeah, it's a pain, but how much work could it be? Most of my applications look for a 'ruby' folder name, so they would be unaffected; I might have to change a few config options. So I went ahead and blew away my existing Ruby directory and installed; it worked.

So now I had to assess the damage. As I suspected, Rails was gone, along with many of my other plugins; I expected as much. However, with gem working, I quickly brought them all back. I had anticipated this scenario, knowing that Rails would probably have to be reinstalled, but given that gem makes it easy, I didn't count this as a big deal. So I went ahead and installed Rails and Mongrel and made sure the MySQL gem was up to date. Then I launched my Anime server. I thought it really shouldn't care about a new version of Rails behind it; after all, it's mostly a framework: WRONG!

It yelled and complained about a setting in environment.rb: the version. This seemed strange to me, so I decided I should regenerate the environment.rb file. Before doing this, I copied my Rails app to make sure I would have a backup somewhere. I deleted environment.rb from the Rails app and ran rails Anime from the command line. This didn't seem to correct the problem, so I decided I should regenerate the whole application just to make sure I was starting from a clean slate. I deleted the directory, ran the Rails command again, and then ran the generator script for each of my models and controllers. I then went to the backup directory and copied the files back to their original locations, since the code had not changed. This worked as expected, with one minor problem.

Somehow, a couple of helper methods had disappeared; I could not find them, so I had to reimplement them. It's funny because even in the backup they didn't appear to exist. After doing that, and making sure my other web assets (stylesheets, JS files, and images) were copied over, I ran the application: Success!!

Time to test out Mongrel. Thanks to the Mongrel Win32 HOWTO I was able to easily understand the program. Mongrel is really quite neat in how it organizes itself. You essentially create a Service Profile for each app and run it; you can, of course, pick various configuration options, most notably environment and port number. To take it further, I attempted to use Mongrel from within the recently updated RadRails plugin for Eclipse, but it appears to have been designed exclusively for Linux. This is not an issue, however, as Mongrel is intended primarily as a staple production server for Rails applications, and it's not a good idea to run a server in production mode as a development server because of the load process, meaning you have to restart the service to see Ruby code changes. I have WEBrick working fine from within Eclipse and that is satisfactory for now.

Custom Membership Provider in .NET

There is something that has bothered me for a long time about my upcoming website. I use MySQL to store all of my data, such as these blog entries, which are on Blogger at the moment. However, I wanted to use the built-in .NET user management system, which takes care of registered user management automatically and securely. The problem has always been that while the login mechanism worked perfectly well from the development environment using the built-in server, I could never get it to work once I transferred my work to IIS. I had tried several times in the past, without success, to get this working, but failed each time. So I decided Saturday that I was going to try again for a final time.

I Googled for the better part of a few hours, and had several people attempt to help. But nothing seemed to work, so I decided I was fed up with the matter and thought of implementing my own solution. Given that I wanted to preserve the "clean" code that the automated system employed, I was not able to implement my own in the standard way. After searching through the various objects involved with Membership and Roles in .NET, I was unable to find a good pragmatic solution. I even considered putting it aside to work on at some other time. However, then I remembered something I had looked at quite a while ago: custom providers.

Like any good programming framework, .NET allows you to use custom code to replace or extend existing features. In this case, by default, ASP .NET uses the SQL Membership and Role Provider classes that tell it how to interact with the SQL Server database housing the user information. Simply deriving classes from the abstract MembershipProvider and RoleProvider bases allows you to implement providers for whatever database solution you choose. In addition, because these base classes are set up properly, all features are available to all database types through your implementation. So I decided to move my user login system into the MySQL database housing everything else; this makes more sense anyway, as there is no real use in having two databases of differing types when one will suffice, saving me a connection and a set of connection information stored in the web.config.

So the first thing to understand is the class relationship for these custom providers. Here is the basic relationship:
RoleProvider -> MysqlRoleProvider
MembershipProvider -> MysqlMembershipProvider

So the two classes I am working with are MysqlRoleProvider (for dealing with ASP .NET roles) and MysqlMembershipProvider (for dealing with ASP .NET site membership). I am not going to list out the methods for each that need to be overridden, but rather provide links to the MSDN docs that I found quite helpful.
Example of Membership Provider: Implementing a Membership Provider
Example of Role Provider: Implementing a Role Provider

The first thing to realize is that we have to tell our ASP .NET application which providers to use and which is the default. This involves simply adding a couple of attributes and tags to our web.config file. Here is what I added:

--- For Membership ---
<membership defaultProvider="MysqlMembershipProvider">
     <providers>
       <add name="MysqlMembershipProvider"
            type="MysqlMembershipProvider"
            requiresQuestionAndAnswer="true"
            connectionString=""
            />
     </providers>
</membership>

--- For Roles ---
<roleManager defaultProvider="MysqlRoleProvider">
     <providers>
       <add
         name="MysqlRoleProvider"
         type="MysqlRoleProvider"
         connectionString=""
         applicationName="Website"
         writeExceptionsToEventLog="false" />
     </providers>
</roleManager>

Some things to note: you can leave the connection string empty here and access it as normal through the ConfigurationManager.ConnectionStrings collection; or, if you want a different database to store the information, you can provide the connection string here and access it in the Initialize method implemented by the two providers, through the config function parameter.

Finally, the aforementioned config parameter also houses a key for each of the attributes you defined in the <add> tag in the web.config. This enables you to use this collection to store default values for your custom provider, so you can set up user management however you like. Now let's move on to the actual implementation of these custom providers.
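As a minimal sketch of how that looks in practice, here is a skeleton of the role provider with its Initialize override; the "MySqlConnection" connection string name is a hypothetical entry from my web.config, and the remaining overrides are stubbed out until they are filled in with MySQL queries:

using System;
using System.Collections.Specialized;
using System.Configuration;
using System.Web.Security;

public class MysqlRoleProvider : RoleProvider
{
    private string connectionString;
    private string applicationName;

    public override string ApplicationName
    {
        get { return applicationName; }
        set { applicationName = value; }
    }

    public override void Initialize(string name, NameValueCollection config)
    {
        base.Initialize(name, config);

        // Every attribute from the <add> tag arrives in 'config'.
        applicationName = config["applicationName"];
        connectionString = config["connectionString"];

        if (string.IsNullOrEmpty(connectionString))
        {
            // Fall back to a shared connection string when the attribute
            // in web.config was left empty ("MySqlConnection" is assumed).
            connectionString = ConfigurationManager
                .ConnectionStrings["MySqlConnection"].ConnectionString;
        }
    }

    // The remaining abstract members must be overridden for the class to
    // compile; each one wraps a query against the Roles/UsersInRoles tables.
    public override bool IsUserInRole(string username, string roleName) { throw new NotImplementedException(); }
    public override string[] GetRolesForUser(string username) { throw new NotImplementedException(); }
    public override void CreateRole(string roleName) { throw new NotImplementedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotImplementedException(); }
    public override bool RoleExists(string roleName) { throw new NotImplementedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotImplementedException(); }
    public override string[] GetAllRoles() { throw new NotImplementedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotImplementedException(); }
}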

The first thing I did was implement the membership provider and test its functionality by dragging a CreateUserWizard control onto a separate ASPX page. This part is quite nice, as you can easily set breakpoints in the provider classes and see what's going on; it gets more complicated later, when you can't. But let's walk through the steps I took implementing the Membership and Role Providers, as well as my validation methods.

First, after I created the user-creation page, I added exception throws to each function my class implemented so I would know which functions I had to fill in to get the basic functionality. The first was MembershipProvider::CreateUser, which essentially inserts a user into our database. I used Microsoft's example schema, which is horribly designed and not consistent with database normal forms; it utilizes three tables: Users, Roles, and UsersInRoles.

So the CreateUser function effectively checks whether a user exists for the current application (the application name is stored in the web.config file) and inserts them otherwise with the appropriate values. Perhaps the hardest part of this portion was figuring out how to properly hash the passwords. I don't much like storing passwords as plain text, and since the membership provider supports plain-text, hashed, and encrypted passwords, it seemed like a fairly easy task. However, I was unable to get the automated routines to work, so I ended up finding code that would do an MD5 hash for me, and used that. I am still working on this portion and hope to get the built-in handling to work properly soon, as it is the proper solution.
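For reference, here is a minimal sketch of the kind of manual hashing I ended up with; the Base64 storage encoding is an assumption of this sketch, and MD5 appears only because it is what I used, not because it is a good choice today:

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
    // Hash a password with MD5 and encode the bytes for storage as text.
    public static string HashPassword(string password)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(password));
            return Convert.ToBase64String(hash);
        }
    }
}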

Moving on: now that I could store users, I moved on to allowing them to log in. This was a simple matter of implementing MembershipProvider::ValidateUser. Again, no problems here, and because of the way this is implemented you can customize the way users log in to the application to your heart's content. With this in place, I could now log into my website using MySQL; however, I was missing a key component: roles.
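As a rough sketch of what ValidateUser can look like, assuming the Connector/NET library (MySql.Data), the hashing helper above, and hypothetical Users table column names:

// Inside MysqlMembershipProvider (requires: using MySql.Data.MySqlClient;
// and a connectionString field set up in Initialize as shown earlier).
public override bool ValidateUser(string username, string password)
{
    using (MySqlConnection conn = new MySqlConnection(connectionString))
    {
        conn.Open();

        // Count matching rows; a match means the credentials are valid.
        MySqlCommand cmd = new MySqlCommand(
            "SELECT COUNT(*) FROM Users " +
            "WHERE Username = @user AND Password = @pass AND ApplicationName = @app",
            conn);
        cmd.Parameters.AddWithValue("@user", username);
        cmd.Parameters.AddWithValue("@pass", PasswordHasher.HashPassword(password));
        cmd.Parameters.AddWithValue("@app", ApplicationName);

        return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
    }
}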

ASP .NET roles serve as a simple means of access control; that is, they make it easy to determine who can do what on your site. On my website, the admin controls are only visible to users in the 'admin' role. So without this in place, despite being able to log in, we're not able to do anything useful. So, on to the implementation of the role provider. The development of this class was complicated by the lack of debugging: because modification of roles can only take place in the ASP .NET Web Site Administration Tool, which runs in non-debug mode, throwing exceptions was the only means of determining where control was when something went wrong.

Really the only problem to speak of was the AddUsersToRoles function, which has an odd way of doing things, though it is conducive to the way the configuration tool presents the options to the admin. I had to copy and paste someone else's code, but in the end it was a simple mistake that was killing mine. Once this was in place and I had done some more testing and implemented the remaining needed functions (again using the same process of exception throwing as with the Membership class), I had a working automated authentication system using MySQL on the backend to store users and roles and validate them. My hidden content was even working at this point with the roles that I stored.

Conclusions:
This is definitely one of the more useful features I have found in .NET and is an example of the built-in flexibility a good framework like .NET has, as well as the wealth of features and freedom provided to the programmer. There are very few restrictions placed on the programmer who wishes to implement a custom provider. While I did utilize the database design Microsoft provided, which as I said is horrible, you are by no means restricted to it, as the model is nicely decoupled. That is, the end objects only care about what is returned, not how it was obtained.

I look forward to holding on to this code, as it will be very useful in the future, as will the understanding that applications are not restricted to using only what Microsoft provides.

Final Trip to Kyoto

I would like to sit here and tell you my final trip to Kyoto was one where I finally visited some historic sights, or that it went off perfectly with no flaws. But it was more or less an errand that I took upon myself. I greatly respect my mentor, Professor George Nezlek, so when he asked me to purchase some high-quality Japanese kitchen knives during my stay here, I happily accepted. His request called for 5 ceramic knives. For the uninitiated, these knives, forged from heated ceramic, are sharp enough to cut a thick piece of leather with little effort; sharp enough, in fact, to cut paper in half lengthwise. Thanks to my friend Akane Nambu's help, I secured three of the knives with little trouble from an upscale shopping mall in Kyoto. I also learned there that the sashimi knife my Professor had requested was not available in ceramic form inside the country.

I went to Kyoto today to buy the final knife: a carbon steel sashimi knife. I had the great fortune of speaking with a friend of mine, Zack Edgerton, who lives here in the dorms at JCMU and who lived in Tokyo for a year; he recommended a good knife shop to me. The shop is apparently home to a family who for 1300 years made some of Japan's finest katanas, before they were outlawed in the Meiji era. They have now turned their expertise to forging handmade steel kitchen knives of a similar level of quality. And so today I traveled there on my final trip to Kyoto.

I took the train and subway as planned, found the shop rather easily, picked out the knife, went to pay, and found they do not accept credit cards (this has been a continued frustration for me, as I tend to use my check card for most things). I immediately asked where the post office was, as it is usually home to an international ATM. Thankfully, I was told there was one nearby, so I hurried to it. It was, however, not an international ATM, and thus my card did not work with it. As I stood outside, the rain falling on my head, determined not to return to the dorms empty-handed, I wondered what to do.

It hit me that there is a big post office near the main Kyoto train station that I came in at, so I jumped back on the subway, returned, and was able to withdraw the amount of money necessary to purchase the knife. I returned to the shop on the same subway line. The workers at the store were happy to see me return, and I was even able to have a small conversation with one of the knife makers as he CARVED 'Nezlek' into the blade with a chisel. This was one of the most dexterous things I have seen since I have been here, truly amazing.

Now, with knife in hand, I ventured back to Hikone in the rain, which was now falling harder. I got soaked returning to the dorms, but I made it and wrote to my Professor that I had acquired 4 of the 5 knives. I expect a very happy reply from him shortly; he is very excited, to say the least.

So what did I learn from Kyoto? Having been to Tokyo, and knowing that both Tokyo and Kyoto are considered major cities here in Japan, I am amazed at the difference in infrastructure between the two places. In Tokyo, the subways came very frequently and were consistently on time, which limits the crowding in the cars; this is not the case in Kyoto. The subway was regularly late and the cars packed as full as possible. Somehow, Kyoto seemed more crowded than Tokyo, though this is likely because I was in an open market area of Kyoto, and Kyoto hardly has the same level of infrastructure development as Tokyo.

All in all, I will miss Kyoto and all of Japan when I depart in one week to return to America. Despite this, I am anxious to return; I miss my family and my life there. And it is time for me to finish school.

Ruby on Rails: Cascade Insert

Among the many applications and ideas I have been working with is my Anime Manager, written in Ruby using the Rails framework for web application development. One of the problems I have struggled with involves sequences, since there are two elements that require a certain ordering: episodes and seasons. The requirements for ordering the seasons are less stringent, as the position numbers are used exclusively for ordering purposes and are never actually shown to (or modifiable by) the user.

For example, a series of seasons with position values 1, 2, 4, 5, 7, 8, 9 is not a sequence we need to worry about heavily, as the implementation does not stipulate the display of these numbers; they are used purely as a means of ordering. However, a series of episodes with this same sequence requires much more stringent ordering and sequence maintenance, as episode numbers are displayed to the user. This was the root of a problem I have been battling for some time. Essentially, given the sequence of episodes 1, 2, 3, 4, 6, 7, if I insert an episode at position 1 I would like the resulting sequence to be 1, 2, 3, 4, 5, 6, 7. I know of the acts_as_list construct for models and attempted to use it. However, it is too general: the resulting sequence is 1, 2, 3, 4, 5, 7, 8. Note the gap here.

I did some searching on Google, to no avail; either this is something too rarely encountered, or it is not something skilled Ruby developers have thought about, so I decided to come up with my own solution. The idea I came up with was a simple gap check: given a desired position, we check to see if an episode already exists at that point. If it does, we select that episode and attempt to change its position to the next integer value. We recursively repeat this process until we either encounter a gap or reach the end of the series, at which point the stack begins to unwind.

Here is the code I came up with to perform this cascading insert:
def number=( pPosition )
  # first - use SQL to see if an episode already exists at the specified position
  ep = Episode.find( :first,
         :conditions => [ "season_id = ? and number = ? and episode_type_id = ?",
                          self.season_id, pPosition, self.episode_type_id ] )
  if ( ep != nil )
    # recurse case: bump the occupying episode to the next position
    ep.number = (pPosition.to_i + 1) # recursive call through this same setter
    ep.save # save with the new number
  end

  # assign this episode to the requested position
  write_attribute :number, pPosition
end

I was surprised this was not built into Rails somewhere, but I was glad to help: after seeing this code, a couple of developers expressed interest in adding it to the framework, and some even hinted that they remember seeing a module written to handle this problem.

I hope this helps anyone who wishes to perform this type of insertion into a sequence.

Job Hunting

For as long as I can remember, I have been in school. No, I am not talking about college, but school in general. When I sit down and think about it, I started first grade when I was around 4 or 5, maybe (these things are foggy), and I have been enrolled in an educational institution of some form or another ever since: 18 years, more or less. But all that is about to end. If all goes according to plan, I should graduate in April with a Computer Science Bachelor's degree to go with my Associate in Applied Science degree. Given all that I have learned and taught myself, and the experiences I have undertaken, one would think I would not have any worries about stepping out of academia to work in the professional world. Such people would be wrong.

When I actually think about it, the real world is quite scary. It's not like I have never held a job before; indeed, I have been formally employed since I was 14, but those were mostly jobs to make spending money as a teenager, hardly to support myself. Then, when I was 20, I made the decision to move into the dorms at Grand Valley. While this was definitely a greater degree of self-responsibility, I still had the financial backing of my parents should I fail. When I didn't fail for two years, I decided to take the next step and formally live on my own, renting an apartment for over a year. I have succeeded in this matter of self-sustainability, but the next step is truly the hardest, for it may involve moving away from the family that supports me. But this is what I want, and it was one of my reasons for studying in Japan: to prove to myself that I was capable of supporting myself anywhere I went. I have proven this. But the scariest thing of all is finding a job that I love. The places I have worked until now, internships aside, were all jobs that, in the grand scheme of things, I really didn't care about. They weren't jobs that I put my heart and soul into and actually enjoyed going to all the time. They were a source of income, monotonous sources of income.

But when I return to America in 10 days' time, I will be looking forward to my final semester, which has me taking just two classes. I know the importance of having a job at the end of the semester, to make graduation seem more real and to know that I will be somewhere when I do graduate. I know that my skills are diverse and my mind sharp. I know I have not entrusted all my chances to some flaky technology that may make me obsolete in a week. I know I have read and read to keep myself updated. I know I have constantly looked for ways to improve my skills. I know I am ready, I know I am ready, but am I really?

The real world seems so scary from within academia, where if you don't do a programming project you simply get a 0 and another chance the next time; you can always retake a class. In business, you're likely to get fired. A job that supports my livelihood is scary when I think about it. I always worry about many things that I know I can't control, but that I want to control. Really, I have less right to complain than anyone; I have already been offered several jobs by friends and people I have helped in various ways, but I have my pride, which makes me want to do this on my own to make sure I get the best deal possible. But what if I wait too long? What if I am no good? What if I am nowhere near as smart as I am told I am, or think I am? I always have such doubts, doubts that are only ever eased by experience and the constant process of maturation.

The job hunt begins in January, though I have already started reviewing what's out there. My chances look good, but I will be spending copious amounts of time with GVSU's career services center to fine-tune my resume and maximize my chances. Also, the launch of my new .NET website later this month will affirm to prospective employers that I am committed to Microsoft as well as many other programming platforms. Diversity is my greatest strength and I plan to take it as far as I can.

LINQ: A Language within a Language

For those of you who, like me, like to see what's new in programming, particularly from Microsoft, you have at least heard of Language Integrated Query, or LINQ. For those who don't know what it is: it is basically something we developers have wanted a natural way to do with our datasets since the term "Disconnected Dataset" was coined, perhaps even before. And now, in .NET 3.5, we see the addition of LINQ to C# 3.0. LINQ takes new language features, in particular lambda expressions, and uses them to allow programmers to naturally query all types of complex data sources, in particular XML, databases, collections, etc. There is a wealth of beta documentation available at this point. In this article, I will walk you through the steps I went through to create two basic LINQ applications. The first is a simple query that takes the list of running processes, filters it, and writes the results out using Console.WriteLine. The second is more complex, involving two data tables with information from my Anime database.

A small side note: how this all works at a deeper level is a fascinating, and complex, topic of discussion. Rather than attempt to explain it myself, I instead give you this link to a demo by the creator of C# and lead designer of LINQ, Anders Hejlsberg, where he explains how this is all broken down by the compiler.

Lets start with the first one:
This is the query we are going to build:
var query = from p in Process.GetProcesses()
            where p.WorkingSet >= 4 * 1024
            select new { p.ProcessName, p.WorkingSet };

So if you're like me, you looked at that for the first time and went, "Wow, that looks really strange and very cool". This is a LINQ query, and you guessed right: it is very reminiscent of SQL. So let's look at it. The first step is to declare a variable to hold whatever the right-hand side of the expression evaluates to. We don't know the type, so what 'var' does for us is essentially say "give me a local variable of the same type as what is on the right".

Next we start the actual query on the right side. 'from p in Process.GetProcesses()' is similar to saying 'foreach ( p in Process.GetProcesses() )'. This is a bit backwards, but you can think of 'p' as the variable you use to reference the "columns" being extracted throughout the query; it is not available outside the LINQ expression. Next, we have the option of filtering the data in this query using the trusty where operator. Notice, however, the difference from normal SQL: when we reference the column to be filtered by, we must precede the "column name" with the variable we declared using 'from'.

I want to stop for a moment here; note how I put "column" in quotation marks. I did this intentionally. If you think back to the foreach comparison, the variable declared by 'from' is a member of the IEnumerable you are querying. Hence, 'p' is an instance of the Process class, and thus we have access to all the members we would normally have with a Process object in code. In the case of the filter, we simply ask for the WorkingSet (that is, how much RAM the process is consuming) and compare it against a number (in this case 4KB).

The next small segment is the select, where we define what the objects that come out of the query will "look like". That is, the columns we retrieve will have the same names as the properties of the type of 'p'; in this case we want the ProcessName and WorkingSet properties. These become the properties of our result objects, as you will see below.

foreach (var item in query)
    Console.WriteLine("{0,-25}{1}", item.ProcessName, item.WorkingSet);

Looking at this, you should quickly be able to see why we can access these particular properties. If you can't, simply refer to the select clause the query used. Now you're probably wondering, "Well, that's good, but what if I want to alias a column?", and to that I say, "No problem; in fact, that will be shown in the next section, along with how to join two collections in the query".

Part 2: An Advanced Example
In the first part, I demonstrated a very basic LINQ example of querying and filtering the list of currently running processes on a computer. Now we are going to do a more advanced example. I am going to extract from a database on my system a list of anime series and a list of stored genres. I am then going to join these tables and produce output showing each series name and its related genre. For the sake of simplicity, I am going to skip the portion covering how to get data from the database into DataSets and DataTables in .NET; I assume the reader is familiar with this. So let's start with how to make our DataTables LINQable.

LINQ can only work with objects that implement IEnumerable or IQueryable. DataTable implements neither of these interfaces, so we use a method to create a queryable object, like so:
var seriesQuery = ds.Tables["series"].ToQueryable();
var genreQuery = ds.Tables["genres"].ToQueryable();

With that, we can now use these tables in our LINQ query, so here it is. Don't worry if you're confused by it; I'll explain it line by line.
var query = from o in seriesQuery
            join g in genreQuery on o.Field("genre_id") equals g.Field("id")
            orderby o.Field("name")
            select new { name = o.Field("name"), id = o.Field("id"), genre = g.Field("name") };

Interesting, isn't it? It looks almost identical to a query you would write in SQL. What we are doing here is taking the two tables loaded into .NET DataSets, joining them, and then getting series name, id, and genre combinations. We'll go through this a bit at a time, but some of it should look familiar from the first example. We again define which table to query and which reference variable to use for accessing the data coming from the main table (in this case seriesQuery).

Next, if you follow what is being done, this line:
join g in genreQuery on o.Field("genre_id") equals g.Field("id")
should be easily understandable. We are doing in essence the same thing we did with from, except we are also specifying how the rows line up when we join the schemas. We are also creating the g reference variable for referring to the data in the joined table.

The next line ( orderby o.Field("name") ) is one of my favorite features of LINQ. I like being able to sort this data right alongside doing selections, joins, and filtering. As you can tell from reading it, it will look in the table data referenced by 'o' and use the field 'name' to sort the data. The ascending and descending keywords are applicable here, which makes this very flexible.
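For example, flipping the sort is just a matter of adding the keyword; a quick sketch using the same preview Field API as above:

var byNameDesc = from o in seriesQuery
                 orderby o.Field("name") descending
                 select o;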

If you remember, in the first example I mentioned I would show you how to create column aliases in the returned LINQ data. It's simply an expansion of the select line. In the case of this example, the line:
select new { name = o.Field("name"), id = o.Field("id"), genre = g.Field("name") };
selects the fields named 'name' and 'id' in the table referenced by 'o' and the field 'name' in the table referenced by 'g'. Notice the assignment of these fields; these are the aliases that are created, so this result set will contain three columns named name, id, and genre. As with normal SQL, if we were not to alias certain columns they would simply default to the names in the table. However, we have to alias at least one of the name columns, otherwise the result is ambiguous.

So the result of this query is a 'var' collection which, as I mentioned, is simply of type IEnumerable. This allows us to use it in a foreach statement, among other things. The objects contained within the collection are simple objects with the properties name, genre, and id.
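To round the example out, here is a short usage sketch mirroring the first example (the column widths are arbitrary):

foreach (var item in query)
    Console.WriteLine("{0,-35}{1,-6}{2}", item.name, item.id, item.genre);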

Part 3: Conclusions
So the question is, where would this be useful? While I considered it for disconnected data applications, I always wonder about the level of freshness we have: clearly, if you take a large chunk of data and store it in memory, others who later access the source may get a stale view. Primarily, this would be good for retrieving data and then manipulating it in projections using LINQ, as opposed to constantly querying the database. It is definitely something to watch as .NET 3.5 nears its final release, and with it VB9 and C# 3.0.

What was really fascinating when learning about this is how it gets broken down by the compiler, as explained in the aforementioned video. The entire syntax is abstracted for readability and ease of programming, but at its base level it is very functional, making full use of the lambda expressions and type inference introduced in C# 3.0.

Postgres and .NET with IRC (no code)

Perhaps I share this hobby with many other programmers, in particular those at Google: I love data. I love having lots of data on various things, and I like to use this data to play with new technologies. Often the first thing I will do when learning a language is attempt to understand database access and the limitations therein. Such was the case with PostgreSQL, an RDBMS engine that has always been recommended to me. So recently I decided to download it and see how it worked.

Understanding it from a manual perspective was fairly easy. MySQL, in my opinion, was easier to understand, but I am told Postgres is a more professional RDBMS engine than MySQL. However, understanding anything just by reading is hardly worth anything in the computer industry. The question is: can you use what you know? I have also gotten to the point where small scripts that access the DB and return some results are so trivial that they hardly prove anything to me. No, I need a project of some sort to really work with the database. So an idea came to mind: why not write a word-counting bot for an IRC channel? We could, at some point, use this for stat tracking to see who talks a lot. I elected to write this in .NET; I will explain why in the next paragraph.

Reading from the connected socket was likely to be faster than the database read/write actions, and since I was going to be breaking sentences down by word and checking whether I should update or insert, I felt that making this application multithreaded would benefit the system. One of .NET's strengths is asynchronous programming and how easy this type of programming is made, so given this set of circumstances it felt like the natural choice. The next part was to design my database. As I mentioned, I wanted to test my skills with PostgreSQL, and I elected to use pgAdmin III to create the tables, relations, and sequences needed. Five tables were created: users, words, channels, saycount, and talkcount. The first three are self-explanatory; saycount is how often a word was said in a given channel, and talkcount is simply the number of times a stored user has spoken.

To start this program, I needed to connect to IRC. I have done this by hand before in .NET, but it was quite convoluted and required a lot of time. So I scoured the Internet for a pre-existing library and found Meebey's SmartIRC lib for developers. I have to give Meebey credit: using this lib, it was very easy to make the connection to IRC and join my test channel. However, my next issue came with getting a connection to my PG database from .NET.

I had originally planned to use the built-in ODBC data lib that comes with .NET, but despite my efforts I could not get a properly initialized connection. I had been using ConnectionStrings.com to research PGSQL connection strings when I came across a lib called Npgsql and decided to look it up. This lib reminds me very fondly of the .NET Connector for MySQL; I imagine they work from the same abstract base classes.

Using these two tools, I easily constructed my database and set up a connection; now I had to get it working. The first thing I needed to work out was how my initial thread was going to populate the data and my secondary thread was going to consume it. I decided to create my own class holding all the information I wanted to take from an IRC channel message, and to store instances in a type-safe collection using generics. I made this member static so that I could ensure there was only one copy of the variable and to make access from the secondary thread easier.

My idea is to have the secondary thread ALWAYS attempt to consume the first element in the List. If it is not there, an exception is thrown and caught, and the secondary thread sleeps as it waits for usable data to appear in the list. Keep in mind the main thread is populating this List each time it detects a channel message. So what do I do with this data? Well, I created several objects that have a single constructor taking the string their IDs refer to in the database. I did this to prevent duplicate words (in a channel) from appearing. These constructors look for an ID; if one is present they use it, otherwise they insert and use the newly generated ID. This is how I stored each user/channel/word that I parsed out of the incoming MemberInfo object (the object added by the main thread). To minimize some work for the database, only words longer than four characters are counted.

Contrary to what I expected when I undertook this project, the implementation was actually quite straightforward and easy. In fact, aside from a slight misunderstanding I had with PG, I encountered no problems that were not solved by simple reading. Total project development time was about 2 hours. Once I had the application live on one of the IRC channels I frequent, I was also able to gauge PG's speed as a database, which looks much faster than it would have been had this been MySQL; thus my friends' recommendations appear to be true.

Lesson Learned

As developers of software applications, we must be ever aware of changing trends and be ready to update our work when needed. This includes revision of interfaces and underlying code, though aesthetic changes tend to be the most visible. One of the applications I have been developing for some time now, the Anime Manager, is such an application, and I decided to revisit it.

I created the Anime Manager to sort the vast amount of anime I have collected over the years, as an XML list simply was no longer a workable solution. Originally I started this project in ASP.NET with C#. Very quickly I was able to create a working application, and for a time it was good enough. After a while I decided it was time to revise; however, I didn't want to use .NET this time around, I wanted something better. Being as cognizant as I am of Internet technologies in the geek realm, I decided this would be a great opportunity to familiarize myself with Ruby on Rails. As a result of hard reading and playing, I was able to get a working application up and running, complete with Ajax effects. This is the application I have stuck with to this point; however, things change.

When I first found the Ajax effects in Rails, I marveled at their simplicity, the quickness with which I could use them, and their precision. However, having matured greatly in my understanding of asynchronous technologies, and now having the ability to write them myself, I looked at the interface with skepticism. "Is this really usable?" I asked. The answer was no. Simply put, I was relying far too much on Ajax for the admin interface, and as a result I had to make a lot of compromises when using it, compromises most users wouldn't want to make. This is something every programmer goes through, no matter how much experience they have: we create things, and then when we come back to them, smarter than before, we see our flaws.

I am now in the process of simplifying the interfaces on the admin side to make them more usable and require fewer compromises on the user's part. What this experience has shown me is that I have grown as a programmer and tester since I developed the interface, and that I am taking yet another step towards being a developer who can be a great asset to a company.

Adobe Flex 2 – Initial Thoughts

I am always looking for ways to either improve my existing computer language skill set or expand my tool set to make me more marketable to whatever company I end up working for. Recently, I decided to pick Macromedia (now Adobe) Cold Fusion back up, as I hadn't developed with the language for about three or four years (not since version 5), so I was interested to see how the language had progressed alongside Flash ActionScript and the recent merger with Adobe. It was quite interesting, but it introduced me to something new: declarative programming for Flash, that is, declaratively creating Flash applications. This eventually led me to Adobe Flex 2, which is what I am going to write briefly about below.

Flex 2, being declarative, is based on XML and is written using MXML. The basic syntax looks like the following:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
</mx:Application>

The Application tag defines the application as a whole. It encapsulates all other tags, including function scripts, controls, layout, navigators, and charts. Like Cold Fusion, the language follows a Java-like pattern, so the controls are very similar to Java's: you have panels, buttons, labels, images, etc. (most of the standard controls exist). It is general practice to encapsulate controls within Panels, though this is not required. You will notice the layout attribute in the Application tag; this defines how controls are organized. Using the Flex Builder plug-in for Eclipse, absolute is the best choice because you can use (x,y) coordinates to position controls, but several other modes are available, including horizontal and vertical.

So what would Flex be good for, one might ask? For the most part it serves well for standard standalone applications, but MXML documents tend to be compiled into .swf files, so they can be embedded in standard web pages. This is an interesting idea: when developing for major websites, the developer must assume that JS is not available, or risk having "dead" controls on the page. The same issue exists with Flash, depending on whether the plug-in is available. If the plug-in is available, the developer need not worry about browser differences; that said, it is worse to have the plug-in be unavailable and thus the functionality missing for the user. From this perspective, Ajax is the more flexible solution, as simple JS tests can determine how the UI should be changed to allow or disallow Ajax functionality.

In conclusion, both Flex and Ajax allow developers to improve the user experience of their websites. However, the limitations of Ajax are similar to the limitations of DHTML effects, and Flash still relies on the availability of the plug-in to provide its functionality. Both are acceptable solutions for asynchronous web page development, but in the end I would be more inclined to select Ajax as, at this point, I do not believe there is a way for Flash apps (created with Flex) to update DOM elements of the page; this may change as my understanding increases.

Below is a small Sample Application, to give a better idea of what Flex Apps look like:
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
    <mx:Script>
        <![CDATA[
            [Bindable]
            public var theStates:Array = [ "AL", "AS", "AZ", "NY", "CO", "CA", "LA", "PN" ];
        ]]>
    </mx:Script>
    <mx:ComboBox id="cmbStates" dataProvider="{theStates}" />
</mx:Application>