Problem with ASP.NET Ajax Control Toolkit ScriptManager

I haven't used the ASP.NET Ajax Control Toolkit in quite a while, having concentrated on Silverlight and on jQuery for most of the effects in my applications.  However, I do claim knowledge of the Ajax web application space, so I keep myself abreast of the tools.  Helping a colleague at the client site today, I ran into a bit of a snag.  It seems the Control Toolkit team replaced the use of the ScriptManager tag with their own ToolkitScriptManager tag, which imports the pertinent scripts for the Toolkit's extra controls.  Example:

    <form id="form1" runat="server">
    <asp:ScriptManager ID="sm" runat="server" />
    <div>
        <asp:Label ID="lblDate" runat="server" Text="Date: " />
        <asp:TextBox ID="txtDate" runat="server" />
        <asp:MaskedEditExtender ID="txtDate_MaskedEditExtender" runat="server"
            Enabled="True" TargetControlID="txtDate"
            Mask="99/99/9999" MaskType="Date" />
    </div>
    </form>

This code will not work with future releases of the Control Toolkit.  The curious thing is, I get the following error when I attempt to run it:

[Error screenshot]

Seeing this, you can figure out the fix right away: use the ToolkitScriptManager in place of the normal ScriptManager.  But oddly enough, I did not get this error on my colleague's computer.  We got something similar to this:

[Error screenshot]

You can see this indicates the Sys.Extend.UI.MaskedEditType.None type is null.  This is still more obvious than what I was receiving on his machine, though I believe that has more to do with him running IE6 without script debugging than with the Toolkit.  Either way, if you are encountering this problem, the fix is simply to use the ToolkitScriptManager in place of, or in conjunction with, your ScriptManager.

My guess as to the reason for this change is the continuing growth of the Toolkit.  I can remember when it had about 10 extra controls; now it's become a whole suite.

    <form id="form1" runat="server">
    <asp:ToolkitScriptManager ID="tksm" runat="server" />
    <div>
        <asp:Label ID="lblDate" runat="server" Text="Date: " />
        <asp:TextBox ID="txtDate" runat="server" />
        <asp:MaskedEditExtender ID="txtDate_MaskedEditExtender" runat="server"
            Enabled="True" TargetControlID="txtDate"
            Mask="99/99/9999" MaskType="Date" />
    </div>
    </form>

Gotcha with Compact Framework 3.5

My current project has me wearing many hats, among them developer for the mobile handheld system our client will use to talk to the central architecture via a WCF layer.  This isn't full-blown SOA, mainly because the handheld is complementary to the web piece that was created first.  There isn't enough overlap to justify a full-blown SOA layer; the idea was presented, but we chose to take this approach instead.

Now, one of the things I have come to know about the Compact Framework is that it is totally useless and terrible, except when compared to every other phone/mobile device OS I have ever worked with.  There is a long list of things from the full framework that are not supported, such as the TryParse methods.  And of course, the way you interact with WCF is also heavily restricted; so much so that you must use a special tool to generate the proxy classes.  But that isn't the gotcha.  To explain the gotcha, consider the following class:

    [DataContract]
    public class Person
    {
        [DataMember]
        public string FirstName { get; set; }

        [DataMember]
        public string LastName { get; set; }

        [DataMember]
        public int Age { get; set; }
    }

What you end up with on the client, via the generated proxy class, is essentially a class that looks like this:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }

        // new
        public bool AgeSpecified { get; set; }
    }

So, as you can see, for non-string properties a “Specified” field is added.  This tells the Compact Framework generated base class whether or not to serialize the data from the corresponding property.  I think this is NOT a good idea, as evidenced by my fighting with the system for hours trying to understand why things weren’t serializing right; it only clicked for me today.  To me, it speaks to pure laziness by Microsoft.  We have patterns, such as Observer and PropertyChanged, for handling this very scenario.

So the trick here is: when you are passing your data to the server in an object, you need to set the “Specified” property to true in order to actually see the value on the server.  Hope this helps people, because it drove me nuts.
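To make the fix concrete, here is a minimal sketch.  This Person is a hand-written stand-in mirroring the proxy example above, not actual generated code, and the builder method is mine:

```csharp
using System;

// Stand-in for the proxy class the Compact Framework tool generates;
// property names mirror the example above.
public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int Age { get; set; }
    public bool AgeSpecified { get; set; }
}

public static class PersonRequestBuilder
{
    public static Person Build(string first, string last, int age)
    {
        return new Person
        {
            FirstName = first,
            LastName = last,
            Age = age,
            // Without this flag the serializer skips Age entirely,
            // and the server sees the type's default value (0).
            AgeSpecified = true
        };
    }
}
```

Forgetting `AgeSpecified = true` is exactly what produces those mysterious default values on the server.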

Using Worker Roles with Windows Azure

Windows Azure is considered to be a cloud-based operating system which allows developers to leverage a network of computers to permit what amounts to infinite scaling.  This scaling is all managed within data centers run by Microsoft.  Among the many uses for Azure is computation: with the ability to leverage so many computers, the prospect of performing heavy computation becomes very attractive.  For this purpose, Azure supports what are known as Worker roles, which act as background processing agents.

If you have flirted with Azure at all, you know that one of the most popular uses is to host web applications and allow for “n” instances to be spun up and down at will, thus allowing companies to hug the demand curve rather than planning for the highest usage point.  The roles which carry this out are called Web roles.  The other type of role, the one which carries out background processing, is called a Worker role.

Worker roles are much like Web roles in that they have OnStart and OnStop methods, but while a Web role is externally accessible by default, a worker role is not.  Think of a worker role as a messenger boy: it carries out communication and work behind the scenes.

I have constructed a rather simple example to show them in action.  What we have is a simple guestbook which allows a guest to leave his name, a message, and a picture.  All of this information is stored in a blob and saved to my local storage account.  Meanwhile, the worker role is ticking every second, and if it sees a message in the queue, it processes that message.  In this case, the message is the address of the blob, and the action is to create three images: one resized to half the width of the original, one resized to 1/4 of the original, and the final one the original itself with all meta information.

Below is the code for the Run method, which is invoked as the role begins executing, and the ProcessMessage method which executes a message when it is found in the queue:

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("ResizeWorker entry point called", "Information");

        // begin analyzing the queue
        // we will check every 1 second for new queue items
        do
        {
            var message = _helper.GetQueueMessage(Core.Constants.CLOUD_QUEUE_NAME);
            if (message == null)
            {
                Thread.Sleep(1000);        // sleep the thread before checking again
                continue;
            }

            // process the message
            ProcessMessage(message);

            // remove the message from the queue as it has been processed
            _helper.DeleteMessageFromQueue(Core.Constants.CLOUD_QUEUE_NAME, message);
        } while (true);
    }

    /// <summary>
    /// Process a Queue Message to resize images
    /// </summary>
    /// <param name="message">The queue message holding the blob address</param>
    private void ProcessMessage(CloudQueueMessage message)
    {
        // we are going to create a medium size image (half the width of the original)
        // and a small one (1/4 the width of the original)
        var container = _helper.GetContainer(Core.Constants.CLOUD_CONTAINER_NAME);
        var blob = container.GetBlobReference(message.AsString);

        var pictureStream = new MemoryStream(blob.DownloadByteArray());
        var mediumImageBytes = GetMediumImageBytes(pictureStream);
        var smallImageBytes = GetSmallImageBytes(pictureStream);

        // save to the cloud
        SaveToCloud(blob.Uri.ToString(), smallImageBytes, mediumImageBytes);
    }

Personally, I really hate how they have this set up, basically defining a busy wait inside the role as it looks for something to do.  I think a better approach would be to wire the role up to a queue within the web role and make this event driven.  That seems like it would make more sense given the structure of .NET in general.
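Short of making it truly event driven, one small improvement on the busy wait is to back off the polling interval while the queue stays empty.  This is just a sketch of the delay calculation; the method name and numbers are mine, not anything from the Azure SDK:

```csharp
using System;

public static class PollingBackoff
{
    // Doubles the delay for each consecutive empty poll, capped at a
    // maximum; the caller resets the counter once a message is found.
    public static int NextDelayMs(int consecutiveEmptyPolls,
                                  int baseDelayMs = 1000,
                                  int maxDelayMs = 30000)
    {
        if (consecutiveEmptyPolls <= 0)
            return baseDelayMs;

        // Cap the shift at 10 so the long cannot overflow before the Min.
        long delay = (long)baseDelayMs << Math.Min(consecutiveEmptyPolls, 10);
        return (int)Math.Min(delay, (long)maxDelayMs);
    }
}
```

Inside the Run loop, `Thread.Sleep(PollingBackoff.NextDelayMs(emptyPolls))` would replace the fixed `Thread.Sleep(1000)`, trading a little latency on the first message for far fewer wasted queue checks.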

This code is fairly straightforward.  If you wish to see more of the code and how I am doing the resizing, I invite you to download the source available at the bottom of this page in Visual Studio 2010 format.

Worker roles are a very interesting concept, though by no means new to the world.  The fact that they are totally managed by the Azure cloud makes them easy to deal with.  I also suspect that using the FabricController you can control how many worker roles are spun up or down at a time; that is likely to be my next experiment.  The final piece of code to talk about is the action which handles the saving of the guestbook sign request, for it also speaks to the queue so the worker role can be alerted that it has a job to do.

   1: [HttpPost]
   2: public ActionResult ProcessSign(string name, string message, HttpPostedFileWrapper picture)
   3: {
   4:     using (StorageHelper helper = new StorageHelper("DataConnectionString"))
   5:     {
   6:         var container = helper.GetContainer(Constants.CLOUD_CONTAINER_NAME);
   7:
   8:         string extension = Path.GetExtension(picture.FileName);
   9:         string blobName = Guid.NewGuid() + extension;
  10:         var blob = container.GetBlobReference(blobName);
  11:
  12:         GuestBookEntry entry = new GuestBookEntry()
  13:                                    {
  14:                                        Message = message,
  15:                                        Name = name,
  16:                                        Picture =
  17:                                             new BinaryReader(picture.InputStream)
  18:                                                 .ReadBytes(picture.ContentLength),
  19:                                        PictureType = picture.ContentType
  20:                                    };
  21:
  22:         entry.SaveToBlob(blob);
  23:
  24:         var queue = helper.GetQueueStorage(Constants.CLOUD_QUEUE_NAME);
  25:         var cloudMessage = new CloudQueueMessage(blob.Uri.ToString());
  26:         queue.AddMessage(cloudMessage);
  27:     }
  28:     return Index();
  29: }

The key code to look at here is lines 24-26, though admittedly you won't see much, as I have it wrapped in my StorageHelper class, which is available via the download.  The way I have it set up, the worker role checks every second for a message.  Still, even with such a short delay, I cannot guarantee the new pictures will be in place, so I have to use the original for display, or do a resize here and send the original dimensions as part of the message.

That is a weakness of the Worker role with respect to web development: we will not see the work done instantly for things like resizing or even routine maintenance.  One has to be very careful when using Worker roles.

http://cid-630ed6f198ebc3a4.skydrive.live.com/embedicon.aspx/Public/PictureUploadAndResize.zip

Using Tables for Storage with Windows Azure

In my previous post I talked about using Blobs for storage with Windows Azure.  In this article we will talk about tables, which are similar in many respects to blobs, though, as you might expect, with a higher degree of organization.

Let's first try to understand how tables are organized.  Many of you are thinking of something similar to SQL tables, and you are partially correct, but a table is more of a bucket for storing entities with similar structure.  Basically, your class definition defines the table structure; you are not explicitly creating a table with columns.  That said, Azure does make one requirement of these classes: they must inherit from TableServiceEntity.  Here is the code for our PictureEntry object:

    public class PictureEntry : TableServiceEntity
    {
        public string Name { get; set; }
        public byte[] Picture { get; set; }
        public string Type { get; set; }
        public int Length { get; set; }

        public PictureEntry()
        {
            PartitionKey = "picture";
            RowKey = Name + Guid.NewGuid();
        }
    }

The key thing to notice in this snippet is the constructor, which references the two properties given to you via the inheritance.  Remember, Azure tables ARE NOT SQL tables.  You are not creating separate tables for data; you have a bucket, and the PartitionKey is used to separate the entities for different applications/uses, so to speak.  It is a partitioning mechanism, similar to defining a table.  The idea behind RowKey is to provide a way to uniquely reference a row; think of an auto-incrementing field.
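One subtlety in the PictureEntry constructor above: C# object initializers run after the constructor body, so Name is still null when RowKey is computed, and the RowKey ends up being just the Guid.  A self-contained sketch (with a faked TableServiceEntity base, since the real one lives in the Azure SDK) demonstrates the ordering:

```csharp
using System;

// Minimal stand-in for the Azure SDK's TableServiceEntity, just
// enough to demonstrate the initialization order.
public abstract class TableServiceEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
}

public class PictureEntry : TableServiceEntity
{
    public string Name { get; set; }

    public PictureEntry()
    {
        PartitionKey = "picture";
        // Name has not been assigned yet, so this is null + Guid,
        // which concatenates to just the Guid text.
        RowKey = Name + Guid.NewGuid();
    }
}
```

`new PictureEntry { Name = "Fuji" }` still produces a RowKey with no trace of the name; passing the name through a constructor parameter avoids the surprise.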

The following picture, taken from Cloudy in Seattle, illustrates this organization:

[Diagram]

So to understand the setup: in my previous post I noted, at the end, that you do have to modify the WebRole.OnStart event to get things to work, because the publisher settings must be established.  Here is the code for the OnStart event:

    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) => {
            // Provide the configSetter with the initial value
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
            RoleEnvironment.Changed += (sender, arg) => {
                if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>().Any(
                    (change) => (change.ConfigurationSettingName == configName)))
                {
                    if (!configSetter(
                        RoleEnvironment.GetConfigurationSettingValue(configName)))
                    {
                        RoleEnvironment.RequestRecycle();
                    }
                }
            };
        });

        return base.OnStart();
    }

Our next step is to initialize the storage location.  Here we will provide a name for our bucket.  Below is the code snippet:

    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    account.CreateCloudTableClient().DeleteTableIfExist("Pictures");
    account.CreateCloudTableClient().CreateTable("Pictures");

This is sort of backwards considering the CreateIfNotExist pattern that Azure recommends; however, in this experimental application I wanted simply to remove the existing table and recreate it whenever this method is invoked.  The table is, quite literally, a bucket for entities, as the diagram above shows.  It is important to remember, however, that these tables ARE NOT relational, and many of the aspects of T-SQL programming we take advantage of are not supported here; that is what SQL Azure handles.

To actually add and remove PictureEntry instances from our table, we are going to employ a repository pattern using Azure’s managed library classes.  Here is our context layer:

    public class PictureEntryDataContext : TableServiceContext
    {
        private CloudStorageAccount _account;

        public List<PictureEntry> Pictures
        {
            get
            {
                var result = PictureQueryList;
                if (result == null)
                    return new List<PictureEntry>();

                return result.ToList();
            }
        }

        private IQueryable<PictureEntry> PictureQueryList
        {
            get
            {
                if (!DoesSchemaExist)
                    return null;

                return CreateQuery<PictureEntry>("Pictures");
            }
        }

        public PictureEntryDataContext(CloudStorageAccount account)
            : base(account.TableEndpoint.ToString(), account.Credentials)
        {
            _account = account;
        }

        public void AddPictureEntry(PictureEntry entry)
        {
            AddObject("Pictures", entry);
        }

        public PictureEntry GetPictureByRowKey(string rowKey)
        {
            return Pictures.FirstOrDefault(p => p.RowKey == rowKey);
        }

        #region Schema Methods
        public bool DoesSchemaExist
        {
            get
            {
                return _account.CreateCloudTableClient().DoesTableExist("Pictures");
            }
        }
        #endregion
    }

In this case, we are defining a context class inheriting from TableServiceContext.  This class gives you many of the methods seen in the Entity Framework, such as SaveChanges, but it is specifically for working with Azure and cloud-based tables.  For the most part this code is fairly straightforward.  What links this class to the cloud is the constructor which takes a CloudStorageAccount as a parameter; we have seen this class frequently between this entry and the last.

So let's talk about how we would add a picture to our table.  To be fair, I used WebForms last entry; this time I will use ASP.NET MVC.  Here is my action receiving the form with the file upload:

    public ActionResult AddPicture(string name, HttpPostedFileWrapper uploadedFile)
    {
        var entry = new PictureEntry
        {
            Name = name,
            Length = uploadedFile.ContentLength,
            Picture = new BinaryReader(uploadedFile.InputStream)
                        .ReadBytes(uploadedFile.ContentLength),
            Type = uploadedFile.ContentType
        };

        var account =
            CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var ctx = new PictureEntryDataContext(account);
        ctx.AddPictureEntry(entry);
        ctx.SaveChanges();

        return Index();
    }

I honestly didn’t know you could do this; I just decided to try it based on my understanding of the theory.  Who knew you could specify the uploaded file in the parameter list as opposed to getting it from Request.Files?  Our only goal here is to get the binary stream into byte array form so we can assign it to the PictureEntry instance we are creating.  After that it's just basic interaction with our context instance.

Like the Blob example, we would like to be able to show the user what we have stored in the table.  However, in ASP.NET MVC we don't normally think of using an ASHX handler; it's kind of against the convention.  The better method is to call an action which returns binary data.  After some looking I was surprised that this did not come built in, meaning I would have to write something myself.  As it turns out, people have run into this issue, and a solution exists using BinaryResult (code is courtesy of Jim Guerts).  Note: I am told that a version of this code exists in MvcContrib, but I couldn't find BinaryResult in the most recent release.

With this and my context, I need only pass the RowKey and use it as an ID to extract the image's binary stream:

    public ActionResult GetImage(string id)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var ctx = new PictureEntryDataContext(account);

        var pictureEntry = ctx.GetPictureByRowKey(id);
        if (pictureEntry == null)
            return ViewContents();

        return new BinaryResult() {
            Data = pictureEntry.Picture,
            ContentType = pictureEntry.Type
        };
    }

If the image does not exist, I simply return an empty view, which results in a broken image being displayed.  Here is the code for the view that uses GetImage:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <h2>View</h2>
        <table>
            <tr>
                <th>Name</th>
                <th>Type</th>
                <th>Picture</th>
                <th></th>
            </tr>
            <% foreach (var entry in Model) { %>
            <tr>
                <td><%= entry.Name %></td>
                <td><%= entry.Type %></td>
                <td><img src="/Home/GetImage/<%= entry.RowKey %>" /></td>
                <td>[ <%= Html.ActionLink("Delete", "RemovePicture", "Home",
                        new { id = entry.RowKey }, new { @class = "removeLink" }) %> ]</td>
            </tr>
            <% } %>
        </table>
    </asp:Content>

I always hate how this code looks.  Soon I hope to make Spark my default ViewEngine for everything, but for now I am still using WebForms and getting ugly code like this, which reminds me too much of classic ASP.

To conclude, working with tables in this simple example is not that much different than working with Blobs.  Tables do provide a bit more structure, and they replace metadata with the properties that most developers are intimately familiar with from working with .NET.  I would prefer tables to blobs because of this structure.

  Download the Code here (Visual Studio 2010)

  http://cid-630ed6f198ebc3a4.skydrive.live.com/embedicon.aspx/Public/PictureStoreTables.zip

Using Blobs with Windows Azure

So before I left for Japan I went to the Azure Bootcamp in Southfield, MI. My goal was to understand how I could potentially use Azure to deliver better solutions for customers. To that end, upon my return I have started to play with what I learned and see where it takes me.

Among the many facets of Azure that intrigue me, I decided to play with storage first.  Coming back from Japan, I have about 600 pictures that I wanted to manage somehow, so I figured I would see how I could work with images.  To this end, I created a very basic ASP.NET Web Application with a couple of pages: one to add containers and then add blobs to those containers, and another to view those blobs and delete them.

Here is the code for listing the containers in a particular storage account.  Note that all of this code is using UseDevelopmentStorage=true, so it's running locally.

   1: var account = CloudStorageAccount.
   2:     FromConfigurationSetting("DataConnectionString");
   3: var client = account.CreateCloudBlobClient();
   4:
   5: var containerList = client.ListContainers().ToList();
   6: ddlContainers.Items.Clear();
   7:
   8: if (containerList.Count == 0)
   9:     ddlContainers.Items.Add(new ListItem() {
  10:             Text = "No Containers Available",
  11:             Value = string.Empty
  12:     });
  13: else
  14: {
  15:     foreach (var cloudBlobContainer in containerList)
  16:         ddlContainers.Items.Add(new ListItem {
  17:             Text = cloudBlobContainer.Name,
  18:             Value = cloudBlobContainer.Name.ToLower()
  19:         });
  20: }

So we have the standard call to set up our reference to the storage account in the cloud on line 1; again, our DataConnectionString is set to UseDevelopmentStorage=true.  The Azure SDK contains a number of client creation calls for the various storage types; in this case, as we will be working with Blobs, we create a blob client.

It is important to understand that Blobs are partitioned into Containers, which have effectively unlimited size (well, it's not unlimited, but the limit is insanely high, such that you could never hit it unless you really tried).  Blobs, however, have a 1TB size limit.

The code should be fairly straightforward. We will get a list of the containers within this storage account and then add them to a Drop Down List in our view.

In my example, the view shows the list of containers in the view for the purpose of picking which container the newly created blob should go into.

The next piece of code we will look at is the code for creating containers.  This code does two things: first, it creates a new container in the storage account; second, it updates the dropdown list of containers that we referenced earlier.  Here is our code snippet:

    var account =
        CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var client = account.CreateCloudBlobClient();
    var containerName = txtContainerName.Text;

    var container = client.GetContainerReference(containerName);
    container.CreateIfNotExist();

    var permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);

    BindContainerList();

The CreateIfNotExist pattern occurs throughout Azure and is the recommended way of creating elements (tables, blobs, queues).  For our new container we must set the permissions.  Permissions can be set to one of the following:

  • No Public Access
  • Full Public Read Access
  • Public Read Access for Blobs only

The documentation for this topic exists and is available at

http://msdn.microsoft.com/en-us/library/dd179391(v=MSDN.10).aspx

Notice that the majority of the Azure documentation speaks to the REST API.  Azure is created first as a REST API, but many of these calls have been wrapped in managed code, and it is recommended that the managed library be used for actual development.  The vast majority of the features are supported by the managed library.

The call to BindContainerList encapsulates the code we wrote above for listing containers. It is used to give the user access to the newly created container.

The next example is perhaps the most important: how do we get data into the container?  In my tests, I worked mostly with pictures that I brought back from Japan; these are my raw pictures of about 2.2MB apiece.  This is the code snippet:

   1: string extension = System.IO.Path.GetExtension(fileUpload.FileName);
   2:
   3: var account =
   4:     CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
   5: var client = account.CreateCloudBlobClient();
   6: var container =
   7:     client.GetContainerReference(ddlContainers.SelectedItem.Text);
   8: var blob = container.GetBlobReference(Guid.NewGuid() + extension);
   9:
  10: blob.UploadFromStream(fileUpload.FileContent);
  11: blob.Metadata["FileName"] = fileUpload.FileName;
  12: blob.Metadata["Size"] = fileUpload.PostedFile.ContentLength.ToString();
  13: blob.SetMetadata();
  14:
  15: blob.Properties.ContentType = fileUpload.PostedFile.ContentType;
  16: blob.SetProperties();

Again, we do the normal routine of setting up our reference to the storage account and then creating the client.  Then we select the container based on the data the user has sent us, and from this container reference we are able to create a blob reference (line 8).

Now you will notice there is no call to CreateIfNotExist for the blob; only a name is provided.  For the sake of uniqueness we use a Guid here, combined with the extension that was parsed from the file path.  Think of this as a normal file on the file system: your OS does not care what the type of a file is, only whether the file will fit on the disk.  The same is true here; however, we are limited to a max blob size of 1TB.

Blobs can also carry Metadata and Properties to provide additional information about the blob.  The Metadata collection is a NameValueCollection which can contain any number of keys.  Properties are fixed and identical for every blob; they are mostly things relating to how the blob should be stored, the official content type, etc.  Each of these has a corresponding set method which must be called to persist the information.
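Since Metadata is just a NameValueCollection from the base class library, you can experiment with it independently of the blob types.  A tiny sketch; the key names echo the upload code above, and the values are made up:

```csharp
using System.Collections.Specialized;

public static class MetadataDemo
{
    public static NameValueCollection BuildSampleMetadata()
    {
        var metadata = new NameValueCollection();
        // Keys are arbitrary strings of your choosing.
        metadata["FileName"] = "IMG_0412.jpg";
        metadata["Size"] = "2254912";
        return metadata;
    }
}
```

Whatever you put in this collection before calling SetMetadata is what comes back after a later FetchAttributes call.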

The second piece is a page to read the container contents, display them, and remove them if so desired.  The markup for this is pretty straightforward: a page with a Repeater containing a Literal, an Image, and a LinkButton in the ItemTemplate.  Here is a code snippet from the markup:

    <asp:DropDownList ID="ddlContainers" runat="server"
        AutoPostBack="true" OnSelectedIndexChanged="ddlContainers_SelectedIndexChanged" />
    <hr />
    <asp:Repeater ID="rptBlobs" runat="server" OnItemDataBound="rptBlobs_ItemDataBound"
        OnItemCommand="rptBlobs_ItemCommand">
        <ItemTemplate>
            <asp:Literal ID="litName" runat="server" />
            <asp:Image ID="img" runat="server" /><br />
            [ <asp:LinkButton ID="lbDelete" runat="server" CommandName="Delete" Text="Delete"
                    OnClientClick="return confirm('Really delete this entry?');" /> ]
            <p> </p>
        </ItemTemplate>
    </asp:Repeater>

The code which populates the ddlContainers widget is identical to the code shown in the first example, which reads the container list from the current storage account and populates the dropdown list.  However, based on what the user selects, we do need to get a list of all the blobs in the selected container and set it to the DataSource property of the Repeater.  Here is the code snippet:

    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var client = account.CreateCloudBlobClient();
    var container = client.GetContainerReference(containerName);

    var blobList = container.ListBlobs().ToList();
    rptBlobs.DataSource = blobList;
    rptBlobs.DataBind();

Most of this code should be either familiar or self-explanatory.  The next code snippet comes from the ItemDataBound event handler for the repeater:

   1: if (ev.Item.ItemType == ListItemType.Item ||
   2:     ev.Item.ItemType == ListItemType.AlternatingItem)
   3: {
   4:     var item = (IListBlobItem) ev.Item.DataItem;
   5:     var blob = BlobContainer.GetBlobReference(item.Uri.ToString());
   6:     blob.FetchAttributes();
   7:
   8:     var litName = (Literal) ev.Item.FindControl("litName");
   9:     StringBuilder sb = new StringBuilder();
  10:     foreach (var key in blob.Metadata.Keys)
  11:     {
  12:         sb.AppendFormat("{0}={1}<br />",
  13:             key, blob.Metadata[key.ToString()]);
  14:     }
  15:     sb.Append("<br />");
  16:     litName.Text = sb.ToString();
  17:
  18:     var img = (Image) ev.Item.FindControl("img");
  19:     img.ImageUrl = string.Format("BlobReader.ashx?Uri={0}", blob.Uri);
  20:
  21:     var lbDelete = (LinkButton) ev.Item.FindControl("lbDelete");
  22:     lbDelete.CommandArgument = item.Uri.ToString();
  23: }

Understand that, despite the fact that we give the blob a name, we retrieve it via its URI (its name is actually within the URI).  So understand, the list that you get from ListBlobs is NOT the blobs themselves, but just proxies that allow you to get basic information about each blob.  This makes sense when you consider how big a blob could be; loading them all into memory would be crazy.  The same principle applies to the Metadata and Properties.  Notice the call to FetchAttributes on line 6: this is what populates the Metadata NameValueCollection and the Properties for the blob reference.  Remember, what we get from the ListBlobs call is nothing more than a lightweight reference to what we have in cloud storage.

The one thing that may be curious to you is line 22, where I am assigning the URI to the CommandArgument of my LinkButton. I will explain this momentarily; until then, think about ways we could implement uniqueness among the blobs in a container.

In this application, I am working with images. There is no way to read the binary data out of my storage account and hand it directly to an Image tag in ASP .NET, so I am using an ASHX handler to perform the binary read for me. Here is the code snippet:

   1: var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

   2: var client = account.CreateCloudBlobClient();

   3: var blob = client.GetBlobReference(Uri);

   4:  

   5: context.Response.ContentType = "image/jpeg";

   6: var byteArray = blob.DownloadByteArray();

   7:  

   8: context.Response.OutputStream.Write(byteArray, 0, byteArray.Length);

The code here is fairly straightforward, though the OutputStream.Write call may look kind of weird if you have never seen this sort of approach before. I am providing the source code at the bottom, and you are free to ask questions about the strategy if you are curious. That aside, this code should look very similar to our past code snippets. DownloadByteArray() is new, but its name indicates what it does.
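For context, the snippet above lives inside a handler's ProcessRequest method. A minimal sketch of the full handler might look like the following; the class name and the way the Uri query-string value is read are my assumptions, not taken from the original source:

```csharp
// BlobReader.ashx.cs -- hypothetical sketch of the handler hosting the snippet above.
// Assumes the v1.x StorageClient types and the "DataConnectionString" setting
// used elsewhere in this post.
public class BlobReader : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // The blob URI is passed by the page: BlobReader.ashx?Uri=...
        string uri = context.Request.QueryString["Uri"];

        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var client = account.CreateCloudBlobClient();
        var blob = client.GetBlobReference(uri);

        // Write the raw bytes straight into the response as a JPEG.
        context.Response.ContentType = "image/jpeg";
        var byteArray = blob.DownloadByteArray();
        context.Response.OutputStream.Write(byteArray, 0, byteArray.Length);
    }

    // Stateless, so one instance can safely be reused across requests.
    public bool IsReusable { get { return true; } }
}
```

The Image control on the page then just points its ImageUrl at `BlobReader.ashx?Uri=...`, as shown in the ItemDataBound snippet earlier.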

Returning to our main page, I want to speak to how we delete blobs from a container. In this example, the user first selects a container which has blobs in it. The code reads this container and its blobs and outputs them, using an ASHX handler to display the actual contents of the blob. In our ItemDataBound event we set the Uri of the blob to the CommandArgument of the embedded LinkButton. Here is the code snippet for handling the delete action:

   1: string commandName = ev.CommandName;

   2: string commandArgument = ev.CommandArgument.ToString();

   3:  

   4: if (commandName == "Delete")

   5: {

   6:     var blobReference = BlobContainer.GetBlobReference(commandArgument);

   7:     blobReference.DeleteIfExists();

   8:     

   9:     ListContainerContents(ddlContainers.SelectedItem.Text);

  10: }

Again, we notice the *If[Not]Exists pattern that is commonly seen throughout the Azure SDK. Finally, the call to ListContainerContents encapsulates the code from our earlier sample showing how to bind the blob list to a Repeater.
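That pattern shows up on both containers and blobs. As a hedged sketch of how it reads in practice (method names per the v1.x managed library; the container name and `uri` variable are illustrative assumptions):

```csharp
// Create-if-missing on the container, delete-if-present on the blob:
// neither call throws when the state is already what you asked for.
var container = client.GetContainerReference("pictures"); // hypothetical name
container.CreateIfNotExist();   // note: singular "Exist" in this SDK version

var blob = container.GetBlobReference(uri);
blob.DeleteIfExists();          // returns false if there was nothing to delete
```

The design choice here is idempotence: you can run setup and teardown code repeatedly without first querying for existence, which avoids a race between the check and the operation.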

To conclude, it is very easy to create an Azure-based application, and the potential for clients is immense. When you consider how sites are actually used, the Azure cloud model is much more appropriate than the traditional approach of provisioning for your highest peak. I can remember a project I worked on in the past where the client had 5 web servers, 3 of which were used only when they had a major promotion; the rest of the time they carried minimal load. Azure would be a very viable solution for this problem of waste.

Companies tend to know their trends. Good companies understand their trends. Great companies use their trends to maximize profitability. But all companies hate waste. Microsoft did a case study with Domino’s, which had immense waste: a huge datacenter for its online ordering that was only used heavily on Super Bowl Sunday, the company’s busiest day of the year. Using Azure, Domino’s can spin up extra servers on that one day and spin them down when business returns to normal levels. In addition, because the processing is in the Cloud, Domino’s only pays for usage; this inevitably creates savings for the company as a whole and helps improve the business process.

Given the global shrinking of economies, cutting costs and being more lean is going to be even more important than it normally is. I view Azure as a viable solution for reducing IT and operating costs for companies, while still offering “infinite scalability”.

Notes:

I apologize for the low quality of this code; it comes more from a playing-around perspective. I recognize it is a criminal violation of DRY (Don’t Repeat Yourself). I plan to use what I learn from these experiments to create a library wrapping the current managed library, to reduce the repetition you see above.

I did not touch on the necessary updates to the WebRole.cs file to permit modification of cloud data. Please see the code example for this piece.

Code is available for download (Visual Studio 2010, 2MB)

http://cid-630ed6f198ebc3a4.skydrive.live.com/embedicon.aspx/Public/PictureStoreTest.zip

Day 15 – The Finale

Sometimes there are things we do in life where, no matter how well they go, we always want more.  This Japan trip fits that perfectly, and yet, knowing that, I still tried my best to have as much fun as I could on this final day.  I divided the day into two parts: day and night.  The weather was awful, yet the pictures are still impressive.

The day started with a trip to the Marunouchi district, which is effectively downtown Tokyo and one of the largest and most P4150006 modern districts in the city.  It is also where the city’s busiest subway line gets its name.

Walking around this area gives you an idea of just how big Tokyo Station is: even after walking three blocks from the main station exit that I came up from, I was still above the station’s underground extremities.  And, of course, like most busy rail stations in Japan, unless you know it is there, you would not even notice it.

The real reason for the trip to this district was its proximity to the P4150010 Chiyoda district, the home of the Emperor.

To say this area is beautiful is an understatement; Japan being Japan, you can only imagine how meticulous they are about keeping the Emperor’s lawn.  According to what I read, this is the most expensive square kilometer of land in the world.  At the height of Japan’s economic growth, this plot of land was worth more than the state of California! Incredible.

Unfortunately, I learned too late that coming to the palace on a Friday (or Monday) is a bad idea because most of the public grounds are closed, so I didn't get to actually enter the compound.  This disappointed me, but I guess I have something to look forward to, along with Nagano, when I return to Japan.

The next stop was the district of Nihonbashi (listed in tour books as Nihombashi, an alternate romanization).  This district is very finance-centric and even contains the P4160023 Tokyo Stock Exchange, which reports the Nikkei and Topix averages.

This is the first time I have ever been inside a stock exchange, as the one in New York is locked down pretty heavily and doesn't allow most people to even get close.

Truthfully, there are a lot of interesting things to see here, even for someone with little to no knowledge of business and economics.  I really liked the Economic History Museum; I got to see how the Japanese economy was affected and changed by the Pacific War.

At this point, I made a decision to head back and save the second part of the trip for night; I wanted to get some shots of Tokyo after dark.  Along the way I stopped and saw an old friend: Tokyo Tower.P4160045   Strangely, when Kamin and I ascended the Tower three years ago, I was more excited and so impressed with the view.  Now, after the Landmark Tower and the TMG (Tokyo Metropolitan Government) building, I was hardly impressed.  If you asked me today, I would not recommend ascending it unless you feel like burning money; still, I got a few shots that were pretty nice.

I decided to return to the hostel to rest, recharge the camera battery and make more time for my friend Bruna (from Bosnia) to complete her activities; she had promised to walk around Odaiba with me.

Let us just say, the weather did not improve, but I was undeterred.  After resting for a couple of hours I decided to return to the TMG building and see if I could get some good night shots; you be the judge.

 P4160058 P4160048

P4160056

When I finally put these in my digital picture frame, I hope the view is shown in a form that does it justice, because it really is jaw-dropping, even with the rain.

Following this I grabbed a small dinner and met up with Bruna at Shinbashi station.  Our goal: take the Water Bus across the Bay of Tokyo, then walk back across the Rainbow Bridge on foot.  Well, we were not able to find the Water Bus; it seemed to be closed, though we saw ships on the bay.  And Bruna probably saved my life by convincing me NOT to walk across the bridge.  The winds were howling and the rain was starting to come down hard.  So we settled for some shots of the city and the bridge from Odaiba.

P4160079 P4160084 P4160088 P4160075 

All in all it was a very fun time, despite the weather.  We headed back totally exhausted through the maze of subways that is Tokyo’s underground.

One of the things I learned this final day is the value of the 1000-yen Metro/Toei pass.  This saved me probably 2000 yen in the end and is a must for anyone sightseeing in Tokyo.  The reality is, you will mostly take the subway when you travel around Tokyo; it's just more convenient than even the Yamanote.  This pass lets you through all the subway stations for a one-time charge of 1000 yen.  It might be the best bargain in the city, and anyone can buy it at any time from any subway ticket machine.

So that’s it; tomorrow I am returning to America.  I admit, I was hoping that the volcano eruption in Iceland might cancel my flight for one day.  The only things that I sorely missed from being here were a shot of Tokyo Tower with sakura and an experience of Tokyo nightlife in Roppongi.  I will have to save these for the third trip.  🙂

Day 14 – A Day in Tokyo

Given the crappy weather and the state of my foot, I debated going anywhere today.  In the end, I cancelled the trip to Nagano and decided to walk around some parts of Tokyo and re-visit some places I went to when I was here previously.

I started my journey on the west side of Shinjuku in what is known as the Skyscraper District.  It is from here that the local government convenes and manages the city, though frankly I can't even imagine governing a city bigger than most countries in the world.  The seat of that government is the Tokyo Metropolitan Government Building. P4150021 Its two towers are among the tallest buildings in the city, offering an impressive view of the city and surroundings, for free.  This is in stark contrast to Tokyo Tower, which is considered by the locals to be an overpriced tourist trap.

This is seriously the nicest city hall I have ever seen.  And the view was simply breathtaking, as you would expect.  Given where it sits, it is surrounded by many large buildings, both residential and commercial.

P4140004 P4140012 P4140014 P4150019

After relaxing at this spot, I headed for my next stop: the Meiji Shrine in Harajuku.

The Meiji Shrine was built in 1920 to commemorate the Meiji Emperor, who took power in 1868 after the fall of the Shogunate government and the end of the Edo Era.P4150027

You really cannot begin to understand how much respect the Japanese have for their emperor.  Just about every Japanese person (old and young) leaving or entering was bowing.  It was the Meiji Emperor who brought Western culture to Japan and led its modernization, away from the xenophobic policies of the Shogunate.  The arrival of Perry in the 1850s unleashed a great divide in Japan, mainly in the form of other, more developed nations being seen as a threat to underdeveloped Japan.  This turmoil led to the fall of the Shogunate and the rise of Emperor Meiji.  The imperial line still exists today, though only as a figurehead, left in place by the US and its allies at the end of World War II.  Many believe that by leaving the Emperor in place, the US helped Japan rebuild much faster than it otherwise would have, despite the decision being very unpopular with key allies at the time.

The shrine is very beautiful, especially the approach: it's a long winding path with tall trees that hang over you and give you a sense of serenity.

P4150023

Once I left, I decided to have lunch at TGI Fridays in Shinagawa.  I went here for P4150030 the nostalgia (Dave and I visited here when we were studying abroad).  No offense to the Japanese, I like their food, but there is seriously nothing better than an American cheeseburger. 

One other note: I found this place purely from memory.  I remembered the station and its relative position from when I was here last time.  After eating I decided to walk around and do some exploring.  Like most things in Japan, Shinagawa has a purpose.  It is the main arterial link between Tokyo and the cities to the southwest.  The Shinkansen passes through here on its way to Shin-Yokohama.  Most of the major train lines pass through here, P4150031 in particular the long-distance ones not used for inner-city travel.

Yet despite this, from the street you could never tell this was a train station.  The Japanese are really so good at blending things in that oftentimes you don't even know you are looking at a busy train station, aside from the signage.  It goes with what Dave was telling me when I was in Mito: the Japanese goal is to make things seamless; for instance, you never immediately realize when you are outside versus inside.  The same is true of their busy train stations.

So after this it was time for a bit of nostalgia: Shimbashi and the Yurikamome line.  First, Shimbashi is impressive. P4150036 It's one of the big business districts in Tokyo, with huge glass buildings stacked high with offices.

The Yurikamome line is fully automated and runs without operator intervention, much like the AirTrain in New York City.  It runs through much of Shimbashi and out into Tokyo Bay through the reclaimed islands.

Let me explain: according to what I was told, companies needed to build facilities for transmitting and receiving, but the buildings around them in Tokyo were so big and numerous that they interfered.  This being the case, and with no land available, the Japanese made some land in the form of “reclaimed” islands.  You gotta love some of the buildings that were built; very futuristic.

P4150044 P4150047 P4150051 P4150068

Following this it was time to head back to Shinjuku to rest up for the next day.  Unfortunately, I learned that my friend Mami-chan had other plans for tomorrow night, so I would have to figure out something on my own.  Kind of difficult, as my budget is dwindling and my rail pass expired today, but I am sure I can figure something out.  Just one more full day here, then back to the States and business as usual.

To answer the question: yes, I do plan to return to Japan again.  When, I am not sure; I think next I would want to go to Europe, London perhaps.

Day 13 – A Trip to Yokohama

Yokohama is one of the most famous cities in Japan for a variety of reasons, mainly because it is the second biggest P4130001 city in Japan, a fact which actually surprised me as I thought that was Osaka.  It is also considered to be where Japan began to modernize, with the arrival of Matthew Perry from the US in the 1800s.

Today it is one of the cities in Japan most used to seeing foreigners, mainly because it houses the US naval base at Yokosuka, known as the headquarters of the US Seventh Fleet.

Many companies in Japan call Yokohama home, including Nissan (my favorite car company).  This was an unplanned stop that I found while looking over the tourist map available in the station.  There is a large showroom of models from Nissan (mostly Japanese models); I love the Nissan Skyline.

P4130003

P4130006 P4130011  

Once I left, it was time to start my real tour, which ran mostly along the water's edge of the Bay of Yokohama, considered downtown Yokohama.  The buildings here are very impressive.P4140039

The large building in this picture is known as the Landmark Tower and was completed fairly recently.  It is the tallest building in Japan (note: the tallest structure is still Tokyo Tower), though it pales in height next to the Empire State Building and, formerly, the WTC.

In addition, it features the fastest elevator in Japan, which reaches speeds of 710 m/min and completes the journey to the 69th floor in under 40 seconds.  The view from this floor is almost incomprehensible and really hard to describe.  Tokyo and Yokohama are so big; there is just nothing else like it (except the view from Tokyo Tower).

P4140020 P4140022 P4140027 P4140031

It's unfortunate that I had to shrink these pictures so they would fit here; they do not do the view justice.  You can see for miles and it is nothing but city, and you are far enough away that you can see downtown Tokyo in the background, shrouded by the fog.  It really is spectacular.

Unfortunately, this journey took a major toll on me and my already injured right leg.  I could barely walk after coming down, so much so that I decided it was about time to head back.  Still, I decided I could visit Yokohama Cosmoworld.  I had planned to ride the Vanish coaster and the Ferris wheel.  However, things don't always work out, as many people know, and I don't like riding coasters by myself; it just feels pathetic.  After some internal debate I decided to overcome one pathetic thing and take a ride on the giant Ferris wheel, looking to get some more shots of Yokohama.

P4140042

P4140041

Following this I headed back to the hostel in Shinjuku, a good hour and a half's worth of train riding, though I took special care to take the Shinkansen to Shinagawa to allow me some relaxation along the way.

Upon returning, I discovered that a third blister was forming on my right foot as my body begins to wear out from the insane amount of walking I have been doing.  Still, I decided to get some rest so David and I could hit Shibuya.

I had been to Shibuya one other time, back when Kamin and I visited during our trip to Tokyo.  It was such an awesome time, and I will never forget it.  However, Shibuya seems to be much more subdued on weekdays, though we still found a couple of nice places.

P4140045

For the most part, we just drank and talked about things.  David being from France and not speaking much English, we actually communicated heavily using broken Japanese; I still find this amusing.  I will leave you with one final shot of Shibuya before we packed it up and headed back.  At this point I could barely stand on my right foot and had a considerable limp.  I am hoping for some healing tonight ahead of whatever I decide to do tomorrow.

P4140047

Day 12 – Asakusa, Ueno, Akiba, and I go to Prison

It appears that the Japanese have the same problem with weather forecasters that we in the States do; they are rarely right.  In this case I am happy for it: the weather was gorgeous as I visited Asakusa, Ueno, and Akiba.

Asakusa is a small district located near Ueno in Tokyo’s downtown Taito district.  It is famous for its temples and historic artifacts.  Perhaps the most famous is Sensooji and P4120002 its variety of gates leading up to the temple.  The biggest of these is Kaminarimon (Thunder Gate), guarded by the gods Fujin (wind god) and Raijin (thunder god).  Beyond this gate sits the Nakamise shopping arcade.

I found a couple of nice souvenirs touring the street.  Like most tourist areas in Japan it was heavily crowded, mainly because you are dealing with Japanese tourists, Westerners, and school groups at such historic sites.  In particular here, live prayer ceremonies were being conducted with a monk on hand.

P4120008 

The structure in this picture is Hoozoomon, another gate marking the end of the processional road leading to a Japanese temple.  It is this processional road which comprises the aforementioned Nakamise shopping arcade.

Unfortunately, the main temple is currently being restored on the outside, so all we saw was the curtain. However, the building was open.

P4120014

As I walked back into the courtyard area I was greeted by a standard five-story pagoda.  This would normally not impress, given how many of them I have seen, but remember, I am in one of the world's largest and most compact metropolises.  P4120010 The fact that the Japanese are willing to allow this land to remain as it has for centuries gives you an idea of how important their culture and history are to them.

And it's not just this one temple either; this is an entire grounds just plopped into the middle of this huge metropolis, so occasionally you will get a shot of a temple with a lot of buildings behind it.  I love those sorts of shots, because they really give a sense of the mixture of old and new found not just in Tokyo, but all over Japan.

Following my stop in Asakusa I traveled over to Ueno, which is also located in the Taito district and serves as the main Tokyo station servicing the northern areas of the island (I traveled through here when I went to Mito).  This being the case, the JR station here is HUGE and features the Hard Rock Cafe that I had the chance to eat at last week.

However, I learned that Ueno is also famous as the location of the final stand of the Shogunate warriors before the Meiji government came to power.  You see, originally Tokyo was known as Edo and the capital of Japan was Kyoto.  P4120036 After roughly 700 years of rule by the Shogunate, Japan revolted (if I remember correctly, the revolution was spurred heavily by the arrival of Matthew Perry from the US).  Shogunate warriors in Edo fought to the last and died on this hill (picture).

The hill is part of Ueno Park, which is a popular spot for hanami parties (cherry blossom viewing).  Unfortunately, sakura only bloom for a short time, and so by now much of it had disappeared. I kind of look at it like my trip to Japan, which bloomed beautifully at the start but, like all good things, must come to an end.

Around the park there are many monuments dedicated to the Shogunate era and its people.  This again underlines the value of history to the Japanese.  Though they may have disliked the Shogunate toward the end, they understand and respect that this was a part of their history.

P4120041

Perhaps most famous of these is a statue of Saigoo Takamori (picture).  He is famous for leading the P4120034 Satsuma Rebellion against the Meiji government.  His story may be known to many Americans, as it was the basis for The Last Samurai.  But you would never guess it by looking at this picture, as it appears to be just a man walking his dog.

I followed this by going to a museum about how Japanese life has changed throughout the years.  In particular, it covered how Tokyo changed after the great Kanto earthquake in the 1920s and the destruction of Tokyo following World War II.  It is interesting how, even back then, the Japanese understood the decisions that had to be made to survive on such a small island relative to their population.

Before leaving the park I stopped by the Ueno Zoo and got some shots of the animals, some of which are American, like prairie dogs, but most from China and India.  This zoo does not have a panda as you might expect, but you are told that up front.  It was a nice spot to walk around and relax.

The final stop on my journey was one last trip to Akihabara for shopping.  As I have stated before, P4130073 Akiba (as it's known by the locals) is geek heaven in every sense.  From computers to video games, anime, and food, everything a geek wants is here.

The thing is, you have to be very careful with what you buy, and remember that DVDs tend to be region-coded, so know your region before purchasing anything.  For me, it's kind of depressing here.  I didn't pack with bringing back large things in mind, as I needed to be light and swift given how much movement I was going to see during my trip.  So price is really not the issue for me; it's a logistical thing.  Also, I can't really buy things like books because, well, I can't read Japanese well enough to enjoy them.  See, despite being a huge attraction for western tourists, it's hard to find English material here, so most things are going to be relatively useless unless you speak/read Japanese.  But it is still fun to look around.  The Japanese are nuts about technology, more than any ten Americans I know.

My first stop in Akiba was the Tokyo Anime Center.  My desire was to buy a small model of either a Macross VF fighter or a Gundam from the UC Gundam series.  Unfortunately, the models are built with detail in mind and so tend to be large.  And again, I have huge space issues as I prepare to head back, so in the end I just ended up looking around.

In the end, I purchased a digital picture frame after walking up and down a couple of stores with nine different levels.  Like I said, the Japanese are nuts about technology, and their electronics stores will destroy any store in the US; this is especially true of the stores in Akiba.

At this point, it was clear that I had re-aggravated the two blisters on my right foot and was beginning to develop a limp.  I quickly headed for the Yamanote line and returned to my hostel in Shinjuku. P4120001

My friend Megumi, who is currently teaching Japanese in China, introduced me to her friend Maminoto, who lives near Tokyo (it was Mami who introduced me to Mari).  We had decided to check out a restaurant called “The Lockup”, a prison-themed restaurant P4130077 not far from Shinjuku station.  This is one of those only-in-Japan type things.

So yes, I got handcuffed by the hostess and led to a prison cell, aka our table.  It was a really good time; they did all sorts of things to try to scare us, though it was really weak.  However, I found out that Japanese girls scare very easily.  At one point, a person dressed as an axe murderer popped through the door trying to scare us.  Mami jumped back in terror; I asked him for some more beer.

P4130078 P4130080 P4130081

After this, my limp had become very noticeable, and we still had the massive task of trying to find my backpack in the correct coin locker.  I didn't want to take it with me to the restaurant, and Japanese stations tend to have pay-for coin lockers you can put stuff in.

The other thing to understand is that, unlike New York, Tokyo and its transportation do shut down.  I can remember times in New York when, if you wanted to catch a train at 4am, you could; not in Tokyo.  At around 11:30 everything shuts down for the night and there are no more trains.  My theory is this keeps kids from staying out late and has them focus on school, maybe.

However, if you are out having a good time with friends, this can suck major balls.  The answer is, of course, to stay in an area with places near you so you can walk, as the bars and clubs stay open much later than the trains; this is mysterious to me.  Either way, after some painful walking, I managed to catch the second-to-last train.  Up tomorrow is Yokohama.  I want to go to Cosmoworld, but I need to find some Dramamine first.

Day 11 – A Return to Tokyo

Is it weird to feel so at home in a place that is not your home?  I sort of feel that way about leaving the Oe family.  Don't get me wrong, I love America and I love my parents more than anything in the world.  It's just that the way the Japanese think and act is so much in line with how I think things should be, and by incorporating some of these aspects I think America could gain so much.

I had a rather unique experience today; I went to my host brother P4110001 Toma's entrance ceremony for JuuGaku (middle school).   In America, you go to school depending on where you live; in Japan it's the opposite, you go where you get accepted.  I have seen kids traveling for two hours by train so they can go to a good school.  The reason for this is that schools in Japan are very much organized in a caste system, and it's very difficult to move P4120003 between the castes.  Thus, what middle school you go to will influence what high school you go to.  What high school you go to thereby influences what colleges you will get accepted to (and this is a HUGE deal in Japan).  And that college will greatly influence what job you will be able to receive upon graduation.  Basically, in Japan, if you really screw up, recovery is nearly impossible.  Among these steps perhaps the most important is high school.  College in Japan is not very difficult and is viewed more as a reward for students who make it through high school, in contrast to America, where college is where you really work hard to define your specialty.

The ceremony was really neat, and the air was just filled with the respect the students and parents give the teachers.  Being an extremely old middle school, the school itself has a cultural heritage that most of the students know and understand.  There are songs and introductions by the teachers.  There is even a part where the principal reminds everyone that the teachers will guide the students in school but the parents are responsible for guiding them at home. Can you imagine hearing that in America? I guarantee at least one parent would be on her Blackberry emailing the principal to say how dare he tell me how to raise my son.  And the discipline, it's like watching an army drill.  You could not get American high school students to follow instructions this strictly, and these were Japanese students barely out of elementary school.  And then afterwards, without even being told, every student grabbed their chair and took it with them.  Once the parents left, they returned to remove the remaining chairs.  It was so efficient it was scary.  Someone asked me what I thought, and I could only say I was impressed and that I can understand why Japan turns out some truly amazing minds.

Following the ceremony, Toma gave me a tour of the school (more or less, we were actually trying to find Asahi and Mama).  P4120026

This is a shot of the shoe lockers that you will always find near the front of a Japanese school. Not wearing shoes inside is one of the most hallowed traditions in Japan and has caused me a bit of grief.  I recommend, if you ever go to Japan, bringing shoes you can slide on and off easily.

It's hard to mistake the respect the students have for the school and their teachers.  Nowhere do you see a teacher doing any work besides teaching.  A ceremony had just taken place, and all the students were working together to clean up and prepare for the rest of the day's classes; the ichinensei (first years) would be starting Tuesday.

Unfortunately, Asahi seemed unable to contain himself for the duration of the ceremony and caused a couple of disruptions.  I really admire Emiko-Okasama; her level of patience is simply astounding.  Even I was beginning to approach my limit with his antics; I think he is worse than I ever was.

Long story short, we ran late getting out of the ceremony, and with Mama needing to go to work it was clear she would be unable to take me to Maibara to catch the Shinkansen to Osaka.

Now if that sounded weird, let me explain.  I have an affinity for the Shinkansen and I wanted to ride it from Osaka to Tokyo, the main route, if you will.  Thus my chosen route was Kurasam to Maibara via normal train, Maibara to Osaka via Shinkansen, and finally Osaka to Tokyo via Shinkansen.

As you might guess, it was getting fairly late as we approached Tokyo, and it was raining heavily; I didn't want to take the Yamanote halfway across the city to get to Shinjuku, where my hostel was located.  I got off at Tokyo Station, which is, by all accounts, the primary junction point for many of the city's trains and most of the subways, caught the Yamanote there, and took it to Shinjuku.  From Shinjuku I was able to, awkwardly, catch the Toei Shinjuku line, get off at the appropriate exit, and make it to my hostel by 10:30.  I got the chance to meet some of my bunkmates and, funnily enough, since they spoke very little English (being from France and Switzerland), our common language was Japanese.

Tomorrow I begin my excursions of Tokyo and the surrounding area, including a date with Mami-chan.