Trip to Hiroshima Peace Museum

I like to think that I am a tough person, that I can control my emotions. I like to think that I can face any hardship and stand fast and not cry or become emotional. But the truth is: I can't, none of us can. And when confronted with what I saw at the Hiroshima Peace Museum, I am not ashamed to say that I had tears in my eyes. Because I did have tears in my eyes and I wanted to weep for them. But I couldn't, I could only hold it in.

The pictures from Hiroshima are forthcoming on my Flickr profile, but I have already felt what those scenes did to me. Even just hearing the stories of skin melting off a person's bones and stretching to incomprehensible lengths made me cringe. Or hearing the thousand-crane story, how the girl kept on making cranes until she died, putting the wish to live into each one. The feeling cannot be described, and cannot be ignored. Perhaps it was the people of Hiroshima who were the most worried in the world when North Korea conducted its recent nuclear test. Perhaps this is where Japanese children get the drive for world peace that is so prevalent in their culture and anime.

Truly it was atrocious, an event that awakened us to what mankind is capable of through science. As Truman put it, we harnessed the power of the sun and made it our weapon. Nuclear weapons will always exist so long as humanity is dependent on nuclear power as a source of energy. Weapons will always exist until everyone can accept another's differing beliefs and act with kindness and thoughtfulness. Wars will always exist because humans don't trust each other and don't give each other a reason to trust.

It was perhaps one of the saddest and most enlightening experiences of my life to visit Hiroshima. The city is gorgeous, and one could never tell that it is one of only two places on Earth to have had a nuclear bomb used on it in warfare. Perhaps this beauty is a tribute to the strength of Japan that is evident in every citizen of a country that miraculously recovered after being all but annihilated. We could learn a lot from Japan and its people.

Hypocrisy

In general I am a very easy-going guy; not much bothers me, only a few things: poor products, a lack of interest in following standards, and complaining about things that shouldn't be complained about. In recent years I have, for the most part, stopped reading Slashdot, or as I have come to call it, "The Domain of the Hypocrites". It is, of course, good for some good old-fashioned unsubstantiated Microsoft bashing. For example, today an article was posted detailing the new Microsoft conspiracy to undermine Google by disallowing image searching. Firstly, I honestly wonder what Microsoft would possibly gain from such a move, and secondly, I wonder why the Slashdot moderators, with all the better news stories out there, decided to post this. For one thing, it's not even true. Load up IE7, head to Google Image Search (http://images.google.com), and do a search; you'll find nothing wrong whatsoever with the page, despite the claim that it is supposedly labeled a suspicious website.

This is likely a better result than you would get with FireFox, whose anti-phishing has been described as sketchy at best. However, it's likely best not to say that, because for all their teasing and ridicule of Microsoft fanboys, FireFox likely has some of the biggest hypocrites on its side. I remember reading this post on OSNews.com (this is a link to Slashdot's version: IE7 Vulnerability Discovered). Of course, everyone was jeering and laughing because Microsoft was already being bitten by the security-flaw bug. However, as those of us who don't immediately jump the gun expected, the problem was not with IE7 but with a different component (Outlook Express), as Microsoft explains here. While this attack uses IE as a vector, it does not affect IE at all and as such is hardly a critical flaw. And if you want to talk about critical flaws, compare the number of flaws per month between FF and IE – here.

Now, I want to say that I am a loyal FireFox user and have always been, and always will be. But I don't bash IE (or Windows), except on things that I know can be compared and are established facts. For example, numerous blog posts have been made about the recent accomplishment of circumventing Microsoft PatchGuard in 64-bit Vista. I tend to wait on this to see how it affects the OS's security, rather than jump on the bandwagon and taunt and make fun of Microsoft. While I am a FireFox user, I by no means think it's perfect; in fact, I remember getting upset at one point when it seemed I was getting new updates for the browser every day. Maybe it's because I use IE so rarely that I don't notice its patch updates, but the numbers do seem right in the article I quoted above.

I get tired of hearing it, all this Microsoft bashing. I have to constantly remind people of some basic facts: A) Windows users are not Linux users. B) Microsoft has to support a much wider variety of configurations than Linux, as well as contend with the idiocy that comes with that. C) The Microsoft code base is much older than Linux and is more susceptible to attack. Linux had the advantage of learning from Microsoft's mistakes, as well as not being anywhere near as large a target. This could be partially blamed on some of the decisions Microsoft has made which have given programmers and developers a negative perception of the company (see my rants about IE7).

So let's understand something: Windows users are, on the whole, generally not as computer-inclined as Linux users. This comes with the territory. Linux has only very recently begun to understand that to compete successfully with Windows, it must dumb down the OS, automate a lot of tools, and provide a GUI. Ever wonder why KDE and Gnome look so similar to Windows out of the box? That may be an indication.

I actually had the privilege of having a member of the Avalon team come to Grand Valley and speak to the Computer Science students. One of the questions asked of him was why Windows has so many problems. He stated that Microsoft lays down guidelines and standards for programmers to follow when installing, executing, and uninstalling software. However, Microsoft has no real way of enforcing these guidelines, and thus developers openly ignore them. As a result, the registry becomes messy, or programs don't execute as the OS expects them to, leading to problems. Over 90% of all XP crashes are the result of drivers not following these standards. I have noticed this too: the only time XP has ever crashed on me was when I used old Creative drivers for my webcam. But I am deviating from the topic of this article.

While the zero-day vulnerability for IE made headline news in the computer world, not surprisingly the zero-day flaw for FireFox 2 did not. You can read about it here. It was not surprising that Slashdot in particular did not report this. Slashdot users are extremely defensive about any criticism thrown at FireFox. Examine this post to Slashdot where the author identifies 9 things he didn't like about FireFox 2 (though it ended up being 10 with the inclusion of the zero-day flaw) and is subsequently blown out of the water by the majority of users. And the thing is, rather than agree and ridicule as happens with IE, FireFox users play it off and regard it as a feature or a bug "you only see once in a while". These same users will then turn around and blast IE for having the exact same problems.

I can't say don't criticize, because there is legitimate room for criticism of both browsers. In fact, many people I have talked to are unhappy with FireFox 2 because it doesn't deserve to be called version 2; most feel it is more of a 1.6. And the same problems from previous versions still exist: huge memory consumption and random, arbitrary crashes (about which one Slashdot user said, "Freezes: yes, they occur. But hello, restore session. I don't say it's no problem, I'm saying it's no reason not to switch."). I have heard stupider things said, but not by much. Also, when writing this article, I did a lot of searching and opened up about 12 tabs in FireFox, and it crammed them all together; I saw no evidence of this tab scrolling I hear about. Maybe I have it turned off, but IE does a good job handling a lot of tabs thanks to its thumbnail mode.

Are there bugs in IE? Yes, of course. Are there bugs in FireFox? Yes. Are there bugs in software? If it's over 100 lines, yes, since it was proved mathematically in 1970 that the longest program a person could write was 100 lines (done in Algol 68, FTW). The thing people need to understand is that in ANY complex software product, you're going to have bugs. It is certainly true that IE has more bugs discovered because of its higher profile. Any sane person can see a rise in the number of bugs with a rise in popularity. If you still can't see that, you're just fooling yourself. As Linux gains popularity, more bugs will be found. I have every confidence that Linux will end up being no more secure than Windows. Yes, you can argue about various features in Linux, and I won't listen to a damn thing; in every argument I have ever had about security, there is always some facet of customization to Linux that supposedly makes it more secure. People seem to ignore that just about every security study done concludes that neither is more secure than the other.

And, in truth, it doesn't even matter. We still have people dumb enough to download virus.exe because they think it's a patch. We still have Grandma and Grandpa who fork over their credit card number and expiration date because they think they can win a $1,000,000. An OS and its programs can only do so much to protect users from themselves. The least the tech community can do is be more constructive in its criticism and not simply jump on the bandwagon when an idea or notion is popular.

And now, one final url: http://blogs.zdnet.com/Ou/?p=352&tag=nl.e539

Ajax WebDev : Doing it Right

First let me preface this post with some history. The previous post can be thought of as part 1, though it was more of a rant, whereas this one is for reference purposes. My goal was, given what I had learned from reading "Beginning JavaScript with DOM Scripting and Ajax" by Christian Heilmann, to develop a simple Ajax-style web application that conformed to proper web design principles:

  • The application demonstrates unobtrusive Javascript that “helps” the user
  • The application is functionally available for all major browsers
  • The application is functionally available without Javascript enabled

To begin, I had to choose how I would develop this application. I started off using PHP Designer 2007 Personal. However, I became frustrated with its code completion not being as smooth as other applications I had used, so I began looking for an alternative. And I found it: the Eclipse plugin Aptana (Click Here). This is something I don't think I will ever get rid of. It properly shows documentation for JavaScript functions along with their availability in each browser. It also provides very complete documentation on what each function can do; in my mind, a tool no developer should be without. But moving on. The initial design was quite easy. I was able to construct the interface with no problems. I used PHP to generate any default content that should be there, i.e. the list of series.

The first problem I ran into was, given the large amount of data I was likely to get back from the PHP script being called asynchronously, how to get that data into the select boxes. I was not about to parse the responseText property, so I opted to play with responseXML. I must give credit to Firebug in this case; this would have been obscenely difficult without its aid. It took me a bit of time, however, to decipher what it was reporting to me for the responseXML property. It apparently shows the response Document object with a 'null' name. This makes sense, I suppose, but it confused the hell out of me initially.

Once I had the XML document, using the various get* functions provided by the DOM, acquiring the data and using it was very easy. This was accomplished with very little effort, and in no time I was to the point where the user could select the desired episode. While I had the idea of making the information for each episode more complex, I decided to just show some info from the episodes table, rather than joining tables to get the other information.

Part 1 : Creating the Script
Using what I had learned in the aforementioned book, I constructed the UI with the basic practices needed to make it usable even without JS. The basic idea is to show everything, that is, assume the user does not have JS, and then modify the UI using JavaScript if we find that it is available to us. This concept of developing the website assuming there is no JS is not always easy, and I think it is something that escapes many programmers writing websites. We tend to get very caught up in making apps look awesome with a lot of bells and whistles. Professional development is not about bells and whistles, however, and I think a lot of developers struggle with that; I know I do.

But moving on: using JS I hid the various submit buttons and the divs enclosing each select box, except for the series select, as that needed to be visible. I also added the events to the controls via this method. This is another lesson I learned: it is bad practice to add event handlers inline, because it's messy and you're assuming the user has JavaScript. Here is the code for modifying the application, which is also cross-browser compatible:

// get a listing of all input tags and hide the ones that are submit buttons
tags = document.getElementsByTagName( 'input' );
pagexhr.hideTags( tags, 'submit' );

// hide the unneeded sections
document.getElementById( pagexhr.seasonSelect ).style.display = "none";
document.getElementById( pagexhr.episodeSelect ).style.display = "none";
document.getElementById( 'episodeShow' ).style.display = "none";

// attach the event handlers to the select boxes
pagexhr.addEvent( document.getElementById( 'series_id' ), 'change', pagexhr.handleSeriesChange, false );
pagexhr.addEvent( document.getElementById( 'season_id' ), 'change', pagexhr.handleSeasonChange, false );
pagexhr.addEvent( document.getElementById( 'episode_id' ), 'change', pagexhr.handleEpisodeChange, false );

You'll notice this code makes a reference to the addEvent and hideTags functions, which you haven't seen the definitions for yet, but you will shortly. Effectively what happens here is we get a list of all input tags in our document and use the hideTags function to hide those with a type of submit. Next, we hide the select boxes for season and episode selection, as they are not needed yet; we also hide the div used for outputting information about the episodes. Finally, we call the addEvent function to register events on the controls of the form; specifically, in this case we are registering an 'onchange' event handler on each of the select boxes on the page.

Here is the code for addEvent (I will not show hideTags in full, as it is rather mundane; please download the source to see it). I would also like to thank Scott Andrew, the author of this cross-browser event registration function.

// W3C event model (FireFox, Opera, and friends)
if ( elm.addEventListener ) {
    elm.addEventListener( evType, fn, useCapture );
    return true;
}
// IE's proprietary model – note the 'on' prefix
else if ( elm.attachEvent ) {
    var r = elm.attachEvent( 'on' + evType, fn );
    return r;
}
// last resort: assign the handler directly to the element property
else {
    elm[ 'on' + evType ] = fn;
}

The idea behind this function is very simple: check whether we are using a W3C-compliant event model (i.e. FireFox, Flock, Opera, etc.) or a non-compliant model (i.e. Internet Explorer), and register the event appropriately. Notice that the IE call requires the 'on' prefix; this must be included for IE to know what you are talking about. For more information on the parameters, refer to Google.
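As for hideTags, the real version is in the source download, but the general shape is nothing more exciting than a loop over the node list comparing each element's type attribute; roughly something like this (a sketch of the idea, not the exact code):

// roughly what hideTags does – see the source download for the exact version:
// walk the node list and hide every element whose type matches the one given
hideTags : function( tags, type )
{
    for ( var i = 0; i < tags.length; i++ )
    {
        if ( tags[i].type == type )
        {
            tags[i].style.display = "none";
        }
    }
}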

So moving on: at this point I now have my application using JS to hide the parts of the interface I don't want to show initially, while everything stays visible if JS is not available, thereby giving all users a means to use the application. But right now my event handlers don't do anything, so let's define one of them. For my purposes, I will show the code the series handler uses to set up the Ajax call, and then show how I handle the return values. This one example should be sufficient to explain the rest of the functions, as they are all the same, aside from minor differences in the parameters being passed. So without further delay, here is the code for the series onchange event handler:

if ( document.getElementById( 'series_id' ).selectedIndex )
{
    if ( document.getElementById( 'series_id' ).selectedIndex == 0 )
    {
        return;
    }
}
else if ( document.getElementById( 'series_id' ).sourceIndex )
{
    if ( document.getElementById( 'series_id' ).sourceIndex == 0 )
    {
        return;
    }
}
else
{
    return;
}

try
{
    // prepare to send the Ajax request
    // IE requires these lines precede the definition of the readystatechange event handler
    var qs = pagexhr.buildParameters( 'series' );
    pagexhr.prepareSend( 'seasonQuery.php', qs );

    pagexhr.xhr.onreadystatechange = function()
    {
        try
        {
            if ( pagexhr.xhr.readyState == 1 )
            {
                pagexhr.progressContainer.style.display = "";
            }
            else if ( pagexhr.xhr.readyState == 4 )
            {
                if ( /200|304/.test( pagexhr.xhr.status ) )
                {
                    pagexhr.success( 'season', pagexhr.xhr );
                    document.getElementById( pagexhr.seasonSelect ).style.display = "";
                    pagexhr.progressContainer.style.display = "none";
                }
                else
                {
                    pagexhr.failed( pagexhr.xhr );
                }
            }
        }
        catch ( error )
        {
            alert( error.message );
        }
    };

    pagexhr.xhr.send( null ); // send
    return;
}
catch ( error )
{
    return;
}

So that's a lot of source; I colored it to help with reading. The basic idea is that we first make sure we are not looking at the 0 index of the select box, which is just the instructional "please select" entry. I did put a rather important comment after that, something I ran into while developing this to make it work in IE: the order of the lines really does matter. The open call MUST precede the definition of the readystatechange handler, which I have defined as an anonymous function. Then we send the request. There are several support functions here; once again I won't cover their implementations, as their names describe what they do. To see them, please download the source file.
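That said, just to give a feel for them, buildParameters does little more than read the current selection and turn it into a query string; roughly along these lines (a sketch; the exact version, including the parameter names it actually uses, is in the source download):

// roughly what buildParameters does – see the source download for the exact version:
// read the selected value from the relevant select box and build a query string
buildParameters : function( type )
{
    var select = document.getElementById( type + '_id' );
    var value = select.options[ select.selectedIndex ].value;
    return type + '_id=' + encodeURIComponent( value );
}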

So once we have this function in place, we can copy and paste it to work for handling the selection of a season. The selection of an episode is essentially the same, but uses responseText for its output; I will describe that later. Now, with the code as it stands, you can make selections in the select boxes, but nothing noticeable happens. What gives, you ask? Well, we are not telling JavaScript what to do with the response; right now it just calls a function that we haven't defined yet. So let's create the success function for handling a successful Ajax request. This function is very simple, mainly because I use it just to determine which population function to call:

success : function( taction, request )
{
    switch ( taction )
    {
        case 'season':
            pagexhr.loadSeasons();
            break;

        case 'episode':
            pagexhr.loadEpisodes();
            break;

        case 'show':
            pagexhr.showEpisode();
            break;
    }
}

So let's look at the loadSeasons function, to continue following what happens when a series is selected. Below is the implementation of the loadSeasons() function:

try
{
    var theXml = pagexhr.xhr.responseXML.documentElement;
    var seasons = theXml.getElementsByTagName( 'season' );
    var selectBox = document.getElementById( 'season_id' );
    selectBox.options.length = 0;

    // insert the new option first
    selectBox.options[ selectBox.options.length ] = new Option( "Please Select a Season", "" );

    for ( var i = 0; i < seasons.length; i++ )
    {
        var name = theXml.getElementsByTagName( 'name' )[i].firstChild.data;
        var id = theXml.getElementsByTagName( 'id' )[i].firstChild.data;
        selectBox.options[ selectBox.options.length ] = new Option( name, id );
    }
}
catch ( error )
{
    alert( error.message );
}

OK, so this code is quite self-explanatory. We acquire the XML document from responseXML.documentElement. Effectively you can think of this reference as another HTML page, so you're able to use all the DOM techniques to get the data out of it. In particular, we are interested in all the season tags, as they contain the information for each season within this series. Once we get the node list using the getElementsByTagName function, we perform a simple iteration and extract the necessary data from each set. We then create a new Option object and insert it into the select box. Note that this interface is not all that intuitive: whereas most collections have some sort of an add function, the options collection has no straightforward cross-browser equivalent. However, we are able to append to the list by adding new items at the end, hence why I index based on the length of the options array, which is dynamic and is incremented each time we add a new element. You will also notice some cosmetic manipulation above the for loop; it is not required, but it is done for effect and to make the user interface more intuitive.
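For reference, the XML coming back from seasonQuery.php looks something like the following (the element names are inferred from the getElementsByTagName calls above; the actual script may wrap them slightly differently):

<?xml version="1.0" encoding="UTF-8"?>
<seasons>
    <season>
        <id>1</id>
        <name>Season 1</name>
    </season>
    <season>
        <id>2</id>
        <name>Season 2</name>
    </season>
</seasons>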

Once that is complete, we hide our progress updater and execution ceases; this is the end of the script. It's actually quite simple when we break it down, but it does take some practice. Contrary to what many people seem to believe, however, writing Ajax-style code is not all that difficult, nor is the DOM manipulation that follows.

Part 2 : Making it work without Javascript
If you're new to server-side programming, this portion of the program might present more of a challenge to you; for hardened server-side programmers, this is pretty much what we do all the time anyway. In the first section, I made several references to why it's important to have JavaScript manipulate the interface only after we determine it's available. The reason, again, is that we must assume the user does not have JavaScript enabled, so our page has to work regardless. This does not mean we write multiple versions of the page; it just means we have to add additional logic to the server-side programming so that it functions alongside the Ajax-style calls.

For example, when JavaScript is available I hide the submit buttons so the user cannot submit the form, and the PHP that handles a normal form post never comes into play. This is a really simple way to handle the situation, and another reason why you do not declare JS event handlers inline. Let's look at some PHP code I wrote, but first an excerpt from the application showing PHP being included conditionally:

<?php
if ( !isset( $_POST[ 'btnSeriesSubmit' ] ) )
{
?>
<option value="">Please Select a Series</option>
<?php
}
?>

<?php include_once "./includes/seasonQuery.php"; ?>

Nothing much to say here: basically, if I detect that a series id is being submitted (I see the button in the $_POST array), then I know that I am going to be filling the select box with seasons, and showing a "Please Select a Series" prompt would be bad UI, so I exclude it from being added to the options. Now I present the PHP code from the included file. A small note: I am assuming you are familiar with the PHP/MySQL data access functions and thus will not explain them or how to use them. If you do not know how this is done, please visit this URL: http://www.php.net/mysql
<?php
if ( isset( $_POST[ 'btnSeriesSubmit' ] ) )
{
?>
<option value="">Please Select a Season</option>
<?php
    // connection and querying code

    while ( $row = mysql_fetch_assoc( $s ) )
    {
        if ( $sid == $row['id'] )
            $selected = ' selected="selected"';
        else
            $selected = "";
?>
<option value="<?php echo $row['id']; ?>"<?php echo $selected; ?>><?php echo stripslashes( $row['name'] ); ?></option>
<?php
    }

    // close the connection
}
?>

For the PHP programmers out there, this code looks pretty standard. Its only real flaw is that, compared to the Ajax experience, it isn't quite as slick: the menus reset themselves and don't hold their values between postbacks. In a real app, fixing that would be a necessary step, but as I am just trying to give people the general idea, this is sufficient. The idea is that you would include similar files for each select based on the data that had been submitted and then display the options in the select boxes. It's very simple and quite easy to implement. Likewise, on a real website you would have the PHP look for the submit, which you would disable or not allow if you were using an Ajax-style callback.

The final step : Making IE work
If you have read my previous post, you know that I became quite frustrated last night attempting to get my application to work in IE7. For the most part it seems that the event model has not changed and has little in common with the W3C standard followed by just about every other browser on the planet. To make things more difficult, IE lacks any kind of extension for debugging JS, leaving the developer with the old "alert to check" style of debugging. I am going to note some things in this script that are changes made to support IE7.

// W3C event model (FireFox, Opera, and friends)
if ( elm.addEventListener ) {
    elm.addEventListener( evType, fn, useCapture );
    return true;
}
// IE's proprietary model – note the 'on' prefix
else if ( elm.attachEvent ) {
    var r = elm.attachEvent( 'on' + evType, fn );
    return r;
}
// last resort: assign the handler directly to the element property
else {
    elm[ 'on' + evType ] = fn;
}

This is one that I already spoke about in the first section, but it is a good idea to mention it again. IE does not use the W3C standard addEventListener; instead it uses a global object (window.event), which would require the use of another function, getTarget, if we were using the event argument passed into our event handlers. Since I elected to simply get references to the select boxes directly, this is not needed, but it may be needed in your case. Google for getTarget and I expect you'll find the definition for the function.
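To save you the search, the usual cross-browser pattern looks roughly like this (a sketch of the common idiom, not code my application actually uses):

// common cross-browser idiom for finding the element that fired an event
// (a sketch of the usual pattern – my application gets its references directly instead)
function getTarget( e )
{
    var target;
    if ( e && e.target )
    {
        // W3C model: the event object is passed in as an argument
        target = e.target;
    }
    else if ( window.event && window.event.srcElement )
    {
        // IE model: the event lives on the global window.event object
        target = window.event.srcElement;
    }
    // Safari can report a text node instead of the element itself
    if ( target && target.nodeType == 3 )
    {
        target = target.parentNode;
    }
    return target;
}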

The next difference is one I already pointed out, but it bears repeating:

try
{
    // prepare to send the Ajax request
    var qs = pagexhr.buildParameters( 'series' );
    pagexhr.prepareSend( 'seasonQuery.php', qs );

    // define readystatechange method ( .. not shown )

    // send the request
    pagexhr.xhr.send( null );
}
catch ( error )
{
    alert( error.message );
}

This is something that literally drove me nuts for the better part of an hour, because IE showed no errors and every alert test I ran showed it completing. But if you define the header information (which is what prepareSend does, in addition to opening the request) after the definition of the readystatechange event handler, and you have multiple Ajax requests, it will not work. If you move the prepareSend line above the readystatechange handler definition, it works. This is boggling, and no one I asked could really explain why, so I added it to the list of Microsoftie oddities I have found.
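For clarity, prepareSend itself is nothing exotic: essentially an open() call plus a header. It looks roughly like this (a sketch; the exact version, including how it attaches the query string and which headers it sets, is in the source download):

// roughly what prepareSend does – see the source download for the exact version
// (assumes pagexhr.xhr has already been created)
prepareSend : function( url, queryString )
{
    // open() must happen here, before onreadystatechange is assigned,
    // or IE silently misbehaves on subsequent requests (the quirk described above)
    this.xhr.open( 'get', url + '?' + queryString, true );

    // discourage cached responses so repeated queries return fresh data
    this.xhr.setRequestHeader( 'If-Modified-Since', 'Sat, 01 Jan 2000 00:00:00 GMT' );
}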

The application now works both with and without JS, and it is cross-browser compatible.

Final Thoughts
Given the amount of time I spent developing this, I would say it's a good idea to use pre-made libraries if they are available. That being said, Ajax is not so complicated that you should avoid writing your own altogether. I rather enjoyed doing this, for the most part. Admittedly, it was rather annoying trying to make things work in IE because of A) the lack of an add-on to properly debug JS (VS2005 was no help either) and B) the lack of satisfactory error reporting when an error does occur. Of course, becoming compliant with most of the W3C standards is a definite starting point for MS toward creating a browser that can challenge FireFox in both usability and under-the-hood features.

And now a link to the source code: here
As to a license and all that jazz: yeah, it's not that important 🙂 do what you want with this code, I don't care 🙂

Ajax and IE7 Complaints

Nothing burns me more than when people waste precious time. And nothing demonstrated this better than my experimentation with Ajax today. I decided that I had learned enough to write a nice little Ajax-style application. I also wanted to make the application available to those who did not have JavaScript enabled. And finally, I wanted the application to work in both IE7 and FF2. This was mostly a test to see just how hard it is to A) develop an Ajax application, B) extend the application to be cross-browser compatible, and C) make it functional without JavaScript.

Of course, to develop this application I chose to use FireFox, because of its higher standards compliance and the fact that I then have access to Firebug, the ultimate tool for web developers. I will say this: Ajax development is very easy, contrary to popular belief. This being my first attempt, I was able to create a working application in about 5 hours. The main challenge was understanding the way Firebug was displaying my debug information to me, in particular the responseXML property. I eventually discovered that what I thought was a null reference was actually a rather interesting way of displaying the incoming XML document.

Once I figured this out, the process became rather easy, though to be honest I find the code a little messy. This is mainly because it's my first real attempt at JS and I am extending a lot of examples out of the book I am reading. Over the next few days I intend to simplify the logic so that the code is cleaner, and I intend to post a link to it. Of course, it will be available on my website when it launches. But I want to talk about the remainder of the process.

So with the application written, I wrote the PHP to allow it to work without JS (albeit not quite as well, but it gets the job done). This wasn't quite as easy as I expected it to be, given that PHP is a rather easy language to extend and the Ajax calls were all to PHP. Then came the next step: getting it to work in IE7.

One would think that with all the advancements in .NET and the very intuitive nature of event handling in .NET, we could expect something similar in IE7. Clearly this is far too much to hope for, as the IE7 event model is still very much the IE6 one, which was horrible then and is even more horrible now when stacked against the FF event model. This is either a huge oversight by MS or plain idiocy. It clearly is not beyond them to properly create an event model, as they did very well with .NET, so why were those design principles not applied to IE7?

It just doesn't make any sense to me. Every day they are losing more ground to FF and Opera, two browsers which implement standards and follow the intuitive designs laid out by the W3C, which does speak for most developers. Not improving the model, to me, essentially means they are ignoring web developers. That is a mistake, and one that will lead to a takeover by FireFox and Opera. To me this is hardly a big deal, as the only thing I care about is that the browser I am most concerned with supports the majority of standards. Situations such as the current one, where the worst product is the most popular product, are tremendously undesirable regardless of the industry.

Literally, with all the money they have, I know they have good developers on staff. What is so hard about implementing the standards? I admit, from a presentation standpoint, the general consensus around the community is that FF and IE7 are very close. But under the hood the difference is so overwhelming it makes developers want to cry. Microsoft does so much to try to get developers to program for their platform when the most obvious solution (quality products to program for) is the easiest solution. The same questions about IE surround Vista: why did it take five years to realize you were falling so far behind that catching up would require an obscene amount of effort and therefore likely result in more hackish code?

Windows and IE will not remain at the top forever; people are starting to learn. We can see this trend in the browser war already: FireFox is likely to destroy IE in the developer world, but it will be much closer in the normal person's world. Why? Because the majority of normal people are very superficial, so the cosmetic changes will take hold and they will tell their friends. And while this is good for MS, it's a very short-term trend. It is the developer community who recommends solutions to their family and friends. True, MS may be able to provide better security with Vista's new modes. But the ignored cries from developers will do little to diminish the negative image Microsoft has in the developer world, and this will result in more attacks.

The bottom line is that Microsoft needs to listen to the developers who use their software as well as the normal users. Cries to improve CSS support and the JS engine are just as important, if not more important, than an RSS reader or (I dare say) tabbed browsing. Somehow FireFox and Opera, two organizations with far fewer resources than MS, have been able to consistently produce better browsers that make the developer experience, as well as the user experience, enjoyable. Clearly Microsoft is missing some important piece, or they are just being arrogant. In either case, this is no way to improve their image in the development community.

FireFox 2.0

What is it about FireFox that makes it so good? Could it be the fact that among the top browsers it is the best at supporting W3C standards (Web Browser Standards Support Summary)? Is it the fact that they constantly add minor new features that always seem to boost my productivity or make it so much easier to do things on the web? Or maybe it's their constant respect for the developer and the power user through extension development and standards support.

Whatever it is, one thing is clear: FireFox 2 has won this round of the new browser war. I have used FF2 far more than IE7 during the time I have been running them both (all through the RC stages). I am considerably impressed. You can all be in awe of the new Microsoft Internet Explorer, but I am in awe of FireFox. They took something that was, in my opinion, for the most part a perfect browser and somehow made it better, without changing anything major. About the only complaint I have come across is that a few people don't seem to like having the close-tab button on the tab itself. I also read one article where, despite the fact that FireFox won out in every major category (except a couple), the author wanted to stay with IE because it looked prettier. As for me, my only complaint is an annoying bug that causes my page to jump to the top when I click on a link, which forces me to either hit "Enter" to follow the link, or scroll back to the link and click it again. I have no idea why this wasn't fixed; maybe I am the only one experiencing it.

One of the things FireFox 2 did was allow the tabs to flow off screen and provide a small control to move side to side along the tab bar; this makes it difficult to have the global tab close button that was available in previous versions. I have only seen this in action a couple of times, as I don't generally have that many windows open (only on a big news day). This is a decision that has some in the community raising an eyebrow, because it is different from what we are used to. Since the invention of tabbed browsing, we have again seen the rise of what I call the overzealous Internet user.

We have seen it before: people who don't seem to ever want to close a window, for fear of "losing" information. Clearly, they don't understand bookmarks. But I know people who will not only have 10 tabs open at a time, but will never close their browser. These are the same people who would have a list of 20 or so Internet Explorer instances in their taskbar. With tabs it becomes trickier, because the idea of tabbed browsing is to have a single window contain the entire browsing experience. To counter this, very often we truncate parts of the tab titles to allow others to fit. While FireFox 2 does do this, it also allows the tabs to flow off the screen. This is one way to solve the problem. I, however, prefer IE7's way of doing it, which is reminiscent of Mac OS X: showing thumbnails of all open tabs. But this is one of the few areas where IE7 beats FireFox.

I will admit a few of the new features, such as the aforementioned method of being able to see thumbnails of all open pages, are cool, but FireFox now has an inline spell checker, a necessity for programmers 🙂 I have spent a lot of time in both programs, and comparatively, from the user's perspective, they are just about even. Notice, though, that I am describing usability and not including features in this statement. We have to remember that IE STILL uses Notepad for the display of source code. I have always loved the color-coded syntax viewer I get with FireFox, though I guess the argument could be made: how many "normal" people ever look at web page source code?

However, FireFox has always had one feature that I have found incredibly useful: Find. It's not a popup dialog that appears on the screen; it's a small extension of the status bar appearing near the bottom. I have always loved this feature. With Opera and IE I still get a dialog that I have to constantly reopen if I am using the keyboard. I use the keyboard A LOT and I hate clicking, so clicking on "Find Next" is, to me, a waste of time. With FireFox I hit Ctrl-F, type in my term, and then just hit "Enter" to go through all the matches on the page. I love this feature. Anything that makes me use my mouse less is always a good feature in my book.

There is also the feature that is heavily touted by Mozilla, and you will hear about it in any review worth anything. With the advent of Ajax, we, as developers, have more ways to help users find the information they are looking for. Yahoo and Google have devised the auto-suggest mechanism, where the search string entered by the user is compared against a list of common search phrases to help the user figure out what they are searching for. Previously this was only available in browsers with a plugin, and it was often based on what the user had searched for before (which is actually auto-complete). What FireFox 2 does is quite different, as it uses a list served from Google/Yahoo to make this comparison. This, coupled with the inline searching that even IE has now, makes finding what you want in FireFox that much easier. Just type in part of a phrase and FireFox will query Google (or Yahoo) for suggestions; this, of course, only works with those two search engines.

Of course, people want security. Anti-phishing and protection against rogue JavaScript programs have become a standard part of the modern browsing defense system. FireFox and IE both have anti-phishing installed out of the box. However, I have yet to find a way to change the source of the suspect-website list in IE7; you can in FireFox. Of course, this makes sense, as Microsoft intends to provide this service to its users rather than use a third party. FireFox has always been fairly secure against rogue JavaScript, mainly because it does not run as close to the OS as IE has in years past. This is changing, however: those who have read about Vista and IE7 know that IE has been uncoupled from Explorer.exe and runs in Protected Mode (Vista only). It will be interesting to see how effective this is once Vista is released. I have already made at least one post showing the new IE7 protection against spyware and drive-by installs.

However, IE7 is still part of Windows, and Windows is still hated by a lot of people. So I have little doubt that someone will find a way to circumvent the protection. Whether FireFox 2 is any safer is not an easy question. FireFox 1 has shown that it has its share of problems; in fact, to my knowledge, I patched FF more often than IE in the last year. Of course, this could also be due to Microsoft's lack of caring about IE6 as they developed IE7, but this is speculation at best on my part.

In the end, FireFox, to me, is the better browser, and the browser that I will charge with handling the majority of my browsing. I think the current war is a good war, and I think it will help spur innovation and the development of IE by Microsoft. And since IE still holds around 80% of the market, this will be good for developers. As the popularity of FireFox continues to rise, we can see one of two outcomes: 1) FireFox eclipses IE7 as the #1 browser and we can program for a browser that best supports the standards, or 2) IE7 becomes the better browser and more widely supports standards, in which case we can program for either and have our pages come out alright. To me, either case is acceptable, as either case will make my life easier. But one thing is for sure: I am an FF user for life.

Internet Explorer 7

So the day is finally here, or was rather, that Microsoft released its newest rendition of Internet Explorer. Since just about everyone and their mother has taken a stab at this new browser, for good and ill, I felt it was my turn. I have not been sitting idly by since Windows Internet Explorer arrived (yes kids, that's the marketing team's doing: new name, new look); I have been reading, and speaking with my fellow web developers. What I have found is a mixture of praise and complaint, and of course the occasional anti-Microsoft zealot who, despite never trying it, refuses to believe it could be better than IE6.

They are, however, wrong on that point. IE7 is a huge step up from IE6 and did well to close the feature gap. That being said, when compared to FireFox, it is "good enough" at best. This makes me wonder if Vista will be "good enough", or Office 2007 for that matter. Yes, they have fixed a lot of the numerous problems with IE6: block-level hover now works (with the DTD in place; Quirks mode still sucks), PNG alpha is up to date, it's got an RSS reader, etc. (read more about what it does well here). However, were it any other company I might be willing to say "good job". But not with Microsoft.

We are talking about a company with vast resources at its disposal and some of the best programmers in the world. There is no reason why IE7 should not be better than FireFox 2. I am not even speaking to the security issue, because as far as I am concerned people will always find a way around everything (though having to click ActiveX apps to allow them to run is a definite plus, as is the uncoupling of IE from Explorer and Vista's Protected Mode; read about those here). And as I previously posted, IE7 does a lot to minimize the risk of "drive-by" installs and spyware installation, though this still doesn't replace good old common sense (don't download virus.exe). In terms of security, I am pleased, because now your average user has to work (at least in Vista) to actually screw up their computer. This means fewer trips home, for one, to save the family computer from being overrun.

But let's not deviate from the issue at hand. For Microsoft there is no excuse for not fully supporting CSS2 and passing Acid2. Yes, you can argue that such standards make no sense in the real world, but the bottom line is they are standards. As the browser with the leading market share, it should be Microsoft leading in standards compliance, not FireFox (Web Browser Standards Support Summary). With IE7 Microsoft had the chance to show the world that it meant well for developers. And while the addition of native XMLHttpRequest is nice, developers and designers will still have to implement hacks to "not make IE behave like a retarded cat", as one developer put it.

I will state it again: IE7 is a great improvement and will help keep the average user safe for the time being (people will always find a way around any security measure). But Microsoft owes more to the development community. They owe it to developers (whom they encourage to write for their platform) to provide proper standards support for the majority of web pages. Granted, some of the features I am speaking of are lesser known and not seen on most websites, but they are standards. Why does the one browser I HAVE to design web pages for have to be the one that supports standards the least? This is a question on many developers' minds.

Perhaps it was a question of time (though FF2 is in RC3 and will be well worth whatever wait). In that case, I wonder if it was necessary to add all the extra features to IE7. Let me be frank: I think most power users and geeks (like myself) already use FireFox. I doubt your average user even knows or understands what RSS is, for example. My thinking is that Microsoft should perhaps concentrate on improving the browser before trying to imitate other products' features. I do not use IE7 for much, though I have now had it for about two months. I will occasionally start it up, but it's ugly when you compare it to FireFox 2.

It really is funny, because for all the talk about the new UI increasing the real estate for web pages, it actually has less space than FireFox 2 does. Somehow, FireFox has managed to keep a consistent interface for about 7 years now, without removing my menu bar. I agree with many of the articles I have read: the person who came up with this idea should be fired at once. Why throw away everything we have gained in 20+ years of UI development? This is NOT an intelligent decision. Of course the geeks and power users can adapt to it, or figure out how to make it look as it should, but they are not the target audience: that's the common user.

I seriously worry how my parents will take to the UI, seeing as how opposed to change they already are. This is one of the reasons I can't get them to fully switch to FireFox; they have been using IE for too long (my fault, I know). But this new UI is clunky. Why are the tabs so friggin big, and why does it not follow IE6 at all? News flash: 82% of people still use IE6, so let's confuse the 79% of people who won't know what the hell is going on when they open IE7 (for a how-to on implementing tabs, see FireFox 2). Which in the end will increase the market share of FireFox, which is a good thing for those of us who want to follow the standards when writing web pages.

But what I am talking about is responsibility. Microsoft, while a great company with some very gifted people, needs to deliver a better product. If this is any indication of what Vista will be like, then I should prepare for disappointment. Five years of development should bring about a better OS with many new features. Five years should not bring a Windows copy of KDE or OS X without the *nix core. This is do or die for Microsoft. The whole world is holding its breath waiting for Vista and IE7, two products which could make or break the company. IE7, while not a huge disappointment, is still disappointing. Now all that's left is Vista in December, and that is what everyone will have their eye on.

War with MySQL

I like plugins and I like IDEs. I like to play with new tools to see how they work for me. I find it fun and interesting. Here are some of the tools I use:

  • Ruby in Steel (Rails plugin for Visual Studio 2005)
  • Visual Studio 2005
  • PHP Designer 2007 (Personal)
  • Net/Connector 1.0.1
  • Zend Studio
  • Netbeans

This is what I am using at this time, mind you; I have used others in the past and simply not liked them or not seen a need for them. In any case, to try something is the first step in accepting change. However, one area that I have always disliked is MySQL support in Visual Studio. I should be grateful, as no other IDE (that I have come across) offers such a wealth of features to assist the programmer in so much more than writing code. Naturally, though, Visual Studio is geared toward Microsoft products; this makes sense, of course, as it is a Microsoft product. That being the case, the majority of my database interaction has had to be handled programmatically rather than through the GUI tools provided to SQL Server users. I have long looked for a way to counter this. Tonight I thought I had found such a solution in MySQL Tools for Visual Studio.

However, I was immediately met with complications: after installing, the Connect To... dialog kept disappearing on me, so I sought answers. The first answer Google gave me was that my machine.config file needed to be edited with a few changes brought on by Net/Connector 5.0.1. I found this strange, having never heard of Net/Connector 5.0.1; I assumed it to be a misprint and continued using my Net/Connector 1.0, as the MySQL download site showed no higher version than what I had. Upon making the changes, Visual Studio began sputtering and a variety of plugins began to fail, notably Ruby in Steel (http://www.sapphiresteel.com/) and Atlas. So I decided to revert and fix these applications before continuing.

After attempting repairs and reinstalls twice for Ruby in Steel and Atlas and getting the same error, it was clear that something was wrong with my configuration. Since the only config change I had made was to the machine.config file, I decided to replace it with the default and see what happened.

While this worked, as expected none of my plugins worked at all. However, I was now able to uninstall the plugins (the config error had been preventing that) and reinstall them. After some minor tweaking I was able to get everything back to the way it was, this time using Net/Connector 5.0.1. Yes, I did manage to find it; it is buried on MySQL.com and not really publicized, and it was only thanks to Google that I found it. I then went back to install the MySQL Tools, and this time everything went in alright and no errors were reported.

I attempted to follow the instructions for using the plugin and was greeted with the same error: while MySQL Tools for Visual Studio and the option to use the MySQL Connector show up in the startup and data source type selector, as soon as I enter one keystroke the form disappears. Annoyed at having spent about 3 hours trying to get it to work (not that the actual work took that long, but working from Japan has disadvantages when accessing servers in the US), I decided to try #mysql on IRC, hoping that one of the Linux gurus might have played with .NET. However, as expected, not one person in the channel used .NET; in fact, I was rather hounded for supporting the "evil empire". Undaunted, I decided to try #c-sharp. There, thankfully, people had tried the same thing I was trying and been greeted with the same effect: whenever a keystroke is entered, the form disappears.

So for now, I am waiting on Net/Connector and MySQL Tools; hopefully in their next version they will correct this bug I submitted, because it would really be nice to take advantage of all the built-in help VS provides for developers in creating and maintaining databases, as well as the automatic creation of the data access layer. You can't blame Microsoft for these problems; they don't even have an obligation to allow plugins to be developed for Visual Studio, and the fact that they do allow it is evidence that they support external development for their tools. The developer is important to them: they want people to develop using their software and to make suggestions as to how to improve it. I wouldn't expect Microsoft to provide MySQL support in Visual Studio because, from a business perspective, they want you to use their product. Anyone with any business sense would understand this; just like Red Hat wouldn't ship drivers for SUSE applications with their products, or how some projects are KDE-only, it's the same concept.

A Word About Manuals

In this day of frameworks and new languages, with new paradigms and new strategies for increasing developer efficiency, the modern programmer has to do a lot of thinking on their feet. As programmers/developers we are gifted with the quick-thinking, highly analytical mind necessary for our profession and our personal success, in most cases. However, even the most skilled programmers must consult a reference for their tasks, whether it be for learning or for day-to-day use; a manual, in some form or another, is something the modern programmer has come to expect.

Now a manual is not always obvious, nor does it have to be. When I program in Visual C# .NET I have been able to figure out how entire sections of the framework work by simply reading the intellisense documentation put together by Microsoft. However, this is a rare case, as most programming languages do not have an IDE that supports such highly sophisticated features. Some would say the IDE does not need it and the programmer should be charged with this task. That is, however, a separate argument and not what I am discussing in this post.

What I am curious about is why most new programmers do not use the manual first. Being an active member on IRC (Freenode), in particular in the #php channel, I have had the chance to see many of the problems plaguing people new to the PHP world. It really surprises me how mundane some of the questions are. Things like "how do I use this function?", "how do I add elements to an array?", or "how do I work with multi-dimensional arrays?", etc. Such questions are easily answered by the PHP manual, which is, IMO, one of the best programming language manuals on the Internet. It is so good that I consider a formal book on PHP unnecessary.

However, that said, having such a good manual is not always a good thing. In the case of PHP, it leads to people programming who maybe should not be programming. Now, I am not one to discriminate, but based on some of the questions I have been asked, and I only named a few, I seriously wonder about some of the people who are programming out there. I hope it is only inexperience with programming that is behind such questions, or not being used to having such a good reference available.

Now, look at the Ruby on Rails manual, which is, if you're not an experienced programmer, horrible. A lot of the people I have talked to in the Rails community agree with this and agree that the manual needs a lot of work, namely the addition of examples. We often joke that the manual serves as a deterrent to discourage inadequate programmers from picking up the beautiful piece of art that is Rails. Truly, Rails is not for the faint-hearted; it's not so much difficult as it is unconventional for the untrained (or uneducated) programmer. Its rules can seem like a limitation when really they are the keys to the shackles, like .NET was for Windows programmers. But I am not going to turn this rant into a Rails praise article; there are enough of those by the fanboys already.

What I am trying to point out is that even in the case of a good manual, people often overlook it and ask questions that shouldn't need to be asked. Yes, it's nice to ask someone and get the answer quickly, but I often wonder about the amount of effort expended to find the answer first. I know there have been times when I have served as a walking encyclopedia in the PHP chat rooms; this is not my function, nor why I hang around in the channel. PHP has a great manual, and it should be utilized to the fullest extent. This is true of all manuals. I had a case today where I had a question about a new tool I am playing with called CakePHP. My answer was not in the docs, but rather was something obscure that had to be answered by one of the developers. But before I asked the question, I sifted through the manual and made sure I couldn't find the answer there.

So next time you have a question about PHP, .NET, or Java, check the manual/knowledge base/API. If it's about Rails, well, check the manual still, but keep your IRC app on standby. And don't forget Google, the programmer's best friend 🙂

Don't call it Ajax, Please

Recently the computer industry, and in particular the web programming realm, has seen the rise of a technology called Ajax. It's actually quite funny, because all this "new" technology is, is a combination of existing technologies that have been around since the late 90s. So to call Ajax new is technically incorrect; it is simply a new way to use these technologies. This is the same set of ideas that was behind the DHTML used by websites in the mid 90s: use existing technologies together to create new effects on a website to enhance the user experience.

DHTML really is nothing but a combination of JavaScript and CSS: JavaScript doing what it does best, manipulating the DOM, together with CSS to generate effects that make the page more interactive for the user. But DHTML was, like Ajax, not a new technology; it was simply a new way to use existing technologies. Ajax stands for Asynchronous JavaScript and XML, technologies that have been around since the early days of the Internet. Again, we are using JavaScript to enact DOM changes on the client side to make the page appear more desktop-like, which is the end goal for any web app: to function on the web as a desktop app functions on the desktop.

What really bothers me is people who call simple DOM manipulation Ajax, such as hiding elements on the fly with display: none and showing them again with display = "". That is not Ajax. This is Ajax:

// setup the request object – if we can
var request;
try {
    request = new XMLHttpRequest();
}
catch (error) {
    try {
        request = new ActiveXObject( "Microsoft.XMLHTTP" );
    } catch (error) {
        return true;
    }
}

// open the request and be prepared to handle the response, if it exists
request.open( 'get', url, true );
request.onreadystatechange = function() {
    if ( request.readyState == 1 ) {
        simplexhr.loading();
    }

    if ( request.readyState == 4 ) {
        simplexhr.complete();
        if ( /200|304/.test( request.status ) ) {
            simplexhr.retrieved( request );
        }
        else {
            simplexhr.failed( request );
        }
    }
}

// send
request.send( null );
return false; // this ensures that the link is not followed

DOM manipulation has been something JavaScript has been able to do for years; it is something it was designed to do. I even question whether half of Microsoft's Ajax Atlas controls are really Ajax, rather than simple encapsulations of DOM manipulation with underlying JS logic. To me, Ajax is about going out to a server and getting XML to manipulate. Nowhere in the acronym do we talk about DOM manipulation being part of it. That is basically DHTML, not Ajax. DOM manipulation is certainly an integral part of the Ajax process, as it is what lets you put the returned data into the page, but by itself it is DOM scripting, not Ajax.
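For contrast, here is the sort of thing that gets mislabeled as Ajax; there is no request to a server anywhere in it, so it is plain DOM scripting (a made-up illustration, not code from any particular site):

// plain DOM scripting: toggle an element's visibility on the client side only
// (no XMLHttpRequest, no server round trip – so not Ajax)
function toggleSection( id )
{
    var elm = document.getElementById( id );
    elm.style.display = ( elm.style.display == "none" ) ? "" : "none";
}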

Just a rant and my two cents – have fun coding 🙂