Thursday, July 27, 2006

I've Got The CE Blues

I've got a bad case of the CE blues, which is a bit like having a bad case of the Mondays.

You might have read my previous three posts about our attempts to find a Windows CE compatible device to use with our new POS product, Ontempo Store.

Well, over the last two days I've managed to scrape a grand total of one hour together to do some testing with the SQL Mobile database engine, running on an HP T5520. I've also spent most of one morning and a little bit of an afternoon trying to figure out how to permanently install the compact framework 2.0 and SQL Mobile on the device. So far, the results are not promising.

Starting with the install issues, it looks like we're up a brown smelly river without a large wooden thing with which to make progress. There isn't an image available from HP with the compact framework included, and we've had no luck creating or finding an Altiris script to install it and keep it installed. We've also failed to find any form of BSP for the device, so building our own CE platform isn't looking good either. The compact framework 2.0 SP1 will install the GAC to the storage card/hard disk, but the registry settings and core files are still lost on the HP unit since they are installed to RAM. While we will spend more time looking for a solution, we're not exactly brimming with enthusiasm at this point.

Then there's the database engine. I guess the good news is the limited amount of RAM in the device isn't a problem - SQL Mobile doesn't appear to do any data caching, and probably no execution plan caching either. I haven't been able to find anything on the 'net that conclusively states this is the case, but having run the same query twice consecutively and had it take approximately 24 seconds each time, it's unlikely there's any caching involved.

As you might have guessed from my last statement, the bad news is the performance. My first test had the database loaded in RAM, since the device wouldn't detect my USB hard disk, I didn't have a USB thumb drive, and there wasn't enough space left on the inbuilt (only) 64mb compact flash chip. The results were quite astounding: every single query I ran on my 277,000+ row table was well sub-second, except for;

SELECT COUNT(*) FROM Customer

which took 4 seconds - not great, but a figure I could live with. Since keeping the database in RAM isn't an option for us long term, I eventually found a couple of USB 2.0 thumb drives. I tried the database on each of them, and the database engine went from running like a cheetah the morning after a hot beef vindaloo to running like a goldfish in the desert.

Starting with some simple queries, I tried;

SELECT * FROM Customer WHERE Surname='willmot'

This was still sub-second, with an index on Surname, and returned three rows.

SELECT * FROM Customer WHERE Postcode='1008' AND Surname='willmot'

This went from sub-second to over a minute without an index on the postcode field, and returned only one row. After adding an index to postcode (which appeared to crash CE the first time I tried), the time went back to sub-second. This is pretty bad, but possibly not yet fatal. However, I then tried;

SELECT * FROM Customer WHERE Surname like 'will%'

This took anywhere from 1-2 minutes depending on how bored I got waiting and what else I tried to do with the unit while the query was running - if I was able to get it to do anything. I got several results, and I don't remember any of them being under a minute even when I completely left the device alone until it finished. I checked the query plan SQL Mobile was using, and it was indeed using the index on the Surname column so it didn't appear the query optimizer was at fault.

I figured maybe the IO performance of the USB thumb sucked, so I tried running the query again immediately after the first execution and got almost exactly the same result. It appears there is no data caching occurring in the data engine. On the web I have found one blog comment asking whether or not SQL Mobile performs data caching (no reply to the comment) and one PowerPoint presentation that suggested there wasn't any caching, but wasn't totally explicit.

These same, simple, queries worked fine when I had the database in RAM, so I can only presume the speed difference is in the IO to the USB thumbs, but since a USB hard disk isn't likely to be 100+ times faster I'm not filled with hope about using this database engine for our product.

In any case, the more I tested the worse it got. I couldn't get a statement using the TOP keyword to work at all - presumably SQL Mobile doesn't support it ? Any query I ran on an indexed field without using the like keyword was pretty fast; any query that used like or an unindexed field felt like watching a full season of cricket all at once, without the one-day games. I was so distressed by the performance of these queries (and ran out of time anyway) I never even got around to trying >, <, or, in, sum, group by, order by or various other functions/operators/keywords.

Since the CE version isn't on our current critical path for the POS, I'm forced back into doing some real work, so I won't get much more time to play with this for a while. None of my tests were very scientific, and I really should get a USB hard drive working to see how much faster that is than the USB thumbs. A bigger internal compact flash chip where we can store the database might also be worth a look. I could try the beta of SQL Everywhere, the next version of SQL Mobile. I'm not sure it's worth it though, since I haven't seen any comments saying it's significantly faster or supports data caching, so I don't expect it to help.
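
If I do get time to come back to this, I'll try to be a bit more scientific about the timing. A rough sketch of the kind of harness I have in mind is below - the database path is a placeholder, and Environment.TickCount is coarse, but at these speeds that hardly matters. Running each query twice should also confirm (or disprove) my theory about the lack of data caching;

using System;
using System.Data.SqlServerCe;

class QueryTimer
{
    static void Main()
    {
        using (SqlCeConnection conn = new SqlCeConnection(@"Data Source=\Storage Card\Store.sdf"))
        {
            conn.Open();
            // Run each query twice - near-identical times suggest no data caching.
            TimeQuery(conn, "SELECT * FROM Customer WHERE Surname LIKE 'will%'");
            TimeQuery(conn, "SELECT * FROM Customer WHERE Surname LIKE 'will%'");
        }
    }

    static void TimeQuery(SqlCeConnection conn, string sql)
    {
        int start = Environment.TickCount;
        int rows = 0;
        using (SqlCeCommand cmd = new SqlCeCommand(sql, conn))
        using (SqlCeDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                rows++;
        }
        Console.WriteLine("{0} rows in {1}ms: {2}", rows, Environment.TickCount - start, sql);
    }
}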

Maybe we'll see if we can get somewhere with embedded Firebird for CE, although this doesn't look likely, or maybe we'll write our own database engine.

Our other project, a computerised trolley to assist staff picking stock in a retail warehouse, isn't dependent on SQL Mobile, but does require a similar spec CE device with the compact framework installed. The T5520 would be fine for this, if we could keep the compact framework installed between restarts, power offs etc. At this point, we're still looking for a solution to the install problem or a new device that fits the bill.

Tuesday, July 25, 2006

SQL Mobile, but Not Agile

On Saturday I sat down to spend an hour creating a SQL Mobile version of a SQL Express database for use in some performance tests. Oh, how I should have known better. An hour ? What was I thinking ?

This is part of the Ontempo Store POS product I'm the development lead for at work. We want to create a Windows CE based version that runs on low-spec hardware, and we're planning on using SQL Mobile as the database engine. Since our current target device has less memory and a slower CPU than we'd originally planned, and we have no idea how SQL Mobile performs in general anyway, we're going to try some performance tests first. To do this, we need a representative SQL Mobile database.

Since our sample device hadn't arrived I figured I'd build the database on my desktop and deploy it to the device later. This is possible since SQL Mobile is cool enough that it can be installed and used on a desktop PC, and managed with the SQL Management Studio that comes as part of SQL Server and SQL Express SP1. I was able to create the database itself easily enough, although I found the engine's first limitation when I tried to manually set the maximum database size property - you can't have a database larger than four gigabytes. Fair enough though, and this should be plenty for our POS.

The next job was to add some tables to the fresh, new database full of wondrous opportunities. Now I already knew SQL Mobile doesn't support all the data types its bigger relatives have. Most notably the varchar and char brothers are sadly missing, although their cousins nvarchar and nchar are in residence. Despite this I figured the easiest thing to do was script our existing database schema (just the tables) and edit the script by hand to make it compatible. After all, search and replace should take care of a lot of the script editing.

Ah, to be young, green and stoopid (stoopid = a particularly moronic kind of stupidity).

The create table statements for the two engines are similar, but the syntax for creating primary keys and constraints is not exactly the same. In fact I still haven't worked out the correct syntax for creating a primary key with multiple columns, but since I only had a handful of tables that needed this I simply created those keys by hand via the GUI. Many of the set options (which SQL Express helpfully scripted for me) also aren't supported, along with keywords relating to file groups and other similar features. The 'include' option now available in 2005, for adding extra column values to indexes to improve lookup times, is also missing. Eventually I got my script edited and my tables created. I was now ready to insert some data.
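
Going back to the multi-column primary key for a moment: my best guess at the syntax is a table-level constraint, executed here through the SQL Mobile provider. I haven't verified this on the engine yet (and the table is invented for the example), so treat it as a guess rather than gospel;

// Assumes 'conn' is an open SqlCeConnection. The table and columns are
// made up for illustration - this is my untested guess at the syntax.
using (SqlCeCommand cmd = new SqlCeCommand(
    "CREATE TABLE OrderLine (" +
    "OrderId int NOT NULL, " +
    "LineNo int NOT NULL, " +
    "Quantity int NOT NULL, " +
    "CONSTRAINT PK_OrderLine PRIMARY KEY (OrderId, LineNo))", conn))
{
    cmd.ExecuteNonQuery();
}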

I tried to use the import/export wizard in SQL Management Studio but couldn't figure out how to create a connection to the database from inside the wizard. It appears the SQL Mobile provider isn't included in the list of providers when setting up a connection. I tried creating an ODBC connection, but had the same problem - no provider for SQL Mobile databases in the ODBC dialogs. At this point I decided I'd use my trusty Script Table Data application which I built recently. This application creates a script file full of insert statements (or stored procedure calls) to insert data into a table, based on an existing database.

Since SQL Mobile doesn't support stored procedures I created scripts with plain insert statements and started running them against the database using the query window in SQL Management Studio. This worked fine and the inserts were quite fast given the number of rows being inserted, and that the database was probably being resized quite often. All was fine until I got to the customers table, which is the largest table in our sample database. It contains approximately 288 thousand rows. This is perhaps overkill, although very large retailers or those with poor de-duping systems may end up with this many customer records. In any case it's a good quantity of data for testing with; if we run fine with that many rows, we'll run great with less.

My problem was I couldn't open the script in SQL Management Studio; instead I got a short wait followed by an error saying the 180mb script file couldn't be opened. If I waited long enough I could open the script file in notepad, cut a group of lines out, paste them into SQL Management Studio, execute them and repeat. Each copy/paste/execute operation took quite a while though, and it was clear this wasn't the best way to go.

I decided I'd write a simple console application that opened the script file, read one line at a time and executed it against the database. I wrote the app fairly quickly and ran it, only to find I got an error saying the SqlServerCE.dll could not be loaded because its version was mismatched or the right version could not be found. I couldn't quite figure out why I was getting this error, and since I was only trying to get some data into my database, rather than write a proper application, I decided not to mess around figuring it out. Instead I changed the application to be a SmartDevice console project so I could run it in an emulator.
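
The guts of the script runner are below. This is a simplified sketch rather than my actual code (no error handling, and the paths are placeholders), but it shows the one-statement-per-line approach;

using System;
using System.Data.SqlServerCe;
using System.IO;

class ScriptRunner
{
    static void Main()
    {
        using (SqlCeConnection conn = new SqlCeConnection(@"Data Source=\My Documents\Store.sdf"))
        using (StreamReader reader = new StreamReader(@"\My Documents\CustomerInserts.sql"))
        {
            conn.Open();
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.Trim().Length == 0)
                    continue; // skip blank lines between statements

                using (SqlCeCommand cmd = new SqlCeCommand(line, conn))
                    cmd.ExecuteNonQuery();
            }
        }
    }
}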

Having changed the project type and selected the Pocket PC 2003 emulator I started the application in debug mode. The emulator appeared and Visual Studio started installing the compact framework and my application. While VS was busy I opened the emulator configuration and shared the folder on my desktop PC containing my SQL Mobile database with the emulated device. This meant the shared folder was treated as a storage card.

Whoops ! What I discovered when I searched the web an hour later is; SQL Mobile won't properly deal with a database hosted on 'emulated storage cards'. Instead you get an error saying SQL Mobile made an unsupported request to the host operating system.

In order to prevent myself pulling out any more of my already thinning hair I grabbed hold of the desk. You can still see the impressions of my fingers.

I moved the database from the shared folder into the "My Device" folder in the emulator. Even though this folder is part of the emulator, it's not emulated as a 'storage card' so you can use the database fine from this location. I changed my application so it pointed to the new file location.

At last, I had achieved success, if you can call it that. My application ran and started inserting rows without exceptions. I hadn't placed any timing/performance checking code in my application at this point since I was primarily concerned with testing select and update query performance. I don't know how long the application actually ran or how many rows it inserted per minute, but it was at least eight hours before it completed.

Maybe this is because it was running in the emulator (and I was busy working on other things while it ran in the background, sucking the life from my CPU). Maybe the customer table is too heavily/badly indexed, I'm not sure yet.

I doubt the problem is SQL Mobile itself though, since the database happily inserted a lot of rows at once when the queries were run from SQL Management Studio. Of course, my application is making at least one call into the database engine for each line in my script, so my code's not particularly efficient either and that could be the problem. On the other hand, it shouldn't take eight hours either.
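
If I do revisit the insert speed, the first thing I'll try is batching the statements inside explicit transactions, so the engine isn't committing every single row. Something like the fragment below - untested, and reusing the reader and connection from the sketch above;

// Commit every thousand rows instead of once per insert.
SqlCeTransaction txn = conn.BeginTransaction();
int count = 0;
string line;
while ((line = reader.ReadLine()) != null)
{
    using (SqlCeCommand cmd = new SqlCeCommand(line, conn, txn))
        cmd.ExecuteNonQuery();

    if (++count % 1000 == 0)
    {
        txn.Commit();
        txn = conn.BeginTransaction();
    }
}
txn.Commit();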

Sigh.

At least now I have a database I can put on our actual device and test with. Here's hoping the performance tests go smoother.

It's Heeerrreeee !

Our HP T5520 arrived today. Our local supplier's website screwed up and dropped the order we placed on Friday, but we replaced the order this morning and were able to pick up our unit late this afternoon. We didn't have a lot of time to play with it, but I thought I'd post our initial reactions.

First, the good points. The unit is physically smaller than we'd imagined, which is good. It's actually smaller than the old devices we were building, and we were always a little unhappy with their size anyway. Also, while you'll never see it hanging on a wall at the Louvre, it looks quite good too.

It starts relatively quickly, much quicker than XP or probably any other version of Windows would on the same hardware. Once the desktop appears, the loading process is complete so there's none of the usual post-startup performance hit you often get logging into a desktop OS. On the other hand, the device isn't instant on either.

The pre-loaded Windows CE has almost all the features we'd like, including a VNC server which is very cool from our perspective (for remote support/training). The CE theme in use for the UI is also quite a lot nicer than the theme (or lack of) that we had in our own builds. The performance of the few pre-loaded applications, tools and applets seems very good.

Ok, now for the negatives. First, we took the unit apart to investigate the possibility of adding more RAM (just in case we install some more ewes). Unfortunately the memory is all surface-mounted chips; there are no expansion slots of any kind in the unit. This means we're stuck with the 128mb it comes with. We don't even get to use all of that, since some of it is reserved by CE and some of it is shared with the video. There are units with more RAM available, but they all seem to have XP Embedded loaded instead of CE, and we've already decided XP Embedded isn't the right answer for our products.

We're also worried about performance. We don't have any experience with SQL Mobile and CE, but our experience with SQL Express/Server is that low spec machines generally run the database engine better with more RAM. Of course CE and SQL Mobile are designed to run on much lower spec hardware than we've got, so we're hoping it will be ok. Fingers and toes crossed at this point.

Secondly, the unit doesn't come with either the compact framework or SQL Mobile installed. I didn't think this was a big problem because we can install these ourselves, but it all turned to custard when we realised the installations are lost whenever the unit is turned off or rebooted. Since it's a thin client designed to access software via terminal services etc. there is no battery, and the RAM where the software is installed gets reset. The compact framework 2.0 SP1 can be installed to the storage card, which would be fine, but apparently this only installs the GAC to the storage card and some changes must still be made in RAM (or in the ROM if that's an option). We haven't spent a lot of time looking for an answer to this problem yet, in fact we haven't even tried the compact framework 2.0 SP1, so maybe we'll find a solution. There does seem to be a flash utility in the CE build which lets us update the image from a file or the HP website, so maybe HP have an image with these included.

The last two problems are relatively minor. The top of the unit slopes down towards the front, which means you can't really sit a monitor on top of it safely, especially since the top is hard plastic rather than a non-slip rubber surface. The unit comes with a stand for mounting it on its side, which helps to conserve desk space. This is good, but it looks a little funny since it's not symmetrical; one side (the top) has an angle and the other side is flat.

Overall the unit is still very exciting. It's nearly perfect (apart from the compact framework installation issue) for our warehouse picking trolley project, and if we can also prove the performance is ok then it will be great for our POS too.

Saturday, July 22, 2006

HP to the Rescue

For nearly the last two years I've been working on a new point-of-sale system, called Ontempo Store. Everything has been going relatively well, and the combination of Visual Studio and the .NET framework has made the development of the software much easier than it would have been in any other tool I know of.

Unfortunately there is one major feature we haven't yet implemented. One of our original design goals was to have both a Win32 and a Windows CE version of the software. Preferably built from a single, shared, source base. After all, no one wants to maintain multiple copies of their code.

Our initial problem was finding some CE compatible hardware and building a CE platform. My boss found some VIA hardware we liked, and a case of appropriate size. We ordered some units for use during development and started trying to build our CE platform. At this stage we'd hired a 'CE Expert', who quickly got CE starting on the device but that was about it.

Initially he had trouble getting the network devices to start correctly. That took a couple of months or so to sort out, and once that was solved we had issues deploying and debugging Visual Studio projects on it. Our expert eventually left our employ for various reasons, and so we tried to outsource the CE development. We didn't get too far down this track, with all the companies we spoke to either being too busy or too expensive for us to use.

At this stage the company decided to press on with development of the Win32 version of the software. Since we had no emulator or CE platform it didn't seem practical to develop the application as a SmartDevice project, since we wouldn't be able to debug it. It was only a year or so later we realised our error - we could have started a SmartDevice application and attached Visual Studio to it for debugging after the fact.

In any case, we built our software as a Windows Application project with the intention of porting it to a compact framework application later. We did make an effort to ensure our data access layer and some other components are compact framework and SQL Mobile compatible, but the bulk of the code has been written without much thought to compatibility.

Performing the code port is, of course, easier said than done, and we now have a big job ahead of us. Fortunately we think, barring unforeseen problems, we can probably get most of the job done in a couple of months if we work on the problem full time.

Our last remaining problem was we still didn't have a CE device. VIA have since stopped making the hardware we had planned to use, and their replacement gear is too expensive and not really suitable. An alternative set of hardware was found, based on a VIA chipset. My boss and a colleague of mine tried to build a CE platform themselves, but again had difficulty with getting all the devices to initialise properly. We also had huge problems getting a compatible wireless networking device to run, and this is a requirement for a second CE based project we're working on.

Recently our company partnered with HP, specifically with the intent of selling our POS with HP hardware. My boss mentioned our CE issues to some HP people and they asked why we didn't try the HP T5520 thin client. This device appears to come with Windows CE pre-loaded, with a desktop and all the required terminal client software we want installed. It's also compatible with both 15" and 17" LCD screens, which we need for our software. All of this is good stuff.

The HP T5520 is less powerful than the hardware we were looking at. For example, it has only an 800mhz processor as opposed to the 1.2ghz (fanless) or 1.6ghz CPU we were using. It also comes with only 128mb of RAM, compared with 256 or 512 in our old system. There is a bigger version of the unit available, with a 1.5ghz processor and more RAM. The problem is we're not sure if it's available in NZ, and it seems it only comes with Windows XP Embedded rather than CE.

The good news is this: we've previously run our software on a Celeron 400 with 384mb of RAM, using Windows XP and SQL Express, and it ran fast enough to be 'usable'. Since the HP unit will be running Windows CE and SQL Mobile, there should be much less stress placed on the unit before our software starts, so we're hoping the lower specification won't hurt us under the CE environment.

Before we spend the time porting our code, I'm going to build a SQL Mobile version of one of our SQL Express databases and a SmartDevice application to test our SQL statements against. If the database performs well enough, then we should be able to tidy up any performance issues in our own code, that appear under CE, without too much difficulty.

We've ordered one HP T5520 for testing, and it should arrive next week, so once I've got some results I'll make another post about what I've found.

Framework Incompatibility Proven

A few days ago I made a post about a problem my boss and I discovered when running a SmartDevice project's binary on our desktop PCs. Our understanding is the .NET compact framework and full framework are supposed to be 100% compatible, so SmartDevice applications will run on desktop operating systems. This seems to be true until you host a web browser control in the SmartDevice project, and then you run into issues relating to the process threading model.

While we suspected the problem was an incompatibility, we couldn't be sure because we didn't have a Windows CE device on which to test the application. The problem didn't occur in the PocketPC emulator that shipped with VS - but that isn't a guarantee the problem won't occur on an actual device (although it is pretty good circumstantial evidence).

A friend of mine owns an HP iPAQ running the PocketPC 2003 SE version of Windows CE, and today he allowed me to borrow it for some testing. I created a sample SmartDevice project, which consisted of a single form hosting the mobile version of the web browser control with its URL set to the Microsoft home page.

I compiled the application and tried to run the binary on my development PC. The application crashed with an unhandled exception when creating the WebBrowser control, stating the control can only be hosted in single threaded apartments (which doesn't seem to be possible in compact framework applications).

I then installed the compact framework 2.0 SP1 on the iPAQ, downloaded the program and ran it. The program not only started and displayed the form successfully, but the WebBrowser control displayed the Microsoft home page after a few seconds.

So it seems you cannot be guaranteed SmartDevice projects will run on both desktop operating systems and devices. I've posted this problem on the Visual Studio Feedback site, but haven't got a response yet. Which leaves the question, how do you easily develop applications that will run on both platforms ?

Thursday, July 20, 2006

.NET Framework Incompatibility ?

It looks like my boss has discovered an interesting incompatibility between the .NET Framework and the .NET Compact Framework (version 2.0).

We haven't actually confirmed this yet, but I'll describe the symptom and our suspicion about the problem now, and post a follow up if we find out what's really going on. If anyone else knows, or has alternative theories, please leave a comment 'cause I'd love to hear what you think.

Ok, so here's the background to the problem;

My boss has been working on a SmartDevice project in VS 2005. Specifically the project is designed to run on a custom build of Windows CE 5.0 on some hardware we've bought from Taiwan. Our problem has been that we've been unable to successfully build a version of CE (or even an emulator) that works. We've tried outsourcing this part of the project, but without any success. While we continue to try to build our own CE platform, development on our application software has continued. That's beside the point, except that it explains why we're building our project the way we are.

Since we have no CE platform or emulator (yet), we're building the (Smart Device) project, compiling it, and running the executable on our desktop PCs (running XP) to test it. If we really want to debug it, we compile it with debug symbols and attach the debugger to the exe after it's started. This has been working really well for us so far, and it should, since exes created with the compact framework are supposed to work 100% on desktop PCs with the full framework. As a side note, the compact framework is also installed on the desktop PC (it seems to have been installed with VS).

The problem we've discovered occurs when we add a web browser control to our Smart Device forms. The (full framework) .NET web browser control will only load in an application running in Single Threaded Apartment mode. This is normally specified in an application by applying the STAThread attribute to the Main procedure in Program.cs. Unfortunately, we can't find STAThread anywhere in our compact framework application or its references (using intellisense or the object browser), and by default VS has marked the main method with MTAThread. We therefore presume STAThread is not supported in the compact framework.
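
For comparison, here's a minimal desktop repro - a full framework WinForms app hosting the WebBrowser control. It only runs because of the STAThread attribute on Main; swap it for MTAThread (which is effectively what a SmartDevice project gives you) and creating the control throws;

using System;
using System.Windows.Forms;

static class Program
{
    // The full framework WebBrowser control demands a single threaded
    // apartment, which this attribute provides. A SmartDevice project
    // marks Main with [MTAThread] instead - exactly where the trouble starts.
    [STAThread]
    static void Main()
    {
        Form form = new Form();
        WebBrowser browser = new WebBrowser();
        browser.Dock = DockStyle.Fill;
        form.Controls.Add(browser);
        browser.Navigate("http://www.microsoft.com");
        Application.Run(form);
    }
}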

The compact framework version of the web browser control must therefore support applications with the MTAThread attribute applied. We tested this by creating a new SmartPhone application, placing a web browser on its form and running it inside the standard SmartPhone emulator. The application and its form loaded fine, and the web browser control didn't throw any exceptions. We couldn't run the binary for the SmartPhone application on XP though. I'm not sure if this is because of the problem we've discovered, or because SmartPhone applications aren't supposed to run on XP.

In any case, if we run our actual SmartDevice application on XP it crashes as soon as we load the form with the browser control on it. We get an exception stating the control cannot run in a process that is not a single threaded apartment.

Given that;

  1. We can load and use the (compact framework) browser control in a project running under an emulator;
  2. We can run and load our application on XP if we're not using the web browser control;
  3. The full framework web browser control will only run in STA mode but the CF control appears to support MTA...

We suspect the problem is that at runtime the application is loading the full-framework version of the web browser control - even though that's not the one we referenced in the project itself. Since the controls are in fact different (the STA/MTA issue), this causes the application to crash. The end result is that compact framework applications aren't necessarily executable on desktop operating systems with the full .NET framework.

If this is the case, it's possible we could implement some kind of assembly binding policy to force the application to use the compact framework control, but I haven't done much work in this area and I'm not sure how, or if, it will work with a compact framework executable.

Of course we're just guessing at what the problem is. If we had a working version of Windows CE we could try our application on it and see if it ran. If it did, that would go a long way to proving the problem wasn't with our application or code, and that it is a compatibility issue. Until we can do that though, we can't be sure what the issue is.

I've posted this issue on the Visual Studio Feedback site so we'll have to wait and see what Microsoft say about it.

InstallAware and SQL Scripts

So the other day I had a problem with an install I was building using InstallAware. The install had been working fine, but suddenly it stopped applying one of my SQL scripts properly.

The symptom was quite weird. The SQL script function in the install created the database, and didn't return an error (in fact, it returned SUCCESS), but my script clearly wasn't being run. The script was supposed to create several pieces of schema in the new database, and none of this schema existed after the install.

I spent hours checking the SQL script, the install script, changing various things, adding debug message boxes, and re-building/running the install. All to no avail. In the end I broke through the network security on our test system and attached SQL Profiler (running on my development PC) to the test PC. I watched the statements executed during the install and noticed something weird - instead of running my script, the install was executing an 'exec' command followed by some high-ASCII characters.

I checked my script file again by opening it in notepad, and it appeared fine. At this point I was suspicious anyway, so I re-saved the script and explicitly selected ANSI as the encoding. The file size for the script was cut in half, and after I rebuilt and tested the install it worked fine.

The moral of the story ? If you're using InstallAware be careful NOT to save your SQL scripts in Unicode format. It seems InstallAware can't deal with Unicode.

Another thing to be aware of: the evaluation version of InstallAware I downloaded recently has a bug in it where variable replacement isn't performed on SQL scripts loaded from files (rather than embedding the script in the install itself). I'm not sure if the evaluation download has been updated since or not, but InstallAware do have a patch available for this, so check their support forums for a link to the download.

Monday, July 17, 2006

Exception Rant Follow Up

Perhaps I'm a bit slow, but I just discovered a really cool exception class that I thought I should mention (particularly after my last little rant on exceptions).

The Win32Exception exception class from the System.ComponentModel namespace represents a Win32 error (duh!). What's cool about this is that you can pass an integer value to the constructor, normally the result of the System.Runtime.InteropServices.Marshal.GetLastWin32Error() function, and the message of the exception is automatically set up to represent the correct text for the Win32 error code provided.
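
For example, here's the pattern in miniature. The Win32 call is picked purely for illustration - note the SetLastError = true on the DllImport, without which GetLastWin32Error returns stale data;

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

class Demo
{
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    static void Main()
    {
        if (LoadLibrary("no_such_library.dll") == IntPtr.Zero)
        {
            // The constructor looks up the system message text for the code,
            // so the exception reads like a proper Win32 error.
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
    }
}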

Now that's cool. No more magic numbers :)

Install Aware Rocks !

I just wanted to take a few minutes to rave about installation.

Over the last six years or so I've had the (dis)pleasure of working with a number of install products, including those that shipped with various versions of Visual Studio, and some from both InstallShield and Wise.

While I have managed to use all these products to some extent in the past, after trying InstallAware I don't think I need ever try another installer again.

I was a bit suspicious at first, since I've never knowingly used an installer created by InstallAware and I figured it can't be that popular. Whether it is or not though, it's great !

InstallAware inherently allows for creating and applying SQL scripts to databases, including SQL Server and MySQL databases. It also has inbuilt functionality for creating and configuring IIS applications, virtual folders and properties. A simple, if somewhat clunky, scripting language is easy to use, and since every line of script has an associated dialog you don't have to learn a whole new script language to get started. A UI provides access to most functionality (although not always on a per-feature basis). The dialog editor is also very simple and familiar to Visual Studio/Basic users, and has a number of useful controls, including a non-IE based HTML view that can be used to display billboards during install time.

InstallAware also features an incredibly powerful compression engine, and inbuilt logic for creating web-download-components on a per feature basis, so only the features selected by a user are downloaded and installed.

Best of all, the InstallAware IDE is fast and stable, and has never yet corrupted an install project on me (unlike some other products). All setups produced with InstallAware use the Microsoft Windows Installer technology, and have all the associated benefits.

This is in fact the first product where I've ever been able to create a complex install without resorting to custom actions that run little VB6 or .NET applications to perform complex setup operations. It's also the first product I've used to create custom dialogs for a real-world project.

InstallAware also has all the other usual features, e.g. integrated install debugger, default scripts, pre-requisite installation etc.

It does have some limitations. For example, there is no inbuilt ability to edit XML configuration files for .NET applications, although there are some text and INI file functions, and there is a 3rd-party plug-in that uses the text file functions to perform tag replacement on XML files installed with an application (which is a good work-around). It can also be hard to diagnose why database operations failed or created unexpected results, although it may be I just haven't found the right way/depth to enable logging that helps with this.

All in all, InstallAware is the best install builder I've seen, and I'd suggest possibly the best on the market (although I haven't looked at the latest offerings from the other big players). In any case, I highly recommend you try InstallAware first. I'm sure you'll love it too.

Sunday, July 16, 2006

Nothing Is Ever Easy

Ok, so I've been working on a little C# windows application that creates SQL scripts to insert (or update) data in a database, based on an existing database and its contents.

To be honest, my boss wrote a similar utility in VB6 years ago which we still use today. The problem is it doesn't understand some of the new SQL Server 2005 data types (like XML or varchar(max)).

I was simply going to update the existing application, but this proved problematic. Firstly, because a colleague has had the source code checked out for months, and since he works in another city it's quite easy for him to ignore my email requests to check it back in (we don't allow multiple check outs). Secondly, the code is a bit of a mess. It was never written that well and it's been hacked about by several people (including me) since it was first written. Finally, it's always been a poor application anyway, with no user input validation or error handling, and it outputs its script to the 'current directory', which in Windows is god-knows-where-at-any-given-time-except-that-it-won't-be-where-you-think-it-is-oh-no-that-would-be-too-easy.

My other problems with it are that it can only output one script per table, and for some installs I'm writing (using InstallAware) I'd really like one script that inserts data for multiple tables. You also can't save your settings, which means you have to script every table individually, and set up the correct settings (turn on IDENTITY_INSERT, truncate table first etc) for each table every time.

After I looked at the list of problems and the state of the current code, I decided a re-write was in order, and I might as well do it in .NET. Besides, this gave me a chance to try out some development techniques and .NET classes that I haven't had much use for (or time to investigate) previously, such as asynchronous UI, StreamWriter etc.

I figured the application was pretty simple (the VB6 version certainly was) and this would be a relatively small job. Oh, how wrong I was.

Firstly, this was my first experience using the TableLayout control since Beta 1. It's a little bit different, but mostly I had problems simply because I hadn't used the control much at all, and while it's great at what it does, getting it to do exactly what I wanted wasn't always intuitive. I did get my desired outcome with just a little effort though.

Next, I had issues with serialisation. I've looked at .NET serialisation once before, but had several problems with it and in the end my boss and I decided to implement our own form of serialisation in that project. Given this new utility wasn't exactly mission critical or urgent (although I did need it), I figured I could take the time to explore serialisation and get it right. Well, turns out I didn't have much trouble making my classes (in particular a custom dictionary class using generics) compatible with binary serialisation. In fact, it pretty much just worked. Very cool.

What sucked was the XML serialisation. For reasons I just couldn't explain, the generic KeyValuePair<> object kept writing out a KeyValuePairKeyTypeValueType node with no attributes or sub-nodes. This meant my data was never actually serialised properly. I couldn't get it to work no matter what I tried, even after I added specialised constructors and so on. I also had to add an ugly Add method overload, because I got compiler warnings saying a method with that signature was required at ALL levels of the inheritance hierarchy to support serialisation. I was not happy. In the end I managed to make it work by implementing System.Xml.Serialization.IXmlSerializable and manually serialising the required data. This worked, but by the time I got to this point I wasn't exactly delirious with success.
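
For anyone fighting the same battle, the general shape of what I ended up with is below. This isn't my exact class - it's a simplified sketch of a dictionary that does its own XML serialisation - but the ReadXml/WriteXml structure is the important part;

using System.Collections.Generic;
using System.Xml;
using System.Xml.Schema;
using System.Xml.Serialization;

public class XmlDictionary<TKey, TValue> : Dictionary<TKey, TValue>, IXmlSerializable
{
    public XmlSchema GetSchema()
    {
        return null; // returning null here is standard for IXmlSerializable
    }

    public void ReadXml(XmlReader reader)
    {
        XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
        XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

        bool isEmpty = reader.IsEmptyElement;
        reader.ReadStartElement();
        if (isEmpty)
            return;

        while (reader.NodeType != XmlNodeType.EndElement)
        {
            reader.ReadStartElement("item");
            reader.ReadStartElement("key");
            TKey key = (TKey)keySerializer.Deserialize(reader);
            reader.ReadEndElement();
            reader.ReadStartElement("value");
            TValue value = (TValue)valueSerializer.Deserialize(reader);
            reader.ReadEndElement();
            reader.ReadEndElement(); // </item>
            this.Add(key, value);
        }
        reader.ReadEndElement();
    }

    public void WriteXml(XmlWriter writer)
    {
        XmlSerializer keySerializer = new XmlSerializer(typeof(TKey));
        XmlSerializer valueSerializer = new XmlSerializer(typeof(TValue));

        foreach (KeyValuePair<TKey, TValue> pair in this)
        {
            writer.WriteStartElement("item");
            writer.WriteStartElement("key");
            keySerializer.Serialize(writer, pair.Key);
            writer.WriteEndElement();
            writer.WriteStartElement("value");
            valueSerializer.Serialize(writer, pair.Value);
            writer.WriteEndElement();
            writer.WriteEndElement();
        }
    }
}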

Then I had a few problems (and still do) with memory usage. My new application can create scripts for multiple tables at once. All tables are scripted asynchronously and can be output to a separate script file per table or a single script file for all tables.

If each table is sent to a separate script file, then the data is written directly to a file stream as the script is built and there are no problems. However, if a single script is being created then the system scripts each table to a memory stream. When all the tables have been scripted they are sorted based on foreign-key references (so referenced tables have their data inserted first), and then each script is written to the file in the order determined.

The problem here is that tables with many or extremely large rows can generate very large scripts. On my development PC (an Acer P4 laptop running Windows XP Professional with 1gb of RAM) I get an OutOfMemoryException at around about 160mb of data written to the stream. A similar problem seems to occur with very large strings, or placing very large strings into the Text property of TextBox controls. The interesting thing is that I still have plenty of physical RAM left, not to mention swap-file space, so I must be hitting some internal limit. My guess is that because a MemoryStream keeps its data in a single contiguous buffer, and doubles that buffer when it grows, it's really address-space fragmentation I'm running into rather than a lack of memory.

On top of this, I can quite easily catch and handle the OutOfMemoryException, which is something I've heard implied is quite hard to do properly. I can only presume the MemoryStream object specifically is being denied more memory, rather than my application (which makes plenty of allocations after the exception is thrown/caught). In the end this isn't really a problem for us, since we won't be using this application to create scripts that big, but it is still annoying. I guess I could change the code so it uses temporary files and combines them all at the end, but that doesn't seem very cool. So far, I haven't found anything that mentions what memory limit I might be hitting or how to increase it. In fact, a number of documents I've found say the size of the stream should be limited only by the amount of memory available.

The most disappointing problem I had though was the difficulty in obtaining schema information from a database. By the time I got this far through the code, I actually needed to finish it quickly so I could use it to generate my install scripts. As a result I didn't spend a whole lot of time researching this area, so it's quite possible there's something simple I missed.

That being said, it wasn't as easy as it was in VB6. Our old code simply opened an ADODB.RecordSet containing the data to be scripted. From the RecordSet object we could access not only the data, but also some meta-data about the columns, such as names, maximum lengths, database data-types and so on.

At first I tried the same thing with the SqlDataReader in .NET. This seemed like a good choice since I only needed forward-only access to the data and the SqlDataReader is quite efficient. Unfortunately I found I could pretty much only obtain the column names and values from the data reader, no maximum lengths or SQL data types, nor anything about foreign/primary keys etc.

I then discovered the GetSchemaTable method, but this didn't help either. The function returns a DataTable with a row per column in the SqlDataReader. This isn't really a helpful format in my case; I'd really like the metadata available to me as I loop through the columns in the SqlDataReader, and to do this with the DataTable I have to 'find' the row for the column I'm working on each time I change columns. That seems inefficient and is more code than we needed in VB6. Besides which, many of the columns seemed to return nonsense data. Looking in the help I found a comment stating that several of the (useful) columns in the DataTable always return zero (or some other useless value).

I should note I have since seen the GetProviderSpecificFieldType method, which might provide the database data type of a column, although I haven't tried it yet. I also don't know why I didn't notice it the first time round - domestic blindness I guess.

I also needed foreign and primary key information as well as the maximum length of each field (if it was a char, varchar, nvarchar, nchar, binary, varbinary etc.). I couldn't find any good way of retrieving all this data using the .NET classes, so I ended up executing SQL statements against the system tables in SQL Server and caching the results into collections before I began the scripting process.
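
To give you the flavour of it, here's a simplified sketch of the metadata caching, using the INFORMATION_SCHEMA views (which exist on both 2000 and 2005) rather than the system tables I actually hit. My real queries grab more than just column lengths, but the pattern is the same;

using System.Collections.Generic;
using System.Data.SqlClient;

static Dictionary<string, int> LoadMaxLengths(SqlConnection connection, string tableName)
{
    // Cache the maximum length of each character/binary column up front,
    // so the scripting loop never has to go back to the server.
    Dictionary<string, int> maxLengths = new Dictionary<string, int>();
    using (SqlCommand cmd = new SqlCommand(
        "SELECT COLUMN_NAME, CHARACTER_MAXIMUM_LENGTH " +
        "FROM INFORMATION_SCHEMA.COLUMNS " +
        "WHERE TABLE_NAME = @tableName AND CHARACTER_MAXIMUM_LENGTH IS NOT NULL",
        connection))
    {
        cmd.Parameters.AddWithValue("@tableName", tableName);
        using (SqlDataReader reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                maxLengths[reader.GetString(0)] = reader.GetInt32(1);
        }
    }
    return maxLengths;
}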

This works, and is quite fast, but isn't very cool. It's not cool partly because I don't like resorting to queries against SQL Server system objects (ever) - it's a philosophical issue I have. Mostly, it's not cool because the application now only works against SQL Server (although hopefully I've got it 2000/2005 compatible - I haven't tested it against 2000 yet). The old VB6 application would work with just about any database you could access through ADODB, at least for the input.

.NET hadn't finished thwarting me yet either. My code uses a StreamWriter object to output the script data to either an underlying FileStream or MemoryStream. In the case of a MemoryStream, I wanted to leave the stream open until all of the scripts were completed (so I could read the contents back and output them to a file stream), but I had no further use for the StreamWriter object. You'd think I could just keep a pointer to the MemoryStream and manually dispose/close the StreamWriter, or even let the StreamWriter go out of scope. Not so. The StreamWriter class has an unwanted behaviour in that it closes/disposes its underlying stream when it is disposed, and since it correctly implements the dispose pattern, this also occurs when it is garbage collected. The end result was that I had to keep a pointer to the StreamWriter object and leave both it and its underlying stream in memory until I was done. This sucks. What's more, I think this is incorrect behaviour in terms of the dispose pattern implementation, since one of the rules is that you should never dispose any object you don't own - StreamWriters (in my opinion) do not own the stream they are asked to write to.
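
The workaround I'd try next time (rather than keeping the writer alive) is a wrapper stream that passes everything through to the real stream but refuses to close it. A sketch;

using System.IO;

// Passes everything through to the inner stream, but swallows the dispose,
// so a StreamWriter can be closed without taking the MemoryStream with it.
public class NonClosingStreamWrapper : Stream
{
    private Stream inner;

    public NonClosingStreamWrapper(Stream inner) { this.inner = inner; }

    public override bool CanRead { get { return inner.CanRead; } }
    public override bool CanSeek { get { return inner.CanSeek; } }
    public override bool CanWrite { get { return inner.CanWrite; } }
    public override long Length { get { return inner.Length; } }
    public override long Position
    {
        get { return inner.Position; }
        set { inner.Position = value; }
    }

    public override void Flush() { inner.Flush(); }
    public override int Read(byte[] buffer, int offset, int count) { return inner.Read(buffer, offset, count); }
    public override long Seek(long offset, SeekOrigin origin) { return inner.Seek(offset, origin); }
    public override void SetLength(long value) { inner.SetLength(value); }
    public override void Write(byte[] buffer, int offset, int count) { inner.Write(buffer, offset, count); }

    protected override void Dispose(bool disposing)
    {
        // Deliberately do NOT dispose the inner stream; just flush it.
        if (disposing)
            inner.Flush();
        base.Dispose(disposing);
    }
}

You'd then write via new StreamWriter(new NonClosingStreamWrapper(memoryStream)) and dispose the writer whenever you like.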

The bottom line is the whole thing isn't cool because it was just easier in VB6.

Now to be fair, C# does make some things easier. The TableLayout control made the dialog resizing easier, even if I did have some problems with it. The threading objects in .NET made the asynchronous UI and script creation much easier than in VB6. The StreamWriter and various stream objects, combined with inheritance and polymorphism, made outputting scripts in various encodings and to various places easier than in VB6 as well. Of course there's hundreds of other things that I, and everyone else, use daily (or less often) in C# that are so much better or easier than they were in VB6. In fact, I love .NET (and even C#, despite semi-colons and case-sensitivity).

It's just a shame things that used to be simple, aren't always as easy now.

Anyway, version 1 of the application is finished. When I finally get some web space and a domain sorted for Yortsoft I'll post the application as freeware. It isn't perfect, but it's good enough for most things. In the meantime I'll keep plugging away at the issues that remain. Here's looking forward to version 2 :)

PS: The Table Data Scripter v1.0 is now available for download here.

Sunday, July 09, 2006

A Software Icon

From time to time I need to create an icon, or edit an existing one. There are a huge number of tools out there for doing this sort of work, and Visual Studio 2005 itself has its own icon editing facilities. Unfortunately, virtually all of the products I've tried have issues with transparency. While they all claim to support transparent backgrounds, and in fact most do most of the time, they usually produce icons whose transparency will disappear intermittently. It might only be when resources are low, or when the icon is displayed in Windows explorer, or perhaps only on the task-switch dialog (the one displayed when you press alt-tab in Windows). In any case, it's annoying and not good enough in my opinion.

Luckily, there's one product I've found that consistently gets it right. Better yet, it does an amazing job of making icons out of existing bitmaps or jpgs, by allowing you to simply paste them from the clipboard into its editor. The effect is particularly good for icons that are 32 x 32 and have 256 or more colours.

So, if you're wanting a good icon editor that does all the right things, try Axialis IconWorkshop 6.0.

Every Code Has Its Exceptions

Ok, so most people can throw and catch exceptions in C#, but unfortunately good error handling is a little more complicated than that. There are also a few, shall we say, 'issues' with some of the .NET classes (notably the ApplicationException class) that not everyone is aware of.

Krzysztof Cwalina makes some good points in his blog entry on the ApplicationException class. His follow up entry is also very good, and provides some handy hints for error handling and custom exception creation in general.

However, there are a few things that I think could be said in a different (and perhaps clearer) way, and a few other points I think he missed, so I've decided to put this blog entry together to throw my two cents worth in on the subject.

If you're looking for some best practices to use when writing error handling code, try this CodeProject article.

Exceptions and Performance, When to Throw an Exception

Firstly, a note on performance. Most of you have probably heard this many times, but some may not have, and it will have a bearing on some of my later comments.

It is generally agreed the overhead of a try statement is minimal and quick to execute, but that actually throwing an exception (using the throw statement) is slow. How slow depends on various things, such as the call stack size, number of active catch handlers and so on.

As a result, the general rule is: don't throw exceptions to control code flow. For example, if you're writing a function that finds an item in a list, what do you do if the item isn't found ? Do you throw an exception or return null (or some other value that means not found) ?

Some purists will tell you that throwing an exception is the 'right' thing to do, because it can't be ignored like a null value, and a 'CantFindTheItemYoureLookingForException' is easier to diagnose than a 'NullReferenceException'. This is true, but you must also ask yourself;


  • Is this really an exception condition ?
  • Is my function documented, and if so, is it really a problem to return null instead of throwing an exception ?
  • Will my function ever be called in a loop, or during time-sensitive operations (e.g. events) ?

The third question is really the killer, if the answer to that is yes, then you shouldn't be using an exception (if you're coding in .NET). If it's no, you should still consider the other two questions before you decide whether or not an exception is appropriate.

One more thing on this point - if it's appropriate to throw an exception, do so. Don't avoid throwing exceptions because you don't like them, or because they're slow, when performance isn't actually going to be a problem in the code you're working with (perhaps a function that runs once on startup etc).
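
To make the list-lookup example concrete, here are both options side by side. The names are mine, invented for illustration;

using System.Collections.Generic;

public class Customer
{
    public string Surname;
}

public class CustomerList
{
    private List<Customer> customers = new List<Customer>();

    // Option 1: not finding an item is a normal outcome, so signal it with
    // null (and document that!). Safe to call in a loop.
    public Customer FindCustomer(string surname)
    {
        foreach (Customer customer in customers)
        {
            if (customer.Surname == surname)
                return customer;
        }
        return null;
    }

    // Option 2: the caller believes the item must exist, so absence really
    // is exceptional - and a specific exception beats the eventual
    // NullReferenceException from option 1 being ignored.
    public Customer GetRequiredCustomer(string surname)
    {
        Customer customer = FindCustomer(surname);
        if (customer == null)
            throw new KeyNotFoundException("No customer with surname '" + surname + "'.");
        return customer;
    }
}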

When and How to use Custom Exceptions

Krzysztof Cwalina is correct when he says you shouldn't create your own custom exception just for the sake of having your own exception class. He's also correct to say it's ok to throw the existing exception classes provided by the .NET framework. What he doesn't explain (to my satisfaction) is why.

The basic point of having different exception classes is;

  1. To allow developers to control which exceptions they catch, by group or by specific exception.
  2. To provide exception specific information.

Note point #1. If you're writing a procedure that takes an argument which cannot be null, it is fine to throw System.ArgumentNullException because this allows a developer calling your function to specifically trap that single exception and deal with it separately to any other exception your code might throw. There is no need to create another exception class that means the same thing, all you would be doing is forcing the calling developer to learn about your own exception class that isn't providing any value.

On the other hand, if you're writing a function that really does throw an exception that isn't catered for by an existing exception class, you mustn't be afraid to create your own. Creating a NotAllowedOnPublicHolidaysException class that's thrown from your ScheduleMeeting function is great, because once again it allows the developer calling your function to trap just that exception and deal with it, independently of any other exceptions that might occur. If you throw ArgumentException instead, the developer might not know whether Monday the 29th of February was invalid because it was a holiday or because it doesn't exist in that year (assuming you throw the same exception for both cases). In which case, they must resort to catching both exceptions and trying to peel apart the exception object's message text (assuming it's unique to the two different cases) to determine how to handle the exception.

This is also the reason throwing new Exception() is the wrong thing to do.

So DO use custom exceptions, but use system exceptions in preference if an appropriate one already exists.
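
Going back to the scheduling example, a well-formed version of that custom exception is only a few lines - the three standard constructors plus the serialisation one;

using System;
using System.Runtime.Serialization;

[Serializable]
public class NotAllowedOnPublicHolidaysException : Exception
{
    public NotAllowedOnPublicHolidaysException()
        : base("The requested date falls on a public holiday.") { }

    public NotAllowedOnPublicHolidaysException(string message)
        : base(message) { }

    public NotAllowedOnPublicHolidaysException(string message, Exception innerException)
        : base(message, innerException) { }

    // Required for the exception to cross serialisation/remoting boundaries.
    protected NotAllowedOnPublicHolidaysException(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}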

ApplicationException - Is It Harmful ?

Krzysztof Cwalina says that ApplicationException isn't harmful, just useless. He's nearly right.

The reason ApplicationException is useless is that some of the system exceptions inherit from this class. The idea behind ApplicationException is that only custom exceptions (i.e. non-.NET exceptions) would inherit from it. This would allow developers to catch ApplicationException and be guaranteed they would only catch their own exceptions (or at least non-framework ones). Since some of the .NET exceptions inherit from this class, it's no longer possible to do this.

ApplicationException isn't harmful in that if you inherit from it in your custom exceptions, it doesn't cause any problems. What could be harmful is any code that tries to catch (ApplicationException).

The reason this is harmful is that it's likely the developer who wrote the code believed they would only catch their own custom exceptions, but this is not the case.

The truth is, I had some code that suffered from this problem. The code in question tried to invoke a method using certain reflection techniques. I wanted to code it so that if the invoke failed, an alternate (and simpler) invoke mechanism was used. I figured I could put a catch (ApplicationException) block in and rethrow the error, followed by a catch (Exception) block that tried the simpler invoke. Now the code was ugly and I've re-written it since anyway, but the point is that my code failed in nasty ways because I did occasionally catch .NET errors that derived from ApplicationException but that shouldn't have been treated as such by my code.

So ApplicationException can only harm you if you're catching it, not deriving or throwing it. If you are catching it, you might want to revise your code.

I saw a post somewhere recently saying ApplicationException was useless because it didn't provide any functionality over and above the Exception class. This person had missed the point; remember, the purpose of ApplicationException was to allow a particular group of exceptions to be caught.

So What Can We Use Instead ?

If you need ApplicationException like behaviour, the answer is quite simple - create your own. I have my own UserCodeExceptionBase class. You can create your own, with your own name of course, but here's what I did;

  1. I made my class abstract (and put base in the name). This is because throwing new UserCodeException() would be about as useful as throwing new Exception(). The calling developer still can't catch specific errors. By making my class abstract, I've forced myself to be tidy from now on - any time I want to throw a UserCodeException I must either use an existing custom exception or make a new one that is specific to the situation I'm in. This isn't required, but you might want to consider it.
  2. UserCodeExceptionBase inherits from ApplicationException. This means that it should be compatible with any existing (but broken) code that catches ApplicationException. Since I might throw my exception from class libraries I build, and those libraries may be used by other developers who have incorrectly assumed ApplicationException works, I decided the compatibility was important.
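
A minimal sketch of the idea;

using System;
using System.Runtime.Serialization;

[Serializable]
public abstract class UserCodeExceptionBase : ApplicationException
{
    // Abstract, with protected constructors - every throw site has to pick
    // (or create) a specific, catchable exception type.
    protected UserCodeExceptionBase() { }

    protected UserCodeExceptionBase(string message)
        : base(message) { }

    protected UserCodeExceptionBase(string message, Exception innerException)
        : base(message, innerException) { }

    protected UserCodeExceptionBase(SerializationInfo info, StreamingContext context)
        : base(info, context) { }
}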

Other than that, follow the rules Krzysztof Cwalina makes in his second blog entry on ApplicationException. That should ensure your exception classes are well formed.

Deep Exception Hierarchies

Another thing I wanted to comment on is 'deep exception hierarchies'. I've only just recently started hearing that you shouldn't have too many levels of inheritance in your exception classes (and this may be one reason NOT to inherit from ApplicationException in your own UserCodeException class). To be honest, I haven't yet had anyone explain why it's bad, but I strongly suspect it relates to performance.

Assuming this is the case, I would question how much of a problem it is. We've already covered the fact that throwing exceptions isn't fast, and you shouldn't do it unless you really, really have to. Throwing and catching exceptions shouldn't be the norm (that's why they're called exceptions). So if you're not throwing them often, why does it matter if there's a couple of extra layers of inheritance and it takes a bit longer to process the exception ? Even if it takes another second, assuming your exception is being thrown as a last resort, and only being thrown once, the performance hit shouldn't be too bad.

I guess it depends on your exact situation, but I wouldn't panic if you have four or five levels of inheritance. At least not if you're using exceptions appropriately in the first place. Obviously, you shouldn't use more levels than you need though.

Visual Basic Isn't the Poorer Cousin

Ok, so VB6's error handling was dirty, ugly, and often poorly used. With VB.NET, Visual Basic 'grew up' and got structured exception handling like C#. You might think that's the end of the story, but it's not.

It turns out VB.NET has some very nice exception handling features that C# lacks.

One of these features is 'Exception Filters'. This basically lets you call a function that returns a boolean value before a catch block executes. If the function returns false, the catch block is ignored and the exception continues to be thrown to the next most appropriate catch block, or ends up unhandled. Personally I haven't had a need for this, but it's interesting that VB.NET has explicit syntax for this while C# does not. Panopticoncentral.net does have a blog entry describing one use for this feature.
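
C# can approximate a filter by catching, testing, and rethrowing, although it's not quite the same thing - a real filter runs before the stack unwinds, while a rethrow happens afterwards. A contrived sketch (the ShouldHandle test is made up);

using System;

class FilterApproximation
{
    // Decides whether the catch block should handle the exception,
    // similar to what a VB.NET filter function would do.
    static bool ShouldHandle(Exception ex)
    {
        return ex.Message.Contains("transient"); // hypothetical condition
    }

    static void Main()
    {
        try
        {
            throw new InvalidOperationException("transient failure");
        }
        catch (InvalidOperationException ex)
        {
            if (!ShouldHandle(ex))
                throw; // rethrow; unlike a real filter, the stack has already unwound
            Console.WriteLine("Handled: " + ex.Message);
        }
    }
}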

The second feature (and the more useful one as far as I'm concerned) is the Retry statement. The Retry statement can be called from a catch block, and effectively 'goes to' the associated try block again. This is useful when you're dealing with exceptions thrown from a device, and you want to allow the user to 'Abort, Retry, Ignore'. Without the Retry statement, you must either use the horrible 'goto' command/equivalent (if it's available in your language) or a loop where execution breaks only when the code executes successfully or the user elects to abort. While both of these methods are functionally equivalent, they are not as nice to read.
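
For what it's worth, here's a rough C# sketch of the loop-based approach just described (SendToPrinter is a made-up placeholder for real device code);

using System;
using System.Windows.Forms;

class RetryLoopExample
{
    // Repeat until the operation succeeds or the user chooses Abort;
    // Ignore just moves on.
    static void PrintReceipt()
    {
        while (true)
        {
            try
            {
                SendToPrinter();
                break; // success
            }
            catch (InvalidOperationException ex)
            {
                DialogResult choice = MessageBox.Show(
                    ex.Message, "Printer Error", MessageBoxButtons.AbortRetryIgnore);
                if (choice == DialogResult.Abort)
                    throw;
                if (choice == DialogResult.Ignore)
                    break;
                // Retry: fall through and loop around to try again.
            }
        }
    }

    static void SendToPrinter()
    {
        // Placeholder for real device code that may throw.
        throw new InvalidOperationException("The printer is out of paper.");
    }
}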

Exceptions Hidden by the .NET Runtime

Sometimes exceptions can be hidden by the .NET runtime. One instance of this I've discovered is exceptions that are thrown during databinding.

If you bind a control to a property on an object, the property set will be called when the control's value is 'written' by the data binding engine. If during that call an exception is thrown, the control or databinding engine appears to swallow the exception. You will not get an unhandled exception, nor can you use the events on the System.Windows.Forms.Application object to be notified of it.

If you want access to the error you need to connect an event handler to the BindingComplete event of the data binding. Your event handler will receive a BindingCompleteEventArgs object that contains the exception that occurred.
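
Here's a minimal sketch of wiring that up (the Customer class and form are made up for illustration). Note the binding must be created with formatting enabled, otherwise BindingComplete won't fire;

using System;
using System.Windows.Forms;

public class Customer
{
    private string surname;
    public string Surname
    {
        get { return surname; }
        set
        {
            if (value == null || value.Length == 0)
                throw new ArgumentException("Surname cannot be blank."); // thrown during databinding
            surname = value;
        }
    }
}

public class CustomerForm : Form
{
    private TextBox surnameTextBox = new TextBox();

    public CustomerForm()
    {
        Controls.Add(surnameTextBox);

        // The 'true' argument enables formatting; BindingComplete only
        // fires when formatting is enabled on the binding.
        Binding surnameBinding = new Binding("Text", new Customer(), "Surname", true);
        surnameBinding.BindingComplete += delegate(object sender, BindingCompleteEventArgs e)
        {
            if (e.Exception != null)
            {
                // The otherwise-swallowed exception is available here.
                MessageBox.Show(e.Exception.Message);
                e.Cancel = true; // optionally keep focus on the offending control
            }
        };
        surnameTextBox.DataBindings.Add(surnameBinding);
    }
}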

The worst thing about this behaviour is the control doesn't re-read the value of the property, so the control's contents are now inconsistent with the actual data. While this shouldn't ever occur, because the validation events on the control should be used to prevent bad values being passed to the object, this potentially requires validation code in two places, and requires that the validation code in the event understands all the possible situations in which the object may throw an exception. If you have controls bound to properties that may throw exceptions, be very careful.

Catching Unhandled or Threading Exceptions

Applications can use some events (System.Windows.Forms.Application.ThreadException and AppDomain.UnhandledException) to catch unhandled exceptions, including exceptions from other threads.

Note, these event handlers are really only useful for logging event information. They cannot be used to 'handle' the exception, and you cannot prevent the application from unloading if the exception was thrown on the main thread. You also do not have access to any non-global information, except for the exception itself. Oh, and if you're a newbie, please avoid the temptation to make all your application internals global so they can be managed from these events - it's bad design. Use these events for logging, or not at all.

It should also be noted these events only fire for exceptions occurring within the AppDomain the application was started in. Exceptions occurring in other AppDomains created by the application itself will not fire these events.
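
Here's a sketch of the standard wiring for logging purposes (substitute your real startup form and logging code);

using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Fires for otherwise-unhandled exceptions on the UI thread.
        Application.ThreadException += delegate(object sender, ThreadExceptionEventArgs e)
        {
            LogException(e.Exception);
        };

        // Fires for unhandled exceptions on other threads. Logging is all you
        // can usefully do here - the application will still terminate.
        AppDomain.CurrentDomain.UnhandledException += delegate(object sender, UnhandledExceptionEventArgs e)
        {
            LogException(e.ExceptionObject as Exception);
        };

        Application.Run(new Form()); // substitute your real startup form
    }

    static void LogException(Exception ex)
    {
        if (ex != null)
            Console.Error.WriteLine(ex.ToString());
    }
}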

See this CodeProject article for more details.

Wednesday, July 05, 2006

Red Gate Lets Intellisense into Sql Server

I just thought I'd let everyone know about a really awesome utility for Sql Server. The product, known as Sql Prompt from Red-Gate Software, puts Visual Studio-like intellisense into Query Analyzer or the query windows in Sql Management Studio. Best of all, it's free until September 1st !

This little utility is absolutely great. I no longer have to empty all the blood out of my keyboard (from having worn my fingers down to the nub) after spending a day coding stored procedures and triggers !

I have noticed it can occasionally cause some small performance problems when it builds the list of keywords, and two of my colleagues have had it crash on them a couple of times (although it left their Sql Server windows intact and they didn't lose any data). For the most part though, it's a life saver, and we highly recommend it.

Tuesday, July 04, 2006

Trap For VB6 Developers in C# - Beware Your Use of Strings

Ok, so I wrote this whole post about how storing binary data in strings was bad, mostly around an experience I had with C# strings and character zero. I even published it for a few hours. Then I discovered I couldn't reproduce the problem I thought I'd had, which means either;

  1. The problem wasn't what I thought it was (even though I'm certain I proved it).
  2. The problem only occurred in one of the betas.
  3. The problem isn't reproducible in any way I can remember or puzzle out.

In any case, I removed the post since it now appears to be inaccurate. However, this doesn't mean we should all feel free to store binary data in strings (the byte array is your friend). This article in the MSDN security blog talks about several ways binary data can be corrupted by string encodings, and while it talks about encrypted data, the same rules apply to any binary data. I have actually seen the same problem occur with binary data created by the GZip classes, and it has confused a number of people.

So in short, don't store binary data in strings. Use a stream where you can, as it's likely more efficient, and where you can't, use a byte array. If you must use a string, make sure you base64 encode the data first, but beware that any function attempting to use the binary data will need to know to base64 decode it.
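
A quick sketch of the base64 approach using the standard Convert methods;

using System;

class Base64Example
{
    static void Main()
    {
        // Includes bytes that can corrupt a naive string round-trip.
        byte[] binaryData = new byte[] { 0, 1, 2, 0xFF };

        // Safe: base64 encode before storing binary data in a string.
        string safe = Convert.ToBase64String(binaryData);

        // Any consumer must know to decode it again before use.
        byte[] roundTripped = Convert.FromBase64String(safe);

        Console.WriteLine(safe);
        Console.WriteLine(roundTripped.Length == binaryData.Length); // True
    }
}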

Is String.IsNullOrEmpty Really Broken ?

The Short Answer

Yes. Use of String.IsNullOrEmpty can cause runtime errors unexpectedly. Bill McCarthy has found and proved this; the details are posted on his blog.

Should We Panic ?

No. The error only occurs when:

  • Code is compiled to release mode.
  • Code is compiled with optimizations.
  • The code itself is structured so a particular set of optimizations occur.
  • The program is NOT started from the Visual Studio IDE.

If you haven't found the problem yourself, you've compiled your code for release with optimizations, and you've tested your application from outside the IDE, then you're not likely to encounter the problem anyway.

Oh, and yes, the problem does occur in the sample code Bill provided, even when the 'if' block has code in it. Several people, including myself, have proved this.

So We Don't Need To Fix Anything and We Can Keep Using String.IsNullOrEmpty Right ?

Erm, not exactly - you should stop using the String.IsNullOrEmpty function in any new code, and you probably should refactor old code for safety's sake.

What Is The Workaround and Do Microsoft Know ?

The problem has been reported to Microsoft, and there is a workaround posted on the Visual Studio Feedback Centre site. Microsoft say they don't plan to fix the problem in the short term because the workaround is available.

Basically the workaround is to produce your own copy of the function, and to mark the function as not optimizable so the compiler won't play with it.

Note the function provided on the Microsoft site is apparently the most performant way to check whether a string is null or empty (checking for null and then string length, as opposed to comparing the string to null/"" or String.Empty). Using an IsNullOrEmpty function also looks neater than manually checking the two conditions separately all the time, so I recommend using the workaround provided.
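
I won't reproduce the exact code from the feedback site here, but the general shape is something like this (the attribute combination shown is illustrative - check the posted workaround for the definitive version);

using System.Runtime.CompilerServices;

public static class StringHelper
{
    // A private copy of the check, marked so the JIT won't inline or optimize it.
    [MethodImpl(MethodImplOptions.NoInlining | MethodImplOptions.NoOptimization)]
    public static bool IsNullOrEmpty(string value)
    {
        // Check for null, then length - reportedly faster than comparing
        // against null/"" or String.Empty.
        return value == null || value.Length == 0;
    }
}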

So That's That Then ?

Yes and no. There are three problems I see with the current state of things;

  1. Not everyone knows about the String.IsNullOrEmpty problem.
  2. We're now going to have umpteen different versions of the same function, all maintained by different people.
  3. Other code may have been optimized in a similar way, making it suspect too.

Technically, number two could be solved very easily. Somebody could create a library that contained a working function and publish it for free, and the Visual Studio/.NET community could all agree to use that library. In practice though, this could be difficult. Trying to get everyone to use the same library, deploy an additional assembly with their applications, agree to whatever license agreement comes with the library, and trust person X's code is likely to be a bit like herding cats, so it probably won't happen. Even if it did, you'd still have to contend with point #1.

If we're not all using the same function, it's probably just an annoyance rather than a big problem - at least until all of today's programs become legacy code. The IsNullOrEmpty function is so trivial it should never be a problem. On the other hand, storing years as two digits once seemed reasonable too, and then we got Y2k. And similarly, the Year 2038 problem. My point is that if there ever is a change to that function, be it a year from now or 20 years from now when we're using the @Basket framework 1.0, the upgrade path isn't going to be very neat, because everybody's individual copy of the code will need to be replaced or upgraded appropriately. We can only hope this is never a problem. I'm a pessimist however.

Point #1 is the big problem however. Firstly, there are many books, blogs, forums, websites and people all touting the use of String.IsNullOrEmpty. Indeed, searching for String.IsNullOrEmpty shows a great number of posts by people saying how cool it is that this function is in .NET 2.0 and that it should have been in earlier versions. Have these people realized the potential problems with the function ? Secondly, there are new programmers entering the IT world all the time, so even if everyone knows about the problem tomorrow, we're going to have to keep teaching newbies not to use it.

The third problem is that while the sample code for reproducing the problem is fairly specific, and not likely to affect a lot of production code, it's possible that other code could be optimized in a similar way, or at least in a way that causes similar problems. This means that any library, control, or other .NET function that uses String.IsNullOrEmpty may now be suspect. I don't want to be an alarmist - again, if the problem hasn't already been found in a particular piece of code then it probably won't show up at all, ever.

Is that guaranteed though ? What if the compiler, in current or future versions, doesn't always apply the same optimizations (perhaps based on how much memory is available and the performance trade-off between faster/larger code and smaller/slower code) ? Wouldn't that mean the problem might occur intermittently, even with the same code ?

What I don't understand is Microsoft's reluctance to fix the problem. I can understand that changing the compiler at this point is perhaps too hard. I can also understand that releasing any hot fix or service pack is not a minor challenge for products the size of .NET/Visual Studio. I would think, however, that patching the framework so their own IsNullOrEmpty function is marked as not optimizable, and releasing this as either a. a hot fix, b. part of a larger service pack (which they are probably planning anyway), or c. both, would be the sensible thing to do.

I guess we can only pray they change their minds. If you agree with me, I suggest you go to the problem report at the Visual Studio Feedback Centre and vote for the problem as being important. If you really feel strongly about it, maybe you could even leave a comment voicing your concerns. Who knows, public opinion might force a fix ?

Monday, July 03, 2006

Another Day, Another Certification

Ok, so I didn't intend for this to be my first post, but the best laid plans of mice and men...

Technically this isn't about the .NET Framework, Visual Studio or even a Microsoft software product, but it is related, so I'm going to post it here anyway.

My MCPD Certification

Today I sat and passed my third Microsoft exam, 70-548 Designing and Developing Windows Applications by Using the Microsoft .NET Framework. This means that once my results are passed to Microsoft, all going well, I'll soon be an MCPD (Microsoft Certified Professional Developer) for Windows Applications.

Thinking About Taking an Exam ?

I can't tell you much about the exams, since candidates are required to agree to a non-disclosure agreement before taking each exam (to preserve the integrity and value of the certifications for those who pass). What I can tell you is to check out the Microsoft Learning site if you want more details. Also, the on-line skill assessments are really useful tools, not just toys. If you've looked at the exam pages and are terrified by the number of things the tests cover, take the relevant skills assessment. Study up on the sections you do poorly on in the assessment, or if you're already getting a high score (an actual high score, as opposed to a score high on the score board) then you might want to try sitting the exam without further training.

My Examination Experience

I was very nervous when I sat my first exam (Exam 70-536: TS: Microsoft .NET Framework 2.0 - Application Development Foundation) because a lot of the content the learning website said was covered by the exam was stuff I either hadn't used, or hadn't used much. However the exam itself, while still challenging, wasn't as impossible as I had feared, and I passed without any additional training and only a little study conducted on-line using MSDN and Google. That being said, I didn't pass by a wide margin, so perhaps I could have done with a little more brushing up in some areas.

I was more confident going into the second exam (Exam 70-526: TS: Microsoft .NET Framework 2.0 - Windows-Based Client Development), and I passed by a greater margin, but since the exam covered more of the stuff I do day-to-day, that wasn't unexpected.

After passing the first two exams I gained the MCTS (Microsoft Certified Technology Specialist) with the Windows development specialisation.

I didn't know what to make of the third exam before I took it. While I was confident I should be able to pass based on my experience with designing and deploying applications, my boss wasn't so sure. Obviously neither of us knew what the content was really like so we were both guessing. As it turns out, I did slightly better in this exam than in the second one, getting a pretty good result overall. I scored 100 percent in two sections on both 70-526 and 70-548, which I was really pleased with.

Why Did I Take The Exams ?

This is a bit convoluted. Basically I took the exams because my boss wanted me to. As I understand things, the company I work for used to qualify as a certain level of Microsoft Partner, but the rules changed and this was no longer the case.

In order to re-qualify we needed to do several things, including employing two MCPD certified people. Since we weren't looking to hire more staff, my boss decided to send a couple of existing employees along to the exams and see if they could get certified. I was chosen to go first, and over the past month or so I've sat the three required exams.

Normally people prepare for exams, but since the training materials are often more expensive than the examination fee itself, my boss decided to send me in without taking any courses, buying any books, or even purchasing a practice test from MeasureUp or Self Test Software. The theory was that if I failed, we'd then purchase the necessary materials.

Luckily I have now managed to pass all the exams. We considered purchasing some training materials for the final exam (70-548), but at the time we checked there was nothing available yet for the exam related to the Windows development specialisation (although I believe there was one practice test available for the Web development specialisation). Apparently the training materials are due out later this year.

My best friend and colleague who is also getting certified managed to pass the first exam (70-528), but apparently needs some more study before he can pass 70-526.

The Value of the Exams and Certifications

Having actually sat the exams and gained the certifications, I now have a better understanding of their value, and yes, I believe they do have value. Someone who has sat and passed the exams has proved that either a) they have a good knowledge of the topic(s) being tested, and/or b) they have good problem solving and reasoning skills. Both of these are important attributes in a software developer, although I believe the second set of skills is more important overall. Unfortunately I'm not sure the exams prove whether an individual has more of A or B.

That having been said, there is a downside to the exams, and that is cost. At roughly $NZ200 each they are not hugely expensive, yet there is no compelling reason I can see for an individual to fork out the examination fee themselves, let alone pay for the training/study materials they may need to pass. The exceptions would be if they couldn't get work, or thought they'd hit a salary cap - but if they can't get work, they probably don't have the money to spend anyway. As for the salary issue, while my boss is pleased I passed (and I'm happy too), I'm not getting paid any more because I did. Because the exams have a fee attached, many people who could pass an exam will never even sit it.

That is not to say there aren't benefits to being an MCP or having other Microsoft Certifications, but at roughly $NZ600 if you pass all three exams first time, becoming an MCPD probably costs more than most individuals could justify if paying for the exams themselves - unless of course they have a specific need for the certification for a job they are already in or applying for.

If I were an employer looking at two candidates, and both had the same skills and experience on paper but one had the certification or had passed an exam, then I would (all other things being equal) choose the MCP candidate. Further, I would be happy to accept the exam/certification proved a certain level of ability. This would be particularly true if the candidates were both applying for junior positions.

However, if one of the candidates had more experience while the other had passed an exam or gained a certification, I would likely choose the candidate with more experience. This is because experience counts for a lot in IT, but mostly because the experienced candidate likely could pass the exam or gain the certification as well - they just haven't tried because they had no reason to.

Are the Examinations and Certifications Worth It ?

Definitely, if you can convince your company to pay your way. If you have to pay for them yourself, then it's still a great thing to do (it can't hurt, after all) and can provide some people with confidence they might otherwise lack (especially if, like me, you never received formal training or a tertiary education). However, if you can't afford it, don't feel like you're missing out. The examination only proves your worth; it doesn't make you better.