Wednesday, October 14, 2009

Google & MVPs

For those who don't know who Jon Skeet is: he is an outstanding contributor to the C# community, from his daily answering of questions on Stack Overflow to his many excellent articles and books. Jon recently posted that his employer has asked him to turn down the MVP award for this year.

What employer wouldn't want their employees to be recognized as community leaders in Microsoft technologies? Google, for one, as they are Jon's current employer. Jon won't comment on why, which leaves us to speculate about the reasons. Even if there is a sensible explanation, the silence leads me to suspect it is due to the competition between the two companies. That has definitely soured my opinion of Google and its management team.

Thursday, October 1, 2009

Just trust me!!

Today I was working with my intern. We are using DotNetNuke to run a marketing site so that our product team can go in and tweak things. We have a custom workflow in there, and a requirement to save the results of the workflow into our own table so we can report on it later.

The workflow component has some facilities to do this, so he creates a table and a stored proc, and gives it a shot. He is instantly greeted with a lovely error in the front end: Invalid Syntax Near xyz.

I tell him that's not good, as it indicates the potential for a SQL injection attack. Fast forward three hours, and he is arguing with me that it's safe, that it can't be injected, and demanding I prove it can be done.

Not wanting to take the time or effort to figure out the magic sequence, as this isn't something I do every day (contrary to popular belief, I don't sit around trying to hack), I tell him to trust me: parsing errors are easy to inject into. He doesn't buy it, and I can see he is going to be stubborn until I prove him wrong.

Fast forward ten minutes, and I found that injecting with this forms component was as simple as entering a string like the following for the last field:

injectComing' select * from aspnet_users--
Watching with Profiler, we quickly saw three key events. First was a Batch Starting event which looked something like this:
exec myProc '','','','injectComing' select * from aspnet_users--';
Second was a SQL Stmt Starting event:
exec myProc '','','','injectComing'
Third was another SQL Stmt Starting event:
select * from aspnet_users--';
Needless to say, he is now working on a custom module so we can execute the SQL as a parameterized query and avoid all this headache. Moral of the story? While you shouldn't blindly trust everyone, you should trust your boss, who has ten more years of experience. And if you don't, you try to prove him wrong; you don't demand your boss prove himself.
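As a sketch of the fix (the proc name, parameter name, and sizes here are hypothetical, not the actual module's), a parameterized call keeps the user's input out of the parsed SQL text entirely:

```csharp
using System.Data;
using System.Data.SqlClient;

// Hypothetical names; the real proc and fields differ.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("myProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    // The value travels as data, never concatenated into the SQL text,
    // so "injectComing' select * from aspnet_users--" is just a string.
    cmd.Parameters.Add("@lastField", SqlDbType.NVarChar, 256).Value = userInput;

    conn.Open();
    cmd.ExecuteNonQuery();
}
```

(connectionString and userInput are assumed to be in scope.)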

Gahh Interns!

Saturday, September 26, 2009

Responsibility of Third-Party Controls & Blacklists

My intern just sent me a screenshot of a CAPTCHA from a site we are working on. The CAPTCHA is generated by a third-party control which we have no control over.

If you are a developer of third-party controls sold to professional companies, please take the effort to implement a blacklist of letter sequences that should not be allowed in CAPTCHAs. What are the odds of this happening? Probably pretty small, but had we lost a sale because of this...

Tuesday, September 8, 2009

Telerik Gives Back to the Community

While using Stack Overflow, a great site for asking questions about problems software developers face, I noticed an advertisement from Telerik. They have decided to give any SO user with over 10,000 reputation points a free developer license to their control suite.

It is great to see a company recognize the value a site such as Stack Overflow provides, and provide some recognition to the people who spend their time helping others by answering questions. Thanks again!

Wednesday, August 26, 2009

Duration in RPC:Completed is Misleading

I discovered something today while troubleshooting a long-running query. We have a specific dialog in our application which, in a specific case, takes 60 seconds to open. I fire up SQL Profiler and click run, just to see the standard events and verify whether this is a DB issue or not.

I find that there is a specific query which does take 60 seconds to run according to the RPC:Completed event (we are using SQL Server 2005). So I pull that query out and run it inside of SSMS; it completes in under a second. These are the issues I hate having to troubleshoot. The first thing is to rule out our web application, so I create a little .NET Windows Forms app and basically run this:


using (var reader = sqlCmd.ExecuteReader())
{
    int i = 0;
    while (reader.Read())
    {
        i++;
    }
}

The query runs fine, no issue. Now, we are actually using nHibernate, so I wanted to simulate us processing the result set. I add a small Thread.Sleep(5) inside the read loop (my result set has 1500 items in it, so a small sleep duration is plenty). I run the query and wait about eight seconds.
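The simulation described above might look like this (a sketch; sqlCmd is assumed to be a prepared SqlCommand):

```csharp
using (var reader = sqlCmd.ExecuteReader())
{
    int i = 0;
    while (reader.Read())
    {
        i++;
        // Simulate per-row processing cost, as nHibernate hydrating
        // entities would incur: 1500 rows * 5 ms ≈ 7.5 seconds total.
        System.Threading.Thread.Sleep(5);
    }
}
```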

Then I go and look at the SQL Profiler RPC:Completed (and Stmt:Completed) events. The duration for the events is 8 seconds. Now I am asking myself whether I would have opened a ticket with Microsoft and burned a support incident for an nHibernate issue.

The moral of the story? Don't always trust what you think the tool is measuring. And if you're interested: when I ran my queries loading the entities through nHibernate with the Windows Forms app, the query again completed extremely fast. So I am still hunting this one down.


Update: I found and resolved the issue. It wasn't an issue with SQL but a configuration issue which caused the system to try to log to an invalid file thousands of times, which slowed the system down.

This is very eye-opening as one of my fundamental assumptions about SQL Profiler and how to interpret the results has been crushed.

Tuesday, July 14, 2009

Turning off ASP.Net's Unique ID Generation

One of the things I am not a fan of (though I understand why it exists) is the unique ID generation for server-side controls in ASP.Net. The unique ID generation allows the framework to ensure that all controls have a unique client ID, allowing it to avoid collisions.

While this is great, it does add a lot of complexity as you start to move away from the traditional ASP.Net Web Forms model. This complexity is one of the reasons the feature doesn't exist in ASP.Net MVC. So what are some of the issues with the unique ID generation?

  • If you have JavaScript in your pages, the controls are given a somewhat dynamic name, which leaves you with two options: either dynamically generate your JavaScript, or assume that you won't be moving the control and hard-code the ID. The ID is based on the concept of naming containers, which is beyond the scope of this post; but essentially, as long as you don't introduce new containers into the hierarchy, the ID will always be the same.
  • If you want to post a form to a non-Web Forms page (or a different page than the generating page) and want to make use of the form fields, the field names are tied to their unique IDs.

The first issue I have always solved either by generating the JavaScript dynamically in my user controls (a user control acts as a naming container, and thus all controls inside it will have their unique IDs based on the user control's ID), or by just hard-coding the value and hoping I don't move it.
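As a sketch of the dynamic-generation approach (the control name printDescription is just an illustration), the markup can interpolate the framework-generated client ID instead of hard-coding it:

```aspx
<script type="text/javascript">
    // ClientID yields the full generated id (e.g. "printOptions_printDescription"),
    // so the script keeps working if the control moves between naming containers.
    var printDesc = document.getElementById('<%= printDescription.ClientID %>');
</script>
```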

During my latest round of enhancements to my current project I ran into a scenario where I felt I had to disable this unique id generation and couldn't find a workaround that didn't involve rewriting a ton of code. Let me give you some background on the scenario.

This site deals with types of documents and allows the user to print (or email) various PDF files based on the document. There are currently roughly a dozen different printouts and emails which are generated, and the number is always growing. In order to streamline my code and simplify adding new types of printouts, I created a mechanism which essentially works as follows:

  • On the document's details page we have a second form; this form contains all the printing and email options. There are quite a few variations of options, and certain options depend on other options being set.
  • We preserve the last options used to print the document, which are set at render time for the page.
  • The form is smart enough to know which type of printout it is doing and will either open a new window or reuse the current window when doing a post.
  • This form posts to an ASP.Net handler which, based on the type of printout, directs the request to what I call a print handler. The print handler will either execute directly (if it's sending an email, for example), render a new page to let the user see a preview, or just generate a PDF for the user.
  • When generating PDFs, we generate the HTML and then use a third-party tool to convert it to PDF.
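A rough sketch of the dispatch step described above (every type and member name here is hypothetical, not the actual project's code):

```csharp
using System.Web;

// Hypothetical sketch: the posted form carries the printout type; we look
// up the matching print handler and let it email, preview, or emit a PDF.
public class PrintDispatchHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string printType = context.Request.Form["printType"];
        IPrintHandler handler = PrintHandlerFactory.Create(printType);
        handler.Execute(context);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```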

It's actually a pretty slick mechanism and has streamlined a lot of the code. My new requirement is to enable printing from other pages than the details page. At this point I have two choices: I can either duplicate a subset of my code to allow printing for just the document types we need, or I can try to find a way to consolidate this printing framework and make it reusable.

I opted for the second option, of course. The first thing was to move all the markup, script, and code-behind out of the details page into its own user control. A quick test showed that some basic functionality was there, so I dropped the control onto my new page and things were looking good. This might be possible, I thought.

Then I started doing deeper, more thorough testing (I wish I had automated tests for this UI stuff, but ah well, it's on my ever-growing to-do list). I quickly found that all the IDs had changed, which broke all the JavaScript that enables or disables options based on what you've selected.

Well, we can fix this one easily: I pick one of the controls and modify the JavaScript so it is dynamically generated, and things are looking good again. Then I go and print the most complex of the documents, this time paying close attention, and I notice that none of the options that were set were being honored.

The issue is that the form field names changed: in the old version I was expecting to see a field with an ID of "printDescription"; now it comes out as "printOptions_printDescription". Furthermore, when I drop it into a content page, which happens to use a master page which is itself nested, the ID becomes a long mess. So how can I tell ASP.Net not to auto-generate the IDs? I could write my own custom server-side controls, but that's a lot of work. I could get rid of the server-side setting of options, but to do that I would have to either generate all the HTML dynamically, use code in the markup, or make a second call to get a JSON representation of the print options and set them on the client.

So let's get back to the original point of this post: I am going to turn off the unique ID generation. The trick is that this works on a naming container whose properties you can override. A user control works perfectly, and since a user control doesn't render any markup by default, it doesn't change the structure of the page.

I have found that overriding the following properties stops the control's child controls from picking up the container's UniqueID and appending it:

    public override string UniqueID
    {
        get { return ""; }
    }

    public override Control NamingContainer
    {
        get { return null; }
    }


I have not tested this beyond my simple usage scenario. I do not know what would happen if these controls were used in a normal postback scenario, or whether they would be able to pick up their state from the form fields.

Wednesday, June 24, 2009

I need a vacation

So this morning I was thinking about some basic things we can do to speed up the download of our pages, including combining and minifying our JS/CSS files. I decided to go look at YUI Compressor, which is a minifier from Yahoo.

I think to myself neat let me bookmark that and come back to it.

I guess I had this same thought the last time I thought about the subject. Ugh!!!

Monday, June 8, 2009

How not to use a user control

I am in the middle of a nice refactoring session, changing a setup dialog which allowed you to edit four types of email templates. Since the data was the same, I had a user control which was placed on four different tabs.

We are going to triple the number of email types that can be configured. So now my simple dialog of four tabs explodes into twelve, and while I do not claim to be a UX expert by any means, and am not able to design the latest, slickest, easiest-to-use interface, I can recognize a bad UI very easily.

I have decided to use a drop-down list paradigm, where the user selects the template they are going to work with from the drop-down, and then we present them that information.

During my refactoring I stripped out my tabs and added the drop-down. I removed all but one instance of my user control. I have everything working, except I'd like the drop-down to line up better with the contents of the user control.

The HTML that gets rendered looks like this:

    <table>
        <tr>
            <td>Select a Template:</td>
            <td><select>...</select></td>
        </tr>
    </table>

    <!-- This table comes from my user control -->
    <table>
        <tr>
            <td>Item 1:</td>
            <td><input /></td>
        </tr>
    </table>
The problem is that the "Select a Template" label doesn't line up with the rest of the labels, because it sits in a different table. This led me to the aha moment that led to this post. I do not recommend doing this in production code; I think it makes the code very unreadable.

Nothing says that the user control must render a valid HTML fragment. Using that to my advantage, I modified the code so my ASPX page looks like:




    <table>
        <tr>
            <td>Select A Template:</td>
            <td>
                <asp:DropDownList runat="server" ID="DropDownList1" AutoPostBack="true">
                </asp:DropDownList>
            </td>
        </tr>

        <croixUser:emailConfig runat="server" />

This is basically my entire page. I never close the table in the page; I do that in the user control. While we shouldn't be able to do this, it makes sense that Microsoft hasn't wasted any energy blocking it. If you want to shoot your own foot off with convoluted code, go ahead.

Now I must be off to refactor that user control to /dev/null.

Wednesday, June 3, 2009

jQuery 1.3.2 & IntelliSense in Visual Studio 2008

After following the directions to get IntelliSense working with the latest version of jQuery, I ran into some issues.

First I would get this error:

Error updating JScript IntelliSense: D:\Source\...\JS\jquery\jquery-1.3.2.js: Object doesn't support this property or method @ 18:9345

The first problem is that the file which has the enhanced comments for IntelliSense is named jquery-1.3.2-vsdoc2.js, while Visual Studio is looking for *-vsdoc.js. Make sure you rename the file to jquery-1.3.2-vsdoc.js.

After renaming the file I was greeted with this lovely error:

Error updating JScript IntelliSense: D:\Source\...\JS\jquery\jquery-1.3.2-vsdoc.js: 'div.childNodes' is null or not an object @ 1487:1

My understanding is that the code inside the methods doesn't really matter (assuming it's not changing the object definitions, of course). Since this code will never be run, I simply removed line 1488, which looks like:

elem = jQuery.makeArray(div.childNodes);

And now we have IntelliSense working with jQuery 1.3.2.


I've noticed a lot of traffic on this post recently. If this post has helped resolve your issue (or not), leave a comment and let others know.

Thank you and good luck


Thursday, May 21, 2009

One more VSS bites the dust

I am happy to announce that I have just finished my second VSS-to-TFS migration, and I hope and pray that I will never be forced to open up VSS again. Yahooo!

Tuesday, May 19, 2009

Rendering ComponentArt's Splitter in Firefox

I've been working a little to add support for Firefox to my application, which has only supported IE6 and better. I am trying to be proactive, knowing one day my boss will ask for this, and it is actually helping to clean up our markup quite a bit.

Along this journey, I noticed a splitter bar which has a tree view in one pane and a tree view in a second pane; it is a two-pane vertical split. In IE6, 7, and 8 (even in standards mode), the panes rendered correctly. When viewed in Firefox, the first pane was shown; however, the second pane wasn't rendered. It looked like the splitter container width was being set to the width of the first pane, which caused that pane to take up the entire container.

After working with Component Art's support team, we noticed that when you resize the window, the pane would then render correctly. With their help, we found the following workaround to force the splitters to render correctly in Firefox.

function splitter_load() {

    if (jQuery.browser.mozilla) {

        // Pin the splitter to an explicit pixel width so Firefox
        // lays out the second pane. 'Splitter1' stands in for the
        // splitter's actual client id.
        document.getElementById('Splitter1').style.width =

              $("#divBidSummary").width() + "px";

    }
}
myContainer is a div whose width I have set to 99% of its parent. This allows the splitter to grow and shrink as the page is resized. This sample uses jQuery, which is an excellent JavaScript library.

Note: Thanks go out to Hwan for helping to track down and develop a workaround for this issue.

Automatic text selection with Component Art's Tree View

Over the last year I have been using Component Art's Web UI suite, and I have a love-hate relationship with their controls. I love how quick and easy it is to get some advanced UI functionality into your site. I hate their documentation, and that some features which I would consider basic seem to be missing. I also want to take a moment to say they have a great tech support team, who have helped me quite a bit over the last year. Overall, I would recommend the controls.

In a recent project I had a tree view in which the user could create their own nested hierarchical tree and move nodes around; it worked very similarly to the experience you get with Windows Explorer. One minor feature was missing that I wanted (and some of my test users also asked for): when you go to edit the name of a node, select the text for the user.

This is a basic UI paradigm: if I go to edit something, it automatically selects itself. When you look at the TreeView control you will see there is no event that indicates a node is being edited; however, if you don't mind mucking around with a third-party object (something that should not be done lightly, as Component Art makes no promises this will work in future versions), there is a way to do this.

If you examine the TreeViewNode object you will find a method called Edit, which is called when a node is supposed to go into edit mode. By overriding this method we can provide our own implementation. The following code copies the Edit function into a new function, EditOriginal. It then creates a new Edit function which calls the original, and then uses jQuery to select an input which is a child of an element with the TreeView class. If you're not using jQuery, or if your page structure is different, you may need to modify that code to find the input element and select it.

        ComponentArt_TreeViewNode.prototype.EditOriginal = ComponentArt_TreeViewNode.prototype.Edit;

        ComponentArt_TreeViewNode.prototype.Edit = function() {

            this.EditOriginal();

            $('.TreeView input').select();

        };

There is an EditNodeCSS property which I was unable to get a selector working with. If anyone does get a selection working off that, let me know. I'd love to know where in the DOM it's used; whenever I try to look at it with my tools, the node leaves edit mode.

Note: I want to thank Stephen Hatcher of Component Art's tech support team for his assistance in coming up with this solution.

Friday, May 15, 2009

Centering Box Elements in HTML with CSS

For most of my career I have been a "backend" guy, working on the database and building services for consumption by other systems. Over the last couple of years I have had the pleasure of becoming a "frontend" guy, working on various web sites.

So far the site I have been working on only targets IE, so there hasn't been a strong motivation to add support for Firefox; however, with IE8 being much more standards compliant, I have a personal desire to get the site to a point where I can turn off IE7 compatibility mode.

Today I was a little stumped when I was working on markup similar to the following:

    <div style="text-align: center">
        <table>
            <tr>
                <td>I am Centered</td>
            </tr>
        </table>
    </div>
In IE8 with compatibility mode this renders as I expect: the table is centered within the div. However, in Firefox and IE8 standards mode the table is left-aligned.

I've always questioned why I was using text-align to align elements, but it always worked, so I just shrugged and moved on. It turns out IE wasn't honoring the rule that text-align should only align inline content; it was using it to align all elements. The proper way to do this is:


        <table style="margin-left:auto;margin-right:auto">
            <tr>
                <td>I am Centered</td>
            </tr>
        </table>
This is so much better. I hated having to use text-align: center and then go through and set all the text back to align left. I have tested this in Firefox 3.0.7 and IE8 (IE7 and standards mode). It feels good to finally learn the right way to do something.
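For reference, the two margin declarations collapse into the common shorthand (note this also zeroes the top and bottom margins):

```css
/* auto left/right margins center a block element within its container */
table.centered { margin: 0 auto; }
```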

Tuesday, May 5, 2009

Dynamic Expression

Scott Gu has a good article about a great sample component called Dynamic LINQ. This is a very powerful addition to your LINQ toolkit which lets you create expressions based on strings. So imagine that you want to add sorting to a grid which is bound to a List. One way would be to have a switch statement for each option (yes, you can throw up, I'll wait).

Another way would be to use the dynamic query library. I've always wondered how difficult it would be to create an Expression dynamically. This would be a nice addition to my audit logger, allowing the configuration of an audit category to exist outside of the code.

Much of this code came from this article; I simply went and added the glue:

        public static IEnumerable<T> Sort<T>(this IEnumerable<T> source,
            string sortExpression, bool desc)
        {
            var param = Expression.Parameter(typeof(T), string.Empty);

            var fields = sortExpression.Split('.');
            Expression property = null;
            Expression parentParam = param;
            foreach (var field in fields)
            {
                property = Expression.Property(parentParam, field);
                parentParam = property;
            }

            var sortLambda = Expression.Lambda<Func<T, object>>(
                Expression.Convert(property, typeof(object)),
                param);

            if (desc)
            {
                return source.AsQueryable<T>().OrderByDescending<T, object>(sortLambda);
            }

            return source.AsQueryable<T>().OrderBy<T, object>(sortLambda);
        }


Basically this will give you an expression which returns the value of a property. It supports nested property/field invocation, e.g. myObject.MyField.MyProperty. It works by creating an expression which represents the parameter being passed into the lambda.

Then it builds an expression tree by chaining new expressions together for each token (myObject, MyField, etc.). Finally it creates a lambda which can be executed.
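A quick usage sketch of the extension above (the Person and Address types are invented for illustration):

```csharp
// Hypothetical model types for illustration.
public class Address { public string City { get; set; } }
public class Person
{
    public string Name { get; set; }
    public Address Address { get; set; }
}

// Sort by a nested property path supplied as a string at runtime:
IEnumerable<Person> people = LoadPeople(); // assumed to be in scope
var byCity     = people.Sort("Address.City", false);
var byNameDesc = people.Sort("Name", true);
```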

Wednesday, April 29, 2009


Recently I had the pleasure of helping a former employer of mine resolve a couple of minor issues that had come up since my departure. During this time I had a revelation about how we treat our source code today, and how we should treat it tomorrow.

When I worked for this company I became the sole developer and was able to define my own coding standards. Over time all of the source files were updated as I made changes to them, using the Reformat Document tool in Visual Studio.

Since I left, a new developer has taken ownership of all the projects I was working on. Said developer prefers a different style for his code. For example, I prefer all braces to start on a new line, whereas he prefers opening braces on the parent line.

What does this have to do with how we treat source code? Well, imagine this scenario: I open up a large class file and remove a single using statement from the top of the file. Upon saving the file, Visual Studio applies the formatting rules I defined. This causes my one-line change to cascade into hundreds of changes.

The question I pose, then, is why do our tools treat non-significant characters such as extra whitespace as significant when comparing two versions of the code? Shouldn't our development tools store the source code in as compact a format as possible and allow us to define our own personal views of it? Two developers on the same team should be able to read the same source code formatted however they like.

Perhaps Visual Studio should be written as an MVC and allow us to define our own views separate from the source code.

Writing this reminded me of when I worked on VB6 applications and one developer would check out a file and the casing of all the variables would change. What a nightmare. Again, source code should be treated as data rendered into a view by the IDE.

Tuesday, April 28, 2009

Rendering to Xml with ASP.Net MVC


Warning: This is my first post using Windows Live Writer so if things don’t come out right I apologize.

I have been working lately on using the ASP.Net MVC platform as a REST-based endpoint. My current consumer is looking for XML documents of our model; however, in the future I'd love to move our web site over as well. I have been playing around with a couple of different ways to render XML content from a view.

I have seen some approaches on the web which use an automatic serialization mechanism to convert your model into XML or JSON. I feel that this should not be done automatically; you should be able to customize the rendering however you need. This has led me to try out three different options to date.

Option 1 Serializing a DTO

The first approach I tried was to take my DTO and get it into XML the quickest, easiest way possible: the DataContractSerializer. This broke down a little when I discovered I actually needed two DTOs. For this method I used a helper method in a code-behind file, so my view looks like:

<%@ Page Language="C#" Inherits="MyViewToXml" CodeBehind="MyViewToXml.aspx.cs"  %> 
<%this.RenderToXml(); %>

Now if we look at RenderToXml we can see the magic here:

Response.ContentType = "text/xml";

var dcs = new DataContractSerializer(typeof(List<DTO1>));
dcs.WriteObject(Response.OutputStream, this.Model);

var dcs2 = new DataContractSerializer(typeof(IList<DTO2>));
IList<DTO2> bfs = (IList<DTO2>)this.ViewData["DTO2"];

dcs2.WriteObject(Response.OutputStream, bfs);


I like this approach in that it was quick and easy and gave me full control, but it doesn't sit well with me. It feels like I am grinding against the purpose of having a view. So let's look at option 2.

Option 2 Embracing MVC View

My next shot was to serialize a much more complex object. In my actual case I don't have a DTO defined, and I need even more control over the XML being generated. Serialization by the framework won't cut it, and I wasn't going to go back ten years to the days of building XML via strings, or even via a DOM. So what does that leave me with?

<Order Id="<%=this.Model.Id%>" Number="<%=this.Model.Number%>">

  <Customer Company="<%=this.Model.CompanyName %>">
    <Address Line1="<%=this.Model.Address.Line%>"
             City="<%=this.Model.Address.City %>"
             State="<%=this.Model.Address.State %>"/>
  </Customer>

  <ShipTo Name="<%=this.Model.Location.Name %>">
    <%if (!this.Model.ShipTo.Equals(this.Model.Contact.Address)) { %>
    <Address City="<%=this.Model.ShipTo.City %>"
             State="<%=this.Model.ShipTo.State %>" />
    <%} %>
  </ShipTo>

  <Items GrandTotal="<%=this.Model.CalculateGrandTotal()%>"
         Overridden="<%=this.Model.IsGrandTotalOverridden %>">
    <%foreach(var item in this.Model.Items) {%>
    <Item Name="<%=item.Name%>" Total="<%=item.Total%>" />
    <%} %>
  </Items>
</Order>

Here we are embracing the view and generating our XML just as if we were rendering an XHTML view of the domain. This is neat; however, it is a little verbose, so I started to question what else is out there.

Option 3 Finding Alternate View Engines

ASP.Net MVC is very extensible: if you don't like the view engine, go find a new one or write your own. I ran across a port of HAML called NHAML. It can simplify the syntax; however, NHAML's tooling is not there. If you want IntelliSense or color highlighting, it's just not there.

Furthermore, it treats whitespace as significant. For example, to nest elements you would need to do:
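A minimal illustrative HAML fragment (the element names are invented):

```haml
%order
  %customer
    %address
```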


Child elements are indented by two spaces. While this frees you from worrying about closing all your tags, it's a little counterintuitive. There are other view engines out there which I am looking forward to putting through their paces.

Thursday, April 2, 2009

Setting up SSRS on server with multiple sites

I had to set up SSRS on a server today which hosts about six or seven different sites. I went through and configured SSRS, bound the virtual directories to a free IP address, fired up the browser, and kaboom!

Firing up Google was very frustrating, as all the solutions I found told me to check permissions. The issue here is most likely due to SSRS defaulting to using http://localhost to access the web services. (If anyone from Microsoft, or anyone who knows who's responsible for this piece of code, could add a bit more detail, that'd be great.) So how do you tell SSRS to use a different server?

There is a config file at %SQL Install Folder%\MSSQL.*\Reporting Services\ReportManager\RSWebApplication.config. Inside this file you will see some XML which looks like:
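The element in question looks roughly like this (a sketch from memory; check your own file for the exact values):

```xml
<UI>
    <ReportServerUrl></ReportServerUrl>
    <ReportServerVirtualDirectory>ReportServer</ReportServerVirtualDirectory>
</UI>
```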


Perfect, right? So I enter the URL for my site, fire up Report Manager and... kaboom! This time the site doesn't even load, and we get the following error in our log files:

w3wp!diagnostic!6!4/2/2009-12:03:16:: Error loading configuration file: The configuration file contains an element that is not valid. The ReportServerUrl element is not a configuration file

This is a horrible error message and doesn't really indicate what the issue is. It turns out that you can have either ReportServerUrl or ReportServerVirtualDirectory, but not both. Enter the fully qualified URL for ReportServerUrl and remove the ReportServerVirtualDirectory node. Now fire up Report Manager in your favorite web browser and... it works!

I could have saved many hours and much Advil had the developers done proper error handling. Ah well; hopefully you find this article before you've wasted too many hours.

Thursday, March 19, 2009

nHibernate's poor validation of mapping files

One thing I have come to notice as I've been using nHibernate is that it does a very poor job of validating that mapping files make sense. One could argue that I shouldn't write poor, shoddy mapping files in the first place. I would argue instead that it should tell me when I've written one.

Here's the scenario I just faced. I was working with an entity which has a one-to-one relationship with some legacy code which doesn't use nHibernate. When I first mapped this I had an identity column on the table. After thinking about it, I wanted to enforce the one-to-one, so I modified my schema and mapping file to use the FK field as the identity. I went from this:

<class name="MyOptions" lazy="false">

    <id column="id" name="Id">
        <generator class="native"/>
    </id>

    <property name="FkId"/>

</class>

To this:

<class name="MyOptions" lazy="false">

    <id column="FkId" name="FkId" unsaved-value="any">
        <generator class="assigned"/>
    </id>

    <property name="FkId"/>

</class>

I fire this up, and we have no initialization issues; I am also able to read data in without an issue. But when I go to save the data, I get an IndexOutOfRange exception when nHibernate tries to access a parameter.

Do you see the issue? I left the property for FkId in the mapping file. I needed to remove it since it is now the id, and leaving it in caused parts of NHibernate to break. Interestingly enough, when I look at the insert statement it only had the FkId field being inserted once. So part of NHibernate handles my shoddy mapping file, but other parts do not.
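For clarity, the corrected mapping simply drops the duplicate property, leaving FkId mapped only as the id:

```xml
<class name="MyOptions" lazy="false">

    <id column="FkId" name="FkId" unsaved-value="any">
        <generator class="assigned"/>
    </id>

</class>
```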

Anyway, NHibernate is still an awesome tool, and hats off to the people who devote their time without pay to such a great project.

Dynamically call a Generic method using Reflection

While working on a request for the database logging add-on I discussed previously, I came across an interesting problem.

I wanted to be able to define the name of the ExpressionGroup, the stored procedure, and the connection to use when logging to the database in a config file. I figured this was the first step in enabling configuration of the system. In order to do this I needed to create an instance of Expression<T>.

The expression cache already has a CreateGroup<T>() method which I wanted to use. So how can I call this method if I don't know the type of T until runtime? It's actually very easy.

Essentially you want to get a reference to the method via reflection; there are lots of ways to do this. Once you have a MethodInfo instance, you can call MakeGenericMethod, which takes a parameter array of Type objects. It will substitute these types in place of the generic type parameter T.

At this point you can call the method just like any other method via reflection, using the Invoke() method on MethodInfo. The code looks like this:

var meth = this.GetType().GetMethod("CreateGroup");

var genericMeth = meth.MakeGenericMethod(Type.GetType(grpConfig.AuditLogEntryType));

IAuditLoggerExpressionGroup expGroup = (IAuditLoggerExpressionGroup)genericMeth.Invoke(this, new object[] { grpConfig.Name, grpConfig.ConnectionName, grpConfig.Procedure });


That's all there is to it.
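To see the whole technique end to end, here is a minimal, self-contained sketch; the Describe method and the type names are illustrative, not from the original add-on:

```csharp
using System;
using System.Reflection;

class Demo
{
    // A generic method whose type argument is only known at runtime.
    public static string Describe<T>(string name)
    {
        return name + " is a " + typeof(T).Name;
    }

    static void Main()
    {
        // Suppose the type name arrives as a string from configuration.
        Type runtimeType = Type.GetType("System.Int32");

        // Find the open generic method, then close it over the runtime type.
        MethodInfo open = typeof(Demo).GetMethod("Describe");
        MethodInfo closed = open.MakeGenericMethod(runtimeType);

        // Invoke like any other reflected method (null target: static method).
        string result = (string)closed.Invoke(null, new object[] { "Age" });
        Console.WriteLine(result); // prints "Age is a Int32"
    }
}
```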

Wednesday, March 11, 2009

Extension Methods & Null Objects

I ran into an issue that I had learned about a while ago, but the code that I was working on predated that knowledge. I find it interesting enough to share. I love the various predicate/action methods on List<T> that allow you to interact with a list by passing in a delegate.

I've always been a little disappointed that these methods aren't part of the IList<T> interface; however, with extension methods we can rectify this. I defined a new method in a static class as:

        public static T Find<T>(this IList<T> col, Predicate<T> match)
        {
            if (match == null)
            {
                throw new ArgumentNullException("match");
            }

            for (int i = 0; i < col.Count; i++)
            {
                if (match(col[i]))
                {
                    return col[i];
                }
            }

            return default(T);
        }

This had been working fine for almost a year, until today when I got a NullReferenceException on the for loop. The interesting thing about extension methods is that they are nothing more than static methods that the compiler does some magic with.

To prove this, go ahead and create a simple console application. Add an extension method on your favorite type (I chose string), and call it from your Main method. Here's what I used:

    static class Extensions
    {
        public static void ToMe(this string value)
        {
            if (value == null)
            {
                Console.WriteLine("Null value");
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            string s = null;
            s.ToMe();
        }
    }

After you compile this, go ahead and fire up Reflector or ILDASM and have a look at the IL. In my case I had the following instruction:
call void ConsoleApplication4.Extensions::ToMe(string)

Since all we are doing is calling a static method and passing in the reference the extension method was called off of, it is perfectly legal to call extension methods on a null reference.

Now, in my opinion relying on this is wrong; it is a side effect of the implementation, and we should not depend upon this behavior. The correct code should look like:

        public static T Find<T>(this IList<T> col, Predicate<T> match)
        {
            if (match == null)
            {
                throw new ArgumentNullException("match");
            }

            if (col == null)
            {
                throw new NullReferenceException();
            }

            for (int i = 0; i < col.Count; i++)
            {
                if (match(col[i]))
                {
                    return col[i];
                }
            }

            return default(T);
        }


Monday, March 9, 2009

Benefits of obfuscating and pointless licensing systems

Recently I was faced with a difficult problem. A component that a contractor chose for a site which I now maintain is licensed per domain name. The site is for a relatively young company which hadn't chosen how it was going to brand the software it was selling.

After they had chosen the domain they wanted, we switched the site over, and the component, as we knew it would, broke. We had emailed the control's manufacturer three times over a three-month period, and they never replied. So I was stuck with the following choices: buy another license through their channel, replace the control, or find a plan C.

Considering they hadn't responded to any of my emails, I was reluctant to send them any more money. The control was used extensively in a site that isn't worth the time to rework, so that left us with plan C.

I have to state that plan C is a temporary fix until they respond to our emails. So I started digging around in the compiled assemblies wondering if there was a way around this licensing issue (again, as a temporary fix).

What I saw shocked me. First, the assembly wasn't obfuscated. Had it been obfuscated, I probably would have given up, as I only wanted to spend 20 minutes on this at most. Second, the licensing scheme basically made a call to an external assembly which returned a decrypted string that the caller then matched against the URL, or against the host name if the string was properly formatted.

This was easy to work around. All I had to do was create a new assembly which matched their signature and returned a very simple string; that was the only thing this assembly did.

This is where obfuscating becomes important. It is not foolproof, but nothing is: if a computer can understand the code, someone somewhere can understand it too. You simply want to raise the bar high enough that it is not worth someone's time to break through.

I'm also confused why they shipped this assembly separately. Had it been compiled in with the rest of their code, it would have been riskier for me to mess around with it. The lesson here is to either protect your code a little bit, or at least be competent enough to respond to customers' emails.

Sunday, March 1, 2009

MVC Model Binder

Model binders are a powerful extension point in ASP.NET MVC. They allow you to define your own class which is responsible for creating the parameters for your controller actions. Scott Hanselman had a good example showing how to decouple a controller from HttpContext's User property.

Recently I've started a new proof-of-concept site based upon REST. My goal was to build a single web site which would expose its resources as addressable items; that is, the URL defines the resource. The site should also be able to serve multiple clients which want different representations of the same resource.

For example, assume we have an automobile parts catalog. We might have the browser point to a URL which would render an HTML page of all the Ford oil filters. Now let us say we want to implement an AJAX callback on the page, so that as a user mouses over a specific part, the site sends a request with the intention of gathering additional data about the part to display to the user. For this request we don't want an HTML representation of the part; we want the part rendered as JSON or XML, which lets us programmatically access the part information.

I have grown tired of having to support multiple sites/pages: one that renders the HTML and another that provides an API and exposes the data. The natural separation of concerns that the MVC model gives us makes this an ideal platform for building a single web site which can serve HTML, XML, JSON, or any other content we could imagine (for example, a PDF version of the HTML page for offline usage).

To do this imagine we have the following controller defined:

public class PartController : Controller
{
    public ActionResult List(string oem)
    {
        // ...
    }

    public ActionResult Detail(string oem, string partnumber)
    {
        // ...
    }
}

We need a way to determine how the client wants the data returned. We could put that in the URL; however, that doesn't feel very RESTful. A key idea of a REST implementation is leveraging the capabilities of the HTTP protocol to describe the resource you're requesting. One of the HTTP headers is Accept.
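As a sketch, a client asking for the JSON representation might send a raw request like the following (the host and path are purely illustrative):

```
GET /parts/ford/oilfilters HTTP/1.1
Host: www.example.com
Accept: text/json
```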

The Accept header allows the client to specify the content type it is requesting. So a request could indicate that it wants text/json or text/xml. We could then put the following code in our controller:

        public ActionResult List(string oem)
        {
            //Get the Model for OEM.
            switch (HttpContext.Request.AcceptTypes[0])
            {
                case "*/*":
                case "text/html":
                    return View("PartList.aspx");
                case "text/json":
                    return View("PartList.json.aspx");
                default:
                    //Fall back to HTML for anything we don't recognize.
                    return View("PartList.aspx");
            }
        }
Warning: I have not finished investigating how ASP.NET handles the Accept header, and my switch statement might not work as-is; however, this example highlights the flexibility of ASP.NET MVC.

So this works great, except that we want to unit test our code to ensure we are returning the right view for each request, and our controller is bound to HttpContext, which is difficult to unit test. So the question is: how do we decouple our controller from HttpContext? IModelBinders are the answer.

We can define a model binder by implementing the IModelBinder interface. The interface requires a single method that takes a couple of parameters providing context about the request, and returns an object.

    public class ContentTypeModelBinder : IModelBinder
    {
        #region IModelBinder Members

        public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        {
            if (controllerContext == null)
            {
                throw new ArgumentNullException("controllerContext");
            }

            if (bindingContext == null)
            {
                throw new ArgumentNullException("bindingContext");
            }

            return controllerContext.HttpContext.Request.AcceptTypes[0];
        }

        #endregion
    }

The return value from BindModel will be passed in as a parameter to the controller action. This allows us to change our controller definition to:

        public ActionResult List(string oem, [ModelBinder(typeof(ContentTypeModelBinder))]string contentType)
        {
            //Get the Model for OEM.
            switch (contentType)
            {
                case "*/*":
                case "text/html":
                    return View("PartList.aspx");
                case "text/json":
                    return View("PartList.json.aspx");
                default:
                    //Fall back to HTML for anything we don't recognize.
                    return View("PartList.aspx");
            }
        }
The magic here is the ModelBinder attribute applied to the contentType parameter. This tells the ASP.NET MVC runtime to use our custom IModelBinder to provide the value for the contentType parameter.

The thing I don't like about this is that we have to use an attribute in our controller. It is possible to indicate that all instances of a specific type use a specific binder, but I will leave that for a different article.
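As a rough sketch of that alternative (the AcceptType wrapper is hypothetical, and the binder would need to return one), the binder can be registered once at application startup so no attribute is needed on the action:

```csharp
// Hypothetical wrapper type, so we don't hijack model binding for every string parameter.
public class AcceptType
{
    public string Value { get; set; }
}

// In Global.asax.cs, assuming ContentTypeModelBinder is adjusted to return an AcceptType:
protected void Application_Start()
{
    ModelBinders.Binders.Add(typeof(AcceptType), new ContentTypeModelBinder());
}
```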


Wednesday, January 14, 2009

Rendering an ASP.Net Page twice???

Not too long ago I found myself in a peculiar situation which I will explain shortly. Essentially, I needed to render an ASP.NET page, modify an attribute or two, and then re-render it. I know you must be asking yourself why in the world you would want to do this. Let me explain first why I needed to, then we can look at why it doesn't work and how you can hack around the Page model to make it work.

The web application I've been working on is an LOB application, and printing forms is very important. Furthermore, these forms will be provided to our customers' customers, so we needed the utmost control over the form. IE doesn't give you many options for controlling the output; for example, using standard methods there is no way to tell IE not to print the URL on the page. The user can modify this setting, but relying upon the user to change their page settings on every print reduces the usability of the application.

What's a poor web developer to do? PDFs are a portable format and give us a lot of control. The challenge became: do we learn a whole new document model, or can we find a tool to convert HTML to PDF? We opted for converting HTML to PDF for a couple of reasons, including:
  1. "We" is actually "I", and I didn't want to waste any time learning a new document model. My current employer is a small start-up and we just don't have the time to waste.
  2. Many of our forms are displayed to the user in a preview mode. Had we not converted the HTML, we'd have to synchronize our changes in two places (again, the "we" is actually "I"...).

We create the PDFs by overriding the page's Render method. We call a method RenderHtml() which looks like:

protected virtual string RenderHtml(string baseUrl)
{
    StreamWriter sw = new StreamWriter(new MemoryStream());
    HtmlTextWriter writer = new HtmlTextWriter(sw);

    RemapImageUrl(baseUrl);

    //Render the page into our in-memory writer.
    base.Render(writer);
    writer.Flush();

    StreamReader sr = new StreamReader(sw.BaseStream);
    sr.BaseStream.Position = 0;
    return sr.ReadToEnd();
}

RemapImageUrl() provides absolute paths to the images in the document. This could also be done by setting a base URL; however, in my case I had to change the actual URLs to get some dynamic images. After this method runs we take the HTML, send it to an HTML-to-PDF converter, and then send the result to the client.
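RemapImageUrl is our own helper, so here is only a hypothetical sketch of what such a method might do, assuming the images sit in Image controls the page can walk:

```csharp
//Hypothetical sketch: walk the control tree and make relative image URLs absolute.
private void RemapImageUrl(string baseUrl, Control root)
{
    foreach (Control child in root.Controls)
    {
        Image img = child as Image;
        if (img != null && !img.ImageUrl.StartsWith("http"))
        {
            //Turn "~/images/logo.gif" into "http://host/images/logo.gif".
            img.ImageUrl = baseUrl.TrimEnd('/') + "/" + img.ImageUrl.TrimStart('~', '/');
        }

        RemapImageUrl(baseUrl, child); //Recurse into child controls.
    }
}
```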

This was working without any issues until, like all solutions, a new business requirement came in. We had a form that we sometimes wanted to print as a single copy, but in other cases we wanted to print two copies, the second identical to the first save for a watermark.

I had done something similar before by rendering smaller portions of a page independently, but in this case I had a GridView, which requires a form, and needless to say I was jumping through hurdles guessing at a magic combination of controls and method calls to make that happen.

The solution I decided upon was to render my page twice, create two PDF documents, then combine them. I know I could probably have done some string parsing, but the documents and format were very fluid at the time, and I simply needed a solution, quick.

I figured I could use roughly this code:

string firstDoc = RenderHtml();

EnableWatermark(); //This method turns the watermark on.

string secondDoc = RenderHtml();



After firing this up, I got an error indicating that a form was already on the page, and that there could only be one form. Opening Reflector to see what was going on made me cry as I realized what ASP.NET was doing. Now, I realize at this point I am far beyond what anyone would consider acceptable use... but let's look real quick at why it doesn't work.

Open up Reflector and look at the HtmlForm.RenderChildren method. It basically calls Page.OnFormRender, then Page.BeginFormRender, followed by rendering its children. It finishes by calling Page.EndFormRender and Page.OnFormPostRender. So let's start with Page.OnFormRender.

This method looks like:

internal void OnFormRender()
{
    if (this._fOnFormRenderCalled)
    {
        throw new HttpException(SR.GetString("Multiple_forms_not_allowed"));
    }

    this._fOnFormRenderCalled = true;
    this._inOnFormRender = true;
}

This is where our second call to the page's Render method errors. The check makes sense: it ensures there is only one form by tracking this at the page level in the _fOnFormRenderCalled variable. In OnFormPostRender the flag which tracks whether a form is actively being rendered is reset, but _fOnFormRenderCalled is not, which is correct while the page is still rendering. The problem is that this variable is never set back to false, even after the page has finished rendering.

So our solution is to manually reset this private variable using reflection:

FieldInfo fi = typeof(Page).GetField("_fOnFormRenderCalled", BindingFlags.NonPublic | BindingFlags.Instance);

if (fi == null)
{
    LogWriter.LogError("FieldInfo is null; verify _fOnFormRenderCalled still exists on the Page object.");
}
else
{
    fi.SetValue(this, false);
}

I highly recommend avoiding this, and if anyone has a better solution feel free to share it. I'd also write your code defensively. Well, I hope at least someone somewhere finds this helpful, and if you happen to be a dev on the ASP.NET team at Microsoft, why not reset this variable before the page's Render method exits?

Thursday, January 8, 2009

Uploading attachments for blog entries on blogger

A request was made to upload some of the source code for a previous blog post. Is this possible to do with Blogger? If not, where is a good place to host it? Does anyone have any suggestions? Free is critical.