Category: C#


Funky functions

While reading a little article on JavaScript performance enhancements (sorry, lost the link) I noticed a little trick whereby a closure redefined itself, and I wondered whether the same was possible in C#. Well, lo and behold, it is, i.e.:

Func<bool> isTrue = null; // declare first so the lambda can capture the variable
isTrue = () =>
{
      isTrue = () => false; // redefine ourselves; subsequent calls hit this lambda
      return true;
};

The first time this function executes it returns true, but every call thereafter returns false. This may be useless, but it's pretty interesting (to me anyway).
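To make that concrete, calling it a few times gives:

Console.WriteLine(isTrue()); // True  (first call, which also swaps the delegate)
Console.WriteLine(isTrue()); // False (the redefined lambda runs from here on)
Console.WriteLine(isTrue()); // False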

Now, the JavaScript example that I previously read used this technique to avoid unnecessary conditional lookups when retrieving the window scroll position (depending on whether the browser was IE/FF/Chrome etc.). In C#, an equivalent (if slightly unrealistic) example might be:

Func<Postcode, Coords> getCoords = null; // note: takes a postcode, returns coords
getCoords = pCode =>
{
    if (CoordinateProviderConst == "XService")
        getCoords = pstCode => new XService().Coords(pstCode);
    else
        getCoords = pstCode => new YService().Coords(pstCode);
    return getCoords(pCode); // delegate to whichever lambda we just became
};

Now please don't take the above code as being at all sensible (it's not); it's just the best example I could think up quickly. What it does show is that this technique gives us the ability to have almost a state-pattern-type-thing at the function level.

That said, I'm really struggling to think up a sensible use case for this technique, and the JavaScript example I read about could have been handled in several other ways that would ensure the conditional lookup only occurred once. So with that final thought, here end my random musings… still, it was pretty interesting to find out that this was even possible, if a bit useless.


I am by no means a regular blogger, but the last time I put pen to paper (or fonts to screen) I suggested that the .NET unit testing cycle was just too long. This became completely apparent to me when recently working on a project with thousands of code files, where the compilation of a test project could take several minutes. Obviously, poor workstations had their portion of the blame, but I also wondered whether the problem was compounded by the whole load of unnecessary files that had to be compiled in order to create the testing assembly.

When going through the red-green-refactor loop we typically run a few select tests over and over again, not the whole test suite. This made me wonder: do we really need to build the whole testing assembly (and, more importantly, all referenced projects) just to run a single test?

The fairly simplistic idea I had was to parse a single test file and determine only the dependencies required to make it build, then use the solution file to work out where the code files for these dependencies live on the file system.
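The rough shape of that first step might look something like this (an illustrative sketch only, not the code from my spike; the class and method names are made up):

using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

// Pull the 'using' directives out of a test file so they can later be
// mapped back to the projects listed in the solution file.
public class DependencyScanner
{
    public IEnumerable<string> FindNamespaces(string testFilePath)
    {
        string source = File.ReadAllText(testFilePath);
        return Regex.Matches(source, @"^\s*using\s+([\w\.]+)\s*;", RegexOptions.Multiline)
                    .Cast<Match>()
                    .Select(m => m.Groups[1].Value)
                    .Distinct();
    }
}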

Well, while preparing for an upcoming trip, I have had a little time off work. With this time I thought I would knock up a really quick and dirty spike to see whether what I had in mind was actually sensible to implement. The result is some really nasty code with very sparse test coverage and some extremely poor design decisions. That said, I also managed to prove that (given the required time) I could likely make the .NET testing cycle a little less painful for myself.

My spike can be found at http://github.com/chrisjowen/WellItCouldWork, and the result is a small console app that takes the path of a test file and its corresponding solution file. Whenever the test file is saved, the app parses the file to find any dependencies referenced in it, builds a temporary assembly, and runs the NUnit test runner against this little assembly, i.e.:

[Image: console output from the spike]
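For a sense of the moving parts, the compile-and-run step could be done with something along these lines (a hypothetical sketch, not the spike's actual code; file paths here are made up):

using System.CodeDom.Compiler;
using Microsoft.CSharp;

// Compile just the test file plus the source files it depends on into a
// small temporary assembly, referencing NUnit so the tests can run.
string[] filesToCompile = { @"Tests\MyTest.cs", @"Src\ClassUnderTest.cs" };

var parameters = new CompilerParameters
{
    OutputAssembly = "Temp.Tests.dll",
    GenerateInMemory = false
};
parameters.ReferencedAssemblies.Add("nunit.framework.dll");

CompilerResults results = new CSharpCodeProvider()
    .CompileAssemblyFromFile(parameters, filesToCompile);

// If compilation succeeded, hand the assembly to the NUnit console runner:
//   nunit-console.exe Temp.Tests.dll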

As I said, this is just a spike so the code isn't great, but I think that when I get back from my travels I will look at doing a serious, production-ready rewrite, as I honestly think this would be a useful tool in the .NET world. Any thoughts?

5-second IList ordering with LINQ and strings

OK, so I'm only writing this blog post as a reference for myself so I don't have to think about it again, but hey, it *may* be useful to someone other than me.

I just needed to order a generic list based upon a property name string; here's what I have:

public static IList<T> OrderByName<T>(this IList<T> items, string order, string column)
{
    // Don't try to sort if we have no items
    if (items.Count == 0) return items;

    var propertyInfo = typeof(T).GetProperties()
                                .FirstOrDefault(p => p.Name == column);

    return propertyInfo == null
        ? items
        : (order.ToUpper() == "ASC"
            ? items.OrderBy(n => propertyInfo.GetValue(n, null)).ToList()
            : items.OrderByDescending(n => propertyInfo.GetValue(n, null)).ToList());
}

Usage:

myList.OrderByName("asc", "Property");

After yet another day of being distracted WAY too much by Twitter, I recall a tweet that Paul Cowan posted:

“what are people using to combine all their .js files into 1?”

Now, I had no idea how people were doing this but, ironically, this is functionality that I will require myself shortly. What I did know was that the kind people at Google recently released their Closure Compiler for JavaScript, a tool that cleans, optimizes and minifies your .js scripts. This library is available as either a downloadable command line tool or via a RESTful API.

So, for the purpose of testing this library (and given that Mads Kristensen has created a small wrapper around it) I opted to play with the RESTful API. My aim was to build a quick and dirty HttpHandler around this library that would mash all my .js files together, send them off to be compiled, and stuff the results into the cache.

With this in mind I give you my 15 minute spike: www.dotlesscss.com/mashpotato.zip

If you download this solution you’ll see a web project that references my HttpHandler and has the following config section:

<add verb="GET" path="*.mash" validate="false" type="MashPotato.Core.ClosureHttpHandler, MashPotato.Core"/>

This configuration allows me to add a ".mash" file in the same folder as my scripts, containing a newline-separated list of the names of the JavaScript files to be mashed together.

Looking in scripts.mash you'll see a plain list of script names, something like this (the file names here are illustrative):
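MicrosoftAjax.js
MicrosoftMvcAjax.js
site.js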

Now simply reference the .mash file like any other JavaScript file and you should get the mashed, compiled, optimized version of the referenced files, i.e.:

<script src="../../Scripts/scripts.mash?cache=true" type="text/javascript"></script>

Also, notice the "cache=true" URI parameter… guess what this does?
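For the curious, the guts of such a handler boil down to something like the following (a minimal sketch, not the actual MashPotato code; it assumes Google's closure-compiler.appspot.com endpoint and its js_code/compilation_level parameters):

using System.IO;
using System.Linq;
using System.Net;
using System.Web;

public class ClosureMashHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string mashPath = context.Server.MapPath(context.Request.Path);
        string folder = Path.GetDirectoryName(mashPath);

        // The .mash file is just a newline-separated list of script names.
        string combined = string.Join("\n",
            File.ReadAllLines(mashPath)
                .Where(name => name.Trim().Length > 0)
                .Select(name => File.ReadAllText(Path.Combine(folder, name.Trim())))
                .ToArray());

        string cacheKey = "mash:" + mashPath;
        string compiled = context.Cache[cacheKey] as string;

        if (compiled == null)
        {
            compiled = Compile(combined);
            if (context.Request.QueryString["cache"] == "true")
                context.Cache[cacheKey] = compiled;
        }

        context.Response.ContentType = "application/x-javascript";
        context.Response.Write(compiled);
    }

    // POST the combined source to Google's RESTful compiler service.
    private static string Compile(string source)
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/x-www-form-urlencoded";
            string data = "js_code=" + HttpUtility.UrlEncode(source) +
                          "&compilation_level=SIMPLE_OPTIMIZATIONS" +
                          "&output_format=text&output_info=compiled_code";
            return client.UploadString("http://closure-compiler.appspot.com/compile", data);
        }
    }
}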

Issues:

I'd say there are likely to be hundreds of issues, as it took me approximately twice as long to write this blog post as to write the handler, but off the top of my head here goes:

  • Google's RESTful API has a size limit on the amount of data that can be posted to it, so no mashing up your large framework scripts like jQuery.
  • Obviously, the downloadable console application should perform much better than the RESTful API.
  • I don't know, it took me 10 minutes; I'm sure it's buggy as hell.

We are now at a point where testing has gone through the stage of being the new kid on the block and come out the other end a proven engineering practice. Many developers are now seeing improved code quality and a greater feeling of confidence in what they release.

But with all the goodness, we still have a few teething issues with the frameworks that support these paradigms. Not to mention the fact that, without the shiny gloss surrounding testing, we're feeling a severe lack of the "ooooh" factor. Enter a few new and old faces in the OOP community to spice up testing frameworks (and hopefully resolve a few problems with existing libraries).

#TestEx (Sharp Tests Extensions)

One of the great things about unit tests is that they double up as a great source of documentation for any developer looking at the code. As with any documentation, it is only any good if you can easily understand it. Traditional unit testing frameworks such as NUnit or MSTest use a standard set of static method calls against an assertion class, such as:

Assert.AreEqual("so", something.SubString(2));
Assert.AreEqual("ing", something.SubString(something.Length-3, something.Length));
Assert.That(something.Contains("meth"));

While this code isn't exactly cryptic, it does take a little bit of concentration to figure out what's happening. #TestEx is brought to us by Fabio Maulo (among others) and adds a series of extension methods that work with various unit test frameworks. What this gives us is tests that read much more fluently and clearly, for example:

something.Should()
   .StartWith("so")
   .And
   .EndWith("ing")
   .And
   .Contain("meth");

Behaviour Driven Tests

One of the other complaints about existing testing frameworks is that it's difficult to tie a series of tests back to the desired system behaviour. Several articles have been published on the topic of BDD, and now it seems there are some pretty interesting testing frameworks coming out of the woodwork, such as NBehave and SpecFlow.

These frameworks take the approach that you initially write your user stories (and acceptance criteria), then write clear tests that specifically meet these criteria. This may not sound much different to previous approaches, but the key difference is that these frameworks totally cater for this scenario and almost force you down this path.

For example, SpecFlow dictates that we write our stories in a business-readable, domain-specific language that both our clients and the framework understand.

[Image: a SpecFlow feature definition]
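To give a flavour of what that looks like (a hypothetical example of mine, not the feature from the screenshot; the Calculator class is made up), the plain-text scenario binds to ordinary C# step definitions:

// Feature file (plain text, readable by the client):
//   Scenario: Add two numbers
//     Given I have entered 50 into the calculator
//     And I have entered 70 into the calculator
//     When I press add
//     Then the result should be 120

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class CalculatorSteps
{
    private readonly Calculator calculator = new Calculator(); // hypothetical class under test

    [Given(@"I have entered (\d+) into the calculator")]
    public void GivenIHaveEntered(int number)
    {
        calculator.Enter(number);
    }

    [When(@"I press add")]
    public void WhenIPressAdd()
    {
        calculator.Add();
    }

    [Then(@"the result should be (\d+)")]
    public void ThenTheResultShouldBe(int expected)
    {
        Assert.AreEqual(expected, calculator.Result);
    }
}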

What this essentially means is that we can write tests at a feature level that our clients can verify for correctness. How about that for the shiny “ooooh” factor?

While reading one of Daniel Hoelbling's great posts I noticed a strong warning he makes: "The GAC is your enemy!"

I fully understand his point: it's a PITA, at the very least, to have to hunt down dependencies that others have installed in their GAC. But I also can't help thinking that installing something to the GAC is very much like adding a Gem in Ruby.

So why is this lavish disregard for what other team members may (or may not) have installed on their machines acceptable in the Ruby world?

In Ruby, if I have a missing Gem reference then all I need to do is pop open a command line and type "gem install xxxx" and hey presto, I have the dependency installed. Couple this with the fact that Rails brings some Rake tasks to the table that allow all of a project's missing Gems to be installed at once by executing "rake gems:install".

Now don't get me wrong, I'm fully aware that there are many other reasons not to install to the GAC, but I don't see why Ruby manages to sidestep a lot of these issues. This is a genuine question to anyone reading this post: what does the Gem framework do to counter versioning issues and updates to shared libraries?

HornGet: Apt-Get for .NET

A quick search on the web leads me to HornGet, a great project that enables an "apt-get" type scenario for .NET applications via a command like "horn -install:rhino". Horn will not do any GAC installation; instead it will build the latest versions of your libs and add them to a specified location (defaulting to the user profile directory).

This is a great project as far as I am concerned, as just trying to hunt down the latest versions of common 3rd party libraries can be painful. I think that .Less is definitely going to be added to Horn.

Today we launched a site to host dotless – less css for .NET and decided to tweet about it a little. Now, the fact that I'm new to Twitter and generally don't have anything interesting to tweet about means I have about 3 followers. Some of the guys in and around the dotless project, however, have a slightly larger following; hence, 1 hour in we have had ~120 visitors.

[Image: visitor stats]

nLess – Less Css For .NET

While the world ticks by and time to play with personal open source projects gets more and more limited, it seemed like the best thing to do was to push live what I have of my .NET version of the Less library. With this in mind I have pushed my latest code up to CodePlex at: http://nlesscss.codeplex.com/

When time permits (i.e. when I find a stable contract for a few months) I'll get back to working on the project, but until then enjoy the CodePlex release.

EDIT: We now have a shiny new home for the .NET Less port – www.dotlesscss.com

The tortoise and the hare

Today I learned that I was not the only one who thought LessCss warranted porting to the .NET world; it appears that Erik van Brakel at smooth friction may have pipped me to the post. Looking at his project it seems I may be a few steps behind, though he also doesn't have the Less spec tests passing.

So what do I do with this new information? Should I throw in the towel now? Well, maybe, but I have put too much time in not to finish my port, even if it ends up being more of an ill-performing hack job than what Erik achieves. Time will tell how it all plays out, but either way it's been a great learning curve, and I'm glad there are people out there filling gaps who are happy to take on these types of projects.

Playing with PEGs

I'm now part way through the Less CSS port, and I am getting close to porting the core library for programmatically building the Less nodes that are evaluated to create CSS. This now seems like a good time to look at the parsing element of the project. Less originally uses Treetop as its parser library, which I gather is a really powerful PEG/packrat parser. Now, it should be said at this point that I know even less about parsing theory than I do about Ruby (why did I start this!).

My first step here was to find out what the darn tootin' a PEG parser is and see what C# has to offer. After a few hours of hunting, the bad news is that the .NET world has very little to compete with Treetop, although I did find a very detailed article about PEG parsers with a little implementation of one. After downloading the code I can see that this is more than a "little implementation"; there has been a hell of a lot of work done on this project and it looks promising as a Treetop alternative.
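For anyone else wondering, the core PEG idea turns out to be small enough to sketch in a few lines of C# (a toy illustration of mine, nothing to do with Treetop or the article above): a parser either consumes input and succeeds, or fails and backtracks, and choice is ordered, so the first alternative that matches wins.

using System;

// A parser takes the input and a position; it returns the new position on
// success, or null on failure (failure simply means "try something else").
public delegate int? Parser(string input, int pos);

public static class Peg
{
    // Match an exact string at the current position.
    public static Parser Literal(string s)
    {
        return (input, pos) =>
            pos + s.Length <= input.Length && input.Substring(pos, s.Length) == s
                ? pos + s.Length
                : (int?)null;
    }

    // Ordered choice: try 'a' first; only if it fails, try 'b'.
    public static Parser Or(Parser a, Parser b)
    {
        return (input, pos) => a(input, pos) ?? b(input, pos);
    }

    // Sequence: 'b' parses from wherever 'a' left off.
    public static Parser Then(Parser a, Parser b)
    {
        return (input, pos) =>
        {
            int? mid = a(input, pos);
            return mid == null ? null : b(input, mid.Value);
        };
    }
}

// Ordered choice in action: "#fff" is tried before "#ffffff", so on the input
// "#ffffff" the shorter literal wins and only the first four characters match.
// var hexColour = Peg.Or(Peg.Literal("#fff"), Peg.Literal("#ffffff"));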