
Funky functions

While reading a little article on JavaScript performance enhancements (sorry, lost the link) I noticed a little trick whereby a closure redefined itself, and I wondered whether this was possible in C#. Well, lo and behold, it is, i.e.

Func<bool> isTrue = null;  // declare first so the lambda can refer to itself
isTrue = () => {
      isTrue = () => false;  // redefine ourselves; every later call hits this version
      return true;
};

Executing this function the first time returns true, but every call thereafter returns false. This may be useless, but it’s pretty interesting (to me anyway).

Now, the JavaScript example that I previously read used this technique to avoid unnecessary conditional lookups when retrieving the window scroll position (depending on whether the browser was IE/FF/Chrome etc.). In C#, an equivalent (if slightly unrealistic) example might be:

Func<Postcode, Coords> getCoords = null;
getCoords = pCode => {
    // the conditional lookup only ever runs on the first call
    if (CoordinateProviderConst == "XService")
        getCoords = pstCode => new XService().Coords(pstCode);
    else
        getCoords = pstCode => new YService().Coords(pstCode);
    return getCoords(pCode);
};

Now please don’t take the above code as being at all sensible (it’s not); it’s just the best example I could think up quickly. What it does show is that this technique gives us the ability to have almost a state-pattern-type-thing at the function level.

That said, I’m really struggling to think up a sensible use case for this technique, and the JavaScript example I read about could have been handled in several other ways that would ensure the conditional lookup only happened once. So with that final thought, here end my random musings… still, it was pretty interesting to find out that this was even possible, if a bit useless.


I am by no means a regular blogger, but the last time I put pen to paper (or fonts to screen) I suggested that the .NET unit testing cycle was just too long. This became completely apparent to me when recently working on a project with thousands of code files, where the compilation of a test project could take several minutes. Obviously, poor workstations took their portion of the blame, but I also wondered whether the fact that a whole load of unnecessary files had to be compiled in order to create the testing assembly was also a culprit.

When going through the red-green-refactor loop we are usually running a few select tests over and over again, not the whole test suite. This made me wonder: do we really need to build the whole testing assembly (and, more importantly, all referenced projects) just to run a single test?

The fairly simplistic idea that I had was to parse a single test file and determine only the dependencies required to make it build. I would achieve this by looking at the solution file and working out where the code files for these dependencies live on the file system.
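
As a rough illustration of the idea (this is not the spike’s actual code; the regexes and type names are simplified assumptions, and real resolution is much messier), pulling the using directives out of a test file and listing the projects referenced by the solution might look something like this:

// Sketch only: find the namespaces a test file pulls in, and list the
// project files referenced by the solution so they can be matched up.
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

public static class DependencyFinder
{
    public static IEnumerable<string> UsingsIn(string testFile)
    {
        return File.ReadAllLines(testFile)
                   .Select(line => Regex.Match(line, @"^\s*using\s+([\w\.]+)\s*;"))
                   .Where(match => match.Success)
                   .Select(match => match.Groups[1].Value);
    }

    public static IEnumerable<string> ProjectsIn(string solutionFile)
    {
        // .sln project entries look like:
        // Project("{GUID}") = "MyProject", "MyProject\MyProject.csproj", "{GUID}"
        return File.ReadAllLines(solutionFile)
                   .Select(line => Regex.Match(line, @"Project\(.*\) = "".*"", ""(.*\.csproj)"""))
                   .Where(match => match.Success)
                   .Select(match => match.Groups[1].Value);
    }
}

From there it is “just” a matter of compiling the matching code files into a throwaway assembly, which is what the spike has a stab at.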

Well, while preparing for an upcoming trip, I have had a little time off work. With this time I thought I would knock up a really quick and dirty spike to see whether what I had in mind was actually sensible to implement. The result is some really nasty code with very sparse test coverage and some extremely poor design decisions. That said, I also managed to prove that (given the required time) I could probably make the .NET testing cycle a little less painful for myself.

My spike can be found at http://github.com/chrisjowen/WellItCouldWork, and the result is a small console app that takes the path of a test file and its corresponding solution file. Whenever the test file is saved, the app parses it to find any dependencies referenced in the file, builds a temporary assembly and runs the NUnit test runner against this little assembly i.e.

Output
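
Stripped right down, the loop the app goes through is roughly the sketch below. This is not the spike’s real code: it assumes csc.exe and nunit-console.exe are on the path, hand-waves the dependency resolution, and skips all error handling:

// Sketch only: watch the test file and, on save, compile it into a small
// temporary assembly and hand that assembly to the NUnit console runner.
using System;
using System.Diagnostics;
using System.IO;

class TestWatcher
{
    static void Main(string[] args)
    {
        var testFile = args[0];
        var watcher = new FileSystemWatcher(Path.GetDirectoryName(testFile),
                                            Path.GetFileName(testFile));
        watcher.Changed += (sender, e) =>
        {
            // In the real tool the file list would come from parsing the test
            // file and the solution; here only the test file itself is compiled.
            var output = Path.Combine(Path.GetTempPath(), "TempTests.dll");
            Run("csc.exe", "/target:library /out:" + output +
                           " /reference:nunit.framework.dll " + testFile);
            Run("nunit-console.exe", output);
        };
        watcher.EnableRaisingEvents = true;
        Console.ReadLine(); // keep watching until the user quits
    }

    static void Run(string exe, string arguments)
    {
        using (var process = Process.Start(new ProcessStartInfo(exe, arguments) { UseShellExecute = false }))
            process.WaitForExit();
    }
}

Shelling out to csc.exe keeps the sketch short; however the temporary assembly actually gets built, the important point is that only a handful of files ever get compiled.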

As I said, this is just a spike so the code isn’t great, but I think that when I get back from my travels I will look at doing a serious, production-ready rewrite, as I honestly think this would be a useful tool in the .NET world. Any thoughts?

One of the most painful aspects of working with .NET is the time it takes to get feedback after writing tests, waiting to see the red/green result. The obvious overhead is compilation time, and this is especially apparent when updating a large solution where files in several projects have changed.

Unfortunately, the solution coming from the guys in the Microsoft camp seems to be to spend excessive amounts on quad-core machines with SSD drives. While this is *an* option, I started to wonder if there were alternatives coming from other development communities. Now obviously, looking at the Ruby or Python worlds would be a little pointless, as they live in blissful interpreted-language heaven, but what about Java?

One advantage that Java has is its compilation model: each Java file is compiled to a corresponding class file holding its intermediate representation, ready to be executed by the VM. To reduce the number of class files floating around, Java supports packaging these files into JAR/WAR archives. What this essentially means is that it is possible to compile only the files that have changed and replace the individual class files. Couple this with the fact that most Java IDEs support compilation on save (which I *think*, on success, overwrites the .class files), and you get fast compilation and code that is usually ready to execute at any given time.

In the .NET world compilation doesn’t happen at this level of granularity; instead, all files within a given MSBuild project are compiled into a single assembly, making it quite difficult to update only the parts that have changed.

In my mind the only solution is to determine the minimum requirements for a given test class and ONLY compile those files, instead of the whole world. I’m unsure how this could be accomplished other than by parsing the text of a given .cs file and its corresponding solution and project files to resolve dependencies. Even then there is no way to determine dependencies resolved via reflection (e.g. IoC-injected dependencies). I would love to see the day that C# has lightning-fast test feedback cycles, and I am open to any ideas on how to achieve this.

Quick post to say that on Friday morning I received my acceptance letters from ThoughtWorks. This means I can finally close the door on one Saga (and open the door to another one).  All in all it should make for an interesting start to the New Year.

5 Second IList ordering with Linq and strings

OK, so I’m only writing this blog post as a reference for myself so I don’t have to think about it again, but hey, it *may* be useful to someone other than me.

I just needed to order a generic list based upon a property name string; here’s what I have:

public static IList<T> OrderByName<T>(this IList<T> items, string order, string column)
{
    // don't try to sort if we have no items
    if (items.Count == 0) return items;

    // find the property we've been asked to sort on
    var propertyInfo = typeof(T).GetProperties()
                                .Where(p => p.Name == column)
                                .FirstOrDefault();
    return propertyInfo == null
        ? items
        : (order.ToUpper() == "ASC"
            ? items.OrderBy(n => propertyInfo.GetValue(n, null)).ToList()
            : items.OrderByDescending(n => propertyInfo.GetValue(n, null)).ToList());
}

Usage:

myList.OrderByName("asc", "Property");

After yet another day of being distracted WAY too much by twitter, I recall a tweet that Paul Cowan posted:

“what are people using to combine all their .js files into 1?”

Now I had no idea how people were doing this but, ironically, this is functionality that I will require myself shortly. What I did know, though, was that the kind people at Google recently released their JavaScript Closure Compiler, a tool that cleans, optimizes and minifies your .js scripts. This library is available either as a downloadable command-line tool or via a RESTful API.

So, for the purpose of testing this library (and given the fact that Mads Kristensen has created a small wrapper around it) I opted to play with the RESTful API. My aim was to build a quick and dirty HttpHandler around this library that would mash all my .js files together, send them off to be compiled and stuff the results into the cache.

With this in mind I give you my 15 minute spike: www.dotlesscss.com/mashpotato.zip

If you download this solution you’ll see a web project that references my HttpHandler and has the following config section:

<add verb="GET" path="*.mash" validate="false" type="MashPotato.Core.ClosureHttpHandler, MashPotato.Core"/>

This configuration allows me to add a “.mash” file in the same folder as my scripts, containing a newline-separated list of the JavaScript files to be mashed together i.e.

Looking in scripts.mash you’ll see:
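
Something along these lines (the file names here are purely illustrative; the real list ships in the zip):

MicrosoftAjax.js
MicrosoftMvcAjax.js
site.js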

Now simply reference the .mash file like any other JavaScript file and you should get the mashed, compiled, optimized version of the referenced files i.e:

<script src="../../Scripts/scripts.mash?cache=true" type="text/javascript"></script>

Also, notice the “cache=true” uri parameter… guess what this does?
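
For the curious, the core of such a handler boils down to something like the sketch below. This is not the code from the zip: it assumes the public closure-compiler.appspot.com endpoint, omits the namespace, and skips the caching (including the cache=true switch) and all error handling:

// Sketch only: read the .mash file, concatenate the scripts it lists and post
// the lot to Google's Closure Compiler REST service, returning the result.
using System.Collections.Specialized;
using System.IO;
using System.Net;
using System.Text;
using System.Web;

public class ClosureHttpHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        var mashFile = context.Request.PhysicalPath;
        var folder = Path.GetDirectoryName(mashFile);

        // glue the listed scripts together in the order they appear
        var source = new StringBuilder();
        foreach (var name in File.ReadAllLines(mashFile))
            if (name.Trim().Length > 0)
                source.Append(File.ReadAllText(Path.Combine(folder, name.Trim())));

        using (var client = new WebClient())
        {
            var form = new NameValueCollection
            {
                { "js_code", source.ToString() },
                { "compilation_level", "SIMPLE_OPTIMIZATIONS" },
                { "output_format", "text" },
                { "output_info", "compiled_code" }
            };
            var compiled = client.UploadValues("http://closure-compiler.appspot.com/compile", form);
            context.Response.ContentType = "text/javascript";
            context.Response.BinaryWrite(compiled);
        }
    }
}

The real handler also honours the cache=true switch by stashing the compiled output in the cache, which is the whole point of the exercise.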

Issues:

I’d say there are likely to be hundreds of issues, as it took me approximately twice as long to write this blog post as it did to write the handler, but off the top of my head here goes:

  • Google’s RESTful API has a size limit on the amount of data that can be posted to it, so no mashing up your large framework scripts like jQuery.
  • Obviously, the console application should perform much better than the RESTful API.
  • I don’t know, it took me 10 mins, I’m sure it’s buggy as hell.

.Less gets all Horn(get)y

Just a quick post to say that .Less/DotLess is now available via Horn and also via their web-based package downloader.

As some of you may know, I have recently been hard at work porting Less CSS to .NET, and I thought it was about time I actually gave my opinion on the syntax and offered a few of my thoughts on usage.

Variables

The first thing that strikes most people about the Less syntax is the use of variables. I have to say I still find this extremely useful, and it’s often overlooked that properties and variables are also accessible from within a nested ruleset scope i.e:

#defaults {
  @width: 960px;
}
.article { color: #294366; }
.comment {
  width: #defaults[@width];
  color: .article['color'];
}

Also, variables are, well… erm… variable, and they change their value depending on the scope you find them in i.e:

@var: red;
#page {
  @var: white;
  #header {
    color: @var; // white
  }
}

The use of variables really requires a little bit of thought. Other than the “corporate colour” example, it is often difficult to determine which candidates are eligible to become variables. I have found this is a lot easier if I’m working against a design and I sit down with a cup of tea first and plan out my approach.

The other issue I have is that I can never remember the names of the damn things. This may be a personal problem with my goldfish-like memory span, but I certainly wouldn’t sniff at a bit of tooling support. As I type we are currently looking at Visual Studio integration, but with true MS spirit they have made this bloody painful, so don’t expect instant results.

Mixins

Other than variables, which are pretty cool, we also get the advantage of mix-ins and operators at our fingertips. Mix-ins are great for creating reusable chunks of style info that we can easily “mix in” to another element.

That said, the same functionality can be gained by adding several classes to an HTML element. One of the first pain points I had with Less was trying to justify using mix-ins, and for the most part I have to admit they are pretty funky, but equally useless.

One pitfall, for example, is that it’s common to use client-side scripting to affect page layout by dynamically adding or removing CSS classes on an element. Obviously, this is not possible if we have mixed several classes into one instead of adding them separately to the HTML node.

HOWEVER…

As with anything, when used correctly mix-ins are really valuable. Where mix-ins really shine is when you couple them with a framework such as Blueprint, and if any of you have used such frameworks, things like this won’t be uncommon:

class="span-15 prepend-1 colborder"

With mix-ins we can bundle all of these together into a single class i.e:

@import "~/Content/blueprint/screen.css";
#sidebar {
    .span-15;
    .prepend-1;
    .colborder;
}

One other cool thing worth noting about mix-ins is that you can access them via namespaces i.e:

.outer{
    content:"ignore me";
    .inner{
        content:"mix me";
    }
}
#mixer{
    .outer > .inner;
}

This obviously gives a much finer grain of control than simply adding CSS classes to an HTML element.

It’s also worth mentioning that mix-ins also hurt my brain when I’m trying to remember what I called the damn things. Once again, tooling will help here, and I really should get on with the Visual Studio integration (or at least prod Erik a bit as he’s currently looking at it).

Imports

Imports allow you to bring together several Less/CSS files and merge them into one. You can even access variables and properties from the file you import. Imports are great even if you don’t want to use any of the syntactic sugar that you get with Less and simply want a way to merge your CSS files together.

All in all, I can’t get enough of imports as a way of separating my Less style sheets into manageable, reusable sections. There are, however, a few gotchas you’ll have to keep in mind with imports, and they are:

  • When caching is enabled in the HttpHandler, the cache will only be recycled if the main referenced Less file is changed, not if any imported files change. This is no big deal; simply disable the HttpCache until you deploy.
  • Imports will not have access to variables in the main referencing Less file (or in other Less files referenced by it). This ensures that imported Less files have no dependencies on where they are being used.

Futures

As with any language/framework/whatever there are issues that you will hit, but all in all I really enjoy working with Less. I think we could expand on the tooling though, and as I mentioned our first port of call is VS integration, but other than this here are a few other ideas we have thrown around:

  • Environmental and query string variables passed via the HttpHandler for use in our Less document.
  • Conditional (IF/THEN/ELSE) blocks – This combined with environmental variables would allow switching on say browser or maybe the currently selected theme held in the session.
  • Mix-ins with variables – this is actually implemented in the Ruby library, but we are yet to port it.

Any other thoughts and ideas welcome.

We are now at a point where testing has gone through the stage of being the new kid on the block and come out the other end a proven engineering practice. Many developers are now seeing improved code quality and a greater feeling of confidence in what they release.

But with all this goodness we still have a few teething issues with the frameworks that support these paradigms. Not to mention the fact that, without the shiny gloss surrounding testing, we’re feeling a severe lack of the “ooooh” factor. Enter a few new and old faces in the OOP community to spice up testing frameworks (and hopefully resolve a few problems with existing libraries).

#TestEx (Sharp Tests Extensions)

One of the great things about unit tests is that they double up as a great source of documentation for any developer looking at the code. As with any documentation, it is only any good if you can easily understand it. Traditional unit testing frameworks such as NUnit or MSTest use a standard set of static method calls against an assertion class, such as:

Assert.AreEqual("so", something.SubString(2));
Assert.AreEqual("ing", something.SubString(something.Length-3, something.Length));
Assert.That(something.Contains("meth"));

While this code isn’t exactly cryptic, it does take a little bit of concentration to figure out what’s happening. #TestEx is brought to us by Fabio Maulo (among others), and adds a series of extensible extensions that work with various unit test frameworks. What this gives us is tests that read much more fluently and clearly, for example:

something.Should()
   .StartWith("so")
   .And
   .EndWith("ing")
   .And
   .Contain("meth");

Behaviour Driven Tests

One of the other complaints about existing testing frameworks is that it’s difficult to match a series of tests to the desired system behaviour. Several articles have been published on the topic of BDD, and now it seems there are some pretty interesting testing frameworks coming out of the woodwork, such as NBehave and SpecFlow.

These frameworks take the approach that you initially write your user stories (and acceptance criteria), then write clear tests that specifically meet those criteria. This may not sound much different to previous approaches, but the key difference is that these frameworks totally cater for this scenario and almost force you down this path.

For example, SpecFlow dictates that we write our stories in a Business Readable, Domain Specific Language that both our clients and the framework understand.

SpecFlow
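
To give a feel for it (the scenario, the Account class and all the names below are purely illustrative, not taken from the screenshot above), a Gherkin scenario and the C# step bindings SpecFlow wires it to look roughly like this:

// Illustrative only: a feature-file scenario and the SpecFlow step bindings behind it.
//
//   Scenario: Transfer money between accounts
//     Given I have a balance of 100
//     When I transfer 40 to my savings account
//     Then my balance should be 60
//
using NUnit.Framework;
using TechTalk.SpecFlow;

public class Account   // hypothetical domain class, just to make the sketch self-contained
{
    public int Balance { get; private set; }
    public void Deposit(int amount) { Balance += amount; }
    public void TransferTo(Account other, int amount) { Balance -= amount; other.Deposit(amount); }
}

[Binding]
public class TransferSteps
{
    private readonly Account current = new Account();
    private readonly Account savings = new Account();

    [Given(@"I have a balance of (\d+)")]
    public void GivenIHaveABalanceOf(int amount)
    {
        current.Deposit(amount);
    }

    [When(@"I transfer (\d+) to my savings account")]
    public void WhenITransferToMySavingsAccount(int amount)
    {
        current.TransferTo(savings, amount);
    }

    [Then(@"my balance should be (\d+)")]
    public void ThenMyBalanceShouldBe(int expected)
    {
        Assert.AreEqual(expected, current.Balance);
    }
}

SpecFlow matches each line of the scenario to a binding by regex and passes the captured values in as parameters, so the plain-English story and the executable test stay in lock-step.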

What this essentially means is that we can write tests at a feature level that our clients can verify for correctness. How about that for the shiny “ooooh” factor?

While reading one of Daniel Hoelbling‘s great posts I noticed a strong warning he makes: The GAC is your enemy!

I fully understand his point, that it’s a PITA at the least to have to hunt down dependencies that others have installed in their GAC. But I also can’t help thinking that installing something to the GAC is very much like adding a Gem in Ruby.

So why is this lavish disregard for what other team members may (or may not) have installed on their machines acceptable in the Ruby world?

In Ruby, if I have a missing Gem reference, all I need to do is pop open a command line and type “gem install xxxx” and hey presto, I have the dependency installed. Couple this with the fact that Rails brings some Rake tasks to the table that allow all of a project’s missing Gems to be installed at once by executing “rake gems:install”.

Now don’t get me wrong, I’m fully aware that there are many other reasons not to install to the GAC, but I don’t see why Ruby manages to side-step a lot of these issues. This is a general question to anyone reading this post: what does the Gem framework do to counter versioning issues and updates to shared libraries?

HornGet: Apt-Get for .NET

A quick search on the web led me to HornGet, a great project that enables an “apt-get”-type scenario for .NET applications via a command like “horn -install:rhino“. Horn will not do any GAC installation; instead it will build the latest versions of your libs and add them to a specified location (defaulting to the user profile directory).

This is a great project as far as I am concerned, as just trying to hunt down the latest versions of common 3rd-party libraries can be painful. I think that .Less is definitely going to be added to Horn.