Just a quick post to say that .Less/DotLess is now available via Horn and also via their web-based package downloader.
As some of you may know, I have recently been hard at work porting Less CSS to .NET, and I thought it was about time I actually gave my opinion on the syntax and offered a few thoughts on usage.
Variables
The first thing that strikes most people about the Less syntax is the use of variables. I have to say I still find this extremely useful, and it's often overlooked that properties and variables are also accessible from within a nested ruleset scope, e.g.:
#defaults {
  @width: 960px;
}

.article { color: #294366; }

.comment {
  width: #defaults[@width];
  color: .article['color'];
}
Also, variables are, well… erm… variable, and they change their value depending on the scope you find them in, e.g.:
@var: red;

#page {
  @var: white;

  #header {
    color: @var; // white
  }
}
The use of variables really requires a little bit of thought. Other than the “corporate colour” example, it is often difficult to determine which candidates are eligible to become variables. I have found this is a lot easier if I'm working against a design and I sit down with a cup of tea first and plan out my approach.
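For what it's worth, the candidates I usually end up with when planning against a design look something like this (the names and values here are invented purely for illustration):

@brand-colour: #294366;
@text-colour: #333;
@page-width: 960px;
@gutter: 10px;

#header { background-color: @brand-colour; width: @page-width; }
.article { color: @text-colour; padding: @gutter; }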
The other issue I have is that I can never remember the names of the damn things. This may be a personal problem with my goldfish-like memory span, but I certainly wouldn't sniff at a bit of tooling support. Now, as I type, we are looking at Visual Studio integration, but in true MS spirit they have made this bloody painful, so don't expect instant results.
Mixins
Other than variables, which are pretty cool, we also get the advantage of mix-ins and operators at our fingertips. Mix-ins are great for creating reusable chunks of style info that we can easily “mix in” to another element, as in the sketch below.
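As a quick illustration (the class names and values here are made up for the example), a chunk of styling declared once can be pulled into any other ruleset simply by naming it:

.bordered {
  border: 1px solid #ddd;
  padding: 5px;
}

.comment {
  color: #294366;
  .bordered;
}

.sidebar-box {
  .bordered;
}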
That said, this same functionality can be gained by adding several classes to an HTML element. One of the first pain points I had with Less was trying to justify using mix-ins, and for the most part I have to admit they are pretty funky, but equally useless.
One pitfall, for example, is that it's common to use client-side scripting to affect page layout by dynamically adding or removing CSS classes on an element. Obviously, this is not possible if we have mixed several classes into one instead of adding them separately to the HTML node.
HOWEVER…
As with anything, when used correctly mix-ins are really valuable. Where mix-ins really shine is when you couple them with a framework such as Blueprint, and if any of you have used such frameworks, things like this won't be uncommon:
class="span-15 prepend-1 colborder"
With mix-ins we can bundle all of these together into a single class, e.g.:
@import "~/Content/blueprint/screen.css";

#sidebar {
  .span-15;
  .prepend-1;
  .colborder;
}
One other cool thing worth noting about mix-ins is that you can access them via namespaces, e.g.:
.outer {
  content: "ignore me";

  .inner {
    content: "mix me";
  }
}

#mixer {
  .outer > .inner;
}
This obviously gives a much finer grain of control than simply adding CSS classes to an HTML element.
It's also worth mentioning that mix-ins also hurt my brain when trying to remember what I called the damn things. Once again, tooling will help here, and I really should get on with the Visual Studio integration (or at least prod Erik a bit as he's currently looking at it).
Imports
Imports allow you to bring together several Less/CSS files and merge them into one. You can even access variables/properties from the file you import (there's a quick sketch of this after the gotchas below). Imports are great even if you don't want to use any of the syntactic sugar that you get with Less and simply want a way to merge your CSS files together.
All in all, I can't get enough of imports as a way of separating my Less style sheets into manageable, reusable sections. There are, however, a few gotchas you'll have to keep in mind with imports:
- When caching is enabled in the HttpHandler, the cache will only be recycled if the main referenced Less file is changed, not if any imported files change. This is no big deal: simply disable the HttpCache until you deploy.
- Imports will not have access to variables in the main referenced Less file (or in other Less files referenced by the main one). This ensures that imported Less files have no dependencies on where they are being used.
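As a quick sketch of the variable access mentioned above (the file names and the @highlight variable are invented for the example), the main file can use variables declared in a file it imports, but, as per the second gotcha, not the other way around:

// colours.less (the imported file)
@highlight: #ffffcc;

// site.less (the main referenced file)
@import "colours.less";

.flash-message {
  background-color: @highlight; // fine - @highlight comes from the imported file
}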
Futures
As with any language/framework/whatever, there are issues that you will hit, but all in all I really enjoy working with Less. I think we could expand on the tooling though, and as I mentioned our first port of call is VS integration, but other than this here are a few other ideas we have thrown around:
- Environmental and query string variables passed via the HttpHandler for use in our Less document.
- Conditional (IF/THEN/ELSE) blocks – this, combined with environmental variables, would allow switching on, say, the browser or maybe the currently selected theme held in the session.
- Mix-ins with variables – this is actually implemented in the Ruby library, but we have yet to port it (there's a rough sketch of the syntax below).
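For anyone curious, the Ruby library lets you pass arguments into a mix-in along these lines (the selector and values here are invented, and the exact syntax may differ slightly from what we eventually port):

.rounded(@radius: 5px) {
  -moz-border-radius: @radius;
  -webkit-border-radius: @radius;
  border-radius: @radius;
}

#login-box {
  .rounded(10px);
}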
Any other thoughts and ideas welcome.
We are now at a point where testing has gone through the stage of being the new kid on the block and come out the other end as a proven engineering practice. Many developers are now seeing improved code quality and a greater feeling of confidence in what they release.
But with all this goodness we still have a few teething issues with the frameworks that support these paradigms. Not to mention the fact that, without the shiny gloss surrounding testing, we're feeling a severe lack of the “ooooh” factor. Enter a few new and old faces in the OOP community to spice up testing frameworks (and hopefully resolve a few problems with existing libraries).
#TestEx (Sharp Tests Extensions)
One of the great things about unit tests is that they double up as a great source of documentation for any developer looking at the code. As with any documentation, it is only any good if you can easily understand it. Traditional unit testing frameworks such as NUnit or MSTest use a standard set of static method calls against an assertion class, such as:
Assert.AreEqual("so", something.SubString(2));
Assert.AreEqual("ing", something.SubString(something.Length-3, something.Length));
Assert.That(something.Contains("meth"));
While this code isn't exactly cryptic, it does take a little bit of concentration to figure out what's happening. #TestEx is brought to us by Fabio Maulo (among others), and adds a set of extension methods that work with various unit test frameworks. What this gives us is tests that read much more fluently and clearly, for example:
something.Should()
.StartWith("so")
.And
.EndWith("ing")
.And
.Contain("meth");
Behaviour Driven Tests
One of the other complaints about existing testing frameworks is that it's difficult to match a series of tests with the desired system behaviour. Several articles have been published on the topic of BDD, and now it seems there are some pretty interesting testing frameworks coming out of the woodwork, such as NBehave and SpecFlow.
These frameworks take the approach that you initially write your user stories (and acceptance criteria), then write clear tests that specifically meet these criteria. This may not sound much different from previous approaches, but the key difference is that these frameworks totally cater for this scenario and almost force you down this path.
For example, SpecFlow dictates that we write our stories in a business-readable, domain-specific language that both our clients and the framework understand.
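To give a rough flavour (this feature, its steps and the numbers are made up purely for illustration, not taken from the SpecFlow docs), a story in the Given/When/Then style might look something like this:

Feature: Basket discount
  As a returning customer
  I want a discount applied to my basket
  So that I am rewarded for my loyalty

Scenario: Returning customer gets 10% off
  Given a customer who has ordered before
  When they place an order worth 100 pounds
  Then the order total should be 90 pounds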
What this essentially means is that we can write tests at a feature level that our clients can verify for correctness. How about that for the shiny “ooooh” factor?
While reading one of Daniel Hoelbling's great posts I noticed a strong warning he makes: “The GAC is your enemy!”
I fully understand his point that it's a PITA, to say the least, to have to hunt down dependencies that others have installed in their GAC. But I also can't help thinking that installing something to the GAC is very much like adding a Gem in Ruby.
So why is this lavish disregard for what other team members may (or may not) have installed on their machines acceptable in the Ruby world?
In Ruby, if I have a missing Gem reference then all I need to do is pop open a command line and type “gem install xxxx” and hey presto, I have the dependency installed. Couple this with the fact that Rails brings some Rake tasks to the table that allow all of a project's missing Gems to be installed at once by executing “rake gems:install”.
Now don't get me wrong, I'm fully aware that there are many other reasons not to install to the GAC, but I don't see why Ruby manages to sidestep a lot of these issues. This is a genuine question to anyone reading this post: what does the Gem framework do to counter versioning issues and updates to shared libraries?
HornGet: Apt-Get for .NET
A quick search on the web leads me to HornGet, a great project that enables an “apt-get”-type scenario for .NET applications via a command like “horn -install:rhino”. Horn will not do any GAC installation; instead it will build the latest versions of your libs and add them to a specified location (which defaults to the user profile directory).
This is a great project as far as I am concerned, as just trying to hunt down the latest versions of common 3rd-party libraries can be painful. I think that .Less is definitely going to be added to Horn.
I'm one of the thousands of people who are, all in all, pretty impressed with WordPress as a blogging platform. As a primarily Windows developer I have looked around for half-decent .NET alternatives, and there are a few, such as Das Blog and BlogEngine.NET, but none are as “polished” as WordPress.
With this in mind I went about installing PHP and MySQL on my Windows 2003 64-bit virtualization slice, which isn't as straightforward as you might expect.
Step 1: Install PHP
Well, it turns out that PHP and IIS play quite nicely together, and have done for some time now. The setup process is also fairly well written about; see: http://www.iisadmin.co.uk/?p=4.
So I followed the instructions given and everything went OK until I navigated to a test.php page and hit an error.
The problem is that the only supported version of the PHP ISAPI filter .dll is compiled for a 32-bit machine. Now, it turns out that IIS can only load applications that are either ALL 64-bit or ALL 32-bit. Not to worry, as we can enable IIS to run under 32-bit by throwing this at the command line:
cscript %SYSTEMDRIVE%\inetpub\adminscripts\adsutil.vbs SET W3SVC/AppPools/Enable32bitAppOnWin64 1
This is fine if the aspnet_isapi.dll referenced by your ASP.NET sites is 32-bit. Otherwise (like me) any ASP.NET sites living on the box will crash, taking their application pool with them (bye-bye www.dotlesscss.com).
So what are the options? Either I reset all my ASP.NET sites to use the 32-bit aspnet_isapi.dll, or I find 64-bit PHP libraries; I went for the latter. If you hop along here you will find a nicely zipped up 64-bit PHP instance (Note: it's the PHP-5.x.x-x64-2007-xx-yy.zip).
Step 2: Install MySQL
Fortunately, MySQL does provide a supported 64-bit Windows installer, so it's only really a matter of following the instructions. If you require a step-by-step installation guide for this then see here.
Step 3: Install WordPress on IIS6
Yet again, this is a fairly well documented process and shouldn't cause you too many problems. The only other bump in the road I hit was that I forgot to uncomment the lines referencing the MySQL extension in the php.ini file, in which case WordPress will dutifully report “your PHP installation appears to be missing the MySQL extension which is required by WordPress”.
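For anyone hitting the same thing, the lines to look at in php.ini are along these lines (the extension_dir path below is just an example and will depend on where you unzipped PHP):

; point extension_dir at the ext folder of your PHP install
extension_dir = "C:\php\ext"

; remove the leading ; so the MySQL extension is actually loaded
extension=php_mysql.dll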
Other than that, happy blogging.