Playful Programming

(Visual Studio 2010 Source: JavascriptTests.zip)

Ever written some client-side logic that could use unit test coverage? There are tools to help with that, such as QUnit, FakeQuery or even Screw.Unit. But a true unit test of a JavaScript method would allow you to isolate your logic from the document and from jQuery or any other JavaScript framework you use, and it would not even require a web browser or a web server to run.

Wouldn’t it be cool if you could apply the same tool stack you use to unit test your .NET code to unit testing your JavaScript files – something like Rhino Mocks to mock jQuery, and something like MSTest to run your unit tests of JavaScript logic? I think so, but it’s always possible I need to get out more.

Ideally I’d post the production JavaScript that was the guinea pig for my initial experiments with this technique for unit testing JavaScript, but until this blog makes it big time so I can monetize it and retire, I probably need to keep my job. So here’s a trivial example of some jQuery you might like to unit test.

function sampleClass(controlId) {
    var control = $('#' + controlId);

    this.showControl = function () {
        $.ajax({
            dataType: 'json',
            url: 'PageName.aspx/GetWhetherToDisplayInRed',
            success: function (showRed) {
                if (showRed == true) {
                    control.addClass('redClass');
                }
            }
        });
    };
}

How do we run this in .NET? You might think JScript .NET, and that’s a reasonable approach, but I’ve had bad experiences with the differences between standard and “managed” JavaScript, so I went for an open source .NET JavaScript interpreter called Jint. It’s not perfect, but it behaves quite well most of the time.

Another option I looked at was Javascript .NET, which wraps the V8 JavaScript engine used by Google Chrome. Very promising stuff, except that at the time of writing the API didn’t give me a way to easily invoke the “success” function passed in the argument to the Ajax call. I’m sure if I’d been more determined I could have found a way around it, but to be honest this post is intended to share the concept more than any specific JavaScript interpreter, unit testing framework or mocking framework.

Anyhow, the JavaScript interpreter I went with allows me to inject CLR objects into the JavaScript it runs – and that should include Rhino Mocks objects, right?

Therefore if I define an interface for a mock jQuery object like this

public interface IJqueryObject
{
    void addClass(object s);
}

and an interface for an object to mock the jQuery “$” symbol like this

public interface IJquerySelector
{
    IJqueryObject find(string arg);
    void ajax(JsObject arg);
}

I should be able to intercept jQuery calls like this

var javascriptEngine = new JintEngine();

//faking the jquery calls
var mockJquery = MockRepository.GenerateMock<IJquerySelector>();
mockJquery.Stub(x => x.find(Arg<string>.Is.Anything)).Return(MockRepository.GenerateMock<IJqueryObject>());
javascriptEngine.SetFunction("$", new Func<string, IJqueryObject>(arg => mockJquery.find(arg)));
javascriptEngine.SetFunction("mockAjax", new Action<JsObject>(arg => mockJquery.ajax(arg)));
javascriptEngine.Run("$.ajax = mockAjax");

//load the javascript file under test
var jsFile = File.ReadAllText(@"..\..\..\JavascriptTests\SampleJavascriptToTest.js");
javascriptEngine.Run(jsFile);

Now I can do all the usual Rhino Mocks magic on things like “$.ajax” and “$(‘#controlId’)”, although to set expectations and verify them, for some reason I also need

//this seems to be needed for rhino mocks expectations to work..
javascriptEngine.DisableSecurity();

And that’s about all I need to do. :)

Now I can make my jQuery mock return my mocked jQuery object, and set expectations on both the “$” function and the returned jQuery object, like so:

//mock the control that will be found with the passed-in id
var control = MockRepository.GenerateMock<IJqueryObject>();

//if the ajax call returns true, expect that we add the "redClass" to the control
control.Expect(c => c.addClass("redClass"))
       .Message("'redClass' should be added to the control if 'GetWhetherToDisplayInRed' returns true");

var mockJquery = MockRepository.GenerateMock<IJquerySelector>();

//mock the control returned by searching for the passed-in control id
mockJquery.Expect(jquery => jquery.find("#" + controlId))
          .Return(control)
          .Message("The passed-in control id should be used to find the control to work with");

What about getting coverage on the jQuery callback?

Well, I first need to capture the “success” function in the argument passed to “$.ajax”, which I can do like this

//used to store the ajax argument used to call the GetWhetherToDisplayInRed page method
JsObject displayRedAjaxArg = null;

//expect an ajax call to the page method and capture the argument used
mockJquery.Expect(jquery => jquery.ajax(Arg<JsObject>.Matches(ajaxArg => IsDisplayRedAjaxCall(ajaxArg))))
          .Do(new Action<JsObject>(ajaxArg => displayRedAjaxArg = ajaxArg))
          .Message("An ajax call should be made to the 'GetWhetherToDisplayInRed' method");

“IsDisplayRedAjaxCall” is a helper that checks whether an ajax call is the one our test is looking for:

bool IsDisplayRedAjaxCall(JsObject obj)
{
    return obj["dataType"].Value.ToString() == "json"
        && obj["url"].Value.ToString() == "PageName.aspx/GetWhetherToDisplayInRed"
        && obj["success"] is JsFunction;
}

Now, when the call is made to the page method we’re interested in mocking, we capture the passed-in argument in “displayRedAjaxArg”, and we can invoke the captured “success” callback with each possible return value of the page method, for example

//invoke the ajax "success" callback function
javascriptEngine.CallFunction((JsFunction)displayRedAjaxArg["success"], true);
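
To tie it all together, the whole test might look roughly like this – a sketch only, reusing the interfaces and the IsDisplayRedAjaxCall helper from above; the test class, test method and control id are hypothetical names, and the using directives reflect the Jint build I was using:

using System;
using System.IO;
using Jint;
using Jint.Native;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Rhino.Mocks;

[TestClass]
public class SampleClassTests
{
    [TestMethod]
    public void ShowControl_AddsRedClass_WhenPageMethodReturnsTrue()
    {
        var javascriptEngine = new JintEngine();
        javascriptEngine.DisableSecurity(); //needed for the rhino mocks expectations

        //expect the control found by id to get the red class
        var control = MockRepository.GenerateMock<IJqueryObject>();
        control.Expect(c => c.addClass("redClass"));

        //expect the id lookup and the ajax call, capturing the ajax argument
        var mockJquery = MockRepository.GenerateMock<IJquerySelector>();
        mockJquery.Expect(jquery => jquery.find("#myControl")).Return(control);
        JsObject displayRedAjaxArg = null;
        mockJquery.Expect(jquery => jquery.ajax(Arg<JsObject>.Matches(a => IsDisplayRedAjaxCall(a))))
                  .Do(new Action<JsObject>(a => displayRedAjaxArg = a));

        //wire up the fake "$" and load the file under test
        javascriptEngine.SetFunction("$", new Func<string, IJqueryObject>(arg => mockJquery.find(arg)));
        javascriptEngine.SetFunction("mockAjax", new Action<JsObject>(arg => mockJquery.ajax(arg)));
        javascriptEngine.Run("$.ajax = mockAjax");
        javascriptEngine.Run(File.ReadAllText(@"..\..\..\JavascriptTests\SampleJavascriptToTest.js"));

        //construct the object under test, call the method, then fire the captured callback
        javascriptEngine.Run("new sampleClass('myControl').showControl();");
        javascriptEngine.CallFunction((JsFunction)displayRedAjaxArg["success"], true);

        control.VerifyAllExpectations();
        mockJquery.VerifyAllExpectations();
    }

    bool IsDisplayRedAjaxCall(JsObject obj)
    {
        return obj["dataType"].Value.ToString() == "json"
            && obj["url"].Value.ToString() == "PageName.aspx/GetWhetherToDisplayInRed"
            && obj["success"] is JsFunction;
    }
}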

And there you have it. If you’re still with me, I’ve demonstrated how to get complete .NET unit test coverage of JavaScript containing jQuery calls. The tests are repeatable, quick to run, and could easily be made part of your continuous integration. There are some gotchas, and some JavaScript may not work in the interpreter I chose, but I think this technique could be very useful for complex client-side logic that has occasional dependencies on jQuery or another JavaScript framework.

Book Review: Clean Code by Robert C. Martin

Posted by: admin on: March 13, 2011

When someone at work organised a screening of a video of a Robert Martin speech, it wasn’t long before people were chuckling and nodding – and even putting their hands up when he asked who in the audience had to deal with “turgid, viscous architectures”. It’s similarly easy to find oneself nodding when reading his book. For example, here he talks about developing against a dependency on an API that had not been built yet:

Though mists and clouds of ignorance obscured our view beyond the boundary, our work made us aware of what we wanted the boundary interface to be.

Thus his solution: create mocks and stubs that capture assumptions about what the dependency will provide, and away you go. As a bonus, this decoupling makes it easy to mock the dependency for unit tests.
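
In C# terms, the idea might look something like this – my own sketch, with a hypothetical transmitter API standing in for the not-yet-written dependency:

//the API we wish we had, defined by us before the real thing exists
public interface ITransmitter
{
    void Transmit(double frequency, string dataStream);
}

//a fake that captures our assumptions; swapped for an adapter over the real API once it ships
public class FakeTransmitter : ITransmitter
{
    public double LastFrequency { get; private set; }
    public string LastDataStream { get; private set; }

    public void Transmit(double frequency, string dataStream)
    {
        LastFrequency = frequency;
        LastDataStream = dataStream;
    }
}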

Except, in my opinion, those assumptions are going to be wrong, and the approach moves integration testing to the very end of the project, because the team developing the dependency has tasks other than exposing the API, and the dependent team can “progress” without it. The situation reminds me a bit of a company Martin Fowler described at a ThoughtWorks event I attended. The story took place before the broad adoption of continuous integration software: he came into a company that had completed a project and was left with “only the integration work”. The only problem was that nobody knew how long that integration work was going to take.

Having said that, the approach has its merits – I remember getting quite far with an agreed-upon database schema that was to be populated by a service that was still in progress and integrated with a different system, and it worked out reasonably well for the most part, notwithstanding a few integration bugs where assumptions about data types and column names turned out not to be true.

On a related note, I also really like James Grenning’s chapter (even though the cover says “Robert C. Martin”, there are a lot of guest chapters), in which he presents the idea of approaching a third-party API using “learning tests”: rather than experimenting with a third-party library in an ad hoc manner, we learn about it by writing unit tests for it. When I was messing around with Facebook programming, TDD was the first thing to go as I waded through often inaccurate documentation and used trial and error to find out what the API calls did. Of course, my learning is now captured only in the software I wrote, and Facebook are notorious for making changes that break their API, so automated tests would have value. But I think the chapter goes too far when it says that unit tests of the third-party API should give us confidence about upgrading. We write those tests when we first start learning the library, and the way we use it in production code is going to outgrow the functionality we initially tested, so unit tests of the API cannot substitute for integration tests that exercise the API the way the production code exercises it.

Integration tests aren’t given much attention generally; in the chapter after James Grenning’s there’s a chapter on unit tests that says “You should be able to run the tests in the production environment, in the QA environment, and on your laptop while riding home on the train without a network.” That doesn’t make much sense if you followed the previous chapter’s advice and wrote unit tests for third-party libraries you depend on, which might be exposed as web services.
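
Coming back to learning tests for a moment, here is the flavour of thing Grenning describes, sketched against a BCL type for brevity (a real learning test would target the third-party library you actually depend on):

using System.Text.RegularExpressions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class RegexLearningTests
{
    //captures our assumption about how Regex.Split treats adjacent delimiters,
    //so an upgrade that changes the behaviour fails a test instead of production code
    [TestMethod]
    public void Split_OnAdjacentDelimiters_ProducesEmptyEntries()
    {
        string[] parts = Regex.Split("a,,b", ",");
        CollectionAssert.AreEqual(new[] { "a", "", "b" }, parts);
    }
}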

I am suspicious of anything that is presented with too much confidence as some sort of panacea, and while there is much in this book that is useful, a lot of it seems to be common sense about what is bad practice, mixed in with dogma about what is best practice. For instance, Bob does not like something I’ve been guilty of in the past: writing a while loop with an empty body. The problem is that it can confuse you into thinking the code below it is executed inside the loop, so he’d like you to put the semicolon on its own line, like this:


while (dis.read(buf, 0, readBufferSize) != -1)
;

But is that semicolon on its own line really that noticeable, or would it be better to do this?


while (dis.read(buf, 0, readBufferSize) != -1) { }

The problem is real, but the solutions are more arbitrary than the writing style makes it seem.

And for a book that presents itself as a sort of The Elements of Style for coders, it’s odd to see some fairly startling typos. For example, in the chapter on concurrency we are told that

Attempts to repeat the systems can be frustratingly.

Huh?

I realize this may be seen as nitpicking, but nitpicking is the mindset the topic of the book puts the reader in. In fact, I am being harsher in this review than I meant to be. The chapter on good and bad comments (and the idea that we should favour “self-documenting” code over comments) I found quietly revolutionary, in a world where the coding standards of many companies and universities regard comments as intrinsically “good” – even when they are the kind of comments a tool such as GhostDoc could generate automatically.
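
To give a made-up example of what I mean, here is the kind of comment that adds nothing a reader couldn’t get from the signature:

using System.Collections.Generic;

public class CustomerRepository
{
    private readonly Dictionary<int, string> names = new Dictionary<int, string>();

    /// <summary>
    /// Gets the customer name.
    /// </summary>
    /// <param name="customerId">The customer id.</param>
    /// <returns>The customer name.</returns>
    public string GetCustomerName(int customerId)
    {
        return names[customerId];
    }
}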

I guess my only real objection is to the tone of the book, which seems to regard itself as some sort of bible – the same way the person who introduced it at work seems to regard it.

would be cool if…

Posted by: admin on: February 5, 2011

This week I had to stay back till 12 a.m. doing a release of some fairly isolated urgent changes. Although my workplace has a policy of weekly deployments, I think everything other than my changes could have waited, but with the way the build is set up and the way the code is structured (one massive solution containing about 20 different websites), you have to release everything or nothing. So QA needed to test everything, and having found bugs, they had to make the difficult decision of whether to release with known issues (unrelated to the urgent fixes) just to get this particular bit of UI out. We ended up prioritizing some fixes and going live with other known issues that were considered less critical.

A lot of projects I’ve worked on have been like this – lots of different features that may or may not have interdependencies, dumped into one solution because it’s “easy”. Except it’s not easier to release.

At my old company we used unsigned assemblies, so if you only needed to change the service layer, for example, you could just drop that into the bin folder rather than do a full release.

I must admit I don’t have heaps of experience with the need to sign assemblies, but I believe the idea is that without signing, a hacker who manages to break into the production machine can modify an assembly using something like http://sourceforge.net/projects/reflexil/ to do dodgy stuff, and the website will continue to “work” – except suddenly the service layer is sending everyone’s password to me, or whatever. Also, by skipping the full deploy you lose its built-in protection against forgetting to release changes that need to go out together.

Anyway, maybe the best of both worlds would come from using the Managed Extensibility Framework, which is built into .NET 4. It helps you make it so you can drop assemblies containing “plugins” for your application into a designated folder, without having to rebuild. From my very limited knowledge of this framework I thought it mostly targeted desktop and Silverlight apps, but it seems it can be applied to websites as well, and there are examples using MVC. You can also validate that you only load plugins signed with the right public key.
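
Based on my admittedly limited reading, a minimal sketch might look something like this – the plugin contract, class names and folder path are all hypothetical:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

//contract shared between the host website and the plugin assemblies
public interface IReportPlugin
{
    string Name { get; }
}

//lives in a separate assembly dropped into the plugins folder; no host rebuild needed
[Export(typeof(IReportPlugin))]
public class SalesReportPlugin : IReportPlugin
{
    public string Name { get { return "Sales"; } }
}

public class PluginHost
{
    [ImportMany]
    public IEnumerable<IReportPlugin> Plugins { get; set; }

    public void LoadPlugins()
    {
        //scan the designated folder for assemblies that export IReportPlugin
        var catalog = new DirectoryCatalog(@"C:\MySite\Plugins");
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); //populates the Plugins property
    }
}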

It would be pretty cool to be able to pick and choose which parts of a complicated website to deploy, and to roll back new changes that are causing problems without rolling back everything. Not sure if it would complicate a full deploy or make things harder to debug, though… and database changes that go with code changes are always fun to keep track of and try to roll back.

You know, Mr. Blog that nobody reads, I’m really tired after the 12 a.m. deployment, and I’m thinking I really need to do whatever it takes to find a life outside of programming.

We’ve got a vague requirement at work for performance monitoring around all our products, to make sure we are meeting SLAs for uptime and responsiveness. What we’re trying to get to is not relying on users ringing us up and complaining to tell us when one of our applications is inaccessible or slow, and then ideally being able to drill down and see which bit of code or infrastructure is causing problems.

Being such a vague requirement, everyone including me had a different idea of how to tackle it, so we have a pretty wide variety of attempts at monitoring tools – we even have something with a UI written in Silverlight. I didn’t have much time to add monitoring for my project (it seems like one of those things that everyone understands is important but nobody wants to invest much in), but now that the smoke has cleared I wanted to learn some more about this area.

Except monitoring doesn’t exactly seem to be a hot topic and for me it was a bit hard to know where to start googling.

The Five Essential Elements of Application Performance Monitoring is a strange, vague book I found on the topic. In the introduction there’s this explanation for the fact that it’s a free download:

For several years now, Realtime has produced dozens and dozens of high‐quality books that just happen to be delivered in electronic format—at no cost to you, the reader. We’ve made this unique publishing model work through the generous support and cooperation of our sponsors, who agree to bear each book’s production expenses for the benefit of our readers.

[...]

I want to point out that our books are by no means paid advertisements or white papers. We’re an independent publishing company, and an important aspect of my job is to make sure that our authors are free to voice their expertise and opinions without reservation or restriction. We maintain complete editorial control of our publications, and I’m proud that we’ve produced so many quality books over the past years.

Well, the entire book never mentions a single specific “APM solution”, but every so often it drops in screenshots of an “exemplary” one. It turns out these are from a product called Foglight, which – once I was armed with the performance monitoring terminology the book taught me – was pretty much all I could find on Google anyway.

Pretty clever marketing, except that the .NET version doesn’t seem to have a free trial available, and when I tried to find out how much it costs I ended up on a page inviting me to request a quote – never a good sign, in my opinion.

On the other hand, a product called New Relic RPM does provide a free lite version and a demo of the “gold” version that seems to do exactly what I want (although the fully featured version is fairly expensive and seems like it would involve allowing a third party to store our performance data).

I think the informative ebook as marketing ploy was a reasonably cool idea, but the more hurdles you have to jump through to try something out the more likely you are to give up.

no CAPTCHA for the human heart

Posted by: admin on: January 21, 2011

So Xmas is gone, but I think the reason for the cliché that you don’t have to be Christian to appreciate it is the universal appeal of something for nothing. Hence the appeal of “The Secret” and “The Power”, and the reason spam will always exist. To give an example: I’ve lost some motivation to blog, but lately I get frequent emails asking me to moderate comments on my previous posts, like this one:

I love www.playfulprogramming.com! Here I always find a lot of helpful information for myself. Thanks you for your work.

I thought the inclusion of my URL in the comment to make it seem more personal was a nice touch, but a Google search shows that “MampRirmrem” is free with his/her/its “love”. Even so, when I get a message like this the instinct is to click “approve” on this approval of me. I want to believe that even though I haven’t written on my obscure blog for more than half a month, the aftershocks of the previous lyrical posts I’ve dropped continue to shake down fruit.

It’s the desire to believe we’re loved that made people open email attachments containing the Love Letter worm. And it’s why, if you manage to get a spammy post onto someone’s blog, every so often people might follow the link to your spammy site and – in the case of the above spam, which was for a dating site – sign up looking for love. The desire to be loved is one reason people take the time to use a blackhat SEO technique called “referrer spamming”, where webmasters click your bogus referrer link in their logs to see who is linking to them. I’ve installed Akismet on my blog, but when a person goes online they seem to have to relearn real-world cynicism all over again, and there is no spam blocker to protect the human heart.

On the flip side, I believe the internal “spam blocker” I’ve developed over my 30 years has become susceptible to false positives. Recently, while I was in the middle of kind of missing my old job (where I had been counter-offered) and feeling ambivalent about my new job, I was offered a role through someone I used to work with. It would have had better benefits as well as more responsibility, and more trust “out of the box”, since it involved working with someone I had worked with before. Yet I rejected it without much thought, simply because my gut – my internal spam blocker – decided it sounded too good to be true.

Indeed, one of the reasons bigger companies like the one that I work for put their devs through a fairly arduous interview process is that it’s human to value things according to the effort we put in to get them.

ELAINE: I got a card, and they stamp it every time I buy a sub. 24 stamps, and I become a submarine (makes a gesture) captain.

JERRY: What does that mean?

ELAINE: (Embarrassed) Free sub.

[...]

MANAGER: (To a waiting Elaine) Lady, if you want a sandwich, I’ll make you a sandwich.

ELAINE: (Whining) I want the one that I earned.

From the Seinfeld episode “The Strike”

Perhaps that’s the same reason I sometimes miss my old job – not the rewards but the effort/reward ratio.

Stockholm syndrome is where you basically fall in love with your captors. Let’s put it in terms of software. You get a job, you hate your job, it’s awful, it’s literally torture, and you’re trapped. But after a while, your only redemption is found through those that have put you in that situation. You live for your managers, you live for the march. You know you’re going to die, but you love them regardless.

“Scars” podcast on “This Developer’s Life”

I have been messing around at Pex for Fun and had a go at creating my own programming puzzle.

I now have a new appreciation for the challenge of thinking up a programming problem that’s not too easy but also not a “guess what’s in my head” question. Unfortunately some experienced technical interviewers don’t know the difference either, and unlike pexforfun.com, those people don’t come with a “fishy” button that allows victims to flag the bad behavior.

The creators of Pex for Fun mention its potential as a teaching tool, but in the right hands I could see it also being quite good for automating early rounds of a company’s selection process for developers.

Speaking of which, someone’s already added a Pex implementation of FizzBuzz, a simple programming problem that apparently trips up a lot of interviewees:

Most good programmers should be able to write out on paper a program which does this in under a couple of minutes.

Want to know something scary? – the majority of comp sci graduates can’t. I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.
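
For anyone who hasn’t met it, FizzBuzz asks you to print the numbers 1 to 100, replacing multiples of three with “Fizz”, multiples of five with “Buzz” and multiples of both with “FizzBuzz”. One C# take on it (careful about trailing newlines, for reasons that will become apparent):

using System;
using System.Linq;

class FizzBuzz
{
    static void Main()
    {
        var lines = Enumerable.Range(1, 100).Select(i =>
            i % 15 == 0 ? "FizzBuzz" :
            i % 3 == 0 ? "Fizz" :
            i % 5 == 0 ? "Buzz" : i.ToString());

        //string.Join leaves no trailing newline after the final number
        Console.Write(string.Join(Environment.NewLine, lines));
    }
}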

Full disclosure: I did not win the FizzBuzz duel on the first click of “Ask Pex”, but in my defence, that was because my implementation left trailing newlines after the final number – which still satisfies the requirements given by the puzzle, even if it doesn’t match the “secret implementation”… um, yeah, I think I’ll end this post here. :)

By the way, a strong hint on how to solve my puzzle is given by this Korn video. Either that or I’m just trying to link to something popular in a pathetic attempt to get some search engine traffic on this blog post.

Server Errors

Posted by: admin on: December 29, 2010

That was unexpected.

Saw a bit of a spike in traffic yesterday and worked out it was because Joshua Kerievsky kindly tweeted about my review of his book – and then, of course, my site went down. From what I can tell, this was more an unlucky coincidence than anything to do with the slight increase in traffic.

Nevertheless I’m more motivated to keep writing now, so thanks Josh for regularly googling your own name (joking).

Book Review: Refactoring to Patterns

Posted by: admin on: December 28, 2010

This book was a timely read for me, because at work I recently ended up having to reinvent someone’s medium-sized project because it contained some of the worst-structured code I’ve ever seen. Apart from the poorly named variables and the comment-free, kitchen-sink methods and classes, what took it from difficult to impossible (and others before me had tried and failed to leverage this code) was that it seemed to think itself extensible because it included classes named “strategy” and “mediator” – in other words, it attempted to use inappropriate patterns. It had the “strategies” doing data access, for example. At programming school they taught us that patterns are supposed to help new devs act like experienced devs, but code like this shows that patterns can make things worse.


Facebook Programming: The Paralympics of Web Development

Posted by: admin on: December 12, 2010

Name  | Description                      | Permissions                       | Returns
likes | The number of likes on this post | Available to everyone on Facebook | A JSON number

- from the section on “Post” in the Graph API documentation

Um, not really.

First of all, it looks like the Graph API actually returns this property as an object with a count property as well as a list of the people who like the post – not “a JSON number”. That’s fine for me, but probably not that fine for anyone whose app was expecting a number, since presumably once upon a time it did work as documented.
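
If your app did assume a number, a defensive parse along these lines would keep it working either way – my own sketch, assuming the object form exposes its total in a “count” property as described above:

using System;
using System.Collections.Generic;
using System.Web.Script.Serialization;

public static class LikesParser
{
    //accepts either the documented "JSON number" or the object form with a "count" property
    public static int GetLikeCount(string postJson)
    {
        var post = new JavaScriptSerializer().Deserialize<Dictionary<string, object>>(postJson);

        object likes;
        if (!post.TryGetValue("likes", out likes) || likes == null)
            return 0;

        if (likes is int)
            return (int)likes;

        var likesObject = likes as Dictionary<string, object>;
        if (likesObject != null && likesObject.ContainsKey("count"))
            return Convert.ToInt32(likesObject["count"]);

        return 0;
    }
}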

Whenever we make an intentional breaking change to Platform, we provide a migration to give developers the ability to update and test their applications.

- from the latest story in the news section at http://www.facebook.com/developers/

Oh really? I guess if there is no “migration path” that’s ok because we know it’s not an “intentional breaking change”.

For example, it seems Facebook decided that this extra info in the likes property was too sensitive to be “Available to everyone on Facebook”, so if your app tries to get the likes on a post with an authentication token from just anyone, you get a nice empty object – unless you are the person who made the post or a friend of that person.

The good news is that if all you want are the likes on a post, you don’t appear to need the “read_stream” permission, which you do need to read the post itself. If possible, I’d like to avoid scaring away potential users of my app with a request for a laundry list of scary-sounding permissions.

Speaking of which, the first solution I could think of for my requirement – a page that counts up the likes for posts made through my app – involved asking for the ‘offline_access’ permission, which gives you an authentication token for the user that does not expire. The problem is that when you ask for it, Facebook prompts your user to grant the permission and describes it like this:

Access my data anytime: <appname> may access my data when I’m not using the application

It only means that the app can access the “data” the user has already given it permission to see while they’re not signed in, but it sounds incredibly broad and dodgy, and I’m sure at least a few people who might have given an app a try would say “no thanks” once they see this.

But since two friends who are using the app to post to each other’s walls seem to be able to count the likes on each other’s posts without special permissions, I might be able to rethink a few of my requirements to avoid having to ask for offline_access.

Of course it’s all subject to change at any time, and as with the client-side stuff, you have to take any documentation you find with a grain of salt. I can’t believe it’s taken this long to get this far implementing what I’d consider a simple app idea, but it’s an interesting challenge being forced to do things that should be simple in an unusual way. It reminds me of this game a friend sent me the other night.

Good to see I’m not the only one who finds this a struggle.