Insights and discoveries
from deep in the weeds

Wednesday, December 21, 2011

Panasonic UB94 wireless adapter driver

Note: Please look through the comments if you're not using Windows 7 64-bit or you can't get it to work. There are a lot of notes with other suggestions. Good luck!

Got a new Panasonic flat screen TV for Christmas from my wife (she must really love me!!) and it came with this little USB wireless Ethernet adapter, I guess so they can say the TV supports wireless networks. Like any self-respecting nerd I have a wired network in my house and a switch underneath my TV so of course I don't need this. But do I look like the guy that's NOT going to figure out how I can use it on a computer?

It was a little tricky to track down drivers that work with this so I thought I'd share for future googlers. The device reports itself as a "UB94" when you plug it in. I figured out from some sketchy "driver download"/spam sites that it has an Atheros 7010 chipset, which supports 802.11n. Hey, an upgrade from my old G adapter!

Atheros doesn't seem to provide reference drivers directly to the public, unfortunately. Some more searching revealed that this chipset is shared by the Netgear WNA1100, for which drivers can be downloaded from Netgear, and probably by many other devices as well.

To get this thing to work on Windows 7 follow these steps. For Windows XP, per a user's comment, it's almost the same, with one minor change noted in the steps below. I've only tested this on Win 7 though.

In the interest of not infringing on anyone's copyright, I'll just tell you what to edit rather than posting a driver inf file.

  1. Download & install drivers from Netgear for the WNA1100.

  2. Locate the driver inf file, probably:

    C:\Program Files (x86)\NETGEAR\WNA1100\Driver\WIN764\netathurx.inf

  3. Under the [Manufacturer] section, add one line:

    %PANASONIC%   = Panasonic, NTamd64

  4. Add a new section after the [VERIZON.NTamd64] section (actually it probably doesn't matter where you add this, but this seemed as good a place as any). The section name matches the "Panasonic, NTamd64" mapping added in step 3:

    [Panasonic.NTamd64]

    ; DisplayName                 Section                 DeviceID
    ; -----------                 -------                 --------
    %PANASONIC.DeviceDesc.7010% = ATHER_DEV_7010.ndi,     USB\VID_04DA&PID_3904

    For XP, it's the same except the section should be called:


  5. At the very end in the [Strings] section add this line:

    PANASONIC.DeviceDesc.7010    = "Panasonic UB94 USB Adapter"

    This is the text that will appear in device manager. Feel free to personalize.

After that, I just went to the broken device in Device Manager and updated the driver, pointing it to the folder above. If you want, you can uninstall all the Netgear software and just keep the three driver files -- that is all that's needed.

There, you just saved $19.99!

Tuesday, November 29, 2011

CsQuery 1.0 is imminent

In the last four months I've done a lot of work on CsQuery - on github - a C# jQuery port. I have been using it extensively in a few web site projects and it's quite solid. I've ported most of the jQuery tests that are relevant (dom manipulation, traversing, selection, attributes, utility functions).

Rather than update the list of implemented methods, I've compiled a list of the methods that still remain to be implemented. There are not many. :) Everything else that's not in CsQuery already is browser-DOM specific (e.g. related to events, callbacks, etc.) or is a utility function that I don't think is useful in C#.

jQuery Methods NOT Implemented In CsQuery


.. plus a few CSS selectors. Additionally, there is extensive support for dynamic/expando objects using a special JsObject class, and CsQuery.Extend (which works pretty much as you would expect); anything that implements IDictionary<string,object> can be used as the target for object creation methods. This lets you work with objects in JSON form, or dynamic objects, almost seamlessly, e.g.:
// Create a new dom from a string of html

var myDom = CsQuery.Create(html);

// "AttrSet" and "CssSet" are the same as Attr(object) and Css(object) - since in C# we can't
// overload return types. Attr(string) and Css(string) return the values of named items in
// CsQuery. This convention is used for methods that can be passed a string of JSON data.

myDom["div.sidebar"]
    .CssSet("{'border': '1px solid black', 'font-weight': 'bold'}");

// Create a new anonymous object. You can also use any conventional object or expando object
// as a source parameter in CsQuery.Extend.

var data = new { pageName = "My Home Page", url = "/myhomepage.html" };

// "null" below is a convention for the empty object {}. You can also pass a new expando object;
// this is just shorthand. The parameters match jQuery.extend. This merges the properties of data
// and the object created from the JSON string passed. There's also a CsQuery.ParseJSON method
// for explicitly creating a new expando object from JSON. Finally, CsQuery.Extend will work
// with conventional objects as the target (first parameter). In this case, it will only update
// existing properties with the new data, since you can't add properties to an existing non-expando
// object.

dynamic dataExtended = CsQuery.Extend(null, data, "{ 'access': 'all' }");

// Attach the merged object to the element as a JSON data attribute (CsQuery.Data,
// described below)

myDom["div.sidebar"].Data("page", dataExtended);

// outputs:
//   <div class="sidebar courier" style="border: 1px solid black; font-weight: bold; display: none;"
//       data-page='{"pageName": "My Home Page", "url": "/myhomepage.html", "access": "all" }'>
//   </div>

There are still some other features I want to implement, but I am hoping to get some examples together and create a version 1.0 distribution in the next month or so. The code is solid and well tested, and it makes server-side HTML management a joy compared to WebControls, Razor/HTML helpers, and so on, where you have limited control over server-side HTML layout. And your brain can work with HTML exactly the same way on the server as it needs to on the client. Your whole browser DOM is right in front of you. It's great for scraping too.

I have not done extensive performance testing, but I have done a little, and it's easily fast enough for real-time HTML parsing. Of course, if you plan to use it on something serving a thousand pages a second, this might matter, but I suspect most people would find it plenty fast. On my laptop, it can parse a 5 megabyte HTML file with over 100,000 unique nodes (the entire HTML 5 spec) into an indexed DOM in 2.5 seconds. Selecting all the DIVs (over 3,300) takes less than 1/100th of a second. Now, 2.5 seconds is an eternity for a web server, but this is an intentionally unrealistic situation; there would be little reason to parse a big page of static HTML that you had no intention of manipulating. A web page that's 20K, which is more typical, would take less than 1/100th of a second. There's definitely room to make it faster, too, but it's plenty fast now, and I suspect it's a lot faster than manipulating and rendering a page with something like WebControls anyway.

Features that I still want to add:

  • Asynchronous HTTP gets - right now when using CsQuery.Server().CreateFromUrl() to load a DOM from the web, code execution is blocked while the get is performed. This is probably fine for some basic web scraping, but will slow things down a lot for any substantive real-time usage. I started coding for an async model but have not finished yet.
  • Form postback management - there's a basic tool for repopulating form elements from their postback data in the Server() module. This needs to be fleshed out and tested a bit, though, because I have not used it too much as I haven't created a lot of conventional HTML forms lately.
  • Framework and view engine - I've developed a useful, simple framework as part of one project. This includes some custom HTML tags like <csq-include src="..." />, <csq-when [conditions]>...</csq-when> to do things like server-side includes, environment-specific includes, and so on. These are not really specific to CsQuery but rather CsQuery is used to implement them, and they make working with pure HTML a lot easier.
  • Templates - something like the jQuery template plugin. Of course it's a piece of cake to write CsQuery code to do simple substitutions, but it would be nice to integrate some of that functionality into a framework.
  • Client script communication - one of the things that CsQuery makes very convenient is preconfiguring data for client-side controls. For example, say you have a grid control. A typical usage might be to initialize the control with an ajax request upon first page load. This causes the page to be rendered with no data at first, then perhaps an ajax loader shown to the user while it gets the default data. Why not pass the first batch of data directly to the control? It's easy to use CsQuery.Data() to pass data as an attribute of an HTML element, then in your javascript, just grab it with jQuery.Data(). This requires using some HTML element as a payload container. Not a big deal, but I would like to standardize this convention and create methods to abstract it.
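On the client, picking that payload back up is a one-liner with jQuery's .data(); the sketch below just shows the underlying parse step (the data-page attribute string mirrors the rendered output shown earlier in this post):

```javascript
// Server side, CsQuery.Data() wrote the first batch of data into a data-page
// attribute; the string below mirrors the rendered output shown earlier in this
// post. Client side, jQuery reads it back with $(el).data('page') - this is the
// parse step that call performs under the hood.
var attr = '{"pageName": "My Home Page", "url": "/myhomepage.html", "access": "all" }';

var payload = JSON.parse(attr);

console.log(payload.pageName); // "My Home Page"
```

Since jQuery 1.4.3, .data() parses JSON-formatted data-* attributes automatically, so the server-side and client-side conventions line up without any glue code.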

Anyway, it's getting close, but feel free to download the project from github and give it a try. The basic usage could not be simpler.

var myDom = CsQuery.Create(htmlString);
var content = myDom["#maincontent > div.title"];
var newContent = myDom["Hello world!"].Css("font-weight","bold");

Thursday, November 10, 2011

How to run NUnit tests in Visual Studio 2010/MSTest

There are probably millions of lines of test code written against NUnit, and most people take no joy in switching to MSTest just so they can use Visual Studio's IDE. There is an extension called Visual NUnit which adds some support. It's actually a really nice, solid extension, but unfortunately it doesn't solve the most basic problem: being able to debug tests from directly within your project.

MSTest is unfortunately closed, sealed, locked. It's virtually impossible to extend it. But there is a quick and dirty way to get around this problem and have the best of both worlds: NUnit as a testing framework, but still have the ability to run (and debug!) tests from within VS. And you don't have to sacrifice anything - they will still work in any NUnit test runner.

Step 1: Assert Your Independence

Add both testing framework namespaces:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using NUnit.Framework;

Create aliases in your namespace declarations:

using Assert = NUnit.Framework.Assert;
using CollectionAssert = NUnit.Framework.CollectionAssert;
using StringAssert = NUnit.Framework.StringAssert;

This causes the Assert references to unambiguously refer to the NUnit versions. That covers the objects; then there are all the attributes used to mark things for the framework. Luckily, most attributes do not conflict. Description is an exception: you'll have to pick a framework if you use this attribute, e.g.:

using Description = NUnit.Framework.DescriptionAttribute;

will cause all appearances of Description to be recognized only in NUnit.

Step 2: Search & Replace

You need to add all the corresponding MSTest attributes to get the IDE runner to recognize things. Just add both attributes to each class/method, e.g. [TestClass, TestFixture]. The pairs (MSTest / NUnit) are:

  • Identify a class containing tests: [TestClass] / [TestFixture]
  • Run once at start per class: [ClassInitialize] / [TestFixtureSetUp]
  • Run once at end per class: [ClassCleanup] / [TestFixtureTearDown]
  • Run before each test: [TestInitialize] / [SetUp]
  • Run after each test: [TestCleanup] / [TearDown]
  • ... and, of course, a test: [TestMethod] / [Test]

(Assembly-level setup works differently: in NUnit a class is marked as [SetUpFixture] and has [SetUp] and [TearDown] methods, while the MSTest counterparts, [AssemblyInitialize] and [AssemblyCleanup], just apply to static methods, with a limit of one each per assembly.)
Step 3: Instance Setup/Teardown

There are some other differences. The setup methods for MSTest must all be static, whereas NUnit allows them to also be instance methods. Recoding everything to use static methods is a headache, so I just do this instead. Chances are, your unit tests already inherit from some other class. If so, just change your template class; if not, add one. To fake the NUnit instance setup/teardown methods, just use the constructor and destructor of your base class:

public class Test
{
    public Test()
    {
        Setup();
    }

    ~Test()
    {
        TearDown();
    }

    public virtual void Setup()
    { }

    public virtual void TearDown()
    { }
}

I've basically just skipped out on using any of the framework class-level setup/teardown methods, and use the regular class constructor/destructor instead. In each unit test, you just override Setup and TearDown. You probably do this already if your tests inherit from a base class; this just changes the mechanism by which they are invoked. I haven't thought too much about possible side effects of this, but it would seem to be functionally equivalent.

Step 4: TestContext

If you happen to be using TestContext, this will be another conflict, since both frameworks have a same-named object. The MS static initialization methods take it as a parameter, too, whereas for the NUnit framework, it's a static object you can always access. An easy solution is just to alias the MS one, since you probably haven't written any code against it yet, e.g.:

using MsTestContext = Microsoft.VisualStudio.TestTools.UnitTesting.TestContext;
using TestContext = NUnit.Framework.TestContext;

Now, if you actually want to use the MS static methods, you can just use MsTestContext as a parameter, and TestContext will refer unambiguously to the NUnit one.

Step 5: Convert to Test Project

Visual Studio won't give you the testing tools until you add this to the .csproj file of your test project. It goes under Project/PropertyGroup:

    <ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
Step 6. Develop, Test, Not Necessarily In That Order!

You are now done. If you've been careful, nothing you've done will in any way break this when running under NUnit, and all these tests will now run directly in the Visual Studio IDE as well.

In Summary:

  • Alias conflicting objects
  • Use both the NUnit & MS attributes on each class/method as appropriate
  • Deal with non-implemented instance setup & teardown methods using constructor/destructor
  • Convert to a test project

... and you should be good to go. While it may take a little bit of work to update large existing test suites, it's mostly search and replace. For new work, just build this into your template, and it's already done.

Tuesday, November 8, 2011

IE7 & quirks removes trailing space from empty HTML elements

Pull up this bad boy in IE7 standards or quirks mode.

Using `innerText` or `innerHTML` causes the space after an element to be erased. E.g., if you take

"this is some <span id="field"></span> inline text"

and assign innerText "more" to that span, you get

"this is some <span id="field">more</span>inline text"

which renders as

this is some moreinline text

The solution is to start with something inside the span, e.g.

"this is some  <span id="field">&nbsp;</span>inline text"

I can't believe I've never come across this before, but googling didn't turn up anything about it. Another irritation for supporting old IE.

Thursday, October 13, 2011

Another new ImageMapster feature: Area Zoom

A couple months ago I added the ability to dynamically resize image maps. This made a lot of other things possible, and I've just gotten around to integrating one of them into the codebase. (And I should also mention that the codebase just went over 100K and I will probably fragment out the major features for release 1.3 so you can build a version only with the features you need).

The new Zoom feature lets you zoom in on a specific area. It's pretty limited in terms of features/flexibility right now, but without too much code you can do some neat stuff. Here's a demo I whipped up that lets the map "follow" you and zoom in on areas, which could be very useful when presenting a user with an image map that contains some large, but some very small, areas.

Fiddle with it

Hover over any area for a second to see it zoom. (If this demo doesn't work, open this post alone -- the include script could conflict with the one from the earlier resize demo)

There are some issues to be worked out. It's not too hard to make things go completely haywire by mousing all over the place - mouse positioning data is screwed up in Blogger, not sure why - and the style shifts things a little bit when zooming. But the basic functionality is there.

Friday, October 7, 2011

HTML5, Firefox, canvases, oh my.

There's been a bit of a surge in feedback about ImageMapster in the last few weeks, which is fantastic because it means people have been using it. At the same time, it has made me acutely aware of the complexities of writing software that attempts to abstract a difficult problem into a general-purpose solution.

The problem I'm trying to solve is simple. Take an image. Create some rules about what happens when the user interacts with it. That's not that hard. They did it with Pong in 1972. Of course, Pong only had to work on one platform. Actually, the platform it worked on was designed only for Pong.

My problem is a little more complicated. This plugin has to work on a whole bunch of different web browsers. Each of these browsers has a whole bunch of different versions.

In modern browsers, the software uses the HTML5 <canvas> feature to create its effects. HTML5 is an emerging specification. While this particular feature is well defined and has been available in common browsers longer than most, it's also one of the more sophisticated capabilities of a browser. It's like having your very own mini-Photoshop running in a browser. You can really do all kinds of crazy stuff.

Imagine if two different companies were trying to create Adobe Photoshop from a set of specifications. What are the chances that they would work the same way? With web browsers, though, it's about a half-dozen completely independent code bases.

Of course, canvases aren't nearly as complicated as a full design package, but there's still a lot of room for nuances. And since the spec is emerging, it has been implemented in various browser versions with varying degrees of success for a few years. Ideally, you want to code in a way that avoids bugs that may exist in certain implementations.

I recently fixed a bug that appeared with the release of Firefox 6. It had to do with masks. One of the features of ImageMapster is the ability to create hotspots in images, but also create an exclusion area. In the picture at left, the circle in the middle of Texas is a mask. (Technically, Texas is the mask, since it allows the hole to show through, but I thought it would be awkward to refer to the transparent areas as "holes"). In this configuration when someone mouses over Texas, the area in the middle will not get highlighted. If they move the mouse into the mask area, the highlight disappears.

Anyway, after substantively changing the approach used to create the masks to get around the problem with Firefox 6, I found that masks no longer worked in Firefox 3.6!

In old Firefox canvas implementations, there appears to be a problem when using globalCompositeOperation = "source-out" -- a critical feature for rendering masks - to make successive copies of canvases. That is, to create the effect, I would render all the "holes" on one canvas, then render that onto a canvas with the highlighted effect with "source-out", causing the holes to be excluded. Finally, though, I need to render those onto a third canvas (the one that gets displayed) which contains other highlighted effects. Somehow, this three step process completely fails in older Firefoxes.

Previously, I had achieved the effect with a simpler, two-step process: render all the masks onto a canvas, then, on the same canvas, set "source-out" and render the highlighted effect. This works on every browser except Firefox 6 and later, which for some reason would do nothing with the masks.

I still haven't managed to isolate the exact nuances of the different behavior in different Firefoxes, because it involves many different actions in sequence on a canvas: paths, clipping, fills, context states, and so on. But suffice it to say, if I want this to work on all known versions of FF (and versions 4 and lower are still not uncommon), I will need two entirely different code paths for this feature.

Then, of course, there's VML, the vector markup language used by Internet Explorer prior to version 9. This is completely different again. In some ways, though, it's refreshingly simple. It works. It doesn't do everything that canvases do, but at least the implementation is consistent across all browsers that use it (that is, Microsoft), and at least it will never change again.

And finally, there's Apple computers. I test this thing on an iPad regularly. I don't own an Apple computer, though. My assumption has been: if it works on Windows Safari, and it works on my iPad, it's probably got a pretty darn good chance of working on Mac Safari. I mean, they're all WebKit in the first place, and you'd figure that a mobile touch-screen device is a pretty good stress test for this kind of software.

No such luck. I got a user reporting no go on his Mac this week. Then another user tells me it doesn't work in Chrome, of all things! Chrome is what I use 90% of the time. I might forget to run tests in Firefox 3.6, but I never would push something that was broken in Chrome. What is going on here? Well, most likely it's something outside of ImageMapster: a local configuration issue, or a firewall blocking images fetched with javascript, possibly due to CORS restrictions. But it's hard to troubleshoot, and as far as someone trying to use this can tell, it's just broken.

So what is the point? Writing software that does complicated things on the Web and works for (almost) everyone is hard. But if it doesn't work for almost everyone, it's also useless. At the end of the day, it has to work. And this is the real reason why the next generation of the web's evolution has been slow. HTML5 canvases have been in Firefox, Chrome, and Opera for several years now, yet few web sites use these features. The big reason is that Internet Explorer versions older than 9 still have a substantial market share, and few people want to try to accomplish such things using VML. But another reason is that the implementations remain inconsistent across browsers, and it's difficult to troubleshoot because not that many people are doing it. There's just not much information out there about these nuances.

But to pass on it entirely because of these problems is to remain stuck in the last decade. There's tremendous power in the modern web browser. I mean, you can play Angry Birds right in your browser!

That makes my plugin look like child's play. Of course, they make no bones about it working in anything other than Chrome. (It supposedly works in Firefox too, but isn't really fast enough). The future is here - it just waits on the developers to embrace it, warts and all.

Friday, September 23, 2011

ImageMapster 1.2 released

I "finalized" version 1.2 of ImageMapster, which includes a few major new features and a slew of other improvements. Of course, even after waiting two months to officially call it 1.2, I found a couple bugs within hours of releasing it. Such is life. So we're already at 1.2.2 which corrects a couple minor issues on the initial release.

I also finished a significant update to the project web site which I hope will make it a lot easier for new users to understand the plugin, its features, and getting started.

The new version includes many new features (most of which have been in the beta for a month or two):

  • Automatically scale image map data to match the effective image size using scaleMap option
  • resize method will resize an existing, bound imagemap dynamically with visual effects
  • includeKeys can be used to bind staticState areas to active areas so that mouse events affect the active areas. That is, if area A is staticState=false and area B is normal, using includeKeys='B' for the area A data means that mousing over or clicking area A will cause the action on area B.
  • Performance and stability improvements on startup with complex or slow-loading images across all browsers (some edge cases, especially with certain Firefoxes and IE7, didn't bind consistently).

  • ... and lots of other little improvements/tweaks/fixes. See change log on github for everything.
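The includeKeys scenario above might be wired up like this. A sketch only: staticState and includeKeys are named in the release notes, but the areas/key structure is my assumption about the plugin's per-area options, so check the project docs for exact names:

```javascript
// Area A is staticState: false (never highlighted directly); includeKeys ties it
// to area B, so interacting with A acts on B. The option names areas/key are
// assumptions here; staticState and includeKeys are from the release notes.
var options = {
    areas: [
        { key: 'A', staticState: false, includeKeys: 'B' },
        { key: 'B' }
    ]
};

// Binding would then be the plugin's usual call: $('img').mapster(options);
console.log(options.areas[0].includeKeys); // "B"
```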

Monday, September 12, 2011

Google search redirects: not necessarily a virus

I just returned from a glorious week in Maine. I didn't quite manage to escape from the infiltration of technology in my life, though. My position as the resident geek came into play when one of our guests noted that his google searches appeared to be redirected intermittently to random spam/ad/virus-smelling web sites.

My first instinct when something like this happens would be to assume their machine had a virus. However, in this case, their machine was an iPad. While not at all impossible, it seemed pretty unlikely. Then, it started happening to me too, on my nearly brand new Windows 7 laptop. I was sure I didn't have a virus and besides, what a coincidence that it happened to us both suddenly while on a new internet connection.

After some effort I realized that the Linksys WRT54GL router had been hacked, and the name servers hardcoded to IP addresses in the Ukraine. This isn't unique, nor new, but it was surprising. This is our own internet connection, and we set the router up. I'm not an idiot - or at least I didn't think I was. The root password for the router had been changed when the thing was set up, and there was no remote access allowed to the router. While it's all too common for people to get compromised because they don't bother to do any configuration when they set up a router, I'm not that person.

However, the password was not strong. It was a single English language word.

I am not sure how the router became compromised, since admin access to the box was only allowed from the private network. I haven't researched to find out if any other back door would allow access to it from the internet, or if the attack must have been sourced from a user of the router (perhaps from a virus-infected computer configured to conduct brute force attacks against its gateway?). Either way the point is, never make any assumptions about security.

My assumption was that since this was a private network with very few users, we didn't need a strong password for the router. This assumption didn't consider that an attacker could be from inside your network (a compromised PC), or possibly the router firmware could have bugs that can be exploited to grant access. I am not in control of every user of the network, so I can't make any assumptions. The access to the router should have been hardened as much as possible (on a consumer device like that, anyway).

Hopefully this post will help others trying to resolve this same problem - google searches on the terms in this post's subject returned few results, and none that identified the problem as a DNS or hacked router issue. Most discussion threads concluded the user had a virus on their PC. Check your routers, and make sure they're locked down!

Friday, August 19, 2011

New feature for ImageMapster: Resize image maps in real time

I have been adding a lot of features to ImageMapster, my jQuery plugin for imagemap manipulation, over the last couple months. Unfortunately it's been piecemeal, and I have not had time to update the (awful) project home page to demonstrate the new features. But this is pretty neat, so I wanted to put it out there before I fix up the web site. It's kind of a natural extension to that little resizer tool I have here... it took a bit of doing to make it all work with the internals of the plug-in, but now you can just plop in any image + imagemap and make it any size you want. ImageMapster will take care of scaling everything.

With a little cleverness using divs and hidden overflow, you ought to be able to make this zoom to an area that's been clicked. Start with a higher-resolution image than what you want to show on screen initially, and the detail will be preserved when you zoom. (That will get added to the plugin down the road too!)

First click on a couple states to make some selections... then click one of the buttons.

Does it work for you? It seems to be good in everything I've tried so far. Please let me know if it breaks on your browser.

(may break your browser if too big)

Try mousing over the map while it's resizing; depending on the browser, it even kinda works! I don't actually change the imagemap itself until the end, so it's not going to detect the mouse in the right position. But because of the way canvas proportions work, it will still highlight an area properly (maybe just not the one you were over while in mid-animation). It's interesting to see how the browser handles all that mayhem.

Code is pretty much this:

<img style="width:720px;border:0;margin:auto;" id="usa_image" src="" usemap="#usa">

$(document).ready(function() {

    // resize(width, height, duration): pass 0 for height to keep the aspect ratio
    $('#make_small').bind('click', function() {
        $('#usa_image').mapster('resize', 360, 0, 1000);
    });
    $('#make_big').bind('click', function() {
        $('#usa_image').mapster('resize', 720, 0, 1000);
    });
    $('#make_any').bind('click', function() {
        // '#new_width' stands in for the demo's size input box
        $('#usa_image').mapster('resize', parseInt($('#new_width').val(), 10), 0, 1000);
    });
});

Thursday, August 18, 2011

sharpLinter: run JSLint and JSHint in Visual Studio and from the command line

sharpLinter Resources

There are a few options out there for automating "linting" your Javascript within the VS IDE. The most prominent is JSLint for Visual Studio 2010, a VS extension. Then there are a bunch of relatively hacky ways that people have come up with to do this. None of them really did it for me, so I rolled something new. The result is sharpLinter, a C# command-line application and class library for running JSLINT (or JSHINT) against your Javascript files.

This work is based on Luke Page's early crack at linting in Visual Studio from late 2010. Luke went on to create the extension I mentioned before, offering some UI integration.

OK, so there's already a VS extension, why sharpLinter?

This plug-in is cool, but I found it had some annoying shortcomings. It kept forgetting what files I'd told it to exclude from processing, so before long I'd end up with thousands of errors from every 3rd-party script included in my project. These exclusions had to be specified in the GUI, one at a time or with multiple-click-select. Argh! There seems to be no way to just tell it to include or exclude entire paths or patterns, which is what I really wanted. Configuration was fairly limited, and the feature to skip blocks within files didn't seem to be recognized either. And finally, you were stuck with whatever version of JSLINT or JSHINT was compiled into it!

Well, all that could probably be addressed by the author or some other industrious soul. But ultimately, it was designed to be an extension for Visual Studio. While I wanted that, I also wanted something I could easily integrate with automated processes, or use as a quick and dirty way to run against any file in any situation. I wanted a library and command line tool.

It's not a real Visual Studio extension. That means you need to configure it as an "external tool." But it produces output formatted for VS, so you can still just click on a line in the output window and jump directly to that file and line. Frankly, all the config features in the VS extension seem like more of a hindrance than a benefit to me. The lint options are limited to what was part of the script when the extension was coded, and the settings all seemed to work erratically. All I really wanted was to set up a configuration for my project, then "run, click, and go to the error." That's all here.
Example output within Visual Studio.

sharpLinter is all these things.

But wait, there's more!

If you act now you also get instant minification!
sharpLinter can be configured to automatically minify your scripts after they pass validation, using the Yahoo YUI Compressor or Dean Edwards' JSPacker. Your choice, or let sharpLinter decide, and it will use whichever one produces the smallest script. There's even an option to preserve the first comment block of your script in the output, so you can keep your credit & license information intact, if you choose.

To keep you safe from version confusion, if this option is enabled and a script fails validation, any existing minimized script matching the output pattern for that file will be deleted.

How's it work?

Complete usage instructions and command-line options are in the readme on github. Basic command-line usage is as follows:
sharplinter [-[r]f path/*.js] [-o options] [-v]
        [-c sharplinter.conf] [-j jslint.js] [-y]
        [-p[h] yui|packer|best mask] [-k]
        [-i ignore-start ignore-end] [-if text] [-of "format"]
The options let you:
  • [-[r]f] specify a file, folder, or grep mask to parse. If [r] is included, will recurse subfolders too.
  • [-c] specify a file with global configuration options
  • [-j] specify the actual code that runs the checks (hopefully, one of JSLINT or JSHINT). If you leave this out, sharpLinter will look for a file called jslint.js in its executable directory. If that's not found, it will use the code embedded in itself (JSHINT as of this writing, 8/15/11).
  • [-o] specify options to pass directly to the linter
  • [-p[h]] tell it to minimize the output, using a particular (or best) strategy, and define a template for the filename of the minimized version. If called with -h, then the first comment block will be preserved.
  • [-i] specify text to use as markers for ignore blocks, or [-if] to skip an entire file
  • [-y] tell it to run Yahoo YUI against the file as well as the linter, and report its errors too
  • [-v] be verbose -- reports lots of information about what it's doing, not just errors. Normally, only errors are reported, or a single success line if there were no errors.
  • [-of] define an output format string for the error reports using parameters for error, source, line, and character. So maybe you want to use this to feed something other than visual studio? You can format the output any way you want.
  • [-k] wait for a keystroke when finished
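Putting a few of those together, a typical invocation might look like the following (hypothetical file names; see the readme for the exact mask syntax). This lints everything under a scripts folder recursively, applies a config file, and writes best-strategy minified output with the first comment block preserved:

```
sharplinter -rf scripts/*.js -c sharplinter.conf -ph best *.min.js -v
```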
All the core functionality is wrapped into a class library, so it should be easy to integrate this into another project (i.e., not from the command line). One caveat: it must be compiled with x86 as the target platform. It will not work if "Any CPU" is specified (since that may resolve to x64), because the Noesis V8 engine wrapper is a 32-bit-only assembly.

Enjoy, let me know if you find problems or have questions, or just fork it!

Tuesday, August 2, 2011

CsQuery: New features, tests, utility functions

This is an update to my announcement of the CsQuery project, a C# port of jQuery.

In the last few weeks I have added many more methods, features, and tests, and have made substantial changes to the DOM model to more accurately mimic the HTML DOM objects. The project now includes an NUnit test suite that incorporates some tests migrated from the real jQuery test suite (as applicable) - though there are hundreds and hundreds, and it will take time to move them all. So far I've gotten through all of Core, and part of Attribute and Traversal. There are a lot more to go. But I have been heartened by the fact that most migrated tests have passed; so far, most of the problems have had to do with unimplemented features rather than bugs.

Additionally, Styles are represented with a new CSSStyleDeclaration object that includes validation using the Visual Studio CSS validation XML data. Right now I just included a basic CSS3 implementation but this could easily be updated to permit selecting any validation scheme.

With the new Style mechanism, styles added programmatically are validated against the schema, and errors are thrown if invalid styles are added. This checking can, of course, be skipped if desired. It is also not performed while parsing HTML, since that would serve little purpose and probably fail a lot. But this also lets us ensure that unit types are parsed and stored correctly in your own code, and that only valid enumeration values are used.

Finally, a number of utility functions and features have been added.

WebForms UpdatePanels can be parsed, manipulated, and re-rendered. The Server.CsQueryHttpContext object now contains an enumerable of AsyncPostbackData objects, each containing a CsQuery object for the data block as well as the identifying information for each UpdatePanel that is updated on an asynchronous postback. You can manipulate the data as usual, and the Render method will take care of reformatting so the ASP.NET client architecture can deal with it properly.
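As a rough sketch of what that enables - note that the property names used here (Dom, and the way the context is obtained) are illustrative assumptions, not the documented API:

```
// Hypothetical sketch: touch up each UpdatePanel's HTML during an async postback.
// csqContext is assumed to be a Server.CsQueryHttpContext instance.
foreach (var panel in csqContext.AsyncPostbackData)
{
    var dom = panel.Dom;                    // assumed: the CsQuery object for this panel's data block
    dom["div.status"].AddClass("updated");  // manipulate it like any CsQuery object
}
csqContext.Render();  // reformats everything so the ASP.NET client framework accepts it
```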

CsQuery.ParseJSON - converts JSON to an ExpandoObject. Many methods that accept an object in jQuery have been implemented to accept an object in CsQuery. Because it's not convenient to create true objects for every situation where an object structure is just being used to pass data or parameters, as in jQuery, we implement this functionality using JSON and ExpandoObjects. For example:

dynamic css = CsQuery.ParseJSON("{ width: 10, height: 10, padding: '1px 10px 1px 10px'}");
Methods that are designed to accept an object in jQuery have generally been implemented in CsQuery to accept an object or an ExpandoObject, but for convenience they can also accept a plain string of JSON data, which will be parsed. Any string that begins with a curly brace is treated as JSON data in these situations. If you need to pass a string that begins with a curly brace but is not JSON, escape it by starting the string with two curly braces ("{{").
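The parsed result is a dynamic ExpandoObject, so its properties can be read back naturally. A sketch (the CLR types the parser assigns to numeric values aren't specified here, so dynamic is used for those):

```
dynamic css = CsQuery.ParseJSON("{ width: 10, padding: '1px 10px 1px 10px'}");
string padding = css.padding;   // "1px 10px 1px 10px"
dynamic width = css.width;      // 10

// Not JSON despite the brace: starts with "{{", so it is treated as a plain string.
string literal = "{{this is just text";
```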

CsQuery.Extend - mimics jQuery.extend using ExpandoObjects. You can pass it any legitimate objects (i.e., things with properties, not value types) as parameters, and it will return an ExpandoObject, merging the same-named properties of each source object. If you pass a regular object as the target, only properties that already exist on the target will be merged, since regular objects cannot have new properties added. Finally, you can pass JSON data directly, and it will be treated as an object and merged.


class Person
{
    public string name;
    public float height;
    public float weight;
}
Person person = new Person();

// or using anonymous classes

var person = new { name = "", height = 60, weight = 170 };

// you can use @strings to improve readability, if you like
dynamic props = CsQuery.ParseJSON(@"{ 
                                      height: 72.5, 
                                      hairColor: 'brown'
                                    }");

// Merge data into an existing regular object

CsQuery.Extend(person, props);  // => person.height = 72.5
                                //    person.weight = 170
                                //    hairColor is ignored since it couldn't be
                                //    added to the target

// Create a new ExpandoObject from existing objects

dynamic fullPerson = CsQuery.Extend(null, person, "{eyeColor: 'blue', noseType: 'ski-jump'}");
                               // => fullPerson.height = 72.5
                               //    fullPerson.weight = 170
                               //    fullPerson.eyeColor = "blue"
                               //    fullPerson.noseType = "ski-jump"

// Extend an existing ExpandoObject

CsQuery.Extend(fullPerson, "{hairColor: 'brown'}");
                               // => fullPerson.height = 72.5
                               //    fullPerson.weight = 170
                               //    fullPerson.eyeColor = "blue"
                               //    fullPerson.noseType = "ski-jump"
                               //    fullPerson.hairColor = "brown"

You can pass null as the first parameter (instead of an existing object), in which case an ExpandoObject containing the results of all the sources merged in order will be returned. Parameters may also be enumerable, in which case each member is iterated and added to the list of sources for processing.
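For example, combining a null target with an enumerable of JSON sources (a sketch, assuming the semantics described above; the property names are arbitrary):

```
var defaults = new { width = 100, height = 50 };
var sources = new[] { "{width: 200}", "{color: 'red'}" };

// null target: a new ExpandoObject is returned with all sources merged in order.
// The enumerable is unrolled, so each JSON string is treated as a separate source.
dynamic merged = CsQuery.Extend(null, defaults, sources);
// => merged.width = 200, merged.height = 50, merged.color = "red"
```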

Helper Methods - It's become clear that working with ExpandoObjects and JSON strings a lot is useful when passing data back and forth between client and server. It's also fundamental to how jQuery works. So in addition to adding native support for JSON strings on methods where appropriate, I've added some extension methods to help with this and other javascript/C# conversion issues:

public static bool IsJson(this string text)
True if the string appears to be JSON data (starts with "{" but not "{{")
public static bool IsTruthy(this object obj)
Resolves any object to "true" or "false" based on the JavaScript truthiness of the equivalent data type and value.
public static object Clone(this object obj)
public static object Clone(this object obj, bool deep)
Returns a clone of any type of object, simplifying data structures to basic types. If the object is a value type, the value is returned. If it's enumerable, a List<object> is returned. If it's an object or an ExpandoObject, a new ExpandoObject is returned. The "deep" argument indicates that the values of each property should also be cloned.
public static ExpandoObject ToExpando(this object source)
public static ExpandoObject ToExpando(this object source, bool deep)
Converts a regular object to an ExpandoObject, or returns the original object if it's already an ExpandoObject (and deep=false)
public static string ToJSON(this object objectToSerialize)
public static T FromJSON<T>(this string objectToDeserialize)
Convert to & from JSON. These are used directly by CsQuery.ParseJSON and CsQuery.ToJSON, but can also be used as extension methods of string and object.

Many more methods and selector improvements have been added as a result of working through a lot of the jQuery tests. More to come...
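Putting a few of these helpers together, based on the signatures above (a sketch; assumes CsQuery's extension-method namespace is in scope, and FromJSON is shown with an explicit type parameter):

```
string json = "{name: 'Bob', active: true}";

if (json.IsJson())   // true: starts with "{" but not "{{"
{
    dynamic data = json.FromJSON<ExpandoObject>();
    bool active = ((object)data.active).IsTruthy();   // true

    // round-trip a regular object: object -> ExpandoObject -> JSON string
    var point = new { x = 1, y = 2 };
    ExpandoObject expando = point.ToExpando();
    string roundTripped = expando.ToJSON();
}
```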