Yen's Blog

Lens, Wheels, Skates, Keyboard

Installing the Curt 11064 Trailer Hitch on a 2009 Honda Fit Sport

As if that’s not specific enough, I could also add On a Cold Sleety Night, After the Family Had Gone to Bed, While Feeling Renewed Pangs of Hunger. I feel compelled to write this post so that Kate won’t think I spent all that time in the garage playing video games (I wish I could). I was deciding between the Curt and a DrawTite trailer hitch. The Curt seemed more substantial, is made in Wisconsin (the DrawTite is made in Mexico) and, as it turned out, is surprisingly lighter (lighter being relative, more on that later). Installing the DrawTite requires moving the exhaust pipe out of the way, while the Curt requires drilling a hole. The latter raises its installation difficulty to 6/10 according to a reputable web site. I thought, what the heck, I can handle a drill, so I went with the Curt. Installation should take about an hour, according to the same web site. It ended up taking much longer than that—I won’t say by how much because it’s rather embarrassing, and that doesn’t even include the time spent buying tools online and making repeated trips to the local hardware store. I could of course have had the professionals at Rack Attack or even U-Haul do the job for a reasonable cost, but that would have been a much shorter blog post.

Gearhead that I think I am, here’s the list of tools I used with varying degrees of competency and success:

  • Rhino ramps
  • Large garden stone masquerading as wheel chock
  • Heavy-duty Milwaukee ½” “Hole Shooter” drill
  • 17/32” black oxide drill bit, ½” drive
  • Tree-shaped tungsten carbide burr bit
  • Torque wrench, up to 150 lb-ft, ½” drive
  • ¾” socket, ½” drive
  • ½” drive extension rod
  • Wire guide
  • Regular cordless drill
  • Assorted screwdrivers
  • Pair of pliers
  • Silicone lubricant
  • Blocks of wood (as headrest)
  • Bunch of collapsed cardboard boxes to lie on
  • Wearable LED head lamp
  • Sharpie permanent marker
  • iPhone (to take picture and record video)

Here’s the executive summary: make sure you remove the plastic cover immediately aft of the rear wheelwell on the driver side. Then feel free to skip the rest of this post.

The helpful installation video seemed easy enough. The lack of swearing had me feeling optimistic. After I had the car on the ramps and all the tools assembled, things started off pretty well. The most difficult part was “hanging” the hitch in place so that I could use it as a template to mark the hole to drill on the driver side. It was difficult because I had to hold the hitch with one hand while tightening the nut with the other, and there was little wiggle room. I ended up supporting the hitch with my knee on one end and shoulder/forearm on the other; if it were a yoga pose it would be named “opossum faking death”. It wasn’t pretty or doctor-approved but it worked. Drilling the hole was challenging too because the “Hole Shooter” weighs like a mini-hitch; they do not make things light in Wisconsin. The video mentioned enlarging the drilled hole so that the bulky spacer could be pushed through. I had read elsewhere that the adjacent pre-drilled hole could instead be used with a wire guide to pull the spacer and bolt through. As a test, I was able to get the wire guide through, so this seemed promising. The hole was just a tad smaller than the spacer, so I went to the hardware store to get the burr bit; this wasn’t a wasted trip as I had to return the rental “Hole Shooter” anyway. I was tempted to go with a tapered file rather than the burr bit, but luckily I didn’t or I wouldn’t be writing this until tomorrow at the earliest. Anyways, after enlarging the hole, the spacer fit but seemed to be too long to go through. I dutifully enlarged the hole further until it could fit through. Light at the end of the tunnel, literally. But lo and behold, as I tried to pull it through with the wire guide, something blocked its way. So now the spacer was completely in the hole, but stuck. I had visions of the spacer rattling inside the frame tube for the next few years as I drove down the road, hitch-less. Finally, with the help of a screwdriver to nudge the spacer and push it up through another, smaller hole, and a pair of pliers to grab it by the end, I was able to extract the spacer. I peered through the hole and was able to make out a semi-circular opening between it and the hole I drilled. The opening must not have been large enough for the spacer to fit, and definitely not for the bolt. That’s the last time I listen to Internet advice! Well, maybe. So now back to the pre-shortcut plan of enlarging the drilled hole. To make more room, I decided to completely remove the plastic cover.

You see, I had only loosened the cover and pushed it out of the way to access the pre-drilled hole. I violated my own rule of “never take shortcuts”. Removing it wasn’t hard: just pop open 2-4 plastic fasteners and remove 2 screws. Once the cover was off, I looked around and discovered to my amazement that there was a large rearward opening into the compartment I was trying so hard to get into! It WAS possible to come in through the South Entrance. Afterward it took me about 10 seconds to get the bolt and spacer in place.

This video shows the opening relative to the drilled hole.

Here’s the view from below.

I took this video by sticking the head lamp into the opening, then inserting the iPhone right above it. As you can see, there was no way that anything could have been pulled through that semi-circular opening. Overzealous googling had led me on a wild goose chase.

The hitch is finally in place and we are in business.

How to Not Upgrade the Nexus 7 to Lollipop

I have a first-generation Nexus 7 tablet made by Asus, which ran KitKat 4.4. When the OTA Lollipop update became available, I couldn’t wait to install it. The latest and greatest Android running on Google hardware seemed to be a match made in heaven; it would be better, prettier and faster, right? As you can probably guess from the title of this post, the answer was a resounding no. Everything became sluggish, swipes would only register after a couple of seconds, and random things popped open and closed. The tablet was downright unusable.

Rather than splurging on a new tablet, which was probably what Google wanted me to do, I opted to go back to KitKat, since the hardware itself was perfectly fine. The process of restoring the KitKat system image using the ADB and Fastboot command-line tools that accompany the Android SDK was quite simple. I had some difficulty getting my Windows 7 machine to recognize the Nexus 7 until I installed the Google USB drivers. Alternatively, I could have run ADB on a Mac, for which USB drivers wouldn’t have been necessary. An important part of the process was making sure I chose the correct system image. For a first-generation Nexus 7, the KitKat image was “nakasi” 4.4.4 (KTU84P). The image archive contained a flashing script that I executed after unlocking the bootloader. The whole thing took about 5 minutes once everything worked.
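For reference, the command-line steps look roughly like this (a sketch; flash-all.sh is the script included in Google’s factory image archive):

adb reboot bootloader    # reboot the tablet into its bootloader
fastboot oem unlock      # unlock the bootloader (this wipes the device)
./flash-all.sh           # flash the nakasi 4.4.4 (KTU84P) image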

So with its brain transplanted with a fresh KitKat image, the Nexus was back to its old happy self. Unfortunately, the good feeling didn’t last long. In short order, Android started notifying me that the 5.0 system update, aka Lollipop, had already been downloaded and was ready to install. There was no way to dismiss the notification or tell it to remind me again in, oh, never. The helpful notification sat in the system tray and just generally got in the way. This is one of the worst instances of nagware-ism I’ve encountered.

To disable the system update notification, I had to root Android. Fortunately, at this point there were a few mature tools that made doing so quite simple. I used the Nexus Root Toolkit 2.0.4. Since I was on a fresh install of KitKat, all I had to do was 1) unlock the bootloader and 2) root. The toolkit also installs a couple of useful tools, SuperSU and BusyBox Installer. The former lets you grant root privileges on demand to any app that requests them.

The next step was to turn off the system update notification service. There are a few apps that can do this; the one I used is DisableService. Once I navigated to the Google System Service node, System Update Service was already disabled. Hmm, clearly that wasn’t it, because there was still a big fat notification in the system tray. It turned out that I had to go to the Google Play Services node. Lo and behold, there was another System Update Service. Disabling this one and restarting the device did the trick.

Validating Multiple Fields With Angular

Angular makes it simple to validate a single form field. Validating multiple fields requires a bit more work. The approach that works for me is to use a directive to lay out the framework, but to perform the actual validation in the parent scope. Since the user defines the specifics of the validation, any arbitrary validation can be performed and the directive can be kept simple.

In this example, I’d like to validate that the sum of 2 text inputs equals 10. The validation is wired up via the multi directive, whose attribute value names the custom validation function in the parent scope.

<form name="form" novalidate>
  <p>
    <input type="text" name="field1" ng-model="field1" multi="validateSum" />
    <span ng-show="form.field1.$error.multi">Must add up to 10</span>
  </p>
  <p>
    <input type="text" name="field2" ng-model="field2" multi="validateSum" />
    <span ng-show="form.field2.$error.multi">Must add up to 10</span>
  </p>
</form>

The validateSum function should return true (valid) if the sum of the 2 fields equals 10, or if any of the fields is empty.

$scope.validateSum = function () {
  // An untouched or cleared input is undefined or '', not null,
  // so check for empty values explicitly
  if ($scope.field1 != null && $scope.field1 !== '' &&
      $scope.field2 != null && $scope.field2 !== '') {
    return parseFloat($scope.field1) + parseFloat($scope.field2) === 10;
  } else {
    return true;
  }
};

The directive specifies the validation function as the value of the multi attribute: multi="validateSum". This isn’t quite idiomatic Angular. The preferred alternative is probably to create an isolate scope for the directive and pass in the function via an ‘&’ binding. I just think providing the external function as the value of the directive’s attribute is cleaner and more succinct. However, this approach requires a bit of extra handling in the directive’s postlink function:

var validate = $parse(attrs.multi)(scope);

The $parse service evaluates the value of attrs.multi, which is the string validateSum, in the context of the directive’s scope. Essentially this gives us the reference to scope.validateSum, which is created automatically by Angular and points to the function defined in the parent scope.

In order to work properly with Angular’s form validation, the directive integrates with Angular’s ngModel directive. This is specified with require: 'ngModel' in the directive definition object. Once this is done, the model’s controller is provided as the fourth parameter to the directive’s postlink function.

When the user changes the value of an input, we should perform the validation. If the validation fails, the model should be marked as invalid, which results in the form being invalid, and we can display an error message and prevent the form from being submitted. This is as simple as calling ngModelController.$setValidity:

ngModelCtrl.$viewChangeListeners.push(function () {
  ngModelCtrl.$setValidity('multi', validate());
});

Calling validate() would invoke scope.validateSum() which would invoke validateSum() in the parent scope to perform the actual validation. Note that validation is performed only after a change has been made, so any existing model value wouldn’t be validated.

Any key can be used with $setValidity, in this case multi. If the input is invalid, the special error object formName.inputName.$error would become { multi: true }. We can key on the presence of the multi property to manually display an error message, or use ngMessages.

When the user changes an input, we should validate it and all related inputs, since the overall validity depends on all of them. Since the inputs could be anywhere in the DOM, I figured the best way to notify all inputs is a root scope broadcast. Because the input which triggers the event also handles it, it’s sufficient to just trigger the event in the change handler and validate in the broadcast handler.

scope.$on('multi:valueChanged', function (event) {
  ngModelCtrl.$setValidity('multi', validate());
});

ngModelCtrl.$viewChangeListeners.push(function () {
  $rootScope.$broadcast('multi:valueChanged');
});

This is the directive in its entirety:

app.directive('multi', ['$parse', '$rootScope', function ($parse, $rootScope) {
  return {
    restrict: 'A',
    require: 'ngModel',
    link: function (scope, elem, attrs, ngModelCtrl) {
      var validate = $parse(attrs.multi)(scope);

      ngModelCtrl.$viewChangeListeners.push(function () {
        // ngModelCtrl.$setValidity('multi', validate());
        $rootScope.$broadcast('multi:valueChanged');
      });

      var deregisterListener = scope.$on('multi:valueChanged', function (event) {
        ngModelCtrl.$setValidity('multi', validate());
      });
      scope.$on('$destroy', deregisterListener); // optional, only required for $rootScope.$on
    }
  };
}]);

And there you have it.

Here’s the plunk.

Validating Required Form Fields With Angular

Two things can happen when the user fills out a form field: the entered value is incorrect, or the value is missing. With Angular, validating either case is consistent and simple. The issue is when to show the error message: as the value is being entered, after the field loses focus, or after the user submits the form. I think that for better usability, the user should be informed that the value is incorrect while entering it. This way the user isn’t forced to scan the form to locate the problem. If the user makes a mistake, it’s more convenient to be able to fix it while already editing the field. And we also take full advantage of Angular’s instant updates. This approach works well for validating whether a value is correct.

However, validating a required field is different: we don’t know a value is missing until the user submits the form. So to handle both scenarios, we should be able to display an error message both as the user enters a value and after the user submits the form.

Before Angular 1.3, showing an error message required some logic based on field statuses such as invalid or dirty. With Angular 1.3 and later, we can use the new ngMessages directive to show an error message only when it is present. Take as an example the following text field:

<input type="text" name="name" required ng-maxlength="5">

Following is how we can display an instantaneous error message as soon as the max length is exceeded. In this case, the form.name.$error property would have the value {"maxlength":true} and only the span for maxlength would be displayed. Only one error message is displayed at a time. If there are multiple matches, for example maxlength and pattern, I believe the one which appears earliest would be used.

<div ng-messages="form.name.$error" class="help-block">
  <span ng-message="maxlength">max length 5</span>
</div>

Displaying an error message for a missing value, unfortunately, still relies on a flag indicating whether the form has been submitted. The following would be shown after the user submits the form.

<div ng-show="form.$submitted" ng-messages="form.name.$error" class="help-block">
  <span ng-message="required">required</span>
</div>

Finally, we can apply a CSS class to the container when an error condition exists; i.e., when the field contains an invalid value and either the field has been modified (dirty) or the form has been submitted. Because the field is required, it is already invalid from the start, so the additional checks ensure that we don’t apply the error styling right off the bat, which would be a disconcerting experience. We’re using Bootstrap, whose .has-error class applies red text and borders to labels, inputs and help blocks.

<div class="form-group" ng-class="{'has-error':form.name.$invalid && (form.name.$dirty || form.$submitted)}">
</div>

Here’s the plunk.

Angular in Jekyll/Octopress

Have a problem with Angular scope variable binding not working on a Jekyll or Octopress (which uses Jekyll) page? It so happens that Jekyll’s Liquid templates use the same double-curly-brace syntax as Angular and will attempt to process the markup before Angular can get to it. To get around the problem, you can surround Angular markup with {% raw %} and {% endraw %} tags. Alternatively, you can forgo this syntax altogether and use Angular’s ng-bind directive to bind a variable to a DOM element.
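For example (message stands in for whatever scope variable you’re binding):

{% raw %}
<p>{{ message }}</p>
{% endraw %}

Or, avoiding the curly braces entirely:

<p ng-bind="message"></p>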

Copying Files in Hudson

Copying files somewhere is pretty standard procedure when deploying an app. Interestingly, I’ve never had to copy files directly in Hudson. Until now I’ve only deployed .NET apps in Hudson and used MSBuild’s Copy task to copy the files to a shared folder. Now that I want to deploy Angular and other client apps in Hudson, I have to copy the files manually.

My initial attempt was to add a new Execute Shell build step to run xcopy. Since Hudson executes the command through the Bourne shell, which interprets any backslash in the path as the escape character, I’d have to escape the backslash itself. Why Windows has to be different from everyone else is a major source of annoyance if you have to move back and forth between Windows and *nix.

xcopy c:\\source\\app\\* \\\\server\\share\\app

This worked, but in order for xcopy to copy subdirectories, refrain from prompting every time it needs to overwrite a file, and take other actions necessary for unsupervised execution, I needed to specify additional parameters.

xcopy c:\\source\\app\\* \\\\server\\share /c /k /e /r /y /exclude:c:\\source\\xcopy_exclude.txt

Unfortunately, this was when xcopy blew up with the “invalid number of parameters” error. It probably had to do with the shell not passing the parameters to xcopy. No dice if I put the command in a batch file and then executed it. Ditto when I tried robocopy, which prints a nicer error message but is functionally the same as xcopy. As a note, you can test all of this in a Bourne shell rather than invoking Hudson every time.

It occurred to me to use something that’s native to the shell, rather than trying to get it to play nice with xcopy. You’d still have to deal with the backslash in the share:

cp -r /c/source/app/* \\\\server\\share

This worked! Also for things like “rm -rf”. We’re done, right? Well, this mixing and matching of command-line styles seemed unwieldy. As I experimented further, it turned out that I had made an error from the start. In addition to the Execute Shell build step, there’s another type called Execute Windows Batch Command. This build step allows you to run commands in a Windows shell. Thus you can execute xcopy, or anything else, just as you would at the command prompt, without needing to escape backslashes.

I’d also like to be able to specify the shared folder at run time so the project can be deployed to different servers. In my initial attempt, I created a drop-down list parameter for the server name. Then I figured I could use the following xcopy command, which incorporates the SERVER parameter.

xcopy c:\source\app\* \\%SERVER%\app

The %SERVER% syntax references an environment variable in Windows. This approach turned out to be an abject failure. It turned out that the entire share has to be specified as an environment variable; e.g., %SHARE% should point to \\server\app. Then I could issue commands such as:

del /s /q %SHARE%\*
xcopy c:\source\app\* %SHARE%

A MoCA Home Network

The benefits of a wired home network are many. I use the SiliconDust HDHomeRun Prime network TV tuner to watch cable TV on any computer in the house without the need for a set-top box. However, it is quite bandwidth-intensive, and even a wireless-N connection just doesn’t cut it. The picture would be plagued with stutters, pauses and artifacts, and was pretty much unwatchable, especially for hockey with its lightning-fast action.

For wired connectivity without an existing Ethernet network, there are three options. The best option, from both cost and performance standpoints, is available if the house is wired for a land line with Cat-5/5e/6 wiring, which is invariably the case with modern construction. In this case you can just repurpose the existing wiring and swap out the RJ11 telephone jacks for RJ45 Ethernet jacks. This requires a bit of manual labor but the results are beautiful and satisfying. I did this in a previous residence and it worked wonderfully. Unfortunately, in my current house the wiring leaves something to be desired. The phone lines are not wired point-to-point but daisy-chained together; this rules out an Ethernet network.

The second option is to use the power lines themselves, which hopefully every house comes equipped with. I used a pair of TP-Link AV500 Nano power line adapters. Installation couldn’t be any easier—plug them into the power sockets and they just work. The LAN connection is automatically detected in Windows and OS X. My tests showed that the connection was able to achieve the full bandwidth I was supposed to get. My TV feed no longer stuttered or became corrupted. Unfortunately, this solution isn’t perfect either. Under heavy load the adapters had a nasty habit of tripping the circuit breakers. I couldn’t even run the network speed test while Windows Media Center was playing without bringing the power down. Since power line adapters don’t play nice with surge protectors, I don’t really have a way to prevent this from happening.

This leaves the last option: a home network over the coaxial cabling, which, like most American households, we have plenty of. Each room has one coaxial outlet, or even two. The industry standard for a home network over coaxial cabling is the Multimedia over Coax Alliance (MoCA) standard. It seems to be a relatively young technology, as there aren’t many products for it. Interestingly, this is something I’m already using as a Verizon FiOS subscriber. Verizon uses MoCA with the Optical Network Terminal (ONT), which converts the optical fiber signal to an electrical signal. Using the coaxial cabling to carry Ethernet network traffic requires a MoCA network adapter or bridge, which converts between the different signal formats. I have the standard Actiontec MI424WR router, which has a built-in MoCA adapter. It would act as one endpoint. For each additional endpoint, I would need a separate MoCA network adapter. There are only a handful of MoCA network adapters on the market, the most popular of which is the Actiontec ECB2500C.

I wasn’t sure about the best way to set up the MoCA network. The MI424WR router can connect to the ONT via a coaxial or Cat-5 cable. In my case the MoCA WAN comes into the router via the coaxial connection. However, since the router has only one coaxial terminal, I wasn’t sure whether I could connect to the MoCA LAN using the same coaxial terminal. I thought that since the router is connected to the MoCA WAN coaxially, I would need an additional MoCA adapter for each outgoing connection from the router to the MoCA LAN. This possible setup is shown in the below figure. I would need 4 MoCA adapters to extend connectivity to 2 rooms.

Fortunately, this scenario isn’t necessary. It turns out that the router can connect to both MoCA WAN and MoCA LAN through the same coaxial connection. This is because the respective networks operate over different frequency bands and do not interfere with each other. I would only need a MoCA adapter at each receiving end. Since I can use the router as a MoCA adapter in one room, I’d only need one additional MoCA adapter.

If you need more than one Ethernet port, Actiontec also makes a 4-port Home Theater Coaxial Network Adapter (ECB3500T01), which is only marginally more expensive than the 1-port adapter. It would be a more compact solution than pairing the 1-port adapter with an Ethernet switch.

Of note, the cable splitter must be MoCA-compatible. Since MoCA operates in the 0.5-1.65 GHz frequency band, the splitter should be rated for at least this range. Fortunately there are many inexpensive options.

Update (Oct 9, 2014): It turns out that Verizon sells a Verizon-branded FiOS wireless network extender for $75, which comes with 2 Gigabit Ethernet ports as well as Wireless-N and MoCA connectivity, ostensibly to extend a home network to hard-to-reach places. In retrospect, its Wireless-N capability would have made it a better purchase than the 4-port Home Theater Coaxial Network Adapter.

Simple Tasks With Grunt

Developers are lazy by nature and always look for ways to avoid performing repetitive tasks. There are plenty of options when you’re working with conventional server-side platforms. Unfortunately, when it comes to JavaScript and CSS, automation tools are harder to find. That is, until Grunt came along (following Node, which turned the whole idea of server vs. client side upside down in the first place).

As a self-described “task runner”, Grunt is powerful and pleasantly approachable. Its succinctness makes it a joy to use. This carries over into the Grunt documentation, which is really excellent. Inspired by the Sample Gruntfile tutorial, I’d like to walk through my own Gruntfile, created for a small JavaScript library. The goal is to automate a typical build process: generating CSS from Sass, aggregating various source files into a single file, and minifying the result.

Once you have Grunt installed using the Node Package Manager, you will need a package.json file, just as for other Node utilities. The quickest way to create one is to run npm init, which generates the file after a series of questions.
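If you’re starting from scratch, the setup looks something like this:

npm install -g grunt-cli     # the Grunt command-line interface
npm init                     # answer the prompts to generate package.json
npm install grunt --save-dev # install Grunt itself as a dev dependency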

The following plugins will be used:

  • grunt-contrib-compass
  • grunt-contrib-concat
  • grunt-contrib-cssmin
  • grunt-contrib-jshint
  • grunt-contrib-uglify
  • grunt-contrib-watch

The recommended approach, as stated at the beginning of the documentation for each plugin, is to run npm install grunt-contrib-xxx --save-dev, which has the dual benefit of installing the plugin and adding a reference to it in package.json.

Besides package.json, the only other file you need is Gruntfile, in which you define and configure the tasks to run. Configuration options are specified as the argument to grunt.initConfig.

The first line reads in package.json and turns it into an object. In this case, I’m interested in the product name, but it provides many other useful properties.

pkg: grunt.file.readJSON('package.json'),

All Gruntfile tasks share the same basic syntax for specifying options, input and output. Each task’s configuration block is named after the plugin; for example, the configuration block for the grunt-contrib-uglify is simply uglify. Each task can have arbitrary targets. You’ll probably see a “test” target for testing or a “dist” target for building a distribution. Since I want to build CSS and JavaScript, here I have 2 targets, “scripts” and “stylesheets”. When there are multiple targets in a task, each target can be executed directly. For example, grunt concat:scripts runs just the “scripts” target. If you don’t provide a target, then all targets would be run in order.
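For example:

grunt concat:scripts    # run only the "scripts" target of the concat task
grunt concat            # run all of the concat task's targets in order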

The Compass task generates CSS from Sass source. Of course, Compass must be installed first. The following options simply instruct Compass to process Sass files in the directory “sass” and generate the corresponding CSS files in the directory “css”.

compass: {
  stylesheets: {
    options: {
      sassDir: 'sass',
      cssDir: 'css'
    }
  }
}

When developing, it would be nice to be able to preview changes made to the Sass source. This is where the watch plugin comes in. In the simplest use case, it detects that one or more files have changed and then runs certain tasks. The following options allow me to regenerate CSS each time a Sass file is modified. Start watching by typing grunt watch at the command prompt.

watch: {
  stylesheets: {
    files: '**/*.scss',
    tasks: ['compass']
  }
}

Typically JavaScript and CSS source is distributed over multiple files. In production you’d want to combine the various source files into a single file for performance reasons. This can be done with the concat plugin. For the source, I’m using a globbing pattern for simplicity; src/**/*.js means all .js files in the src directory and any of its sub-directories. The files are combined in alphabetical order. An alternative would be to specify an array of individual files. While this lets you control the ordering of files, it becomes unwieldy for a large number of files. Of course, there are many ways to skin a cat. For the output, I’m using the project’s name, which comes from package.json.

concat: {
  scripts: {
    options: {
      separator: ';'
    },
    src: 'src/**/*.js',
    dest: 'dist/<%= pkg.name %>.js'
  },
  stylesheets: {
    src: 'css/**/*.css',
    dest: 'dist/<%= pkg.name %>.css'
  }
}

The cssmin plugin provides CSS minification. The source should be the output of the concat stage. Note the variable substitution syntax.

cssmin: {
  stylesheets: {
    src: '<%= concat.stylesheets.dest %>',
    dest: 'dist/<%= pkg.name %>.min.css'
  }
}

The uglify plugin is the JavaScript equivalent of cssmin. It also mangles variable names, which reduces file size further at the expense of readability. Since mangling pretty much makes your JavaScript indecipherable and impossible to debug, you can also generate a source map, which lets you view the original, non-uglified source when debugging. The uglify plugin has all these and sundry options. Here it also inserts a comment with some basic information at the top of the file.

uglify: {
  options: {
    banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n',
    sourceMap: true
  },
  scripts: {
    src: '<%= concat.scripts.dest %>',
    dest: 'dist/<%= pkg.name %>.min.js'
  }
}

The jshint configuration block just sets some options for jshint.

jshint: {
  options: {
    curly: true,
    eqeqeq: true,
    immed: true,
    latedef: true,
    newcap: true,
    noarg: true,
    sub: true,
    undef: true,
    unused: true,
    boss: true,
    eqnull: true,
    node: true
  }
}

Finally, register a default task and a convenience task. Tasks are executed in the order specified.

grunt.registerTask('default', ['jshint', 'compass', 'concat', 'cssmin', 'uglify']);
grunt.registerTask('sassify', ['compass']);

To run a task, simply pass the task’s name as an argument to grunt on the command line; e.g., grunt jshint. Typing grunt by itself runs the default task. Below is the output from running the default task; even for this small set of tasks, running them manually or maintaining a script without Grunt would have been a tedious chore.

Running "jshint:scripts" (jshint) task
>> 2 files lint free.

Running "compass:stylesheets" (compass) task
unchanged sass/darkbox.scss
unchanged sass/mixins.scss
unchanged sass/screen.scss
Compilation took 0.012s

Running "concat:scripts" (concat) task
File dist/darkbox.js created.

Running "concat:stylesheets" (concat) task
File dist/darkbox.css created.

Running "cssmin:stylesheets" (cssmin) task
File dist/darkbox.min.css created: 3.9 kB → 2.78 kB

Running "uglify:scripts" (uglify) task
File dist/darkbox.min.map created (source map).
File dist/darkbox.min.js created: 5.45 kB → 2.51 kB

Done, without errors.

Below is the entire Gruntfile. Happy grunting.

module.exports = function (grunt) {

  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json'),
    compass: {
      stylesheets: {
        options: {
          sassDir: 'sass',
          cssDir: 'css'
        }
      }
    },
    concat: {
      scripts: {
        options: {
          separator: ';'
        },
        src: 'src/**/*.js',
        dest: 'dist/<%= pkg.name %>.js'
      },
      stylesheets: {
        src: 'css/**/*.css',
        dest: 'dist/<%= pkg.name %>.css'
      }
    },
    cssmin: {
      stylesheets: {
        src: '<%= concat.stylesheets.dest %>',
        dest: 'dist/<%= pkg.name %>.min.css'
      }
    },
    jshint: {
      options: {
        curly: true,
        eqeqeq: true,
        immed: true,
        latedef: true,
        newcap: true,
        noarg: true,
        sub: true,
        undef: true,
        unused: true,
        boss: true,
        eqnull: true,
        node: true
      },
      scripts: {
        src: ['Gruntfile.js', 'src/*.js']
      }
    },
    uglify: {
      options: {
        banner: '/*! <%= pkg.name %> <%= grunt.template.today("yyyy-mm-dd") %> */\n',
        sourceMap: true
      },
      scripts: {
        src: '<%= concat.scripts.dest %>',
        dest: 'dist/<%= pkg.name %>.min.js'
      }
    },
    watch: {
      stylesheets: {
        files: '**/*.scss',
        tasks: ['compass']
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-compass');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-cssmin');
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['jshint', 'compass', 'concat', 'cssmin', 'uglify']);
  grunt.registerTask('sassify', ['compass']);
};

Some Impressions of French Culture

It seems to me that we have the same, or almost the same, ideas about the notion of friendship: a friend is someone with whom you have tastes in common, with whom you enjoy spending time, with whom you get along very well, on whom you can count, in whom you can confide, who doesn’t hesitate to help you in difficult moments, who respects your opinions and shares his own without hesitation. But I think there is one difference between us. For the French there are two categories, “amis” (friends) and “copains” (pals). For Americans there are also a few categories, such as “friends” and “acquaintances”, the latter perhaps being the equivalent of “copain”. But generally the demarcation between the categories isn’t very clear. A “copain” is someone with whom you might exchange greetings in class in the morning. We tend to call someone a friend when we talk to him often, even if we don’t know him very well. In the same way, we are ready to consider a stranger a friend after having met him only a few times, even though he may not be a good friend. There is a concept called “making friends”: when we go somewhere, we try to talk to and meet many people in order to fit in; we try not to be alone. In an extreme case, there are people who meet on the Internet and consider each other friends, even though they never meet face to face!

On the forum, it is clear that for you there is a sharp distinction between a friend and a “copain”. For example, for you the other students in the preparatory classes are perhaps your “copains” because you never see them again afterward, even though you used to see them often. For us, classmates are often good friends, precisely because we see them every day. My impression is that Americans are a bit more open with regard to friendship. What do you think of this view? It agrees with the author Caroll. According to her, American children are encouraged to “play with other children of the same age” and to “learn to form relationships with strangers”. Do French parents encourage their children to do the same?

I have a second comment, on raising children in France and in the U.S. I agree with the author that French children are expected to behave well in society, so they learn the notions of discipline, respect and responsibility. American children, in contrast, are given more freedom, which their parents believe they need in order to explore, develop and flourish. She is right, but in my view this does not mean that American children are allowed to do whatever they want! At the very least, politeness is a very important element in the upbringing of American children. For example, the words “thank you” and “please” are the ones parents emphasize the most, as “the magic words”. It isn’t very surprising that Americans, children and even adults, use these words very often. Nor is it surprising that Americans excel at service businesses, in which politeness and attentiveness to customers are so important. One can generalize about fundamental principles, but reality is often more complicated. I agree with Matthieu, who remarked that relationships between parents and children, French or American, depend on the parents’ personality, their convictions about child-rearing, and the way they themselves were raised.

As for raising a child, I think it is preferable to help and influence him rather than force him to do anything. On the other hand, one must intervene when he is about to do something wrong and, if necessary, I believe one may scold him or give him a spanking. Above all, it is essential to give him opportunities to explore. We should encourage him to discover for himself what he likes, what he wants to become, what he wants to study at university, and so on. Of course we should encourage him to follow a good path or choose a good profession, but always with independence of thought and action. For example, if a child wants to become an artist, his parents will still support him even if they would rather he become a doctor. Parents are advisors as well as an authority. I think this is a very positive aspect of the American philosophy of child-rearing, because the child will likely be happier in life. It seems that the French system is more rigid, in the sense that the child must obey his parents’ wishes. Is that true or false? Do you think independence of thought is important for children? According to the author, French parents and the way they raise their children are subjected to a “test” by the rest of society. Do you find it natural for others to advise parents on raising their children, or to reprimand them? I think that raising a child is a very private matter. I would know when my child’s behavior was unacceptable. I would be ready to accept advice from family or friends, because their intentions would be sincere, but I don’t think judgments from others are appropriate.

My Git Cheat Sheet

There are lots of great Git guides out there. This is intended simply to be a reference for me during those dark times when I can’t remember the syntax for creating a tracking branch.

Config

# These will show up in git log
git config --global user.name "Yen Tran"
git config --global user.email yen@yentran.org
git config --list

Info

gitk
git status
git log
git log origin/master
# Show endpoints such as origin
git remote show
# Show origin params
git remote show origin

Branch (Local)

git branch <branch> <sha1_of_commit>
# Ex: git branch foo 7e2785d
# Delete branch
git branch -d <branch>

# Create branch:
git branch <branch>
git checkout <branch>
# Create and check out a branch in one step
git checkout -b <branch>
# List local branches
git branch
# List remote branches
git branch -r
# List all branches
git branch -a
# Verbose list of local branches and HEAD commit
git branch -v

Check out a previous check-in

git log
#  Locate desired checkin and copy SHA; e.g., f2bc540fc6817b0409571f6e5a562dffa6396017
git checkout <SHA>
# Do stuff, publish, etc.
git checkout master

Cherry-pick

# Source branch
git checkout 2012-01-01-A
# Get top log entry
git log -1
# Copy first few characters of SHA hash
# Destination branch
git checkout master
git cherry-pick 96f39d92de93
# Review changes
git diff HEAD~1

Clone

# SSH keys under ~/.ssh
git clone git@donny:repo.git
# Directory is optional; automatically created if not specified
git clone gitolite@donny:repo.git [directory]

Commit

# Also automatically stage and commit
git commit -a -m "commit message here"
# Amend the previous commit or reword its message
git commit --amend
# Reword an older commit: select reword for the desired commit in the
# interactive editor, edit the message, then save
git rebase -i master

Create new repo

git init
#  So that subsequent add can ignore files
git add .gitignore
git remote add origin git@donny:repo.git
git remote add origin git@github.com:user/repo.git
# Add and commit files, then push (-u sets up the local branch to track the remote branch)
git push -u origin master

Diff

# Graphical version
git difftool
git diff <directory>
git difftool HEAD~1
# Diff files that have been staged
git diff --cached
# Diff local changes against repository
git diff origin/master

Help

man git-log
man git-show
# etc.

Log

# Get top 1 log entry
git log -1
# Show history beyond renames
git log --follow file
# Include diffs
git log -p file

Stage

git add <file>...
git add .
# Opposite of add; use when files are renamed, etc.
git rm <file>...

Stash

git stash
git stash list
# Show changes in a stash, where 0 is the number of the stashed item as shown by git stash list
git stash show -p stash@{0}
# Drop specified stash where xx is number shown in stash list
git stash drop stash@{xx}

Drop local changes

# Change state of working copy to match repository, if file has not been added to index/staging
git checkout -- <filename>
# Ditto, but if file has been added to index
git reset <filename>
# Stash only unstaged changes, keeping staged changes in the index
git stash save --keep-index

Pull

git pull origin master
git checkout master

Rebase: resolve conflict

git checkout <branch>
git rebase master
# Fix conflicts, then mark them as resolved
git status
git add <file>
git rebase --continue

Rebase: interactive

git rebase -i origin/master
git rebase -i
# In the text editor, select s or squash on the bottom-most commits to squash them

Refs

You may get this error when running “git branch -d foo”: warning: deleting branch ‘foo’ that has been merged to ‘refs/remotes/origin/foo’, but not yet merged to HEAD. Deleted branch foo (was 334730a). The fix, from http://stackoverflow.com/questions/18506456/git-how-to-delete-a-local-ref-branch:

# Remove local ref
git update-ref -d refs/remotes/origin/foo

Ref log

git reflog

Remote

# Remove origin
git remote rm origin
git remote add origin git@server:repo.git
git remote show origin
# Change origin URL
git remote set-url origin git@server:repo.git

Remote Branch

# Create branch. Reference: http://git-scm.com/book/en/Git-Branching-Remote-Branches
git branch <branch>
# Show local branches configured for push/pull, i.e. tracking branches
git remote show origin
git push <remote> <branch>
# Ex: git push origin foo
# or
git push <remote> <local_branch>:<branch>
# Ex: git push origin foo:bar

Tracking branch

Creating a new tracking branch

git checkout -b <local_branch> <remote>/<branch> 
# Ex:
#   git checkout -b foo origin/foo
#   git checkout -b bar origin/foo # local branch has a different name
# Alternative:
git checkout --track origin/foo
# Local branch has a different name
git checkout -b bar --track origin/foo
# Within a tracking branch, push and pull automatically knows which server and branch to push to/pull from
git pull
git push

Setting up tracking information for an existing branch

# Older (deprecated) syntax
git branch --set-upstream <local_branch> <remote>/<branch>
# Newer syntax
git branch -u <remote>/<branch> <local_branch>
# Ex:
# git branch -u origin/foo # when within branch foo
# git branch -u origin/foo foo # when not within branch foo
# Pull branch
git pull <remote> <branch>
# Delete branch
git push <remote> :<branch>

Pruning stale remote branches

git remote show origin
    master    tracked
    refs/remotes/origin/fix        stale (use 'git remote prune' to remove)
    refs/remotes/origin/story_5008 stale (use 'git remote prune' to remove)
# Simulate deleting stale branches
git remote prune origin --dry-run
# Actually delete stale branches
git remote prune origin
# Or
git fetch origin --prune

Commit

# Switch to my branch
git checkout <branch>
git status
# View diff per file
git difftool
# Add all files to the staging area
git add .
git status
# Commit to local repository
git commit
# Switch to master
git checkout master
# Pull down master updates
git pull origin master
git checkout <branch>
# Rewind my branch, pull down master updates, then add my update on top
git rebase master
git checkout master
# Merge my update back to local master
git merge <branch>
# Pull down updates that occurred on canonical repository in the meantime
git pull origin master
# Push local updates to canonical repository
git push origin master
git log

Reset

# Changes after the specified revision show up as uncommitted
git reset SHA
# Changes after the specified revision are discarded
git reset --hard SHA
# Unstage
git reset HEAD <file>
# Jump to 1 commit below HEAD, rolling back previous commit
git reset HEAD~1
# Reset everything to HEAD, overwriting local changes...caution!
git reset --hard HEAD

Show

# View diff
git show SHA
# View content of file
git show SHA:file

Filing away local changes

Create a new branch and commit the desired changes to it:

git branch <branch>
git checkout <branch>
git add <files>...
git commit

Merging origin changes with local changes:

git checkout <branch>
git add <file>
git commit
git checkout master
git pull origin master
git checkout <branch>
git rebase master
# Resolve conflicts
git rebase --continue
# Roll back your commit
git reset HEAD~1

Better:

git checkout <branch>
git stash
git checkout master
git pull origin master
git checkout <branch>
git rebase master
git stash pop

Merging multiple branches

Branch 2
|
Branch 1 (committed)
|
master
git commit
git checkout branch1
git merge branch2
# Pick one commit and stash the other, then edit aggregate comment for both commits from the previous branch1 and branch2 comments
git rebase -i master
git checkout master
# Merge aggregate commits into master
git merge branch1
# Should see aggregate comment
git log

Tag

# Check out a tag
git checkout 2011-09-16
# Make sure your local master contains EXACTLY what you want tagged -- "git log" to double check
git checkout master
# Create a tag
git tag -a "tag name"
# This removes the tag from your local repository
git tag -d "tag name"
# DANGER - the colon tells git to DELETE something -- without a confirm prompt! -- here, we're deleting the tag from the remote
git push origin :"tag name"
# This creates the tag in your local repository
git tag "tag name"
# This pushes the new tag to the remote
git push --tags
# List tags
git tag
# Fetch all tags
git fetch --tags