Apr 13, 2014

Linux batch thumbnail creation of a folder with subdirectories

I have a Raspberry Pi with a 750GB HDD attached. On this drive I store a folder called Photos, which contains many sub-directories full of photos.

I wanted to create a new directory with the same structure as Photos, but with all the images replaced by smaller thumbnailed versions.

To accomplish this I tried a number of programs and scripts I found online for Linux, but most didn't handle sub-directories. I then tried to find Windows programs that could do this across the network, but they either didn't handle sub-directories or didn't handle network errors and would bomb out.

So I wrote my own Bash script

#!/bin/bash

# Walk every .jpg under the source dir ($1), mirroring its structure
# into a sibling dir named <source>Thumbs.
find "$1" -type f -iname '*.jpg' -print0 | while IFS= read -r -d '' file; do
  # Same path, with the source dir swapped for the Thumbs dir.
  newFile="${file/$1/$1Thumbs}"
  if [ ! -f "$newFile" ]
  then
    echo "$newFile"
    mkdir -p "$(dirname "$newFile")"
    # -sample is the fastest ImageMagick method I tested (timings below).
    convert -sample 1024x1024 "$file" "$newFile"
  fi
done

I called this batch-photos.sh, so if I want to create thumbnails of /mnt/sda1/Photos I run

./batch-photos.sh /mnt/sda1/Photos

It will then create a folder /mnt/sda1/PhotosThumbs and do all the sub-directory creation and save the thumbnails there.

For now it resizes to a max width/height of 1024, maintaining the photo's aspect ratio. It also uses the 'sample' method of ImageMagick's convert, as this was the fastest and still gave a good result when shrinking photos.

It will also skip thumbnails that already exist, using a simple check for the same filename in the matching sub-directory.

Here are the timings for the various convert methods I tried while settling on the fastest option (file: time to convert).

convert -size 1024x1024 -sample 1024x1024 -quality 80
/mnt/sda1/PhotosThumbs/DSC00412.JPG: 6799ms
/mnt/sda1/PhotosThumbs/100702/100_6722.JPG: 2391ms
/mnt/sda1/PhotosThumbs/100702/100_6687.JPG: 2676ms
/mnt/sda1/PhotosThumbs/100702/100_6671.JPG: 2492ms
/mnt/sda1/PhotosThumbs/100702/100_6683.JPG: 2669ms
/mnt/sda1/PhotosThumbs/100702/F1000004.JPG: 1502ms
/mnt/sda1/PhotosThumbs/100702/100_6686.JPG: 4278ms
/mnt/sda1/PhotosThumbs/100702/100_6704.JPG: 2408ms
/mnt/sda1/PhotosThumbs/100702/F1000013.JPG: 1562ms

convert -size 1024x1024 -sample 1024x1024
/mnt/sda1/PhotosThumbs/DSC00412.JPG: 6598ms
/mnt/sda1/PhotosThumbs/100702/100_6722.JPG: 2382ms
/mnt/sda1/PhotosThumbs/100702/100_6687.JPG: 2711ms
/mnt/sda1/PhotosThumbs/100702/100_6671.JPG: 2571ms
/mnt/sda1/PhotosThumbs/100702/100_6683.JPG: 3920ms
/mnt/sda1/PhotosThumbs/100702/F1000004.JPG: 1507ms
/mnt/sda1/PhotosThumbs/100702/100_6686.JPG: 2843ms
/mnt/sda1/PhotosThumbs/100702/100_6704.JPG: 2442ms
/mnt/sda1/PhotosThumbs/100702/F1000013.JPG: 1570ms

convert -sample 1024x1024
/mnt/sda1/PhotosThumbs/DSC00412.JPG: 6515ms
/mnt/sda1/PhotosThumbs/100702/100_6722.JPG: 2376ms
/mnt/sda1/PhotosThumbs/100702/100_6687.JPG: 3561ms
/mnt/sda1/PhotosThumbs/100702/100_6671.JPG: 3042ms
/mnt/sda1/PhotosThumbs/100702/100_6683.JPG: 2694ms
/mnt/sda1/PhotosThumbs/100702/F1000004.JPG: 1514ms
/mnt/sda1/PhotosThumbs/100702/100_6686.JPG: 2867ms
/mnt/sda1/PhotosThumbs/100702/100_6704.JPG: 2428ms
/mnt/sda1/PhotosThumbs/100702/F1000013.JPG: 1560ms

convert -size 1024x1024 -resize 1024x1024
/mnt/sda1/PhotosThumbs/DSC00412.JPG: 34815ms
/mnt/sda1/PhotosThumbs/100702/100_6722.JPG: 8996ms
/mnt/sda1/PhotosThumbs/100702/100_6687.JPG: 9305ms
/mnt/sda1/PhotosThumbs/100702/100_6671.JPG: 10490ms
/mnt/sda1/PhotosThumbs/100702/100_6683.JPG: 9287ms
/mnt/sda1/PhotosThumbs/100702/F1000004.JPG: 5494ms
/mnt/sda1/PhotosThumbs/100702/100_6686.JPG: 11137ms
/mnt/sda1/PhotosThumbs/100702/100_6704.JPG: 9061ms
/mnt/sda1/PhotosThumbs/100702/F1000013.JPG: 5596ms

convert -resize 1024x1024
/mnt/sda1/PhotosThumbs/DSC00412.JPG: 33566ms
/mnt/sda1/PhotosThumbs/100702/100_6722.JPG: 10510ms
/mnt/sda1/PhotosThumbs/100702/100_6687.JPG: 9336ms
/mnt/sda1/PhotosThumbs/100702/100_6671.JPG: 9072ms
/mnt/sda1/PhotosThumbs/100702/100_6683.JPG: 10677ms
/mnt/sda1/PhotosThumbs/100702/F1000004.JPG: 5540ms
/mnt/sda1/PhotosThumbs/100702/100_6686.JPG: 9707ms
/mnt/sda1/PhotosThumbs/100702/100_6704.JPG: 10465ms
/mnt/sda1/PhotosThumbs/100702/F1000013.JPG: 5561ms


Mar 28, 2014

Dynamically adding aria-describedby using JavaScript

Just a quick one to share what I've found: while many ARIA attributes work happily when added dynamically, aria-describedby will only work if it's present when the DOM is rendered. Added after that, screen readers just won't see it.

One simple way around this is to add unique aria-describedby values to all your elements up front, then dynamically change the ID of the element you want to point to.

The other is a global aria-live element whose text you change to match what you would have pointed to with aria-describedby. A sketch of both is below.
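Here's a minimal sketch of both approaches; the IDs and markup are hypothetical examples, and for the first approach swapping the description text works as well as moving the ID:

// Workaround 1: ship the description element in the initial markup, point
// aria-describedby at it from the start, and only ever change its content.
// <input id="name" aria-describedby="desc-name">
// <span id="desc-name" class="visually-hidden"></span>
function setDescription(text) {
    document.getElementById('desc-name').textContent = text;
}

// Workaround 2: one global polite live region whose text you update,
// instead of pointing aria-describedby at a new element.
// <div id="status" aria-live="polite" class="visually-hidden"></div>
function announce(text) {
    var status = document.getElementById('status');
    status.textContent = ''; // clear first so repeated messages re-announce
    setTimeout(function () {
        status.textContent = text;
    }, 100);
}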


Mar 6, 2014

E522BLE4 / E522BRE4 - Fisher & Paykel ActiveSmart Fridge - Bottom Freezer 519L

Our old fridge/freezer died, so we figured we'd get a larger one and hopefully not need our stand-alone freezer anymore. After extensive research I signed up to Choice.com.au to read their fridge reviews, which gave this fridge an overall score of 81, only 2 behind the best fridge, another F&P, the RF522WDRX4.

The RF522WDRX4 was $700-odd more expensive and didn't really seem to offer anything better, so we settled on the E522BLE4 from Appliances Online.

Appliances Online were great: the website was fast and easy to use, they told me how many were in stock and when delivery could be before I placed an order, and they offered free delivery and removal of my old fridge. We got through to customer service almost immediately when we rang to organise delivery, and the delivery occurred 2 days later, in the middle of their pre-arranged 2-hour delivery window, by friendly staff. This place has certainly got their business in order.

Edit:
After owning it for a few weeks, it's a great fridge. Not whisper quiet, but it's in the kitchen, so a small amount of noise is acceptable. It maintains its temperature well; below are some graphs I've taken.

Fridge:
[graph: fridge temperatures]

Freezer:
[graph: freezer temperatures]

I struggled to find many pictures of this model online; even the F&P website shares pictures between a few models on the product page. So below I've included a bunch of pictures for other interested parties.

[photos: E522BLE4 / E522BRE4 from various angles]


Feb 27, 2014

Uploading multiple LCOV and JUnit XMLs for the same source to Grunt-Karma-Sonar

I recently needed to upload 2 LCOV files and 2 JUnit XML files to Sonar. The problem was that both the LCOV and JUnit XMLs covered the same JS source files, so if I'd configured grunt-karma-sonar as normal, with the JS source path in the 'path' option, Sonar would throw an error about duplicate source files.

So I commented out the 'path' option for the second set, but then the LCOV file had the wrong path when uploaded to Sonar, meaning Sonar didn't count those coverage results.

What I ended up doing was adding the full path to the 'prefix' option and commenting out 'path'.
This way it merges both LCOV and JUnit XMLs and uploads them against the one set of JS source files.

Here is the config I used

karma_sonar: {
    options: {
        instance: {
            hostUrl: 'http://localhost:9000',
            jdbcUrl: 'jdbc:h2:tcp://localhost:9092/sonar',
            login: 'sonar',
            password: 'sonar'
        }
    },
    your_target: {
        project: {
            key: 'project-name',
            name: 'Full project name',
            version: grunt.file.readJSON('package.json').version
        },
        sources: [
            {
                path: 'web/src',
                prefix: '...',
                coverageReport: 'coverage/ui/PhantomJS 1.9 (Windows)/lcov.info',
                testReport: 'web/test/ui/test-output/test-ui.xml'
            },
            {
                prefix: 'C:/Development/project-name/web',
                coverageReport: 'coverage/js/PhantomJS 1.9 (Windows)/lcov.info',
                testReport: 'web/test/js/test-output/test-js.xml'
            }
        ],
        exclusions: []
    }
}
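With that config in place, the upload runs like any other Grunt task:

grunt karma_sonar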

Feb 8, 2014

Pre-record/buffer game play and save to disk with hotkey

Occasionally I'm playing a game and I do something amazing, or more likely, see something funny or someone cheating. I'd like to record these moments, but I don't want to record all my gaming sessions for the odd time this happens.

What I'd like is something to sit in the background and record say the last 30 seconds of game play, then when I press a button it will save this buffer to a file.

Today I found out that the MSI Afterburner beta has this ability. It's also capable of working with 64-bit programs, so it records BF4 (Battlefield 4) no problem. Grab the latest copy over at www.guru3d.com

The screenshot below shows the tab in the settings where you can enable this.

With the settings below, here's how you'd use it in game.
Once the game is running, press Ctrl+F12 to begin pre-recording. You will see a pink BUF and a spinner in the top left of your screen while this is active. Now that it's pre-recording, press F12 when something happens that you'd like to keep.
This starts the video capture and prepends the pre-recorded buffer to the video now being saved to disk. When what you want to record has finished, press F12 again. Pre-recording stays active until you press Ctrl+F12 again, so you can press F12 twice each time you want to save the last 30 seconds of game play, or press F12 once to save the last 30 seconds plus any further real-time video until you press F12 a second time.


A few examples.
Press Ctrl+F12 to start pre-recording.
5 minutes later you press F12 twice.
The video saved to D:\ will contain the last 30 seconds of game play.

Press Ctrl+F12 to start pre-recording.
5 minutes later you press F12 once.
45 seconds later you press F12 again.
The video saved to D:\ will be 1min 15sec long. It will contain the 30 seconds before you pressed F12, plus the next 45 seconds until you pressed F12 the second time.

Dec 9, 2013

Adding Google Analytics to Office 365 websites


The selection of pre-defined templates is decent and the editing tools are easy to use. My only difficulty was adding the Google Analytics script across the whole site; from a quick search, it appears most people suggest you Embed the script on every page.

There was mention of adding an App called 'Web Analytics Integration', though when I entered the App after adding it, I got a blank screen with 3 empty buttons and 3 empty text areas. This was in Firefox.


The solution is to press F5; once I did this the App worked.
Press the Enable button in the top right before pasting your scripts in, as mine cleared my saved scripts once I clicked Enable.



Hope this helps, leave a comment if you are still having issues.


Aug 1, 2013

Using perceptual diffs to spot mistakes during web development

So I started this a month ago when I saw a video from Google about pdiffs, then got promptly distracted by other projects and games...

Then today I saw a post from the BBC about this sort of thing and figured I'd post how far I've gotten, to see if it spurs me to do more testing on how best this can be displayed.

Basically, you want an easy way to check for style/layout changes. Automating this is difficult and time consuming, and eyeballing it can all look the same after a few iterations.
So you use perceptual diffs to spot the difference between 2 images: one a control image of your website that's known correct, the other a current screenshot. If they differ, you get a diff image highlighting the differences.

The problem I had with this diff image is that it's hard to match up the differences when you're looking at a desktop site. With 3 images side by side, each 960px wide, my eyes couldn't pick the smaller differences even with the help of the diff image. Say you add a 1px margin to the bottom of a repeating news element; it gets pretty hard to spot that across 3 separate images (Control, Latest change, Diff).

So my idea is to slice the Control and Latest change screenshots into 100px wide columns, then alternate them next to each other. Having them side by side at 100px really helps pick out the differences highlighted by the diff image.

Next I tried using the diff image to darken the sliced images wherever there were no differences. This resulted in a final image like this:


Here you can see the brighter areas are where a difference was detected. So if you look at the image of Obama: in the Control slices you'll notice a white line below it, and in the Latest change slices the white line is above it. This shows the image moved down in the Latest change relative to the Control.
You'll also notice the same when looking at the text.

Here is a resized full-size version to give you a bit of an idea.



So to figure out what worked I've just been using ImageMagick. I think the next step might be writing this in something with a bit more custom control, to do more highlighting of changes vs the rest of the image. Below are the commands I've been using (run from a Windows batch file, hence the %% escaping).
news.com.au-before-tweaks.png is the Control
news.com.au-after-tweaks.png is the Latest changes

perceptualdiff -output pdiff.png news.com.au-before-tweaks.png news.com.au-after-tweaks.png
..\ImageMagick-6.8.6-0\convert pdiff.png -threshold 0%% pdiff-des.png
..\ImageMagick-6.8.6-0\composite -compose CopyOpacity pdiff-des.png news.com.au-before-tweaks.png news.com.au-before-tweaks-alpha.png
..\ImageMagick-6.8.6-0\composite -compose CopyOpacity pdiff-des.png news.com.au-after-tweaks.png news.com.au-after-tweaks-alpha.png

..\ImageMagick-6.8.6-0\convert news.com.au-before-tweaks.png -level -85%% news.com.au-before-tweaks-bg.png
..\ImageMagick-6.8.6-0\convert news.com.au-before-tweaks-bg.png +gravity -crop 100x6280 bg_tiles_%%03d_a.png
..\ImageMagick-6.8.6-0\convert bg_tiles_*_a.png -matte -bordercolor none -border 1x0 bg_tiles_%%03d_a.png
..\ImageMagick-6.8.6-0\convert news.com.au-before-tweaks-bg.png +gravity -crop 100x6280 bg_tiles_%%03d_b.png
..\ImageMagick-6.8.6-0\convert bg_tiles_*_b.png -matte -bordercolor none -border 1x0 bg_tiles_%%03d_b.png
..\ImageMagick-6.8.6-0\convert bg_tiles_*.png +append news.com.au-before-tweaks-bg.png

..\ImageMagick-6.8.6-0\convert news.com.au-before-tweaks-alpha.png +gravity -crop 100x6280 tiles_%%03d_a.png
..\ImageMagick-6.8.6-0\convert tiles_*_a.png -matte -bordercolor none -border 1x0 tiles_%%03d_a.png
..\ImageMagick-6.8.6-0\convert news.com.au-after-tweaks-alpha.png +gravity -crop 100x6280 tiles_%%03d_b.png
..\ImageMagick-6.8.6-0\convert tiles_*_b.png -matte -bordercolor none -border 1x0 tiles_%%03d_b.png
..\ImageMagick-6.8.6-0\convert tiles_*.png +append tiled_news.png

..\ImageMagick-6.8.6-0\composite tiled_news.png news.com.au-before-tweaks-bg.png news.com.au-final.png
del tile*.png
del bg_tile*.png
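If I do rewrite this in something with more custom control, the core slice-and-interleave step is simple enough; here's a rough browser-canvas sketch of the idea (hypothetical, not something I've actually run):

// Interleave colWidth-wide columns from two same-sized screenshots.
// imgA and imgB are loaded Image objects (Control and Latest change).
function interleaveColumns(imgA, imgB, colWidth) {
    var canvas = document.createElement('canvas');
    canvas.width = imgA.width * 2; // room for both sets of columns
    canvas.height = imgA.height;
    var ctx = canvas.getContext('2d');
    for (var x = 0; x < imgA.width; x += colWidth) {
        var w = Math.min(colWidth, imgA.width - x);
        // Control column, then the matching Latest change column beside it.
        ctx.drawImage(imgA, x, 0, w, imgA.height, x * 2, 0, w, imgA.height);
        ctx.drawImage(imgB, x, 0, w, imgB.height, x * 2 + w, 0, w, imgB.height);
    }
    return canvas;
}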

Let me know if you go further with this and get a better result, or even a different approach that works for you.

Jun 1, 2013

Raspberry Pi for always on server

Overview

The following is all in AUD

So I leave my main PC running 24/7 as it runs a VOIP server, shares our photos and videos to the internal network, and runs SABnzbd. It's not doing much compared to the power required; over a 2-day period it averaged 212W. Leaving it on 24/7 means it will use 1852kWh over a year, costing me $463 to run.

Looking at when I'm actually using my PC, it's about 73hrs a week, which leaves it running with no one using it for 95hrs a week. That idle time uses 1047kWh a year, costing me $262.

For years I had been meaning to get a low-powered PC to handle the 24/7 basics so I could switch my main PC off, and I finally got around to looking into it. Initially I wanted an x86 PC, but after much research it seemed I'd have to buy a laptop to get the best power usage (about 10W), which would have a larger physical footprint and cost more to purchase.

So then I started looking at ARM-based solutions and came across the Raspberry Pi, which I now have running 24/7 sharing my photos, running Mumble and checking my newsgroups. After monitoring my setup for 260hrs, it had used 1.4kWh. That puts it at about 47kWh for the year, costing me just $11.

So that left me with $251 to set this up and still make a saving on my first year's power bill. Every subsequent year is all savings though (-:
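For anyone who wants to check the sums, here's the arithmetic as a quick sketch; the $0.25/kWh rate is just my tariff, back-figured from the $463 / 1852kWh above:

// Yearly energy use and cost from average watts and hours of use per week.
var RATE = 0.25; // AUD per kWh, back-figured from $463 / 1852kWh

function yearlyCost(watts, hoursPerWeek) {
    var kWh = (watts * hoursPerWeek * 52) / 1000;
    return Math.round(kWh) + 'kWh, $' + Math.round(kWh * RATE);
}

console.log(yearlyCost(212, 168)); // main PC 24/7   -> 1852kWh, $463
console.log(yearlyCost(212, 95));  // unattended use -> 1047kWh, $262
console.log(yearlyCost(5.4, 168)); // Pi setup       -> 47kWh, $12 (I rounded to $11)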


Given I have 110GB of photos, I wanted something cheaper than SD cards for my storage needs, which meant an external HDD. As I'm trying to build this as energy efficient as possible, I didn't want to power the Raspberry Pi and HDD from separate power packs (there are inefficiencies in each conversion), so I decided to get a powered USB hub and run the Raspberry Pi and the HDD off that.

It also means that if I need to add more storage down the line I can just plug more HDDs in.

The Build


From MSY
  • Western Digital WD WDBBEP7500ABK-PESN My Passport 750GB Portable Drive, 2.5" USB 3.0 HDD - $46
  • Channel+ UH340 USB3.0 4 Port Hub With Power - $25
  • TP-LINK TL-SF1008D 8 Port 10/100 Switch - $12
  • 8GB Sandisk SD (SDHC) - $7
From Element14.com
  • Raspberry Pi, Model B, 512MB - $36
  • Enclosure, Raspberry Pi, Black - $9
  • Cable, USB A M - Micro B M, 1.8M - $3.80

I went with a 2.5" portable drive because they are meant to be powered by a USB port, and the specs state that a USB device shouldn't pull more than 500mA. So at 5V that equates to 2.5W.

Here is the power usage as things were added:
USB Hub - 0.3W
USB Hub + HDD - 2.4W
USB Hub + HDD + Raspberry Pi - 4.9W
USB Hub + HDD + Raspberry Pi + Switch - transferring 110GB of photos at ~1.4MB/s - 6.7W
USB Hub + HDD + Raspberry Pi + Switch - transferring at ~7.5MB/s - 7W

Currently I have the following running:
mumble-server
SABnzbd
samba

May 16, 2013

Installing PhantomJS on NodeJS on Windows behind corporate proxy

Attempting to install phantomjs using npm, I got an ETIMEDOUT error, as you can see below

C:\Users\UserName\Downloads>npm install -g phantomjs
npm http GET https://registry.npmjs.org/phantomjs
npm http 304 https://registry.npmjs.org/phantomjs
npm http GET https://registry.npmjs.org/rimraf
npm http GET https://registry.npmjs.org/adm-zip/0.2.1
npm http 304 https://registry.npmjs.org/adm-zip/0.2.1
npm http 304 https://registry.npmjs.org/rimraf
npm http GET https://registry.npmjs.org/graceful-fs
npm http 304 https://registry.npmjs.org/graceful-fs
C:\Users\UserName\AppData\Roaming\npm\phantomjs -> C:\Users\UserName\AppData\Roaming\npm\node_modules\phantomjs\bin\phantomjs
 
> phantomjs@1.9.0-3 install C:\Users\UserName\AppData\Roaming\npm\node_modules\phantomjs
> node install.js
 
Requesting C:\Users\UserName\AppData\Roaming\npm\node_modules\phantomjs\tmp\phantomjs-1.9.0-windows.zip
 
events.js:72
        throw er; // Unhandled 'error' event
              ^
Error: connect ETIMEDOUT
    at errnoException (net.js:878:11)
    at Object.afterConnect [as oncomplete] (net.js:869:19)
npm ERR! phantomjs@1.9.0-3 install: `node install.js`
npm ERR! `cmd "/c" "node install.js"` failed with 8
npm ERR!
npm ERR! Failed at the phantomjs@1.9.0-3 install script.
npm ERR! This is most likely a problem with the phantomjs package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node install.js
npm ERR! You can get their info via:
npm ERR!     npm owner ls phantomjs
npm ERR! There is likely additional logging output above.
 
npm ERR! System Windows_NT 6.1.7601
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "-g" "phantomjs"
npm ERR! cwd C:\Users\UserName\Downloads
npm ERR! node -v v0.10.3
npm ERR! npm -v 1.2.17
npm ERR! code ELIFECYCLE
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     C:\Users\UserName\Downloads\npm-debug.log
npm ERR! not ok code 0

I had configured npm to use my corporate proxy, and every other package downloaded and installed successfully, so I'm guessing the phantomjs installer does something different. Rather than figure out what, I just added phantomjs-1.9.0-windows.zip to the cached package so it could install without downloading.
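For reference, pointing npm at the proxy was just the standard config settings (substitute your own proxy address):

npm config set proxy http://proxy.address:port
npm config set https-proxy http://proxy.address:port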

Navigate to
C:\Users\UserName\AppData\Roaming\npm-cache\phantomjs\1.9.0-3\package
and create a dir called 'tmp'.

Download phantomjs-1.9.0-windows.zip from https://code.google.com/p/phantomjs/downloads/list

Copy phantomjs-1.9.0-windows.zip into the tmp dir, located at
C:\Users\UserName\AppData\Roaming\npm-cache\phantomjs\1.9.0-3\package\tmp
Now use 7-Zip to re-create the package.

Navigate to
C:\Users\UserName\AppData\Roaming\npm-cache\phantomjs\1.9.0-3
and TAR the package dir, so you create package.tar.

Then gzip package.tar, naming it package.tgz; this will overwrite the existing file.
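If you'd rather script the 7-Zip steps than use the GUI, the command line equivalent should be something like this (assuming 7z.exe is on your PATH):

7z a -ttar package.tar package
7z a -tgzip package.tgz package.tar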

Then run the installer again and it will use the local phantomjs-1.9.0-windows.zip to complete the install.

May 3, 2013

Hacking Proxy support into Brackets

I started using Brackets at work, but we're behind a corporate proxy, so I did a quick hack to get proxy support into Brackets for installing extensions.

Update:
Sprint 29, here are some screenshots.
[screenshot: original code] Change to this: [screenshot: patched code]


Update:
Sprint 26, this is now on line 433 of ExtensionManagerDomain.js.

I did this for Brackets Sprint 24.

Open brackets/www/extensibility/node/ExtensionManagerDomain.js

Find the _cmdDownloadFile function, line 288.
Add
proxy: 'http://proxy.address:port',
to the options object of the request.get() call, so the line reads
var req = request.get({ proxy: 'http://proxy.address:port', url: url, encoding: null },
For authenticated proxies, use
proxy: 'http://username:password@proxy.address:port',

Hope this helps. I'm guessing they'll add proxy support down the line, but until then, this will get it working.

Update:
Check out Cntlm, a local authenticating proxy. It can store your password as a hash, which is better than entering it in plain text in the above file.

Apr 19, 2013

HowTo: jqspm - jQuery Selector Performance Monitor

Quick Start:

Head over to the jqspm repo, grab the files and check out the README.

I'll assume you've grabbed the files and read the brief README. Here I'll cover how jqspm (jQuery Selector Performance Monitor) monitors selectors, and how to interpret the results.

How it works

When you call jqspm.start(), the script wraps the existing jQuery() and jQuery.find() functions in monitoring functions. These capture which selectors are passed in, call the original jQuery functions, and record how long they took to execute and how many elements they returned.
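To give the idea, here's a simplified sketch of the wrapping (not jqspm's actual source; jqspm also wraps jQuery() itself and aggregates results per chain):

(function () {
    var originalFind = jQuery.fn.find;
    jQuery.fn.find = function (selector) {
        var start = Date.now();
        var result = originalFind.apply(this, arguments);
        // Record the selector, how many elements came back, and how long it took.
        console.log('find("' + selector + '"): ' + result.length +
                    ' elements in ' + (Date.now() - start) + 'ms');
        return result;
    };
}());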

Once you have monitored some calls you'll want printReport. This is where there may be some confusion, so I'll take some time explaining the points I think might trip people up. Below is a picture of a report printed via console.table.




Selectors column
Here is the list of selectors that were called during monitoring. You will notice they are separated by =>, so the following:
$('UL').find('LI');
Will result in a selector looking like:
"UL => LI"
As you can see from the picture above, this will keep going on the longer you chain calls to jQuery() or jQuery.find().


totalCount column
This is the total number of elements that were returned by the whole chain of selectors.


Counts column
This is a list of the number of elements each part of the selector chain returned. So a selector like:
$('UL').find('LI');
Will return a count of:
[ 3, 11 ]
This tells you it found 3 ULs, and 11 LIs.

totalTime column
This is the combined time it took to find all elements returned by the chain of selectors. This is a good number to look at to pick up inefficient selector chains.

Timings column
This is a list of timings for each selector in the chain, so using the same chain as above:
$('UL').find('LI');
Will return:
[ 3, 5 ]

Oddities

Single ID selectors:
Queries like
$('#menuHolder').find('UL');
will only return counts/timings for the UL selector. I am looking at adding the ability to monitor single ID selectors, but it's not high on the priority list, as that query should only ever return 1 element and should be near instant.

The problem this currently poses is that counts and timings for the above query will only contain 1 number, even though there are 2 selectors occurring. Just keep that in mind when investigating results.

Chaining saved selectors:
Queries like
var $obj=$('BODY');
$('UL');
$obj.find('UL');
Will give you 3 results
"BODY"
"UL"
"BODY => UL"
The one I want to bring your attention to is the last one; its counts and timings will look like this
[ "", 3 ]
This is because you chained a saved selector, $obj, so you know that BODY was from a previous selector.

What to look for

Here are a few screenshots showing how this can help; it's a real selector example I found in production code. In the production environment there were a ton of other elements within each LI, and this selector was causing the iPhone 5 to take 8.5 seconds to execute. By merging it all into one selector, that time was cut to about 50ms.

When you see a large number of elements returned early in the query chain, you should look at ways to optimise. In the IE8 example below, jQuery found 1557 LIs, and each of these was added to an array of elements.

It then loops through that array, executing the second selector (UL) on each element, so it executes it 1557 times. Each time it does this it needs to copy the elements it found back to the main collection array, and in browsers with poor memory management, or systems with limited memory, this can be costly.

Notice how, when we moved the second selector into the first query, eliminating the need to loop, query, and add to the array, we get much better performance.
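As a hedged illustration (the real production selector was more complex than this), the rewrite amounts to turning $('LI').find('UL') into the single query $('LI UL'), letting the selector engine make one pass instead of running the UL lookup against each of the 1557 LIs.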

IE8
[screenshots: report before and after the selector change]

Nexus 4 - Chrome
[screenshots: report before and after the selector change]

Wrap it up

Hope that has made sense and you find this useful. Let me know if you have any questions or feedback; I'll update this page as I add more features.

Apr 18, 2013

jQuery Selector Performance Monitor - jqspm

In my job as a contract front end developer, I work with large existing code bases. Many times I've started at a company and been asked, "Why does Page X take so long to load?".

There are plenty of tools for figuring out other aspects of a page's load/render time, but one thing I haven't found is a way to see if there are any poorly written jQuery selectors. An actual example I found in production code was a selector taking 8.5 seconds on an iPhone 5; after tweaking the selector, it now takes 50ms.

So I wrote a small wrapper for jQuery and jQuery.find that will log selectors, the number of items they find, and how long it took to find them. This allows you to check an entire page of widgets, modules and plugins to find any inefficient selectors.

You can find my initial version over at bitbucket, jQuery Selector Performance Monitor - jqspm

I'll be posting a follow-up entry explaining more about its use in the coming days. Read the more detailed post.

Here is an example of the output, showing the selector used, the number of elements found, and the time taken to find them.

Feb 11, 2013

TimeZones with Javascript

So I originally wanted to make a TimeZone selector similar to most OSes: you know, the world map with cities visible where you select the one in your time zone. The thing I didn't want to do was all the work of building and maintaining a DB of cities and their GMT offsets, along with daylight saving rules.

So I looked around and found the time zone database now hosted at http://www.iana.org/time-zones. I downloaded it and started writing a JS library to parse the list and generate the rules for all the time zone info. And thus jsTimeZone was born.

This library lets you find the date/time in a time zone the way most back-end languages do, using names like 'Australia/Sydney' or 'Europe/London'. Being front-end based, it calculates from the machine's local date/time, so as long as that's accurate, this library should get the right result.

The added benefit of using this is that when I go to make the map, I can just scan through the time zone info and I'll have city and country names ready to look up GPS coordinates for and plot on my map. That part I'll generate once and include as a static file in the repo, along with a generation script in case new time zones pop up down the track.

I have only tested a few of the more common time zones I use, let me know if any places don't return the right result.

You can also export the tz database from the module and use the returned JSON string when calling the module again.

Quick example of using this module
function queryTime() {
    var ausSyd=jsTZAll.offsetIn('Australia/Sydney');
    console.log('Aus - Syd: %o', ausSyd.getOffset());
    console.log('Aus - Syd: %o', ausSyd.date());
}

var jsTZAll=jsTimeZones('australasia', 'europe');
jsTZAll.init(queryTime);
Which will return
Aus - Syd: { hr: 11, min: 0}
Aus - Syd: Date {Fri Feb 22 2013 15:06:59 GMT+1100 (AUS Eastern Daylight Time)}

To see more, head over to the Bitbucket repo at jsTimeZone.

Dec 15, 2012

Software I use

So I've just done a fresh Format/Install, figured I'd document the tools I've grown a collection of over the years.

DisplayFusion
Purchased. I couldn't live without this; task bars on each monitor are unbelievably useful. If you don't think so, you either don't have 20 applications open at once, or don't know what you're missing. Download the demo and try it. The other features are neat, but I rarely use them.
Ditto
Free. Clipboard manager that stores a ton of copies and lets you select the one you want to paste with a simple hotkey. I use it a ton while programming, but still find it useful when I'm not coding.
Take Ownership
Free. Registry hack that adds a 'Take Ownership' option to your right-click menu. I use this when I copy files from a previous install of Windows; it resets the permissions to me on my current install.
TeraCopy
Free. Windows copy replacement. Fantastic: cut/copy files and folders like normal, and on paste it queues your files and performs the copy. Has error handling, so no more broken copies halfway through, and it seems to copy slightly faster. Amazingly well done program, and some of their Android tools are pretty awesome too.
MSI Afterburner
Free. AMD VISION Engine Control Center replacement. Lightweight, with amazing fan control. This is so good I'd buy an MSI card over other comparable cards.
avast!
Free. Great, lightweight antivirus scanner. Does a great job: all the features you want, none of the bloat you don't. It does offer to install Chrome; I select this option as I use both Firefox and Chrome, and it gives avast! some money from Google.
Defraggler
Free. Great defragmenter, see fragmented files, choose which ones to defragment. Super simple and works a treat.
VLC Player
Free. The best media player, light, powerful, great to use. Nothing comes close to VLC.
TightVNC
Free. Great Windows VNC server, small and fast, with a multi-platform client (Java).
Irfanview
Free. The best image viewer ever. Reads almost everything, has some great adjustment controls, and its batch mode is simple and fast.
7-Zip
Free. Compression tool, reads/writes ZIPs, TARs, GZIPs and a range of other formats. Lightweight, does a great job.

clumsy
Free. Simulates bad/laggy networks on Windows at the network driver level, so there's no need for proxy server settings etc.