'One Dark' Theme For TextMate

After over a year using Atom and putting up with its high resource usage and inability to open large files I’ve switched back to TextMate - it’s fast, stable and doesn’t kill my battery (even if it is missing some of the niceties of Atom).

However, after the switch I started missing the ‘One Dark’ theme from Atom. I find it far better than anything TextMate includes nowadays (even my old favourite ‘Blackboard’), so I created a very basic port.

The port includes both standard and ‘bright’ versions (previews below). If you’re interested you can get it as a TextMate bundle from GitHub: https://github.com/digitalpardoe/One-Dark.tmbundle.
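
If you’d rather install it from the terminal, something along these lines should work (a minimal sketch, assuming TextMate 2 is installed and registered to handle .tmbundle files):

# Grab the bundle and let TextMate install it when it opens
git clone https://github.com/digitalpardoe/One-Dark.tmbundle.git
open One-Dark.tmbundle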

iPhone 7 RAW Test Shots

I’ve often wanted more out of the iPhone camera; the images produced by Apple’s processing were good, but it felt like the camera had more to give. Fortunately, with the introduction of iOS 10, I’ve had a chance to experiment with all the camera has to offer.

A few of the results I’ve had are included below; you can click on any of the images to view a full-resolution version.

Recoverable Shadow Detail

Original JPEG:

The original JPEG photograph from the iPhone 7 of the Palace Hotel in Manchester, UK showing massive underexposure and dark shadows.

Lightroom processed RAW:

Lightroom processed RAW photograph of the Palace Hotel in Manchester, UK showing the ability to recover shadow detail.

Fine Details

Original JPEG:

iPhone 7 native JPEG image of tram tracks in Manchester, UK.

Crop of the native JPEG image showing terrible smoothing and loss of fine detail.

Lightroom processed RAW:

Lightroom processed RAW image of tram track in Manchester, UK.

Crop of the Lightroom processed RAW showing more noise but far, far more detail across the image.

Difficult Lighting

Original JPEG:

Original JPEG from the iPhone 7 with difficult lighting conditions showing muddy detail within clouds.

Lightroom processed RAW:

RAW image processed in Lightroom showing much greater detail in cloud regions.

Laptop Downsizing

My Laptop Setup

From 512 Pixels:

Currently, I'm using a Mid 2015 15-inch MacBook Pro with Retina display. It has a 2.5 GHz i7, 16 GB of RAM and 1 TB of SSD storage. It's the fastest, most capable Mac I've ever owned. ... I went with the 15-inch because I thought I'd be using it as a notebook way more than I actually do.

(http://www.512pixels.net/blog/2016/2/the-notebook-dilemma)

I fell foul of this when I started remote working. I bought a maxed-out 15” Retina MacBook Pro thinking I’d need something powerful for my desk and something I could carry around for when I needed to get out of the house.

Unfortunately, due to the size of the rMBP, I never took it out and ended up with the same ‘tied to my desk’ feeling I would’ve had with a desktop Mac.

The best thing I did for my productivity (and my mental wellbeing) was to trade the rMBP in for a far, far less powerful 12” MacBook. Yes, it’s slower, which can be especially noticeable when I’m coding or photo editing, but now I feel like I have far more freedom and I work in other (more interesting) locations more often.

I don’t regret the decision to trade down; all I had to do was swap some power for a bit of patience, which was worth it in the long run. It’s something I’d definitely recommend considering if you have a large laptop but still feel ‘stuck’ in the home office.

Cloning Permissions And ACLs On OS X

Sometimes permissions get messed up. It’s normally easy to fix, but if the problem also breaks your access control lists (ACLs) then the fix can be much more time consuming (especially when external drives are involved).

Most of the time, for a quick fix, I use the commands below to clone the permissions and ACLs from a known good folder to my broken folder. Usually, this is all I need.

# Copy the owner and group from the known good folder
chown $(stat -f%u:%g "$SRCDIR") "$DSTDIR"
# Copy the permission bits (including any setuid / sticky bits)
chmod $(stat -f%Mp%Lp "$SRCDIR") "$DSTDIR"
# Copy the ACL entries
(ls -lde "$SRCDIR" | tail +2 | sed 's/^ [0-9]*: //'; echo) | chmod -E "$DSTDIR"

Either set the environment variables as you need to or replace them directly (liberal use of sudo may also be required depending on the folder being updated).
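
As a concrete (entirely hypothetical) example, using ~/Desktop as the known-good folder to fix a broken ~/Documents would look something like this:

# Hypothetical example: clone permissions & ACLs from ~/Desktop onto ~/Documents
SRCDIR="$HOME/Desktop"
DSTDIR="$HOME/Documents"
# ...then run the three commands above, prefixing each with sudo if necessary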

I can’t take credit for the above commands; I discovered them on StackOverflow a good while ago and added them to my useful snippets - if I come across the original I will add the necessary credit.

Giving Up On Google

I don’t hate Google, that would be a silly thing to do in public. After all, in the last twelve months they purchased an organisation that developed some of the world’s most impressive (military) robots, and they have always built impressive AI systems of their very own - what could possibly go wrong?

But that hasn’t stopped me from giving up on them.

The main reason I’ve given up on Google’s services is the ever-increasing feeling of becoming locked in. More and more services coming from Google seem to be Chrome-only, or at least work far better in Chrome than anywhere else, and I think this speaks to the future of Google.

There’s no denying that Apple products are a kind of lock-in, but I feel that Apple don’t rely on my data as a source of revenue so I’ll always be able to get it out if I need to - I don’t feel this way about Google.

Whilst this feeling has been building up over time due to factors like requiring a Google+ profile to make full use of YouTube, or Gmail never quite playing nicely with other mail clients, it came into focus more recently with the introduction of Inbox. There’s no denying that Inbox is a great service - it’s innovative and genuinely useful - but I can only access it through Google applications.

This particular lock-in isn’t fundamentally a problem in itself, I can still get at my data, but it leaves me fearful for the future of my email. In the future I imagine Google turning off all POP & IMAP support - access to my email will be via Google (or maybe an approved API) only and my email, my data, will be more ‘stuck’ where it is than I would like - and I imagine the same will be true of all of Google’s other services.

None of Google’s recent behaviour strikes me as ‘open’. Apple is hardly an open company either, but I at least feel like Apple is open with my data on a closed platform rather than, like Google, closed with my data on an open platform.

I’d quite happily pay Google to have better access to my data, but they don’t seem to have much interest in that - so I’m moving all my data elsewhere.

Push Email With FastMail (Sieve) & IFTTT

As I was writing this post FastMail released a new app with push notification support. It isn’t really for me as I prefer a unified inbox, but if you think it might suit you - check it out.

I recently went through the process of prising my email from the jaws of Gmail and getting it into FastMail. The switch went pretty smoothly but I missed Mailbox’s push notifications (even if Mailbox gets IMAP support I probably won’t use it any more - having my email flow through another cloud server never felt quite right).

The simplest thing to do would be to set up the email channel in IFTTT, forward all incoming mail to trigger@recipe.ifttt.com and then use the iOS notifications channel to push the alerts. The main problem with this is privacy: the email body will also, at some point, end up on IFTTT’s servers.

To solve this problem I created a custom sieve rule inside FastMail’s advanced rules section that strips everything other than the subject. It’s pretty simple and looks like this:

if true {
  notify :method "mailto" :options ["trigger@recipe.ifttt.com"] :message "/ $subject$ /";
}

This rule should probably go after your junk mail filters (which probably look similar to the below), unless you want to get notified about your junk mail too of course.

if not header :contains ["X-Spam-known-sender"] "yes" {
  if allof(
    header :contains ["X-Backscatter"] "yes",
    not header :matches ["X-LinkName"] "*" 
  ) {
    setflag "\\Seen";
    fileinto "INBOX.Junk Mail";
    stop;
  }
  if  header :value "ge" :comparator "i;ascii-numeric" ["X-Spam-score"] ["5"]  {
    setflag "\\Seen";
    fileinto "INBOX.Junk Mail";
    stop;
  }
}

I also have emails coming through to a work Google account that I like to get push notifications for (without using the Gmail app). IFTTT is a little limited as it only supports one email address in one email channel, so I set up Gmail to forward all my work emails to FastMail. I didn’t really want all this unrelated email cluttering up my inbox, so I added another rule, just after my notification rule, to discard them:

if header :matches ["x-forwarded-for"] "[email protected]" {
  discard;
}

And that’s all there is to it really - I get a (nearly instant) push notification that I have a new email and I can open the mail app of my choice when it’s convenient. One added benefit seems to be a little extra battery life after turning off fetch & push in the native iOS Mail app.

Cheltenham Literature Festival 2014

Radio Times Tent At Cheltenham Festival

The Cheltenham Literature Festival is, as the name suggests, a festival in the lovely Spa town of Cheltenham (seriously, go there, it’s a really nice place) that celebrates literature. It was formed in 1949 and I’ve attended for the past couple of years - thanks entirely to my wonderful girlfriend.

The festival takes place over ten days (starting on 3rd October this year) but it’s only really practical to stay in Cheltenham for a week due to travelling and work commitments. Unfortunately this meant that we had to miss one of the best days of the festival this year, the last day.

We still got to attend some pretty interesting talks though.


Welcome To Just A Minute!

One of my highlights of the week was a great performance of Just A Minute presented by Nicholas Parsons himself, with a panel consisting of Pam Ayres, Jenny Eclair and Shappi Khorsandi - the show’s first all-female panel since it first aired in 1967.

They were there, of course, to talk about Nicholas’ latest book, but the discussion quickly became a series of anecdotes from all present about the show and its past participants - this was great and gave an excellent insight into the past 900 episodes of this wonderful radio panel show.

Whilst it wasn’t entirely without hesitation, repetition or deviation it was very entertaining - including the brief intermission to deal with a slightly lost wasp.

What’s Next For Google

This talk was probably my biggest disappointment of the week. It was presented by Peter Fitzgerald, the UK Sales Director for Google. I was hoping for a discussion of the near-term future for Google, preferably including key issues such as privacy and government spying. It ended up, however, as a big Google X advertisement for our “possible future”. Granted, the talk was well rehearsed and Peter, as I’d expect, was a competent presenter, but he didn’t seem to want to be there at all.

Interestingly, during the Q&A session, a question was asked about wearables and Google’s future focus. Peter seemed pretty adamant that their focus was purely on the software. This seemed odd to me given Google’s ownership of Motorola and the recent launch of the Moto 360 - Glass didn’t even get a passing mention.

Keep Britain Tidy

Hester Vaizey presented some of her favourite posters from her new book along with some interesting facts. One of the most interesting was that the “Keep Calm And Carry On” poster we’ve all come to love (and probably hate) was never publicly displayed but was rediscovered in 2000 and rose to ubiquity from there.

Golden Days Of The Railway

This was a panel discussion between three authors and a poet to determine whether there was ever any such thing as the “Golden Days” and whether golden days in the future are a possibility. The panel consisted of:

Whilst the panel didn’t really come to a conclusion about the “Golden Days”, there were some interesting discussions, including their opinion that the ongoing restorations of the Flying Scotsman are a waste of money - even if it is good to have a living connection to the past. They also felt that state-controlled railways (such as those on the continent) function far better, and are more efficient, than the privately controlled railways here in the UK.

One other interesting fact: it apparently costs around £80 to change a fluorescent light bulb in a station here in the UK - ridiculous.

Agatha Christie And The Monogram Murders

Another panel discussion, this time about the new Agatha Christie continuation book, The Monogram Murders, with the author Sophie Hannah, Christie’s grandson Mathew Prichard and expert John Curran.

I’m not the biggest of Christie fans but it was interesting to hear how Sophie created a continuation story by maintaining the well-established Poirot character but changing the story’s narrator to suit her style of writing, rather than trying to write in the style of Agatha Christie herself. This is in contrast to many continuation books that have appeared recently, which choose to alter the main character in some significant way as well as copy the style of the original author.

Victoria: A Life

This talk by A.N. Wilson was more interesting than I expected it to be. He discussed his new book in which he explores, with new research, Queen Victoria as a successful diplomat, writer and anything other than a recluse after Prince Albert’s death.

He also talked about his disappointment at the number of letters in various archives that had been redacted / destroyed after her death leaving large holes in her personal history.


Overall the festival was pretty good, but I still think last year’s was better. One thing that was consistent was the overpriced and underwhelming food & drink - my advice would be to buy food from somewhere else and not from one of the festival tents.

Even if you don’t get the chance to attend a future festival, literature or otherwise, you should definitely spend some time in Cheltenham. The festivals are always interesting but Cheltenham itself improves them greatly.

My Field Notes

Refining My Podcast Recommendations

After much listening over the past couple of months I’ve added to and removed from my recommended podcasts list to create a playlist that doesn’t feel like a chore to listen to. My refined recommendations are:

Somehow I’ve ended up with one more than my previous selection, but I find this selection far less of a burden to listen to on a weekly basis.

Recommended Podcasts

Since I started working remotely I’ve been listening to a lot of podcasts, pretty much all of them tech related (and many of them Apple related). Here are the few that have kept me interested enough to stay in rotation:

Some of these have been discontinued but they’re still worth subscribing to - for the past episodes, and because they’ll soon be resurrected on Relay FM.

My favourite podcast client (podcatcher?) at the moment is Overcast, both Smart Speed and Voice Boost are definitely worth the IAP. There’s no Mac app for now but the web player works in a pinch (albeit without the best features of the app).

Digitising My Negatives

As I mentioned before, I recently began shooting on film again (a little bit anyway). Of course, this meant I wanted some way of getting my images from the negatives and into my digital library for a bit of light editing and sharing.

I found myself on Amazon looking at the plethora of cheap negative scanners. Most of these consist of a 5 MP CCD and a backlight; photos are scanned quickly, to JPEG, and mostly without the need to involve a computer. From what I could find though, this type of scanner has three problems: highly compressed JPEG output, relatively low-resolution ‘scans’ and extremely mixed quality output.

Maybe I was worrying too much about image quality - but they weren’t good enough for me. I wanted RAW images and higher-resolution output.

The most obvious choice would’ve been something like the Plustek OpticFilm 8100 or a negative-compatible flatbed scanner. I could’ve scanned as TIFF at high resolution and been done with it. The main problem with this solution was the price; I couldn’t justify the high cost for something I probably wouldn’t be doing often.

To this end, I decided to make use of what I already owned (or could make pretty easily).

The Setup

The camera setup itself wasn’t too complicated: I used my Nikon D700 with an old 105mm Micro-Nikkor. The equipment isn’t massively important though; as long as you have a decent DSLR, mirrorless or high-end compact / bridge with a lens that can get close enough to fill the frame with a negative then you’re probably going to get higher quality shots than a cheap dedicated negative scanner. RAW shooting is a massive plus though; the results will need some white-balance correction.

All of this needs to be mounted on a fairly sturdy tripod that can take the weight of your setup pointing straight down.

There are a couple of things you will need to be able to do though: focus manually (or at least fix the focus) and use a self-timer / cable release. Shooting so close, things can get blurry quickly.

One useful little accessory is a macro focusing rail; it allows you to finely tune the focus without having to mess around with the camera’s settings too much. It can be especially helpful with older, heavier lenses that tend to fall out of focus when the camera is pointing towards the ground and nudged slightly.

Probably the most difficult bit of the whole setup was coming up with some way to backlight the negative a suitable amount and evenly. Fortunately an Amazon shipping box, printer paper, a torch and a lot of packing tape came to the rescue.

As I didn’t have anything suitable to diffuse the light directly under the negative I made a relatively long tube (approx. 30 cm) and lined it with printer paper that curved up towards the negative-sized aperture I cut at one end of the box. This produced light that was diffuse enough to evenly illuminate the negative.

A special shout-out should probably go to the torch I used: the extremely bright LED Lenser P7. This is probably the best torch I’ve ever bought - super-bright for normal torching, with a neutral enough light temperature for small photography-related projects like this.

Now for the stuff that really matters…

The Settings

For my negatives I shot in manual mode: 1/50s, f/7.1, ISO 200. I left automatic white-balance enabled as I was shooting in RAW and the white balance would definitely have to be corrected in post-processing anyway.

I chose not to quite fill the frame with the negative to ensure I made the most of the lens’ sharpness in the centre. After cropping, most of my shots worked out at around 8 MP, which was pretty good going and definitely better than the cheap negative scanners.

The Results

Straight out of the camera this is how the negatives looked:

Inverting the photo quickly got me to something that looked more sensible. The blue cast to the image is down to the nature of colour negative film, and this is what needs to be white-balanced away. It can take a lot of playing with to get right but, once you’ve done it for a single image, it should be the same for the whole roll.
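
If you’d rather get to that starting point outside of a full photo editor, ImageMagick can do the inversion too - a rough sketch, assuming you’ve exported the RAW frame to a TIFF first (the file names here are made up):

# Invert the scanned negative and stretch the levels a little;
# the white balance still needs doing by eye afterwards
convert negative.tif -negate -auto-level positive.tif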

After a little pushing & prodding with your image editor of choice (mine is Aperture, but I guess that won’t be true for much longer) you can get something that looks perfectly acceptable.

To be honest, this photo probably isn’t the best example, but you can find some of the better ones (B&W and colour) in my Flickr album.

Something that did surprise me during this process was the amount of dynamic range I got from the negatives by digitising them in this way; I could see details in the negatives that the original prints didn’t even give the smallest clue to. The large RAW files also gave me a lot of latitude when I was editing - it was nice to maintain the atmosphere of film with the advantages of a digital editing workflow.

Did it take a while to do all this? Yes. Would I have been better off getting a scanner? Possibly. Would it have been anywhere near as satisfying or fun? Definitely not!

Shooting Film Again

I’ve recently started shooting a bit of film again so I ‘scanned’ and uploaded some of the results to Flickr. They’re all in my film album (along with some old ones I uploaded a good while ago). My scanning process isn’t exactly typical - but that’ll all be explained in a post that’s coming soon…

Using GitLab Omnibus With Passenger

GitLab is a great self-hosted alternative to GitHub. I’d set it up for other people before but it always seemed to be more hassle than it should be to update and maintain (especially with its monthly update cycle), so I’d never set it up for myself.

Thankfully GitLab now has omnibus packages available to make installation and maintenance much easier. Unfortunately these packages contain all of GitLab’s dependencies, including PostgreSQL, Nginx, Unicorn etc. This is great for running on a server dedicated to GitLab but not terribly useful for my setup.

I already had a Postgres database I wanted to make use of, along with an Nginx + Passenger setup for running Ruby applications. The following describes the configuration changes I needed to make to fit GitLab omnibus into my setup.

The first steps are to create a PostgreSQL user and database for your GitLab instance and to install your chosen omnibus package following the installation instructions - before reconfiguring anything, though, we need to add some config first.
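
Creating the user and database might look something like the following - the role name, password and database name are placeholders (gitlabhq_production is just the usual omnibus default), so match them to whatever you put in gitlab.rb below:

# Create a dedicated PostgreSQL role and database for GitLab (placeholder names)
sudo -u postgres createuser --no-superuser --no-createrole --createdb username
sudo -u postgres psql -c "ALTER USER username WITH PASSWORD 'password';"
sudo -u postgres createdb --owner=username gitlabhq_production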

The first bit of config goes in /etc/gitlab/gitlab.rb; this sets up the external URL for your GitLab instance, configures the database and disables the built-in Postgres, Nginx and Unicorn servers:

external_url "http://git.yourdomain.com"

# Disable the built-in Postgres
postgresql['enable'] = false

# Fill in the values for database.yml
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = '5432'
gitlab_rails['db_username'] = 'username'
gitlab_rails['db_password'] = 'password'

# Configure other GitLab settings
gitlab_rails['internal_api_url'] = 'http://git.yourdomain.com'

# Disable the built-in nginx
nginx['enable'] = false

# Disable the built-in unicorn
unicorn['enable'] = false

Now you can run sudo gitlab-ctl reconfigure; this should set up all of GitLab’s configuration files correctly, with your settings, and migrate the database. You’ll also need to run sudo gitlab-rake gitlab:setup to seed the database (this is a destructive task, do not run it on an existing database).
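
In other words:

# Apply the configuration above, then seed the empty database
sudo gitlab-ctl reconfigure
# Destructive - only ever run this against a fresh database
sudo gitlab-rake gitlab:setup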

The final bit of configuration goes in /etc/nginx/sites-enabled/gitlab.conf (this assumes you have Nginx + Passenger installed from their instructions):

server {
  listen *:80;
  server_name git.yourdomain.com;
  server_tokens off;
  root /opt/gitlab/embedded/service/gitlab-rails/public;

  client_max_body_size 250m;

  access_log  /var/log/gitlab/nginx/gitlab_access.log;
  error_log   /var/log/gitlab/nginx/gitlab_error.log;

  passenger_ruby /opt/gitlab/embedded/bin/ruby;
  passenger_set_cgi_param PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";
  passenger_user git;
  passenger_group git;

  passenger_enabled on;
  passenger_min_instances 1;

  error_page 502 /502.html;
}

Most of the above configuration comes directly from GitLab’s omnibus configuration, with a few customisations for Passenger. The important parts are setting the user and group correctly so there are no permission issues, and ensuring that the correct directories are on the $PATH to prevent errors in GitLab’s admin interface.

Currently, files uploaded into GitLab may not appear correctly using these instructions due to a permission issue. This should be corrected with a future omnibus release; more discussion can be found in this merge request.
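
In the meantime, if you do hit it, the first thing I’d try (this is my own guess, not an official fix) is making sure the uploads directory is owned by the git user:

# Possible workaround (untested guess): give the git user ownership of the uploads directory
sudo chown -R git:git /var/opt/gitlab/gitlab-rails/uploads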

And we’re done. That’s about everything - hope it works for you too.

Monitoring Internet Connection Speed With Munin

Considering that an internet connection is now deemed to be a human right, you’d think that ISPs, which generally seem to rip us off anyway, would’ve managed to make their connections nice and reliable - especially when you take into account that the internet has been around for a good few years now and they’ve had plenty of time to get it right. Unfortunately this isn’t the case, and I decided that I wanted a way to make sure that I wasn’t getting (too) ripped off by my ISP.

To this end I decided to make use of the tools I already had available: Munin for the monitoring and graphing, and Speedtest to test the connection.

(The following is entirely Debian & Munin 2 biased, you may need to tweak it for your particular setup.)

The first job was to find some way to run Speedtest from the command line. Fortunately, while I was stumbling around the internet, I came across speedtest-cli, which makes life much easier. So first we need to get a copy of the script and put it somewhere:

git clone git://github.com/sivel/speedtest-cli.git speedtest-cli

(You’ll probably need to make sure you have a copy of Python installed too; for more info check out speedtest-cli’s GitHub page.)
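
If you’re not sure whether Python is already there, a quick check (assuming a Debian-like system, as above) looks like:

# Check for Python and install it if it's missing (Debian / Ubuntu)
python --version || sudo apt-get install -y python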

Then we need to get some useful numbers from the script. We do this as a cron job because the script can take a while to run, uses a lot of bandwidth and tends to time out when run from Munin directly.

Create the file /etc/cron.d/speedtest containing the following (modifying the speedtest-cli path to suit your needs of course):

0,20,40 * * * * root /usr/local/speedtest-cli/speedtest-cli --simple > /tmp/speedtest.out

Next we need a Munin script to create the graphs. The script below should go in /etc/munin/plugins/speedtest - don’t forget to make it executable too, or it might not run (chmod +x /etc/munin/plugins/speedtest):

#!/bin/bash

# Munin plugin: graphs download / upload speed using the speedtest-cli
# output written to /tmp/speedtest.out by the cron job above.

# When munin-node asks for the configuration, describe the graph and its fields
if [ "$1" = "config" ]; then
  echo "graph_category network"
  echo "graph_title Speedtest"
  echo "graph_args --base 1000 -l 0"
  echo "graph_vlabel DL / UL"
  echo "graph_scale no"
  echo "graph_info Graph of Internet Connection Speed"
  echo "down.label DL"
  echo "down.type GAUGE"
  echo "down.draw LINE1"
  echo "up.label UL"
  echo "up.type GAUGE"
  echo "up.draw LINE1"
  exit 0
fi

# Otherwise report the latest values parsed from the cron job's output
OUTPUT=`cat /tmp/speedtest.out`
DOWNLOAD=`echo "$OUTPUT" | grep Download | sed 's/[a-zA-Z:]* \([0-9]*\.[0-9]*\) [a-zA-Z/]*/\1/'`
UPLOAD=`echo "$OUTPUT" | grep Upload | sed 's/[a-zA-Z:]* \([0-9]*\.[0-9]*\) [a-zA-Z/]*/\1/'`

echo "down.value $DOWNLOAD"
echo "up.value $UPLOAD"

Finally, restart munin-node (probably something along the lines of /etc/init.d/munin-node restart), give it an hour or so and enjoy your new statistics.

Check back soon.

Automatic Build Numbers In Xcode

As part of the project I’m working on at the moment I wanted a way to automatically update a target’s build number in Xcode. My preferred setting for the build number is the current SCM revision; the script below automatically updates the build number in the Info.plist file each time the application is built.

Just add the script as a ‘Run Script’ build phase for your chosen target in Xcode. The script is only designed for Git, SVN & Git-SVN as they’re all I use at the moment.

#!/bin/bash

#
# Automatically sets the build number for a target in Xcode based
# on either the Subversion revision or Git hash depending on the
# version control system being used
#
# Simply add this script as a 'Run Script' build phase for any
# target in your project
#

git status
if [ "$?" -eq "0" ]
then

    git svn info
    if [ "$?" -eq "0" ]
    then
        buildNumber=$(git svn info | grep Revision | awk '{print $2}')
    else
        buildNumber=$(git rev-parse HEAD | cut -c -10)
    fi
elif [ -d ".svn" ]
then
    buildNumber=$(svn info | grep Revision | awk '{print $2}')
else
    buildNumber=$(/usr/libexec/PlistBuddy -c "Print :CFBundleVersion" Source/Application-Info.plist)
fi

/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildNumber" Source/Application-Info.plist

Check back soon.

A Little More About Me

And while I remember, you can also find out a little more about me at alexpardoe.co.uk.

Check back soon.

Me On The Internet

As I never seem to have anything interesting enough to write a blog post of any length about, it seems like a good time to mention all the other (far more frequently updated) places that you can find me on the internet.

The popular places (where I’m known as digitalpardoe):

If you must:

Really?!:

There’s probably a lot more places you can find me that I’ve forgotten about, but if you haven’t already realised, my username tends to be digitalpardoe so just try a search on your ‘social’ site of choice.

Strangely, in putting this together, I seem to have thought of something longer to write about.

Check back soon.

Open Sourcing A Few Projects

I’ve finally gotten around to it, writing another blog post, almost one year since the last post containing any meaningful content. Rather than apologising for the hiatus and promising to blog more I will instead move on to something more interesting.

Last year I took the decision to open source two of my main projects, iSyncIt and Set Icon. I didn’t make a big deal about doing it - in fact, I didn’t make any ‘deal’ at all, I just set the GitHub repos to ‘public’. Consider this the (6 months late) announcement of their open sourcing.

The iSyncIt repository contains almost the complete history of iSyncIt development. Unfortunately I started development of iSyncIt before I discovered version control and as a result some of the history is only available as source code bundles in the downloads area.

Fortunately Set Icon development started after I had discovered the advantages of version control so its full development history can be seen in the GitHub repository.

Both of the projects have a fairly non-restrictive license, you can read it in either repository. The downloads section on GitHub for both projects also contains all of the versions of the applications I have ever publicly released.

Now for something a little more current.

This afternoon I flicked the switch that open sourced my final-year university project, Chroma32 (under the same license as the other two). The original idea was to create a (dissertation-grand sounding) ‘photographic asset management system’; the scope eventually morphed into creating a document management system that was as extensible as possible.

The whole project was built around alpha & beta versions of Rails 3 and the alpha-version gems that go along with it. Overall I ended up with a themeable system with reasonably tight integration for complex plugins.

If you want to discover more, clone a copy from its GitHub repo and hack away.

Check back soon, go on, it might actually be worth it from now on. I promise.

What Makes A Rich Internet Application (RIA)?

Disclaimer: This blog post may seem a little outdated - after all, the term RIA seems to be slowly dropping out of the hype-o-sphere and being replaced with the “cloud” - however, rich internet applications are still the cutting edge of many an enterprise implementation, hence this blog post.


The internet is changing: connections are getting faster, web browsers are getting more advanced and the technologies behind the internet are being constantly improved & updated. Due to this rapid evolution more and more companies are offering services that run in the cloud, accessible anywhere, anytime, anyplace.

Virtually every application that an average user would expect to find on their desktop computer can now be found somewhere on the internet. These are the rich internet applications, applications that finally break free of the desktop into the world of infinite storage and always-on availability. This blog post aims to discuss the factors that are required (in my opinion) to produce a rich internet application.

Definition

According to David Mendels (one of the group that coined the phrase ‘rich internet application’ at Macromedia) the most basic definition of an RIA is ‘no page refresh’ (or ‘single screen’ depending on your interpretation). But he himself admits that this was the definition ‘at the time’. [Based on a comment by David at RedMonk].

In the current web-sphere many websites appear to classify themselves as RIAs; this is probably due, in part, to the rise of the term ‘rich internet application’ as a buzz-phrase among developers and technology publications. Many technologists involved with RIAs now argue that any website that requires some form of browser-based plugin can be categorised as an RIA, but in a world of desktop-replacement web applications does the term still apply to websites that simply include a Flash video or make extensive use of AJAX to prevent page reloading?

Redefining RIAs

After trawling through many of the different websites that consider themselves rich internet applications I fully agree with the original definition that an RIA must have no (or very little) page refresh. This is one of the factors that makes an RIA more like a desktop application in terms of user experience - you wouldn’t expect the application window in Excel to completely redraw itself every time you switch workbooks, so why put up with it in the web applications you use?

Every website I came across that I would consider to be an RIA also shared another common attribute: the lack of full-page scroll bars. Many of them contained scroll bars to navigate through subsections of content, but none ever forced me to trawl through large pages and lose access to key navigational features. Again, this is reminiscent of most, if not all, desktop applications. A desktop application will nearly always retain the placement of navigational features, the most obvious of these being the menu bar at the top of a window (or screen).

The use of browser plugins and ‘rich media’, however, was not a given in the RIAs that I came across. Many created a more than acceptable user experience through the use of JavaScript, HTML and a few images - features available in all modern web browsers.

Personally I believe that the only websites that should be considered ‘rich internet applications’, the key word being ‘applications’, are those that most effectively simulate the desktop application user experience; this does not, however, mean that RIAs should only be limited to the functionality that a desktop application can provide. The World Wide Web offers far greater scope in terms of storage, processing, scalability, accessibility and social interaction - features which should be embraced in the creation of rich internet applications and can only serve to augment the user experience.

Conclusion

In this blog post I have discussed, in very simplistic terms, what, in my opinion, makes an RIA. It isn’t the inclusion of media-heavy content, or the ability to load content without reloading the whole page. It is the ability of a website to simulate a desktop user experience, effectively allowing the user to easily replace any desktop application with a browser-based clone.

In the context of modern rich internet applications the browser should be seen, not as a way of ‘browsing the internet’, but as a shell that provides a user with access to the applications which they use every day. The web browser is the operating system of the RIA world.

Check back soon.

Moving To Typo

As you may or may not be aware, digital:pardoe has, for the past 4 years, been running atop a custom blogging engine that I developed as a way to learn Ruby on Rails. Whilst the system has (nearly) always been stable and (nearly) always fast, I felt it was time to retire it, from everyday use at least.

When using the digital:pardoe blogging engine the ‘blogging’ always felt secondary to the actual development of the blog; I always found myself doing far more of the ‘adding new features’ than the ‘adding new posts’, which, at least in recent months, is not what I intended to happen.

Unfortunately, the loss of my bespoke blogging engine also means a loss of some of the bespoke features I added to the website. The downloads (previously ‘software’) area is now very cut down – everything is still available as before though. The ‘photo’ section has now disappeared completely, if you want to see my photographs you’ll have to visit my Flickr account instead. I’ve made every effort to redirect old pages to their new location but if you find a page that is missing please contact me so I can fix the problem.

In the near future I intend to release the digital:pardoe blogging engine source code (once I’ve cleaned it up of course) as it may be a useful reference to other new RoR developers. Don’t expect the default Typo theme to stick around for long either, I’m currently in the process of porting a digital:pardoe theme to Typo.

And, if you hadn’t guessed already, digital:pardoe is now Typo powered.

Check back soon.

'Merge All Windows' AppleScript

A friend reminded me of how nice it would be to merge all the windows in Safari back together using LaunchBar (or QuickSilver for that matter). So I wrote a quick AppleScript to accomplish the task:

on gui_scripting_status()
  tell application "System Events"
    set ui_enabled to UI elements enabled
  end tell
  if ui_enabled is false then
    tell application "System Preferences"
      activate
      set current pane to pane id "com.apple.preference.universalaccess"
      display dialog "The GUI scripting architecture of Mac OS X is currently disabled." & return & return & "To activate GUI Scripting select the checkbox \"Enable access for assistive devices\" in the Universal Access preference pane." with icon 1 buttons {"Okay"} default button 1
    end tell
  end if
  return ui_enabled
end gui_scripting_status

on click_menu(app_name, menu_name, menu_item)
  try
    tell application app_name
      activate
    end tell
    tell application "System Events"
      click menu item menu_item of menu menu_name of menu bar 1 of process app_name
    end tell
    return true
  on error error_message
    return false
  end try
end click_menu

if gui_scripting_status() then
  click_menu("Safari", "Window", "Merge All Windows")
end if

The ‘gui_scripting_status()’ routine is taken and modified from code that can be found here: http://www.macosxautomation.com/applescript/uiscripting/index.html.

Check back soon.

Software Updates

I’ve finally gotten round to updating iSyncIt and Set Icon for Snow Leopard, as always you can download iSyncIt here and Set Icon here.

The new release of iSyncIt fixes the only bug I could find under Snow Leopard – the icon not changing correctly under Bluetooth on / off conditions.

The Set Icon release fixes the problem of the application not performing its one and only function – setting a drive icon. Along with the bug fix I modified the image conversion to prevent the (frankly awful) stretching of non-square images to fill a 512×512 icon; images now scale nicely. I also removed the terrible tool-tips that show up when you start the application - I used to think they were ‘cool’ but soon realised the error of my ways. However, in their place I added some ‘brilliant’ window resizing when you remove an icon – we’ll see how long that lasts. Oh, the application will also run as a 64 bit application now – not that that makes any difference whatsoever, I just did it because I could.

Check back soon.

Clearing Out Old Sessions

A while ago I started setting up my websites to use ActiveRecord as a session store; this means that the session information for all the visitors to my website is placed in a table in my database. It may or may not be the best way to store sessions, but it’s certainly faster than the filesystem and my VPS doesn’t really have the memory capacity for an ‘in memory’ store.
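
For anyone wanting to do the same, the switch itself is small - this is only a sketch for the Rails 2.x era setup this post is based on:

# Generate and run the migration for the sessions table
rake db:sessions:create
rake db:migrate
# ...then point Rails at it, in config/environment.rb:
#   config.action_controller.session_store = :active_record_store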

Anyway, one day I decided to perform some DB maintenance - check the tables were okay and so on. Upon logging into the DB I noticed that my sessions table had grown quite large, almost 125,000 records; little did I realize that sessions are persisted forever in the DB.

I didn’t think it was the best idea to keep all the session data so I wrote the following script and put it in lib/tasks/session.rake:

namespace :session do
  desc "Prune old session data"
  task :prune => :environment do
    sql = ActiveRecord::Base.connection()
    sql.execute "SET autocommit=0"
    sql.begin_db_transaction
      response = sql.execute("DELETE FROM sessions WHERE `updated_at` < DATE_SUB(CURDATE(), INTERVAL 1 DAY)");
    sql.commit_db_transaction
  end
end

This gave me a ‘session:prune’ rake task. The task removes all sessions older than 1 day from the sessions table. I then added a cron job in the following format:

0 0 * * * cd /home/user/railsapp && rake RAILS_ENV=production session:prune > /dev/null 2>&1

The job above basically calls the session:prune rake task at midnight every night.

The code in the task is MySQL-specific but, without a model representing the session table, I couldn’t think of a way to make the code any more Ruby-fied. In the event that you do have, or decide to create, a session model the following code may work in your task (warning: untested):

Session.destroy_all(["created_at < ?", Time.now - 1.day])

Hope the above snippet solves at least one of your ActiveRecord session woes.

Check back soon.

Clearing Out The rFlickr Cache

Assuming you’ve followed the Caching Your Photographs tutorial at some point, you’ll probably have had a lot of fun either deleting the cache every time you upload a new photo or writing your own automated method by now. For those of you that haven’t written your own method of dumping the cache yet, here’s how I do it.

First of all, I created a lib/actions folder in the root of my rails project. Inside this folder I created the file ‘photography_action.rb’ with the following contents:

class PhotographyAction
  def self.clear_cache
    ActionController::Base.new.expire_fragment(%r{photography.cache})
  end
end

The above fragment naming assumes that your photos are on a page called ‘photography’; if they are elsewhere, change the fragment to expire that page instead.

Fairly simple I think you’ll agree. You may also be asking yourself ‘why the extra file?’ - the main reason for the new file is so that the cache clearing can be executed from a new rake task that doesn’t remove all your cached pages, or from an admin page on the website.

You’ll also need to update your config.load_paths in environment.rb. After updating, mine looks like this:

config.load_paths += %W( #{RAILS_ROOT}/app/sweepers #{RAILS_ROOT}/lib/actions )

Inside some action in some, preferably protected, controller somewhere, add the following (redirecting to anywhere you fancy):

PhotographyAction.clear_cache
redirect_to :action => 'index'

Now for the rake task. Inside the directory lib/tasks (create it if it doesn’t exist) create the file photography.rake, then put the following code inside the file:

namespace :photo do
  namespace :cache do
    desc "Clear out photography cache"
    task :clear => :environment do
      PhotographyAction.clear_cache
    end
  end
end

You should then be able to run:

rake photo:cache:clear

from the base of your project in order to clear the cache.

Bear in mind, the code above is literally just a convenient way of clearing out the fragment cache so new photos show up on your photo page; it does not delete photos, nor does it perform a refresh automatically - although you could add it to a cron job.
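
If you do want it running on a schedule, a cron entry like the following would do the trick (the path and schedule are placeholders):

# Clear the photography fragment cache once an hour
0 * * * * cd /home/user/railsapp && rake RAILS_ENV=production photo:cache:clear > /dev/null 2>&1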

When I get the chance, I intend to automate this process and build it into rFlickr along with a new, improved caching mechanism. I’m sure the above will tide you over for now though.

Check back soon.

New rFlickr Ruby Gem

After I started to use the rFlickr gem it didn’t take me long to realize that development of the gem had all but halted. Yes, it worked, which was more than the original Flickr gem did, but it was still a little bit out of date and, in the end, a little bit broken.

In one of my older posts I documented a fix for the gem and provided a download to unzip into your plugins folder, however, with the advent of the wonderful GitHub and its marvelous gem support I’ve decided to move the project onto GitHub.

I have preserved the original gem’s GPL license and copied the source code from its original repository on RubyForge to a new, public, GitHub repository. In the process of the move I have dropped old code from the project, updated the readme & license information and generally performed a little house-keeping.

You can find the project at: http://github.com/digitalpardoe/rflickr/. You can install the gem using one of the following methods. The first involves adding GitHub as a gem source (always a good idea) and installing the gem:

gem sources -a http://gems.github.com
sudo gem install digitalpardoe-rflickr

The second method is to add the gem as a dependency in the environment.rb of your Rails project:

config.gem 'digitalpardoe-rflickr', :lib => 'flickr', :source => 'http://gems.github.com'

And run a rake task to install the gem:

sudo rake gems:install

Whilst performing the code migration I also added the fix that was documented in my original post and implemented support for the (not so) new ‘farm’ based Flickr URLs for images (which should make things easier to implement).

The future plans for rFlickr include new tests, improved usage examples, updated readme / documentation and implementation of missing API methods, time permitting of course.

Until the readme is updated please refer to the original post for information on how to use rFlickr.

That’s all for now, enjoy the new gem and as they say, if you don’t like it, fork it.

Guess Who's Back

As you may have noticed, it’s been a long time since my last post. There isn’t really any good reason for this. Plenty has happened, I just haven’t got round to writing any of it down.

First off I’d like to mention the website. It went through a fairly radical redesign a few months ago and I mentioned nothing about it. For some reason it’s not in my nature to be happy with what I make, hence the many faces and iterations of the website. This website, whilst being my home on the internet, is also the test bed for my RoR programming - you may get tired of hearing about its re-designs and re-codes but that’s part of the reason I created it. Anyway, another re-design is coming; this time it’s not visual but all back end. The main difference you will notice is that I am doing away with user accounts and having a more open comment system (I could be shooting myself in the foot with this decision, we’ll have to see how the spam bots take it). To the people that have commented on the blog already: your comments will be preserved and, when I roll out the changes, I intend to reply to all the comments I haven’t yet replied to.

The second thing I wanted to mention, again website related, is my hosting. A good proportion of my posts seem to be apologizing for the downtime of the website. I was actually getting pretty bored of this so I decided to, quite literally, take matters into my own hands. The website is now hosted on a virtual private server set up and maintained by me. This, again, may be a case where I’ve shot myself in the foot. For those of you interested, the VPS is provided by the wonderful folks at Bytemark Hosting.

Number three. Many of the posts on my website relate to the use of the ‘rflickr’ RubyGem. Development of this gem seems to have been at a standstill for a good while now, so I’ve taken the decision to clone it and try to continue development in my spare time. More on this in a later post.

Four. Any of you interested in my photography will have noticed a lack of it over the past few months, it’s not that I haven’t been taking any photographs, it’s just that I’ve not published any. To try and remedy this I uploaded a batch of photos today that have been sitting on my computer for a while. You can take a look at them on the photo page of the website or on my Flickr page.