Using GitLab Omnibus With Passenger

GitLab is a great self-hosted alternative to GitHub. I’d set it up for other people before but it always seemed to be more hassle than it should be to update and maintain (especially with its monthly release cycle) so I’d never set it up for myself.

Thankfully GitLab now has omnibus packages available to make installation and maintenance much easier. Unfortunately these packages bundle all of GitLab’s dependencies, including PostgreSQL, Nginx, Unicorn etc. This is great for running on a server dedicated to GitLab but not terribly useful for my setup.

I already had a Postgres database I wanted to make use of, along with an Nginx + Passenger setup for running Ruby applications. The following describes the configuration changes I needed to make to fit GitLab omnibus into my setup.

The first steps are to create a PostgreSQL user and database for your GitLab instance and to install your chosen omnibus package following its installation instructions. Don’t run the reconfigure step just yet though, we need to add some config first.
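
Creating the user and database will look something like the following; the gitlab role and gitlabhq_production database names here are just the conventional defaults, substitute your own:

sudo -u postgres psql -c "CREATE USER gitlab WITH PASSWORD 'password';"
sudo -u postgres psql -c "CREATE DATABASE gitlabhq_production OWNER gitlab;"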

The first bit of config goes in /etc/gitlab/gitlab.rb. This sets up the external URL for your GitLab instance, configures the database and disables the built-in Postgres, Nginx and Unicorn servers:

external_url "http://git.yourdomain.com"

# Disable the built-in Postgres
postgresql['enable'] = false

# Fill in the values for database.yml
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['db_host'] = '127.0.0.1'
gitlab_rails['db_port'] = '5432'
gitlab_rails['db_username'] = 'username'
gitlab_rails['db_password'] = 'password'

# Configure other GitLab settings
gitlab_rails['internal_api_url'] = 'http://git.yourdomain.com'

# Disable the built-in nginx
nginx['enable'] = false

# Disable the built-in unicorn
unicorn['enable'] = false

Now you can run sudo gitlab-ctl reconfigure. This should set up all of GitLab’s configuration files correctly, with your settings, and migrate the database. You’ll also need to run sudo gitlab-rake gitlab:setup to seed the database (this is a destructive task, do not run it against an existing database).
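
If you want to sanity-check the installation at this point, the omnibus package also bundles GitLab’s self-check rake task:

sudo gitlab-rake gitlab:check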

The final bit of configuration goes in /etc/nginx/sites-enabled/gitlab.conf (this assumes you have Nginx + Passenger installed following Passenger’s installation instructions):

server {
  listen *:80;
  server_name git.yourdomain.com;
  server_tokens off;
  root /opt/gitlab/embedded/service/gitlab-rails/public;

  client_max_body_size 250m;

  access_log  /var/log/gitlab/nginx/gitlab_access.log;
  error_log   /var/log/gitlab/nginx/gitlab_error.log;

  passenger_ruby /opt/gitlab/embedded/bin/ruby;
  passenger_set_cgi_param PATH "/opt/gitlab/bin:/opt/gitlab/embedded/bin:/usr/local/bin:/usr/bin:/bin";
  passenger_user git;
  passenger_group git;

  passenger_enabled on;
  passenger_min_instances 1;

  error_page 502 /502.html;
}

Most of the above configuration comes directly from GitLab’s omnibus Nginx configuration, with a few customisations for Passenger. The important options are setting the user and group correctly, so there are no permission issues, and ensuring that the correct directories are present in the $PATH variable to prevent errors in GitLab’s admin interface.
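
Once the server block is saved, reload Nginx so it picks up the new site. The exact command depends on your setup; on a Debian-style system it will be something like:

sudo service nginx reload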

Currently, files uploaded into GitLab may not appear correctly using these instructions due to a permission issue. This should be corrected in a future omnibus release; more discussion can be found in this merge request.

And we’re done.

That’s about everything, hope it works for you too.


Monitoring Internet Connection Speed With Munin

Considering that an internet connection is now deemed to be a human right, you’d think that ISPs, which generally seem to rip us off anyway, would’ve managed to make their connections nice and reliable, especially when you take into account that the internet has been around for a good few years now and they’ve had plenty of time to get it right. Unfortunately this isn’t the case & I decided that I wanted a way to make sure that I wasn’t getting (too) ripped off by my ISP.

To this end I decided to make use of the tools I already had available: Munin for the monitoring and graphing, and a speed test to exercise the connection.

(The following is entirely Debian & Munin 2 biased, you may need to tweak it for your particular setup.)

The first job was to find some way to run a speed test from the command line. Fortunately, while I was stumbling around the internet, I came across speedtest-cli, which makes life much easier. So first we need to get a copy of the script and put it somewhere:

git clone git://github.com/sivel/speedtest-cli.git /usr/local/speedtest-cli

(You’ll probably be needing to make sure you have a copy of Python installed too, for more info check out speedtest-cli’s GitHub page.)

Then we need to get some useful numbers from the script. We do this as a cron job because the script can take a while to run, uses a fair amount of bandwidth & tends to time out when run by Munin directly.

Create the file /etc/cron.d/speedtest containing the following (modifying the speedtest-cli path to suit your needs of course):

0,20,40 * * * * root /usr/local/speedtest-cli/speedtest-cli --simple > /tmp/speedtest.out
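
The --simple flag keeps the output easy to parse; at the time of writing it looks roughly like this (your numbers will obviously differ):

Ping: 20.5 ms
Download: 45.23 Mbit/s
Upload: 9.87 Mbit/s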

Next we need a Munin plugin to create the graphs. The script below should go in /etc/munin/plugins/speedtest; don’t forget to make it executable too, or it won’t run (chmod +x /etc/munin/plugins/speedtest):

#!/bin/bash

# Munin calls a plugin with "config" to describe the graph and with
# no arguments to fetch the current values, so handle the two cases
# separately
if [ "$1" = "config" ]; then
  echo "graph_category network"
  echo "graph_title Speedtest"
  echo "graph_args --base 1000 -l 0"
  echo "graph_vlabel DL / UL"
  echo "graph_scale no"
  echo "graph_info Graph of Internet Connection Speed"
  echo "down.label DL"
  echo "down.type GAUGE"
  echo "down.draw LINE1"
  echo "up.label UL"
  echo "up.type GAUGE"
  echo "up.draw LINE1"
  exit 0
fi

# Pull the download & upload figures out of the cron job's output
OUTPUT=$(cat /tmp/speedtest.out)
DOWNLOAD=$(echo "$OUTPUT" | grep Download | sed 's/[a-zA-Z:]* \([0-9]*\.[0-9]*\) [a-zA-Z/]*/\1/')
UPLOAD=$(echo "$OUTPUT" | grep Upload | sed 's/[a-zA-Z:]* \([0-9]*\.[0-9]*\) [a-zA-Z/]*/\1/')

echo "down.value $DOWNLOAD"
echo "up.value $UPLOAD"

Finally, restart munin-node (probably something along the lines of /etc/init.d/munin-node restart), give it an hour or so and enjoy your new statistics.
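
If you’d rather not wait to find out whether everything is wired up correctly, munin-run (shipped with munin-node) will execute the plugin by hand:

sudo munin-run speedtest config
sudo munin-run speedtest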

Check back soon.


Automatic Build Numbers In Xcode

As part of the project I’m working on at the moment I wanted a way to automatically update a target’s build number in Xcode. My preferred setting for the build number is the current SCM revision, so the script below automatically updates the build number in the Info.plist file each time the application is built.

Just add the script as a ‘Run Script’ build phase for your chosen target in Xcode. The script is only designed for Git, SVN & Git-SVN as they’re all I use at the moment.

#!/bin/bash

#
# Automatically sets the build number for a target in Xcode based
# on either the Subversion revision or Git hash depending on the
# version control system being used
#
# Simply add this script as a 'Run Script' build phase for any
# target in your project
#

# Figure out which VCS is in use & grab a revision identifier from
# it, silencing the probing commands so they don't clutter the build
# log
if git status &> /dev/null
then
    if git svn info &> /dev/null
    then
        # Git-SVN: use the Subversion revision number
        buildNumber=$(git svn info | grep Revision | awk '{print $2}')
    else
        # Plain Git: use the first 10 characters of the HEAD hash
        buildNumber=$(git rev-parse HEAD | cut -c -10)
    fi
elif [ -d ".svn" ]
then
    # Subversion: use the revision number
    buildNumber=$(svn info | grep Revision | awk '{print $2}')
else
    # No VCS found: leave the existing build number untouched
    buildNumber=$(/usr/libexec/PlistBuddy -c "Print :CFBundleVersion" Source/Application-Info.plist)
fi

/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildNumber" Source/Application-Info.plist
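
The Source/Application-Info.plist path above is specific to my project layout. If you’d rather not hard-code it, Xcode exposes the plist location to ‘Run Script’ phases via build settings, so something along these lines should work instead:

/usr/libexec/PlistBuddy -c "Set :CFBundleVersion $buildNumber" "${PROJECT_DIR}/${INFOPLIST_FILE}"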

Check back soon.


Me On The Internet

As I never seem to have anything interesting enough to write a blog post of any length about, it seems like a good time to mention all the other (far more frequently updated) places that you can find me on the internet.

The popular places (where I’m known as digitalpardoe):

If you must:

  • Facebook: https://www.facebook.com/digitalpardoe

Really?!:

There’s probably a lot more places you can find me that I’ve forgotten about, but if you haven’t already realised, my username tends to be digitalpardoe so just try a search on your ‘social’ site of choice.

Strangely, in putting this together, I seem to have thought of something longer to write about.

Check back soon.


Open Sourcing A Few Projects

I’ve finally gotten around to it, writing another blog post, almost one year since the last post containing any meaningful content. Rather than apologising for the hiatus and promising to blog more I will instead move on to something more interesting.

Last year I took the decision to open source two of my main projects, iSyncIt and Set Icon. I didn’t make a big deal about doing it, in fact, I didn’t make any ‘deal’ at all, I just set the GitHub repos to ‘public’. Consider this the (6 months late) announcement of their open sourcing.

The iSyncIt repository contains almost the complete history of iSyncIt development. Unfortunately I started development of iSyncIt before I discovered version control and as a result some of the history is only available as source code bundles in the downloads area.

Fortunately Set Icon development started after I had discovered the advantages of version control so its full development history can be seen in the GitHub repository.

Both of the projects have a fairly non-restrictive license, you can read it in either repository. The downloads section on GitHub for both projects also contains all of the versions of the applications I have ever publicly released.

Now for something a little more current.

This afternoon I flicked the switch that open sourced my final-year university project, Chroma32 (under the same license as the other two). The original idea was to create a (dissertation-grand sounding) ‘photographic asset management system’, the scope eventually morphed into creating a document management system that was as extensible as possible.

The whole project was built around alpha & beta versions of Rails 3 and the alpha-version gems that go along with it. Overall I ended up with a themeable system with reasonably tight integration for complex plugins.

If you want to discover more, clone a copy from its GitHub repo and hack away.

Check back soon, go on, it might actually be worth it from now on. I promise.


What Makes A Rich Internet Application (RIA)?

Disclaimer: This blog post may seem a little outdated; after all, the term RIA seems to be slowly dropping out of the hype-o-sphere and being replaced with the “cloud”. However, rich internet applications are still the cutting edge of many an enterprise implementation, hence this blog post.

The internet is changing: connections are getting faster, web browsers are getting more advanced and the technologies behind the internet are being constantly improved & updated. Due to this rapid evolution more and more companies are offering services that run in the cloud, accessible anywhere, anytime, anyplace.

Virtually every application that an average user would expect to find on their desktop computer can now be found somewhere on the internet. These are the rich internet applications, applications that finally break free of the desktop into the world of infinite storage and always-on availability. This blog post aims to discuss the factors that are required (in my opinion) to produce a rich internet application.

Definition

According to David Mendels (one of the group that coined the phrase ‘rich internet application’ at Macromedia) the most basic definition of an RIA is ‘no page refresh’ (or ‘single screen’ depending on your interpretation). But he himself admits that this was the definition ‘at the time’. [Based on a comment by David at RedMonk].

In the current web-sphere many websites appear to classify themselves as RIAs. This is probably due, in part, to the rise of ‘rich internet application’ as a buzz-phrase among developers and technology publications. Many technologists involved with RIAs now argue that any website that requires some form of browser-based plugin can be categorised as an RIA, but in a world of desktop-replacement web applications does the term still apply to websites that simply include a Flash video or make extensive use of AJAX to prevent page reloading?

Redefining RIAs

After trawling through many of the different websites that consider themselves rich internet applications, I fully agree with the original definition that an RIA must have no (or very little) page refresh. This is one of the factors that makes an RIA more like a desktop application in terms of user experience: you wouldn’t expect the application window in Excel to completely re-draw itself every time you switch workbooks, so why put up with it in the web applications you use?

Every website I came across that I would consider to be an RIA also shared another common attribute: the lack of full-page scroll bars. Many of them contained scroll bars to navigate through subsections of content, but none ever forced me to trawl through large pages and lose access to key navigational features. Again, this is reminiscent of most, if not all, desktop applications. A desktop application will nearly always retain the placement of navigational features, the most obvious of these being the menu bar at the top of a window (or screen).

The use of browser plugins and ‘rich media’, however, was notably absent from the RIAs that I came across. Many created an excellent user experience through the use of just JavaScript, HTML and a few images, features available in all modern web browsers.

Personally I believe that the only websites that should be considered ‘rich internet applications’, the key word being ‘applications’, are those that most effectively simulate the desktop application user experience. This does not, however, mean that RIAs should be limited to the functionality that a desktop application can provide. The World Wide Web offers far greater scope in terms of storage, processing, scalability, accessibility and social interaction, features which should be embraced in the creation of rich internet applications and can only serve to augment the user experience.

Conclusion

In this blog post I have discussed, in very simplistic terms, what, in my opinion, makes an RIA. It isn’t the inclusion of media-heavy content, or the ability to load content without re-loading the whole page. It is the ability of a website to simulate a desktop user experience, effectively allowing the user to easily replace any desktop application with a browser-based clone.

In the context of modern rich internet applications the browser should be seen, not as a way of ‘browsing the internet’, but as a shell that provides a user with access to the applications which they use every day. The web browser is the operating system of the RIA world.

Check back soon.


Moving To Typo

As you may or may not be aware, digital:pardoe has, for the past 4 years, been running atop a custom blogging engine that I developed as a way to learn Ruby on Rails. Whilst the system has (nearly) always been stable and (nearly) always fast I felt it was time to retire it, from everyday use at least.

When using the digital:pardoe blogging engine the ‘blogging’ always felt secondary to the actual development of the blog. I always found myself doing far more of the ‘adding new features’ than the ‘adding new posts’ which, at least in recent months, is not what I intended to happen.

Unfortunately, the loss of my bespoke blogging engine also means a loss of some of the bespoke features I added to the website. The downloads (previously ‘software’) area is now very cut down – everything is still available as before though. The ‘photo’ section has now disappeared completely, if you want to see my photographs you’ll have to visit my Flickr account instead. I’ve made every effort to redirect old pages to their new location but if you find a page that is missing please contact me so I can fix the problem.

In the near future I intend to release the digital:pardoe blogging engine source code (once I’ve cleaned it up of course) as it may be a useful reference to other new RoR developers. Don’t expect the default Typo theme to stick around for long either, I’m currently in the process of porting a digital:pardoe theme to Typo.

And, if you hadn’t guessed already, digital:pardoe is now Typo powered.

Check back soon.


‘Merge All Windows’ AppleScript

A friend reminded me of how nice it would be to merge all the windows in Safari back together using LaunchBar (or QuickSilver for that matter). So I wrote a quick AppleScript to accomplish the task:

-- Check whether GUI scripting is enabled, prompting the user to
-- enable it if not
on gui_scripting_status()
  tell application "System Events"
    set ui_enabled to UI elements enabled
  end tell
  if ui_enabled is false then
    tell application "System Preferences"
      activate
      set current pane to pane id "com.apple.preference.universalaccess"
      display dialog "The GUI scripting architecture of Mac OS X is currently disabled." & return & return & "To activate GUI Scripting select the checkbox \"Enable access for assistive devices\" in the Universal Access preference pane." with icon 1 buttons {"Okay"} default button 1
    end tell
  end if
  return ui_enabled
end gui_scripting_status

-- Bring the named application to the front & click the given menu
-- item via System Events
on click_menu(app_name, menu_name, menu_item)
  try
    tell application app_name
      activate
    end tell
    tell application "System Events"
      click menu item menu_item of menu menu_name of menu bar 1 of process app_name
    end tell
    return true
  on error error_message
    return false
  end try
end click_menu

if gui_scripting_status() then
  click_menu("Safari", "Window", "Merge All Windows")
end if

The ‘gui_scripting_status()’ routine is taken and modified from code that can be found here: http://www.macosxautomation.com/applescript/uiscripting/index.html.
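
To make the script easy to trigger from LaunchBar or QuickSilver, compile it and drop it somewhere they index, such as ~/Library/Scripts. Assuming the source is saved as merge_all_windows.applescript, something like the following should do it:

mkdir -p ~/Library/Scripts
osacompile -o ~/Library/Scripts/"Merge All Windows.scpt" merge_all_windows.applescript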

Check back soon.
