Computers, Linux, Open-source, Ruby, Software, Web

Migrating from Bamboo to Cedar

Heroku recently sent me a nice email:

You have one or more applications deployed to Heroku’s Bamboo stack. Bamboo went live almost four years ago, and the time has come for us to retire it.

You may have received a previous message about Bamboo deprecation. That message may have had an erroneous app list, but this email has the correct list of Bamboo apps that you own or are collaborated on. We’re sorry for any confusion.

On June 16th 2015, one year from now, we will shut down the Bamboo stack. The following app/apps you own currently use the Bamboo stack and must be migrated:

This is on the heels of an email about ending legacy routing support. It seems they have been quite busy over at Heroku, but I can’t complain too seriously about free app hosting.

Upgrading a legacy Rails application to the Cedar stack did require a few changes. I’ll document some stumbling blocks for posterity.


The first big change, described in Heroku's general-purpose upgrading article, was the use of foreman to manage your web services. I luckily have a simple app, and did not need to worry about resque, mailers, etc. This made my Procfile rather straightforward:

web: bundle exec unicorn -p $PORT -E $RACK_ENV -c config/unicorn.rb

Absent from the foreman crash course is any information about the corresponding .env file. I did need to add a few environment variables to keep working in development as usual:
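For reference, mine ended up along these lines (illustrative; the exact variables are app-specific, but PORT and RACK_ENV are the ones the Procfile above interpolates):

```shell
# .env -- read by foreman in development; keep it out of version control
RACK_ENV=development
PORT=3000
```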


Once the Procfile was committed (and the .env file added to .gitignore), I then added Unicorn to the Gemfile. I'm not sure if Unicorn is strictly necessary over WEBrick in development; however, this is the example in the Cedar upgrade guide, so I wanted to run as close to production as practical to prevent any surprises.

After installing Unicorn, I then needed to touch config/unicorn.rb since it did not exist. Again, it's in the example, but I'm not sure if it's strictly necessary, especially given that it's just an empty file for me. To start your Rails application, you now issue foreman start instead of the older rails s.

Devise incompatibility

Not directly related to the Cedar changes, but Devise is a common gem, so it is worth mentioning: Devise 2.x removed the migration helpers. I stupidly didn't lock my version of Devise in my Gemfile, so I was confused about why this was failing for me. I found the last 1.x version of Devise by running:

gem list devise --remote --all

and then specifying '1.5.3' as the second argument on the devise entry in my Gemfile.
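Pinned in the Gemfile, that looks like:

```ruby
# Gemfile
gem 'devise', '1.5.3'  # last 1.x release; 2.x removed the migration helpers
```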


Heroku requires Postgres in production, and I was previously using SQLite in my development environment. Again, to mirror production, I wanted to use Postgres in my development environment so that I could be as close to the production setup as practical. I took a quick trip down memory lane to set up and configure PostgreSQL on my Linux development machine (see my earlier post on setting up PostgreSQL with Rails on Linux). An interesting observation: in Ubuntu 14.04 LTS, PostgreSQL ran on port 5433 for me, and not the default 5432. This can be verified with netstat -nlp | grep 5432. If no entries come back, PostgreSQL is running on a non-default port; try grepping for 5433, or other numbers, until you find the process.
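If your local PostgreSQL lands on a non-default port like this, you can tell Rails about it explicitly in config/database.yml (myapp here is a placeholder for your project name):

```yaml
# config/database.yml
development:
  adapter: postgresql
  database: myapp_development
  port: 5433   # only needed when Postgres is not listening on the default 5432
```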

Backing up your database

Before I made any server changes, I wanted to have an up-to-date copy of the production database on my machine, to prepare for the worst. This can be done by using the pgbackups commands:

heroku pgbackups:capture   # create a backup
heroku pgbackups:url       # make a public URL for that backup

Then I clicked the public URL and downloaded the file – safe and sound to my computer. By the way, you can import this production database into your local instance of PostgreSQL using the following command:

pg_restore --verbose --clean --no-acl --no-owner -d development /path/to/database

Pre-provisioning your database

A quick note on database availability. I got into a chicken-and-egg scenario where the app wouldn't deploy without the database, and the database seemingly couldn't be created without the app first being deployed. Heroku has an article on pre-provisioning, and I found it a necessary prerequisite to deploying to my newly created test Cedar stack.

To pre-provision your database, run the following:

heroku addons:add heroku-postgresql

You can even import the database from production, etc., as part of the heroku utility. I used the public database URL I created above to populate my new Cedar stack database:

heroku pgbackups:restore

Migrating to Cedar

Once I had tested on a new app, and in a feature branch, I had confidence everything was working as expected. I was ready to make my changes to the production app by migrating it to Cedar. To do this, the command is:

heroku stack:migrate cedar

I either have an old version of the Heroku gem, or this is a discrepancy between the gem and the non-gem packaging, but the docs misidentify this command as heroku set:stack cedar, which was not a valid command for me. The migrate command above appears to be analogous.

Once I merged my Cedar feature branch back into master, I was ready to push. And FAIL. It turns out that I needed to precompile my assets, which had a dependency on the database existing. I tried to pre-provision as I had done on my Cedar branch; however, the results were the same after running this command.

A quick search yielded the advice to add the following line to the config/application.rb file:

config.assets.initialize_on_precompile = false


I’ve learned quite a bit about Heroku in this upgrade experience. Their changes force me to use the latest software which is nice in some ways. When everything is running on my website, I don’t often worry about upgrading until I get an email like the one above.

The downside, of course, is that this upgrade process is a pain in the ass, is error-prone, and affects production websites that are running smoothly. If it isn't broken, you don't want to fix it. Except this time you have to, in order for it to continue to function after June 2015.

Best of luck to other people upgrading. Just be patient, and test everything in a new app if you have any doubts.

Computers, Linux, Open-source, Ruby, Software, Web

Setup PostgreSQL with Rails on Linux

Today, I found myself needing to set up a Rails application to work with the PostgreSQL database. I found that the documentation on the PostgreSQL website was like drinking from a fire hose. Worse, every community response to an error message has a slightly different approach to the solution. Let's run through a basic Rails PostgreSQL configuration, assuming Rails 3, PostgreSQL 8.x, and Ubuntu 11.04:

Step 1: Installing PostgreSQL and Libraries

Install the PostgreSQL server, the client package to connect (psql), and the libpq development library needed to compile the Ruby PostgreSQL driver:

$ sudo apt-get install postgresql postgresql-client libpq-dev

After this finishes installing, you can turn to your OS X co-worker and laugh at him while he is still downloading the first tarball. PostgreSQL will start automatically, running under the user postgres. You can verify that the installation is a success by using the psql command-line utility to connect as the user postgres. This can be accomplished using the following command:

$ sudo -u postgres psql

This uses sudo to elevate your basic user privileges, and the “-u” switch will execute the following command as an alternate user. As the postgres user, this will run psql. If you connect successfully, you should be at the psql interactive prompt. If not, ensure PostgreSQL is running, and that psql is in the path for postgres.

Note: From the psql interactive prompt, type "\q" to exit.

Step 2: Configure a New PostgreSQL database

From the psql prompt, you can run SQL to view the current PostgreSQL users:

select * from pg_user;

You should see a table of database users returned:

 usename  | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil | useconfig
----------+----------+-------------+----------+-----------+----------+----------+-----------
 postgres |       10 | t           | t        | t         | ******** |          |
(1 row)

We can see the postgres user that was created automatically during the installation of PostgreSQL. Let's add another user to be an owner for our Rails database. The path of least resistance may be to use your shell account username, since it will keep us from having to change some options in the database configuration file.

$ sudo -u postgres createuser 
# Shall the new role be a superuser? (y/n) n
# Shall the new role be allowed to create databases? (y/n) y
# Shall the new role be allowed to create more new roles? (y/n) n

This will create a new database user (named after your shell account) and grant that user access to log in to the database. The command asks a few questions about the account; it is important for Rails that you answer "y" to whether the user should be able to create databases. If you say no, you will not be able to run any rake tasks that create or drop the database.

We can confirm by selecting from the pg_user table again.

$ sudo -u postgres psql
select * from pg_user;
 usename    | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
 postgres   |       10 | t           | t        | t         | ******** |          |
 <username> |    16391 | f           | f        | f         | ******** |          |
(2 rows)

Step 3: Configure Rails

Switching to the Rails side, let's configure our application for Postgres. This requires the pg gem. Open your Gemfile and append:

# Gemfile
gem "pg"

Now run bundle install to update your project gems.

$ bundle install

This should compile the Ruby pg database driver, allowing Ruby to talk to Postgres. Now, let's tell our Rails application how to access our database. Open up config/database.yml and change the adapter line to read "postgresql". The database name by convention is the name of your project plus "_development". Finally, your shell username is needed; because PostgreSQL will authenticate this account locally, you will not need to supply a password option, so delete that line.

# config/database.yml
development:
  adapter: postgresql
  encoding: unicode
  database: _development
  pool: 5

To test, run the rake task to create your database:

rake db:create

If everything works, you should have a newly created database owned by your shell account. You can login using psql by passing the name of the database as an option:

$ psql -d _development

Happy migrating!


If you get the error: “FATAL: Ident authentication failed for user “, ensure that you can see your newly created account in the pg_user table of the postgres database. (See Step 2 above)

If you get the error: “PGError: ERROR: permission denied to create database”, then ensure that your database user account has been granted the privilege CREATE. This can be done during the “createuser” command line account creation by answering “y” to the corresponding question about this permission.

If you get the error: “FATAL: role is not permitted to log in”, try manually granting the privilege to login on your database user account. This can be done by executing the following as postgres in the psql prompt:
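Assuming your role is named myuser (a placeholder for your actual account name), the standard SQL for this is:

```sql
-- Grant the LOGIN privilege to an existing role
ALTER ROLE myuser LOGIN;
```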


Notes on Alternative Authentications

PostgreSQL integrates very deeply into the Linux authentication world, allowing for quite an array of connection options. By default, passwords are not accepted for local connections; instead, PostgreSQL is configured to use the "ident sameuser" method of user authentication. See the PostgreSQL client authentication documentation (pg_hba.conf) for more.

Events, Family, Personal, Thoughts

My House By the Sea

Last week I received an email from a recruiter looking to staff a Ruby on Rails development shop in Warren, RI. The job is 20 minutes away from Roger Williams Zoo (my wife's future employer), and in a quiet, cheaper suburb right in the Bristol bay area. For the first house my wife showed me, we mapped out directions on Google, and the commute time was 25 seconds! This is a smaller salary, but I was weighing the trade-offs of salary with quality of life. I would drastically reduce my commute, get to take my dogs into work, and work on projects that I feel I would be a better match for. For every reason to go, I could find an equally compelling reason not to go.

I went in for an interview, and got a job offer. Inside of a week, I talked to my wife, my best friend, my parents, and myself. I wore myself down with little sleep; what sleep I did get was restless, and the thought of switching consumed my every waking thought. Some folks at work lived in the area where this new employer was, and over the course of a few meals I learned enough of their past experiences that I became convinced that this was where I wanted to be. I confided in someone from work whom I consider very wise, and he took humor in the fact that this was such a struggle for me. I simply couldn't see the forest for the trees.

After assuring me that I would indeed take the job, and that I would love it, I went home that night on the train. From the time I got on the train until the taxi ride to my front doorstep, I noticed that my health had taken a turn for the worse. I had a heavy cough, I was sore, and I was freezing cold. After an unsatisfying dinner, I went into a sick, restless sleep. In the dreams I had (no doubt fueled by various crossings of medications) I saw my life staying at my current job, and my life at my new employer. I slept for what felt like days. When I woke up, backed with confidence by my wife and my co-worker confidant, I knew what the right decision was.

I had just experienced a spiritual journey!

For those of you that are my current co-workers, I will genuinely miss you. It has been a hell of a good time at Beacon, and I feel lucky to have worked with such a great team. I hope that I got to share some of my Southernisms in your lives, as much as you have given me a perspective on the customs of New England. I will remember the lunches that I stuffed myself over, the great Android/iPhone debates, the pool games that I miserably lost, the new terms like 'Ghabo' that are drilled into my vocabulary, new nautical terms learned navigating the Charles River, and tossing around the old pig skin. In the end, the commute and the type of work were just too much for me to bear. I made it almost a full year, but it is time for me to move on and find my personal happiness. You are all invited down to the new house (wherever it may be). I know Kyle and Louis are in, since it's their old stomping ground. Fisher and Josh are always up for a good time. Hoydis loves sailing, which Newport has plenty of. And lo and behold, an On The Border exists in Warwick, so I know Jeremy, the wife, and kids are in. I wish I had more resolve to stay, but I know when it's time to fold and go get my house by the sea.

Until we meet again!

Computers, Ruby, Software, Web

Rails and SOAP – A Dearth of Information

Scrubbing Bubble

I am writing this post both to vocalise the success of my first encounter with SOAP in Rails, and to explain the process. For anyone working on a Rails application with SOAP for the first time, it's probably become apparent that there is little documentation out there for first-time users. Read on to learn more about Rails and SOAP, and hopefully gather enough to get your application up and running.

Introduction to SOAP

I will assume that if you are still with me, then you have some experience with Rails. I mean, we do have to start somewhere. I always viewed SOAP as one of the legacies that Java has left on this world. And by legacy, I mean like how Herpes never really goes away. SOAP used to stand for Simple Object Access Protocol, which in English means absolutely nothing more than being a clever acronym. SOAP in layman’s terms is a way that two bases of code can communicate with each other regardless of the Operating System, language, or anything else proprietary. SOAP also allows outside users to make changes to your application while enforcing a level of security.

In SOAP implementations, there are two parts: the client and the server. The client connects to the server either to query for information, or to request changes to that information. The methods that are available for a client to use are entirely up to the server's definition. This definition is a very real file named the WSDL. Yeah, that's right: another clever acronym. WSDL stands for Web Services Description Language. This is where methods, their documentation, and what parameters they take in and spit out are recorded. This file is plain-Jane XML, so you can view it on your own if you so desire.
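To make this concrete, here is a toy, hypothetical WSDL fragment (namespaces omitted for brevity) and a few lines of stock-library Ruby showing that the operations a client may call are just elements in that XML:

```ruby
require 'rexml/document'

# A toy, hypothetical WSDL fragment: just enough structure to show that
# the operations a client can call are ordinary XML elements.
wsdl = <<XML
<definitions>
  <portType name="MailboxPortType">
    <operation name="getUnread"/>
    <operation name="markRead"/>
  </portType>
</definitions>
XML

doc = REXML::Document.new(wsdl)
# Collect the name attribute of every <operation> element
ops = REXML::XPath.match(doc, "//operation").map { |op| op.attributes["name"] }
puts ops.inspect  # => ["getUnread", "markRead"]
```

A real WSDL layers namespaces, type definitions, and bindings on top of this skeleton, which is why we lean on tools to parse it for us.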

A little trick that I have seen several SOAP server implementations pull is securing this WSDL file from the outside world. Typically this is via Basic Authentication, which we will need to handle with Ruby when connecting. Regardless of the security implementation, we will need to have access to this WSDL file for our Rails application to consume. More on this in a minute.

Speaking of consumption, if you glanced at the WSDL file, you probably had a hard time swallowing it. Those of you familiar with Java will quickly recognize this as a signature feature of the language in general. Simply, the WSDL file was never really intended to be made sense of by mere humans. Most languages have a SOAP parser that will read this XML, and construct classes and methods in its native code to make life a bit easier. For example, Microsoft Visual Studio allows a user to “Add a web service” which will read the file, and generate C# or VB.NET code. Aren’t they smart cookies?

Rails and SOAP Together

Rails is no stranger to the SOAP game, and offers its own way to use web services. To start with, you will need to acquire a few files. Specifically, you will need the wsdl2ruby script, which is part of soap4r. This can be obtained from their website, but downloading stuff directly is a bit old school nowadays. Instead, we will use the gem package manager to fetch this file and its dependencies. From a terminal, issue:

gem install soap4r

This will fetch the gem and its dependencies. Now, you should have a utility named "wsdl2ruby" accessible from your terminal. An important note: a version of soap4r comes bundled with Ruby, but it is much older, so you will need to be explicit about which version you want to use. This means that in IRB, or in your Rails model, you need to place gem 'soap4r' at the top of your files. This loads the new soap4r gem instead of using the bundled soap4r code.

There are two main ways in which to use wsdl2ruby in your Rails application. The first is calling in the library to parse SOAP files (via “require”), and generating your methods on-the-fly. This is a really good approach for keeping your code as terse as possible, however it makes development a little slower, since the methods you can call are not readily apparent. The second method calls wsdl2ruby manually from a terminal, and generates a driver file, and example code for how to use it in your application. I recommend doing both, the former for your actual application, and the latter for reference during development. After you are finished developing, you can simply delete the generated files without any side-effects.

Generating Client Files for Web Service

The usage syntax for wsdl2ruby is a little ugly, but here is the simplified version of what it can do:

Usage: /ruby/bin/wsdl2ruby.rb --wsdl wsdl_location [options]
  wsdl_location: filename or URL

  For client side:
    /ruby/bin/wsdl2ruby.rb --wsdl myapp.wsdl --type client

  --wsdl wsdl_location
  --type server|client

You can see from the usage above, that we can run wsdl2ruby against a WSDL file location, and have it generate the classes and methods needed for our rails application. An example might be (substituting your own WSDL file of course):

wsdl2ruby --wsdl --type client --force

The "--wsdl" argument indicates that we want to generate files based on a WSDL definition. The "--type client" argument indicates that we want to generate a "driver" file, as well as a sample client library showing how one might use the driver. The "--force" argument makes the parsing of the WSDL a little more robust when building the Ruby code. When you run this, it will create 4 files in your current working directory. Of these, the file we are most concerned with is the "xxxClient.rb" file; this is your sample code to use as a reference.

Take a gander at this file, and you will see something similar to the following:

#!/usr/bin/env ruby
require 'defaultDriver.rb'

endpoint_url = ARGV.shift
obj =

# run ruby with -d to see SOAP wiredumps.
obj.wiredump_dev = STDERR if $DEBUG

#   getUnread(parameters)
#   parameters      GetUnread - {}getUnread
#   parameters      GetUnreadResponse - {}getUnreadResponse
parameters = nil
puts obj.getUnread(parameters)

The name of my Web Service is getMBXUnread. Each method available for use is shown by the synopsis section. For example, my first method shown here is the "getUnread" method. I can see that it takes a parameter (which parameter is mysteriously omitted, but we can find it easily enough), and returns a value in the property "getUnreadResult" when called. To find out the name of the parameter it is expecting, look at the default.rb file. In my example, I see the following class (named the same as my method in the synopsis above):

require 'xsd/qname'

# {}getUnread
#   strUsername - SOAP::SOAPString
class GetUnread
  attr_accessor :strUsername

  def initialize(strUsername = nil)
    @strUsername = strUsername
  end
end

I can now see that this method takes a value for strUsername upon creation. This means that I can say "…getUnread(:strUsername => 'username')" in my Rails application. This is what we need to get started programming.

SOAP in your Rails Application

Open up one of your Rails models and get ready to roll up your sleeves. At the top, we will need to require a file to be loaded to make the SOAP functions accessible from the model class. Place the following at the very top of the model:

require 'soap/wsdlDriver'

Now, you can create a new method, and reference your WSDL resource we were playing with earlier. I will need to initialize my SOAP client like so:

soap_client = SOAP::WSDLDriverFactory.new("http://example.com/service?wsdl").create_rpc_driver

The URL that I have as a parameter will need to reflect your WSDL location. The “create_rpc_driver” method will instruct Ruby to read this resource, and construct the classes we already saw earlier. The only difference is now it is doing it on-the-fly instead of creating files in the working directory. Now that our client is ready, we can query our SOAP server for information:

soap_client.getUnread(:strUsername => username).getUnreadResult

I called the "getUnread" method on my client, passing in the parameter it was expecting (in hash format) from earlier; this is still available to reference in your "xxxClient.rb" file if you want to refer back to it. At this point I have a SOAP response and, barring any errors connecting, I should be able to retrieve the "getUnreadResult" property of this request.

Basic Authentication

Earlier, I mentioned that some SOAP server implementations may make use of Basic Authentication (prompting for a username / password) before allowing access to the WSDL file. This is handled by soap4r (and thus wsdl2ruby) by reading a property file on startup. You will need to create this property file, named "property", inside a folder named "soap" somewhere on your application's load path. For IRB, the load path is configured using the "-I" switch followed by a directory. For example, if I am working out of "C:\", then I could create the file "C:\soap\property", and load this file in IRB by issuing "irb -I 'C:\'". In Rails, you can place this soap/property file in a number of locations, but I would recommend sticking it inside the vendor folder.

Inside this property file, we can associate URLs with credentials. Here is an example property file listing:

client.protocol.http.basic_auth.1.url = http://example/path/to/wsdl
client.protocol.http.basic_auth.1.userid = username
client.protocol.http.basic_auth.1.password = PaSsWoRd

Note that this file does not have an “.rb” extension – that is because this is a property file and is not valid Ruby syntax! You can also create multiple basic authorization definitions by incrementing the group number. The first group shown above is “1”.

If you are working with SSL, then check out this article. You can use the same property file above, with some added settings.


SOAP isn't easy. In fact, I read in the Enterprise Integration with Ruby book that SOAP now doesn't stand for Simple Object Access Protocol, because it is no longer simple; SOAP just means SOAP. As the complexity of your WSDL file increases, so do the odds of wsdl2ruby choking on it. A lot of this has to do with mapping classes to equivalent Ruby objects. wsdl2ruby has made remarkable ground, but it still isn't perfect.

If you get stuck, try generating the client files, or viewing the "methods" of your soap_client inside of a Rails console, or IRB. The help out there isn't great, but try checking out the soap4r API documentation, upon which wsdl2ruby is based.

The book Enterprise Integration with Ruby has about five pages talking about wsdl2ruby in particular, and an entire chapter talking about RPC calls with Ruby, including how to create your own SOAP server.

Good luck!

Computers, Ruby, Software, Web

Adventures with Rails: Part Deux

The L****** management website continues to progress. I hope that I can finish it in time to present it at Georgia Summit this year.

The website periodically polls for active users in the L****** portal via AJAX:

All of the links shown work, as the website bootstraps "cptool". This means that people can view near-realtime views of all users, enable accounts, reset passwords, etc. I see this page being useful for the helpdesk.

Also, you can view user information by selecting an account. I wrote a wrapper for the data that comes back, so this is accessible anywhere in the project:

Groups can also be viewed, complete with memberships, and adding / removing capabilities. Also, I have used the will_paginate plugin to make large data sets manageable:

Coming up will be the modular permissions management, and wizards for creating extracts from Banner and importing them into L******. Also, a Targeted Announcement wizard (which doesn’t suck) will be included that will allow for easy population selections from Banner and migrations into the portal.

After this point, I hope to have the website go in a limited live beta state, so that people can begin using the wizards, and the helpdesk can begin checking on accounts while I finish developing the rest.

Computers, Linux, Open-source, Ruby, Software, Web

How to setup a Hoard of Mongrels

Update: Forget this, and check out mod_rails instead

My wife recently commented to me that her website takes quite a while to load. This website is powered by the Ruby on Rails framework, and was served via Mongrel behind an Apache proxy. This single instance of Mongrel did the job, but not very well; it took anywhere between 10 and 15 seconds for the URL to resolve and begin rendering. After I set up 3 Mongrel servers, this time was cut down to around a second.

I did some research, and quickly found that the power of Mongrel is its ability to parse and serve dynamic languages with a small memory footprint (each instance of Mongrel consumes anywhere from 15MB to 40MB). The problem, however, is that where Apache can accept multiple connections and simultaneously process requests in parallel, Mongrel cannot. This means sequential access for each requested file inside of your Rails application. If you have 20 images, 3 CSS stylesheets, and 5 JavaScript files, you could be looking at quite a delay.

The de-facto solution, then, is to run multiple instances of Mongrel (called a cluster) and use Apache to intelligently route the requests based on each instance's load. This way you achieve simultaneous connections with parallel processing inside of your Rails application. I was off to Google to learn how to set up this environment when I was confronted by problem #1: there isn't much on the subject floating around out there. Problem #2: what I did find was geared towards Capistrano (another layer of complexity I was hoping to avoid). Problem #3: documentation for Ubuntu is even more scarce.
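The balancing idea can be sketched in a few lines of Ruby (illustrative only; mod_proxy_balancer actually weighs requests per member, and the ports are the ones a three-Mongrel cluster uses later in this guide):

```ruby
# Illustrative sketch of what the Apache balancer does for a Mongrel cluster:
# each incoming request is handed to the next single-threaded Mongrel,
# so slow requests no longer queue up behind one process.
ports = [8000, 8001, 8002].cycle   # the three Mongrel listeners

dispatched = 7.times.map { ports.next }
puts dispatched.inspect  # => [8000, 8001, 8002, 8000, 8001, 8002, 8000]
```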

I pieced together information from various sources, and came up with a working solution. Read below to replicate this at home. (Keep in mind that this guide is geared towards someone who already has one instance of Mongrel running their Rails application; for more information on setting up just a plain old instance of Mongrel, read this article.)

My setup:

  • Ubuntu 7.10 (any *nix distribution should do – but your files may be located in different places)
  • Apache 2.2 with mod_proxy, mod_proxy_balancer, mod_rewrite
  • Ruby on Rails 1.8.4 and a working project ready to go live

I started with getting what I needed (it is possible that the only thing really needed is mongrel and mongrel_cluster):

sudo gem install daemons gem_plugin mongrel mongrel_cluster --include-dependencies

Next, let's make sure that the modules we need in Apache2 are enabled (Ubuntu style):

sudo a2enmod rewrite ; sudo a2enmod proxy ; sudo a2enmod proxy_balancer

I then started my project by navigating to my project's root directory and issuing:

mongrel_rails start

This started my project (using Mongrel and port 3000), and was my proof of life before I started mucking around with all kinds of code. I connected to the URL to confirm that it worked. Next, let's generate our mongrel_cluster configuration (it's pretty straightforward):

mongrel_rails cluster::configure -e production -p 8000 -a -N 3

This should return something like "mongrel_cluster.yml configuration created inside ./config/". The directives above are the same as when starting WEBrick, or regular Mongrel. The "-e" switch is for your environment (development, production, etc). The "-p" switch specifies the first port to bind; ports are bound sequentially based on the number of server instances (the "-N" switch) you specify, so in this case 8000, 8001, and 8002 will be used by Mongrel. And finally, the "-a" switch locks down the Mongrel servers to only listen on the localhost address. This means only the machine this is running on can access these resources.
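For the curious, the generated config/mongrel_cluster.yml is plain YAML; mine looked something like this (paths are placeholders, and exact keys may vary by mongrel_cluster version):

```yaml
# config/mongrel_cluster.yml (illustrative)
cwd: /var/rails/myapp
environment: production
port: "8000"
address:
servers: 3
pid_file: tmp/pids/mongrel.pid
```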

Now that our configuration file has been generated, we can test our progress. Run the following to start the clusters:

mongrel_rails cluster::start

You should see output detailing the servers starting up, and the ports they have bound to. For a full list of options, just run "mongrel_rails"; it's very similar to other "init" scripts in *nix. Verify that these instances are running by connecting to the ports manually on your machine, using something like "lynx" with the URL http://localhost:8000 (and 8001, 8002). All should connect for you at this point.

Now comes the hard(er) part: configuring Apache. We need to create a VirtualHost directive inside of the "/etc/apache2/sites-available/default" file. For many distributions, this file will be "httpd.conf". Inside this file, create something like the following (example.com is a placeholder for your domain, and myapp for your Rails project):

<VirtualHost *:80>
  ServerName www.example.com
  DocumentRoot /var/rails/myapp/public

  <Directory "/var/rails/myapp/public">
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>

  RewriteEngine On

  # Make sure people go to www.example.com, not example.com
  RewriteCond %{HTTP_HOST} ^example.com$ [NC]
  RewriteRule ^(.*)$ http://www.example.com$1 [R=301,L]

  # Rewrite index to check for static
  RewriteRule ^/$ /index.html [QSA]

  # Rewrite to check for Rails cached page
  RewriteRule ^([^.]+)$ $1.html [QSA]

  <Proxy *>
    Order Allow,Deny
    Allow from all
  </Proxy>

  # Redirect all non-static requests to cluster
  RewriteRule ^/(.*)$ balancer://mongrel_cluster%{REQUEST_URI} [P,QSA,L]

  # Deflate
  AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css
  BrowserMatch ^Mozilla/4 gzip-only-text/html
  BrowserMatch ^Mozilla/4\.0[678] no-gzip
  BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
</VirtualHost>

Again, this all goes inside a VirtualHost container. You will need to replace the occurrences of example.com with your actual DNS name, and DocumentRoot should point to the full path of your Rails project's public directory. This does a few checks and then proxies your request to our (as of yet unwritten) proxy balancer.

Something important that I do not see a lot of mention of is the security configuration needed to use a proxy. Note the section <Proxy *>…</Proxy> above: if you do not put this inside your VirtualHost, you will receive an error 403: Access Forbidden when you attempt to connect.

Next, outside of the content of our VirtualHost container, we will need to create our proxy balancer. Basically, we give this an arbitrary name (which is already referenced in our VirtualHost above) of "mongrel_cluster". Paste the code below underneath your closing tag for VirtualHost:

<Proxy balancer://mongrel_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
  Order Allow,Deny
  Allow from all
</Proxy>
It is important to note that the address for each balancer member must be your internal loopback address (, or localhost). Originally, I was using name-based virtual hosts, and assumed that the members would need the DNS name to correctly resolve. However, that creates an outbound request, and so fails because Mongrel is only listening for local connections.

Again for security, I stuck in the "Order Allow,Deny" and "Allow from all" directives to allow public access to this resource. After this, save your file and issue a restart command to Apache2:

sudo /etc/init.d/apache2 force-reload

With luck, you will be served the product of your hard-working hoard of Mongrel servers.