Computers, Open-source, Software

Assign Pull Requests with Hubot

ChatOps was a term our DevOps person shared with me. It's the concept of automating everything you can through your company chat client. This could include monitoring, deploying, or just having fun (/pugbomb, anyone?).

We implemented a pull request policy for all new code going to master. A developer opens a pull request on Github for the feature branch they want to merge into master. One of the biggest benefits of opening a pull request is the peer review: at least one other developer reviews the code before it is merged into master. Fixing issues at this point in the development cycle is relatively inexpensive compared to having the code fail QA or BA sign off, or, worst case, causing a production incident.

Who reviews the pull requests? In the interest of fairness, it shouldn't be a single developer that carries that burden. Pull requests can be a fantastic tool for transferring technical and domain knowledge, so it's best if everyone can participate. A developer might never have had occasion to work on the part of the codebase that contains the changes. For these reasons, we decided to do a round robin assignment of PRs to the developers. And yes, we fixed the bug where you get assigned your own PR 🙂

To accomplish this we utilized an open source project called Hubot that is maintained by Github and has integrations with Slack, as well as other chat clients. Hubot scripts are written in CoffeeScript, and the platform runs on Node.js. We enabled the integration between Github and Slack. A chat bot will publish a message on Slack when a pull request is opened. Hubot listens for these messages and then runs the code that you specify in response to the event. It sounds complex, but most of the heavy lifting is already done for you. You just listen for keywords, and take certain actions.

In our case, mentioning a developer via Slack to say he or she has been assigned to a PR was sufficient. We created a new Slack channel, set up a webhook, and wrote a few lines of CoffeeScript to do the round robin assignment. It looks like this in Slack:

[Screenshot: Hubot announcing a pull request assignment in Slack]

HubotSlack = require 'hubot-slack'

module.exports = (robot) ->

Require the hubot-slack module. Inside this function is where we will do our work.

# { <github username>: <slack username> }
assignees =
  'bsimpson': 'ben.simpson'

getAssignee = ->
  index = robot.brain.get('assignee_index') || 0
  assignee_key = Object.keys(assignees)[index]
  assignee = assignees[assignee_key]

  # Advance the round robin, wrapping back to the start of the list
  if index >= Object.keys(assignees).length - 1
    index = 0
  else
    index += 1

  robot.brain.set('assignee_index', index)
  assignee

We list our assignees in a hash and then use robot.brain to store where we left off. This allows us to restart our client without losing our place in the round robin rotation. You will notice that the hash maps Github usernames to Slack usernames, to cover the case where the two accounts differ.

regex = /Pull request submitted:.* by (.*)/i

robot.listeners.push new HubotSlack.SlackBotListener robot, regex, (msg) ->
  author = msg.match[1]
  assignee = getAssignee()
  # Skip over the author so nobody is assigned their own pull request
  assignee = getAssignee() until assignee != assignees[author]

  msg.send "<@#{assignee}> you've been assigned to #{author}'s pull request."

This is the binding we make to listen to Github bot messages. We can’t use robot.listen since this only works for human messages. Since we are talking bot to bot, we have to utilize the SlackBotListener.

Now that we have our code in place, we can create a new Slack webhook. Inside your Slack settings click “Apps and Integrations” > “Manage” > “Incoming Webhooks”. Add a new one for the channel you want Hubot to post to and get the token. This will be needed when we start Hubot.

HUBOT_SLACK_EXIT_ON_DISCONNECT=true HUBOT_SLACK_TOKEN=xxxxxxx ./bin/hubot --adapter slack & echo $! > /path/to/hubot/

This will install any dependencies and run Hubot. Substitute your token from earlier for HUBOT_SLACK_TOKEN. The pid tracking (the echo $! redirect) is important if you are using Monit to monitor the process.

Happy ChatOps-ing!


Hardware, Linux, Open-source, Ruby, Software, Uncategorized

Delayed Job Performance Tuning

We found a bug. A bug that affected a lot of historical records that we now have the pleasure of reprocessing. Fortunately we already had an async job infrastructure in place with the delayed job gem. Unfortunately, this gem is intended for fairly small batches of records and isn’t tuned to handle 1M+ records in the delayed_jobs table.

After reading some best practices we decided on a queue based approach. To keep our day to day async jobs running we would use a “default” queue. And to reprocess our old records we used a new queue. Starting the workers up with a "--queue" flag did the trick. We had hardware dedicated for day-to-day operations, and new hardware dedicated to our new queue operations. Now it was simply a matter of filling up the queue with the jobs to reprocess the records.
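For reference, enqueueing onto a named queue and pinning workers to it looks roughly like this (the job class and worker commands below are illustrative, not our actual setup):

# A classic delayed_job Struct-based job; RecordReprocessJob is a made-up name.
RecordReprocessJob = Struct.new(:record_id) do
  def perform
    # reprocess the record here
  end
end

# Push reprocessing work onto its own queue, leaving "default" untouched.
Delayed::Job.enqueue(RecordReprocessJob.new(42), queue: 'new_queue')

Workers are then started per queue, e.g. bin/delayed_job --queue=default start on the day-to-day hardware and bin/delayed_job --queue=new_queue start on the new hardware.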

Our initial approach maxed out the CPU on our database server. This was largely due to us not having tuned the SQL in our async jobs. Because the volume we processed had always been low, this was never really a noticeable problem. But when we threw lots of new jobs into the queues, it became very noticeable. The workers would start up, then mysteriously die. After some digging in /var/log/kern.log we discovered the workers were being killed by the kernel's out-of-memory (OOM) killer. Attaching a small swap partition helped, but once you've hit swap, things become horribly slow. What's the point in keeping the worker alive if it's running a thousand times slower? Clearly we needed to profile and tune. So we did. (The specifics are out of scope for this article, but it involved consolidating N+1 queries and limiting the columns returned by the SELECT.)
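The shape of those fixes is ordinary ActiveRecord tuning; the model and column names here are purely illustrative:

# Consolidate the N+1: load the association up front instead of querying it
# once per record inside the job.
records = Record.where(status: 'needs_reprocess').includes(:line_items)

# Limit the columns: when the job only needs a couple of values,
# skip loading full rows entirely.
ids_and_totals = Record.where(status: 'needs_reprocess').pluck(:id, :total_cents)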

With our newly tuned SQL our spirits were high as we cranked up the workers again, only to hit the next bottleneck. And this is where databases get frustrating. Each time a delayed job worker looks for a new job, it runs a query to find out which job to pick up, and it takes a mutex lock on the record by setting locked_at and locked_by. The query looks like this:

UPDATE delayed_jobs
SET `delayed_jobs`.`locked_at` = '2016-06-05 11:48:28', 
 `delayed_jobs`.`locked_by` = 'delayed_job.2 host:ip-10-203-174-216 pid:3226' 
WHERE ((run_at <= '2016-06-05 11:48:28' 
AND (locked_at IS NULL OR locked_at < '2016-06-05 07:48:28') OR locked_by = 'delayed_job.2 host:ip-10-203-174-216 pid:3226') 
AND failed_at IS NULL) 
ORDER BY priority ASC, run_at ASC 

The UPDATE does an ORDER which results in a filesort. Filesorts are typically something an index can resolve. So I optimistically added the following:

CREATE INDEX delayed_job_priority
ON delayed_jobs(priority,run_at);

Sadly, this index was completely ignored when I ran an EXPLAIN on my UPDATE. The reason is that MySQL doesn't execute an UPDATE query the same way it would a SELECT with the same conditions. The index probably made things worse: with each record update, we now also have an index update. I could fork the code and use some type of isolation level in a transaction to get the best of both worlds, with an index-based SELECT and a quick UPDATE on a single record by id. But there are easier solutions to try first.

My UPDATE statements were pushing 40 seconds in some cases according to MySQL. Eventually the lock wait timeout is exceeded and you see an error in the delayed_jobs.log:

Error while reserving job: Mysql2::Error: Lock wait timeout exceeded; 
try restarting transaction

Jobs were moving very slowly, and throwing more workers at it didn't improve things, because each time a worker picked up a job it waited 40+ seconds. The UPDATE was doing a filesort, and any index was being ignored (and MySQL doesn't support UPDATE hints). It was pretty clear that all of the jobs from the reprocessing queue needed to find a new home that didn't blow up the filesort. I settled on the following solution:

CREATE TABLE delayed_jobs_backup LIKE delayed_jobs;

INSERT INTO delayed_jobs_backup
SELECT * FROM delayed_jobs WHERE queue='new_queue';

DELETE FROM delayed_jobs WHERE queue='new_queue';

This creates a new database table with the structure of the existing delayed_jobs table. The table is then populated with the jobs that needed to find a new home (All 1M+ of them). And finally, deleted from the original delayed_jobs table. Be careful doing this, and do some SELECT/EXPLAIN queries in between to ensure you are doing what you think you are doing. (Deleting 1M+ records from a production database makes me sit up in my chair a bit).

Looking at MySQL's process list, I no longer saw long-running System locks on my UPDATE statements (presumably because the table size is now small enough that the filesort is mostly painless):

# Id, User, Host, db, Command, Time, State, Info
1, user,, localhost, Query, 0, System lock, UPDATE ...

The important columns here are Time (in seconds), State, and Info. This proves that my job locking was happening quickly; I was seeing Time values of 40+ seconds before. I kept referring back to this process list to verify that the UPDATEs remained snappy while I varied the number of workers running and the number of jobs in the queue. I had a goal of keeping the UPDATE system lock times under 2 seconds. Adding more workers pushed the times up. Adding more jobs to the queue pushed the times up. It's a balance that probably depends on what you are processing, how much your database can handle, and what your memory constraints are on your worker servers.

To conclude – my job over the next few days will be to run the following commands to put jobs back into the delayed_jobs table 10,000 at a time:

INSERT INTO delayed_jobs
SELECT * FROM delayed_jobs_backup LIMIT 10000;

DELETE FROM delayed_jobs_backup LIMIT 10000;

You can of course automate this. But my objective was met. The jobs can reprocess old records without impacting the day to day jobs in the default queue. When the delayed_jobs table is almost empty, I move over another batch of jobs from the delayed_jobs_backup table. Rinse and repeat until there are no more jobs left to process. It's a bit more painful, but day to day operations continue to function, and I can cross the reprocessing task off my list of things to do. All without any code changes!
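If I do get around to automating it, the sketch would look something like the following (the 1,000-row threshold and 60 second sleep are arbitrary choices, and the ORDER BY id keeps the INSERT and DELETE operating on the same rows):

BATCH_SIZE = 10_000
conn = ActiveRecord::Base.connection

until conn.select_value("SELECT COUNT(*) FROM delayed_jobs_backup").to_i.zero?
  # Only top the live table back up once the previous batch has mostly drained
  if conn.select_value("SELECT COUNT(*) FROM delayed_jobs").to_i < 1_000
    conn.execute("INSERT INTO delayed_jobs SELECT * FROM delayed_jobs_backup ORDER BY id LIMIT #{BATCH_SIZE}")
    conn.execute("DELETE FROM delayed_jobs_backup ORDER BY id LIMIT #{BATCH_SIZE}")
  end
  sleep 60
end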

I've been reading up on transaction isolation levels, thinking something like a SELECT FOR UPDATE lock might be a worthy contribution to the delayed_job codebase.
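This is not actual delayed_job code, just the shape of the idea in ActiveRecord terms (max_run_time and worker_name stand in for values the worker already knows):

max_run_time = 4.hours
worker_name  = "delayed_job.2 host:ip-10-203-174-216 pid:3226"

Delayed::Job.transaction do
  # An index-friendly SELECT ... FOR UPDATE picks the next runnable job...
  job = Delayed::Job.where(failed_at: nil)
                    .where("run_at <= ?", Time.now)
                    .where("locked_at IS NULL OR locked_at < ?", Time.now - max_run_time)
                    .order(:priority, :run_at)
                    .lock(true)
                    .first

  # ...and the lock itself becomes a cheap UPDATE on a single record by id.
  job.update_columns(locked_at: Time.now, locked_by: worker_name) if job
end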

Computers, Open-source, Ruby

Rspec Bisect

During a journey to upgrade to Rails 4, the test suite was crucial for success. I ran the test suite and recorded all of the test failures. Then I went through test by test fixing them until I was all done. Barely containing my excitement I pushed to remote and waited. Sadly, our CI server reported a build failure. What the heck?! The tests all passed in isolation. Why do they not pass when run together? Well…

The mark of a good test is that it can be run in any order and still pass. In order to accomplish this, we need each test (or the suite at large) to be responsible for setting up its own data, and tearing that data down when it's finished. You want to leave the state of everything exactly how you found it before the test was run. This can include clearing caches, truncating database tables, or rolling back transactional changes. Many test suites, including Rspec, can do these things for you without much headache.
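As a concrete example, here is the sort of per-example cleanup rspec-rails offers out of the box (a sketch, assuming an ActiveRecord-backed Rails app):

# spec/rails_helper.rb (or spec_helper.rb on older setups)
RSpec.configure do |config|
  # Wrap every example in a database transaction and roll it back afterwards,
  # so records created by one test never leak into the next.
  config.use_transactional_fixtures = true
end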

But what happens when your tests are run in a different order and fail? Or when your tests pass in isolation, but fail when run as part of the larger test suite? Likely something in another test is causing your test failure. And finding which other test is being naughty can be a real pain. Fortunately, this is exactly the problem a new feature in Rspec 3.3 called bisect was built for.

In case you are not familiar with bisection, it's a simple but powerful algorithm. Suppose you want to drop an egg off a building at different floors to discover the lowest floor on which the egg will break. There are multiple ways to accomplish this task. The most wasteful way would be to start at floor one, drop the egg, and if it doesn't break, go to floor two and repeat. When you drop the egg and it breaks, you have your answer. This is slow. In Big O notation, this would be O(n), where n is the number of floors in the building.

We can do better. What if we could cut the floors we test in half with each egg drop? For argument's sake, if the building has 100 floors (and we have an unusually strong egg!) we might bisect the floors and start at floor 50. If we drop the egg and it breaks, we know the answer is somewhere between 1 and 50. If it doesn't break, we know the answer is between 51 and 100. Let's assume it breaks. We can then do our next test (with a new egg) on floor 25. If it breaks, we know our answer is between 1 and 25. If it doesn't break, we know our answer is between floors 26 and 50. Just by dropping two eggs on different floors, we've cut our sample size from 100 to 25. Each drop halves the remaining floors, so in Big O notation this is O(log n). On a 100 floor building we would have our answer in about 7 tests (2^7 = 128 ≥ 100). We are going to save a lot of eggs.

So why are we talking about eggs when we have test failures to fix? Rspec offers a sweet utility called “bisect” that will do something analogous to our egg drop algorithm in our test suite. We have a known test failure in the larger suite, but it passes in isolation. And we want to know which currently unknown test is interfering and causing our known test failure. How would you tackle this? Well, you could remove half your tests (keeping your failing test) and see if you still get a test error when running the test suite. If you do, you remove half again, and repeat your testing. Do this until you get down to two tests – the known failing test, and the previously unknown (now known) passing test. But that is tedious work to do manually.

Let's take a look at Rspec bisect:

$ bundle exec rspec --seed 1234 --bisect
Bisect started using options: "--seed 1234"
Running suite to find failures... (3 minutes 14.1 seconds)
Starting bisect with 3 failing examples and 2475 non-failing examples.
Checking that failure(s) are order-dependent... failure appears to be order-dependent

Round 1: bisecting over non-failing examples 1-2475 . ignoring examples 1-1238 (1 minute 38.37 seconds)
Round 2: bisecting over non-failing examples 1239-2475 . ignoring examples 1239-1857 (57.25 seconds)
Round 3: bisecting over non-failing examples 1858-2475 . ignoring examples 1858-2166 (29.98 seconds)
Round 4: bisecting over non-failing examples 2167-2475 .. ignoring examples 2322-2475 (46.11 seconds)
Round 5: bisecting over non-failing examples 2167-2321 .. ignoring examples 2245-2321 (43.61 seconds)
Round 6: bisecting over non-failing examples 2167-2244 .. ignoring examples 2206-2244 (42.92 seconds)
Round 7: bisecting over non-failing examples 2167-2205 .. ignoring examples 2187-2205 (43.65 seconds)
Round 8: bisecting over non-failing examples 2167-2186 .. ignoring examples 2177-2186 (41.96 seconds)
Round 9: bisecting over non-failing examples 2167-2176 . ignoring examples 2167-2171 (24.15 seconds)
Round 10: bisecting over non-failing examples 2172-2176 .. ignoring examples 2175-2176 (36.95 seconds)
Round 11: bisecting over non-failing examples 2172-2174 .. ignoring example 2174 (37.3 seconds)
Round 12: bisecting over non-failing examples 2172-2173 . ignoring example 2172 (24.68 seconds)
Bisect complete! Reduced necessary non-failing examples from 2475 to 1 in 9 minutes 1 second.

The minimal reproduction command is:
  rspec ./spec/models/file_1.rb[1:4:3] ./spec/models/file_2.rb[1:1:1,1:1:2,1:1:3] --seed 1234

That is a lot of output. Let's break it down:

  • We call rspec with a given --seed. The seed value determines the order of the tests. When --seed is omitted a random number is chosen, which makes the test order random. This is normally a good thing, but when bisecting it's a bad thing, because we want to reproduce a certain arrangement of tests to reliably generate our failure.
  • The test suite initially takes 3 minutes 14 seconds to run. This is the longest test run because it is discovering which test failures we have so it must run the full suite.
  • The first round we ignore the first half of our tests. This takes half the time to run because we are running half the tests.
  • The second round, we detected the failure from round one, so we know the half we tested includes the culprit. We've just eliminated 50% of our tests as suspects. The next bisect ignores the first half of the remaining pool and tests again, roughly halving the run time once more (tests don't all have exactly the same duration).
  • The third round repeats what we did in round two on the remaining pool, and so on. If the failure is no longer detected, we test the half we most recently removed instead.
  • Finally we get a completed message with a command to reproduce the failure. I can copy and paste this into a terminal and see the two files that are causing a failure.

rspec ./spec/models/file_1.rb[1:4:3] ./spec/models/file_2.rb[1:1:1,1:1:2,1:1:3] --seed 1234

What is that identity notation, you might ask? ./spec/models/file_1.rb is straightforward: it's just the filename. The next part is a bit more interesting: [1:4:3]. This tells us the nesting of the assertion that fails when combined with the other file. Not only does Rspec bisect tell us the two files that are interfering with each other, but which tests within those files are interfering! The notation 1:4:3 means the first describe block, then the fourth context within it, and finally the third test within those two contexts. In the case of the second notation, ./spec/models/file_2.rb[1:1:1,1:1:2,1:1:3], there are simply multiple failing tests (3 to be precise), separated by commas. Notice that the command also includes the seed value we used when bisecting. This is still important because the failure might be order dependent.
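To make the notation concrete, here is an illustrative skeleton (not the real spec file) showing which example [1:4:3] would address:

# spec/models/file_1_spec.rb -- illustrative structure only
RSpec.describe "File1" do              # group 1: first top-level group in the file
  context "scenario one" do; end       # child 1
  context "scenario two" do; end       # child 2
  context "scenario three" do; end     # child 3
  context "when reprocessing" do       # child 4 -> [1:4]
    it "does the first thing" do; end
    it "does the second thing" do; end
    it "recalculates totals" do; end   # third example in this group -> [1:4:3]
  end
end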

And what if your tests fail in a different order? Same principle. When you get an intermittent test failure running the suite that you suspect is order related, note the seed value that was used. Perform the same rspec bisect command, substituting that seed value for 1234. This will reproduce the failure by running the same tests in the same order as when the failure was recorded.

The Rspec team did an awesome job with the bisect utility. This used to be a dreaded problem to face as someone running tests. To verify your fix you had to run the entire test suite just to know if your change worked. This would have been a 4 minute wait in the example above, but could easily have been much longer. When reproducing failures, you want the shortest interval possible. Running 4 tests is certainly faster than running 2475 tests!

Another shout out goes to the authors of the transpec gem, which is used when upgrading versions of Rspec. It automatically changes your deprecated code to the newer style syntax. These two features together show that the Rspec team has a powerful and mature product.


Computers, Open-source, Ruby, Software

Upgrading to Rails 4

Recently I started a new job where the first big assignment was to upgrade the software stack to a more recent version. They were running Rails 3.2 and wanted to move as far forward as possible. With Rails 3.2 support down to severe bug fixes only, and Rails 5 due any month now, this is something they wisely didn't want to put off.

It's a smaller company, and they have been open to a lot of my feedback and suggestions. I was basically given the reins and told to do what needed to be done to get us upgraded.

So the first task was some research, and I stumbled upon the official Rails upgrade guide pretty quickly. It nicely outlines the breaking changes. Fortunately, the biggest change, strong parameters, can be deferred by including the protected_attributes gem and kicking that can down the road. We will be logging which controller actions receive which parameters instead of raising, so we will have time to collect some data before we switch over in one painful release.
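One possible shape for that logging (my sketch, not necessarily what we will ship): a filter in ApplicationController that records which keys each action receives, while protected_attributes keeps mass assignment working as before.

class ApplicationController < ActionController::Base
  before_action :log_received_params   # before_filter on Rails 3.2

  private

  # Log the controller, action, and parameter keys so we can write
  # accurate permit lists later without guessing.
  def log_received_params
    Rails.logger.info(
      "[params-audit] #{controller_name}##{action_name} " \
      "keys=#{params.except(:controller, :action).keys.sort.join(',')}"
    )
  end
end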

The guides stressed that the test suite is critical during the upgrade. I was fortunate enough to have a project with adequate testing coverage. It wasn’t the 80% sweet spot, but it was certainly valuable at ~40%. However, the suite had fallen into disuse, so the first task was to get them back to green.

Once the test suite was green, it became a matter of KEEPING it green. Luck smiled a second time: they had an old CI server powered by CruiseControl.rb that had fallen into disuse, and it was little fuss to get it back up and running again. The migrations, however, could no longer be played from the project's inception to the current time.

This is where luck stopped smiling upon me. The project did not track db/schema.rb, and the migrations were not playable. The only way to get an instance of the database was to download the schema from production. Not the best practice, so I went about tracking the schema and getting adoption of this new practice. Further complicating the schema approach was a decision to move all older migrations into subfolders of db/migrate by year (e.g. 2011, 2012, etc). This was done, I found out, because TextMate doesn't like big directories. The issue is that db:schema:load isn't recursive in its retrieval of migration versions. It took me a bit to understand what was happening, and how it was happening. After a failed monkey patch to the migrator logic in ActiveRecord, I decided to just move the migrations back into db/migrate and eliminate the subdirectories. Sorry, TextMate!

Now the database could be rapidly provisioned, and I got a seed working with a minimal set of data. Back in CI, I reconfigured the build script to use db:schema:load instead of db:migrate, and with the green test suite we got builds working again.

We used a utility called CCMenu to show the build status in the notification bar in OS X.

To make the builds even more visible, I found an integration with Slack to report the build status in our chat. I made my own fork and added some famous movie quotes for successes and failures, since I found the default messages lacking. I didn't think our female developers would appreciate the "you're a stud!" message.

Back to the Rails 4 upgrade. The tests are passing in master, so I made a long-lived feature branch called "rails-upgrade" and I merge master into it daily. The "rails-upgrade" branch serves as an integration point for other feature branches that will merge into it. The plan is to keep any upgrade related changes out of master until it's time to deploy. That means separate branches, separate CI builds, and separate staging servers for manual QA.

One lesson I've learned is that a deprecation warning may not always be just informational. In particular, Rails 4 requires all scopes to be callable (lambdas or procs). This was breaking the way that associations with scopes get built: users.roles.admin.find_or_create! would previously find an associated admin record, or create it. However, in Rails 4, creation fails because the role's reference back to the user is nil. I'm not sure why, but it's reproducible, and changing the admin scope on Role to a callable restores the reference back to the user.
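For illustration (the actual scope body is made up), the fix is moving from the Rails 3 style eager scope to a callable:

class Role < ActiveRecord::Base
  # Rails 3 style: evaluated once at class load time, deprecated in Rails 4
  # scope :admin, where(name: 'admin')

  # Rails 4 style: the body is a lambda, evaluated each time the scope is used
  scope :admin, -> { where(name: 'admin') }
end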

Ideally, I’d have wanted to get the test suite green before tackling deprecation warnings because I want to change as little as possible before I get back to a known good status. However, not fixing this deprecation warning was actually causing tests to fail.

Now we are down to a handful of failing tests on Rails 4. Most deal with the ActiveRecord syntax changes. Hopefully I can get these knocked out quickly. Then it's on to manual QA.

In summary – get your test suite green. Keep it green. Do the upgrade and get it back to green. Then make any changes to remove deprecation warnings, keeping the suite green. The test suite is your guide during an upgrade like this.

Open-source, Ruby, Software, Thoughts, Uncategorized

Implementing a Configuration

Have you ever seen a configuration in Ruby that yields to a block where properties are set on the yielded object? Rails does this in its environment files:

Your::Application.configure do
  config.cache_classes                     = true
  config.consider_all_requests_local       = false
  config.action_controller.perform_caching = true
end

Devise is another great example of this pattern:

Devise.setup do |config|
  config.secret_key = '38c8e4958385982971f'
  config.mailer_sender = ""
  config.mailer = "AuthenticationMailer"
end

This is an established way to pass in configuration options in the Ruby world. How does this work? What are the alternatives?

Under the Hood

Let's start with our own implementation from scratch. We know that we are calling a method (Application.configure, or Devise.setup) and that method yields to a block:

module Example
  class << self
    def configure
      yield Config
    end
  end
end

You can see that this will yield the class Config when we call Example.configure and pass in a block. Config will need to define our properties:

module Example
  class Config
    class << self
      attr_accessor :api_url, :consumer_key, :consumer_secret
    end
  end
end

We can now call our configure method and pass in a block:

Example.configure do |config|
  config.api_url = ""
end

If we try to set a property that does not exist in our Example::Config class, then we get a helpful NoMethodError:

Example.configure do |config|
  config.not_a_real_config_setting = ""
end

# => NoMethodError: undefined method `not_a_real_config_setting=' for Example::Config:Class

This helps to define your configuration interface. Options are explicitly declared. They cannot be mistyped, or dynamically added without an explicit runtime error.

In practice, the definitions of Example and Example::Config would live in a gem. The Example.configure invocation would live within the code that wants to configure the properties of that gem with things such as credentials, URLs, etc. This separation of concerns makes sense. Gems should not know anything about the business logic of your application; they should be handed configuration options from the project. The project, in turn, should only be aware of the public interface of the gem. We are agreeing upon where these pieces of information move from one domain to another. The project doesn't know where this information will be used, and the gem doesn't know what the information is until we pass it. So far so good!

Using this Configuration

Now that we’ve passed our information inside the configuration block, we can reference these class level (static) properties in our gem:

require 'net/http'
require 'simple_oauth'

module Example
  class Request
    def self.get(path)
      uri = URI.parse(Config.api_url + path)
      request = Net::HTTP::Get.new(uri)
      request['Authorization'] = auth_header(:get, uri)
      Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
    end

    def self.auth_header(request_type, uri)
      SimpleOAuth::Header.new(request_type, uri.to_s, {},
                              consumer_key: Config.consumer_key,
                              consumer_secret: Config.consumer_secret).to_s
    end
  end
end

This will do a simple GET request, passing in SimpleOAuth headers. Inside the get method we call Config.api_url to know where the API lives; this was set earlier via the configure block. The SimpleOAuth headers are likewise built from values on Config. You would invoke it like so:

Example.configure do |config|
  config.api_url = ""
  config.consumer_key = "1895-1192-1234"
  config.consumer_secret = '76asdfh3heasd8f6akj3hea0a9s76df'
end

Example::Request.get('/products') # => "{products: [product1, product2, etc]...}"
Example::Request.get('/users')    # => "{users: [user1, user2, etc]...}"

Example::Config becomes the holding location for your configuration information. And by making the properties static, you don’t have to worry about passing around a reference to the instance.


If the yielding to a block is a little too clever for you, you can always pass the configuration directly into a class-level method:

class Example
  class << self
    attr_accessor :api_url, :consumer_key, :consumer_secret

    def config(api_url:, consumer_key:, consumer_secret:)
      self.api_url = api_url
      self.consumer_key = consumer_key
      self.consumer_secret = consumer_secret
    end
  end
end
This can be configured like so:

Example.config(
  api_url: "",
  consumer_key: "1895-1192-1234",
  consumer_secret: '76asdfh3heasd8f6akj3hea0a9s76df'
)

Example.api_url # => ""

This feels less encapsulated to me. Instead of having an interface for our configuration settings, we are just setting properties directly onto a class.

What are your thoughts? What advantages does the block style configure offer over the alternative above?


Computers, Open-source, Ruby, Software

Don’t Use “#” In the Paperclip Gem

I learned a whole lot more about ImageMagick commands last week than I ever really wanted to know. The problem was that our uploaded images were having content cropped off the top, bottom, and sides. Like many folks in the Rails world, we pass our attachments through Paperclip to handle all of the nitty gritty resizing operations.

I was interested in understanding how we could prevent our content from being cropped off when I came across an interesting idiom in the geometry settings:

has_croppable_attachment :image,
  styles: {
    :'630x315' => { geometry: "630x315#", format: :jpg }
  }

Well, take a look at that. There is a "#" symbol suffixed to my image geometry. I went to the ImageMagick documentation to look up what this flag meant. Spoiler alert: it doesn't exist there. After some digging around, I discovered that this idiom is provided by the Paperclip gem, and it translates to the following convert command:

convert '/path/to/source.jpg' -resize "630x" -crop "630x315+0+0" '/path/to/output.jpg'

You can see the resize + crop combination of commands being built by Paperclip according to their documentation:

Paperclip also adds the “#” option (e.g. “50×50#”), which will resize the image to fit maximally inside the dimensions and then crop the rest off (weighted at the center).

Well, that is no good! If an image is not a 2:1 aspect ratio as per my dimensions 630×315 (or whatever aspect ratio you have) YOU WILL LOSE CONTENT! Time to rethink this…

[Image: The dimensions are 306×306]


[Image: The image was resized until it was large enough to cover a 630×315 canvas, then the top and bottom were cropped off]

Instead of resizing maximally (so that an image is a minimum of 630 width AND a minimum of 315 height), let's resize minimally (so that an image is a maximum of 630 width OR 315 height). The aspect ratio is preserved in both scenarios.

We want to resize while preserving the aspect ratio, but we also need to make our canvas 630×315. The canvas dimensions are controlled by the extent option. When we do this, we will likely have space on the top and bottom, or on the sides, that needs to be filled to hit exactly these dimensions. What you fill this background with can be a color (in my case white). We also likely want to center the minimally resized image on this canvas. You can pass these convert options into Paperclip like so:

convert_options: {
  :'630x315' => "-background white -gravity center -extent 630x315"
}

The resulting command will look something like this:

convert '/path/to/source.jpg' -resize "630x315" -background white -gravity center -extent 630x315 '/path/to/output.jpg'

Notice that our lossy crop flag has been replaced with a nicer extent flag. We can see the results this has on a similarly sized image:

[Image: We have an image smaller than the target 630×315]


[Image: We now have a 630×315 image with the sides filled in with a white background to preserve the dimensions. Note the sides of the image are white.]

As a final note, this works for both images larger and smaller than the target outcome dimensions. Looking at the ImageMagick documentation for flags can be helpful, but daunting as the real power lies in chaining multiple flags for a desired effect.
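For reference, here is how the pieces might fit together in a stock Paperclip declaration (a sketch using has_attached_file; our real code goes through the has_croppable_attachment wrapper shown earlier):

has_attached_file :image,
  styles: {
    :'630x315' => { geometry: "630x315", format: :jpg }
  },
  convert_options: {
    :'630x315' => "-background white -gravity center -extent 630x315"
  }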

With a little effort I was able to get what is (in my opinion) a better image resize with just a few custom flags.