We created AxiomQ by merging two companies, each with its own site. To keep search engines happy we needed to set up 301 redirects. Unfortunately, one site was running on GitHub Pages, which doesn’t support 301 redirects.

The solution was to create a simple Rack app, use the rack-rewrite gem to redirect traffic, and deploy it on Heroku.

Rack-rewrite is a nice gem which implements a Rack middleware that acts similarly to Apache mod_rewrite. Rewrite rules are simple to define; I encourage you to take a look at the rack-rewrite docs.

We only needed the simplest 301 HTTP redirect, which sends a page to a new URL. Like this one redirecting the about page to the new team page:

r301 %r{/about}, 'https://www.axiomq.com/team/'

The most basic Rack application only requires a config.ru file, but for ease of use and simpler deployment we also added a Gemfile.

# Gemfile
source 'https://rubygems.org'

gem 'rack'
gem 'rack-rewrite'

# config.ru
require 'rack/rewrite'

use Rack::Rewrite do
  r301 %r{/about}, 'https://www.axiomq.com/team/'
  r301 %r{/contact}, 'https://www.axiomq.com/contact/'
  # ... more redirects ...
end

run lambda { |env| [301, { 'Location' => 'https://www.axiomq.com/' }, []] }

The r301 lines define every redirect we need (the matcher can be a string or a regular expression). The last line is a catch-all just in case we missed something: any other request is redirected to the new site's homepage.
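Because a Rack app is just a callable that returns a [status, headers, body] triple, you can sanity-check the catch-all without booting a server. A standalone sketch (not part of the deployed config.ru):

```ruby
# A Rack app is any object responding to #call(env) that returns
# a [status, headers, body] triple. The catch-all from config.ru:
app = lambda { |env| [301, { 'Location' => 'https://www.axiomq.com/' }, []] }

status, headers, _body = app.call({})
puts status              # => 301
puts headers['Location'] # => https://www.axiomq.com/
```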

To try this app locally you can use rackup like this:

bundle exec rackup -p 9292 config.ru

Finally, deploying to Heroku is really simple, since pure Rack apps are supported by Heroku. Just follow the usual setup outlined in their documentation for deploying Rack apps to Heroku.

Recently we were working on a Rails project where we needed to import data from a legacy database. In the next few steps I will describe how we did it.

In database.yml we specified the connection information for the legacy database.

# config/database.yml
legacy_development:
  adapter: postgresql # or mysql2, depending on the legacy database
  host: localhost
  database: legacy

In the models directory, we added a legacy directory to which we will later add all legacy models. But first we created a Legacy::Base class that inherits from ActiveRecord::Base. In it, we established a connection with the legacy database. This way we isolate our connection to the legacy database, and all our legacy models will inherit from Legacy::Base.

# app/models/legacy/base.rb
class Legacy::Base < ActiveRecord::Base
  establish_connection :legacy_development
  self.abstract_class = true
end

Now we can create a legacy model and specify its table name from the legacy database.

# app/models/legacy/post.rb
class Legacy::Post < Legacy::Base
  self.table_name = 'posts'
end

Once we have done this, it is time to check if the legacy posts are available. Run bundle exec rails console in your terminal and fetch all legacy posts with Legacy::Post.all.

In our case we added a new method to help us migrate the data. In it we match the legacy data with our new model's attributes.

 class Legacy::Post < Legacy::Base
   self.table_name = 'posts'

   def to_new_model
     new_post        = Post.new
     new_post.id     = self.id
     new_post.title  = self.title
     new_post.text   = self.text
     new_post.author = "#{first_name} #{last_name}".strip
     new_post
   end
 end
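One detail worth calling out: interpolating both name parts and then calling strip keeps the author value clean even when one part is missing. A standalone sketch (join_name is a hypothetical helper, not part of the model):

```ruby
# Hypothetical helper mirroring the author-building line above:
# interpolation turns nil into "", and strip removes the stray space.
def join_name(first_name, last_name)
  "#{first_name} #{last_name}".strip
end

puts join_name('Jane', 'Doe') # => "Jane Doe"
puts join_name('Jane', nil)   # => "Jane"
```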


We used a rake task to migrate the data. In it we loop through all legacy posts and use the to_new_model method to build the new records; if a record is valid it is saved, otherwise we print its errors.

  # lib/tasks/migrate_posts.rake
  namespace :legacy do
    task migrate_posts: :environment do
      Legacy::Post.all.each do |post|
        new_post = post.to_new_model

        if new_post.save
          puts "Post with id: #{new_post.id} is created."
        else
          puts "Post with id: #{new_post.id} errors: #{new_post.errors.full_messages.join(", ")}"
        end
      end
    end
  end

The final step is to run the rake task: bundle exec rake legacy:migrate_posts

I hope that this helps you :)

Closure actions were introduced in Ember v1.13.0 and brought a lot of improvements over the old action handling mechanism in Ember. These improvements enabled Ember to adopt a new data flow model called Data Down, Actions Up (DDAU), which simplified communication between parent and child components.

What are closure actions?

Closure actions are based on JavaScript closures, which are basically functions that remember the environment in which they were created. So closure actions are just functions that remember the context in which they were defined. Since they are just functions, we can pass them around as values and call them directly as callbacks. This lets us pass them to inner components and call them from those components directly. With the old approach we had to use sendAction() from the component and define a matching action on the controller or route.
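As a plain JavaScript illustration of the underlying idea (not Ember-specific), a closure keeps access to the variables of the scope it was created in:

```javascript
// makeCounter returns a function that "remembers" the count variable
// from the scope where it was created, even after makeCounter returns.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1;
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2
```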

How to create closure actions?

Every action defined in a controller can become a closure action. It's important to note that actions defined in routes can also become closure actions, but you need the ember-route-action-helper addon for that. Closure actions are created in a template using the action helper, which wraps the action in the current context and returns it as a closure function.


Let’s define an action inside our application controller.

export default Ember.Controller.extend({
  actions: {
    submit() {
      // some logic
    }
  }
});

In application.hbs we create the closure action using the action helper and assign it to example-comp’s save attribute.

{{example-comp save=(action 'submit')}}

The submit action is now assigned to the save attribute of the example-comp component, so we can call it directly inside the component.

<!-- example-comp.hbs -->

<button onclick={{action 'saveItem'}}>Save</button>

// example-comp.js
export default Ember.Component.extend({
  actions: {
    saveItem() {
      // Calling save directly
      this.get('save')();
    }
  }
});

Passing a closure action through multiple levels of components is easy. Let’s say we want to call the submit action in an example-comp-child component. Since we have a save attribute inside example-comp with the submit action assigned to it, we just need to pass it one level down, in example-comp.hbs:

{{example-comp-child save=(action save)}}

Calling the action from example-comp-child is the same as for the example-comp component.

export default Ember.Component.extend({
  actions: {
    saveItem() {
      this.get('save')();
    }
  }
});

Closure actions simplify the action passing mechanism in Ember, but they can also return values, enable currying, and much more. It's definitely worth spending time learning all of closure actions' features, and I hope this introduction helps with that.

Most modern web app deployments have automated scripts that perform all tasks needed to deploy the app. They handle all the dirty details, while the developer just needs to do something simple like cap deploy. In other words, usually you don’t need to access the remote servers directly.

However, sometimes you run into one-time (or infrequent) tasks that might not have been automated. For example, dumping production data and importing it on your local machine, syncing uploaded files between production and staging environments, etc.

These often involve transferring files between your local machine and a remote server (or between two remote servers). There are a few ways you can handle this depending on what you need to transfer. We are going to cover methods using wget, scp, and rsync.
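As a quick preview, these are the shapes of the commands involved (the hostnames and paths here are placeholders, not real servers):

```shell
# Copy a single file from a remote server to the local machine with scp:
scp deploy@example.com:/var/backups/dump.tar ./dump.tar

# Mirror an uploads directory with rsync (archive mode, compressed,
# verbose); rsync only transfers files that have changed:
rsync -avz deploy@example.com:/var/www/app/shared/uploads/ ./uploads/
```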

While working on different projects and in different environments, we often need to export a dump from one database and then import it into another. A while ago Slobodan wrote how to export and import a MySQL dump, and here is a guide on how to do it for PostgreSQL.

Export a PostgreSQL database dump

To export a PostgreSQL database we will need to use the pg_dump tool, which will dump all the contents of a selected database into a single file. We need to run pg_dump on the command line on the computer where the database is stored. So, if the database is stored on a remote server, you will need to SSH to that server in order to run the following command:

pg_dump -U db_user -W -F t db_name > /path/to/your/file/dump_name.tar

Here we used the following options:

  • -U to specify which user will connect to the PostgreSQL database server.
  • -W or --password will force pg_dump to prompt for a password before connecting to the server.
  • -F is used to specify the format of the output file, which can be one of the following:
    • p - plain-text SQL script
    • c - custom-format archive
    • d - directory-format archive
    • t - tar-format archive

The custom, directory and tar formats are suitable for input into pg_restore.

To see a list of all the available options use pg_dump -?.

With the given options, pg_dump will first prompt for a password for the database user db_user and then connect as that user to the database named db_name. After it successfully connects, > will write the output produced by pg_dump to a file with the given name, in this case dump_name.tar.

The file created in the described process contains all the SQL queries required to replicate your database.

Import a PostgreSQL database dump

There are two ways to restore a PostgreSQL database:

  1. psql for restoring from a plain SQL script file created with pg_dump,
  2. pg_restore for restoring from a .tar file, directory, or custom format created with pg_dump.

1. Restore a database with psql

If your backup is a plain-text file containing an SQL script, then you can restore your database using the PostgreSQL interactive terminal by running the following command:

psql -U db_user db_name < dump_name.sql

where db_user is the database user, db_name is the database name, and dump_name.sql is the name of your backup file.

2. Restore a database with pg_restore

If you chose the custom, directory, or tar format when creating the backup file, then you will need to use pg_restore in order to restore your database:

pg_restore -d db_name /path/to/your/file/dump_name.tar -c -U db_user

If you use pg_restore you have various options available, for example:

  • -c to drop database objects before recreating them,
  • -C to create a database before restoring into it,
  • -e to exit if an error is encountered,
  • -F format to specify the format of the archive.

Use pg_restore -? to get the full list of available options.

You can find more info on the mentioned tools by running man pg_dump, man psql and man pg_restore.

Starting with v9.2, PostgreSQL added native JSON support, which enables us to take advantage of some benefits that come with a NoSQL database within a traditional relational database such as PostgreSQL.

While working on a Ruby on Rails application that used a PostgreSQL database to store data, we came across an issue where we needed to implement a search by key within a JSON column.

We were already using Ransack for building search forms within the application, so we needed a way of telling Ransack to perform a search by a given key in our JSON column.

This is where Ransackers come in.

The premise behind Ransack is to provide access to Arel predicate methods.

You can find more information on Arel here.

In our case we needed to perform a search within the transactions table and the payload JSON column, looking for records containing a key called invoice_number. To achieve this we added the following ransacker to our Transaction model:

ransacker :invoice_number do |parent|
  Arel::Nodes::InfixOperation.new('->>', parent.table[:payload], 'invoice_number')
end

Now with our search set on invoice_number_cont (cont being just one of Ransack's available search predicates), if the user entered for example 123 in the search field, it would generate a query like this:

SELECT  "transactions".* FROM "transactions"  WHERE ("transactions"."payload" ->> 'invoice_number' ILIKE '%123%')

basically performing a search for records in the transactions table that have a key called invoice_number with a value containing the string 123, within the JSON column payload.

I recently worked on a Rails project which had parts of pages in different languages. That may be a problem if you have already translated the entire text into all required languages. You might even be tempted to hardcode parts of the text in other languages. Fortunately, there is an elegant way to solve that problem: just wrap parts of the template or partials into blocks with the desired locale, like this:

<% I18n.with_locale('en') do %>
  ...part of your template
  <%= render partial: 'some/partial' %>
<% end %>


Suppose there is a template with only a header and two paragraphs.

<h1><%= t('my_great_header') %></h1>

<p><%= t('first_paragraph') %></p>

<p><%= t('second_paragraph') %></p>

And locale files in English and French for that template.

# in config/locales/en.yml
en:
  my_great_header: "My English great header"
  first_paragraph: "First English paragraph"
  second_paragraph: "Second English paragraph"

# in config/locales/fr.yml
fr:
  my_great_header: "My French great header"
  first_paragraph: "First French paragraph"
  second_paragraph: "Second French paragraph"

In the lifetime of every application the time comes for it to be presented to everyone. That's why we have to put our application on a special server designed for this purpose. In other words, we need to deploy our application. In this post you will see how to deploy an app with Capistrano 3.

Capistrano is a great developer tool used to automatically deploy projects to remote servers.

Add Capistrano to Rails app

I will assume you already have a server set up and an application ready to be deployed remotely.

We will use the capistrano-rails gem, so we need to add these gems to the Gemfile:

group :development do
  gem 'capistrano', '~> 3.5'
  gem 'capistrano-rails', '~> 1.1.6'
end

and install gems with $ bundle install.

Initialize Capistrano

Then run the following command to create configuration files:

$ bundle exec cap install

This command creates all the necessary configuration files and a directory structure with two stages, staging and production.
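The generated layout, as described in Capistrano's documentation, looks roughly like this:

```
├── Capfile
├── config
│   ├── deploy
│   │   ├── production.rb
│   │   └── staging.rb
│   └── deploy.rb
└── lib
    └── capistrano
        └── tasks
```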


Sooner or later every new Ruby developer needs to understand the differences between these two common rake tasks. Basically, these simple definitions tell us everything we need to know:

  • rake db:migrate runs migrations that have not run yet
  • rake db:schema:load loads the schema.rb file into the database.

but the real question is when to use one or the other.

Advice: when you are adding a new migration to an existing app, run rake db:migrate; but when you join an existing application (especially an old one), or when you drop your application's database and need to create it again, always run rake db:schema:load to load the schema.


I am working on an application which uses the globalize gem for ActiveRecord model/data translations. Globalize works this way:

  • first, specify the attributes which need to be translatable:

class Post < ActiveRecord::Base
  translates :title, :text
end

If you use Vagrant, VirtualBox and Ubuntu to build your Rails apps and you want to test them with Cucumber scenarios, this is the right post for you. By default, Vagrant and VirtualBox use Ubuntu without an X server or GUI.

Everything goes well until you need the @javascript flag for your Cucumber scenario. @javascript uses a JavaScript-aware system to process web requests (e.g. Selenium) instead of the default (non-JavaScript-aware) webrat browser.

Install Mozilla Firefox

Selenium WebDriver is flexible and lets you run Selenium headless on servers with no display. But in order to run, Selenium needs to launch a browser, and if the machine has no display, the browser cannot be launched. So in order to use Selenium, you need to fake a display and let Selenium and the browser think they are running on a machine with a display.

Install the latest version of Mozilla Firefox:

sudo apt-get install firefox

Since Ubuntu is running without an X server, Selenium cannot start Firefox, because Firefox requires one.

Setting up virtual X server

A virtual X server is required to make browsers run normally by making them believe there is a display available, although it doesn't create any visible windows. Xvfb (X Virtual FrameBuffer) works fine for this: it lets you run an X server on machines with no display devices.
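On Ubuntu the usual setup looks roughly like this (a sketch; package availability depends on your Ubuntu version):

```shell
# Install the virtual framebuffer X server:
sudo apt-get install xvfb

# Run the Cucumber suite inside a virtual display; xvfb-run starts
# Xvfb, sets DISPLAY for the command, and tears everything down after:
xvfb-run --auto-servernum bundle exec cucumber
```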