Rails’ acts_as_revisable now summarizes changes

If you are a frequent reader, please congratulate me next time you see me on my first ever GitHub commit. I have been a long-time user of GitHub, but I haven’t contributed anything back yet. I decided to start with the project acts_as_revisable, because I have some ideas for it that I think are useful.

The commit tonight adds a “revisable_changes” column, which records each attribute’s “before” and “after” values in the revision record. This data is calculated from the ActiveRecord instance method “changes”. To demonstrate:

# User.create :username => 'ben'
# => revisable_changes: {"username" => [nil, "ben"]}
# User.find(1).update_attributes :username => 'bob'
# => revisable_changes: {"username" => ["ben", "bob"]}

This should serve as a good base to build summaries in a human readable format in an application. After tossing around a few different word choices to describe the changes, I decided that the better approach was to just record the data that changed in a structured format and leave it up to the implementation. Such an implementation may look something like this:

  def changes_summary
    self.revisable_changes.map do |attribute, values|
      before, after = values[0], values[1]
      if before.blank? and after
        "Added #{attribute} as #{after}"
      elsif before and after.blank?
        "Removed #{attribute} value of #{before}"
      else
        "Changed #{attribute} from #{before} to #{after}"
      end
    end.join('; ')
  end

Using our user instance above, this method would return: “Changed username from ben to bob”.
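To make that concrete, here is the same summary logic extracted into a plain method so it runs outside Rails (the method name `summarize` is hypothetical, and `blank?` is approximated with a local lambda):

```ruby
# A framework-free version of the changes_summary logic above.
# `blank?` is approximated for nil/empty values so no Rails is needed.
def summarize(changes)
  blank = lambda { |v| v.nil? || v.to_s.empty? }
  changes.map do |attribute, (before, after)|
    if blank.call(before) && after
      "Added #{attribute} as #{after}"
    elsif before && blank.call(after)
      "Removed #{attribute} value of #{before}"
    else
      "Changed #{attribute} from #{before} to #{after}"
    end
  end.join('; ')
end

summarize('username' => ['ben', 'bob'])  # => "Changed username from ben to bob"
```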

Other plans for acts_as_revisable in the future include tracking associations of a model. For example, if a user has many permissions, and the permissions change, the revisable information will include the permissions for that user at that point in time. I haven’t worked out the specifics of this yet, as it could potentially generate a lot of unwanted data. I plan on parsing the data from ActiveRecord’s “reflect_on_all_associations” method to gather the associations. I will then provide a way for a user to configure which associations should or should not be tracked. Then I will iterate through these objects using ActiveRecord hooks and somehow record the state – either in a shadow table, or by serializing the associations into a column. The trick here will be to marshal the objects back to the live data when a revert is called. More soon…
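The “serialize into a column, marshal back on revert” idea can be sketched without any Rails at all. Everything below is a hypothetical stand-in (the `Permission` struct and `UserRevision` class are invented for illustration), but it shows the round trip I have in mind:

```ruby
require 'base64'

# Hypothetical sketch: snapshot an association as an opaque blob
# (e.g. a TEXT column), then marshal it back to live objects on revert.
Permission = Struct.new(:name)

class UserRevision
  attr_reader :serialized_permissions

  def initialize(permissions)
    # Dump the associated records and encode them for safe column storage.
    @serialized_permissions = Base64.encode64(Marshal.dump(permissions))
  end

  def revert_permissions
    # Marshal the snapshot back into the original objects.
    Marshal.load(Base64.decode64(@serialized_permissions))
  end
end

snapshot = UserRevision.new([Permission.new('admin'), Permission.new('editor')])
restored = snapshot.revert_permissions
```

The open question remains how to reattach these restored objects to live database rows during a revert.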

Living in an Apple World

Welcome readers to what is a first here on my blog – a review of Apple’s OS X. As some of you may know, part of my new job is working on a Mac for 8 hours a day, 5 days a week. Someone asked me about my experiences, and I feel up to sharing my findings. I want to be fair in my assessments, so if it sounds like I am starting to get a little slanted, keep me in check with a comment!

First things first – the initial impression. I have a 27″ iMac and I was initially impressed by the appearance of the machine. The iMac screen and case are one piece, so I have plenty of room to kick around beneath my desk with minimal cord entanglement (not that it matters because I sit cross-legged all day). The compact style keyboard has an aluminum casing, which matches the iMac. The mouse is the Mighty Mouse. Both are wired, which I appreciate – especially on the mouse. I hated the compact keyboard since it felt shrunken, and the addition of the “Fn” key in the bottom row meant every time I tried to press “Control” I missed. After swapping this out for a full-sized keyboard I was much happier, and even unlearned some bad habits. The Mighty Mouse absolutely sucks. The tiny wheel stops responding all the time from the slightest speck of dirt, and you have to turn it over and rub it back and forth on your jeans, or the mouse pad. Its one saving feature is the ability to scroll vertically and horizontally, which is occasionally helpful. I am a right click fan, and though invisible, the region on the mouse that registers as a right click versus a left is about 10 times smaller. It feels like the edge of the mouse.

The keyboard on a Mac is different in important ways from its PC counterparts. The “Windows” key is replaced with the Command key, which is utilized far more than the Windows key ever was. In fact, most of the operations of the machine are done using Command (copy, paste, new tab, close window, etc) effectively making it closer to the “Control” key in Windows. However, the Control key remains, which actually introduces a whole new key combination to more effectively use shortcuts. The Command key is located next to the space bar, which is much more convenient than the extreme left placement of the Control key. I do copy, paste, etc operations using my thumb, and not my pinky finger – much less strain.

The computer screen can be tilted, which is nice since the whole world seems to be moving towards the annoying high gloss screens. I can tilt it down, and out of the fluorescent overhead lights. I really feel that gloss is a showroom gimmick, just like turning the brightness up to max on the TVs in the store. If I wanted to look at myself, I would sit in front of a mirror. Fortunately, I have a second non-gloss monitor, and I do most of my coding on this screen. Also, it would be nice if the monitor had a height adjustment, as the second monitor isn’t quite the same height as the iMac screen.

Enough about appearance – let’s talk hardware. This machine has a dual-core Intel processor, with 2 GB of memory (later upgraded to 4 GB). The video card is decent I suppose (however the interface can get quite “laggy” at times). I don’t have any idea what the machine costs, but this is definitely unimpressive hardware. 2 GB of RAM is the minimum I would work with, and it being slow laptop RAM doesn’t help at all. At least there isn’t a laptop hard drive in it too.

As for the Operating System, it seems pretty stripped down. This isn’t necessarily a bad thing – I can quickly find what I am looking for, without going on a damn field trip through obscure dialog windows. The flip side is that it doesn’t feel very “customizable”. You use the stock features, or you don’t use a Mac. Perhaps there are a bunch of third party utilities that I don’t know about? Sometimes I am disappointed by the lack of customization options (there are just a handful of settings for the dock). To be honest, I am not sure what I would customize, but I like to poke around, and I often leave the System Preferences disappointed having not found “setting xyz”.

I really enjoy the file system indexing, and it is the best implementation of full-text search I have seen. It doesn’t bog down the computer, and the results are instantly updated. Magic. It is effectively the starting point for all my open actions. I don’t know why it isn’t available for the first 10 minutes after a boot, but I don’t shut down that much so it’s OK.

I was surprised by the lack of a default system-wide notification system – something that Growl has aimed to fill. I was also disappointed by the lack of package management on the Mac – again, third party solutions exist. The system updates are just as annoying as in Windows, which was a disappointment. Once the “restart” prompt stole my typing focus and proceeded to shut down the system. A few times the machine has “beach balled” (the Mac “hourglass” icon), and hard locked. Most of the time it’s fairly responsive and stable, which I can appreciate.

Another point of interest is window management. I use Expose almost as regularly as I do the task switcher (Command + Tab), though admittedly sometimes I get lost in the special effects and forget what I was doing. There are a bunch of other window groupings, but I don’t really find them that useful. One particularly frustrating observation is that once you minimize a window, you can’t Command + Tab back to it. Isn’t that the point of the task switcher? It even shows up in the task switcher, but when it is selected, absolutely nothing happens.

As for the software available on the Mac, it is more comprehensive than Linux, and less comprehensive than Windows. Some of my co-workers commented that in OS X, there is usually one utility to do something, whether you like it or not. I use Google Chrome, JetBrains’ RubyMine, Ruby, Terminal, Lotus Notes, Adium, and Propane almost exclusively. Because of this, I can’t really assess the state of the Mac software ecosystem, but I will say that all these programs run damn well on the Mac. The only software that crashes on me is Flash. Flash on Linux and Windows is stable, however on the Mac probably one in ten uses causes the browser tab to lock up. I am not sure whether this is a Chrome issue or not, but something is seriously wrong with the state of Flash on my Mac. Now I understand why so many Mac users hate Flash – as a Windows user, I never experienced the constant crashing.

In summary, due to the nature of my work, I use the Mac at work in essentially the same manner I would use Linux. The terminal is where I spend my time, and I am more or less indifferent about the operating system around it, as long as I can install the system libraries I need for Ruby extensions, and it stays responsive. My next computer purchase will be a netbook and I will install Ubuntu on it, as I can’t justify paying the designer prices of Apple products to use a terminal and a web browser. Toe to toe with Windows, and many Linux distributions, OS X excels in many areas. It’s a fantastic operating system, but I am not sure that it is worth its cost. If I could throw it on my PC at home it would be worth $100. Buying a special machine just to run it is just silly.

Ubuntu 10.04 – Very Refined

A lot has changed with Linux since I last visited Ubuntu. I had an old crufty version of Ubuntu 8.10 sitting on my hard drive that I hadn’t booted into in quite some time. Realizing that April was a release month for Ubuntu, I decided to go get the latest and greatest.

There was a time when the software that I used on Linux was very exclusive to Linux. It took a lot of hunting to find the best programs for the task at hand, since the names were all unfamiliar. That no longer seems to be the case. Google Chrome has an official Linux client that runs quite well. Bookmark syncing to your Google account provides an easy way to import your information. Dropbox has a Linux client that integrates with the Nautilus file manager.

Rhythmbox integrates in with Last.fm, Magnatune, and the new music store Ubuntu One. Empathy integrates in with Facebook chat, Google Talk, AIM, IRC, and many others. Gwibber integrates in with Facebook, Twitter, Flickr, Digg, and others. All of these integrate in with Ubuntu’s new Indicator Applet.

The new theme is nice, and the nVidia drivers are stable as always. The new theme does away with the brown, and moves to a darker theme which I prefer. Compiz is running “discreetly”, providing effects that enhance the user experience without overwhelming it. The icing on the cake is the new Ubuntu Software Center, which takes all of the “apt-cache and apt-get” out of the equation. The interface is revamped from the old Synaptic package manager and provides some nice touches such as “Featured Applications”, category views, and a seamless search, select and install experience.

If you are doing Rails development on Windows, do yourself a favor and revisit this classic to see how much improvement there has been to the Ubuntu experience.

Have You Had Your Daily Dose of Editors?

The Boss decided to purchase a license to RubyMine for me to use, and the rest of the office to evaluate. I wanted to share my experiences, since there doesn’t seem to be a lot of real-world experience on developing Ruby on Rails in a corporate setting using RubyMine. Also, some of my new (and past) coworkers might be curiously looking over their screens with TextMate to see what else is out there.

First, a bit about the Ruby on Rails culture. It is very Mac OS X oriented, and the preferred editor of choice is TextMate. I really try and stay away from tools that only run on one operating system, and TextMate falls into that category. Ruby is a very terse, dynamic and simple language. Rails developers will tell you that you don’t need an IDE to do Rails work. While this is true, I find not using anything more than a text editor is like using a screwdriver instead of a power tool. If you are a good developer and you understand Ruby, a good editor will only make you more productive. RubyMine isn’t meant to be relied on like IDEs are for other strongly typed languages such as C# and Java. It makes a best effort to provide its features without getting in your way when it fails.

RubyMine offers full support on Windows, Mac and Linux. RubyMine also strives very hard to make the Windows version as strong as the *nix versions. It does this by including an IRB console, and commands to run many rake tasks, and Rails generators. While these tools are a very good solution on Windows, people with the ability to run a native terminal will probably find the offerings lacking in comparison. This review will skip these Windows-audience features, since I don’t feel it represents the majority.


Autocomplete

RubyMine does a very good job of autocompleting code. It will look inside class definitions, and can find methods, attributes, and associations. If you are using gems that extend classes, such as ActiveRecord, RubyMine will do a fairly robust job of reading these methods from the gem files once they are attached to the project. “Attached” just means that RubyMine is inspecting these gems. It was not able to locate gems provided via Bundler, but this is supposed to be coming. Also, the auto-complete can be slow at times and freeze the editor from accepting further input.

Inline Documentation

When you place the caret over a method, or class, RubyMine will fetch the documentation for that method and show it in the editor. It doesn’t always locate the documentation, however, in cases where the method is defined in a gem that is unattached.

Command+Click Following

You can click class names, and method names to jump straight to the definition. Also useful is clicking on associations, and named scopes. You can also jump to route definitions, and partials.

Cucumber Integration

Auto-complete is provided for your Cucumber tests, and also nice is the Command + mouse-over action of displaying the definition of a scenario step. These can be Command + clicked to follow to where the step is defined. Also, if your step does not match a definition, you will be notified in the editor.

Safe Refactoring

Refactoring in this sense is renaming a variable, or a filename. The nice part about RubyMine is the ability to optionally search your project for usages of the current variable, or filename and update those references, or just notify you about them.


Spell Checking

Not a big selling point, however many editors don’t offer strong spelling support. It checks your comments, and your variable names, but stays out of the way of your code.

Find By Filename / Class

You can pull up a dialog that will allow you to type a filename and it will return all matches regardless of directory level. Filenames can be regular expressions, and can include paths, and even line numbers. RubyMine will find them, and in the case of the line number, it will open the file and jump to that location. Searching by a class name is very similar.

Copy Buffer

Only having a clipboard with one item in it can be frustrating at times. Using the copy buffer feature, I can copy multiple sections of a file, then paste them individually later.

Code Formatting

RubyMine allows for manual formatting, or formatting on paste. You can also auto-format a complete document with a keystroke, based on your auto-format settings. It even works on HTML/ERB, HAML, Javascript, and CSS.

RubyMine isn’t a perfect tool however, and there are things about it that are less than ideal. Specifically, the memory footprint of RubyMine can be quite large. This seems to be a sin it shares with many of its Java IDE brothers. After watching it creep (unnecessarily) up to 400+ MB, I decided to do something about it. The solution turned out to be very simple. On OS X, look for the file “Info.plist” in the /Applications/RubyMine 2.0.2.app/Contents/ directory. On Linux, edit the rubymine/bin/rubymine.vmoptions file. Change the value for Xmx to 128m. This is the memory cap under which RubyMine will run. Runs like a charm now, and for days too.
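For reference, the relevant lines in rubymine.vmoptions end up looking something like the sketch below (the Xms value is my assumption; only the Xmx cap comes from this post, and other flags vary by RubyMine version):

```
-Xms64m
-Xmx128m
```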

Other annoyances include the default editor settings. Changing to soft tabs was more confusing than it should have been. Allowing “virtual space” after the end of a line leads to a lot of accidental whitespace. The right gutter line isn’t helpful for Rails development. The font face was terrible. I had to customize the default theme to make it use the Apple default font. And finally, I don’t like the “Project” oriented state. I would rather open from within a directory in the terminal and work from there. I also don’t care for it generating a work folder within my Rails project – it’s just one more thing I have to pay attention to when using version control.

All in all, this is certainly one of the best editors I have seen yet for Ruby and Rails work, while I am sure I haven’t even scratched the surface of what this editor is capable of doing. It beats Netbeans 6.x, and RadRails. It will be interesting to see how Aptana Studio 3 turns out, as the Aptana folks seem to really be putting some love into it. These editors felt like Ruby support was tacked onto what was intended to be a Java editor. At the other end of the editor spectrum are hundreds of weak text editors. I wanted something in-between. RubyMine has a clear focus, and all of its options center around Ruby and Rails work. So, if you are using TextMate as your first, and only, Ruby on Rails editor, give yourself some perspective and try out RubyMine’s free trial.

Deploying Ultrasphinx to Production

Recently I rolled out a Rails app that used the Sphinx full-text indexing service in conjunction with the Ruby Ultrasphinx gem. I am very impressed with some aspects of this project, and I wanted to share my experiences for anyone looking for a better search experience with SQL databases.

Why Sphinx? Sphinx is an open-source, stable full-text indexing service. It also has good support in the Rails landscape. Why full-text indexing? In a nutshell, people can spot a crappy search implementation really quickly. Google is at the top of its game because it searches the way people think. Just try implementing the following with just SQL:

  • Conditional logic (&, |, -)
  • Rank based search results
  • Case (ben vs Ben), punctuation (bens laptop vs ben’s laptop), plurality (virus vs viruses) insensitive
  • Phonetic searching (candy can match candi)
  • Searching across multiple tables with results being in either, but not both
  • 100,000 rank based results in .02 milliseconds
  • Cached data, with delta scans for minimal performance impact

Yes, you could do all these things – but why? The folks at Sphinx do nothing but this, and have packaged it up for you to use at your whim. There are other niceties that you can include like sorting, pagination, restricting to certain columns, and best of all spell checking via the raspell gem.

To begin, you will need a MySQL, or PostgreSQL backend – something I just happened to luck out on with this particular application.  You should install Sphinx and poke around for a few minutes, to understand what Ultrasphinx provides you.

A note for Windows users – add the Sphinx bin/ folder to your path so you can just call its commands Unix-style. Additionally, I had issues running my Rails project in a directory containing spaces. YMMV.

Ultrasphinx provides a Rails-centric way of using Sphinx. Sphinx provides the search service, and Ultrasphinx builds the configuration file, and manages the Sphinx process via rake tasks. Inside your models that will be Sphinx-ified, you will need to indicate which fields are indexable, and sortable. A useful feature of Sphinx/Ultrasphinx is the ability to create associated SQL to join multiple tables on the full-text search. See http://blog.evanweaver.com/files/doc/fauna/ultrasphinx/files/README.html for more information.
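For flavor, here is a hedged sketch of what marking a model up for indexing looks like. Ultrasphinx’s real macro is `is_indexed`; the `Article` model, its field names, and the `ActiveRecordStub`/`Ultrasphinx.indexed_models` scaffolding below are stand-ins invented so the sketch runs outside Rails – consult the README linked above for the actual options:

```ruby
# Minimal stand-in registry so this sketch runs without Rails/Ultrasphinx.
module Ultrasphinx
  def self.indexed_models
    @indexed_models ||= {}
  end
end

# Stand-in for ActiveRecord::Base with Ultrasphinx's `is_indexed` macro.
class ActiveRecordStub
  def self.is_indexed(options = {})
    Ultrasphinx.indexed_models[name] = options
  end
end

class Article < ActiveRecordStub
  # Declare which fields are full-text indexable, as you would in a real model.
  is_indexed :fields => ['title', 'body', 'created_at']
end
```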

Once Ultrasphinx is configured, and has created a configuration file, you can start the indexing process, then start your Sphinx service. Notes on doing this in production via Capistrano follow:

desc 'Start Ultrasphinx searchd process'
task :ultrasphinx_start do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production" do |ch, stream, data|
    if data[/stopped/]
      run "cd #{release_path}; rake ultrasphinx:daemon:start RAILS_ENV=production"
    end
  end
end

desc 'Stop Ultrasphinx searchd process'
task :ultrasphinx_stop do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production" do |ch, stream, data|
    if data[/running/]
      run "cd #{release_path}; rake ultrasphinx:daemon:stop RAILS_ENV=production"
    end
  end
end

desc 'Status of Ultrasphinx searchd process'
task :ultrasphinx_status do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production"
end

desc "Reindex Ultrasphinx via indexer process"
task :ultrasphinx_reindex do
  run "cd #{release_path}; rake ultrasphinx:configure RAILS_ENV=production"
  run "cd #{release_path}; rake ultrasphinx:index RAILS_ENV=production"
end

before :ultrasphinx_reindex, :ultrasphinx_stop
after :ultrasphinx_reindex, :ultrasphinx_start
after 'deploy:update_code', :ultrasphinx_reindex, :roles => [:app, :web]

This Capistrano deploy.rb fragment has four tasks – start, stop, status, and reindex. The before and after calls ensure that the service is stopped before reindexing occurs. Note that this is a full reindex, and not a delta scan. My application didn’t have a reliable datetime column to determine new entries with, so I opted to do the full index every three hours instead. The database is a small one, with less than 100MB of data, so I can get away with it here.

Additionally, you will want to set up a recurring task in Cron in your production server environment:

# Sphinx updates http://blog.evanweaver.com/files/doc/fauna/ultrasphinx/files/DEPLOYMENT_NOTES.html
# This merges delta indexes into main index
0 */3 * * * bash -c 'cd /path/to/app/current/; RAILS_ENV=production rake ultrasphinx:index >> log/ultrasphinx-index.log 2>&1'
# Make sure the service is running
*/3 * * * * bash -c 'cd /path/to/app/current/; RAILS_ENV=production rake ultrasphinx:daemon:start >> log/ultrasphinx-daemon.log 2>&1'

Dedicated Hosting

Just wanted to make a quick shout out to James Sumners for setting up our shiny, new hosting solution. I needed to move for several reasons, one of which is that I got a job in Cambridge, Massachusetts and will no longer be able to use Clayton State’s network. The other reason is it’s just damn time. I need greater uptime than our network has offered lately.

If you are seeing this post, then welcome to the new and improved Zone of Mr. Frosti!

Bringing the Dead Back to Life

No, not zombies. A few days ago, a friend of mine brought me a computer and told me that it was running REALLY slow. I did some investigation, and quickly discovered after running some diagnostics that the hard drive was on the fritz. They had been wanting a new hard drive for a while anyway: one that was bigger, faster, and one that actually worked. I went to Newegg.com, bought a replacement SATA drive, and after arrival, began the task of migrating to this new drive.

The installation of Windows that was on the old drive was fine (aside from the drive itself causing errors). I didn’t want to reinstall Windows, as it is such a painful and time-consuming process. Hunt down drivers, copy over the documents, reconfigure settings, install the old software again. No thanks. Instead, I decided to explore my options with the open source utility dd. Why not Norton Ghost, or one of the dozens of other programs? They all cost money, and I suspect some just use a ported version of dd to do their dirty work behind the scenes. I am cutting out the middle-man (and the cost) and going straight for dd.

“dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data” – Wikipedia

dd allows a user to copy data from a source to a destination bit for bit – an exact mirror image. Even better, if the device has corruptions, you can use dd_rescue. The concept is the same, however this utility assumes that the source is likely damaged and does some special handling for bad sectors.

Making the backup

To make your disk copy, boot the source computer to a Linux live CD such as Ubuntu. A live CD may or may not mount your hard drive inside of the operating system. If it does, you want to unmount this device. In Ubuntu, the mount directory is in “/media”. If you browse for files here, and see a Windows partition, unmount this first. This ensures that there are no write operations occurring during the copy of the disk.

Next, fire up dd_rescue and use the following syntax:

sudo dd_rescue /dev/sda - | ssh user@remote_server "dd of=/path/to/save/backup"
  • sudo runs this command with elevated permissions
  • dd_rescue needs the two parameters of <source> and <destination>
  • Our source is /dev/sda. Note that we are copying the entire drive (sda) instead of a partition (such as sda1, sda2, etc). Your device may differ from “sda”.
  • Our destination is “-”, which is shorthand for standard out. On its own, this would direct the binary content to the screen
  • The “|” symbol (pipe) is redirecting the previous output to a new process
  • ssh is a secure way to copy files from one machine to another. In this case, we are specifying a username and a remote server (ip)
  • Finally, “dd” is executed on the remote server with the parameter “of” set to the path where the image will be saved to

Note that you will need to have sudo access, an ssh account on another machine, and permissions to write to the path you will be saving to. Also, make sure that your replacement drive is the same size, or larger than, your source drive. Makes sense, right?

This process will take a while, depending on the damage to the drive, and the size, network speed, etc. I maxed out my router at around 11 MBps (100 Mbps). dd_rescue will provide output, and let you know when it is complete. An 80 GB hard drive took about 2 hours to complete.
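As a back-of-the-envelope check, the observed numbers line up: an 80 GB image at roughly 11 MB/s (the ~100 Mbps router ceiling) works out to about two hours:

```ruby
# Sanity check the transfer time: 80 GB at ~11 MB/s.
gigabytes  = 80
mb_per_sec = 11.0
seconds = (gigabytes * 1000) / mb_per_sec  # ~7,273 seconds
hours   = seconds / 3600                   # ~2 hours, matching what was observed
```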

Restoring the Backup

Once this completes, you are ready to shut down the source machine, and swap out the bad hard drive for the new hard drive. Reboot to your live CD, and run the following command (taking some values from earlier):

ssh user@remote_server "dd if=/path/to/save/backup"| sudo dd of=/dev/sda
  • We are using ssh again, this time to pull the image back across to the client
  • On the remote server, we are executing dd if. Note the if. We are using this backup as the input file, instead of the output file (of)
  • We are piping this output into our local machine
  • Using sudo, we execute dd with the of parameter. This will do a bit by bit copy of the image to the destination media.

Note that again you want to ensure that the target is not mounted in any way. Also, we are doing the entire hard drive, so I am just using “/dev/sda”.

Guess what?! This process will take a while, just like the copy operation before.

Expand the New Partition

Note that if you have a replacement drive that is larger, you will need to expand the Windows partition. This makes sense, since an 80GB drive has fewer bits than a 250GB drive. When the image (that is 80GB) finishes copying, the rest of the hard drive is completely untouched. To address this behavior, and be able to use the rest of the replacement hard drive, we will need to expand this partition.

You may need to run a “chkdsk /R” a few times in Windows if your hard drive has any bad sectors. As a note, new drives usually have a few bad sectors that need to be identified and disabled. After you run chkdsk, fire up your Live CD again, and launch gparted. This is a graphical tool for managing hard drive partitions. You should see the unallocated space at the end of your drive. Click the preceding partition (NTFS, etc) and choose to resize this partition, taking up all of the unallocated space. Apply your changes, and in a while, you have a full sized hard drive partition for Windows.

If you receive an error about bad sectors preventing ntfsresize from running from gparted, you may not be able to continue. Despite running chkdsk and restarting multiple times as instructed, I was not able to continue in gparted due to this message. There is a switch, --bad-sectors, that can be passed in the ntfsresize arguments, however the gparted GUI does not let you do this. I first tried setting an alias, but it seems the program references the full path of the command, so the alias did not work. Finally, I arrived at this solution:

# sudo su
# mv /usr/sbin/ntfsresize /usr/sbin/ntfsresize.real
# touch /usr/sbin/ntfsresize
# chmod a+x /usr/sbin/ntfsresize

Now, edit the ntfsresize file you just created and add something like the following code:

#!/bin/sh
ntfsresize.real "$@" --bad-sectors
exit 0
  • We are becoming root with sudo su
  • We move the existing ntfsresize file to another filename (ntfsresize.real)
  • We then create a new file named ntfsresize with execute permissions, and edit this file
  • Inside this file, we call the original ntfsresize, with our --bad-sectors switch, and $@. This is a way to pass any arguments sent to our script on to the ntfsresize.real script.

After this, run gparted, and you should be able to continue. A resize from 80GB to 250GB took about an hour to complete.


VirtualBox NAT Tunneling

At work, we have a database that you cannot connect to without using 802.1x authentication, or VPN. That sucks for my Virtual Machines, where I can’t use Bridged networking because of these requirements. Instead, I set up my host machine to use 802.1x, and pass this into the Virtual Machine using NAT networking.

You can issue commands to do tunneling from host to guest, as outlined by various websites. I am just old fashioned I suppose, and don’t like to blindly fire commands. I want to see what is being changed. I dug around and discovered the following file (Windows host):

C:\Documents and Settings\<username>\.VirtualBox\Machines\<VM>\<VM>.xml

Substitute in your username, and Virtual Machine name. If the VM name has a space, or special character, you can quote the name in double-quotes.

This is where commands from “VBoxManage” are saved. In the following example, I am substituting in VM for the name of the VM, and Name is an arbitrary name that you label for your mapping. You can do this as many times as needed, provided that the name is unique. For example, if you wanted to share SSH from your host, you may choose “ssh” as the name of the configuration. Additionally, the ports don’t have to match up in the guest and host. You can see the result of the following VBoxManage commands:

cd "c:\Program Files\Sun\xVM VirtualBox" (Windows only)
VBoxManage setextradata <VM> "VBoxInternal/Devices/e1000/0/LUN#0/Config/<Name>/Protocol" TCP
VBoxManage setextradata <VM> "VBoxInternal/Devices/e1000/0/LUN#0/Config/<Name>/GuestPort" 22
VBoxManage setextradata <VM> "VBoxInternal/Devices/e1000/0/LUN#0/Config/<Name>/HostPort" 2222
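After running these (using “ssh” as the Name, per the earlier example), the machine XML gains entries roughly like the following sketch; the exact layout varies by VirtualBox version:

```
<ExtraData>
  <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/Protocol" value="TCP"/>
  <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/GuestPort" value="22"/>
  <ExtraDataItem name="VBoxInternal/Devices/e1000/0/LUN#0/Config/ssh/HostPort" value="2222"/>
</ExtraData>
```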

Now you can go in and change these commands and see the results in the file. Note that changing the file directly doesn’t affect the machine until a restart of the guest VM. Note also, that the network adapter referenced here as “e1000” refers to the Intel network driver. Check and make sure these match.

If you make a mistake creating the command, you can delete it by issuing VBoxManage setextradata <VM> <Setting> without a value for the setting. The blank value removes the XML line from the configuration file.

The Great Compromise

It’s been a while since my last post. I got the crazy idea in my head that using Putty / WinSCP has become frustratingly inadequate for my day to day work. Putty has weird copy / paste, ugly formatting, no tabs, dumb keypair management, etc. The list goes on. And using WinSCP to bridge the great divide between Windows and Linux is really getting on my nerves. Not to mention IRB on Windows totally blows. Simply put, a great deal of my day to day tasks would be easier if I were working on Linux. Considering that 90% of my time is spent working with Debian and Red Hat, it seems simple.

But simple is far from the truth. The truth is that Microsoft has all but snuffed out Linux development. Windows is ubiquitous, and holds 95% plus of the OS market. Most of the talent writes for Windows. People test software on Windows. Support resolves issues that affect Windows users first, while other OS bugs become understandable second class citizens. So the question is:

How can I satiate my need for Linux, and remain “professional” in the workplace? If my boss, or my co-workers come in and say there is an issue with a website on IE6 that I should look into, it doesn’t cut the mustard to say “Oh, I run Linux now, so I don’t have IE6.” There are several answers – each with its own pros and cons. I can dual boot, I can virtualize, I can use two different computers, I *could* just stick with Putty.

I am mulling the options around in my head, and in light of some new technologies, the scales seem to be constantly tipping. Now I am going to try Innotek -> Sun -> Oracle’s VirtualBox, with the “seamless” mode (think VMWare Fusion’s “Unity”), and guest 3D acceleration.