Apple, Computers, Hardware, Linux, Software, Windows

Using Synergy Over VPN

I've been watching a lot of Linus Tech Tips, and one of their sponsors sells a product called Synergy. More information on their product page: https://symless.com/synergy . To summarize, this is a software KVM (no video, so I guess it is "KM"?) solution. The use case was perfect for me. On my desk I have an Apple work laptop, a Windows desktop, and a Linux personal laptop. Anyone who has done extensive work on a laptop keyboard and touchpad knows that it isn't optimal. I didn't want multiple keyboards and mice cluttering up my desk. I dropped some money for Synergy, and it just works!


That is, until I had to connect to our company VPN the next week. They use a full-tunneling solution, so when I connect, I lose everything: I can't print, I can't access my NAS, and most importantly I can't access my keyboard and mouse. (The video is fine because it is hard-wired to an external monitor.) What to do?

SSH to the rescue! What is SSH? Secure Shell is a protocol that allows one computer to securely interface with another. Here, though, we will just be using it for port forwarding, not for an interactive session. The goal is to take the OS X machine (Synergy client) and SSH into the Windows machine (Synergy server). Inside this SSH connection we can forward ports; it is a tunnel running within the SSH session. This exposes local port 24800 on OS X that actually points to port 24800 on the remote server, which is the port Synergy uses for its connections.

You will need a few tools, and a little patience. Having just gone through this, I'm sharing for posterity, or maybe for anyone who has thrown in the towel over how crippled a VPN makes accessing home devices.

I have the following Synergy setup:

  • Windows 10 Synergy server (keyboard and mouse are physically connected to the desktop)
  • OS X Synergy Client
  • Linux Synergy Client
  • Router with a local area network all these devices share
  • Admin access to the router for port forwarding
  • Autossh package for OS X (available via brew)

First step, get Windows 10 up to speed with SSH. How this isn't built in as a service in the year 2017, I have no idea. Grab the OpenSSH server package for Windows from https://www.mls-software.com/opensshd.html . After downloading, extract and run the setup file. This creates a new Windows service for OpenSSH that runs on port 22, and it prompts you to generate an SSH host key for the server.
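
As a quick sanity check (just a habit of mine, not something the installer requires), you can confirm the new service is actually running from an elevated command prompt:

net start | findstr /i ssh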

Once this server is running, you will need to add your user to the list of SSH users. Open up PowerShell as an administrator and change into the C:\Program Files\OpenSSH\bin directory. Run the following commands:

mkgroup -l >> ..\etc\group
mkpasswd -l >> ..\etc\passwd

Try and connect to your SSH server from the OS X client:

ssh <user>@<server IP> # e.g. ssh Ben@192.168.1.95

You should be prompted for your Windows password. Once you can successfully log in to the server, we can set up public key authentication. This removes the need to type in your password because you identify yourself with an SSH public key. From your OS X machine, get your public key:

cat ~/.ssh/id_rsa.pub
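
If that file doesn't exist, you don't have a key pair yet. Generating one is a one-liner; the defaults are fine, and the 4096-bit size below is just my preference rather than a requirement:

ssh-keygen -t rsa -b 4096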

Put the contents of this file on your SSH server in the file C:\Program Files\OpenSSH\home\<user>\.ssh\authorized_keys . The home\<user> directory is actually a symlink to C:\Users\<user> . If the .ssh directory doesn't exist, you will need to create it first. Now we need to configure the server to allow public key authentication. Edit the C:\Program Files\OpenSSH\etc\sshd_config file and change the following lines:

StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Restart the OpenSSH server for the changes to take effect:

net stop opensshd
net start opensshd

You should now be able to SSH into the server same as before but without being prompted for a password.
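
If you want to be certain the key is what's being used (and not a cached password), you can force key-only authentication for a single attempt using standard OpenSSH client options:

ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no <user>@<server IP>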

Now we are ready to create an SSH tunnel. Before we incorporate AutoSSH (which handles retries and monitoring), we will make a naive attempt with plain SSH. In the following command:

  • -f backgrounds the process
  • -L does port tunneling in the format of <local port>:<remote host>:<remote port>
  • -N do not run a command – just tunnel the port

ssh -f <user>@<remote public IP> -L 24800:<remote public IP>:24800 -N

If this works, you should see a [LISTEN] entry for port 24800 when you list open files:

lsof -n -i | grep 24800

You may need to set your server as the DMZ host on your network. Or, to be safer, you can simply set up port forwarding. We will need ports 22 and 24800 to resolve to the Windows server. The instructions for how to do this vary widely by router vendor. Typically the setting is under a WAN section, and it prompts for a port, a destination IP, a destination port, and a protocol. You want ports 22 and 24800 to route to your server IP for TCP and UDP.
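
A quick way to sanity-check the forwards from the OS X machine is to probe the public IP with netcat (bundled with OS X); a "succeeded" message means the router is passing the port through:

nc -vz <remote public IP> 22
nc -vz <remote public IP> 24800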

Configure your Synergy client to use localhost instead of the remote IP. You should now be able to operate your client from the server’s peripherals via Synergy.
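
For what it's worth, if you ever drive the client from a terminal instead of the GUI, pointing it at the tunnel looks roughly like this (assuming the synergyc binary is on your PATH; the :24800 is the default port, so it is optional):

synergyc localhost:24800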

Everything works great until the VPN connection is made, because the SSH connection is severed. In order to recover automatically, I have added autossh to persist this tunnel. In the command below, -M 20000 gives autossh a monitoring port it uses to test that the connection is still alive, and AUTOSSH_POLL=10 tells it to check every ten seconds. On the OS X client, instead of running ssh, do the following:

AUTOSSH_POLL=10 autossh -M 20000 -f -N <user>@<remote public IP> -L 24800:<remote public IP>:24800

Now when a VPN connection is made, or a disconnection happens, autossh will detect that the tunnel is no longer alive and retry. Because Synergy's software also retries, after a few seconds your connectivity should begin working again.

Thanks to Synergy for making a solid product, and for having first class Linux support.

Computers, Events, Family, Hardware, Personal, Software, Thoughts, Windows

The American Legend

Lately, John Templeman and I have been knocking out some serious multi-player action on the PlayStation 3. Uncharted 2 and Resident Evil 5 have been very fulfilling games from a co-operative viewpoint. The PC finally drew me back in though. StarCraft II, Left 4 Dead, Age of Empires II and some other titles have led me to do some housecleaning on my computer setup. The first thing that I decided had to go was my tiny (by comparison) 19″ Acer VGA monitor. It looks like a joke next to my wife's beautiful 24″ ASUS HDMI monitor. Now I have a brand new ASUS MS236H monitor (shown here). Next to go was my broken, six-year-old installation of Windows XP. Seriously, I have kept the same install running through countless infections, blue screens, hardware upgrades, service packs – the works! Enough was enough, and after my generational jump over Windows Vista, I landed on Windows 7.

I had 7 running on the wife's machine for a while, and despite it being reloaded a few times (what can you expect, it's still Windows), I have been fairly impressed with it. Kristin is a good litmus test of the stability of a piece of software. I installed it on my machine and ran into a bunch of problems I didn't anticipate. Apparently my Sound Blaster Live! 5.1 sound card had an EOL around 1998, so no official drivers. I am surprised that Windows didn't "just work" with this given its age and its established user base (perhaps I am the last holdout?). After installing some random driver from some guy named Peter in a forum thread about the problem (who advised me not to use his driver with more than 2GB of RAM if I valued uptime), I got the sound working.

Next came the printer. Holy hell. The same driver package (the EXACT same package) I installed on my wife's computer that just worked kept failing to install the printer driver for our Dell 1600n. After several reboots, and the day turning into 'morrow, I finally settled on installing the HP LaserJet 4100 drivers. They seem to be mostly interchangeable, so I guess I have solved that problem too.

Another biggie was the unresponsiveness of web browsers on the OS (IE being the exception, where it is always slow). I was experiencing page load times so long, they were timing out while waiting for a connection. I thought it was Comcast, but after some Googling, I found that the "Automatic Proxy Discovery" setting was turning my inbound pipe into dial-up.

The weather here has really started to cool down, with highs in the 60s. It should be perfect weather to walk to the train station without breaking a sweat while getting some nice exercise, but I just haven't been in the mood. It's probably due to the soul-crushing amount of work that I have to do to keep on top of my classes. If textbooks had any competition for readers' choice, the authors might have to actually invest time into things that make a book worth reading, such as clarity and interestingness. The days of sailing on the Charles River with Hoydis are waning, and the days where I have to strap my feet into a snowboard are soon approaching.

Our Thanksgiving vacation plans are questionable at best right now, so who knows when I will be back in Atlanta. We are making the best of our situation here regardless. Kristin and I have decided to throw a Halloween party for anyone who is interested in joining us. It will be a costume, pot-luck, beer-fest, and should be a great way to bring in the holiday. Zoo keepers, it turns out, have some of the most interesting stories, and they will be in attendance, so I have been told. We haven't gotten to show our house, and all its critters, off to many folks yet, so here is your chance to see how people outside of the city live. All are invited – and bring a friend!

The wife is still job searching after the crater that Capron Park management left on her career path. She has applied here and there and we are eagerly awaiting any leads. In the meantime, we have been enjoying the atypical time we get to spend together on the weekends. Last week, we went to Conner’s Farm and did our first Maize Maze. It was a-mazing. So corny… Ok, I’ll stop now. Thanks to Michael Hoydis for the invite – we had fun. The picture here is one that Michael and Hannah snapped while we were walking in the Apple Orchards. That green sticker on our shirts means we successfully navigated Clint Eastwood’s face. Seriously.

Tomorrow is Friday, and I have a three-day weekend. Let's hope I can convince some of my co-workers to break away from programming for a few minutes and go grab some drinks…if I make it that far.

Hope everyone, and all of their new children are doing well! Hopefully we will see everyone soon back on the flip side.

Update: A picture of the monitor in its new home:



Apple, Computers, Hardware, Linux, Open-source, Personal, Ruby, Software, Windows

Living in an Apple World

Welcome readers to what is a first here on my blog – a review about Apple’s OS X. As some of you may know, part of my new job is working on a Mac for 8 hours a day, 5 days a week. Someone asked me about my experiences, and I feel up to sharing my findings. I want to be fair in my assessments, so if it sounds like I am starting to get a little slanted, keep me in check with a comment!

First things first – the initial impression. I have a 27″ iMac and I was initially impressed by the appearance of the machine. The iMac screen and case are one piece, so I have plenty of room to kick around beneath my desk with minimal cord entanglement (not that it matters because I sit cross-legged all day). The compact style keyboard has an aluminum casing, which matches the iMac. The mouse is the Mighty Mouse. Both are wired, which I appreciate – especially on the mouse. I hated the compact keyboard since it feels shrunken, and the addition of the "Fn" key in the bottom row meant every time I tried to press "Control" I missed. After swapping this out for a full-sized keyboard I was much happier, and even unlearned some bad habits. The Mighty Mouse absolutely sucks. The tiny wheel stops responding all the time from the slightest speck of dirt, and you have to turn it over and rub it back and forth on your jeans, or the mouse pad. Its one saving feature is the ability to scroll vertically and horizontally, which is occasionally helpful. I am a right click fan, and though invisible, the region on the mouse that registers as a right click versus a left is about 10 times smaller. It feels like the edge of the mouse.

The keyboard on a Mac is different in important ways from its PC counterparts. The “Windows” key is replaced with the Command key, which is utilized far more than the Windows key ever was. In fact, most of the operations of the machine are done using Command (copy, paste, new tab, close window, etc) effectively making it closer to the “Control” key in Windows. However, the Control key remains, which actually introduces a whole new key combination to more effectively use shortcuts. The Command key is located next to the space bar, which is much more convenient than the extreme left placement of the Control key. I do copy, paste, etc operations using my thumb, and not my pinky finger – much less strain.

The computer screen can be tilted, which is nice since the whole world seems to be moving towards the annoying high gloss screens. I can tilt it down, and out of the fluorescent overhead lights. I really feel that gloss is a showroom gimmick just like turning the brightness up to max on the TVs in the store. If I wanted to look at myself, I would sit in front of a mirror. Fortunately, I have a second non-gloss monitor, and I do most of my coding on this screen. Also, it would be nice if the monitor had a height adjustment, as my second monitor isn't quite the height of the iMac screen.

Enough about appearance – let's talk hardware. This is a dual-core Intel-based processor, with 2 GB of memory (later upgraded to 4GB). The video card is decent I suppose (however the interface can get quite "laggy" at times). I don't have any idea what the machine costs, but this is definitely unimpressive hardware. 2GB of RAM is the minimum I would work with, and it being slow laptop RAM doesn't help at all. At least there isn't a laptop hard drive in it too.

As for the Operating System, it seems pretty stripped down. This isn’t necessarily a bad thing – I can quickly find what I am looking for, without going on a damn field trip through obscure dialog windows. The flip-side to this is it doesn’t feel very “customizable”. You use the stock features, or you don’t use a Mac. Perhaps there are a bunch of third party utilities that I don’t know about? Sometimes I am disappointed by the lack of customization options (there are just a handful of settings for the dock). To be honest, I am not sure what I would customize, but I like to poke around, and I often leave the System Preferences disappointed having not found “setting xyz“.

I really enjoy the file system indexing, and they have the best implementation of full-text search I have seen. It doesn't bog down the computer, and the results are instantly updated. Magic. It effectively is the starting point for all my open actions. I don't know why it isn't available for the first 10 minutes after a boot, but I don't shut down that much, so it's OK.

I was surprised by the lack of a default system-wide notification system – something that Growl has aimed to fill. I was also disappointed by the lack of package management on the Mac – again, third party solutions exist. The system updates are just as annoying as in Windows, which was a disappointment. Once, the "restart" prompt stole my typing focus and proceeded to shut down the system. A few times the machine has "beach balled" (the Mac "hourglass" icon), and hard locked. Most of the time it's fairly responsive and stable, which I can appreciate.

Other points of interest are the window management features. I use Exposé almost as regularly as I do the task switcher (Command + Tab), though admittedly sometimes I get lost in the special effects and forget what I was doing. There are a bunch of other window groupings, but I don't really find them that useful. One particularly frustrating observation is that once you minimize a window, you can't Command + Tab back to it. Isn't that the point of the task switcher? It even shows up in the task switcher, but when it is selected, absolutely nothing happens.

As for the software available on the Mac, it is more comprehensive than Linux, and less comprehensive than Windows. Some of my co-workers commented that in OS X, there is usually one utility to do something, whether you like it or not. I use Google Chrome, JetBrains' RubyMine, Ruby, Terminal, Lotus Notes, Adium, and Propane almost exclusively. Because of this, I can't really assess the state of the Mac software ecosystem, but I will say that all these programs run damn well on the Mac. The only software crash I have is Flash. Flash on Linux and Windows is stable, however on the Mac probably one in ten uses causes the browser tab to lock up. I am not sure whether this is a Chrome issue or not, but something is seriously wrong with the state of Flash on my Mac. Now I understand why so many Mac users hate Flash – as a Windows user, I never experienced the constant crashing.

In summary, due to the nature of my work, I use the Mac at work in essentially the same manner I would use Linux. The terminal is where I spend my time, and I am more or less indifferent about the operating system around it, as long as I can install the system libraries I need for Ruby extensions, and it stays responsive. My next computer purchase will be a netbook and I will install Ubuntu on it, as I can't justify spending the designer prices of Apple products to use a terminal and a web browser. Toe to toe with Windows, and many Linux distributions, OS X excels in many areas. It's a fantastic operating system, but I am not sure that it is worth its cost. If I could throw it on my PC at home it would be worth $100. Buying a special machine just to run it is just silly.

Apple, Computers, Linux, Open-source, Personal, Ruby, Software, Thoughts, Web, Windows

Have You Had Your Daily Dose of Editors?

The Boss decided to purchase a license to RubyMine for me to use, and the rest of the office to evaluate. I wanted to share my experiences, since there doesn’t seem to be a lot of real-world experience on developing Ruby on Rails in a corporate setting using RubyMine. Also, some of my new (and past) coworkers might be curiously looking over their screens with TextMate to see what else is out there.

First, a bit about the Ruby on Rails culture. It is very Mac OS X oriented, and the preferred editor of choice is TextMate. I really try to stay away from tools that only run on one operating system, and TextMate falls into that category. Ruby is a very terse, dynamic and simple language. Rails developers will tell you that you don't need an IDE to do Rails work. While this is true, I find not using anything more than a text editor is like using a screwdriver instead of a power tool. If you are a good developer and you understand Ruby, a good editor will only make you more productive. RubyMine isn't meant to be relied on like IDEs are for statically typed languages such as C# and Java. It makes a best effort to provide its features without getting in your way when it fails.

RubyMine offers full support on Windows, Mac and Linux. RubyMine also strives very hard to make the Windows version as strong as the *nix versions. It does this by including an IRB console and commands to run many rake tasks and Rails generators. While these tools are a very good solution on Windows, people with the ability to run a native terminal will probably find the offerings lacking in comparison. This review will skip these Windows-audience features, since I don't feel they represent the majority.

Auto-Completion

RubyMine does a very good job of autocompleting your code. It will look inside class definitions, and can find methods, attributes, and associations. If you are using gems that extend classes, such as ActiveRecord, RubyMine will do a fairly robust job of reading these methods from the gem files once they are attached to the project. "Attached" just means that RubyMine is inspecting these gems. It was not able to locate gems provided via Bundler, but this is supposed to be coming. Also, the auto-complete can be slow at times and freeze the editor, blocking further input.

Inline Documentation

When you place the caret over a method or class, RubyMine will fetch the documentation for that method and show it in the editor. It doesn't always locate the documentation, however, in cases where the method is defined in a gem that is unattached.

Command+Click Following

You can click class names, and method names to jump straight to the definition. Also useful is clicking on associations, and named scopes. You can also jump to route definitions, and partials.

Cucumber Integration

There is auto-complete provided for your Cucumber tests; also nice is the Command + mouse-over action that displays the definition of a scenario step. These can be Command + clicked to follow to where the step is defined. Also, if your step does not match a definition, you will be notified in the editor.

Safe Refactoring

Refactoring in this sense is renaming a variable, or a filename. The nice part about RubyMine is the ability to optionally search your project for usages of the current variable, or filename and update those references, or just notify you about them.

Spelling

Not a big selling point, however many editors don’t offer strong spelling support. It checks your comments, and your variable names, but stays out of the way of your code.

Find By Filename / Class

You can pull up a dialog that will allow you to type a filename and it will return all matches regardless of directory level. Filenames can be regular expressions, and can include paths, and even line numbers. RubyMine will find them, and in the case of the line number, it will open the file and jump to that location. Searching by a class name is very similar.

Copy Buffer

Only having a clipboard with one item in it can be frustrating at times. Using the copy buffer feature, I can copy multiple sections of a file, then paste them individually later.

Code Formatting

RubyMine allows for manual formatting, or formatting on paste. You can also auto-format a complete document with a keystroke, based on your auto-format settings. It even works on HTML/ERB, HAML, Javascript, and CSS.

RubyMine isn't a perfect tool however, and there are things about it that are less than ideal. Specifically, the memory footprint of RubyMine can be quite large. This seems to be a sin it shares with many of its Java IDE brothers. After watching it creep (unnecessarily) up to 400+ MB, I decided to do something about it. The solution turned out to be very simple. On OS X, look for the file "Info.plist" in the /Applications/RubyMine 2.0.2.app/Contents/ directory. On Linux, edit the rubymine/bin/rubymine.vmoptions file. Change the value for Xmx to 128m. This is the cap on the memory in which RubyMine will run. Runs like a charm now, and for days too.
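
On Linux the whole change amounts to swapping one JVM flag, so something like this one-liner does it (the path is relative to wherever you unpacked RubyMine, and your existing Xmx value may differ):

sed -i 's/-Xmx[0-9]*m/-Xmx128m/' rubymine/bin/rubymine.vmoptions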

Other annoyances include the default editor settings. Changing to soft tabs was more confusing than it should have been. Allowing "virtual space" after the end of a line leads to a lot of accidental whitespace. The right gutter line isn't helpful for Rails development. The font face was terrible; I had to customize the default theme to make it use the Apple default font. And finally, I don't like the "Project" oriented state. I would rather open from within a directory in the terminal and work from there. I also don't care for it generating a work folder within my Rails project – it's just one more thing I have to pay attention to when using version control.

All in all, this is certainly one of the best editors I have seen yet for Ruby and Rails work, and I am sure I haven't even scratched the surface of what this editor is capable of doing. It beats NetBeans 6.x and RadRails. It will be interesting to see how Aptana Studio 3 turns out, as the Aptana folks seem to really be putting some love into it. Those editors felt like Ruby support was tacked onto what was intended to be a Java editor. The other end of the editor spectrum is hundreds of weak text editors. I wanted something in-between. RubyMine has a clear focus, and all of its options center around Ruby and Rails work. So, if you are using TextMate as your first and only Ruby on Rails editor, give yourself some perspective and try out RubyMine's free trial.

Apple, Computers, Events, Hardware, Open-source, Personal, Ruby, Software, Thoughts, Web, Windows

3 Days Down, 40 Years to Go

Yesterday at 5:00pm marked the end of my first week at Beacon Interactive Systems. My coworkers are all really nice, and there is a surprising geographic mix among them. Some folks have lived in Massachusetts their whole lives, while others come from Maryland and Michigan. The cultural differences between "down South" and here are pretty minimal, unless you just feel like having a good laugh. There have been two big adjustments however: snow is really not a big deal up here – people hardly notice it outside. The second is restaurants don't have sweet tea. You would have to drink sweet tea to understand why this is a big deal.

In general:

  • The job is much less stressful. Even during crunch times, you hear Southpark and Big Lebowski quotes (“I’m not your pal, guy!”).
  • The environment is a lot less structured. You come in whenever, you leave whenever. If you want to go outside and toss around the football, go for it. Good team-builder by the way.
  • The skill sets of my coworkers are all very impressive. It's the rifle vs shotgun approach.
  • The job area is nice – it's next to Harvard. Getting there is rough – I have to cut across the city. My 20 minute commute takes about an hour.
  • Developing on a Mac is an easier transition than I thought. I won't say that I'm in love with it yet, but it's workable. The biggest pain has been this silly bundled keyboard and mouse. No one else uses them. Also, package management on the Mac sucks compared to Linux. I think I would actually prefer to use Linux. Time will tell on this one.
  • The coffee isn’t as good.

An interesting collision of viewpoints occurred my second day at the job, while I was shadowing a coworker on a joint project. He was showing me their (complex) system of bug detection and correction. They write up a use case, file a ticket, branch the code, create a changeset, rebase it, merge it into QA, verify it, then push it back upstream. Not coming from anything near that complex ("Hey Ben – log in to the production server and change it!"), I was amazed that they spent so much time on this process. I asked if they ever just ignore a bug that would be too minimal to matter. My coworker asked me to clarify what I meant. I replied with "You know, it's good enough for government." He paused and looked at me funny, then reiterated that they address all bugs that are discovered. A bug is a bug. It will take me a while to harden my resolve to be like theirs, and aim for perfection. Perfection wasn't possible before because we had the typical scenario of overworked, underpaid, and on a deadline.

We are moving into our new building in a few weeks. When we move, there will be a train station across the street from the new building, and I will probably make the transition to riding into work. It's about the same amount of time, but I would have the ability to sleep, read, surf the Internet, etc., all without causing an accident.

Wish me luck for next week – its been a difficult adjustment.

Computers, Linux, Open-source, Ruby, Software, Thoughts, Web, Windows

Deploying Ultrasphinx to Production

Recently I rolled out a Rails app that used the Sphinx full-text indexing service in conjunction with the Ruby Ultrasphinx gem. I am very impressed with some aspects of this project, and I wanted to share my experiences for anyone looking for a better search experience with SQL databases.

Why Sphinx? Sphinx is an open-source, stable full-text indexing service. It also has good support in the Rails landscape. Why full-text indexing? In a nutshell, people can spot a crappy search implementation really quickly. Google is at the top of their game because it searches the way people think. Just try implementing the following with just SQL:

  • Conditional logic (&, |, -)
  • Rank based search results
  • Insensitivity to case (ben vs Ben), punctuation (bens laptop vs ben's laptop), and plurality (virus vs viruses)
  • Phonetic searching (candy can match candi)
  • Searching across multiple tables with results being in either, but not both
  • 100,000 rank based results in .02 milliseconds
  • Cached data, with delta scans for minimal performance impact

Yes, you could do all these things – but why? The folks at Sphinx do nothing but this, and have packaged it up for you to use at your whim. There are other niceties that you can include like sorting, pagination, restricting to certain columns, and best of all spell checking via the raspell gem.

To begin, you will need a MySQL or PostgreSQL backend – something I just happened to luck out on with this particular application. You should install Sphinx and poke around for a few minutes to understand what Ultrasphinx provides you.

A note for Windows users – add the Sphinx bin/ folder to your path so you can just call its commands à la Unix. Additionally, I had issues running my Rails project in a directory containing spaces. YMMV.

Ultrasphinx provides a Rails-centric way of using Sphinx. Sphinx provides the search service, and Ultrasphinx builds the configuration file and manages the Sphinx process via rake tasks. Inside the models that will be Sphinx-ified, you will need to indicate which fields are indexable and sortable. A useful feature of Sphinx/Ultrasphinx is the ability to create associated SQL to join multiple tables in the full-text search. See http://blog.evanweaver.com/files/doc/fauna/ultrasphinx/files/README.html for more information.
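
For a rough idea of the local workflow (using the same rake tasks the Capistrano recipes below call), it boils down to generating the Sphinx configuration, building the index, and starting searchd:

rake ultrasphinx:configure
rake ultrasphinx:index
rake ultrasphinx:daemon:start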

Once Ultrasphinx is configured, and has created a configuration file, you can start the indexing process, then start your Sphinx service. Notes on doing this in production via Capistrano follow:

desc 'Start Ultrasphinx searchd process'
task :ultrasphinx_start do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production" do |ch, stream, data|
    if data[/stopped/]
      run "cd #{release_path}; rake ultrasphinx:daemon:start RAILS_ENV=production"
    end
  end
end

desc 'Stop Ultrasphinx searchd process'
task :ultrasphinx_stop do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production" do |ch, stream, data|
    if data[/running/]
      run "cd #{release_path}; rake ultrasphinx:daemon:stop RAILS_ENV=production"
    end
  end
end

desc 'Status of Ultrasphinx searchd process'
task :ultrasphinx_status do
  run "cd #{release_path}; rake ultrasphinx:daemon:status RAILS_ENV=production"
end

desc "Reindex Ultrasphinx via indexer process"
task :ultrasphinx_reindex do
  run "cd #{release_path}; rake ultrasphinx:configure RAILS_ENV=production"
  puts "NOTE THAT THIS CAN TAKE A WHILE"
  run "cd #{release_path}; rake ultrasphinx:index RAILS_ENV=production"
end
before :ultrasphinx_reindex, :ultrasphinx_stop
after :ultrasphinx_reindex, :ultrasphinx_start
after 'deploy:update_code', :ultrasphinx_reindex, :roles => [:app, :web]

This Capistrano deploy.rb fragment has four tasks – start, stop, status, and reindex. The before and after hooks ensure that the service is stopped before reindexing occurs. Note that this is a full reindex, and not a delta scan. My application didn't have a reliable datetime column to determine new entries with, so I opted to do a full index every three hours instead. The database is a small one, with less than 100MB of data, so I can get away with it here.

Additionally, you will want to set up a recurring cron task in your production server environment:

# Sphinx updates: http://blog.evanweaver.com/files/doc/fauna/ultrasphinx/files/DEPLOYMENT_NOTES.html
# Full reindex every three hours (this setup has no delta indexes)
0 */3 * * * bash -c 'cd /path/to/app/current/; RAILS_ENV=production rake ultrasphinx:index >> log/ultrasphinx-index.log 2>&1'
# Make sure the searchd service is running (checked every three minutes)
*/3 * * * * bash -c 'cd /path/to/app/current/; RAILS_ENV=production rake ultrasphinx:daemon:start >> log/ultrasphinx-daemon.log 2>&1'

Computers, Family, Hardware, Linux, Open-source, Personal, Software, Windows

Bringing the Dead Back to Life

No, not zombies. A few days ago, a friend of mine brought me a computer and told me that it was running REALLY slow. I did some investigation, and quickly discovered after running some diagnostics that the hard drive was on the fritz. They had been wanting a new hard drive for a while anyways – one that was bigger, faster, and one that worked. I went to Newegg.com, bought a replacement SATA drive, and after it arrived, began the task of migrating to this new drive.

The installation of Windows that was on the old drive was fine (aside from the drive itself causing errors). I didn’t want to reinstall Windows, as it is such a painful and time-consuming process. Hunt down drivers, copy over the documents, reconfigure settings, install the old software again. No thanks. Instead, I decided to explore my options with the open source utility dd. Why not Norton Ghost, or one of the dozens of other programs? They all cost money, and I suspect some just use a ported version of dd to do their dirty work behind the scenes. I am cutting out the middle-man (and the cost) and going straight for dd.

"dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data" – Wikipedia

dd allows a user to copy data from a source to a destination bit for bit – an exact mirror image. Even better, if the device has corruptions, you can use dd_rescue. The concept is the same, however this utility assumes that the source is likely damaged and does some special handling for bad sectors.

Making the backup

To make your disk copy, boot the source computer to a Linux live CD such as Ubuntu. A live CD may or may not mount your hard drive inside of the operating system. If it does, you want to unmount this device. In Ubuntu, the mount directory is "/media". If you browse for files here and see a Windows partition, unmount it first. This ensures that there are no write operations occurring during the copy of the disk.
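
A quick way to check and clean up (the mount point name is just an example; yours will differ):

mount | grep /media                # list anything the live CD auto-mounted
sudo umount /media/<mount point>   # unmount the Windows partition if it shows up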

Next, fire up dd_rescue and use the following syntax:

sudo dd_rescue /dev/sda - | ssh user@remote_server "dd of=/path/to/save/backup"

  • sudo runs this command with elevated permissions
  • dd_rescue needs the two parameters of <source> and <destination>
  • Our source is /dev/sda. Note that we are copying the entire drive (sda) instead of a partition (such as sda1, sda2, etc). Your device may differ from “sda”.
  • Our destination is "-", which is shorthand for standard output. On its own this would direct the binary content to the screen, but here we pipe it onward
  • The “|” symbol (pipe) is redirecting the previous output to a new process
  • ssh lets us securely run a command on another machine and stream data to it. In this case, we are specifying a username and a remote server (IP)
  • Finally, “dd” is executed on the remote server with the parameter “of” set to the path where the image will be saved to

Note that you will need to have sudo access, an ssh account on another machine, and permission to write to the path you will be saving to. Also, make sure that your replacement drive is the same size or larger than your source drive. Makes sense, right?
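
It is also worth confirming up front that the remote machine has room for a full disk image under the path you plan to save to:

ssh user@remote_server "df -h /path/to/save"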

This process will take a while, depending on the damage to the drive, the size of the disk, network speed, etc. I maxed out my router at around 11 MBps (100 Mbps). dd_rescue will provide output and let you know when it is complete. An 80 GB hard drive took about 2 hours to complete.

Restoring the Backup

Once this completes, you are ready to shut down the source machine, and swap out the bad hard drive for the new hard drive. Reboot to your live CD, and run the following command (taking some values from earlier):

ssh user@remote_server "dd if=/path/to/save/backup" | sudo dd of=/dev/sda

  • We are using ssh again, this time to pull the image back across to the client
  • On the remote server, we are executing dd with the if parameter. Note the if: we are using the backup as the input file, instead of the output file (of)
  • We are piping this output into our local machine
  • Using sudo, we execute dd with the of parameter. This will do a bit by bit copy of the image to the destination media.

Note that again you want to ensure that the target is not mounted in any way. Also, we are writing to the entire hard drive, so I am just using "/dev/sda".

Guess what?! This process will take a while, just like the copy operation before.
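
Before rebooting into Windows, a quick look at the partition table is a cheap way to confirm the image landed where you expected:

sudo fdisk -l /dev/sda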

Expand the New Partition

Note that if your replacement drive is larger, you will need to expand the Windows partition. This makes sense, since an 80GB drive has fewer bits than a 250GB drive. When the image (that is 80GB) finishes copying, the rest of the hard drive is completely untouched. To be able to use the rest of the replacement hard drive, we will need to expand this partition.

You may need to run "chkdsk /R" a few times in Windows if your hard drive has any bad sectors. As a note, the filesystem you copied over may still carry sectors marked as bad from the old drive, and these need to be identified and cleared. After you run chkdsk, fire up your live CD again and launch gparted. This is a graphical tool for managing hard drive partitions. You should see the unallocated space at the end of your drive. Click the preceding partition (NTFS, etc.) and choose to resize this partition, taking up all of the unallocated space. Apply your changes, and in a while, you have a full-sized hard drive partition for Windows.

If you receive an error about bad sectors preventing ntfsresize from running from gparted, you may not be able to continue. Despite running chkdsk and restarting multiple times as instructed, I was not able to continue in gparted due to this message. There is a switch, --bad-sectors, that can be passed to ntfsresize, however the gparted GUI does not let you do this. I first tried setting an alias, but it seems the program references the full path of the command, so the alias did not work. Finally, I arrived at this solution:

# sudo su
# mv /usr/sbin/ntfsresize /usr/sbin/ntfsresize.real
# touch /usr/sbin/ntfsresize
# chmod a+x /usr/sbin/ntfsresize

Now, edit the ntfsresize file you just created so it contains something like the following:

#!/bin/bash
# Forward all arguments to the real ntfsresize, adding the --bad-sectors switch
/usr/sbin/ntfsresize.real "$@" --bad-sectors

  • We are becoming root with sudo su
  • We move the existing ntfsresize file to another filename (ntfsresize.real)
  • We then create a new file named ntfsresize with execute permissions, and edit this file
  • Inside this file, we call the original ntfsresize with our --bad-sectors switch and "$@". This passes any arguments sent to our script on to the ntfsresize.real binary.

After this, run gparted, and you should be able to continue. A resize from 80GB to 250GB took about an hour to complete.

References: