Computers, Hardware, Linux, Software

Trials and Tribulations of the D-Link DNS-320

A funny noise is something you never want to hear coming from your primary disk drive. If you are like me, you probably never think too much about backing up your important data until it becomes a looming emergency. I realized, while listening to that noise and watching the red HDD indicator light stay on constantly, that I had some pretty important stuff on that drive. Wedding photos, baby photos, videos, documents and other things that you can’t just re-download. Why not use cloud backup? I’m a bit skeptical of entrusting others with my data. At the end of the day, a failure on their end may net you a refund, and you could even take them to court, but no matter what, you are never getting that content back. It was time to take matters into my own hands.

The Setup: I bought a two-bay D-Link DNS-320, and two Western Digital 2TB Red hard disk drives. The plan was to set up the disks in RAID 1, so the odds of a simultaneous failure were as statistically low as my finances would allow. I liked the DNS-320 because it offered compelling features and a two-drive system for a fraction of the cost of other names like Synology. I wanted CIFS and NFS sharing, and RAID 1. Everything else was a perk. Pleasantly enough, the DNS-320 also comes with a UPnP server, and has some nice SMART monitoring options, which will send me an email in the event that errors are detected. I would create one 2TB partition out of the disks, and share it over the network with restricted access. My wife and I would both be able to connect from our computers to back up any data we wanted.

Configuration: CIFS setup was a breeze, but NFS required a bit more poking around. This post is dedicated to overcoming some of the issues I had. Within the NAS web interface, partition your drives, and grab a cup of coffee. Straightforward stuff. I created a single partition labeled “Volume_1”. After this is complete, go to “Management” -> “Application Management” -> “NFS Service”, check “Enable”, then click “Save Settings”.

Go into “Management” -> “Account Management” -> “Users / Groups” and create a user account if you have not already done so. Within “Account Management”, click on “Network Shares”, and click the “New” button. This will launch the wizard for setting up a share. Select the appropriate users, groups, settings, and on the “Step 2-1: Assign Privileges – Access Methods”, ensure that the “NFS” checkbox is checked.

Move along by clicking “Next” until you reach “Step 2-1-2: NFS Settings”. You will need to specify the Host IP address of the client that will be connecting to this share. This whitelists the supplied IP address as a client location. I’m not positive what format you would use to denote multiple IP addresses; however, an asterisk character allows all hosts. I connect with multiple machines via NFS, so using a single IP address is not sufficient. Despite being accessible from any IP address, the client will still need to authenticate using the credentials entered in “Step 1” and “Step 1-2”. I consider this good enough. Also, ensure you check the “Write” checkbox if you wish to be able to write files to this location.

Client Configuration: You will need to install the NFS client package for your distribution of Linux. I am running Ubuntu 12.10, where the client utilities ship in the “nfs-common” package. Install it using “sudo apt-get install nfs-common”.

Now that you have the NFS client utilities, you can use the “showmount” command to list the shares on the NAS device: “showmount -e NAS_IP_ADDRESS” (e.g. “showmount -e 192.168.1.1”). Depending on how you have the disks partitioned and shared on the NAS device, this path will differ.

Export list for 192.168.1.1:
/mnt/HD/HD_a2 *

This path should be consistent with the information in the “Network Shares Information” dialog. This can be accessed by clicking the magnifying glass icon underneath the NFS column in the “Network Shares” interface.

You can now mount your device using the “mount” command: “mount -t nfs NAS_IP_ADDRESS:/REAL_PATH /path/to/mount_point”. The NAS_IP_ADDRESS is the IP address of the NAS device. The REAL_PATH is the information obtained either via showmount, or the “Network Shares Information” dialog. The “/path/to/mount_point” is just an empty directory somewhere on your local machine.
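For example, using the export shown above and an empty directory at “/media/nas” (the mount point is just an assumption for illustration; any empty directory will do):

sudo mkdir -p /media/nas
sudo mount -t nfs 192.168.1.1:/mnt/HD/HD_a2 /media/nas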

You can also set this mounting option up to be persistent on reboots using the “/etc/fstab” file. Add a new line to this file, and format your entry similar to as follows:

# /etc/fstab: static file system information.
# ...
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# ...
192.168.1.1:/mnt/HD/HD_a2  /media/nas  nfs rw,hard,intr 0  0

The values used are identical to the values used in the mount command preceding this example. The options specify changes to the mount behavior. “rw” grants read/write permissions. “hard” causes requests to be retried indefinitely if the server stops responding, and “intr” allows those in-flight requests to be interrupted (for example, with Ctrl-C) if the server becomes unreachable. I would recommend these options when copying large amounts of data, or when on a wireless network, as failed transmissions will be silently retried rather than erroring out on temporary timeouts.
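You can test the new entry without rebooting by asking mount to read it straight from fstab (assuming the “/media/nas” mount point from the line above):

sudo mount /media/nas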

Note on Permissions: Initially when I copied my content over to the NAS, I did so via NFS, and I was not able to view the contents via CIFS (Windows sharing). The problem came down to permissions. The directories did not have an executable bit set for “other”, so permission was denied when a request was made to show the contents of a directory. This was difficult to locate as the UPnP server showed all my media without any permissions issues. A quick search of UNIX permissions revealed that the execute bit is what allows a directory to be entered and its contents accessed, and the NAS is accessing content created via NFS as “other” (neither user nor group permissions apply). You can recursively grant the execute bit on any existing content by issuing the following command from the top directory: “chmod o+x -R /root_directory”, where “root_directory” is the folder you want to change. The “-R” flag will recursively apply this permission to all content within.
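As a quick check (the “Pictures” directory below is just a hypothetical example), “ls -ld” shows whether the last permission triplet is missing its “x”, and the same chmod form from above fixes it:

ls -ld /media/nas/Pictures     # last triplet should end in "x", e.g. drwxrwxr-x
chmod o+x -R /media/nas/Pictures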

Lessons Learned: I have about 50GB of pictures and video, and another 20GB in purchased content. I underestimated how long this would take to transmit over an 802.11g connection. 54Mbps is just under 7MB/s, which means that a 50GB transfer would take over 2 hours to complete. And that is not counting temporary speed drops, hard drive access times, retries, etc. When working with large amounts of data, I recommend a 1Gbps Ethernet connection. I will probably be investing in a new router soon that can accommodate these higher speeds.
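The back-of-the-envelope math, using the theoretical 802.11g rate (real-world throughput will be noticeably worse):

54 Mbps / 8 = 6.75 MB/s
50 GB is roughly 51,200 MB
51,200 MB / 6.75 MB/s is roughly 7,600 seconds, or about 2.1 hours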

The D-Link DNS-320 is a solid first-time NAS device for under $150. Other than a few gotchas when setting up the NAS (most self-inflicted), this device is NFS friendly, and has made a fine addition to my hardware ecosystem.

Apple, Computers, Linux, Open-source, Ruby, Software, Thoughts, Web

PostgreSQL for Ruby on Rails on Ubuntu

My new desktop came in at work this week, and the installation was painless thanks to the great driver support in Ubuntu 11.10. For anyone setting up a Rails development box based on Linux, I have some tips to get around a few pain points when using a PostgreSQL database.

Installation:

Postgres can be quickly and easily installed using apt-get on Debian or Ubuntu based distributions. Issue the command:

apt-get install postgresql

Ruby Driver

In order for Ruby to connect to PostgreSQL databases, you will need to install the pg gem. This gem will need the development package of PostgreSQL to successfully build its native extension. To install the PostgreSQL development package, issue the following command:

apt-get install libpq-dev # EDIT: postgresql-dev was replaced by this package on Ubuntu 11.10
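With the development headers in place, the pg gem should build its native extension cleanly. A rough sketch of the install (in a Rails project you would typically list the gem in your Gemfile and run bundle install instead):

gem install pg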

Setup A PostgreSQL Role

You can configure PostgreSQL to allow your account to have superuser access, allowing your Rails tasks to create and drop databases. This is useful for development, but is strongly discouraged for production. That being said, we can create a PostgreSQL role by logging into psql as postgres as follows:

su postgres -c psql

This will open a PostgreSQL prompt as the database owner postgres. Next, we need to create an account for our user. This should match the response from “whoami”:

create role <username> superuser login;

We can now exit from psql by issuing “\q”. Try to connect to psql directly by issuing the following command from your shell account:

psql postgres

This should allow you to connect to the default database postgres without being prompted for credentials. You should now be able to issue the rake commands for creating, and dropping the database:

rake db:create
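For reference, a minimal sketch of config/database.yml for this setup might look like the following (the database names are placeholders; with a trusted local role there is no need for username or password entries):

development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  pool: 5

test:
  adapter: postgresql
  encoding: unicode
  database: myapp_test
  pool: 5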

Rspec Prompts for Credentials

I was being prompted by Rspec for credentials when running my test suite. If you would like to remove this credential prompt, please read the following:

There are differences in how the PostgreSQL package is configured by Homebrew on OS X, and how it is packaged on Ubuntu and other distributions. One difference is the level of security configured in the pg_hba.conf file. This file identifies which connection sources, using which authentication mechanisms, should be allowed or denied. By default, Rspec will trigger a prompt for a password even if your shell account has trusted permissions. This is because Rspec connects not over the local socket, but over TCP to localhost. To allow connections to localhost to be trusted, you will need to modify the pg_hba.conf file.

The pg_hba.conf file is located at /etc/postgresql/<version>/main/pg_hba.conf.

Comment out any existing lines at the bottom of the file and append the following:

local   all             all                                      trust
host    all             all              127.0.0.1/32            trust
host    all             all              ::1/128                 trust

This will allow connections from the shell, as well as connections to 127.0.0.1 (localhost) using both IPv4 and IPv6.

You will need to restart PostgreSQL for the changes in this file to take effect:

/etc/init.d/postgresql restart

PostgreSQL Extensions

If you want to make use of any of the additional extensions to Postgres, including fuzzystrmatch, you will need to install the postgresql-contrib package:

apt-get install postgresql-contrib

The extensions will install to /usr/share/postgresql/<version>/extension/

With Postgres version 9, you can create these extensions in your database using the new CREATE EXTENSION syntax. In the case of the fuzzystrmatch extension, first connect to your database with psql:

psql <database_name>

Once inside your database:

create extension fuzzystrmatch;
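
Assuming the extension loaded without errors, a quick sanity check from the same prompt (both functions ship with fuzzystrmatch):

select levenshtein('kitten', 'sitting');  -- edit distance, returns 3
select soundex('postgres');               -- phonetic code for the string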

Computers, Linux, Open-source, Ruby, Software, Thoughts, Web

Ubuntu 10.04 – Very Refined

A lot has changed with Linux since I have last visited Ubuntu. I had an old crufty version of Ubuntu 8.10 sitting on my hard drive that I hadn’t booted into in quite some time. Realizing that April was a release month for Ubuntu, I decided to go get the latest and greatest.

There was a time when the software that I used on Linux was exclusive to Linux. It took a lot of hunting to find the best programs for what I was doing, since the names were all unfamiliar. That no longer seems to be the case. Google Chrome has an official Linux client that runs quite well. Bookmark syncing to your Google account provides an easy way to import your information. Dropbox has a Linux client that integrates with the Nautilus file manager.

Rhythmbox integrates with Last.fm, Magnatune, and the new Ubuntu One music store. Empathy integrates with Facebook chat, Google Talk, AIM, IRC, and many others. Gwibber integrates with Facebook, Twitter, Flickr, Digg, and others. All of these tie in with Ubuntu’s new Indicator Applet.

The new theme is nice; it does away with the brown and moves to a darker look, which I prefer. The nVidia drivers are stable as always. Compiz runs discreetly, providing effects that enhance the user experience without overwhelming it. The icing on the cake is the new Ubuntu Software Center, which takes all of the “apt-cache and apt-get” out of the equation. The interface is revamped from the old Synaptic package manager and provides some nice touches such as “Featured Applications”, category views, and a seamless search, select and install experience.

If you are doing Rails development on Windows, do yourself a favor and revisit this classic to see how much improvement there has been to the Ubuntu experience.

Computers, Family, Hardware, Linux, Open-source, Personal, Software, Windows

Bringing the Dead Back to Life

No, not zombies. A few days ago, a friend of mine brought me a computer and told me that it was running REALLY slow. I did some investigation, and quickly discovered after running some diagnostics that the hard drive was on the fritz. They had been wanting a new hard drive for a while anyways. One that was bigger, faster, and one that worked. I went to Newegg.com, bought a replacement SATA drive, and after arrival, began the task of migrating to this new drive.

The installation of Windows that was on the old drive was fine (aside from the errors caused by the drive itself). I didn’t want to reinstall Windows, as it is such a painful and time-consuming process. Hunt down drivers, copy over the documents, reconfigure settings, install the old software again. No thanks. Instead, I decided to explore my options with the open source utility dd. Why not Norton Ghost, or one of the dozens of other programs? They all cost money, and I suspect some just use a ported version of dd to do their dirty work behind the scenes. I am cutting out the middleman (and the cost) and going straight for dd.

“dd is a common Unix program whose primary purpose is the low-level copying and conversion of raw data” – Wikipedia

dd allows a user to copy data from a source to a destination bit for bit – an exact mirror image. Even better, if the device has corruptions, you can use dd_rescue. The concept is the same, however this utility assumes that the source is likely damaged and does some special handling for bad sectors.
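For a sense of the plain dd form before adding ssh into the mix, a local image of a healthy drive would look roughly like this (the device and output path are only examples):

sudo dd if=/dev/sda of=/path/to/backup.img bs=4M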

Making the backup

To make your disk copy, boot the source computer to a Linux live CD such as Ubuntu. A live CD may or may not mount your hard drive automatically. If it does, you want to unmount the device. In Ubuntu, the mount directory is “/media”; if you browse for files there and see a Windows partition, unmount it first. This ensures that there are no write operations occurring while the disk is being copied.
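A quick way to check and unmount from a terminal (the volume label below is just a placeholder):

mount | grep /dev/sda
sudo umount /media/your_volume_label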

Next, fire up dd_rescue and use the following syntax:

sudo dd_rescue /dev/sda - | ssh user@remote_server "dd of=/path/to/save/backup"
  • sudo runs this command with elevated permissions
  • dd_rescue needs the two parameters of <source> and <destination>
  • Our source is /dev/sda. Note that we are copying the entire drive (sda) instead of a partition (such as sda1, sda2, etc). Your device may differ from “sda”.
  • Our destination is “-”, which is shorthand for standard output. On its own this would dump the binary content to the terminal, but here it feeds the pipe
  • The “|” symbol (pipe) is redirecting the previous output to a new process
  • ssh is a secure way to copy files from one machine to another. In this case, we are specifying a username and a remote server (ip)
  • Finally, “dd” is executed on the remote server with the parameter “of” set to the path where the image will be saved to

Note that you will need to have sudo access, an ssh account on another machine, and permission to write to the path you will be saving to. Also, make sure that your replacement drive is the same size as, or larger than, your source drive. Makes sense, right?

This process will take a while, depending on the damage to the drive, the size of the disk, network speed, etc. I maxed out my router at around 11 MB/s (on a 100 Mbps link). dd_rescue will provide progress output, and let you know when it is complete. An 80 GB hard drive took about 2 hours to complete.

Restoring the Backup

Once this completes, you are ready to shut down the source machine, and swap out the bad hard drive for the new hard drive. Reboot to your live CD, and run the following command (taking some values from earlier):

ssh user@remote_server "dd if=/path/to/save/backup"| sudo dd of=/dev/sda
  • We are using ssh again, this time to pull the image back across to the client
  • On the remote server, we are executing dd with the if parameter. Note the if: this time the backup is the input file, rather than the output file (of)
  • We are piping this output into our local machine
  • Using sudo, we execute dd with the of parameter. This will do a bit by bit copy of the image to the destination media.

Note again that you want to ensure the target is not mounted in any way. Also, we are restoring the entire hard drive, so I am again using “/dev/sda”.

Guess what?! This process will take a while, just like the copy operation before.

Expand the New Partition

Note that if your replacement drive is larger, you will need to expand the Windows partition. This makes sense, since an 80GB drive has fewer bits than a 250GB drive. When the image (that is 80GB) finishes copying, the rest of the hard drive is completely untouched. To address this, and be able to use the rest of the replacement hard drive, we will need to expand the partition.

You may need to run a “chkdsk /R” a few times in Windows if your hard drive has any bad sectors. As a note, new drives usually have a few bad sectors that need to be identified and disabled. After you run chkdsk, fire up your Live CD again, and launch gparted. This is a graphical tool for managing hard drive partitions. You should see the unallocated space at the end of your drive. Click the preceding partition (NTFS, etc) and choose to resize this partition, taking up all of the unallocated space. Apply your changes, and in a while, you have a full sized hard drive partition for Windows.

If you receive an error about bad sectors preventing ntfsresize from running from gparted, you may not be able to continue. Despite running chkdsk and restarting multiple times as instructed, I was not able to continue in gparted due to this message. There is a switch, --bad-sectors, that can be passed to ntfsresize, however the gparted GUI does not let you do this. I first tried setting an alias, but it seems the program references the full path of the command, so the alias did not work. Finally, I arrived at this solution:

# sudo su
# mv /usr/sbin/ntfsresize /usr/sbin/ntfsresize.real
# touch /usr/sbin/ntfsresize
# chmod a+x /usr/sbin/ntfsresize

Now, edit the ntfsresize file you just created and add something like the following:

#!/bin/bash
# hand every argument gparted passed us straight to the real binary, adding --bad-sectors
ntfsresize.real "$@" --bad-sectors
exit 0
  • We are becoming root with sudo su
  • We move the existing ntfsresize file to another filename (ntfsresize.real)
  • We then create a new file named ntfsresize with execute permissions, and edit this file
  • Inside this file, we call the original ntfsresize with our --bad-sectors switch and "$@", which passes along any arguments sent to our script on to the ntfsresize.real binary.

After this, run gparted, and you should be able to continue. A resize from 80GB to 250GB took about an hour to complete.


Computers, Hardware, Linux, Open-source, Software, Web, Windows

It’s Almost Here

Laptopmag.com has just reviewed a pre-production unit of the Dell Inspiron Mini 9. It seems pretty impressive, with the following specs:

  • $349 base price tag
  • 1.6-GHz Intel Atom processor
  • Tailored version of Ubuntu (or Windows XP)
  • 9.1 x 6.8 x 1.3 inches
  • 2.3 lbs
  • 8.9 inch 1024 x 600-pixel resolution
  • 1.3-megapixel webcam
  • 3 USB 2.0 ports, VGA out, Ethernet, headphone and microphone jacks, 4-in-1 memory card reader
  • Bluetooth and wireless G (with Mobile broadband)
  • 4GB solid state drive (also available with a larger 8GB and 16GB SSD)
  • 3 1/2 hour battery

Looks pretty tasty…

Computers, Hardware, Linux, Open-source, Personal, Software, Thoughts, Windows

Ubuntu – How I Have Missed You…

Since I started working in Administrative Systems, I have been tasked with supporting a myriad of Windows-only applications. I assumed that it would be close to impossible to try and continue running any form of Linux on my work machines – especially with my boss popping in my office and telling me to pull up application X at any given second.

However, I am now tasked with working in Solaris for about 90% of my day, and I have to say that as great as PuTTY can be, it just isn’t the best solution. Nothing beats a native terminal connection. Especially given that Windows doesn’t know jack about any filesystems other than its own, which makes editing files on the Solaris machine difficult and slow.

Slowly Linux started creeping back into my mind, and it made me homesick every time I would go visit Scott and Chris over in VS (Well, that problem took care of itself…). I have had much time to ponder how feasible a switchover would be (and what I would need to take care of as prerequisites), and I came up with a list of issues I would have to resolve first:

  1. Where can I place files that would be common to both Windows and Linux?
  2. How could I synchronize my email clients, and web browsers (history, bookmarks, passwords)?
  3. How can I access Windows applications if there is no other alternative?

These issues required some research on my part, but I finally found the following solutions:

  • ntfs-3g: This particular piece of software is the read/write driver for NTFS partitions on Mac/Linux. I am counting on this to read/write data on the NTFS partition. It has matured so much recently that the latest version of Ubuntu can be installed inside the Windows NTFS partition. Condition #1 satisfied – the files can stay where they are.
  • Mozilla Thunderbird / Mozilla Firefox: The Mozilla corporation did something so clever I have to applaud them (*clap clap clap*) – they made all application data, as well as settings, reside in a profile folder. On Windows, Firefox stores this at “C:\Documents and Settings\<user>\Application Data\Mozilla\Firefox\Profiles\<profile instance>”. In Linux, it is located at “/home/<user>/.mozilla/firefox/profiles/<profile instance>”. Mozilla Thunderbird is essentially the same. The applause is because the settings are the same on any OS! I placed the folders on the Linux partition by symlinking them to the Windows partition. Condition #2 satisfied – Email and Web browsers are always in sync because it is the same instance.
  • VMWare Server: No surprises here – this kind of software is a dime a dozen today. However VMWare offers a feature where with a bit of configuration the Operating System you can run can be the physical partition of your existing Windows partition. Pretty slick – that is after Windows throws a bitch fit that its configuration has been change and you absolutely positively must activate it again. The solution for that is to create a seperate hardware profile for Windows (a configuration that Windows made mandatory because of its bitch fits). Condition #3 satisfied – if I need Windows I can just flip over to Workspace 4 (I named it hell) and Windows is waiting for my input.