Apple, Computers, Hardware, Linux, Software, Windows

Using Synergy Over VPN

I’ve been watching a lot of Linus Tech Tips, and one of their sponsors sells a product called Synergy. More information is on their product page: https://symless.com/synergy . To summarize, it is a software KVM (no video, so I guess it is “KM”?) solution. The use case was perfect for me. On my desk I have an Apple work laptop, a Windows desktop, and a Linux personal laptop. Anyone who has done extensive work on a laptop keyboard and touchpad knows that it isn’t optimal. I didn’t want multiple keyboards and mice cluttering up the top of my desk. I dropped some money on Synergy, and it just works!


That is, until I had to connect to our company VPN the next week. They use a full-tunnel solution: when I connect, I lose everything local. I can’t print, I can’t access my NAS, and most importantly I can’t access my keyboard and mouse. (The video is fine because it is hard-wired to an external monitor.) What to do?

SSH to the rescue! What is SSH? Secure Shell: a protocol that allows one computer to securely interface with another. Here, though, we will use it only for port forwarding, not for an interactive session. The goal is to take the OS X machine (the Synergy client) and SSH into the Windows machine (the Synergy server). Within this SSH connection we can forward ports; it is a tunnel running inside the SSH session. This exposes local port 24800 on OS X that actually points to port 24800 on the remote server, the port Synergy uses for its connections.

You will need a few tools and a little patience. Having just gone through this, I’m sharing it for posterity, or maybe for anyone who has thrown in the towel over how much a VPN cripples access to home devices.

I have the following Synergy setup:

  • Windows 10 Synergy server (keyboard and mouse are physically connected to the desktop)
  • OS X Synergy Client
  • Linux Synergy Client
  • Router with a local area network all these devices share
  • Admin access to the router for port forwarding
  • Autossh package for OS X (available via brew)

First step: get Windows 10 up to speed with SSH. How this isn’t built in as a service in the year 2017, I have no idea. Grab the OpenSSH server package for Windows from https://www.mls-software.com/opensshd.html . After downloading, extract and run the setup file. This creates a new Windows service for OpenSSH that runs on port 22, and the installer prompts you to generate an SSH host key for the server.

Once this server is running, you will need to add your user to the list of SSH users. Open PowerShell as an administrator, change into the C:\Program Files\OpenSSH\bin directory, and run the following commands:

mkgroup -l >> ..\etc\group    # export the local Windows groups into OpenSSH's group file
mkpasswd -l >> ..\etc\passwd  # export the local Windows users into OpenSSH's passwd file

Try and connect to your SSH server from the OS X client:

ssh <user>@<server IP> # e.g. ssh Ben@192.168.1.95

You should be prompted for your Windows password. Once you can successfully log in to the server, we can set up public key authentication. This removes the need to type your password, because you identify yourself with an SSH public key. From your OS X machine, get your public key:

cat ~/.ssh/id_rsa.pub
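
If that file doesn’t exist yet, generate a key pair first (a minimal sketch; accepting the defaults places the public key at ~/.ssh/id_rsa.pub):

ssh-keygen -t rsa -b 4096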

Put the contents of this file on your SSH server in C:\Program Files\OpenSSH\home\<user>\.ssh\authorized_keys (this home path actually resolves to C:\Users\<user>\.ssh). If the .ssh directory doesn’t exist, you will need to create it first. Now we need to configure the server to allow public key authentication. Edit the C:\Program Files\OpenSSH\etc\sshd_config file and change the following lines:

StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Restart the OpenSSH server for the changes to take effect:

net stop opensshd
net start opensshd
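
To confirm the service came back up and is listening on port 22, a quick sanity check from the same PowerShell window (not part of the original setup steps, just standard Windows tooling):

Get-Service opensshd           # should report Running
netstat -an | findstr ":22"    # should show a LISTENING entry for port 22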

You should now be able to SSH into the server just as before, but without being prompted for a password.

Now we are ready to create an SSH tunnel. Before we incorporate AutoSSH (which handles retries and monitoring), we will make a naive SSH attempt. In the following command:

  • -f backgrounds the process
  • -L does port tunneling in the format of <local port>:<remote host>:<remote port>
  • -N do not run a command – just tunnel the port
ssh -f <user>@<remote public IP> -L 24800:<remote public IP>:24800 -N
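
For example, with the placeholders filled in (the public address here is purely illustrative):

ssh -f Ben@203.0.113.10 -L 24800:203.0.113.10:24800 -N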

If this works, you should see a [LISTEN] entry for port 24800 when you list open files:

lsof -n -i | grep 24800

You may need to set your server as the DMZ host on your network. Or, to be safer, you can simply set up port forwarding. We need port 22 and port 24800 to resolve to the Windows server. The instructions for doing this vary widely by router vendor. Typically it is under a WAN section, and it prompts for a port, a destination IP, a destination port, and a protocol. You want ports 22 and 24800 to route to your server’s LAN IP for both TCP and UDP.

Configure your Synergy client to use localhost instead of the remote IP. You should now be able to operate your client from the server’s peripherals via Synergy.

Everything works great until the VPN connection is made, because that severs the SSH connection. To recover automatically, I added autossh to persist the tunnel. On the OS X client, instead of plain ssh, run the following:

AUTOSSH_POLL=10 autossh -M 20000 -f -N <user>@<remote public IP> -L 24800:<remote public IP>:24800

Now when a VPN connection is made, or any other disconnection happens, autossh (polling every 10 seconds, with port 20000 as its monitoring channel) will detect that the tunnel is no longer alive and retry. Because Synergy’s software also retries, your connectivity should begin working again after a few seconds.

Thanks to Synergy for making a solid product, and for having first class Linux support.

Hardware, Linux, Open-source, Ruby, Software, Uncategorized

Delayed Job Performance Tuning

We found a bug. A bug that affected a lot of historical records that we now have the pleasure of reprocessing. Fortunately we already had an async job infrastructure in place with the delayed job gem. Unfortunately, this gem is intended for fairly small batches of records and isn’t tuned to handle 1M+ records in the delayed_jobs table.

After reading some best practices, we decided on a queue-based approach. To keep our day-to-day async jobs running, we would use a “default” queue, and to reprocess our old records we used a new queue. Starting the workers up with a “--queue” flag did the trick. We had hardware dedicated to day-to-day operations, and new hardware dedicated to our new queue operations. Now it was simply a matter of filling up the queue with the jobs to reprocess the records.
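
Starting the two sets of workers looked roughly like the following (a sketch only: the script path, worker counts, and queue name are illustrative; the delayed_job daemon accepts -n for the number of workers and --queue to pin them to a queue):

# on the hardware dedicated to day-to-day jobs
RAILS_ENV=production bin/delayed_job -n 2 --queue=default start

# on the hardware dedicated to reprocessing
RAILS_ENV=production bin/delayed_job -n 4 --queue=new_queue start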

Our initial approach maxed out the CPU on our database server, largely because we had never tuned the SQL in our async jobs. Because the volume we processed was always low, this was never really a noticeable problem, but when we threw lots of new jobs into the queues it became very noticeable. The workers would start up, then mysteriously die. After some digging in /var/log/kern.log we discovered the workers were being killed by the out-of-memory (OOM) killer. Attaching a small swap partition helped, but once you’ve hit swap, things become horribly slow. What’s the point in keeping the worker alive if it’s running a thousand times slower? Clearly we needed to profile and tune. So we did. (The specifics are out of scope for this article, but it involved consolidating N+1 queries and limiting the columns returned by the SELECT.)

With our newly tuned SQL, our spirits were high as we cranked up the workers again, only to reach the next bottleneck. And this is where databases get frustrating. Each time a Delayed Job worker looks for work, it runs a query to find out which job to pick up, and it locks that record by setting locked_at and locked_by. The query looks like this:

UPDATE delayed_jobs
SET `delayed_jobs`.`locked_at` = '2016-06-05 11:48:28', 
 `delayed_jobs`.`locked_by` = 'delayed_job.2 host:ip-10-203-174-216 pid:3226' 
WHERE ((run_at <= '2016-06-05 11:48:28' 
AND (locked_at IS NULL OR locked_at < '2016-06-05 07:48:28') OR locked_by = 'delayed_job.2 host:ip-10-203-174-216 pid:3226') 
AND failed_at IS NULL) 
ORDER BY priority ASC, run_at ASC 
LIMIT 1;

The UPDATE does an ORDER BY, which results in a filesort. Filesorts are typically something an index can resolve, so I optimistically added the following:

CREATE INDEX delayed_job_priority
ON delayed_jobs(priority,run_at);

Sadly, this index was completely ignored when I ran an EXPLAIN on my UPDATE. The reason is that MySQL doesn’t execute an UPDATE query the same way it would a SELECT with the same conditions. The index probably made things worse, because now each record update also requires an index update. I could fork the code and use some type of transaction isolation level to get the best of both worlds: an index-backed SELECT, followed by a quick UPDATE of a single record by id. But there are easier solutions to try first.

My UPDATE statements were pushing 40 seconds in some cases according to MySQL. Eventually the lock wait timeout is exceeded and you see an error in the delayed_jobs.log:

Error while reserving job: Mysql2::Error: Lock wait timeout exceeded; 
try restarting transaction

Jobs were moving very slowly, and throwing more workers at the problem didn’t help, because each time a worker picked up a job it waited 40+ seconds. The UPDATE was doing a filesort, and the index was being ignored (and MySQL doesn’t support index hints on an UPDATE). It was pretty clear that all of the jobs in the reprocessing queue needed to find a new home, so the filesort would no longer have to churn through them. I settled on the following solution:

CREATE TABLE delayed_jobs_backup LIKE delayed_jobs;

INSERT INTO delayed_jobs_backup
SELECT * FROM delayed_jobs WHERE queue='new_queue';

DELETE FROM delayed_jobs WHERE queue='new_queue';

This creates a new table with the same structure as the existing delayed_jobs table, populates it with the jobs that needed to find a new home (all 1M+ of them), and finally deletes those jobs from the original delayed_jobs table. Be careful doing this, and run some SELECT/EXPLAIN queries in between to ensure you are doing what you think you are doing. (Deleting 1M+ records from a production database makes me sit up in my chair a bit.)

Looking at MySQL’s process list, my UPDATE statements no longer sit in a system lock for ages (presumably because the table is now small enough that the filesort is mostly painless):

mysql> SHOW FULL PROCESSLIST;
# Id, User, Host, db, Command, Time, State, Info
1, user, 0.0.0.0:34346, localhost, Query, 0, System lock, UPDATE ...

The important columns here are Time (in seconds), State, and Info. This shows that job locking was now happening quickly; I was seeing Time values of 40+ seconds before. I kept referring back to this process list to verify that the UPDATEs remained snappy while I varied the number of workers running and the number of jobs in the queue. My goal was to keep the UPDATE lock times under 2 seconds. Adding more workers pushed the times up; adding more jobs to the queue pushed the times up. It’s a balance that probably depends very much on what you are processing, how much your database can handle, and what the memory constraints are on your worker servers.
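
Rather than re-typing SHOW FULL PROCESSLIST, something like mysqladmin can repeat it on an interval (assuming the client can authenticate, e.g. via ~/.my.cnf):

mysqladmin -i 5 processlist    # re-print the process list every 5 seconds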

To conclude: my job over the next few days will be to run the following statements to move jobs back into the delayed_jobs table 10,000 at a time:

-- ordering by id keeps the INSERT and DELETE working on the same 10,000 rows
INSERT INTO delayed_jobs
SELECT * FROM delayed_jobs_backup ORDER BY id LIMIT 10000;

DELETE FROM delayed_jobs_backup ORDER BY id LIMIT 10000;

You can of course automate this (a sketch follows below), but my objective was met: the jobs can reprocess old records without impacting the day-to-day jobs in the default queue. When the delayed_jobs table is almost empty, I move over another batch of jobs from the delayed_jobs_backup table. Rinse and repeat until there are no more jobs left to process. It’s a bit more painful, but day-to-day operations continue to function, and I can cross the reprocessing task off my list of things to do. All without any code changes!
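
If you did want to automate the batch moves, a small shell loop along these lines would do it (a hypothetical sketch: the database name, thresholds, and sleep interval are made up, and it assumes the mysql client can authenticate via ~/.my.cnf):

#!/bin/bash
# Move jobs back into delayed_jobs in batches of 10,000 whenever the live
# reprocessing queue has nearly drained, until the backup table is empty.
DB=myapp_production  # hypothetical database name

while true; do
  remaining=$(mysql -N -e "SELECT COUNT(*) FROM delayed_jobs_backup" "$DB")
  [ "$remaining" -eq 0 ] && break

  pending=$(mysql -N -e "SELECT COUNT(*) FROM delayed_jobs WHERE queue='new_queue'" "$DB")
  if [ "$pending" -lt 1000 ]; then
    mysql -e "INSERT INTO delayed_jobs SELECT * FROM delayed_jobs_backup ORDER BY id LIMIT 10000;
              DELETE FROM delayed_jobs_backup ORDER BY id LIMIT 10000;" "$DB"
  fi

  sleep 60
done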

I’ve been reading up on transaction isolation levels, thinking that something like a SELECT ... FOR UPDATE lock might be a worthy contribution to the delayed_job codebase: http://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html

Computers, Hardware, Software, Vacations

Welcome to 2010

I broke down. I compromised my moral integrity. I did what I laughed at others for doing. I bought a tablet, and I couldn’t be happier.

What changed? Did my opinion change? Not drastically. I still don’t see them as the future of computing. They are a consumption device, and it would be difficult to do much more with them than that. But that is what I wanted.

Pricing has also changed drastically. When the iPad first came out, it was a 10″ behemoth that cost around $500, putting it well outside of my interests. (A high-end laptop could be found starting at ~$800.) Within the last year, however, some solid contenders have entered the 7″, ~$200 arena. The nVidia Tegra 3 chipset, quad-core processing, and the latest Android experience on sale in the Nexus 7 for $155 shipped was too good for me to pass up.

I have long said that I couldn’t justify a tablet when I have a desktop, two laptops, and a smartphone all within reach. My circumstances changed during our latest vacation, however, and I found myself draining my smartphone’s battery daily trying to stay connected. I have discovered a few areas where a tablet excels over other devices:

  • Entertaining when space is limited (car, airplane, bed, etc)
  • Reading eBooks
  • Reading technical posts with code examples
  • Games developed for the touchscreen
  • Quick reference during certain table-top activities…

I find my discovery process similar to getting my first smartphone. I remember a few days after I had my smartphone I had the realization that I could get from my location to any other location without ever missing a turn again. I drove to a retail store to make a purchase and realized that I could mitigate buyer’s remorse by price checking while standing in the store! Information is power, and I had the Internet in my pocket. I could check reviews, prices, availability, stock nearby – all without carefully planning my trip beforehand at home.

While the smartphone does some things really well, it is a small form factor. If you have ever upgraded your monitor to a larger size, or your computer to a faster model, you know the feeling you get when you migrate from a smartphone to a tablet.

Despite my fears, I don’t think it will quickly become a device that collects dust. I’ve heavily used my smartphone for close to four years now, and there is no sign that this will change in the near future. The tablet is the extension of the smartphone.

I’m not saying to go blindly buy one – you should still have good reasons, and stick to a budget. But if you find yourself running your battery down on your smartphone from overuse, let me recommend a tablet to you.

And welcome to the year 2010!

Computers, Hardware, Linux, Software

Trials and Tribulations of the D-Link DNS-320

A funny noise is something you never want to hear coming out of your primary disk drive. If you are like me, you probably never think too much about backing up your important data until it becomes a looming emergency. Listening to that noise, and watching the red HDD indicator light stay on constantly, I realized I had some pretty important stuff on that drive: wedding photos, baby photos, videos, documents, and other things that you can’t just re-download. Why not use cloud backup? I’m a bit skeptical of entrusting others with my data. At the end of the day, a failure on their end may net you a refund, and you could even take them to court, but no matter what, you are never getting that content back. It was time to take matters into my own hands.

The Setup: I bought a two-bay D-Link DNS-320 and two Western Digital 2TB Red hard disk drives. The plan was to set up the disks in RAID 1, so the odds of a simultaneous failure were as statistically low as my finances would allow. I liked the DNS-320 because it had compelling features and a two-drive system for a fraction of the cost of other names like Synology. I wanted CIFS and NFS sharing, and RAID 1; everything else was a perk. Pleasantly enough, the DNS-320 also comes with a UPnP server and has some nice SMART monitoring options, which will send me an email if errors are detected. I would create one 2TB partition out of the disks and share it over the network with restricted access. My wife and I would both be able to connect from our computers to back up any data we wanted.

Configuration: CIFS setup was a breeze, but NFS required a bit more poking around, and this post is dedicated to overcoming some of the issues I had. Within the NAS web interface, partition your drives and grab a cup of coffee. Straightforward stuff. I created a single partition labeled “Volume_1”. After this is complete, go to “Management” -> “Application Management” -> “NFS Service”, check “Enable”, then click “Save Settings”.

Go into “Management” -> “Account Management” -> “Users / Groups” and create a user account if you have not already done so. Within “Account Management”, click on “Network Shares”, and click the “New” button. This will launch the wizard for setting up a share. Select the appropriate users, groups, settings, and on the “Step 2-1: Assign Privileges – Access Methods”, ensure that the “NFS” checkbox is checked.

Move along by clicking “Next” until you reach “Step 2-1-2: NFS Settings”. You will need to specify the host IP address of the client that will be connecting to this share; this whitelists the supplied IP address as a client location. I’m not positive what format you would use to denote multiple IP addresses, but an asterisk allows all hosts. I connect with multiple machines via NFS, so a single IP address is not sufficient. Despite being accessible from any IP address, the client will still need to authenticate using the credentials entered in “Step 1” and “Step 1-2”, which I consider good enough. Also, check the “Write” checkbox if you wish to be able to write files to this location.

Client Configuration: You will need to install the nfs package for your distribution of Linux. I am running Ubuntu 12.10, so the package is named “nfs-client”. Install it using “sudo apt-get install nfs-client”.

Now that you have the nfs-client package, you can use the “showmount” utility to list the shares on the NAS device: “showmount -e NAS_IP_ADDRESS” (e.g. “showmount -e 192.168.1.1”). Depending on how you have the disks partitioned and shared in the NAS device, this path will differ.

Export list for 192.168.1.1:
/mnt/HD/HD_a2 *

This path should be consistent with the information in the “Network Shares Information” dialog. This can be accessed by clicking the magnifying glass icon underneath the NFS column in the “Network Shares” interface.

You can now mount your device using the “mount” command: “mount -t nfs NAS_IP_ADDRESS:/REAL_PATH /path/to/mount_point”. The NAS_IP_ADDRESS is the IP address of the NAS device, REAL_PATH is the path obtained either via showmount or the “Network Shares Information” dialog, and “/path/to/mount_point” is just an empty directory somewhere on your local machine.
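
Putting that together with the values from this setup (the mount point is just an empty directory created for the purpose):

sudo mkdir -p /media/nas
sudo mount -t nfs 192.168.1.1:/mnt/HD/HD_a2 /media/nas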

You can also make this mount persistent across reboots using the “/etc/fstab” file. Add a new line to that file, formatted similar to the following:

# /etc/fstab: static file system information.
# ...
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# ...
192.168.1.1:/mnt/HD/HD_a2  /media/nas  nfs rw,hard,intr 0  0

The values here mirror those used in the mount command above. The options column tweaks the mount behavior: “rw” grants read/write access; “hard” retries requests indefinitely; and “intr”, coupled with “hard”, allows those requests to be interrupted if the NFS server becomes unreachable. I recommend these options when copying large amounts of data, or when on a wireless network, as failed transmissions will be silently retried rather than raising an error on temporary timeouts.
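
After adding the entry, you can test it without rebooting; mount will read /etc/fstab and attach anything not already mounted:

sudo mount -a          # mount everything listed in /etc/fstab
df -h /media/nas       # confirm the NFS share is attached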

Note on Permissions: Initially I copied my content over to the NAS via NFS, and I was then not able to view the contents via CIFS (Windows sharing). The problem came down to permissions. The directories did not have the execute bit set for “other”, so permission was denied when a request was made to show the contents of a directory. This was difficult to track down, as the UPnP server showed all my media without any permission issues. A quick search of UNIX permissions revealed that the execute bit on a directory is necessary to access its contents, and the NAS accesses content created via NFS as “other” (neither user nor group permissions apply). You can recursively grant the execute bit on any existing content by issuing the following command from the top directory: “chmod o+x -R /root_directory”, where “root_directory” is the folder you want to change. The “-R” flag applies this permission recursively to all content within.

Lessons Learned: I have about 50GB of pictures and video, and another 20GB of purchased content. I underestimated how long this would take to transmit over an 802.11g connection. 54Mbps is just under 7MB/s in theory, which means a 50GB transfer takes over two hours to complete, and that is not counting temporary speed drops, hard drive access times, retries, etc. When working with large amounts of data, I recommend a 1Gbps Ethernet connection. I will probably be investing in a new router soon that can accommodate these higher speeds.

The D-Link DNS-320 is a solid first NAS device for under $150. Other than a few gotchas when setting it up (most self-inflicted), this device is NFS friendly and has made a fine addition to my hardware ecosystem.

Computers, Hardware, Linux, Personal, Software, Thoughts

Self Realizations – Part I

During World War II, when you needed to get communications between two points, you often had to run a telegraph wire through enemy territory. I’m picturing the scene from Enemy at the Gates – where a soldier puts on a helmet, gets a spool of wire, and crawls on his belly through the mud, dodging enemy fire, and landmines. The goal is to not get picked off before your reach your destination because everyone is counting on you to make the connection.

Lately I have been engrossed in a side project that has given me an opportunity to work with the Android SDK. I have been so tickled at figuring out everything for the first time. Though I am moving at a snail’s pace, and it can be painful to have to constantly reference the documentation, StackOverflow, and Google at large, it has been a fun experience. Small things like talking to a database, or rotating a bitmap feel like big achievements, and make the struggling worth it. Seeing the Java side of the world puts some things about Ruby into perspective too. I know I am better having tinkered with it, and I had fun while doing it.

I have come to realize that this is why I love programming. I love running that first line across unknown territory. It is proof that I can accomplish what I set out to do, even with almost no prior knowledge of an environment. It is the same rush I get when tinkering with my car, building computers, installing a ceiling fan, compiling a kernel, or raising a kid. It is about creating something to solve a problem using common tools, and applying knowledge to make something awesome of it all. If I didn’t program, I’m not sure what other career would give me this same chance to tinker with new stuff.

As part of this self-realization, my child-like excitement over these accomplishments has shown me how much I miss this in my current work. I’m not building new things anymore. I’m just polishing the same things, and the details don’t excite me the way the prototypes do. I like “broad strokes”. We need people who do the detail work too, but it’s decidedly not for me.

So find out what it is that you love, and make it happen. Your job and your passion aren’t always in phase, but don’t let your passion die out just because you are getting paid to do something else.

Computers, Events, Family, Hardware, Personal, Software, Thoughts, Windows

The American Legend

Lately, John Templeman and I have been knocking out some serious multi-player action on the PlayStation 3. Uncharted 2 and Resident Evil 5 have been very fulfilling games from a co-operative standpoint. The PC finally drew me back in, though. Starcraft II, Left 4 Dead, Age of Empires II, and some other titles have led me to do some housecleaning on my computer setup. The first thing that had to go was my tiny (by comparison) 19″ Acer VGA monitor. It looks like a joke next to my wife’s beautiful 24″ ASUS HDMI monitor. Now I have a brand new ASUS MS236H monitor (shown here). Next to go was my broken, six-year-old installation of Windows XP. Seriously, I have kept the same install running through countless infections, blue screens, hardware upgrades, service packs – the works! Enough was enough, and after my generational jump over Windows Vista, I landed on Windows 7.

I had 7 running on the wife’s machine for a while, and despite it being reloaded a few times (what can you expect, it’s still Windows), I have been fairly impressed with it. Kristin is a good litmus test of the stability of a piece of software. I installed it on my machine and ran into a bunch of problems I didn’t anticipate. Apparently my Sound Blaster Live! 5.1 sound card hit its EOL around 1998, so there are no official drivers. I am surprised that Windows didn’t “just work” with this given its age and its established user base (perhaps I am the last holdout?). After installing some random driver from some guy named Peter in a forum thread about the problem (who advised me not to use his driver with more than 2GB of RAM if I valued uptime), I got the sound working.

Next came the printer. Holy hell. The same driver package (the EXACT same package) that just worked on my wife’s computer kept failing to install the printer driver for our Dell 1600n. After several reboots, and the day turning into ’morrow, I finally settled on installing the HP LaserJet 4100 drivers. They seem to be mostly interchangeable, so I guess I have solved that problem too.

Another biggie was the unresponsiveness of web browsers on the OS (IE being the exception, where it is always slow). I was experiencing page load times so long that they were timing out while waiting for a connection. I thought it was Comcast, but after some Googling, I found that the “Automatic Proxy Discovery” setting was turning my inbound pipe into dial-up.

The weather here has really started to cool down, with highs in the 60s. It should be perfect weather to walk to the train station and get some nice exercise without breaking a sweat, but I just haven’t been in the mood. It’s probably due to the soul-crushing amount of work that I have to do to keep on top of my classes. If textbooks had any competition for readers’ choice, the authors might have to actually invest time in the things that make a book worth reading, such as clarity and interestingness. The days of sailing on the Charles River with Hoydis are waning, and the days where I have to strap my feet into a snowboard are fast approaching.

Our Thanksgiving vacation plans are questionable at best right now, so who knows when I will be back in Atlanta. We are making the best of our situation here regardless. Kristin and I have decided to throw a Halloween party for anyone who is interested in joining us. It will be a costume party, pot-luck, and beer-fest, and it should be a great way to bring in the holiday. Zoo keepers, it turns out, have some of the most interesting stories, and I am told they will be in attendance. We haven’t gotten to show off our house, and all its critters, to many folks yet, so here is your chance to see how people outside of the city live. All are invited – and bring a friend!

The wife is still job searching after the crater that Capron Park management left on her career path. She has applied here and there and we are eagerly awaiting any leads. In the meantime, we have been enjoying the atypical time we get to spend together on the weekends. Last week, we went to Conner’s Farm and did our first Maize Maze. It was a-mazing. So corny… Ok, I’ll stop now. Thanks to Michael Hoydis for the invite – we had fun. The picture here is one that Michael and Hannah snapped while we were walking in the Apple Orchards. That green sticker on our shirts means we successfully navigated Clint Eastwood’s face. Seriously.

Tomorrow is Friday, and I have a three-day weekend. Let’s hope I can convince some of my co-workers to break away from programming for a few minutes and go grab some drinks… if I make it that far.

Hope everyone, and all of their new children are doing well! Hopefully we will see everyone soon back on the flip side.

Update: A picture of the monitor in its new home:



Apple, Computers, Hardware, Linux, Open-source, Personal, Ruby, Software, Windows

Living in an Apple World

Welcome readers to what is a first here on my blog – a review about Apple’s OS X. As some of you may know, part of my new job is working on a Mac for 8 hours a day, 5 days a week. Someone asked me about my experiences, and I feel up to sharing my findings. I want to be fair in my assessments, so if it sounds like I am starting to get a little slanted, keep me in check with a comment!

First things first – the initial impression. I have a 27″ iMac, and I was initially impressed by the appearance of the machine. The iMac screen and case are one piece, so I have plenty of room to kick around beneath my desk with minimal cord entanglement (not that it matters, because I sit cross-legged all day). The compact-style keyboard has an aluminum casing that matches the iMac, and the mouse is the Mighty Mouse. Both are wired, which I appreciate – especially on the mouse. I hated the compact keyboard: it feels shrunken, and the addition of the “Fn” key in the bottom row meant that every time I tried to press “Control” I missed. After swapping it out for a full-sized keyboard I was much happier, and I even unlearned some bad habits. The Mighty Mouse absolutely sucks. The tiny wheel stops responding all the time from the slightest speck of dirt, and you have to turn it over and rub it back and forth on your jeans or the mouse pad. Its one saving feature is the ability to scroll vertically and horizontally, which is occasionally helpful. I am a right-click fan, and though the regions are invisible, the area of the mouse that registers as a right click is about ten times smaller than the left; it feels like just the edge of the mouse.

The keyboard on a Mac differs in important ways from its PC counterparts. The “Windows” key is replaced with the Command key, which is utilized far more than the Windows key ever was. In fact, most operations on the machine are done using Command (copy, paste, new tab, close window, etc.), effectively making it closer to the “Control” key in Windows. However, the Control key remains, which actually introduces a whole new set of key combinations for shortcuts. The Command key is located next to the space bar, which is much more convenient than the extreme left placement of the Control key: I do copy, paste, and other operations with my thumb, not my pinky finger – much less strain.

The computer screen can be tilted, which is nice since the whole world seems to be moving toward annoying high-gloss screens; I can tilt it down and out of the fluorescent overhead lights. I really feel that gloss is a showroom gimmick, just like turning the brightness up to max on the TVs in the store. If I wanted to look at myself, I would sit in front of a mirror. Fortunately, I have a second non-gloss monitor, and I do most of my coding on that screen. It would also be nice if the monitor had a height adjustment, as the second monitor isn’t quite the height of the iMac screen.

Enough about appearance – let’s talk hardware. This is a dual-core Intel processor with 2GB of memory (later upgraded to 4GB). The video card is decent, I suppose (though the interface can get quite “laggy” at times). I don’t have any idea what the machine costs, but this is definitely unimpressive hardware. 2GB of RAM is the minimum I would work with, and it being slow laptop RAM doesn’t help at all. At least there isn’t a laptop hard drive in it too.

As for the operating system, it seems pretty stripped down. This isn’t necessarily a bad thing – I can quickly find what I am looking for without going on a damn field trip through obscure dialog windows. The flip side is that it doesn’t feel very “customizable”. You use the stock features, or you don’t use a Mac. Perhaps there are a bunch of third-party utilities that I don’t know about? Sometimes I am disappointed by the lack of customization options (there are just a handful of settings for the Dock). To be honest, I am not sure what I would customize, but I like to poke around, and I often leave System Preferences disappointed at not having found “setting xyz”.

I really enjoy the file system indexing; it is the best implementation of full-text search I have seen. It doesn’t bog down the computer, and the results update instantly. Magic. It is effectively the starting point for all my open actions. I don’t know why it isn’t available for the first 10 minutes after a boot, but I don’t shut down that much, so it’s OK.

I was surprised by the lack of a default system-wide notification system – something that Growl has aimed to fill. I was also disappointed by the lack of package management on the Mac – again, third-party solutions exist. The system updates are just as annoying as in Windows, which was a disappointment. Once, the “restart” prompt stole my typing focus and proceeded to shut down the system. A few times the machine has “beach balled” (the Mac “hourglass”) and hard locked. Most of the time it is fairly responsive and stable, which I can appreciate.

Other points of interest are the window management features. I use Exposé almost as regularly as I do the task switcher (Command + Tab), though admittedly I sometimes get lost in the special effects and forget what I was doing. There are a bunch of other window groupings, but I don’t really find them that useful. One particularly frustrating observation is that once you minimize a window, you can’t Command + Tab back to it. Isn’t that the point of the task switcher? It even shows up in the task switcher, but when it is selected, absolutely nothing happens.

As for the software available on the Mac, it is more comprehensive than on Linux and less comprehensive than on Windows. Some of my co-workers commented that in OS X there is usually one utility to do something, whether you like it or not. I use Google Chrome, JetBrains’ RubyMine, Ruby, Terminal, Lotus Notes, Adium, and Propane almost exclusively. Because of this, I can’t really assess the state of the Mac software ecosystem, but I will say that all of these programs run damn well on the Mac. The only software that crashes on me is Flash. Flash on Linux and Windows is stable, but on the Mac roughly one in ten uses causes the browser tab to lock up. I am not sure whether this is a Chrome issue or not, but something is seriously wrong with the state of Flash on my Mac. Now I understand why so many Mac users hate Flash – as a Windows user, I never experienced the constant crashing.

In summary, due to the nature of my work, I use the Mac at work in essentially the same way I would use Linux. The terminal is where I spend my time, and I am more or less indifferent about the operating system around it, as long as I can install the system libraries I need for Ruby extensions and it stays responsive. My next computer purchase will be a netbook with Ubuntu on it, as I can’t justify paying Apple’s designer prices to use a terminal and a web browser. Toe to toe with Windows and many Linux distributions, OS X excels in many areas. It’s a fantastic operating system, but I am not sure it is worth its cost. If I could throw it on my PC at home, it would be worth $100. Buying a special machine just to run it is just silly.