Computers, Hardware

Move over Router – Mesh is Here

It’s been a while since I’ve had my quality of life dramatically improved by a device upgrade. I recently moved my home office upstairs, and with it came a shuffling of wireless equipment from the first floor to the second. The new office is on the opposite side of the house, and on a different floor, from the living room with our smart TV. No matter how I positioned the wifi router, one side suffered. And putting the router in the middle meant it would be in the kids’ rooms.

Adding a wireless range extender practically made the problem worse: devices tried to connect to the wrong network for their location, and speeds while connected to the range extender were terrible.

Fed up, I started researching routers with the longest range, highest speeds, etc. That is when I came across a new category of “mesh” networks. These kits offer multiple access points, called nodes, that promise to seamlessly shuffle clients around based on the optimal connection path. After some research I decided on the TP-Link Deco M4 3-pack. I had a promo code and the total price came out to ~$150 shipped.

After using it for a few weeks, I’m ready to review. Spoiler alert – I’m bursting with happiness. I’ll address a few main categories of these devices:

Range

I have a 2,500 sq ft house on 2 floors, plus a deck. The 3 nodes cover this easily. Nowhere in the house or yard do I get less than 3 bars. I have configured them in a triangular arrangement, with two nodes on opposite sides of the house on the 2nd floor (as diagonal as I could get them). The other node is on the 1st floor, halfway between the nodes on the top floor.

I haven’t tried a two-node setup, which might be more representative of what the old router + range extender were delivering, but why would I? The whole point of a mesh network is that you can keep adding nodes until you have great coverage.

As an experiment I walked around the cul-de-sac and stayed connected an impressive way out. Whatever is inside these nodes (or maybe they are working in aggregate) has great transmitting power, all without looking garish with external antennas everywhere.

Speed

On the TP-Link Deco M4 network, I get 300+Mbps anywhere inside the house. Outside on the deck this drops to 200Mbps. For comparison, with the old ASUS RT-AC68U + Linksys RE6500 range extender I would get ~200Mbps in the same room as the router. The range extender never got above 100Mbps, and the deck (when I could get a signal) would be around 20Mbps. The mesh network link speed blows away the traditional router + extender setup.

One more technical note here – the nodes are tri-band which means that you get the full bandwidth through each node instead of it being halved.

Setup

The TP-Link (like many of the other commercial mesh kits) comes with a smartphone app to set up the devices. I was initially turned off by this. After all, everything today claims it needs you to install an app when a basic mobile website would probably be sufficient.

The app, however, is clean, and it aided in setup versus the traditional laptop approach, where you potentially have to plug in to the router with an ethernet cable just to configure the network initially.

The nodes are all identical, so it doesn’t matter which one you connect to the modem. It correctly figured out that this was the Internet connection, and even circumvented the silly tendency for modems to only bind to one MAC address. The physical setup involves nothing more than plugging each node into an AC outlet, and plugging the initial node into the modem. The app detects the node you are next to, and walks you through setting up the wireless network.

Flashing lights on each of the nodes informs you if they are online and working properly or experiencing an issue.

The nodes all share the same wifi network name, and devices will switch between them automatically. Setup options are pretty standard (maybe even somewhat limited). You choose an SSID, create a password, and choose whether to broadcast the network. You don’t even pick between 2.4GHz and 5GHz networks – this is all managed for you, and each device will use the best network available. My old laptop can’t see 5GHz networks and connected just fine. The Deco offers a few other features like QoS settings, reserved IP addresses, blacklisting, reports, etc.

Price

This looks to have been a historical weak point for mesh networks. New technologies typically come with premium price tags. I think enough time has passed that mesh network kits are about on par with a new router and range extender. I paid $140 after catching a sale that took $40 off the price.

Conclusion

I would absolutely recommend a mesh network to just about anyone, possibly with the exception of someone that has advanced needs for their network setup. This feels like an evolution of the wireless router. It offers superior range and speeds relative to my previous router + range extender setup, for about the same price. Setup is painless, and it has fixed all of my wireless issues throughout the entire house. I’ve retired my router and range extender.

I’ve also retired the USB wireless adapter for my desktop, since I have a mesh node sitting on the desk and have opted instead to connect with an ethernet cable. I’ve also managed to retire a wifi bridge for a NAS device, because again, with 3 nodes I can easily place the NAS next to a node and connect it with an ethernet cable.

All said and done, I threw out more equipment than I set up. This was an absolutely great decision in my opinion, and at the risk of sounding like a sponsored post, I can say I couldn’t be happier.

Apple, Computers, Hardware, Linux, Software, Windows

Using Synergy Over VPN

I’ve been watching a lot of Linus on Tech, and one of their sponsors sells a product called Synergy. More information on their product page: https://symless.com/synergy. To summarize, this is a software KVM (no video, so I guess it is “KM”?) solution. The use case was perfect for me. On my desk I have an Apple work laptop, a Windows desktop, and a Linux personal laptop. Anyone that has done extensive work on a laptop keyboard and touchpad knows that it isn’t optimal. I didn’t want multiple keyboards and mice on top of my desk because of the clutter. I dropped some money for Synergy, and it just works!


That is, until I had to connect to our company VPN the next week. They use a full tunneling solution, so when I connect, I lose everything. I can’t print, I can’t access my NAS, and most importantly I can’t access my keyboard and mouse. (The video is fine because it is hard-wired to an external monitor.) What to do?

SSH to the rescue! What is SSH? Secure Shell: a protocol that allows one computer to securely interface with another. However, we will just be using it for port forwarding, not for an interactive session. The goal is to take the OS X machine (Synergy client) and SSH into the Windows machine (Synergy server). Using this SSH connection, we can forward ports within it – a tunnel running inside the SSH connection. This will expose local port 24800 on OS X that actually points to port 24800 on the remote server. This is the port that Synergy uses for its connections.

You will need a few tools, and a little patience. Having just gone through this, I’m sharing for posterity, or maybe for anyone that has thrown in the towel with how crippled VPN makes accessing home devices.

I have the following Synergy setup:

  • Windows 10 Synergy server (keyboard and mouse are physically connected to the desktop)
  • OS X Synergy Client
  • Linux Synergy Client
  • Router with a local area network all these devices share
  • Admin access to the router for port forwarding
  • Autossh package for OS X (available via brew)

First step, get Windows 10 up to speed with SSH. How this isn’t built in as a service in the year 2017 I have no idea. Grab the OpenSSH server package for Windows from https://www.mls-software.com/opensshd.html . After downloading, extract and run the setup file. This will create a new Windows service for OpenSSH that will run on port 22. It prompts you to generate an SSH key for the server.

Once this server is running, you will need to add your user to the list of SSH users. Open up PowerShell as an administrator and change into the C:\Program Files\OpenSSH\bin directory. Run the following commands:

mkgroup -l >> ..\etc\group
mkpasswd -l >> ..\etc\passwd

Try and connect to your SSH server from the OS X client:

ssh <user>@<server IP> # e.g. ssh Ben@192.168.1.95

You should be prompted for your Windows password. Once you can successfully log in to the server, we can set up public key authentication. This removes the need to type in your password, because you identify yourself with an SSH public key. From your OS X machine, get your public key:

cat ~/.ssh/id_rsa.pub
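
If you don’t already have a key pair on the OS X machine, generate one first (a quick sketch – accepting the default file location keeps the id_rsa.pub path above valid):

ssh-keygen -t rsa -b 4096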

Put the contents of this file on your SSH server in the file C:\Program Files\OpenSSH\home\<user>\.ssh\authorized_keys (this is actually a symlink to C:\Users\<user>\.ssh\authorized_keys). If the .ssh directory doesn’t exist, you will need to create it first. Now we need to configure the server to allow public key authentication. Edit the C:\Program Files\OpenSSH\etc\sshd_config file and change the following lines:

StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys

Restart the OpenSSH server for the changes to take effect:

net stop opensshd
net start opensshd

You should now be able to SSH into the server same as before but without being prompted for a password.

Now we are ready to create an SSH tunnel. Before we incorporate AutoSSH (which handles retries and monitoring) we will do a naive attempt to SSH. In the following command:

  • -f backgrounds the process
  • -L does port tunneling in the format of <local port>:<remote host>:<remote port>
  • -N do not run a command – just tunnel the port
ssh -f <user>@<remote public IP> -L 24800:<remote public IP>:24800 -N

If this works, you should see a [LISTEN] entry for port 24800 when you list open files:

lsof -n -i | grep 24800

You may need to set your server as the DMZ on your network. Or, to be safer, you can simply set up port forwarding. We will need ports 22 and 24800 to resolve to the Windows server. The instructions for how to do this vary widely by router vendor. Typically it is under a WAN section, and it prompts for a port, a destination IP, a destination port, and a protocol. You want ports 22 and 24800 to route to your server’s IP for both TCP and UDP.

Configure your Synergy client to use localhost instead of the remote IP. You should now be able to operate your client from the server’s peripherals via Synergy.

Everything works great until the VPN connection is made. The reason is that the SSH connection is severed. In order to recover automatically, I have added autossh to persist this tunnel. On the OS X client instead of running SSH do the following:

AUTOSSH_POLL=10 autossh -M 20000 -f -N <user>@<remote public IP> -L 24800:<remote public IP>:24800

Now when a VPN connection is made, or a disconnection happens, the autossh package will detect that it is no longer alive and retry. Because Synergy’s software also retries, after a few seconds your connectivity should begin working again.

Thanks to Synergy for making a solid product, and for having first class Linux support.

Hardware, Linux, Open-source, Ruby, Software, Uncategorized

Delayed Job Performance Tuning

We found a bug. A bug that affected a lot of historical records that we now have the pleasure of reprocessing. Fortunately, we already had an async job infrastructure in place with the delayed_job gem. Unfortunately, this gem is intended for fairly small batches of records and isn’t tuned to handle 1M+ records in the delayed_jobs table.

After reading some best practices we decided on a queue-based approach. To keep our day-to-day async jobs running we would use a “default” queue. And to reprocess our old records we used a new queue. Starting the workers up with a "--queue" flag did the trick (a sketch of the worker invocations follows below). We had hardware dedicated to day-to-day operations, and new hardware dedicated to our new queue operations. Now it was simply a matter of filling up the queue with the jobs to reprocess the records.
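
A rough sketch of starting the two worker pools (the worker counts and RAILS_ENV here are illustrative – the delayed_job script accepts a --queue flag and -n for the number of worker processes):

# day-to-day hardware stays on the default queue
RAILS_ENV=production bin/delayed_job --queue=default -n 2 start

# new hardware works only the reprocessing queue
RAILS_ENV=production bin/delayed_job --queue=new_queue -n 4 start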

Our initial approach maxed out the CPU on our database server. This was largely due to us not tuning the SQL in our async jobs. Because the volume we processed was always low, this was never really a noticeable problem. But when we threw lots of new jobs into the queues, it became very noticeable. The workers would start up, then mysteriously die. After some digging in /var/log/kern.log we discovered the workers were being killed by the out-of-memory (OOM) killer. Attaching a small swap partition helped, but once you’ve hit swap, things become horribly slow. What’s the point in keeping the worker alive if it’s running a thousand times slower? Clearly we needed to profile and tune. So we did. (The specifics are out of scope for this article, but they involved consolidating N+1 queries and limiting the columns returned by the SELECT.)

With our newly tuned SQL, our spirits were high as we cranked up the workers again – only to reach the next bottleneck. And this is where databases get frustrating. Delayed job workers run a query each time they look for a new job, to find out which job to pick up. The query puts a mutex lock on the record by setting locked_at and locked_by. It looks like this:

UPDATE delayed_jobs
SET `delayed_jobs`.`locked_at` = '2016-06-05 11:48:28', 
 `delayed_jobs`.`locked_by` = 'delayed_job.2 host:ip-10-203-174-216 pid:3226' 
WHERE ((run_at <= '2016-06-05 11:48:28' 
AND (locked_at IS NULL OR locked_at < '2016-06-05 07:48:28') OR locked_by = 'delayed_job.2 host:ip-10-203-174-216 pid:3226') 
AND failed_at IS NULL) 
ORDER BY priority ASC, run_at ASC 
LIMIT 1;

The UPDATE does an ORDER BY, which results in a filesort. Filesorts are typically something an index can resolve. So I optimistically added the following:

CREATE INDEX delayed_job_priority
ON delayed_jobs(priority,run_at);

Sadly, this index was completely ignored when I ran an EXPLAIN on my UPDATE (see the sketch after this paragraph). The reason is that MySQL doesn’t execute an UPDATE query the same way it would a SELECT with the same conditions. The index probably made things worse, because each record update now also requires an index update. I could fork the code and use some type of isolation level in a transaction to get the best of both worlds: an index-based SELECT, and a quick UPDATE on a single record by id. But there are easier solutions to try first.
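
For anyone wanting to reproduce the check: MySQL 5.6 and later can EXPLAIN an UPDATE directly. A simplified sketch of the reservation query above, run through the mysql client, with placeholder lock values and a hypothetical database name of app_production:

mysql -e "EXPLAIN UPDATE delayed_jobs
          SET locked_at = NOW(), locked_by = 'delayed_job.sketch'
          WHERE run_at <= NOW()
            AND (locked_at IS NULL OR locked_at < NOW() - INTERVAL 4 HOUR)
            AND failed_at IS NULL
          ORDER BY priority ASC, run_at ASC
          LIMIT 1;" app_production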

My UPDATE statements were pushing 40 seconds in some cases according to MySQL. Eventually the lock wait timeout is exceeded and you see an error in the delayed_jobs.log:

Error while reserving job: Mysql2::Error: Lock wait timeout exceeded; 
try restarting transaction

Jobs were moving very slowly, and throwing more workers at it didn’t improve anything. This is because each time a worker picked up a job, it waited 40+ seconds. The UPDATE was doing a filesort, and any index was being ignored. (And MySQL doesn’t support UPDATE hints.) It was pretty clear that all of the jobs from the reprocessing queue needed to find a new home that didn’t negatively impact my filesort. I settled on the following solution:

CREATE TABLE delayed_jobs_backup LIKE delayed_jobs;

INSERT INTO delayed_jobs_backup
SELECT * FROM delayed_jobs WHERE queue='new_queue';

DELETE FROM delayed_jobs WHERE queue='new_queue';

This creates a new database table with the structure of the existing delayed_jobs table. The table is then populated with the jobs that needed to find a new home (All 1M+ of them). And finally, deleted from the original delayed_jobs table. Be careful doing this, and do some SELECT/EXPLAIN queries in between to ensure you are doing what you think you are doing. (Deleting 1M+ records from a production database makes me sit up in my chair a bit).

Looking at MySQL’s process list, I no longer see long-lived System locks on my UPDATE statements (presumably because the table size is now small enough that the filesort is mostly painless):

mysql> SHOW FULL PROCESSLIST;
# Id, User, Host, db, Command, Time, State, Info
1, user, 0.0.0.0:34346, localhost, Query, 0, System lock, UPDATE ...

The important columns here are Time (in seconds), State, and Info. This proves that my job locking was happening quickly – I was seeing Time values of 40+ seconds before. I kept referring back to this process list to verify that the UPDATEs remained snappy while I modified the number of workers running and the number of jobs in the queue. I had a goal of keeping the UPDATE system lock times under 2 seconds. Adding more workers pushed the times up. Adding more jobs to the queue pushed the times up. It’s a balance that probably depends very much on what you are processing, how much your database can handle, and what your memory constraints are on your worker servers.

To conclude – my job over the next few days will be to run the following commands to move jobs back into the delayed_jobs table 10,000 at a time:

INSERT INTO delayed_jobs
SELECT * FROM delayed_jobs_backup LIMIT 10000;

DELETE FROM delayed_jobs_backup LIMIT 10000;

You can of course automate this (a rough sketch follows below), but my objective was met. The jobs can reprocess old records without impacting day-to-day jobs in the default queue. When the delayed_jobs table is almost empty, I move over another batch of jobs from the delayed_jobs_backup table. Rinse and repeat until there are no more jobs left to process. It’s a bit more painful, but day-to-day operations continue to function, and I can cross off the reprocessing task from my list of things to do. All without any code changes!
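
Here is a rough sketch of what that automation could look like. Assumptions: MySQL credentials live in ~/.my.cnf, the database is named app_production (hypothetical), and nothing else writes to delayed_jobs_backup while it runs:

#!/usr/bin/env bash
DB=app_production
while [ "$(mysql -N -e 'SELECT COUNT(*) FROM delayed_jobs_backup' "$DB")" -gt 0 ]; do
  # only top the live table back up once the reprocessing queue is nearly drained
  remaining=$(mysql -N -e "SELECT COUNT(*) FROM delayed_jobs WHERE queue='new_queue'" "$DB")
  if [ "$remaining" -lt 1000 ]; then
    mysql -e "INSERT INTO delayed_jobs SELECT * FROM delayed_jobs_backup LIMIT 10000;
              DELETE FROM delayed_jobs_backup LIMIT 10000;" "$DB"
  fi
  sleep 60  # check again in a minute
done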

I’ve been reading up on transaction isolation levels (http://dev.mysql.com/doc/refman/5.7/en/innodb-transaction-isolation-levels.html), thinking something like a SELECT FOR UPDATE lock might be a worthy contribution to the delayed_job codebase.
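
The rough idea, sketched through the mysql client (the worker name and database name are placeholders – the real change would live in the gem’s job reservation code, not in shell):

mysql -e "START TRANSACTION;
          -- lock only the winning row, found via the (indexable) SELECT
          SELECT @job_id := id FROM delayed_jobs
            WHERE run_at <= NOW() AND locked_at IS NULL AND failed_at IS NULL
            ORDER BY priority ASC, run_at ASC
            LIMIT 1
            FOR UPDATE;
          -- then mark it reserved with a cheap primary-key UPDATE
          UPDATE delayed_jobs
            SET locked_at = NOW(), locked_by = 'delayed_job.sketch'
            WHERE id = @job_id;
          COMMIT;" app_production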

Computers, Hardware, Software, Vacations

Welcome to 2010

I broke down. I compromised my moral integrity. I did what I laughed at others for doing. I bought a tablet, and I couldn’t be happier.

What changed? Did my opinion change? Not drastically. I still don’t see them as the future of computing. They are a consumption device, and it would be difficult to do much more with them than that. But that is what I wanted.

Pricing has also changed drastically. When the iPad first came out, it was a 10″ behemoth, and it cost around $500, putting it well outside of my interests. (A high-end laptop could be found starting at ~$800.) However, within the last year, some solid contenders have entered the 7″, ~$200 arena. The nVidia Tegra 3 chipset, quad-core processing, and the latest Android experience on sale in the Nexus 7 for $155 shipped was too good for me to pass up.

I have long said that I couldn’t justify a tablet when I have a desktop, two laptops, and a smartphone all within reach. My circumstances changed, however, during our latest vacation, when I found myself draining the battery on my smartphone daily trying to stay connected. I have discovered a few areas where a tablet excels over other devices:

  • Entertaining when space is limited (car, airplane, bed, etc)
  • Reading eBooks
  • Reading technical posts with code examples
  • Games developed for the touchscreen
  • Quick reference during certain table-top activities…

I find my discovery process similar to getting my first smartphone. I remember a few days after I had my smartphone I had the realization that I could get from my location to any other location without ever missing a turn again. I drove to a retail store to make a purchase and realized that I could mitigate buyer’s remorse by price checking while standing in the store! Information is power, and I had the Internet in my pocket. I could check reviews, prices, availability, stock nearby – all without carefully planning my trip beforehand at home.

While that smartphone does some things really well, it is a small form factor. Anyone that has ever upgraded their monitor to a larger size, or their computer to a faster model, will know the feeling of migrating from a smartphone to a tablet.

Despite my fears, I don’t think it will quickly become a device that collects dust. I’ve heavily used my smartphone for close to four years now, and there is no sign that this will change in the near future. The tablet is the extension of the smartphone.

I’m not saying to go blindly buy one – you should still have good reasons, and stick to a budget. But if you find yourself running your battery down on your smartphone from overuse, let me recommend a tablet to you.

And welcome to the year 2010!

Computers, Hardware, Linux, Software

Trials and Tribulations of the D-Link DNS-320

A funny noise is something you never want to hear coming out of your primary disk drive. If you are like me, you probably never think too much about backing up your important data before it becomes a looming emergency. I realized, while listening to that noise and watching the red HDD indicator light stay on constantly, that I had some pretty important stuff on that drive. Wedding photos, baby photos, videos, documents, and other things that you can’t just re-download. Why not use cloud backup? I’m a bit skeptical of entrusting others with my data. At the end of the day, a failure on their end may net you a refund, and you could even take them to court, but no matter what, you are never getting that content back. It was time to take matters into my own hands.

The Setup: I bought a two-bay D-Link DNS-320, and two Western Digital 2TB Red hard disk drives. The plan was to set up the disks in RAID 1, so the odds of a simultaneous failure were as statistically low as my finances would allow. I liked the D-Link DNS-320 because it had compelling features, and a two-drive system for a fraction of the cost of other names like Synology. I wanted CIFS and NFS sharing, and RAID 1. Everything else was a perk. Pleasantly enough, the DNS-320 also comes with a UPnP server, and it has some nice SMART monitoring options which will email me in the event that errors are detected. I would create one 2TB partition out of the disks, and share this over the network with restricted access. My wife and I would both be able to connect from our computers to back up any data we wanted.

Configuration: CIFS setup was a breeze, but NFS required a bit more poking around. This post is dedicated to overcoming some of the issues I had. Within the NAS web interface, partition your drives, and grab a cup of coffee. Straightforward stuff. I created a single partition labeled “Volume_1”. After this is complete, go to “Management” -> “Application Management” -> “NFS Service”, check “Enable”, then click “Save Settings”.

Go into “Management” -> “Account Management” -> “Users / Groups” and create a user account if you have not already done so. Within “Account Management”, click on “Network Shares”, and click the “New” button. This will launch the wizard for setting up a share. Select the appropriate users, groups, and settings, and on “Step 2-1: Assign Privileges – Access Methods”, ensure that the “NFS” checkbox is checked.

Move along by clicking “Next” until you reach “Step 2-1-2: NFS Settings”. You will need to specify the Host IP address of the client that will be connecting to this share. This whitelists the supplied IP address as a client location. I’m not positive what format you would use to denote multiple IP addresses; however, an asterisk character allows all hosts. I connect with multiple machines via NFS, so using a single IP address is not sufficient. Despite being accessible to any IP address, the client will still need to authenticate using the credentials entered in “Step 1” and “Step 1-2”. I consider this good enough. Also, be sure to check the “Write” check box if you wish to be able to write files to this location.

Client Configuration: You will need to install the nfs package for your distribution of Linux. I am running Ubuntu 12.10, so the package is named “nfs-client”. Install it using “sudo apt-get install nfs-client”.

Now that you have the nfs-client package, you can use the “showmount” utility to list the shares on the NAS device: “showmount -e NAS_IP_ADDRESS” (e.g. “showmount -e 192.168.1.1”). Depending on how you have the disks partitioned and shared on the NAS device, this path will differ.

Export list for 192.168.1.1:
/mnt/HD/HD_a2 *

This path should be consistent with the information in the “Network Shares Information” dialog. This can be accessed by clicking the magnifying glass icon underneath the NFS column in the “Network Shares” interface.

You can now mount your device using the “mount” command: “mount -t nfs NAS_IP_ADDRESS:/REAL_PATH /path/to/mount_point”. The NAS_IP_ADDRESS is the IP address of the NAS device. The REAL_PATH is the information obtained either via showmount, or the “Network Shares Information” dialog. The “/path/to/mount_point” is just an empty directory somewhere on your local machine.
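
For example, using the export and IP address from the showmount output above, and an arbitrary empty directory as the mount point:

sudo mkdir -p /media/nas
sudo mount -t nfs 192.168.1.1:/mnt/HD/HD_a2 /media/nas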

You can also set this mounting option up to be persistent on reboots using the “/etc/fstab” file. Add a new line to this file, and format your entry similar to as follows:

# /etc/fstab: static file system information.
# ...
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# ...
192.168.1.1:/mnt/HD/HD_a2  /media/nas  nfs rw,hard,intr 0  0

The values used are identical to the values in the mount command preceding this example. The options specify changes to the mount behavior. “rw” requests read/write permissions. “hard” retries requests indefinitely rather than eventually giving up with an error; coupled with “intr”, those operations can still be interrupted (e.g. killed) if the NFS server becomes unreachable. I would recommend these options when copying large amounts of data, or when on a wireless network, as failed transmissions will be silently retried without raising an error on temporary timeouts.

Note on Permissions: Initially when I copied my content over to the NAS, I did so via NFS, and I was not able to view the contents via CIFS (Windows sharing). The problem came down to permissions. The directories did not have the execute bit set for “other”, so permission was denied when a request was made to show the contents of a directory. This was difficult to locate because the UPnP server showed all my media without any permission issues. A quick search of UNIX permissions revealed that the execute bit is what grants access to a directory’s contents, and the NAS accesses content created via NFS as “other” (neither user nor group permissions apply). You can recursively grant the execute bit on any existing content by issuing the following command from the top directory: “chmod -R o+x /root_directory”, where “root_directory” is the folder you want to change. The “-R” flag will recursively apply this permission to all content within.
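
If you would rather touch only directories (regular files don’t need the execute bit for this), a find-based variant of the same fix, assuming the share is mounted at /media/nas as in the fstab example above:

find /media/nas -type d -exec chmod o+x {} +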

Lessons Learned: I have about 50GB of pictures and video, and another 20GB in purchased content. I underestimated how long this would take to transmit over an 802.11g connection. 54Mbps is just under 7MB/s, which means a 50GB transfer takes over 2 hours to complete. And that is not counting temporary speed drops, hard drive access times, retries, etc. When working with large amounts of data, I recommend a 1Gbps Ethernet connection. I will probably be investing in a new router soon that can accommodate these higher speeds.
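
The back-of-the-envelope math, assuming the theoretical 54Mbps (real-world 802.11g throughput is lower, so the actual transfer takes even longer):

echo "scale=1; (50 * 1024) / (54 / 8) / 3600" | bc   # 50GB at ~6.7MB/s, in hours => 2.1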

The D-Link DNS-320 is a solid first NAS device for under $150. Other than a few gotchas when setting it up (most self-inflicted), this device is NFS friendly, and it has made a fine addition to my hardware ecosystem.

Computers, Hardware, Linux, Personal, Software, Thoughts

Self Realizations – Part I

During World War II, when you needed to get communications between two points, you often had to run a telegraph wire through enemy territory. I’m picturing the scene from Enemy at the Gates – a soldier puts on a helmet, grabs a spool of wire, and crawls on his belly through the mud, dodging enemy fire and landmines. The goal is to not get picked off before you reach your destination, because everyone is counting on you to make the connection.

Lately I have been engrossed in a side project that has given me an opportunity to work with the Android SDK. I have been so tickled at figuring out everything for the first time. Though I am moving at a snail’s pace, and it can be painful to have to constantly reference the documentation, StackOverflow, and Google at large, it has been a fun experience. Small things like talking to a database, or rotating a bitmap feel like big achievements, and make the struggling worth it. Seeing the Java side of the world puts some things about Ruby into perspective too. I know I am better having tinkered with it, and I had fun while doing it.

I have come to realize that this is why I love programming. I love running that first line across unknown territory. It is proof that I can accomplish what I set out to do, even with almost no prior knowledge of an environment. It is the same rush I get when tinkering with my car, building computers, installing a ceiling fan, compiling a kernel, or raising a kid. It is about creating something to solve a problem using common tools, and applying knowledge to make something awesome of it all. If I didn’t program, I’m not sure what other career would give me this same chance to tinker with new stuff.

As part of this self realization, I have discovered, through my child-like excitement over these accomplishments, how much I miss this in my current work. I’m not building new things anymore. I’m just polishing the same things, and the details don’t really excite me like the prototypes do. I like “broad strokes”. We need people that do the detail work too, but it’s decidedly not for me.

So find out what it is that you love, and make it happen. Your job and your passion aren’t always in phase, but don’t let your passion die out just because you are getting paid to do something else.

Computers, Events, Family, Hardware, Personal, Software, Thoughts, Windows

The American Legend

Lately, John Templeman and I have been knocking out some serious multi-player action on the Playstation 3. Uncharted 2 and Resident Evil 5 have been very fulfilling games from a co-operative viewpoint. The PC finally drew me back in though. Starcraft II, Left 4 Dead, Age of Empires II, and some other titles have led me to do some housecleaning on my computer setup. The first thing that I decided had to go was my tiny (by comparison) 19″ Acer VGA monitor. It looks like a joke next to my wife’s beautiful 24″ ASUS HDMI monitor. Now I have a brand new ASUS MS236H monitor (shown here). Next to go was my broken, six-year-old installation of Windows XP. Seriously, I kept the same install running through countless infections, blue screens, hardware upgrades, service packs – the works! Enough was enough, and after my generational jump over Windows Vista, I landed on Windows 7.

I had 7 running on the wife’s machine for a while, and despite it being reloaded a few times (what can you expect, it’s still Windows), I have been fairly impressed with it. Kristin is a good litmus test of the stability of a piece of software. I installed it on my machine and ran into a bunch of problems I didn’t anticipate. Apparently my Sound Blaster Live! 5.1 sound card had an EOL around 1998, so no official drivers. I am surprised that Windows didn’t “just work” with this given its age and its established user base (perhaps I am the last holdout?). After installing some random driver from some guy named Peter in a forum thread about the problem (who advised me not to use his driver with more than 2GB of RAM if I valued uptime), I got the sound working.

Next came the printer. Holy hell. The same driver package (the EXACT same package) that I installed on my wife’s computer, where it just worked, kept failing to install the printer driver for our Dell 1600n. After several reboots, and the day turning into ’morrow, I finally settled on installing the HP Laserjet 4100 drivers. They seem to be mostly interchangeable, so I guess I have solved that problem too.

Another biggie was the unresponsiveness of web browsers on the OS (IE being the exception, as it is always slow). I was experiencing page load times so long they were timing out while waiting for a connection. I thought it was Comcast, but after some Googling, I found that the “Automatic Proxy Discovery” setting was turning my inbound pipe into dial-up.

The weather here has really started to cool down, with highs in the 60s. It should be perfect weather to walk to the train station without breaking a sweat while getting some nice exercise, but I just haven’t been in the mood. It’s probably due to the soul-crushing amount of work that I have to do to keep on top of my classes. If textbooks had any competition for readers’ choice, the authors might have to actually invest time into things that make a book worth reading, such as clarity and interestingness. The days of sailing on the Charles River with Hoydis are waning, and the days where I have to strap my feet into a snowboard are fast approaching.

Our Thanksgiving vacation plans are questionable at best right now, so who knows when I will be back in Atlanta. We are making the best of our situation here regardless. Kristin and I have decided to throw a Halloween party for anyone who is interested in joining us. It will be a costume party, pot-luck, and beer-fest, and should be a great way to bring in the holiday. Zoo keepers, it turns out, have some of the most interesting stories, and they will be in attendance, so I have been told. We haven’t gotten to show our house, and all its critters, off to many folks yet, so here is your chance to see how people outside of the city live. All are invited – and bring a friend!

The wife is still job searching after the crater that Capron Park management left on her career path. She has applied here and there and we are eagerly awaiting any leads. In the meantime, we have been enjoying the atypical time we get to spend together on the weekends. Last week, we went to Conner’s Farm and did our first Maize Maze. It was a-mazing. So corny… Ok, I’ll stop now. Thanks to Michael Hoydis for the invite – we had fun. The picture here is one that Michael and Hannah snapped while we were walking in the Apple Orchards. That green sticker on our shirts means we successfully navigated Clint Eastwood’s face. Seriously.

Tomorrow is Friday, and I have a three-day weekend. Let’s hope I can convince some of my co-workers to break away from programming for a few minutes and go grab some drinks… if I make it that far.

Hope everyone, and all of their new children are doing well! Hopefully we will see everyone soon back on the flip side.

Update: A picture of the monitor in its new home: