Computers, Open-source, Ruby, Software, Thoughts, Web

new Job().beginTraining()

It has been an exciting month for me. I put in my notice with Influence Health at the beginning of June and served through the end of the month. After that I took two weeks off before my training begins for Doximity on July 16th. The training is in San Francisco, followed immediately by a July 30th co-location in Boulder, Colorado. This position is remote, aside from the several co-locations held each year. I am excited to start with a new team, on a new project, with a mix of old and new technologies.

Over my career at Influence Health I don't feel that I got much deeper in my knowledge of Rails. What I gained instead was a breadth of general programming skills. I configured GitHub repos, set up Jenkins, scripted a Hubot instance to assign pull requests, and built a continuous integration system. I created new policies following best practices: open a pull request for every change, write unit tests, and have one other person on the team review your changes before merging. I implemented linting, and worked with one of my coworkers to bring Webpack into Rails to rethink how we manage JavaScript. I also went very deep into AWS, touching S3, Lambda, RDS, Redshift, Data Pipeline, Batch, Step Functions, ELB, EC2, ECS, and CloudFront, and into other technologies like PostgreSQL, Docker, Elasticsearch, Capistrano, EventMachine, and Daemons. Being exposed to all of these new services has made me consider different approaches to traditional problems, and I feel it has made me a better developer.

The new job at Doximity sheds the managerial role that I was voluntold into at Influence Health. I thought I might have enjoyed managing (maybe I would have under better circumstances), but at the end of the day it wasn't being a manager that killed the deal for me. It was being a manager while still being a tech lead, an architect, a core contributor, and many other things. To manage well is a full-time job, and tacking it onto an existing role made me feel inadequate as a manager. I don't like that feeling. So with the managerial role off my title, the new role is back to software developer, and I'm OK with that. The compensation is right, and I felt like I was getting further away from the code than I wanted to be. At the end of the day, developing something and seeing it work is what drives me. There is a technical lead track that I might pursue in several months if I feel like I am ready.

The technology stack is a mixture of Ruby and JavaScript. After working with JavaScript more heavily over the last six months, I have mixed feelings. I'm definitely excited, because I do think the future of web development has coalesced around JavaScript, and JavaScript has risen to the challenge and gotten a lot better. Gone are the impostor "rich internet applications" like Silverlight and Flex. Gone are the browser plugins for languages like Java and Flash. JavaScript just works. And the browsers are really blazing trails, even Microsoft, so I believe that learning JavaScript is a solid career investment. There is an excitement in the ecosystem (a little too excited imo, but I'll take that over being dead).

Popularity aside, JavaScript has less magic than Ruby, which is, again, both good and bad. I appreciate seeing require statements, and knowing with absolute certainty that private methods are private. In Ruby, for everything you can do to protect something, someone else can (and I find frequently does) find a way to circumvent it. I especially appreciate the strong linting culture that mitigates entire debates on code style.

Coming from Ruby, I find the syntax of JavaScript unattractive, but it is more consistent. All of the parentheses, semicolons, etc. are just noisy. The surface area of JavaScript is also much smaller, which leads everyone to jump to utility libraries like Lodash and Underscore. The language just needs to keep maturing and building in common methods; date manipulation in particular is atrocious. With async/await, it seems like we finally have a clean syntax for managing asynchronous code.
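As a minimal sketch of what I mean (fetchUser and fetchOrders are hypothetical functions returning promises):

// Callback style: nesting grows with each dependent step.
fetchUser(id, function (user) {
  fetchOrders(user, function (orders) {
    render(orders);
  });
});

// async/await: the same flow reads top to bottom.
async function show(id) {
  const user = await fetchUser(id);
  const orders = await fetchOrders(user);
  render(orders);
}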

I do still feel like we are fitting a square peg into a round hole by building client-side, single-page applications. This isn't the way the web was designed (it was made for request/response), and the fat-client pattern still feels immature. Having a client-side framework like Angular (or even a library like React) does take care of managing some of the complexity. GraphQL takes the sting out of fetching all the combinations of data you might want from the server. Auth is taken care of with JWT and services like Auth0.

On the server side, using Node has been a mixed bag. I would like to see a few big frameworks that the community standardizes on; instead, the mentality seems to be to build your own framework from a collection of your favorite libraries. As a result you can't just jump into a project and immediately know where things are. I do, however, really enjoy asynchronous execution of code. It is a little harder to write and understand, but wow, can it be fast. I have already seen very positive results from batch jobs that took hours in Ruby completing in minutes after being converted to JavaScript. You simply don't wait around, and it makes you think about your dependency tree in a completely new way.
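A sketch of the kind of restructuring I mean (loadUsers and loadProducts are hypothetical functions returning promises):

// Independent steps can run concurrently instead of serially,
// so the total time is roughly the slowest call, not the sum.
async function loadDashboard() {
  const [users, products] = await Promise.all([
    loadUsers(),
    loadProducts()
  ]);
  return { users, products };
}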

At the end of the day I am excited, but a little cautious. Ruby + JavaScript sounds like a killer combination if you use each tool for what it does best. I don't see the Ruby community lasting another decade, so this is the perfect transition point to jump into JavaScript. And I'm glad that it was JavaScript that won out over Flex, Silverlight, JSP, etc. At least for the next five years, until the new shiny technology comes out and people jump ship.

Computers, Linux, Open-source, Ruby, Software, Web

Migrating from Bamboo to Cedar

Heroku recently sent me a nice email:

You have one or more applications deployed to Heroku’s Bamboo stack. Bamboo went live almost four years ago, and the time has come for us to retire it.

You may have received a previous message about Bamboo deprecation. That message may have had an erroneous app list, but this email has the correct list of Bamboo apps that you own or are collaborated on. We’re sorry for any confusion.

On June 16th 2015, one year from now, we will shut down the Bamboo stack. The following app/apps you own currently use the Bamboo stack and must be migrated:

This is on the heels of an email about ending legacy routing support. It seems they have been quite busy over at Heroku, but I can’t complain too seriously about free app hosting.

Upgrading a legacy Rails application to the Cedar stack did require a few changes. I’ll document some stumbling blocks for posterity.

Foreman

The first big change, described in Heroku's general-purpose upgrading article (https://devcenter.heroku.com/articles/cedar-migration), was the use of foreman to manage your web services. I luckily have a simple app, and did not need to worry about resque, mailers, etc. This made my Procfile rather straightforward:

web: bundle exec unicorn -p $PORT -E $RACK_ENV -c config/unicorn.rb

Absent from the foreman crash course is any information about the corresponding .env file. I did need to add a few environment variables to keep working in development as usual:


RACK_ENV=development
PORT=3000

Once the Procfile was committed (and the .env file added to .gitignore), I then added Unicorn to the Gemfile. I'm not sure if Unicorn is strictly necessary over WEBrick in development; however, this is the example in the Cedar upgrade guide, so I wanted to run as close to production as practical to prevent any surprises.
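The Gemfile change is a single line:

gem 'unicorn'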

After installing Unicorn, I then needed to touch config/unicorn.rb, since it did not exist. Again, it's in the example, but I'm not sure it's strictly necessary, especially given that it's just an empty file for me. To start your Rails application, you now issue foreman start instead of the older rails s.
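If you do want to configure Unicorn rather than leaving the file empty, a minimal config/unicorn.rb sketch might look like this (the values are illustrative, not from my app):

# Number of worker processes; tune to the size of your dyno.
worker_processes 3

# Kill workers that take longer than this many seconds to respond.
timeout 30

# Load the app before forking workers to share memory via copy-on-write.
preload_app true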

Devise incompatibility

Not directly related to the Cedar changes, but Devise is a common gem, so it's worth mentioning: Devise 2.x has removed its migration helpers. I stupidly didn't lock my version of Devise in my Gemfile, so I was confused about why this was failing for me. I found the last 1.x version of Devise by running:

gem list devise --remote --all

and then specified '1.5.3' as the version argument on the devise entry in my Gemfile.
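The resulting Gemfile entry looks like this:

gem 'devise', '1.5.3'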

PostgreSQL

Heroku requires Postgres in production, and I was previously using SQLite in my development environment. Again, to mirror production, I wanted to use Postgres in my development environment so that I could be as close to the production setup as practical. I took a quick trip down memory lane to set up and configure PostgreSQL on my Linux development machine: https://mrfrosti.com/2011/11/postgresql-for-ruby-on-rails-on-ubuntu/. An interesting observation is that in Ubuntu 14.04 LTS, PostgreSQL runs on port 5433, not the default 5432. This can be verified with netstat -nlp | grep 5432. If no entries come back, PostgreSQL is running on a non-default port; try grepping for 5433, or other numbers, until you find the process.
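For reference, the development block of config/database.yml ends up looking something like this (the database name and credentials are illustrative; note the non-default port):

development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  username: postgres
  password: secret
  host: localhost
  port: 5433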

Backing up your database

Before I made any server changes, I wanted an up-to-date copy of the production database on my machine, to prepare for the worst. This can be done using the pgbackups commands:


heroku pgbackups:capture   # create a backup
heroku pgbackups:url       # make a public URL for that backup

Then I clicked the public URL and downloaded the file – safe and sound to my computer. By the way, you can import this production database into your local instance of PostgreSQL using the following command:


pg_restore --verbose --clean --no-acl --no-owner -d development /path/to/database

Pre-provisioning your database

A quick note on database availability. I got into a chicken-and-egg scenario where the app wouldn't deploy without the database, and the database seemingly couldn't be created without the app first being deployed. Heroku has an article on pre-provisioning, and I found it a necessary prerequisite to deploying to my newly created test Cedar stack: https://devcenter.heroku.com/articles/pre-provision-database

To pre-provision your database, run the following:

heroku addons:add heroku-postgresql

You can even import the database from production as part of the heroku utility. I used the public database URL I created above to populate my new Cedar stack database:


heroku pgbackups:restore DATABASE '<public backup URL>'   # substitute the URL from above

Migrating to Cedar

Once I had tested on a new app and in a feature branch, I had confidence everything was working as expected. I was ready to make my changes to the production app by migrating it to Cedar. The command is:


heroku stack:migrate cedar

I either have an old version of the Heroku gem, or this is a discrepancy between the gem and the non-gem packaging, but the docs misidentify this command as heroku set:stack cedar, which was not a valid command for me. The migrate command above appears to be analogous.

Once I merged my Cedar feature branch back into master, I was ready to push. And FAIL. It turns out that I needed to precompile my assets, which had a dependency on the database existing. I tried to pre-provision as I had done on my Cedar branch; however, the results were the same after running this command.

A quick search turned up https://devcenter.heroku.com/articles/rails-asset-pipeline#troubleshooting, which advises adding the following line to config/application.rb:

config.assets.initialize_on_precompile = false

Summary

I've learned quite a bit about Heroku in this upgrade experience. Their changes force me to use the latest software, which is nice in some ways. When everything on my website is running smoothly, I don't often worry about upgrading until I get an email like the one above.

The downside, of course, is that this upgrade process is a pain in the ass, is error-prone, and affects production websites that are running smoothly. If it isn't broken, you don't want to fix it. Except this time you have to, in order for it to continue functioning after June 2015.

Best of luck to other people upgrading. Just be patient, and test everything in a new app if you have any doubts.

Open-source, Software, Web

Inserting Large Data Sets in MySQL

It's always interesting for me to work with large data sets. The solutions that work at lower orders of magnitude don't always scale, and I am left with unusable solutions in production. Often the problems require clever refactorings that at a cursory glance appear identical, but somehow skirt around some expensive operation.

I had a requirement to tune a script that was responsible for inserting 300k records into a database table. The implemented solution of iterating through a collection and calling INSERT for each record was not scaling well, and the operation was taking long enough to time out on some runs. This gave me the opportunity to learn about a few things in MySQL, including the profiler and (spoiler!) the multiple-row INSERT syntax.

I needed some real numbers to compare the changes I would be making. My plan was to change one thing at a time and run a benchmark to tell whether the performance was 1) better, 2) worse, or 3) not impacted. MySQL has an easy-to-use profiler for getting this information. Inside the MySQL CLI, you can issue the command:

SET profiling=1;

Any subsequent queries you run will now be profiled. You can see a listing of queries you want to know more about by typing:

SHOW profiles;

This command will show an index of queries that have run, along with their associated Query_ID. To view more information about a particular query, issue the following command replacing x with the Query_ID:

SHOW PROFILE FOR QUERY x;

Here is an example output:

+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| starting                     | 0.000094 |
| checking permissions         | 0.000003 |
| checking permissions         | 0.000002 |
| checking permissions         | 0.000001 |
| checking permissions         | 0.000003 |
| Opening tables               | 0.000021 |
| System lock                  | 0.000008 |
| init                         | 0.000039 |
| optimizing                   | 0.000012 |
| statistics                   | 0.000717 |
| preparing                    | 0.000014 |
| Creating tmp table           | 0.000023 |
| executing                    | 0.000002 |
| Copying to tmp table         | 0.016192 |
| converting HEAP to MyISAM    | 0.026860 |
| Copying to tmp table on disk | 2.491668 |
| Sorting result               | 0.269554 |
| Sending data                 | 0.001139 |
| end                          | 0.000003 |
| removing tmp table           | 0.066401 |
| end                          | 0.000009 |
| query end                    | 0.000005 |
| closing tables               | 0.000011 |
| freeing items                | 0.000040 |
| logging slow query           | 0.000002 |
| cleaning up                  | 0.000015 |
+------------------------------+----------+

In one iteration of my SQL query, I was spending an excessive amount of time in "Copying to tmp table". After reading http://www.dbtuna.com/article/55/Copying_to_tmp_table_-_MySQL_thread_states, I was able to isolate the cause to an ORDER BY clause in my query that wasn't strictly necessary. In this example, not too much else exciting is going on, which is a Good Thing.

For a comprehensive listing of the thread states shown in the Status column, see: http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html

Now that I know my query is as optimized as it can be, it's time to pull out the bigger guns. On to plan B – consolidating those INSERT statements!

An INSERT statement, though it executes seemingly instantaneously under small loads, is composed of many smaller operations, each with its own cost. The relative expense of these operations is roughly the following (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):

  • Connecting: (3)
  • Sending query to server: (2)
  • Parsing query: (2)
  • Inserting row: (1 × size of row)
  • Inserting indexes: (1 × number of indexes)
  • Closing: (1)

As you can see, connecting to the server, sending the query, and parsing it are relatively expensive operations. In the script I was modifying, 300k INSERT statements were generating 300k records. Fortunately for us, MySQL doesn't force our records to be 1:1 with our INSERT statements, thanks to allowing multiple rows per INSERT. To use this feature, instead of three INSERT statements:

INSERT INTO foo (col1, col2) VALUES (1, 1);
INSERT INTO foo (col1, col2) VALUES (2, 2);
INSERT INTO foo (col1, col2) VALUES (3, 3);

We can instead coalesce them into a single INSERT statement:

INSERT INTO foo (col1, col2) VALUES (1, 1), (2, 2), (3, 3);

How many values can we coalesce into the same INSERT statement? This isn't driven by a maximum number of records, but rather by a server system variable, bulk_insert_buffer_size (http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_bulk_insert_buffer_size). This can be modified, but the default is 8388608 bytes (8 MB). The exact number of records will vary depending on the number of columns and the amount of data being inserted into those columns. I conservatively chose to coalesce 5k records at a time. I tried to bump this to 10k, but I encountered an exception when I exceeded this server system variable's maximum.
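As a rough Ruby sketch of the batching approach (the table, columns, and client are illustrative; records is assumed to be an array of [col1, col2] pairs, and real code should escape its values):

# Emit one multi-row INSERT per 5,000 records instead of one INSERT per record.
records.each_slice(5000) do |batch|
  values = batch.map { |col1, col2| "(#{col1}, #{col2})" }.join(", ")
  client.query("INSERT INTO foo (col1, col2) VALUES #{values}")
end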

With my INSERTs coalesced, I was able to reduce my total number of INSERT statements to 60 (300k / 5k). This yielded massive performance boosts: the job went from taking over an hour to completing in just 2 minutes. Quite a nice trick, considering the data is unchanged.

Is there room for improvement? Absolutely. A statement executed 60 times may be worth preparing, or wrapping inside a transactional block. My real-world tests didn't yield a significant enough performance boost to make these complexities worth implementing. This may not hold true with data at higher orders of magnitude, or with different schema layouts. MySQL also understands index hints, which allow you to suggest indexes that may be missed by the query planner, or force the inclusion or exclusion of indexes despite what the query planner thinks (http://dev.mysql.com/doc/refman/5.0/en/index-hints.html)!

Speaking of indexes, if any of them are UNIQUE or BTREE indexes, they can be dropped while the mass INSERT is running, then added back afterward to side-step the per-index insertion cost (the "1 × number of indexes" line above).

At the next order of magnitude, I will probably have to rethink my approach of using INSERT statements to load data. According to the MySQL documentation, LOAD DATA INFILE is "roughly 20 times faster" than an INSERT statement. My script would no longer generate statements, but rather output to a file in a comma-delimited format. This could then be loaded, assuming appropriate permissions are in place.
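A sketch of what that load might look like, assuming a comma-delimited file with one record per line (the file path, table, and columns are illustrative):

-- Bulk-load the comma-delimited file directly into the table.
LOAD DATA INFILE '/tmp/records.csv'
INTO TABLE foo
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(col1, col2);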

Happy profiling!

Computers, Open-source, Ruby, Web

Making Rails Routes Fantastic

I took a week off from work to move into our new house. It was a time of rest and relaxation, despite the chaos that moving can bring. I've had a few personal projects on the back burner, but never seemed to have the time or the energy to make much progress. Recently a talk with a friend reminded me how important completing those pet projects can be for your personal happiness. I'm proud to present the completion of an idea I've had for a while: Routastic.

What is Routastic? It serves as an interactive Rails routes editor. Simply put, I got tired of the pattern of modifying config/routes.rb, then running rake routes and grepping for some result. This is completely inefficient. My inspiration came from the beautiful Rubular.com and its interactive regular expression building. It's quick. It's painless. It's a valuable tool for everyday programming.

Please check out http://routastic.herokuapp.com/ and let me know how I can improve it.

Special thanks to Avand Amiri for suggesting the name (despite the name screaming Web 2.0, it actually is quite memorable!).

Apple, Computers, Events, Family, Personal, Ruby, Software, Thoughts, Vacations, Web

Cloudy, Cold and Hip – Two Weeks of Training in Portland

I've really enjoyed the last two weeks. My new employer, the recently acquired Analog Analytics, flew me out to Portland, Oregon for training. Portland is quite an amazing place. Skateboarders, cyclists, and runners abound, but with a laid-back attitude. It's the greenest city I have ever visited. Stores seem to only dispense recyclable materials, including paper bags and foods in waxed cardboard containers. The entire city is very walkable, without much danger of personal harm. The food was amazing, and the drinks even better. This city knows its coffees, teas, and beers. It has to be home to the most microbreweries of any city. Needless to say, I have probably gained 5 pounds, and I am super caffeinated. Also, the proximity to all these hip restaurants is giving me second thoughts about living so far outside of the city limits. No lie, I even glanced at Portland housing prices.

It took me a few days to get oriented to the city and the work environment. The company runs out of the Ford Building, in the heart of quite a few cool restaurants and bars on the southeast side of the city. In fact, it left me a little jealous, considering the hotel is only surrounded by fast food joints. I got a shiny new MacBook Pro (which I am currently battling to make as "boring" as possible). I can't talk too much about the work, but it does hit the sweet spot of what I was looking for: a small-team feel with deep pockets, and a launch date.

Kristin and Morrigan joined me for the second week and did their own thing, and they had a blast. They visited OMSI, Powell's Books, Finnegan's, several parks and malls, and some tasty food joints. I'm happy they got to experience some of what makes this city awesome.

I'm enjoying several aspects of the job in particular: a remote-driven environment, and pair programming. Training isn't the best test run of this environment, as I am in the office every day for now. Once I am set up, I pick the hours. People hop online and offline according to their time zones, availability, etc. Every piece of communication and workflow is centered around remote teams.

Pair programming makes programming social. Despite the image that telling someone you are a programmer conjures, I really enjoy interacting with people. I remember teaming up with James, John, and many others at Clayton State to tackle some large issues with our portal and other systems. Since Clayton State, I have worked on a couple of teams, and it was almost always in isolation, save for 5-10 minute high-level meetings. The best part is, it's actually kind of fun.

Pair programming was a tough adjustment for me. I'm used to presenting a final product and defending its implementation. I have all the answers; I know what the talking points are up front, and I am comfortable because I am the authority on the subject. Pair programming is letting your guard down, and conceding as much as contributing. You are two people working on a problem together, with neither party starting off knowing the complete solution. The work is certainly slower than solo programming, as incorporating input, early refactoring, and general discussion take up time. This team takes an interesting approach to combat some of the time drain: you can either pair program and merge directly, or work solo, in which case your code requires a peer review before merging. The choice is yours. The solo programming option will probably act as a safety valve for those days when I just want some time to myself. They also encourage "switching drivers" to vary the work. Interestingly, being the passenger requires more focus than driving, as you are trying to proactively find issues with the current approach.

I'm still struggling to embrace TDD. I don't like the zealotry in the community when the topic comes up: presenting the only two options as either you test first, or you are just ignorant, undisciplined, or apathetic to the code you write. The truth is far from it. I figure things out by moving the pieces around, not by staring at them from a distance. That is not to say that there aren't times when testing first is extremely useful, like when clarifying requirements. The test assertions (even with missing test bodies) are often enough to help solidify an attack plan. The amount of code coverage can be a hindrance, though, as real-world tests always end up being more tightly coupled than you ideally want them to be. If you make seemingly small code changes, you can end up with quite a bit of the test suite failing (although with the same few errors repeating). If you mock and stub too much, you aren't testing much that is useful. Even worse, the prescribed workflow doesn't seem realistic: write the tests, verify the tests fail, write the code, verify the tests pass. The reality seems to be: write the tests (heavily guessing at the exact implementation), verify they fail, write the code, refactor almost all of your tests, and verify they pass. Given the choice, I think I'd still rather write the code, then test it to verify it does what I want in all scenarios. I've yet to meet a dyed-in-the-wool TDDer who sees any fault with this extra refactoring step; the subject of pre-written tests needing to be refactored seems to be glossed over. Maybe my opinion will be changed yet.

Things are looking awesome for this next step in my life! I'm keeping my fingers crossed for RailsConf tickets, since it is in my employer's backyard. There are also a few missed restaurants I am meaning to visit next time I'm back up this way…

Computers, Events, Family, Personal, Thoughts, Vacations, Web

One Month Perspective On Working From Home

Today marks one month of being a remote worker for my employer. I’m still learning on how to be the most effective with this new environment, but I wanted to reflect on my experiences for anyone else considering this working arrangement.

Let's get this out of the way: it's not all unicorns and rainbows. I think that was the biggest surprise to me. It seems like a dream to wake up, walk into another room of your house, work a few hours, and already be at home when 5 o'clock hits. For those expecting instant happiness: you will be slightly disappointed.

The reality is that, like most things in life, working from home is a mixed bag. For those looking to make the transition, consider the following issues:

Isolation can be a big problem, depending on your personality. I think this blow was softened because I am a software developer, and I am used to working with a computer more than with people already. I have already cut my teeth on reduced interaction. What I do miss is the camaraderie of working in a team environment. You often have lunches with your co-workers, entertaining side conversations, and a million other things that contribute to the work culture. When you are working remotely, you exclude yourself from most of that, and it can be frustrating to feel like your avenue for interaction has been reduced.

Reduced visibility in the company is another disadvantage. I feel that I need twice the participation in communications just to prove that I really do still exist. You aren't in the chatter loop anymore, so information may come to you seemingly out of the blue. It's important to remember that the company isn't just swinging at things to see what sticks; they are having discussions that you aren't part of anymore. There is something to be said for that office grapevine. I also get the feeling that I am quietly passed over when it comes to opportunities. The "online" indicator in a chat room isn't the same as being a warm body in the room when it comes to picking a person for a job.

Getting into a rut in your routine is something that you have to constantly work against. While it seems so simple to sleep in one room and work in the next, my mind craves more experiences in a day than the walls of two rooms of my house. Like it or not, that soul-sucking commute and those bleak off-white painted walls in the office provide some stimulation. I think it is key to be mobile. Work from a coffee shop for a day, or visit a local university or other facility that welcomes guests and provides free wi-fi.

Take your lunches out a few times a week, just to stay connected with the outside. You will be amazed to learn that the rest of the world isn't in stasis. Things on the outside change: new restaurants open, roads get built, technology improves, books get published. Partake in the changes by going outside your house.

Join a meetup group for fun, or for professional development. In addition to providing networking and keeping you up on the times, it is also an excuse to go have a few drinks with some peers.

It's not all gloom and doom, as there are some really positive things about working from home. Some of these you probably already know (and maybe even dream about!):

You will have a lot more time. Commuting alone averages two hours a day: 10 hours a week that you instantly get back. Also, if you cook at home for lunch, you can often use the remainder of your break to complete tasks mid-day instead of waiting until the end of the day when you are tired. I often do some laundry, vacuum, take the dogs for a walk, sit outside and read my book, etc. My wife and I have a seven-month-old daughter, and every minute is precious to me. Having more time to spend with her is priceless.

There are cost savings to remote work, including reduced wear and tear on your car and fewer fill-ups at the pump. I actually got to reclassify my vehicle as "for pleasure" on our auto insurance, since it is no longer used for commuting and falls under the cap for average miles per year. Other savings include cheaper lunches (unless you go out), since you have a full kitchen at your disposal and are thus able to prepare a range of foods. You may find other savings, including no more mid-day dog-sitters, saving on a parking spot, or public transit (just kidding – this is Atlanta!).

You will be hyper-focused. A co-worker once told me "an office is a great place if you don't want to get anything done". I understand what he meant by this now. Co-workers can be lots of fun, but when you are trying to buckle down and squeeze something in on a deadline, the office is the least likely place that is going to happen. I often get "in the flow" for 3-4 hours straight in a day when working from home. It's important to compare the speed you are working at against your previous output to understand how productive you are. The first week, I felt like I was moving in slow motion trying to adjust to the new environment, only to find that I had increased my work output. I would wager I am twice as productive in a day at home relative to a day in the office.

That being said, take frequent breaks. Your coworkers aren't there to give your brain a break, so it's up to you. I find it most ethical to take breaks doing tasks I can relate to my work. I read Hacker News, read a technical book, or use the time to test out some new technologies. I have already been able to fold some of this exploratory knowledge back into projects at work.

You are free to travel (and for extended periods!). Lots of people binge vacation one or two weeks a year. When working remotely, you aren't tied to a particular location anymore. As long as you have a laptop, access to the Internet, and power, you can be anywhere in the world. My wife and I are gearing up to spend a month in St. Augustine, Florida. I will work during the day most of the time, taking only a few PTO days. After 5pm, or when the weekend hits, you are already in the middle of vacationing. The best part is that the month-long vacation schedule is one you can physically sustain, with plenty of rest between the activities.

The final benefit I will mention is being able to set your own schedule. You can't get carried away, especially if your employer enforces office hours. But if you need to take lunch earlier, or later, you can. If you need to step away from the computer for a few minutes to handle something, you can. There is trust that has to exist between employer and employee, but in my experience, your employer is most concerned with work output. It is a loaded gun to know that you are entrusted to operate with almost total autonomy. You no longer have the eye of an overseer watching your every movement, which is a liberating feeling. Just get done what you need to get done, and don't go crazy with power!

So far, I am loving it. I have heard mixed reports from people adapting to working remotely. Some people crawl on hands and knees begging to come back to the office, and some people work remotely for the rest of their careers. I think it comes down to your particular personality. If you are like me, you just have to try something to know if it works for you. This is one gamble I am glad that I took.

For anyone seriously considering a teleworking gig, I would highly recommend a few resources that helped me get started. The first is a short post like this one from Kyle Kulyk. I don't touch on it here, but he makes a great observation about how working from home will affect your relationships with significant others. The benefits of, and advice for managing, your new teleworking lifestyle can be found in The 4-Hour Workweek. This book contains lots of great resources for how to negotiate a remote work arrangement, and tips for extended travel. Finally, Joel Gascoigne has some great pointers for keeping yourself mentally happy during this big transition.

Computers, Open-source, Web

Configurable Javascript In Place Editor

Why do we need another in-place-edit library for JavaScript? This one is different, of course! This library allows for a high degree of customization due to its modular nature. Each action on an in-place-edit object (submitting on blur, adding a class on focus, toggling a label on click, etc.) is completely isolated and designed to stack with the others. This allows one library to accomplish a wide variety of in-place-edit functionality in the same project. You can even have different functionality on the same page if you wish.

Setup is a breeze. Just grab the latest copy of the JavaScript library from https://github.com/bsimpson/in_place_edit. You will need to be using jQuery in your project, since this plugin depends on it. Once you have included both jQuery and in_place_edit.js, you can initialize the plugin like this:

  $(function() {
    $('.in_place_edit').inPlaceEdit();
  });

The CSS selector passed to jQuery matches the element that contains the form on which you wish to use in-place editing.

Next, we need to configure the form to tell it what actions we want it to perform. If we want basic edit-in-place functionality, we can add "submit_on_blur" to have the form submit when a blur event is received. Note that the submit only occurs if the contents of the form have changed, in order to save on the number of server posts.

<div class="in_place_edit" data-in_place_data="submit_on_blur">
  <form action="#" method="post">
    <input type="text" name="foo" value="foo" />
  </form>
</div>

The options are simple, and the combinations are many. Check out the options on the DEMO page to see what you can do with your form fields.

Happy in place editing!

Update: This plugin has been updated to be a jQuery plugin. Thanks to jsumners for providing lots of help!