I’m a new dad again!

The last week has been the most wonderful week of the entire year. I got to bring home my daughter, Adeline Bree Simpson, born November 15th at 5:12pm. We delivered at North Fulton Hospital, about 20 minutes from our house, and were blessed to have family and friends come and visit us in the following days. It was quite a different experience than we had with Morrigan, born in the freezing cold month of January at Rhode Island’s Women and Infants. We did that one with my mother-in-law Brenda; it was six months before my parents even got to see our daughter because of the distance.

I’ve taken two weeks of vacation (and called it paternity leave) to really get some time in with Adeline, de-stress, and get some things done around the house while helping my wife as she recovers. With all this time off, I’d forgotten how much newborns sleep. Pretty much all day, which leaves me with a lot of free time. I’ve been keeping myself busy with projects. I scored a new sit/stand desk off Craigslist and moved the old desk out of my office single-handedly. No small feat considering the weight of the old desk.

I’ve also cleaned out and organized often-forgotten corners of the house, including the “junk drawers” and the closets. I’ve donated books to the library, and food we don’t plan to eat to the Norcross co-op for hungry families. I cooked steaks with rosemary, garlic mashed potatoes, and a side salad for Kristin’s birthday, and scheduled a party for her over the weekend with family and friends. I’ve also been keeping our two-year-old daughter busy by including her in a lot of activities and taking her to play at the indoor playground (it’s cold outside!). I had plans to tackle the backsplash; however, my wife has asked for a hiatus on new projects because I’m stressing her out.

I’m thankful for my new addition, and for how maturely my older daughter has adjusted to the new family dynamics. I’m glad to do it this time around in a house we own with plenty of space, a good job, and no move looming over our heads. It’s been a good year, and I’d be happy to live out the rest of my life just like the last week has gone.

Migrating from Bamboo to Cedar

Heroku recently sent me a nice email:

You have one or more applications deployed to Heroku’s Bamboo stack. Bamboo went live almost four years ago, and the time has come for us to retire it.

You may have received a previous message about Bamboo deprecation. That message may have had an erroneous app list, but this email has the correct list of Bamboo apps that you own or collaborate on. We’re sorry for any confusion.

On June 16th 2015, one year from now, we will shut down the Bamboo stack. The following app/apps you own currently use the Bamboo stack and must be migrated:

This is on the heels of an email about ending legacy routing support. It seems they have been quite busy over at Heroku, but I can’t complain too seriously about free app hosting.

Upgrading a legacy Rails application to the Cedar stack did require a few changes. I’ll document some stumbling blocks for posterity.

Foreman

The first big change, described in Heroku’s general-purpose upgrade article (https://devcenter.heroku.com/articles/cedar-migration), is the use of foreman to manage your web services. Luckily, I have a simple app and did not need to worry about resque, mailers, etc. This made my Procfile rather straightforward:

web: bundle exec unicorn -p $PORT -E $RACK_ENV -c config/unicorn.rb

Absent from the foreman crash course is any information about the corresponding .env file. I did need to add a few environment variables to keep working in development as usual:


RACK_ENV=development
PORT=3000

Once the Procfile was committed (and the .env file added to .gitignore), I added Unicorn to the Gemfile. I’m not sure if Unicorn is strictly necessary over WEBrick in development; however, this is the example in the Cedar upgrade guide, and I wanted to run as close to production as practical to prevent any surprises.
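
For reference, the Gemfile addition is a single line (shown unpinned here; you may want to lock a version):

gem 'unicorn'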

After installing Unicorn, I needed to touch config/unicorn.rb since it did not exist. Again, it’s in the example, but I’m not sure it’s strictly necessary, especially given that it’s just an empty file for me. To start your Rails application, you now issue foreman start instead of the older rails s.
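
In practice, that boiled down to two commands from the project root (assuming foreman is available, e.g. via the Heroku Toolbelt or the foreman gem):

touch config/unicorn.rb   # empty placeholder config, per the upgrade guide
foreman start             # reads the Procfile and .env, replacing rails s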

Devise incompatibility

Not directly related to the Cedar changes, but Devise is a common gem, so this is worth mentioning: Devise 2.x has removed its migration helpers. I stupidly didn’t lock my version of Devise in my Gemfile, so I was confused about why this was failing for me. I found the last 1.x version of Devise by running this command:

gem list devise --remote --all

I then specified '1.5.3' as the version argument on the devise entry in my Gemfile.
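
The resulting Gemfile entry looked something like this, pinned to the last 1.x release:

gem 'devise', '1.5.3'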

PostgreSQL

Heroku requires PostgreSQL in production, and I was previously using SQLite in my development environment. Again, to mirror production, I wanted to use PostgreSQL in my development environment so that I could stay as close to the production setup as practical. I took a quick trip down memory lane to set up and configure PostgreSQL on my Linux development machine: http://mrfrosti.com/2011/11/postgresql-for-ruby-on-rails-on-ubuntu/. An interesting observation is that on my Ubuntu 14.04 LTS machine, PostgreSQL runs on port 5433, not the default 5432. This can be verified with netstat -nlp | grep 5432. If no entries come back, PostgreSQL is running on a non-default port; try grepping for 5433, or other nearby ports, until you find the process.
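
For what it’s worth, here is a minimal sketch of the development section of config/database.yml under this setup (the database name and credentials are illustrative; the port reflects the non-default value discovered above):

development:
  adapter: postgresql
  encoding: unicode
  database: myapp_development
  username: myapp
  password: secret
  host: localhost
  port: 5433
  pool: 5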

Backing up your database

Before I made any server changes, I wanted an up-to-date copy of the production database on my machine to prepare for the worst. This can be done using the pgbackups commands:


heroku pgbackups:capture   # create a backup
heroku pgbackups:url       # make a public URL for that backup

Then I visited the public URL and downloaded the file, safe and sound on my computer. By the way, you can import this production database into your local instance of PostgreSQL using the following command:


pg_restore --verbose --clean --no-acl --no-owner -d development /path/to/database
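
If the local database named by the -d flag doesn’t exist yet, it can be created first with the standard PostgreSQL utility (the -p flag matches the non-default port mentioned earlier):

createdb -p 5433 development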

Pre-provisioning your database

A quick note on database availability: I got into a chicken-and-egg scenario where the app wouldn’t deploy without the database, and the database seemingly couldn’t be created without the app first being deployed. Heroku has an article on pre-provisioning, and I found it a necessary prerequisite to deploying to my newly created test Cedar stack: https://devcenter.heroku.com/articles/pre-provision-database

To pre-provision your database, run the following:

heroku addons:add heroku-postgresql

You can even import the database from production as part of the heroku utility. I used the public database URL I created above to populate my new Cedar stack database:


heroku pgbackups:restore DATABASE 'https://your-public-backup-url'   # URL placeholder; use the pgbackups:url output

Migrating to Cedar

Once I had tested on a new app and in a feature branch, I had confidence everything was working as expected, and I was ready to migrate the production app to Cedar. To do this, the command is:


heroku stack:migrate cedar

I either have an old version of the Heroku gem, or there is a discrepancy between the gem and the non-gem packaging, but the docs misidentify this command as heroku set:stack cedar, which was not a valid command for me. The migrate command above appears to be analogous.

Once I merged my Cedar feature branch back into master, I was ready to push. And FAIL. It turns out that I needed to precompile my assets, which had a dependency on the database existing. I tried to pre-provision as I had done on my Cedar branch; however, the results were the same after running that command.

A quick search yielded https://devcenter.heroku.com/articles/rails-asset-pipeline#troubleshooting and the advice to add the following line to the config/application.rb file:

# Don't connect to the database during asset precompilation
config.assets.initialize_on_precompile = false

Summary

I’ve learned quite a bit about Heroku through this upgrade experience. Their changes force me to use the latest software, which is nice in some ways. When everything is running smoothly on my website, I don’t often worry about upgrading until I get an email like the one above.

The downside, of course, is that this upgrade process is a pain in the ass, is error prone, and affects production websites that are running smoothly. If it isn’t broken, you don’t want to fix it. Except this time you have to, in order for it to continue functioning after June 2015.

Best of luck to other people upgrading. Just be patient, and test everything in a new app if you have any doubts.

AngularJS File Uploads with HTML5 FileAPI

AngularJS has an interesting gap in functionality that can make working with file uploads difficult. You might expect attaching a file to an <input type="file"> to trigger the ng-change event; however, this does not happen. There are a number of Stack Overflow questions on the subject, with a popular answer being to use a native onchange attribute and call into Angular’s internals (e.g. onchange="angular.element(this).scope().fileNameChanged()").

This solution feels brittle, relying on unsupported Angular interactions from the template. To work around this issue, GitHub user danialfarid has provided the awesome angular-file-upload library, which simplifies this process by extending Angular’s attributes to include ng-file-select. This is a cleaner implementation. The library also includes an injectable $upload object, and its documentation shows how this abstracts the file upload process in the controller. This abstraction (if used) sends the uploaded file to the server immediately, without the contents of the rest of the form. I wanted to submit this file change with the traditional all-at-once approach that HTML forms take. This way, the user can abandon form changes by neglecting to press the submit button, keeping the original file attachment unmodified.

In order to achieve this, I’ve created a solution that uses the HTML5 FileAPI to base64 encode the contents of the file and attach it to the form. Instead of reinventing the ng-file-select event, I opted to use the angular-file-upload library described above. However, instead of using the injected $upload functionality referenced in its README, we will serialize the attachment as a base64 encoded string.

To begin, create an AngularJS module for your application, and include the angularFileUpload dependency:

window.MyApp = angular.module('MyApp',
  [
    'angularFileUpload'
  ]
)

Next, we will create our AngularJS template and include our HTML input tags:

<div ng-controller="MyCtrl">
  <form ng-submit="save()">
    <input type="file" ng-file-select="onFileSelect($files)" />
    <input type="submit" />
  </form>
</div>

Now we can create our AngularJS controller and define the onFileSelect function referenced in the ng-file-select attribute:

class exports.MyCtrl
  @$inject: ['$scope', '$http']

  constructor: (@scope, @$http) ->
    @scope.onFileSelect = @onFileSelect
    @scope.save = @save  # expose save() for the form's ng-submit binding

  onFileSelect: ($files) =>
    angular.forEach $files, (file) =>
      reader = new FileReader()
      reader.onload = (e) =>
        @scope.attachment = e.target.result
      reader.readAsDataURL file

  save: =>
    @$http(
      method: 'POST',
      url: "/path/to/handler",
      data:
        $.param(
          attachment: @scope.attachment
        )
      headers:
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'
        'Accept': 'text/javascript'
    )

Our controller is now in place. When the input’s attachment changes, onFileSelect is called, which iterates through the collection of files (if multiple are selected) and creates a FileReader instance for each one. Each reader’s onload event is given a handler that assigns the result to an attribute on our @scope object. The call to readAsDataURL starts reading the file and creates a data: URL representing the file’s data as a base64 encoded string.

Once the form is submitted, the save function is called via the value of ng-submit on our form tag. This performs a standard AngularJS XHR action and includes the attachment assignment in the params. I have adjusted the Content-Type in the headers to communicate to the server that the content contains URL-encoded data. If we had other form fields, we could serialize and append them to the params collection in the same place in the code, sending them to the server alongside the attachment.
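
For illustration, if the form had an additional field bound with ng-model (the title field below is hypothetical, not part of the form above), the save function could serialize it alongside the attachment:

save: =>
  @$http(
    method: 'POST',
    url: "/path/to/handler",
    data:
      $.param(
        attachment: @scope.attachment
        title: @scope.title  # hypothetical extra field bound via ng-model
      )
    headers:
      'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'
      'Accept': 'text/javascript'
  )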

Image Attachments

For added feedback to the user on image attachments, the img tag’s src attribute can accept a data: URL containing a base64 encoded image as its value. Since we have this value from our FileReader object, we can update the view instantly with the file without doing any server-side processing. To achieve this, we add an image tag to our HTML file:

<div ng-controller="MyCtrl">
  <form ng-submit="save()">
    <img ng-src="{{attachment}}" />
    <input type="file" ng-file-select="onFileSelect($files)" />
    <input type="submit" />
  </form>
</div>

Next, we can make a few modifications to our onFileSelect function:

onFileSelect: ($files) =>
  angular.forEach $files, (file) =>
    if file.type in ["image/jpeg", "image/png", "image/gif"]
      reader = new FileReader()
      reader.onload = (e) =>
        @scope.$apply () =>
          @scope.attachment = e.target.result
      reader.readAsDataURL file

AngularJS two-way data binding takes care of the messy details for us. The template’s ng-src is bound to @scope.attachment. We do a safety check that the file type is an image, and then we assign the attachment key to the base64 encoded image. The call to @scope.$apply() triggers a digest cycle to repaint the screen, and the user will see the image they have attached displayed.

Thanks to Nick Karpenske, and Robert Lasch for help with the implementation!

“The Deadline”: Part 3 in a Series on Leadership in Technology

Quite a bit has changed since my last post on leadership. I’ve been promoted to full-time team lead after my mentor and manager left to work for another company. I was learning a great deal every day, and being turned loose to manage the team on my own has been a baptism by fire. I’ve applied what I learned from my previous mentorships, and I’ve learned a great deal more over the last few weeks.

High bandwidth communication is key

I would never have gotten through the last few weeks on email and instant messenger alone. It’s very important to communicate by voice to keep everyone updated and on the same page while respecting each other’s time. Several times in the past week, I’ve watched an email thread bounce back and forth between many participants, only to get boiled down into a concise message at a standup meeting. A five-minute verbal Q&A would take hours via email, especially across different time zones.

Delegate, delegate, delegate

When moving into a managerial capacity, the content of your day-to-day work shifts dramatically. The first few days out of the code left me anxious. But you know what? We have good people who understand even more than I do, and if you give them a problem, they come back with a solution. It’s hard to trust that it will happen, but it did, over and over again. The key is not just throwing out a problem and coming back the next day to find a solution; you should be available for questions and clarification continuously throughout the development of the solution. I often found that checking in a few times a day with each developer was sufficient to answer any questions, understand the progress, and get a rough idea of when something would be delivered. A few times they would casually mention that they were stuck trying to figure out ‘x’. It turned out I knew about ‘x’, and after a brief chat, with some pointing to places in the codebase, they got it squared away.

Be crystal clear about your requirements

We had a new screen we wanted to develop. We had a mockup done by our front-end guys. Our BA loved it. Everyone was on the same page, until we began to implement it and all of these tiny edge cases popped up. I’d assume a course of action was the correct one to take, only to discover that our stories were getting rejected by QA. It turns out we don’t share the same brain, and what I call working, someone else will call a defect. That creates a lot of extra work to resolve what is now a defect. Closer to the deadline of the deliverable, when I switched over to voice communication instead of IM, I was able to lock in requirements quickly and get instant feedback on how we should handle certain edge cases. Don’t be afraid to bother the higher-ups with technical questions, because it is up to them to provide the definitive answer. You aren’t doing yourself any favors by shielding them from a question and substituting their would-be answer with something you invented.

Front load work you are less certain of

If you aren’t clear on how something is going to be executed, that is a problem, and it’s your job as a manager to find out! You don’t need to know every technical detail, but you do need a clear understanding of who the key players are, what the testing process is like, and how you will get the feature through QA and accepted by the business. We had some work on a feature that impacted a few systems downstream. Did we put it off until we took care of the easy stuff? Nope; we started with the pain-in-the-butt feature. And good thing, too, because we were working on it up until the last minute. We didn’t realize how much latency there would be in getting even an ‘it’s working’ or ‘it’s not working’ from all the parties downstream from us. If you aren’t sure how something will go, err on the side of caution and begin it immediately. The easy work you have a clear understanding of, and it should take a backseat to anything that isn’t as clear.

Pad your estimates

This entire iteration I was too optimistic about what we could turn out in a day. Estimating is hard, and I’m convinced we all suck at it. I’d like to believe my guys can come up to speed on a new technology stack and crank out some impressive code, but there is a learning curve. Sometimes the only way forward is a painfully slow dive into the documentation. There are test suites that take forever to run. There are rejections when it comes time to merge, meaning the suite has to be run again. There are seemingly simple issues that become much more complex than anticipated once you dive in. Rarely did we close a feature or bug in the amount of time I assumed it would take. Because of this, I’m going to start padding my estimates to compensate for all of these little details that add up to something bigger.

And finally – know when to ask for help!

There was a certain point in the sprint when I knew we weren’t in good shape to hit our deadline. I mulled it over and stressed about it, and let my pride get in the way a bit; I was convinced that asking for help wouldn’t be necessary. It was a mentality of “if I can just get through these few stories, we will be back on top.” But of course I’d get sucked into a meeting, or new bugs would pop up, or the feature I was working on would run way over on time. Don’t hesitate to ask for help when you first identify the need for it. Forget how bad you think it will look. Forget the anxiety of revealing rough new code to outsiders. It’s worse if you don’t ask and end up missing your target. I was surprised at how strong the support was once we asked for help. I had 10 extra people jump in, all working in parallel on the feature. My entire day was spent pointing people to new work, answering questions, and following up on when things would be merged. The astounding thing was that by NOT touching the code, we delivered more than if I had put on headphones and jumped in myself. And for the record, there was a minimum of sniping at our technical solutions from outsiders. It felt good to know we were a team, and that a solution didn’t need to be perfect before letting outside people in to help.