Computers, Linux, Open-source, Ruby, Software, Web

Migrating from Bamboo to Cedar

Heroku recently sent me a nice email:

You have one or more applications deployed to Heroku’s Bamboo stack. Bamboo went live almost four years ago, and the time has come for us to retire it.

You may have received a previous message about Bamboo deprecation. That message may have had an erroneous app list, but this email has the correct list of Bamboo apps that you own or are a collaborator on. We’re sorry for any confusion.

On June 16th 2015, one year from now, we will shut down the Bamboo stack. The following app/apps you own currently use the Bamboo stack and must be migrated:

This is on the heels of an email about ending legacy routing support. It seems they have been quite busy over at Heroku, but I can’t complain too seriously about free app hosting.

Upgrading a legacy Rails application to the Cedar stack did require a few changes. I’ll document some stumbling blocks for posterity.

Foreman

The first big change, described in Heroku’s general-purpose upgrade article (https://devcenter.heroku.com/articles/cedar-migration), was the use of foreman to manage your web services. Luckily I have a simple app and did not need to worry about resque, mailers, etc. This made my Procfile rather straightforward:

web: bundle exec unicorn -p $PORT -E $RACK_ENV -c config/unicorn.rb

Absent from the foreman crash course is any information about the corresponding .env file. I did need to add a few environment variables to keep development working as usual:


RACK_ENV=development
PORT=3000

Once the Procfile was committed (and the .env file added to .gitignore), I added Unicorn to the Gemfile. I’m not sure if Unicorn is strictly necessary over WEBrick in development; however, this is the example in the Cedar upgrade guide, and I wanted to run as close to production as practical to prevent any surprises.

After installing Unicorn, I then needed to touch config/unicorn.rb since it did not exist. Again, it’s in the example, but I’m not sure it’s strictly necessary, especially given that it’s just an empty file for me. To start your Rails application, you now issue foreman start instead of the older rails s.
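
The upgrade guide is satisfied with an empty config/unicorn.rb, but if you want it to do more, here is a minimal sketch. The values below are assumptions to tune for your dyno, not anything Heroku prescribes:

# config/unicorn.rb -- a minimal sketch; values are assumptions, tune to taste
worker_processes 3   # number of app workers per dyno
timeout 30           # kill workers stuck for longer than 30 seconds
preload_app true     # load the app before forking workers for faster boots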

Devise incompatibility

Not directly related to the Cedar changes, but Devise is a common gem, so it is worth mentioning: Devise 2.x has removed its migration helpers. I stupidly didn’t lock my version of Devise in my Gemfile, so I was confused about why this was failing for me. I found the last 1.x version of Devise by running this command:

gem list devise --remote --all

and then specified ‘1.5.3’ as the second argument on the devise entry in my Gemfile.
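
The resulting pinned entry in the Gemfile is simply:

gem 'devise', '1.5.3'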

PostgreSQL

Heroku requires Postgres in production, and I was previously using SQLite in my development environment. Again, to mirror production, I wanted to use Postgres in my development environment so that I could be as close to the production setup as practical. I took a quick trip down memory lane to set up and configure PostgreSQL on my Linux development machine: https://mrfrosti.com/2011/11/postgresql-for-ruby-on-rails-on-ubuntu/. An interesting observation is that on Ubuntu 14.04 LTS, my PostgreSQL instance runs on port 5433, not the default 5432. This can be verified with netstat -nlp | grep 5432. If no entries come back, PostgreSQL is running on a non-default port; try grepping for 5433, or other numbers, until you find the process.
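
If your local server ends up on a non-default port like mine, Rails needs to know about it. A minimal config/database.yml development entry might look like this (the database name and credentials are placeholders for your own):

development:
  adapter: postgresql
  encoding: unicode
  host: localhost
  port: 5433
  database: myapp_development
  username: myapp
  password: secret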

Backing up your database

Before I made any server changes, I wanted to have an up-to-date copy of the production database on my machine to prepare for the worst. This can be done using the pgbackups commands:


heroku pgbackups:capture   # create a backup
heroku pgbackups:url       # make a public URL for that backup

Then I clicked the public URL and downloaded the file – safe and sound to my computer. By the way, you can import this production database into your local instance of PostgreSQL using the following command:


pg_restore --verbose --clean --no-acl --no-owner -d development /path/to/database

Pre-provisioning your database

A quick note on database availability. I got into a chicken-and-egg scenario where the app wouldn’t deploy without the database, and the database seemingly couldn’t be created without the app first being deployed. Heroku has an article on pre-provisioning (https://devcenter.heroku.com/articles/pre-provision-database), and I found it a necessary prerequisite to deploying to my newly created test Cedar stack.

To pre-provision your database, run the following:

heroku addons:add heroku-postgresql

You can even import the database from production as part of the heroku utility. I used the public database URL I created above to populate my new Cedar stack database:


heroku pgbackups:restore DATABASE 'your-public-backup-url'

Migrating to Cedar

Once I had tested on a new app and in a feature branch, I had confidence everything was working as expected. I was ready to make my changes to the production app by migrating it to Cedar. To do this, the command is:


heroku stack:migrate cedar

I either have an old version of the Heroku gem, or this is a discrepancy between the gem and the non-gem packaging, but the docs misidentify this command as heroku set:stack cedar, which was not a valid command for me. The migrate command above appears to be analogous.

Once I merged my Cedar feature branch back into master, I was ready to push. And FAIL. It turns out that I needed to precompile my assets, which had a dependency on the database existing. I tried to pre-provision as I had done on my Cedar branch; however, the results were the same after running this command.

A quick search yielded https://devcenter.heroku.com/articles/rails-asset-pipeline#troubleshooting, which advises adding the following line to the config/application.rb file:

config.assets.initialize_on_precompile = false

Summary

I’ve learned quite a bit about Heroku in this upgrade experience. Their changes force me to use the latest software, which is nice in some ways. When everything on my website is running smoothly, I don’t often worry about upgrading until I get an email like the one above.

The downside, of course, is that this upgrade process is a pain in the ass, is error prone, and affects production websites that are running smoothly. If it isn’t broken, you don’t want to fix it. Except this time you have to, in order for it to continue to function after June 2015.

Best of luck to other people upgrading. Just be patient, and test everything in a new app if you have any doubts.

Computers, Open-source, Software, Thoughts

AngularJS File Uploads with HTML5 FileAPI

AngularJS has an interesting gap in functionality that can make working with file uploads difficult. You might expect attaching a file to an <input type="file"> to trigger the ng-change event; however, this does not happen. There are a number of Stack Overflow questions on the subject, with a popular answer being to use a native onchange attribute and call into Angular’s internals (e.g. onchange="angular.element(this).scope().fileNameChanged()").

This solution feels brittle, and relies on some unsupported Angular interactions from the template. To work around this issue, GitHub user danialfarid has provided the awesome angular-file-upload library, which simplifies this process by extending Angular’s attributes to include ng-file-select. This is a cleaner implementation. The library also includes an injectable $upload object, and its documentation shows how this abstracts the file upload process in the controller. This abstraction (if used) sends the uploaded file to the server immediately, without the contents of the rest of the form. I wanted to submit this file change with the traditional all-at-once approach that HTML forms take. This way, the user can abandon form changes by neglecting to press the submit button, keeping the original file attachment unmodified.

In order to achieve this, I’ve created a solution that uses the HTML5 FileAPI to base64 encode the contents of the file and attach it to the form. Instead of reinventing the ng-file-select event, I opted to use the angular-file-upload library described above. However, instead of using the injected $upload functionality referenced in its README, we will serialize the attachment as a base64 encoded string.

To begin, create an AngularJS module for your application, and include the angularFileUpload dependency:

window.MyApp = angular.module('MyApp',
  [
    'angularFileUpload'
  ]
)

Next, we will create our AngularJS template and include our HTML input tags:

<div ng-controller="MyCtrl">
  <form ng-submit="save()">
    <input type="file" ng-file-select="onFileSelect($files)" />
    <input type="submit" />
  </form>
</div>

Now we can create our AngularJS controller and define the onFileSelect function referenced in the ng-file-select attribute:

class exports.MyCtrl
  @$inject: ['$scope', '$http']

  constructor: (@scope, @$http) ->
    @scope.onFileSelect = @onFileSelect

  onFileSelect: ($files) =>
    angular.forEach $files, (file) =>
      reader = new FileReader()
      reader.onload = (e) =>
        @scope.attachment = e.target.result
      reader.readAsDataURL file

  save: =>
    @$http(
      method: 'POST',
      url: "/path/to/handler",
      data:
        $.param(
          attachment: @scope.attachment
        )
      headers:
        'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'
        'Accept': 'text/javascript'
    )

Our controller is now in place. When the input’s attachment changes, onFileSelect is called, which iterates through the collection of files (if multiple) and creates a FileReader instance for each one. The reader then has an onload handler attached that assigns the result to an attribute on our @scope object. The call to readAsDataURL starts reading the file and creates a data: URL representing the file’s data as a base64 encoded string.

Once the form is submitted, the save function is called via the value of ng-submit on our form tag. This performs a standard AngularJS XHR action and includes the attachment assignment in the params. I have adjusted the Content-Type header to communicate to the server that the content contains URL encoded data. If we had other form fields, we could serialize and append them to the params collection in the same place in the code, sending them to the server alongside the attachment, as sketched below.
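
As an illustration, if the form also had a text input bound to a hypothetical @scope.name, the data portion of save could serialize both values together:

    data:
      $.param(
        attachment: @scope.attachment
        name: @scope.name   # hypothetical extra form field
      )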

Image Attachments

For added feedback to the user on image attachments, the img tag’s src attribute can accept a base64 encoded string as a value. Since we have this value from our FileReader object, we can update the view instantly with the file without doing any server side processing. To achieve this, we can add an image tag to our HTML file:

<div ng-controller="MyCtrl">
  <form ng-submit="save()">
    <img ng-src="{{attachment}}" />
    <input type="file" ng-file-select="onFileSelect($files)" />
    <input type="submit" />
  </form>
</div>

Next, we can make a few modifications to our onFileSelect function:

onFileSelect: ($files) =>
  angular.forEach $files, (file) =>
    if file.type in ["image/jpeg", "image/png", "image/gif"]
      reader = new FileReader()
      reader.onload = (e) =>
        @scope.$apply () =>
          @scope.attachment = e.target.result
      reader.readAsDataURL file

AngularJS two-way data binding takes care of the messy details for us. The template is bound to @scope.attachment. We do a safety check that the file type is an image, and then we assign the attachment key to the base64 encoded image. The call to @scope.$apply() will repaint the screen, and the user will see the image they have attached displayed.

Thanks to Nick Karpenske and Robert Lasch for help with the implementation!

Open-source, Software, Vacations

Colocation Week in Dallas, TX

Working remote has some unconventional consequences. A big one is that it’s very common to have never met in person the people you have worked side by side with. It’s a strange sensation to recognize a voice you’ve heard almost daily but not be able to apply a face to it. It turns out that many people don’t look at all like what you have imagined them to be.

To have some fun and meet our coworkers, our company hosted its first “colocation week” in Dallas, TX. After a 24-hour meet and greet, we all sat down with our new teams, together at the same table for the first time. It was a blast! Aside from a few major issues (hotel Internet, and Dallas being a dry county come to mind!), we did a lot of good.

This evening marked the end of our 30-hour hackathon to compete for the grand prize of taking home a Google Glass dev kit.

It was hard work: we stopped the night before at 2am, only to get a few hours of sleep and jump right back into coding. Our team pitched the idea of brick and mortar stores integrating iBeacon (Bluetooth LE) devices to target proximity-based offers and suggestions. The resulting app had some fun mechanics that I’d love to see make it into stores:

  • Personalization and announcement when you walk into the store with your device
  • Assistance in locating goods at an aisle level
  • Scan as you go shopping
  • Integration with online payments to avoid checkout lines

 

There were strong tie-ins for the business side as well, with foot traffic analysis and hyper-relevant offer targeting. The screen shown is the Android activity returned as a user enters the geofence of the first shop’s aisle.

It was tough to jump back into Android development after a few years, but it came back. Java is the language that just won’t die.

We had an awesome team, and it’s wonderful to work for a company where everyone is as motivated as you to deliver something kickass. Hopefully we will get a chance to work with some of these technologies.

Open-source, Software, Web

Inserting Large Data Sets in MySQL

It’s always interesting for me to work with large data sets. The solutions that work in lower orders of magnitude don’t always scale, and I am left with unusable solutions in production. Often the problems require clever refactoring that at a cursory glance appears identical, but somehow skirts around some expensive operation.

I had a requirement to tune a script that was responsible for inserting 300k records into a database table. The implemented solution of iterating through a collection and calling INSERT for each row was not scaling well, and the operation was taking long enough to time out on some runs. This gave me the opportunity to learn about a few things in MySQL, including the profiler and (spoiler!) the multiple-row INSERT syntax.

I needed some real numbers to compare the changes I would be making. My plan was to change one thing at a time and run a benchmark to tell if the performance was 1) better, 2) worse, or 3) not impacted. MySQL has an easy-to-use profiler for getting this information. Inside the MySQL CLI, you can issue the command:

SET profiling=1;

Any subsequent queries you run will now be profiled. You can see a listing of queries you want to know more about by typing:

SHOW profiles;

This command will show an index of queries that have run, along with their associated Query_ID. To view more information about a particular query, issue the following command replacing x with the Query_ID:

SHOW profile FOR QUERY x;

Here is an example output:

+------------------------------+----------+
| Status                       | Duration |
+------------------------------+----------+
| starting                     | 0.000094 |
| checking permissions         | 0.000003 |
| checking permissions         | 0.000002 |
| checking permissions         | 0.000001 |
| checking permissions         | 0.000003 |
| Opening tables               | 0.000021 |
| System lock                  | 0.000008 |
| init                         | 0.000039 |
| optimizing                   | 0.000012 |
| statistics                   | 0.000717 |
| preparing                    | 0.000014 |
| Creating tmp table           | 0.000023 |
| executing                    | 0.000002 |
| Copying to tmp table         | 0.016192 |
| converting HEAP to MyISAM    | 0.026860 |
| Copying to tmp table on disk | 2.491668 |
| Sorting result               | 0.269554 |
| Sending data                 | 0.001139 |
| end                          | 0.000003 |
| removing tmp table           | 0.066401 |
| end                          | 0.000009 |
| query end                    | 0.000005 |
| closing tables               | 0.000011 |
| freeing items                | 0.000040 |
| logging slow query           | 0.000002 |
| cleaning up                  | 0.000015 |
+------------------------------+----------+

In one iteration of my SQL query, I was spending an excessive amount of time “Copying to tmp table”. After reading the article http://www.dbtuna.com/article/55/Copying_to_tmp_table_-_MySQL_thread_states, I was able to isolate the cause of this to an ORDER BY clause in my query that wasn’t strictly necessary. In the example output above, not too much exciting is going on, which is a Good Thing.

For a comprehensive listing of thread states listed in the Status column, view: http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html

Now that I know my query is as optimized as it can be, it’s time to pull out the bigger guns. On to plan B – consolidating those INSERT statements!

An INSERT statement, though seemingly executing instantaneously under small loads, is comprised of many smaller operations, each with its own cost. The approximate expense of these operations is the following (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html):

  • Connecting: (3)
  • Sending query to server: (2)
  • Parsing query: (2)
  • Inserting row: (1 × size of row)
  • Inserting indexes: (1 × number of indexes)
  • Closing: (1)

As you can see, connecting to the server, sending the query, and parsing are relatively expensive operations. In the script I was modifying, 300k INSERT statements were generating 300k records. Fortunately for us, MySQL doesn’t force our records to be 1:1 with our INSERT statements, thanks to its support for inserting multiple rows per statement. To use this feature, instead of having three INSERT statements:

INSERT INTO foo (col1, col2) VALUES (1, 1);
INSERT INTO foo (col1, col2) VALUES (2, 2);
INSERT INTO foo (col1, col2) VALUES (3, 3);

We can instead coalesce them into a single INSERT statement:

INSERT INTO foo (col1, col2) VALUES (1, 1), (2, 2), (3, 3);

How many values can we coalesce into the same INSERT statement? This isn’t driven by a maximum number of records, but rather by the bulk_insert_buffer_size server system variable: http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_bulk_insert_buffer_size. This can be modified, but the default is 8388608 bytes. The exact number of records will vary depending on the number of columns and the amount of data being inserted into those columns. I conservatively chose to coalesce 5k records at a time. I tried to bump this to 10k, but I encountered an exception when I exceeded this server system variable’s maximum.
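
You can check the current value from the MySQL CLI, and raise it if you have sufficient privileges:

SHOW VARIABLES LIKE 'bulk_insert_buffer_size';
SET GLOBAL bulk_insert_buffer_size = 16777216; -- e.g. 16MB; requires the SUPER privilege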

With my INSERTs coalesced, I was able to reduce my total number of INSERT statements to 60 (300k / 5k). This yielded massive performance boosts: the query went from taking over an hour to run to completing in just 2 minutes. Quite a nice trick, considering the data is unchanged.
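
My script isn’t shown here, but the chunking technique itself is straightforward. Here is a minimal Ruby sketch using the mysql2 gem, assuming records is an array of [col1, col2] pairs and reusing the hypothetical foo table from above:

require 'mysql2'

client = Mysql2::Client.new(host: 'localhost', username: 'user', database: 'mydb')

# Build one multi-row INSERT per 5k records instead of 300k single-row INSERTs.
records.each_slice(5_000) do |chunk|
  values = chunk.map do |col1, col2|
    "('#{client.escape(col1.to_s)}', '#{client.escape(col2.to_s)}')"
  end.join(', ')
  client.query("INSERT INTO foo (col1, col2) VALUES #{values}")
end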

Is there room for improvement? Absolutely. A statement executed 60 times may be worth preparing, or wrapping inside a transactional block. My real-world tests didn’t yield a significant enough performance boost to make these complexities worth implementing. This may not be true with data in higher orders of magnitude, or with different schema layouts. MySQL also understands INDEX hints, which allow you to suggest indexes that may be missed by the query planner, or to force the inclusion or exclusion of an index despite what the query planner thinks! (http://dev.mysql.com/doc/refman/5.0/en/index-hints.html)
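
Index hints attach to the table reference in a query; the table and index names below are hypothetical:

SELECT col1, col2 FROM foo USE INDEX (idx_col1) WHERE col1 = 1;
SELECT col1, col2 FROM foo FORCE INDEX (idx_col1) WHERE col1 = 1;
SELECT col1, col2 FROM foo IGNORE INDEX (idx_col1) WHERE col1 = 1;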

Speaking of indexes, if any exist (UNIQUE, BTREE, or otherwise), they can be dropped while the mass INSERT runs and added back afterwards, side-stepping the “1 × number of indexes” operational hit on every row.
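
Sketched with a hypothetical index name, that dance looks like this:

ALTER TABLE foo DROP INDEX idx_col1;
-- ...run the bulk INSERT statements here...
ALTER TABLE foo ADD UNIQUE INDEX idx_col1 (col1);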

In the next order of magnitude, I will probably have to rethink my approach of using INSERT statements to load data. According to the MySQL documentation, LOAD DATA INFILE is “roughly 20 times faster” than a MySQL INSERT statement. My script would no longer generate statements, but rather output to a file in a comma-delimited format. This could then be loaded, assuming appropriate permissions are in place.
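
A sketch of that load, assuming the FILE privilege and a server-readable path:

LOAD DATA INFILE '/tmp/foo.csv'
INTO TABLE foo
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(col1, col2);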

Happy profiling!

Computers, Open-source, Ruby, Web

Making Rails Routes Fantastic

I took a week off from work to move into our new house. It was a time of rest and relaxation, despite the chaos that moving can bring. I’ve had a few personal projects on the back burner, but never seemed to have the time or the energy to make much progress. Recently a talk with a friend reminded me how important completing those pet projects can be for your personal happiness. I’m proud to present the completion of an idea I’ve had for a while: Routastic.

What is Routastic? It serves as an interactive Rails routes editor. Simply put, I got tired of the pattern of modifying config/routes.rb, then running rake routes and grepping for some result. This is completely inefficient. My inspiration came from the beautiful Rubular.com and its interactive regular expression building. It’s quick. It’s painless. It’s a valuable tool for everyday programming.

Please check out http://routastic.herokuapp.com/ and let me know how I can improve it.

Special thanks to Avand Amiri for suggesting the name (despite the name screaming Web 2.0, it actually is quite memorable!)

Events, Family, Open-source, Personal, Software, Thoughts

Working for a Development Firm is Like Being a Rented Sports Car

As my last post alluded to, I am leaving my current development firm. The reason is primarily a boost in earning potential at another employer, but also a culture change. To explain the culture at a development firm, I created this analogy:

Imagine you rent a Mustang, or a Corvette. (Of course you are gonna get the insurance!) What are you going to do with that car? Everything that you fucking can! You are gonna imprint the gas pedal into the floorboard, and drive fast and reckless. After all, you have to get every dime you can out of your rental before your time expires, right?

Now imagine you own a Mustang, or a Corvette. Yeah, you would probably hot dog it, but it is also your purchase, so if you wreck it, you are gonna be upset with yourself. In other words, you are going to maximize your purchase by caring for your vehicle, and obeying the speed limit (most of the time, anyway).

I have just described the difference in my view between an internal development team and an outsourced development team. Clients want to maximize that dollar when they outsource, which is done by getting the most work in the least amount of billable hours. They want the sports car rental. They aren’t going to set a moderate pace; they are going to speed! I’m not saying that all development firms, or all clients, are like this (I’ve worked with great clients in the past). But I am saying there is a struggle between maximizing value and being realistic about what you promise.

How does a client pick your development firm? By your firm being the lowest bid. Firms understate the hours needed for the work. They over-promise features on an unrealistic deadline. When Company A quotes $100k under Company B, Company A gets the work. And the client isn’t going to be cool with missing deadlines, or cutting functionality. So now management is in a battle with the client, who is pissed off because the original bid was unrealistic and wants the problem rectified. That shit rolls downhill to you – the developer.

And I can tell you, it’s not possible to write good code faster. Someone micro-managing me, asking me what I’m doing every five minutes, isn’t making me any more productive.

There are lots of companies that push hard. You can make a good living working for these places as long as the compensation or bonuses are commensurate with the work that you put in. But busting your ass all day, every day – every day feeling 10 hours long – every day full of epic code pushes and near-impossible deadlines met in the 11th hour – that is a young man’s game. That is pretty appealing when you are 22, fresh out of college, and eager to prove yourself to the world. Stressing out at 4:50 on a Friday, trying to get something delivered while your wife and daughter patiently wait for you to get off work, just isn’t worth it. I’d rather enjoy my time with them. I’m not mad about it – it’s just how the game is played.

Which is why this Sports Car is up for ownership. He is done with the rental game, being driven too hard for too long. He wants a nice garage somewhere, and a driver that just takes him out on Sundays for a trip around town. I want to spend time with my daughter while she is still young, instead of delivering some milestone that I won’t remember in a month. If I wanted a stressful culture, I would have worked for a startup. Then at least I’d have some small chance of hitting it big when we get bought by Google.

Open-source, Personal, Software, Thoughts

The Great ICS Upgrade Scandal: Everyone Just Calm Down

I have been hearing an increasing amount of chatter lately about the infamous Ice Cream Sandwich (ICS) delays for Android. I want to discuss the actual impact, and propose some resolutions to this problem.

The article that inspired me to write this is Jason Perlow’s post “I’m sick to death of Android“. Hopefully that title is hyperbole, but it does address the primary issue that I have with people complaining about ICS delays – I don’t see it as show-stopping. Name the new features that are in ICS. How is this OS upgrade going to change your day-to-day phone experience? Sure, it would be nice, and there are probably plenty of small touches, but this isn’t revolutionary.

Jason is the (proud?) owner of a Motorola Xoom tablet and the Samsung Galaxy Nexus. The former’s maker was recently acquired by Google, and the latter is a Google Experience handset, meant to be a developer reference device. He argues that not receiving timely updates has caused him to “throw in the towel”.

I can sympathize with him about not getting updates on his Galaxy Nexus device, as its primary marketing angle appears to be being “first” when it comes to updates. If I had dropped the money on that phone, I would be upset if major updates weren’t being released. However, the Galaxy Nexus already has ICS, and the updates he is missing are “bugfix iterations”. Not too exciting. I feel less bad for him about his Motorola tablet. Unless Jason is clairvoyant, he didn’t buy the Motorola tablet because of its strong candidacy for timely updates from Google after they acquired Motorola.

The Problem

ICS was released by Google in October 2011 – six months ago – yet it still accounts for only 1.6% of the Android version distribution. I can’t defend that. It is a red flag for major distribution problems. Apple’s iOS adoption rate reached 61% in only 15 days, and people are tempted to draw a comparison. Google’s Android and Apple’s iOS are both mobile phone platforms; however, they operate on completely different distribution models. Android was never meant to be a closed ecosystem like iOS. You can’t install iOS on non-Apple hardware. You can with Android.

I think a more apt comparison is between Google’s and Microsoft’s distribution models. Microsoft makes the Windows operating system, and hardware manufacturers install it on their devices. It’s not exactly the same, since Microsoft charges for upgrades, and you bypass the hardware vendor to install the upgrade on your device. The mobile carrier middle-man is also non-existent in the Microsoft model.

When Google releases an Android OS upgrade, handset manufacturers push it to their own devices when they are ready. Further, the mobile carrier may withhold a device’s OTA update until it deems it ready (or even necessary). Handset manufacturers have clearly prioritized selling new devices over supporting current ones. I’m sure they have run the numbers and made this decision because it yields the most profit. They are a business, after all. Apple pushes its updates because it gets a cut of every App Store sale, and a failure to upgrade a device is a potential loss of revenue.

Why would a mobile carrier dedicate resources to deploying an OTA update for devices that are “working just fine”? It comes down to money again, and their decision is clear. Apple probably provides monetary or exclusivity incentives to the mobile carriers to push its updates. There are many Android phones, but only one iPhone, so carriers probably acquiesce to Apple’s demands.

Solutions

So how can we make this work, without abandoning the entire Android concept over just this one issue?

Incentivize upgrades for carriers and handset vendors. What if OS updates were not free, as in the Microsoft model? A nominal fee for upgrading may offset the handset manufacturer’s and carrier’s costs of supporting such an upgrade. Businesses like money, and Ice Cream Sandwich is worth something to me, especially given that most of us are locked into a two-year contract anyway. I would rather put some money towards an upgrade now than wait until my contract runs out to upgrade to a device that has the update.

Educate ourselves. There is no correlation between a handset manufacturer’s sales and its past performance on OS upgrades. This doesn’t seem to be an issue for the majority of consumers with Android devices. Without it affecting sales, there is little reason to divert resources into maintaining already-sold devices.

Open the device boot-loaders. Maybe OS upgrades aren’t the responsibility of handset manufacturers or mobile carriers at all, as in the Microsoft model. If people who wanted the OS upgrade had a way to load the update themselves, this would act as a pressure release valve for the current scenario. The idea of a locked boot-loader seems archaic anyway, and is rooted in fear. Let consumers own their own devices and do with them as they please.

Make a kickass OS upgrade, and drive consumer demand. Ice Cream Sandwich just seems so lackluster to me. (Maybe I stopped believing it was so cool to keep from going crazy.) Short of a few new features, there isn’t anything game-changing about this release. Android has plenty of problems that are within the realm of the OS to address. Give me greatly improved battery life, blazing fast performance, zero boot time, fantastic reception, FM radio, overclocking abilities; something – anything – to get me excited about an upgrade. I don’t see ICS changing the day-to-day use of my phone in any meaningful way, and thus I’m not rallying hard for it on my device. I can’t imagine I am alone in patiently waiting for this meek update.

Forget UI customizations; the differentiator should be upgrade latency. People have prophesied about a race to the bottom happening for Android devices the same way it did for PCs. Manufacturers are differentiating themselves in meaningless ways, such as skinning the stock Android UI, or building useless shit that consumers don’t care about. These customizations prolong upgrade turnaround times, when in fact manufacturers should be doing the opposite. As OSNews.com’s Thom Holwerda states: “they’re wasting considerable resources on useless and ugly crap that does nothing to benefit consumers. Android may have needed customisation a number of versions ago – but not today. ICS is ready as-is. TouchWiz and Samsung’s other customisations add nothing.” Instead of scraping the bottom of the bucket for ideas on how to differentiate, let’s have one manufacturer try this. Hopefully stronger sales would substantiate the idea that consumers care about OS upgrades.

Acknowledge that the lifespan of a phone is only two years. The predominant cell phone sales model in the US is one of subsidized hardware. You pay inflated monthly prices to offset the low up-front purchase cost of your device. Most people upgrade devices at the end of their contract period, since the inflated subsidized price never drops anyway. It is in your best interest to have the latest and greatest, because the current model is so abusive to consumers. That being said, the average lifespan of a phone is around two years. How many major OS releases will occur in that timespan? Probably just one. Maybe this short lifespan doesn’t justify the need for these devices to be upgraded at all. Remember that computer you may have bought because it had extra slots to upgrade the memory? Did you actually fill those slots, or did you just buy a newer, faster computer a few years later instead?

Final Thought

So Jason, enjoy your 2.3 experience, because it is probably near identical to the 4.0 experience you are dying to get. I wouldn’t throw in the towel on Android yet just because ICS is taking a while to come out. It will get here, and as soon as Google is hurt by lack of adoption, they will take action. I hope that my solutions provide some food for thought on how to fix the current problem. Instead of compulsively pressing the “Software Update” option, I’m going to enjoy my experience and stop letting the media dictate how I should feel. Though “fragmented” we Android users may be, an app targeting the 2.1 platform can run on 97% of current devices. That is what developers will be targeting, and I’m sure I’m not missing much from the other 3% of apps that I can’t run before I receive my update.