SWAN Manager 3.0

It's back to programming lately at my job. I have taken it upon myself to reinvent SWAN Manager (yet again). This is its third iteration, and it has come a LONG way since 1.0. I love the satisfaction of figuring something out, and then implementing the logic in code. It is creating something from nothing. Today I finished working on the audience builder for our Portal. The layout on the Portal side is terrible, spanning across an LDAP branch and five different database tables. The logic cannot be inferred from the tables, and of course no source code or documentation is ever provided on our Portal platform. The beauty is, now that it is done (100% mapped to our functions), I can determine whether any user can access any part of a channel, or announcement, in Ruby.

In a (sparse) 150 lines of code, I have done what I imagine took thousands of lines of fragmented Java, written by many programmers over the years. There is a simplicity, and a poetry, that I enjoy when writing in Ruby. Maybe someday, folks will see the light.

For this version of our management website, I had the idea while driving home from work of making the interface for the Portal totally web-service driven, using sparse templates, and running the whole damn thing in a few "administrative" channels inside the Portal. It seems like a perfect fit. To start with, how can I expect other departments and other applications to integrate with the Portal if I am the administrator, and I don't even do it? Besides, it is a management application for the Portal, so what better place? Also, I can verify that the web services are operating as expected by having a living proof-of-concept.

Also new in this version, I have reimplemented the way Targeted Announcements are sent. Before, we had a Java class (that I threw a fit about until a certain company gave me the source code), which we modified to accept command-line switches for its parameters. This class (and its 10MB of dependencies) was "jarred up" (fuck you to Java) and placed in our management website. When someone filled out the nice announcement form, I would take all the parameters, build the switches on the fly, scp the jar files over to the server, and run a Java command from the bash shell. Needless to say, this sucked. Ruby and Java should never mingle. If I wanted to change the way the class was implemented, it was back to Java, and mucking around with an API I didn't understand (once again because the company is TERRIBLE at documentation).

I finally got smart, and decided to use Wireshark (thanks James!) and “listen” to the mysterious SOAP traffic occurring from my machine to the server and back. After a few minutes of isolating the traffic, the mystery was revealed as little more than a few dozen lines of XML. A lightbulb went off in my head, and I decided to use Ruby’s REXML library to construct this procedurally based on the form (from earlier) the user fills out and submits. The end result is a cleaner interface, no Java, no scp, or Bash environment, and best of all 9.99MB less space. Hold your applause.
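A minimal sketch of that REXML construction. The element names, namespace, and the "sendAnnouncement" operation here are placeholders; the real ones came straight out of the Wireshark capture.

```ruby
require "rexml/document"

# Build the SOAP envelope procedurally from form input.
# Element names and the operation name are illustrative only.
def announcement_envelope(subject, body_text)
  doc = REXML::Document.new
  doc << REXML::XMLDecl.new("1.0", "UTF-8")
  envelope = doc.add_element("soapenv:Envelope",
    "xmlns:soapenv" => "http://schemas.xmlsoap.org/soap/envelope/")
  soap_body = envelope.add_element("soapenv:Body")
  announcement = soap_body.add_element("sendAnnouncement") # placeholder op
  announcement.add_element("subject").text = subject
  announcement.add_element("body").text = body_text
  doc
end
```

From there it is one HTTP POST to the server, and no jars anywhere in sight.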

I also decided to really take a good, hard look at all of my models and associations, and made the startling discovery that the first significant portion of your project should be ironing out these associations. If you skimp here, your entire application will suffer. Badly. That is because you are laying the foundation, and if you do it wrong or half-assed, you have really missed the power of Rails.

After a few wonderful hours at home, self-medicated on NyQuil, I managed to get a "user" to be created with just a username. For instance, User.create(:username => 'bsimpson'). This in turn spawned off a frenzy of associated activities, including building the roles the user belongs to, checking the community groups the user is a member of, and building a list of announcements the user has authored. It is nice to build an application thinking about the associations between models first and foremost. The semantics pay off quickly, with actions such as user.groups, user.announcement_authorships, user.channels, etc.
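The real associations live in ActiveRecord and need a database, so here is a plain-Ruby stand-in just to show the traversal semantics described above. The model names mirror the app, but the data wiring is invented for illustration.

```ruby
# Plain-Ruby stand-in for the ActiveRecord models (illustrative only).
Group        = Struct.new(:name)
Announcement = Struct.new(:title, :author)

class User
  attr_reader :username, :groups

  def initialize(username)
    @username = username
    # In the real app, creating a user kicks off lookups against LDAP
    # and the Portal tables; here we just stub the associated collection.
    @groups = [Group.new("students")]
  end

  # Mimics user.announcement_authorships: announcements this user wrote.
  def announcement_authorships(all_announcements)
    all_announcements.select { |a| a.author == username }
  end
end
```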

Hopefully in a few more weeks, we will have the version 3.0 in production, and in use by a few of our channels.


Life has been busy lately. I have stopped teaching in our Continuing Education program, and focused my time on completing my degree starting this summer.

I am currently taking two classes that are 100% online. It has been an adjustment for me for a few reasons. First, being a student again is hard. There is a lot of shit to shovel. Second, I am seeing our systems (the Portal, WebCT, etc.) operate from the outside. I have to resist the temptation of "troubleshooting" mode, where I explore ways to make the process better, and just focus on the classes. Third, I never got real familiar with WebCT, and pacing myself and doing everything electronically is surprisingly harder than it sounds. I am having fun though, and that is what counts.

We are almost done setting up our new office. The desks are in place (including the cabinet doors, which we had to hunt down). Kristin’s new desktop is here and being loaded as I write this post. We are still picking out some more lighting, storage, etc to make the room a perfect office. I am even eyeballing one of those portable AC units to keep the temperature a little more comfortable. Pictures soon!


At work, we are on the edge of having resolved a lot of our Portal issues. In addition to performance and reliability improvements, there will be other subtle enhancements that I am anxious to look into further. These include a Facebook channel, mobile Targeted Announcements, a rich text editor, a better Email SSO experience, and resolution to some terrible technical problems that are unfixable right now. Who knows if these updates are of substance, or are just marketing bullets on a sales pitch. We are fully operational in our testing environment, so the switch should be happening within the next week, assuming testing goes well.

In other news, our garage sale has made us almost $300 so far, and the space we got back in our quaint house is quite impressive.

Windows 7 RC here I come…

One Step Forward and One Step Back

Another section of Frederick Brooks's The Mythical Man-Month seems to resonate with software development today:

“Lehman and Belady have studied the history of successive releases in a large operating system. They find that the total number of modules increases linearly with release number, but that the number of modules affected increases exponentially with the release number. All repairs tend to destroy the structure, to increase the entropy and disorder of the system. Less and less effort is spent on fixing original design flaws; more and more is spent on fixing flaws introduced by earlier fixes. As time passes, the system becomes less and less well-ordered. Sooner or later the fixing ceases to gain any ground. Each forward step is matched by a backward one. Although in principle usable forever, the system has worn out as a base for progress. Furthermore, machines change, configurations change, and user requirements change, so the system is not in fact usable forever. A brand-new from-the-ground-up redesign is necessary…”

“Things are always at their best in the beginning” – Pascal.

“That is the key to history. Terrific energy is expended – civilizations are built up – excellent institutions devised; but each time something goes wrong. Some fatal flaw always brings the selfish and cruel people to the top, and then it all slides back into misery and ruin. In fact, the machine conks. It seems to start up all right and runs a few yards, and then it breaks down” – C. S. Lewis

Maybe we are meant to throw our original ideas away and pave the road for their successors? It's not a failure, it's just the next step. When we implement a project, we do the best we can with the tools and knowledge we have at our disposal. But at some point, the original project becomes so outdated because of changing functionality or demands that it's time to start over.

I would love to see us make progress on the next iteration of our project with all of the knowledge we have garnered from the sins of our current Portal. We could make something truly viable and revolutionary for its constituents. It's just a matter of time and energy. Lately, it seems that the mindset is to continue monkey-patching our current system and praying it helps. I am a firm believer that if something is broken enough, it's just time to move on. We can only be as good as the limitations imposed upon us allow.

The Portal: A Success?

What a long and wonderful Christmas break. It felt like I was on a cruise without leaving my house. I ate everything in sight, and slept 12 hours a day. Since January 5th, I have been back at work and it is driving me insane.

I feel like I am approaching a stalemate in what is possible with our Portal. We have gotten most of the internal plumbing worked out, and for the most part we are now limited by outside factors. I constantly review the state of our Portal, and try to eliminate the outside factors barring our success. Let me define success:

  • Positive feedback from constituents
  • Strong buy-in from departments and other content contributors
  • Tight integration into our workflow
  • Exclusive ability to solve business needs

Positive feedback from constituents:

Feedback is essential to have – positive or negative. I have heard some people report that the single sign-on functionality is nice, but I have heard many more people complain about various issues. I suppose I need to build a survey channel and back end to properly gather metrics on what needs to change.

Strong buy-in from departments and other content contributors:

We have the traffic to show people are using it – but it is because we force them to. We have a captive audience, and a difference of opinion about what that means. A captive audience to me is a bad thing. If I am forced to use something, I am overly critical about that thing simply because I didn't have a choice. The higher-ups see a captive audience as a good thing. It forces traffic in, which is the "dangling carrot" to get people to update their content and use the features of the Portal. We have statistics to prove that this approach is not working. 10,000 hits a day, and practically zero buy-in from other departments and authorities on campus. Without buy-in, the content inside the Portal never changes. Instead of being a dynamic, exciting "marketplace" of ideas and information, it looks like a static webpage.

Tight integration into our workflow:

Let's assume that departments wanted to update their content. The technical knowledge required to do so has been demonstrated to be too high to make it worthwhile. This is a problem with our choice of Portal – the features that count are lacking. To mitigate this factor, we provide a website that streamlines the process of updating a channel, and have even offered to update channels ourselves if we are just sent the information. We still have little buy-in from departments.

But I have a feeling that departments do have information they want disseminated, just outside of the Portal, on the main website. Here we have conflicting information systems. Currently, almost all of the content for the school resides in external websites. Without a content management system, this information is hard-coded into HTML files. This means that the data cannot be extracted and used inside the Portal, because all of the layout and design is combined with the information. Updating channels in the Portal thus boils down to a duplication of effort.

Exclusive ability to solve business needs:

Our Portal should be able to provide services to students in a way that is exclusively possible through the Portal's framework. There are several crippling limitations to developing inside a Portal environment.

The first is the nature of the HTTP request model. You click a link, and the entire page refreshes. Each channel is supposed to be its own system, so this is no good. Immediately you are reliant exclusively on AJAX to make partial updates to your channels. This increases the complexity of your application tremendously.

Second is the issue of real estate. If you are looking at, say, nine channels on the screen, you have a maximum width and height of no more than 300 pixels. This is like using a cellphone screen to navigate your application.

Third, the Portal is already bloated. With multiple stylesheets, external javascript files, and nine channels rendering on a screen at every page-click, not much room is left for any further overhead.

The business needs that are solved: consolidation of system credentials, by providing single sign-on into applications, and consolidation of the location of those systems, since the Portal is the entry point.

Where does this leave us?

How many Portals do you use in a day? It isn't a popular web model, with the notable exception of iGoogle. I would like to add more single sign-on functionality, and slowly replace "dead channels" with feeds consumed from the rest of our information sources. Of course, our sources aren't database driven; making them so would be the most positive change possible for the external website and for the Portal.

SWAN Manager v2.0

It has been a trying last few weeks, but I have finally rolled out the new SWAN Manager. It ended up being almost a total rewrite, and I walked away with a lesson learned:

  • Even on a total rewrite, I should have consistently been checking in my code at "checkpoints". Instead, I waited until everything was working, then did one single massive checkin. Inevitably I missed stuff, forgot to clean up stuff, and had to resolve a couple of SVN conflicts where it just didn't understand what the hell happened.

I have highlighted a few of the more significant changes in screenshots below:

I based the new login screen off of Google Docs login. It shows at a glance what services are offered inside the Manager, and is a little more friendly than just a login box. Also, you can see the new tabbed interface at the top.
The User model underwent the most significant of changes. First, I decided that it was running way too slow, so I reimplemented the way it looks up data, from an indirect (and unreliable) method to querying the sources directly. Also, the data is cached using memcached for even more speed enhancements.
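The caching follows a simple cache-aside pattern. In this sketch a Hash stands in for the memcached client (the real code would use a memcached library's get/set); the key naming is an assumption.

```ruby
# Cache-aside lookup sketch. CACHE is a stand-in for memcached.
CACHE = {}
$lookups = 0 # counts how many times we hit the slow path

def expensive_directory_lookup(username)
  # Placeholder for the direct LDAP / database query.
  $lookups += 1
  { username: username }
end

def cached_user_lookup(username)
  key = "user:#{username}"
  return CACHE[key] if CACHE.key?(key) # cache hit: skip the slow query
  CACHE[key] = expensive_directory_lookup(username)
end
```

The second and subsequent lookups for the same user never touch the directory at all.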

An area I am particularly proud of is the display of the icons the user should see. I take each role name, and do a Net::HTTP fetch on them, checking for a 200 result, and displaying it. This is all handled in a helper.
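Roughly, the helper might look like this. The URL scheme and base address are assumptions for illustration; the real helper's paths differ.

```ruby
require "net/http"
require "uri"

ICON_BASE = "http://portal.example.edu/icons" # hypothetical base URL

# Derive a candidate icon URL from a role name.
def icon_url_for(role)
  "#{ICON_BASE}/#{role.downcase.gsub(/\W+/, '_')}.png"
end

# Show the icon only when the fetch came back 2xx (e.g. 200 OK).
def display_icon?(response)
  response.is_a?(Net::HTTPSuccess)
end

# One Net::HTTP fetch per role name; returns an <img> tag or nil.
def icon_tag_for(role)
  uri = URI.parse(icon_url_for(role))
  response = Net::HTTP.get_response(uri)
  %(<img src="#{uri}" alt="#{role}" />) if display_icon?(response)
end
```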

The channel model underwent significant changes as well. It has always directly queried for the data on each request, instead of caching the results. In addition to caching and other performance tweaking – I now know a lot more about the channels themselves. The entire model operates as an “acts_as_tree” with parent, and children nodes to show the sections, and sub-sections of a channel. If you can edit a section, it shows up as a link.
The announcements controller has been completely reworked as well. Before, the user didn't have the ability to do things like send to a role, or choose delivery/expiration times, or a destination. Now the user gets to pick all of this (Population Selection with a parameter is shown). A message can be sent with just a few clicks. The announcement model uses "acts_as_state_machine", a seemingly dead but very useful plugin. The announcement goes through several states, with validation checking and routing automatically handled. I have to thank Matt for turning me onto the idea.
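The acts_as_state_machine plugin needs Rails, so this plain-Ruby stand-in just illustrates the idea: an announcement walks through named states, and each event is only legal from a specific predecessor. The state and event names here are invented for the example.

```ruby
class Announcement
  # Each event maps one legal source state to its destination.
  TRANSITIONS = {
    approve: { from: :drafted,  to: :approved },
    deliver: { from: :approved, to: :sent },
    expire:  { from: :sent,     to: :expired }
  }

  attr_reader :state

  def initialize
    @state = :drafted # every announcement starts as a draft
  end

  # Generate one bang method per event that enforces the legal transition.
  TRANSITIONS.each do |event, rule|
    define_method("#{event}!") do
      raise "illegal #{event} from #{@state}" unless @state == rule[:from]
      @state = rule[:to]
    end
  end
end
```

The plugin layers validation hooks and callbacks on top of this same idea.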

Here I have an image of the announcement wizard further down on the same page as the image above. Date selection is handled by a Rails plugin called "unobtrusive_date_picker". It allows some cool tricks like keyboard arrow navigation, and the ability to define starting/ending date ranges and minute increments in the select box.

Additionally, once the announcement is sent, rather than going to get a cup of coffee while the process runs (sometimes 10+ minutes), it is now backgrounded in a separate rake task.
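One way to do that kind of backgrounding is to spawn the rake task as a detached process, so the web request returns immediately. The task name and environment variable below are assumptions for illustration.

```ruby
# Build the shell command for the long-running delivery task.
# "announcements:deliver" is a hypothetical task name.
def background_announcement_command(announcement_id)
  "rake announcements:deliver ID=#{announcement_id.to_i} RAILS_ENV=production"
end

# Fire and forget: spawn the task detached so the request isn't blocked.
def background!(announcement_id)
  pid = Process.spawn(background_announcement_command(announcement_id),
                      out: "/dev/null", err: "/dev/null")
  Process.detach(pid) # reap the child without waiting on it
  pid
end
```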

All in all, I think that this is a world of difference from the previous SWAN Manager. This is stable, fast, and easy enough now that I feel it is something the campus as a whole can use without concern. And it will need to live up to its expectations as well, as we have a few departments already lined up to start using its functionality as soon as we give the green light.

After I pat myself on the back, I suppose it's time to get back to working on all those pesky channels…

Visions of Channels Dancing Through my Head

In the last few weeks, we have had some important milestones occur inside our campus Portal. These milestones lay the groundwork for what we can do in the future:

I rolled up my sleeves, and jumped into Jakarta’s HTTPClient. This is a replacement for our existing proxy server file which had some serious limitations. In our previous proxy server, we had no way to deal with cookies, headers, methods or mime types. All of these things are needed to have a transparent proxy server. HTTPClient can do all of these things, and more with ease. This significantly cuts down on code complexity in external channels.

We now know who is currently logged into the Portal in our external applications. The built-in functionality for doing this was utterly broken, and we were left to find another way. After much experimentation, we developed a working solution: setting the user's uid and username in a session cookie accessible to the ".clayton.edu" subdomain. Now any application we develop can access this information and provide truly meaningful content. We already have a channel that shows a faculty member their course survey statistics.
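Conceptually, the Portal side just emits a cookie scoped to the shared parent domain, so every *.clayton.edu application can read it. The cookie name and value format in this sketch are assumptions, not the real ones.

```ruby
require "cgi"

# Build a Set-Cookie header value carrying uid and username, scoped to
# the parent domain so sibling apps can see it. Name/format hypothetical.
def sso_cookie(uid:, username:)
  value = CGI.escape("#{uid}|#{username}") # escape the separator safely
  "swan_user=#{value}; Domain=.clayton.edu; Path=/"
end
```

An external Rails app then reads the cookie back, splits on the separator, and knows who is logged in without touching the Portal at all.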

Figuring out how to convert postbacks to partial updates (via AJAX) inside channels took a while to iron out. JavaScript's same-origin restrictions made this solution only as good as our proxy server (which just had a steroid shot in the arm). Languages like C# that try to take care of the magic of AJAX for you behind the scenes fail to deliver. We needed a language that provided the control and transparency to convert full postback requests to AJAX calls. Thanks to Rails and a few extensions, this is fairly straight-forward, even when routing through a proxy server.

Static HTML is the worst thing a channel can contain. It may look pretty, but these are “dead channels“, because the content never changes. Because it never changes, it isn’t interesting on subsequent visits. My goal is converting nearly all channels from static to dynamic content to have a truly alive Portal. This means consuming RSS feeds, pulling info from databases, LDAP, etc. Recently, I have deployed an in-house RSS feed reader, and I have adapted the “Academic Calendar”, and “HUB Knowledgebase” channels to use it.
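The core of an in-house feed reader is small in Ruby, thanks to the standard rss library. This sketch parses an inline sample feed (so it runs without the network) and pulls out what a channel would render; the feed contents are invented.

```ruby
require "rss"

# Inline sample feed standing in for a real network fetch.
SAMPLE_FEED = <<~XML
  <?xml version="1.0"?>
  <rss version="2.0">
    <channel>
      <title>Academic Calendar</title>
      <item><title>Registration opens</title><link>http://example.edu/reg</link></item>
      <item><title>Finals week</title><link>http://example.edu/finals</link></item>
    </channel>
  </rss>
XML

# Parse a feed and return just the title/link pairs a channel displays.
def feed_items(xml)
  feed = RSS::Parser.parse(xml, false) # false = skip strict validation
  feed.items.map { |item| { title: item.title, link: item.link } }
end
```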

A few channels down, hundreds more to go. Moving to dynamic channels would be fairly easy, except that probably 90% of our external website's information exists only in static HTML files. This has to migrate into a CMS, and I see that as the next hurdle. I have a fairly good idea of what needs to be done to make a CMS integrate into the Portal and campus website; the barrier now is getting the time, resources, and blessing to tackle such an undertaking.