Over time, our company has narrowed in on a mission to celebrate software and the people who make it. We've become much more developer-focused. This shift in focus needed a public-facing reboot, so earlier in the year we committed to rebranding ChallengePost as Devpost.

How it played out

It seems easy, right? Just change the logo, some CSS and text, fiddle with DNS, redirect to a new URL and you're done!

Yes, it could be that easy, but that's largely dependent on the context. For our rebrand, that context included many disparate factors like our users, product, business needs, infrastructure, data, integrated services, email deliverability, SEO, code base, etc. Each one of these factors comes with its own caveats and issues, and just being aware of them is hard enough. Coordinating the changeover to minimize potential problems required a lot of planning.

I created a Trello board to capture all the concerns we needed to address. In many cases, I simply listed what I knew to be the problem and what I believed to be its importance. The list included things like:

  • Facebook users must be able to log in on Devpost
  • Hackathon homepages must render on the new domain
  • Redesign the site with new logo and theme
  • Images served from CDN via new domain
  • Automated emails are delivered from the new domain
  • Spam detection service integrated with new domain
  • We receive subscribe/unsubscribe notifications from Mailchimp on the new domain
  • Company blog hosted on new domain

Oh yeah, and the business team also needed us to perform this switch with minimal impact. In other words, as fast as possible.

To accomplish this, we spun up sister versions of the site for our staging and production environments. This meant Devpost could be live well in advance of the switch without concerns about DNS propagation delays. We were also able to upgrade key dependencies of our architecture (like the versions of Ruby and Redis, for example) on the Devpost stack and troubleshoot those changes without affecting our production environment.

We also needed several weeks to work through changes in the codebase. In addition to the surface-level changes, like the new site design, we needed to refactor out a number of assumptions baked into our business logic and data - most notably, the domain on which we served the site. Making these things configurable allowed us to run the site on arbitrary domains - something that might be easy for a brand-new business but, for us, was not straightforward. It meant working through several years of accumulated business and technical decisions that were impeding our current needs.
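The core of that refactor was moving the serving domain out of the code and into configuration. Here's a minimal sketch of the idea; the names (`SITE_HOST`, `canonical_url`) are illustrative, not our actual code:

```ruby
# Minimal sketch of making the serving domain configurable.
# SITE_HOST and canonical_url are illustrative names, not our real code.

# Before: the domain was baked into URL helpers, mailers, and jobs:
#   "https://challengepost.com#{path}"

# After: read the host from the environment, so the same codebase can
# run on challengepost.com, devpost.com, or a sister staging domain.
SITE_HOST = ENV.fetch("SITE_HOST", "devpost.com")

def canonical_url(path)
  "https://#{SITE_HOST}#{path}"
end

puts canonical_url("/hackathons")
```

In practice this kind of assumption hides in many places at once - mailers, cached pages, OAuth callbacks, hard-coded links in content - which is why the refactor took weeks rather than days.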

As with all choices in tech, there are tradeoffs, and not everything we needed could be accomplished in advance. To maintain continuity with our OAuth providers, like Facebook and GitHub, we had to switch settings manually during the transition. By running two versions of the site, we also needed to ensure data continuity. We could have tried a number of things, like sharing the database layer or setting up a parent-child relationship between the stacks. After working with the team at EngineYard, we decided the easiest approach for us to pull off would be to shut down site traffic and run scripts to do a one-time import of the data between the stacks during the transition.
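In essence, the one-time import meant dumping the database on the old stack and loading it on the new one while traffic was shut off. A toy sketch of that shape - the hosts, database name, and flags here are placeholders, and the real scripts also handled credentials and verification:

```ruby
# Toy sketch of the one-time data import between stacks. Hosts,
# database names, and flags are placeholders, not our real setup.
def import_commands(db:, source_host:, target_host:)
  dump = "mysqldump -h #{source_host} --single-transaction #{db} > /tmp/#{db}.sql"
  load = "mysql -h #{target_host} #{db} < /tmp/#{db}.sql"
  [dump, load]
end

# With traffic shut off, a real run would execute each command
# (e.g. via system) and verify the data before reopening the site.
import_commands(db: "app_production",
                source_host: "old-stack-db",
                target_host: "new-stack-db").each { |cmd| puts cmd }
```

Shutting off traffic first is what makes a simple dump-and-load safe: nothing can write to either database mid-copy, so there's no drift to reconcile afterward.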

When we realized we'd have a number of steps to work through during the transition, we wrote a playbook to orchestrate the changes. Steps included "Put up maintenance page", "Shut off background processes", "Trigger mysql backup", "Kick off Chef recipes", and "Enable nginx redirects". This level of detail meant we could assign roles to each member of the development team. We also performed several "dress rehearsals" on our staging environments so every member of the team had experience working through what needed to take place and the potential problems that might arise.
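In spirit, the playbook was just an ordered checklist with an owner for each step. A toy version - the step names come from our real playbook above, but the owners here are made up:

```ruby
# Toy version of the transition playbook: ordered steps, each with an
# assigned owner. Step names are real; the owners are placeholders.
PLAYBOOK = [
  ["Put up maintenance page",       "alice"],
  ["Shut off background processes", "bob"],
  ["Trigger mysql backup",          "carol"],
  ["Kick off Chef recipes",         "dave"],
  ["Enable nginx redirects",        "erin"],
].freeze

PLAYBOOK.each_with_index do |(step, owner), i|
  puts format("%2d. %-32s -> %s", i + 1, step, owner)
end
```

Writing it down this plainly is what made the dress rehearsals possible: everyone could run their steps in order on staging, and any ambiguity in a step surfaced long before Game Day.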

On Game Day, we got online before 6am ET (I was at the office at 5:15am!) to make final preparations before kicking off the rebrand at 6:30 in the morning. After some tense moments of running scripts, changing settings, and monitoring the system, we were live for the public before 7:15am. We spent most of the day on followup tasks, including additional settings changes and CSS tweaks we had punted on previously.

By noon, at what felt like the end of the day, it was time to celebrate with champagne from the boss and ice cream cake from my wife. Success!

Accomplishments that I'm proud of

I'm most proud of how my team came together to pull this off. We performed the switch relatively quickly, with minimal issues save for a few email settings that delayed delivery of certain content for a small percentage of users. Overall, it was a great show of solidarity and efficiency.

What I learned

No matter how much you test, the only place where production issues happen is in production.
