To prevent disruptions and scale up its service while keeping costs down, Twitter has had to drastically change its core infrastructure, adopting open source tools along the way.
Twitter processes about 6,000 messages a second, adding up to more than 500 million messages per day, or about 3.5 billion a week. And at one peak, Twitter handled a record 143,000 messages in a single second during the airing of the movie "Castle in the Sky" in Japan earlier this year, said Chris Aniszczyk, head of open source computing at Twitter, during LinuxCon Europe in Edinburgh on Monday.
Handling this number of messages has been challenging for the company, Aniszczyk said. Twitter started out in 2006 using a monolithic Ruby on Rails application rather than a distributed platform. That worked fine at first because the service wasn't that busy, but the setup led to growing pains in 2008, when a lot of fail whales -- the term Twitter uses to describe service disruptions -- started happening.
Twitter's engineers were able to keep up by essentially applying Band-Aids, Aniszczyk said. Things got really problematic, though, during the 2010 football World Cup, which was both a low point and a high point for Twitter. When the service broke 3,000 messages per second, it struggled to cope with the volume of messages being sent.
"It was painful because from an engineering perspective it was all hands on deck," Aniszczyk said. Anytime anyone scored a goal or got a red or yellow card, the site would go down, he said.
So things needed to change. After analyzing the situation, Twitter determined the problem was using one code base to handle everything from managing raw database information to rendering the site graphically. "What we were essentially doing to keep things going was throwing a lot of machines at the problem. Not the best solution because that gets expensive," Aniszczyk said.
Rather than improving the system and rolling out new features, Twitter's engineers went on "whale hunting expeditions" to solve specific failures, which wasn't really what the company needed to do.
Twitter ultimately decided that it was time to invest in new infrastructure and eventually doubled down on the JVM (Java Virtual Machine). This allowed its engineers to break the monolithic, single application into separate services, such as one that specifically handles messages, Aniszczyk said. Engineering is now organized into mostly self-contained teams that can run independently.
To cut costs and reduce the number of machines it uses, Twitter also turned to Apache Mesos, which started as a research effort at the University of California at Berkeley. Mesos is a cluster manager that lets multiple processes share the same machine, so hardware can be used more efficiently to save money, Aniszczyk said.
Twitter also used other tools such as Netty, designed to help create high-performance protocol servers, and Scalding, which makes it easy to write big data jobs. Twitter hasn't been able to move away from the Ruby on Rails application framework entirely, but these tools, combined with switching its core infrastructure to the JVM, have helped the company avoid fail whales and improve performance, Aniszczyk said.
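Scalding itself is a Scala library that runs jobs on Hadoop, but the map-group-reduce shape of the big data jobs it simplifies can be sketched with plain Java streams. This is a simplified, single-machine illustration of the pattern, not Twitter's actual code:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCount {
    // Split text into words, group identical words, and count each group --
    // the same map/group/reduce pipeline a Scalding job distributes across
    // a Hadoop cluster.
    public static Map<String, Long> count(String text) {
        return Arrays.stream(text.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .collect(Collectors.groupingBy(
                        Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```

In Scalding, the same pipeline is written once in a few lines of Scala and executed as a distributed job, which is what makes it attractive for Twitter-scale data.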
"This pretty much has enabled us to grow as a company," he added. Twitter has over 2,000 employees, about half of whom are engineers, he said.
One of the lessons Twitter learned is that basing its infrastructure on open source is a good idea. "That is where you find the best software these days," said Aniszczyk, who added that it is also good to give back to the open source community, something Twitter does at twitter.github.io, which lists 100 public repositories.
Incremental change is also a good idea, he said. "It increases the chance of success if you do things in a small, piecemeal fashion," he said, adding that companies should also keep learning from universities.
Twitter's efforts to change its core infrastructure are a good example of how companies can benefit from open source technology, said Jim Zemlin, executive director of the Linux Foundation. "It keeps the cost down and it also allows them to meet very quickly the challenges that they have with the scale of infrastructure," he said.
Many businesses, including large companies such as Google, Amazon and Facebook, are creating open source hyperscale computing platforms, and they can borrow lessons learned from one another, Zemlin said.
Loek is Amsterdam Correspondent and covers online privacy, intellectual property, open-source and online payment issues for the IDG News Service. Follow him on Twitter at @loekessers or email tips and comments to firstname.lastname@example.org