Login Love for your Android App

We love UI components, as you may know from the pre-built UI classes in our iOS SDK. Today, we are bringing that same love to Android. We are launching ParseLoginUI, an open-source library project for building login screens on Android with the Parse SDK. This ultra-customizable library implements screens for login, signup, and password help. We are releasing it as a standalone project (apart from the Parse Android SDK) so that you have the flexibility to tweak its look and feel when you integrate it into your app.

To use ParseLoginUI with your app, you should import the library project, and add the following to your app’s AndroidManifest.xml:
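A minimal registration, assuming the library's default ParseLoginActivity class name (check the project's README for the exact snippet), looks roughly like:

```xml
<!-- Register the library's login activity inside <application> -->
<activity
    android:name="com.parse.ui.ParseLoginActivity"
    android:label="@string/app_name"
    android:launchMode="singleTop" />
```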


Then, you can show the login screen by launching ParseLoginActivity with these two lines of code:

ParseLoginBuilder builder = new ParseLoginBuilder(MyActivity.this);
startActivityForResult(builder.build(), 0);

Within ParseLoginActivity, our library project will automatically manage the login workflow. Besides signing in, users can also sign up or ask for an email password reset. The default version of each screen (login, signup, and recover password) is shown below.

Basic Login Screens

Let’s see how we can configure the login screens to look different. Make the following changes to AndroidManifest.xml:


    <!-- Added these options below to customize the login flow -->
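The option names below are a sketch based on the library's meta-data conventions, not an exact copy of our manifest:

```xml
<activity android:name="com.parse.ui.ParseLoginActivity">
    <meta-data
        android:name="com.parse.ui.ParseLoginActivity.APP_LOGO"
        android:resource="@drawable/my_app_logo"/>
    <meta-data
        android:name="com.parse.ui.ParseLoginActivity.FACEBOOK_LOGIN_ENABLED"
        android:value="true"/>
    <meta-data
        android:name="com.parse.ui.ParseLoginActivity.TWITTER_LOGIN_ENABLED"
        android:value="true"/>
    <meta-data
        android:name="com.parse.ui.ParseLoginActivity.PARSE_LOGIN_HELP_TEXT"
        android:value="@string/forgot_password_text"/>
    <meta-data
        android:name="com.parse.ui.ParseLoginActivity.PARSE_LOGIN_EMAIL_AS_USERNAME"
        android:value="true"/>
</activity>
```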

With these simple configurations, we’ve changed the app logo, added Facebook & Twitter logins, and changed the text shown for the password-reset link. We also enabled an option to automatically save the email address as the username, so that you don’t have to manually save it in both fields of the ParseUser object. When this option is turned on, both the login and signup screens are also automatically updated to prompt for email address as username.

Customized Login Screens

Our Android documentation contains guides for both basic and advanced use cases. You can find the source code for ParseLoginUI at our GitHub repository. Try it out and let us know what you think!

Stanley Wang
June 25, 2014

dvara: A Mongo Proxy

We wrote dvara, a connection pooling proxy for Mongo, to solve an immediate problem we were facing: we were running into the connection limits on some of our replica sets. Mongo through 2.4 capped maxConns at 20,000, no matter how high it was configured. As the number of our application servers grew, so did the number of concurrent active connections to our replica sets. Mongo 2.6 removed this limit, but it was unfortunately not ready at the time (we’re still testing it and haven’t upgraded yet). Even if it were ready, each connection costs about 1MB, which takes away precious memory otherwise used by the database. A sharded cluster with mongos as the proxy was another path we considered. Enabling sharding may have helped, but that change would spill over into our application logic, and we use at least some of the features sharding restricts. We are experimenting with sharded replica sets in our environment, and from our experience we weren’t confident they would actually help with our connection limit problem. So we set out on what seemed like an ambitious, and in my mind difficult, goal: building a connection pooling proxy for mongod.

Down to the Wire

We started off with a simple proof of concept, working backwards from the legacy wire protocol documentation. We got it far enough to serve basic read/write queries in a few weeks. We attribute the speed at which we got the prototype working to using Go to build it. Go allowed us to write easy-to-follow code without paying the cost of a thread per connection, and without having to write callbacks or some other form of manually managed asynchronous network IO logic. Additionally, while our proxy prefers not to look at the bytes flowing through it or decode the BSON, for performance reasons, Gustavo Niemeyer‘s excellent mgo driver, along with its bson library, made it trivial for us to introspect and mutate the traffic when we needed to. The first such cases were the isMaster and replSetGetStatus commands. These commands return the member/host information the client uses to decide which members to connect and talk to, so we needed to replace the real host/ports with the proxy host/ports.
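As a sketch of that rewriting step (assumed shapes and names, not dvara's actual code), once a reply has been decoded into a generic map, the member addresses can be swapped for the proxy addresses that front them:

```go
package main

// rewriteIsMaster swaps each real member "host:port" in a decoded isMaster
// reply for the proxy address that fronts it, so clients discover the
// proxies instead of the real replica set members.
func rewriteIsMaster(reply map[string]interface{}, proxyFor map[string]string) {
	if hosts, ok := reply["hosts"].([]interface{}); ok {
		for i, h := range hosts {
			if real, ok := h.(string); ok {
				if p, found := proxyFor[real]; found {
					hosts[i] = p
				}
			}
		}
	}
	if primary, ok := reply["primary"].(string); ok {
		if p, found := proxyFor[primary]; found {
			reply["primary"] = p
		}
	}
}
```

The same mapping is applied to replSetGetStatus, which reports member addresses in a slightly different shape.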

Yet another command that needed special handling, and one of the known problems we had to solve, was the way Mongo 2.4 and earlier require a second, follow-up call for getLastError. Fortunately this got some much-needed love in 2.6, but through 2.4 mutation operations were essentially split into two parts: first, the mutation itself; and second, the getLastError command, which included some important options such as the write concern. Consider what a connection pooling proxy does: a client sends a command, we take a connection from our pool, proxy the command and the response, and put the connection back into the pool for someone else to use. A good proxy holds a connection from the pool for the least amount of time possible. Unfortunately the design of getLastError means we can’t do that, because getLastError is state that exists in mongod per connection. This design is awkward enough that the mongo shell actually needs special logic to ensure the state doesn’t get inadvertently reset. It was clear we’d need to maintain this state per connection in the proxy as well. Our implementation tries to preserve the semantics mongod itself has around getLastError, though once we’ve moved all our servers and clients to 2.6 and its new wire protocol this will be unnecessary.

Proxying in Production

An aspect we refined before taking this to production was auto-discovering the replica set configuration from the nodes. At first our implementation required manual configuration mapping each node we wanted to proxy. We always need such a mapping in order to alter the isMaster and replSetGetStatus responses mentioned earlier. Our current implementation builds it automatically, using the provided member list as a seed list. We’re still improving how this works, and will likely reintroduce manual overrides to support the unusual situations that often arise in real life.

One of the benefits of dvara has been the ability to get metrics about various low-level operations that were not otherwise readily available to us. We track about 20 metrics, including the number of mutation operations, the number of operations with responses, the latency of operations, and the number of concurrent connections. Our current implementation is tied to Ganglia using our own Go client, but we’re working on making that pluggable.

We’ve been using dvara in production for some time, but we know there are mongo failure scenarios it doesn’t handle gracefully yet. We also want a better process around deploying new versions of dvara without causing disruptions to the clients (possibly using grace). We want to help improve the ecosystem around mongo, and would love for you to contribute!

June 23, 2014

Fun with TokuMX

TokuMX is an open source distribution of MongoDB that replaces the default B-tree data structure with a fractal tree index, which can lead to dramatic improvements in data storage size and write speeds. Mark Callaghan made a series of awesome blog posts benchmarking InnoDB, TokuMX, and MongoDB, which demonstrate TokuMX’s remarkable write performance and extraordinarily efficient space utilization. We decided to benchmark TokuMX against several real-world scenarios that we encountered in the Parse environment. We also built a set of tools for capturing and replaying query streams, and we are open sourcing these tools on GitHub so that others may benefit from them as well (more about them in the last section).

In our benchmarks, we tested three aspects of TokuMX: 1. Exporting and importing large collections; 2. Performance for individual write-heavy apps; and 3. Database storage size for large apps.

1. Importing Large Collections

We frequently need to migrate data by exporting and importing collections between replica sets. However, this process can be painful because sometimes the migration rate is ridiculously slow, especially for collections with a lot of small entries and/or complicated indexes. To test importing and exporting, we performed an import/export on two representative large collections with varying object counts.

  • Collection1: 143 GB collection with ~300 million small objects
  • Collection2: 147 GB collection with ~500 thousand large objects

Both collections were exported from our existing MongoDB replica sets; collection1 took 6 days to export and collection2 took 6 hours. We used the mongoimport command to import the collections into MongoDB and TokuMX instances. Benchmark results for importing collection1, with a large number of small objects, show TokuMX importing about 4x faster:

# Collection1: export from MongoDB took 6 days

Database         Import Time
MongoDB           58 hours 37 minutes
TokuMX            14 hours 28 minutes

Benchmark results for importing collection2, with a small number of large objects, show TokuMX and MongoDB roughly at parity:

# Collection2: export from MongoDB took 6 hours

Database         Import Time
MongoDB           48 minutes
TokuMX            53 minutes

2. Handling Heavy Write Loads

One of our sample write-intensive apps issues a heavy volume of “update” requests with large object sizes. Since TokuMX is a write-optimized database, we decided to benchmark this query stream against both MongoDB and TokuMX. We recorded 10 hours of sample traffic, and replayed it against both replica sets. From the benchmark results, TokuMX performs 3x faster for this app with much smaller latencies at all histogram percentiles.

# MongoDB Benchmark Results
- Ops/sec: 1695.81
- Update Latencies:
    P50: 5.96ms
    P70: 6.59ms
    P90: 11.57ms
    P95: 18.40ms
    P99: 44.37ms
    Max: 102.52ms
# TokuMX Benchmark Results
- Ops/sec: 4590.97
- Update Latencies:
    P50: 3.98ms
    P70: 4.49ms
    P90: 6.33ms
    P95: 7.61ms
    P99: 12.04ms
    Max: 16.63ms

3. Efficiently Using Disk Space

Space efficiency is another big selling point for TokuMX. How much can TokuMX save in terms of disk utilization? To figure this out, we exported the data of one of our sharded replica sets (2.4 TB in total) and imported it into TokuMX instances. The result was stunning: TokuMX used only 379 GB of disk space, about 15% of the original size.

Benchmark Tools

Throughout the benchmarks, we focused on:

  • Using “real” query patterns to evaluate the database performance
  • Figuring out the maximal performance of the systems

To achieve those goals, we developed a tool, flashback, that records the real traffic to the database and replays ops with different strategies. You can replay ops either as fast as the database can accept them, or according to their original timestamp intervals. We are open sourcing this tool because we believe it will also be useful for people who are interested in recording their real traffic and replaying it against different production environments, such as for smoke testing version upgrades or different hardware configurations. For more information on using flashback, please refer to this document. We accept pull requests!

Kai Liu
June 20, 2014

Gogolook and LINE whoscall Look to Parse Push for Product Stability

LINE whoscall Advertisement

Gogolook Co., Ltd, a startup based in Taiwan with extensive experience in the IT industry, is using Parse in LINE whoscall, an app that instantly identifies the source of calls and text messages from numbers that are not in your contact list and lets users block specific numbers. In choosing partners to co-develop LINE whoscall, Gogolook chose Parse in addition to LINE, an incredibly popular messaging app on smartphones and PCs with over 450 million users in over 230 countries worldwide.

After a friend from their local startup community recommended Parse, Gogolook, acquired by Naver near the end of 2013, decided to use Parse Push in LINE whoscall. As Peter Su Chen-Hao, Product Director of Gogolook reports, the team appreciated that with Parse Push, they didn’t need to create a notification database, allowing them to focus on product development and function improvement instead.

He continues by explaining,

Parse provides developers and companies with a very stable and convenient backend service. For example, you can use Parse as a login database instead of establishing one on your own. Tools like these make Parse a must-have tool for app developers!

LINE whoscall has been working to protect users from harassment by unexpected phone calls with its powerful database and user-sharing community. Now, the team is working to expand service coverage to users all over the world. LINE whoscall can be downloaded on Google Play.



June 20, 2014

Building DryDock on Parse

At Venmo, we’re always looking for ways to improve our tooling and processes for rapid deployment. At any given time we can have upwards of 4 or 5 independent features in-flight, and we need ways to get prototypes into the hands of our team on a daily basis.

TestFlight and HockeyApp both do a good job of assisting with build distribution, but sending links around with random builds has a few problems, the worst of which is ‘orphaned installs’: team members can get stuck on the last build they were prompted to install, reducing the dogfood testing pool and preventing them from getting new features and fixes.

We decided that a good solution to this would be to create an internal installer app that would give team members instant, self-serve access to the latest internal builds, experiments, and also versions released to the App Store.

Getting Started

Creating the app didn’t seem like a large challenge, but one key question at the outset was: Where do we store the builds, and how do we find new builds from the app?

First we considered writing a lightweight Sinatra application to manage and serve up available builds, but we didn’t like that because it would just be one more thing to manage. We also wanted to make DryDock open-source and allow others to use it – having to set up a web service would have made using DryDock far less appealing.

Parse popped into our minds from a developer day they ran at Facebook’s campus last year, so we decided to take it for a spin and see if it would do what we were looking for. Our requirements for DryDock’s backend were quite simple:

  • Allow storing a list of builds with basic string attributes
  • Visibility can be controlled simply (e.g. for private builds) **future
  • Some interface for developers to manage builds
  • Minimal changes to the iOS code should be needed for a developer to retrieve from their own private builds service
  • As simple as possible to deploy & maintain

** DryDock doesn’t currently support visibility but it’s something that we want to build in the future.

Parse seemed to support everything that we were looking for, so we got started.

pod 'Parse'

We added the Parse pod to our new Podfile (if you don’t know about CocoaPods, check it out) and Parse was ready to go.

Within about 20 minutes, we had a working prototype that would read a list of builds from Parse and display them.

There are two ways to approach modeling in Parse — one is to subclass PFObject (the synchronizable object base-class in Parse) with your custom fields. The other is to simply use a PFObject as a dictionary, and it will create columns on the remote datastore to hold your data.

In our case, we only wanted 6-7 fields on an object and saw no need to subclass, so we just used PFObject as-is.

Populating our tableview looks like this…

PFQuery *query = [PFQuery queryWithClassName:VDDModelApp];
[query whereKeyExists:VDDAppKeyName];
[query findObjectsInBackgroundWithBlock:^(NSArray *objects, NSError *error) {
    self.apps = objects;
    [self.tableView reloadData];
}];

Okay… it’s actually a little more complicated than that, but barely! We chose to break all our model keys out into constants to ensure consistency between calls and access.

Break out all model keys

Now our apps array property contains all the remote apps as PFObjects. You can then simply configure a cell by extracting the properties from the app for the current cell.

- (void)configureForApp:(PFObject *)app {
    self.app = app;
    self.appNameLabel.text = app[VDDAppKeyName];
    self.appDescriptionLabel.text = app[VDDAppKeyDescription];
}

Now we have a fully populated table view. It doesn’t look very pretty though, so let’s add the app icon.

Adding app icon to fully populated table view

Icons are stored as PFFile attachments to the object, so we have to download the file and then set an image view’s image to the response.

PFFile *image = app[VDDAppKeyImage];
[image getDataInBackgroundWithBlock:^(NSData *data, NSError *error) {
    if (data && !error) {
        self.appIconView.image = [UIImage imageWithData:data];
    }
}];

Doesn’t that look better…

Internal builds screen shot after adding the app icon

Using DryDock for Your Own Builds

Our goal with DryDock was not only to be able to use it at Venmo, but also to be able to share it and have other companies easily use DryDock for their own experimentation and build distribution.

Having to export and import all of the column names in order to create the columns in the Parse data browser seemed like a pain. So, we decided to auto-create sample data in the app: if there are no apps the first time you run it, it will create a sample app, thereby creating the columns.

- (void)createDemoObject {
    NSData *imageData = UIImageJPEGRepresentation([UIImage imageNamed:@"VenmoIcon"], 1.0);
    PFFile *file = [PFFile fileWithName:@"image.png" data:imageData];
    [file saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
        if (succeeded) {
            PFObject *testObject = [PFObject objectWithClassName:VDDModelApp];
            testObject[VDDAppKeyName] = @"Venmo";
            testObject[VDDAppKeyDescription] = @"Venmo Dogfood Builds";
            testObject[VDDAppKeyType] = @(1);
            testObject[VDDAppKeyVersionNumber] = @"1";
            testObject[VDDAppKeySharable] = @(YES);
            testObject[VDDAppKeyShareUrl] = @"http://someshareurl/";
            testObject[VDDAppKeyInstallUrl] = @"itms-services://someinstallurl/";
            testObject[VDDAppKeyImage] = file;
            [testObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
                // No-op; the columns now exist in the data browser.
            }];
        }
    }];
}


One of the key problems that we wanted to solve with DryDock was stopping users of experimental builds from getting orphaned on the last experimental build they installed. To do this, we use DryDock along with VENVersionTracker, our version tracking library (which we’re also migrating to Parse right now!). You can read more about how we use them together to manage builds in our blog post.

We hope that DryDock is useful and helps you to increase the rate at which you experiment internally! Feedback and pull-requests very welcome!

For more on this post and its original form by Chris Maddern, head over to the Venmo blog here.

June 19, 2014

HarperCollins Pages Parse to Tackle Mobile

The Hobbit App on iPhone 5

HarperCollins is one of the UK’s top four publishers, offering content across the spectrum, from enduring classics to cutting-edge contemporary fiction and digital dictionaries to online curricula. Recognizing the trend towards mobile, the company sees “mobile apps as a means of forging closer relationships with readers— giving them delightful experiences around content, communicating regularly with them, and offering them personalized content,” according to Sam Hancock, Product Manager, Group, William Collins and Fourth Estate.

The company has turned to Parse to help them release several apps, most notably The Hobbit: Official Visual Companion, a guide to Middle-earth with a beautifully rendered 3D map and variety of encyclopedic articles, as well as the release of Brian Cox’s Wonders of the Universe and Wonders of Life.

HarperCollins has a long history of producing engaging apps for consumers, but Parse has helped enable more direct engagement with consumers. Now, HarperCollins is able to better understand how content is being consumed, and in turn reward consumers with more relevant material. Parse Core is used for storing user data and tracking app opens, in addition to enabling the team to find out more about their users in order to market relevant content to them across the apps. Concurrently, Parse Push has allowed the sending of more sophisticated and segmented push notifications.

According to Sam,

Having a solution like this allows us to concentrate more on creating a great user experience while also helping us add scale and reduce cost—all massive positives for our mobile strategy. It’s also giving us the core building blocks of building out a single consumer view—allowing us to give readers more of what they want more of the time, and making sure our communications are tailored to reflect that. We are actively looking to expand our use of Parse across all platforms to help us achieve that objective.

June 13, 2014

Building Apps with Parse and Swift

On Monday at WWDC 2014, Apple released a new programming language called Swift. As always, when we see that developers are excited about a new language or platform, we work quickly to make sure Parse can support that momentum. We’re excited about Swift because it brings a whole host of new language features to iOS and OS X development. Swift’s type inference will save developers a ton of typing, and generics will reduce runtime errors by giving us strongly-typed collections. To learn more about Swift, check out Apple’s reference book.

One of the best things about Swift for existing iOS developers is that it’s fully compatible with existing Objective-C libraries, including system libraries like Cocoa and third-party libraries like Parse. To start using Parse in your Swift projects:

* Add a new file to your project, an Objective-C .m file.
* When prompted about creating a bridge header file, approve the request.
* Remove the unused .m file you added.
* Add your Objective-C import statements to the created bridge header .h file:

#import <Parse/Parse.h>
// or #import <ParseOSX/ParseOSX.h>

This StackOverflow answer gives a more thorough explanation.

Once you’ve added Parse to your bridge header, you can start using the Parse framework in your Swift project. To help you get started, we’ve added Swift example code to our entire iOS/OSX documentation. For example, this is all you need to do to save an object to Parse:

var gameScore = PFObject(className: "GameScore")
gameScore.setObject(1337, forKey: "score")
gameScore.setObject("Sean Plott", forKey: "playerName")
gameScore.saveInBackgroundWithBlock {
    (success: Bool!, error: NSError!) -> Void in
    if success {
        NSLog("Object created with id: \(gameScore.objectId)")
    } else {
        NSLog("%@", error)
    }
}
To then query for the object by its id:

var query = PFQuery(className: "GameScore")
query.getObjectInBackgroundWithId(gameScore.objectId) {
    (scoreAgain: PFObject!, error: NSError!) -> Void in
    if error == nil {
        NSLog("%@", scoreAgain.objectForKey("playerName") as NSString)
    } else {
        NSLog("%@", error)
    }
}
That’s everything you need to know to start using Swift with Parse. For more examples, don’t forget to visit our iOS/OSX documentation. We can’t wait to see what you build with it!

Fosco Marotto
June 6, 2014

Orbitz App Uses Parse Push, Analytics, and Core to Keep Users Updated

Orbitz app on iPhone 5

The award-winning Orbitz Flights, Hotels, Cars app for iOS gives iPhone and iPad users a slick, easy way to book and manage travel. With a fast shopping experience, streamlined booking process, contextual home screen, and details of booked trips, the Orbitz app puts the information important to a user’s travel at their fingertips when they need it. Parse Push, Analytics, and Core help the app keep customers on top of their travel plans with up-to-the-minute notifications and streamlined search experiences.

Joining the mobile trend in 2006 with a mobile website that allowed customers to look up itineraries and check flight status while traveling, Orbitz launched the first version of the Orbitz iPhone app in 2010 once they realized that online booking via phone was becoming more and more prevalent. After launching on iPhone and Android in 2010, they transitioned their full-service iPhone app to be universal so that iPad users could book flights, hotels, and rental cars via a single app and added a Kindle Fire edition, as well.

Matt Sellars, a Principal Engineer on the iOS development team at Orbitz, is focused on creating the best travel app experience possible for the company’s customers. With over 5 years’ experience at Orbitz, Matt came across Parse while chatting with other mobile developers about available platforms. “Simple APIs across many platforms with good documentation” won him over, and now the app uses Parse to store a user’s search history and to push relevant travel alerts.

The app uses Parse Push to notify app users of changes in their trips, such as flight delays, gate changes, or cancellations, on the day that they travel. Matt and his team then use Parse Analytics to monitor usage of the push notifications they send.

The app also uses Parse Core to keep users’ recent Orbitz searches available and to remove old data. It seamlessly synchronizes search information between a user’s devices so that he or she can easily pick up a previous search on a different device without having to re-enter the search criteria. The app also uses Cloud Code to remove expired data, moving processing from the user’s device to the server and lowering the user’s data usage by removing irrelevant data before the app fetches it.

According to Matt,

Parse was a huge help for rapid feature development. It took one developer, myself, a couple days of reading and playing with the APIs to build and demo a feature. This normally would have required a few people to create the required backend to support building the prototype at a much higher cost. This enabled us to learn faster and overall lower the cost to make an idea tangible for our stakeholders. This ability to rapidly prototype ideas allows our developers and designers to move quickly on ideas as if working in a startup environment together.

After all of the team’s investments in mobile over the last few years – both in apps and in optimizing their website for smartphone and tablet users – they now see nearly 1 in 3 hotel bookings being made via a mobile device. Positive reviews in the app stores make it clear that customers really enjoy using the apps, which have also been acknowledged by Apple as an Editors’ Choice and inducted into the App Store Hall of Fame in 2012, and by Google, who recently named Orbitz a “Top Developer” on Google Play for their Android app.

The app is available for free download on iOS, Android, and Kindle Fire and is deeply integrated with the Orbitz Rewards loyalty program. App users receive higher percentages of Orbucks, a loyalty currency that can be used on bookings, than they would by booking on the website.  

Courtney Witmer
May 23, 2014

Move Uses Parse Push to Power Notifications in Realtor.com App

Realtor.com App on iPhone 5

The Realtor.com apps by Move, Inc. provide a tailored search experience for real estate, helping people find the right home with features like photo-centric search, search by school, and map sketch-a-search. It is the only app to get real estate listings sourced directly from over 800 MLSs, with 90% of listings refreshed every 15 minutes, and users can also get updates on price reductions via push notification.

Real estate is all about data, and Move gets real-time data feeds from 98% of the multiple listing services (MLSs) in the United States. Managing that data is a big challenge, and it’s the duty of Alan Lewis, Principal Platform Architect for Move, to guide the development of the dozens of web services that make that data usable by the apps and sites Move supports.

When Alan, a 15-year veteran of the industry, started looking to improve push notification capabilities to support future growth for the company’s apps, he attended Parse Developer Day in San Francisco to learn about Parse Push. Although more research lay ahead, he had an inkling Parse was the right choice when, within a few hours and while still at the conference, he had a prototype using the service up and running.

The Move team implemented Parse Push to power notifications in their iOS apps. A key factor in this decision was the extra work inherent in supporting push on iOS, which requires that a developer build out additional server-side infrastructure or use a 3rd-party service. According to Alan, the keys to their decision to use Parse to fulfill this need were “scale, cost, and developer-friendliness.”

He continues on to explain that,

Parse is a great fit for apps that don’t have a server-side infrastructure, but for a large app like realtor.com where there are multiple existing services that supply the app’s data, what was important for me was to find a platform that could integrate seamlessly with ours. As an architect, I love what I see with Parse, and we can integrate with it via their web services without making compromises within our platform.

Developers should be informed about their options. Best-of-breed services like Parse can save a lot of time and money, and whether you’re a solo developer or working for a big company, there are probably far better uses of your time than building infrastructure that isn’t a key differentiator for your business. For us, push notifications were a must-have, but we weren’t going to gain any advantage by building out the feature from scratch, which is why we chose to go the 3rd-party route. I evaluated all of them, and Parse is the best.

You can download the realtor.com app and the new Rentals app here.

Courtney Witmer
May 16, 2014

Dependency Injection with Go

Dependency Injection (DI) is the pattern whereby the dependencies of a component are provided to it and made part of its state. The pattern is used to isolate components from the implementations of their dependencies. Go, with its interfaces, eliminates many of the reasons for a classic DI system (as in Java, etc.). Our inject package provides very little of what you’ll find in a system like Dagger or Guice, focusing instead on eliminating the need to manually allocate instances and wire up the object graph. This is both because many of those aspects are unnecessary, and because we wanted to make injection simpler to understand in our Go codebase.

Our path to building inject went through a few stages:



It started with a unanimous, noble goal. We had global connection objects for services like Mongo, Memcache, and some others. Roughly, our code looked like this:

var MongoService mongo.Service

func InitMongoService(url string) {
  MongoService = ...
}

func GetApp(id uint64) *App {
  a := new(App)
  return a
}

Typically the main() function would call the various init functions like InitMongoService with configuration based on flags/configuration files. At this point, functions like GetApp could use the service/connection. Of course we sometimes ran into cases where we forgot to initialize the global and so got into a nil pointer panic.

Though in production the globals were shared resources, having them had (at least) two downsides. First, code was harder to reason about because the dependencies of a component were unclear. Second, testing these components was more difficult, and running tests in parallel was near impossible. While our tests are relatively quick, we wanted to ensure they stay that way, and being able to run them in parallel was an important step in that direction. With global connections, tests that hit the same data in a backend service could not be run in parallel.


Eliminating Globals

To eliminate globals, we started with a common pattern. Our components now explicitly depended on say, a Mongo service, or a Memcache service. Roughly, our naive example above now looked something like this:

type AppLoader struct {
  MongoService mongo.Service
}

func (l *AppLoader) Get(id uint64) *App {
  a := new(App)
  return a
}

Many functions referencing globals now became methods on a struct containing its dependencies.
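One payoff of explicit dependencies shows up in tests: a test can hand AppLoader its own fake implementation of the service interface, with no global state involved. A minimal sketch, with illustrative names and a string return type standing in for *App:

```go
package main

import "fmt"

// Service is a stand-in for the mongo.Service interface.
type Service interface {
	Find(id uint64) string
}

// AppLoader's dependency is now explicit in its state.
type AppLoader struct {
	MongoService Service
}

func (l *AppLoader) Get(id uint64) string {
	return l.MongoService.Find(id)
}

// fakeService is a test double; each test can build its own
// isolated AppLoader around one of these and run in parallel.
type fakeService struct{}

func (fakeService) Find(id uint64) string {
	return fmt.Sprintf("fake app %d", id)
}

func main() {
	l := &AppLoader{MongoService: fakeService{}}
	fmt.Println(l.Get(7)) // fake app 7
}
```

Because Go interfaces are satisfied implicitly, the fake needs no registration anywhere; this is the part of classic DI that the language gives you for free.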


New Problems

The globals and functions went away, and instead we got a bunch of new structs that were created in main() and passed around. This was great, and it solved the problems we described. But… we had a very verbose looking main() now. It started looking like this:

func main() {
  mongoURL := flag.String(...)
  mongoService := mongo.NewService(mongoURL)
  cacheService := cache.NewService(...)
  appLoader := &AppLoader{
    MongoService: mongoService,
  }
  handlerOne := &HandlerOne{
    AppLoader: appLoader,
  }
  handlerTwo := &HandlerTwo{
    AppLoader:    appLoader,
    CacheService: cacheService,
  }
  rootHandler := &RootHandler{
    HandlerOne: handlerOne,
    HandlerTwo: handlerTwo,
  }
}

As we kept going down this path, main() became dominated by a large number of struct literals doing two mundane things: allocating memory and wiring up the object graph. Since several of our binaries share libraries, we often wrote this boring code more than once. Nil pointer panics kept recurring: we would forget to pass the CacheService to HandlerTwo, for example, and get a runtime panic. We tried constructor functions, but they quickly got out of hand too; they were verbose themselves and still required a whole lot of manual nil checking. The team grew annoyed at setting up the graph by hand and making sure it was correct. Because tests obviously don't share code with main(), they set up their own object graphs, so wiring problems were often not caught by tests, and the tests themselves grew verbose. In short, we had traded one set of problems for another.
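The constructor-function approach we tried looked roughly like this sketch (names illustrative): every dependency gets its own nil check, and every type needs its own constructor, which is exactly the boilerplate that wore us down:

```go
package main

import (
	"errors"
	"fmt"
)

// Stand-in dependency types.
type MongoService struct{}
type CacheService struct{}

type AppLoader struct {
	MongoService *MongoService
}

type HandlerTwo struct {
	AppLoader    *AppLoader
	CacheService *CacheService
}

// NewHandlerTwo moves the nil panic from request time to
// construction time, at the cost of repetitive checking code
// in every constructor across the codebase.
func NewHandlerTwo(l *AppLoader, c *CacheService) (*HandlerTwo, error) {
	if l == nil {
		return nil, errors.New("NewHandlerTwo: nil AppLoader")
	}
	if c == nil {
		return nil, errors.New("NewHandlerTwo: nil CacheService")
	}
	return &HandlerTwo{AppLoader: l, CacheService: c}, nil
}

func main() {
	_, err := NewHandlerTwo(nil, &CacheService{})
	fmt.Println(err) // the missing dependency surfaces at construction
}
```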


Identifying the Mundane

Several of us had experience with Dependency Injection systems, and none of us would describe it as an experience of pure joy. So, when we first started discussing solving the new problem in terms of a DI system, there was a fair amount of push back.

We decided that, while we needed something along those lines, we needed to ensure that we avoid known complexities and made some ground rules:

  1. No code generation. Our development build step was just go install. We did not want to lose that and introduce additional steps. Related to this rule was no file scanning. We didn’t want a system that was O(number of files) and wanted to guard against an increase in build times.
  2. No subgraphs. The notion of “subgraphs” was discussed to allow for injection to happen on a per-request basis. In short, a subgraph would be necessary to cleanly separate out objects with a “global” lifetime and objects with a “per-request” lifetime, and ensure we wouldn’t mix the per-request objects across requests. We decided to just allow injection for “global” lifetime objects because that was our immediate problem.
  3. Avoid code execution. DI by nature makes code difficult to follow. We wanted to avoid custom code execution/hooks to make it easier to reason about.

Based on those rules, our goals became somewhat clear:

  1. Inject should allocate objects.
  2. Inject should wire up the object graph.
  3. Inject should run only once on application startup.

We’ve discussed supporting constructor functions, but have avoided adding support for them so far.



The inject library is the result of this work and our solution. It uses struct tags to enable injection, allocates memory for concrete types, and supports injection for interface types as long as they’re unambiguous. It also has some less often used features like named injection. Roughly, our naive example above now looks something like this:

type AppLoader struct {
  MongoService mongo.Service `inject:""`
}

func (l *AppLoader) Get(id uint64) *App {
  a := new(App)
  return a
}

Nothing has changed here besides the addition of the inject tag on the MongoService field. There are a few different ways to use that tag, but this is the most common: it simply indicates that a shared mongo.Service instance is expected. Similarly, imagine that our HandlerOne, HandlerTwo, and RootHandler have inject tags on their fields.
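For reference, the other tag forms look roughly like the sketch below. The struct and field names are made up for illustration, and the library's own documentation is the authoritative source for the exact semantics:

```go
package main

import (
	"fmt"
	"reflect"
)

// Illustrative stand-in types; only the struct tags matter here.
type AppLoader struct{}
type StatsClient struct{}
type Logger interface{}

// The tag forms, as we understand them from the library's docs:
type ServerConfig struct {
	// "" asks for the shared instance of the field's type.
	Loader *AppLoader `inject:""`

	// "private" asks for a fresh instance used only by this field,
	// instead of the shared one.
	Stats *StatsClient `inject:"private"`

	// Any other value asks for the instance that was provided to
	// the graph under that name (named injection).
	AccessLog Logger `inject:"access logger"`
}

// tagOf is a small helper for inspecting the tags via reflection.
func tagOf(field string) string {
	f, _ := reflect.TypeOf(ServerConfig{}).FieldByName(field)
	return f.Tag.Get("inject")
}

func main() {
	fmt.Println(tagOf("Stats")) // private
}
```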

The fun part is that our main() now looks like this:

func main() {
  mongoURL := flag.String(...)
  mongoService := mongo.NewService(mongoURL)
  cacheService := cache.NewService(...)
  var app RootHandler
  err := inject.Populate(mongoService, cacheService, &app)
  if err != nil {
    panic(err)
  }
}
Much shorter! Inject roughly goes through a process like this:

  1. Looks at each provided instance, eventually comes across the app instance of the RootHandler type.
  2. Looks at the fields of RootHandler, and sees *HandlerOne with the inject tag. It doesn’t find an existing instance for *HandlerOne, so it creates one, and assigns it to the field.
  3. Goes through a similar process for the HandlerOne instance it just created. Finds the AppLoader field, similarly creates it.
  4. For the AppLoader instance, which requires the mongo.Service instance, it finds that we seeded it with an instance when we called Populate. It assigns it here.
  5. When it goes through the same process for HandlerTwo, it uses the AppLoader instance it created, so the two handlers share the instance.

Inject allocated the objects and wired up the graph for us. After that call to Populate, inject is no longer doing anything, and the rest of the application behaves the same as it did before.
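The walkthrough above can be sketched as a toy, standard-library-only version of the same idea. To be clear, this is not inject's actual implementation (the real library also handles interface fields, named and private instances, and error reporting); it only shows the core mechanics of seeding instances by type, then allocating and sharing tagged fields:

```go
package main

import (
	"fmt"
	"reflect"
)

type MongoService struct{ URL string }

type AppLoader struct {
	Mongo *MongoService `inject:""`
}

type HandlerOne struct {
	Loader *AppLoader `inject:""`
}

type HandlerTwo struct {
	Loader *AppLoader `inject:""`
}

type RootHandler struct {
	One *HandlerOne `inject:""`
	Two *HandlerTwo `inject:""`
}

// populate is a toy version of the idea: remember seeded instances
// by type, then allocate and wire every pointer field carrying an
// inject tag, reusing instances across the graph.
func populate(values ...interface{}) {
	seen := map[reflect.Type]reflect.Value{}
	for _, v := range values {
		seen[reflect.TypeOf(v)] = reflect.ValueOf(v)
	}
	var fill func(ptr reflect.Value)
	fill = func(ptr reflect.Value) {
		s := ptr.Elem() // the struct behind the pointer
		for i := 0; i < s.NumField(); i++ {
			if _, ok := s.Type().Field(i).Tag.Lookup("inject"); !ok {
				continue
			}
			f := s.Field(i)
			if existing, ok := seen[f.Type()]; ok {
				f.Set(existing) // reuse the shared instance
				continue
			}
			n := reflect.New(f.Type().Elem()) // allocate a new *T
			seen[f.Type()] = n
			f.Set(n)
			fill(n) // wire the new instance's own dependencies
		}
	}
	for _, v := range values {
		fill(reflect.ValueOf(v))
	}
}

func main() {
	mongo := &MongoService{URL: "mongodb://localhost"}
	var app RootHandler
	populate(mongo, &app)
	fmt.Println(app.One.Loader == app.Two.Loader) // true: one shared AppLoader
	fmt.Println(app.One.Loader.Mongo.URL)         // mongodb://localhost
}
```

Even in this toy form you can see why the process runs once at startup and then gets out of the way: after the graph is wired, it is ordinary structs all the way down.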


The Win

We got our more manageable main() back. We now manually create instances for only two cases: if the instance needs configuration from main, or if it is required for an interface type. Even then, we typically create partial instances and let inject complete them for us. Test code also became considerably smaller, and providing test implementations no longer requires knowing the object graph. This made tests more resilient to changes far away. Refactoring also became easier as pulling out logic did not require manually tweaking the object graphs being created in various main() functions we have.

Overall we’re quite happy with the results and how our codebase has evolved since the introduction of inject.



You can find the source for the library on GitHub:


We’ve also documented it, though playing with it is the best way to learn:


We love to get contributions, too! Just make sure the tests pass:


May 13, 2014


