My cofounders and I started Parse in 2011 to solve our own problems. We were frustrated with all the repetitive work needed to build great cross-platform mobile apps. Our first customers were our friends who also built apps, and they mostly lived near us in San Francisco. Over time, we started attracting customers who weren’t just our friends — first in Silicon Valley and soon throughout North America. Today, over 500,000 apps have been created on Parse, and hundreds of thousands of our developers hail from outside the United States. Our team is incredibly honored that so many developers across the globe have trusted Parse with their businesses and apps. Here are some cool stats about our global presence:
Active apps in Asia grew nearly 90% in the first half of 2014.
Apps in APAC that use all three Parse products—Parse Core, Parse Push, and Parse Analytics—grew by more than 90% in the first half of 2014.
6 of Parse’s 15 largest countries (by number of active apps) are in the Asia-Pacific region: India, Japan, Australia, China, Taiwan, and South Korea.
We are extremely excited about how much our community has grown since 2011. As that growth continues to accelerate, today I’m pleased to share the efforts we’ve made to make Parse even friendlier to our international community:
Creating a truly global team — When we joined Facebook, our entire team moved from San Francisco to Menlo Park in California. Since then, we’ve expanded our horizons quite a bit. We’re now a global team with dedicated folks in London, Dublin, Paris, and Singapore. And we’re excited to continue expanding wherever mobile developers need help!
We hope you’re as excited as we are about these announcements and we appreciate you being part of our global community.
Parse is excited to be a part of the mobile development Deep Dive course from Brooklyn’s HappyFunCorp this March. Find out how Parse can be a great learning tool in this guest post from HappyFunCorp co-founder Ben Schippers:
At HappyFunCorp, I often get asked the question, “Should I use Parse to build my startup?”
This question generally leads to a much larger discussion about the best ways to build startups and to manage the inevitability of the unknown post-launch. In today’s development climate, bandwidth is cheap, and developers can find massive reservoirs of free information on best practices. Thus, building for mobile and web has become much more accessible—which means user acquisition and distribution have become the new game.
For entrepreneurs starting mobile or web businesses today, it’s critical to build the smallest, most focused subset of features that proves to the market the idea is viable. It is equally important to push forward, pivot, or move on to the next idea when necessary. Data is king in making these decisions, so shipping the product as quickly as possible should be the goal when getting started. It’s crucial to use tools that enable you to maneuver quickly, especially during the infancy of the business.
That’s why, when I’m asked “Should I use Parse to build my startup?” I say yes. Parse takes much of the complex web services layer out of the picture, allowing you to focus on the actual mobile experience. User authentication, push notifications, and storage, just to name a few, are taken care of for you right out of the box. Just as many startups use Heroku to manage dev-ops, Parse takes care of the communications link between your phone and the server. It enables you to focus on the client experience and, perhaps just as importantly, distribution, while Parse manages much of the heavy web services layer for you. This is perfect for our students at HappyFunCorp, many of whom are just beginning to learn app development; with Parse, they’re up and running with a new app in just minutes.
In addition to Parse’s development offerings, entrepreneurs can take advantage of FbStart, a new program tailored directly to startups and larger companies looking to build amazing mobile application experiences. The beauty of this program is that Facebook and Parse help solve the distribution problem for you by offering application-level deep linking, custom analytics, and a whole suite of tools dedicated to user acquisition and distribution. So you can build and ship fast, then collect data and iterate, quickly and easily.
Have an idea and want to talk shop? Drop by our Brooklyn offices or sign up for our Parse mobile class.
HappyFunCorp is an engineering firm dedicated to building the very best mobile and web experiences.
We recently launched Push Experiments, which lets you conduct A/B tests on push notification campaigns to identify the most engaging message variant. With Push Experiments, we wanted to make it easy to run successful A/B tests. In this post, we’ll discuss some of the statistical techniques we’re using behind the scenes. And, catch a screencast showing you the ins and outs of the new tool below.
Parse Push Experiments in Action
Ready to try Parse Push Experiments? Watch the screencast here. Or, read on for A/B Testing best practices.
What makes an A/B test successful?
We can say that an A/B test succeeds whenever we get a precise, correct answer to the question that originally motivated us to run the test. In other words, a good platform for A/B testing should try to prevent two kinds of failure:
1. We should rarely get a result that leaves the answer to our question in doubt.
2. We should rarely get an answer that seems precise, but is actually incorrect.
Parse Push Experiments uses three strategies to prevent these two kinds of failure:
Encourage developers to ask precise questions that can be answered unambiguously.
Prevent developers from reaching wrong conclusions by always reporting results along with a margin of error.
Ensure that most A/B tests will give a precise answer by suggesting the minimum number of users that must be included in an A/B test in order to reasonably expect accurate results.
Step 1: Asking Precise Questions
Here’s one of the most important things you can do while running A/B tests: Commit to the metric you’re testing before gathering any data. Instead of asking questions like, “Is A better than B?”, the Push Experiments platform encourages you to ask a much more precise question: “Does A have a higher open rate than B?”
The distinction between these two questions may seem trivial, but asking the more precise question prevents a common pitfall that can occur in A/B testing. If you allow yourself to choose metrics post hoc, it’s almost always possible to find a metric that makes A look better than B. By committing up front to using open rates as the definitive metric of success, you can rest assured that Push Experiments will produce precise answers.
Step 2: Acknowledging Margins of Error
Once you’ve chosen the question you’d like to answer, you can start gathering data. But the data you get might not be entirely representative of the range of results you’d get if you repeated the same test multiple times. For example, you might find that A seems to be better than B in 25% of your tests, but that B seems to be better than A in the other 75%.
As such, when reporting the difference between the A and B groups (we’ll call this difference the lift), it’s important to emphasize the potential for variability in future results by supplementing the raw result with a margin of error. If your A/B test shows a lift of +1% with a margin of error of ±2% (that is, the true lift could plausibly be anywhere from -1% to +3%), you should report that your A/B test’s results were inconclusive. If you simply reported a +1% change, your results would be misleading and might set up unrealistic expectations about the success of your push strategy in the future. By reporting a range of values that should contain the true answer to your question (this range is what a statistician would call a 95% confidence interval), you can help to ensure that anyone reading a report about your A/B test will not reach premature conclusions.
At Parse, we determine margins of error for open rate data using a calculation called the Agresti-Caffo method. When you’re working with push notification open rates, the Agresti-Caffo method produces much more reliable margins of error than naive methods like normal approximations.
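As a sketch of the idea (not Parse’s exact implementation), the Agresti-Caffo interval for the difference between two open rates simply adds one success and one failure to each group before applying the usual Wald formula:

```python
import math

def agresti_caffo_interval(opens_a, sent_a, opens_b, sent_b, z=1.96):
    """95% confidence interval for the lift in open rate (A minus B),
    using the Agresti-Caffo adjustment: add one success and one
    failure to each group, then apply the standard Wald interval."""
    # Adjusted open rates after adding one success and one failure.
    p_a = (opens_a + 1) / (sent_a + 2)
    p_b = (opens_b + 1) / (sent_b + 2)
    # Standard error of the difference, computed on the adjusted counts.
    se = math.sqrt(p_a * (1 - p_a) / (sent_a + 2) +
                   p_b * (1 - p_b) / (sent_b + 2))
    lift = p_a - p_b
    return lift - z * se, lift + z * se

# Example: 120 opens out of 1,000 sends vs. 100 opens out of 1,000.
low, high = agresti_caffo_interval(120, 1000, 100, 1000)
# If the interval contains 0, the test is inconclusive.
```

Here the raw lift is +2%, but the interval spans zero, so a careful report would call this test inconclusive rather than declare A the winner.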
In addition to automatically calculating margins of error using the Agresti-Caffo method, the Push Experiments platform only reports results after it’s become clear that either A offers a lift over B or that B offers a lift over A — helping to further protect you from reaching premature conclusions. Until there’s enough data to determine a clear winner, the Push Experiments dashboard will report that there’s still uncertainty on whether A or B is more successful.
Step 3: Choosing the Right Sample Size
Given that the Push Experiments platform will always report results with a margin of error, you’ll want to try to make that margin smaller in order to draw definite conclusions from more of your tests. For example, if you think that your A group will show a lift of 1% over your B group, you’ll want to make sure you gather enough data to ensure your margin of error will be smaller than 1%.
The process of picking a sample size that ensures that your margin of error will be small enough to justify a definite conclusion is called power analysis. The Push Experiments platform automatically performs a power analysis for your A/B test based on the historical open rates for your previous push notifications. To simplify the process, we provide suggested sample sizes based on the assumption that you’ll be trying to measure lifts at least as large as 1% with your A/B tests.
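A minimal sketch of that power analysis, using the standard normal-approximation formula for comparing two proportions (the constants 1.96 and 0.84 correspond to a two-sided 5% significance level and 80% power; Parse’s actual calculation may differ):

```python
import math

def suggested_sample_size(baseline_rate, min_lift=0.01,
                          z_alpha=1.96, z_power=0.84):
    """Approximate number of users needed *per group* so that a
    two-sided test at the 5% level has roughly 80% power to detect
    a lift of `min_lift` over `baseline_rate`."""
    p1 = baseline_rate
    p2 = baseline_rate + min_lift
    # Combined variance of the two (binomial) open-rate estimates.
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_power) ** 2) * variance / min_lift ** 2
    return math.ceil(n)

# Example: with a historical open rate of 10%, detecting a 1% lift
# requires roughly 15,000 users in each of the A and B groups.
n = suggested_sample_size(0.10)
```

Note how quickly the requirement grows as the lift you want to detect shrinks: halving `min_lift` roughly quadruples the sample size.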
Only running A/B tests with carefully chosen sample sizes makes it much more likely that your A/B tests will succeed. If you select a sample size that’s much smaller than the size we suggest, you should expect that many of your A/B tests will lead to inconclusive results.
Putting It All Together
We believe the combination of precise questions, clean statistics, and a careful choice of sample size is essential for running a successful A/B test. You can achieve all three with Parse Push Experiments, and we hope this look into the statistical methods behind our platform will help you do it.
A long time ago, developers had to painstakingly create multiple versions of their Parse apps in order to manage development and production environments; but no more! After long hours of research, we’ve cracked the app genome and we’re happy to announce the brand new ability to clone apps.
From the newly redesigned Parse dashboard, you’ll now be able to quickly clone an app that you’ve made or that has been shared with you by a colleague. This will clone the schema, cloud code, security and white-labeling settings, and config parameters: everything about the app except the data and the background job schedules. (We know cloning data is a highly requested capability, and we’re exploring ways to do it in a limited fashion.)
If you want to customize what goes into your new app, you can head over to the settings page and select exactly what you need in your clone — a clone to order, if you will!
We’re excited to see how you’ll use this new feature and how we can improve similar workflows in the future. After all, large apps require a lot of effort from many developers, and we want to make this as easy and simple as possible with Parse.
If there’s a feature that would make development on Parse easier for your team, join the discussion here!
It all starts with a simple checkbox
Let’s consider the scripting that might go into implementing a checkbox-style switch. When we click a switch that’s “off,” we change its state to “on” and get some visual feedback. We’d probably store its state in a boolean variable somewhere, add a click handler that updates the boolean, and then add or remove CSS classes based upon its current value. From a logic standpoint, it’s a duplication of something the browser can already do with <input type="checkbox">, but we put up with it for years because it let us have pretty switches.
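A hand-rolled version of that logic might look like the following sketch (the function and class names are illustrative, not Parse code):

```javascript
// A hand-rolled toggle switch: track a boolean, flip it on click,
// and sync a CSS class to it. Every line duplicates behavior the
// browser already provides via <input type="checkbox">.
function createSwitch(el) {
  let isOn = false; // state the checkbox would otherwise track for us
  el.addEventListener('click', () => {
    isOn = !isOn;
    // Add or remove the "on" styling to match the current state.
    el.classList.toggle('switch--on', isOn);
  });
  return { get isOn() { return isOn; } };
}
```

Keeping `isOn`, the class list, and the click handler in agreement is exactly the bookkeeping a checkbox does natively.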
Then the technique of using checked state to style checkboxes hit the mainstream. Using a combination of new CSS selectors and older browser behavior, it was possible to have visual elements that were tied to invisible inputs without writing a single line of JS. We use this same technique for many of our basic toggle switches today, but we’ve also taken it to new extremes with many of our UI components.
When building components, examine their core behavior
When we build new UI components for Parse.com, we follow a simple strategy: use the most similar browser element as a starting point. This goes beyond the obvious cases — such as using an anchor tag for something the user clicks — and looks at the inner behavior of a complex element. Remember the date picker we discussed earlier? The core interaction relies on being able to select a single day out of the month and visually represent that selection. Your browser already knows how to handle this: it’s behaviorally identical to a set of radio buttons! In fact, you’d be amazed by how many complex elements boil down to the radio button’s select-one-of-many behavior. They’re at the core of our own date picker, our fully styled dropdown menus, and a number of other switches and toggles. Knowing that the browser will ensure a single element is selected at any time allows us to eliminate a concern from our client logic.
Simultaneously, we avoid having to synchronize our data with our visualization, because a single interaction within the browser updates both. Along those same lines, at Parse we refrain from storing any data within a component that could be reasonably derived from its inner elements. For our specialized numeric inputs, we get the value by parsing the contents of an inner text input on demand, rather than returning an in-memory value that we update with an onChange handler. This guarantees that our JS component and its visual appearance never get out of sync.
Enough talk, let’s see a demo
When I designed the tree, I broke it down into its core behaviors: it has folders that open and close to display contents, and an overall set of files that can be individually selected. If you’re following along, it should make sense that the folders are backed by checkboxes while the files themselves are a single set of radio buttons.
<!-- Files are a set of radio buttons with the same name field -->
<input type="radio" name="hosted_files" id="f_myfile_js" value="MyFile.js">
<input type="radio" name="hosted_files" id="f_another_js" value="Another.js">
<!-- Folders are checkboxes that toggle the visibility of divs -->
<input type="checkbox" id="f_myfolder">
<label for="f_myfolder">My Folder</label>
In CSS, we simply provide default styles for labels, and additional styles for when they follow a checked input.
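As a sketch (the `.folder-contents` wrapper is assumed markup, not shown above), those folder styles boil down to sibling selectors keyed off `:checked`:

```css
/* Hide the raw checkbox; its label is the visible folder row. */
#f_myfolder { display: none; }

/* Closed by default: contents that follow the checkbox are hidden... */
#f_myfolder ~ .folder-contents { display: none; }

/* ...and revealed whenever the preceding checkbox is checked. */
#f_myfolder:checked ~ .folder-contents { display: block; }

/* The open state can also restyle the label itself. */
#f_myfolder:checked + label { font-weight: bold; }
```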
With these styles, I don’t need to worry about having numerous click handlers to open or close folders, and a single change handler on the radio buttons can tell me when a new file is selected.
You can examine a slightly modified version of our file tree at this CodePen.
With these methods, you can build custom inputs that play nicely with the existing web. They fire native change events, they don’t risk side effects in your view logic, and in many cases they even improve screen reader accessibility. If you take a look at the HTML5 spec for radio buttons, you’ll see that it only talks about behavior, not appearance. It’s not a long stretch to reuse that behavior in new and creative ways, and I look forward to seeing how others employ it for interface construction.
A couple of months ago, iOS added new screen resolutions for developers to support in their apps. As a result, many developers needed to update their user interfaces to look native on these new screen sizes.
Today we’re introducing ParseUI for iOS, a collection of user interface components aimed to streamline and simplify user authentication, data list display, and other common app elements. You might be familiar with these components already, as they were previously packaged inside the main Parse iOS SDK. With this release, we’ve updated how they look and made them resolution-independent to ensure a polished presentation on any screen.
A New Look
Inside this release you’ll find all new designs for every component with simplified user interfaces plus many under-the-hood improvements, such as smoother animations and faster rendering. To give you an example, here’s how `PFLogInViewController` looked before and how it looks today:
All components were rebuilt from scratch to be resolution-independent, meaning they look great and feel native on any screen resolution. This resolution-independent approach also introduces support for more presentation options, giving you the flexibility to comprehensively customize everything within your application’s navigation and layout.
ParseUI is all open source, and you can view the code on GitHub. You can also access the new version of the Parse iOS SDK here.
Today, we’re excited to announce Parse Push Experiments, a new feature to help you confidently evaluate your push messaging ideas and create the most effective, engaging notifications for your app. With Parse Push Experiments, you can conduct A/B tests for your push campaigns, then use your app’s real-time Parse Analytics data to help you decide on the best variant to send.
As most developers know, push is one of the best ways to re-engage people in your mobile app. Parse already makes it easy to engage with people through push — in fact, in the past month, apps have sent 2.4 billion push notifications with Parse. We built Parse Push Experiments to make push engagement even simpler and more powerful, and to solve a common problem you may have experienced while designing your push strategy.
Parse Push Experiments in Action
Say you’ve just built a beautiful app on Parse and successfully launched it in the app store. You’re confident about the in-app experience, but want to make sure your users stay engaged with new content in your app, so you come up with some creative ideas about how to enrich your app with push messaging. Now you need a reliable way to tell which of your ideas will generate a better open rate. You could send one message today and the other one tomorrow, then see which one performs better — but what if some external factor, like getting featured in the media, spikes your app’s popularity tomorrow? That might lead you to incorrect conclusions. In order to fairly compare two push options, you need a way to conduct a push messaging experiment while holding all external factors constant, and only changing the thing that’s being tested — the notifications themselves.
Here’s how it works with Parse Push Experiments:
For each push campaign sent through the Parse web push console, you can allocate a subset of your users to two test groups. You can then send a different version of the message to each group.
Afterwards, you can come back to the push console to see in real time which version resulted in more push opens, along with other metrics such as the statistical confidence interval.
Finally, you can select the version you prefer and send that to the rest of the users (e.g. the “Launch Group”).
A/B Testing with Parse Push Experiments
In addition to testing content, you can use Parse Push Experiments to A/B test when you send push notifications. This is useful if your app sends a daily push notification and you want to see which time of day is more effective. You can also constrain your A/B test to run only within a specific segment of your users; for example, you might want to run an A/B test only for San Francisco users if your push message is location-specific. Finally, A/B testing works with our other push features such as push-to-local-time, notification expiration, and JSON push content to specify advanced properties such as push sound. For more details, check out our push guide and read tips for designing push notifications here.
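For instance, a sketch of a JSON push payload with an advanced property like a custom sound (keys as documented in the Parse push guide; the sound file name is hypothetical):

```json
{
  "alert": "Flash sale: 20% off for the next two hours!",
  "sound": "cheering.caf",
  "badge": "Increment"
}
```

You could A/B test this payload against a plain-text variant to measure whether the custom sound and badge actually improve open rates.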
Push A/B testing works with existing Parse SDKs that you may already have in your app (iOS v1.2.13+, Android v1.4.0+, .NET v1.2.7+). To try it, simply use our web push console and enable the “Use A/B Testing” option on the push composer page. So let your creativity flow, brainstorm new push campaign ideas, and experiment! We can’t wait to see what you’ll create.
Widely hailed by its users with a five-star rating in the App Store, Timbre is a welcome addition to any music listener’s event-planning experience. Timbre provides a comprehensive list of music events happening in the U.S. and the U.K., and greatly simplifies the attendance experience. Users can create playlists from gigs and receive relevant notifications when bands they follow are touring near their location. Plus, ticket purchasing becomes much more seamless through Timbre: fans are instantly connected with primary and secondary ticketing providers.
Timbre began as a hackathon project in 2012. In 2013, it was acquired by Seatwave, a UK-based secondary ticketing exchange company. Today, Timbre’s visually captivating experience is available on the App Store, with plans for an Android release in the future. In addition to the Timbre app, the team has built Timbre Afterparty, a web app focused on post-event sharing and celebration. Users can upload images, review gigs, and share with friends. Additionally, Timbre Afterparty provides the setlist for an event, and then allows the user to replay the setlist through a Spotify plugin.
Given the high complexity of pulling information on so many different artists, venues, and shows and linking them with user preferences, Timbre turned to Parse as their backend solution. Parse Core is used to store user preferences, artist images, reviews, and more. To engage with fans, Parse Push has proved exceptionally helpful. Users’ music choices, including artists, venues, and events, are stored in Parse. Each day, Parse is queried to aggregate event information that is relevant to the user. This information is then sent in a push notification to a music fan, connecting them to what is sure to be a great musical adventure.
For Timbre’s development team, Parse proved to be useful in a variety of ways:
Our favorite aspect of using Parse is its capability for rapid prototyping with small datasets. Using Parse saved us hosting costs, and I would recommend it to teams that are looking to prototype quickly.
Download Timbre on the App Store today, and never miss out on a great show again.
Security plays an important part in releasing an app on Parse, and Access Control Lists (ACLs) are one of the most powerful ways to secure your app. As Bryan mentioned in part III of his security series, “If you have a production app and you aren’t using ACLs, you’re almost certainly doing it wrong.” In that blog post he examines how to create and save ACLs in code. Another effective way to create and manage ACLs is through the data browser, and today that is much easier with a new and improved ACL editor. The data browser is already one of our most popular and most frequently used tools, and now you can also use the data browser to easily edit and manage your app’s ACLs.
An ACL is essentially a list of users that are allowed to read or write a Parse object. For example, let’s say your social networking app has a Post object, and you want the user who created that post, jack, to be the only person who can see it. You would then create an ACL on that object and give Jack read permissions. By default, ACLs are public read and write, so to add an ACL on Jack, first uncheck public read and write, and then type Jack’s username into the ACL editor. Each step is shown in the new editor, along with the equivalent code in iOS.
Now Jack’s post is private, and Jack is the only one who can see it. But nobody has write permissions on the object, so its fields can only be changed by using the master key. What if Jack wants to edit his post? To allow this, you need to give Jack write permissions in the object’s ACL.
[acl setWriteAccess:true forUser:jack];
Now Jack can both see and edit his Post object. But what if Jack wants to share his Post with his friend jill? Right now Jack is the only user with read or write permissions on the object. You want to allow Jill to be able to see the post, but you don’t want her to be able to edit it. Therefore you need to add read permissions for Jill on the ACL.
Jack and Jill can see the post now, and Jack is the only person who can edit it. But maybe you want to be able to make sure the content of Jack’s post is appropriate, and so you want a group of administrators to be able to see the post, and to delete it if something is wrong with it. Once you’ve assigned these users to a Role named admin, you could add permissions on that role to the ACL. Given that delete is included in “write” permissions, you would check both read and write in the ACL editor.
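In code, granting the role both permissions looks like this (using the role name `admin` from above):

```objectivec
[acl setReadAccess:true forRoleWithName:@"admin"];
[acl setWriteAccess:true forRoleWithName:@"admin"];
```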
What if Jack wants to publish his post publicly? Now you want to give the post public read permissions. Simply check the read checkbox in the public permissions row at the top of the editor, and anyone can now see the post.
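The equivalent call in iOS:

```objectivec
[acl setPublicReadAccess:true];
```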
Notice that all of the users and roles in the ACL now have read permissions checked automatically. That is because you have enabled public read, so anybody can read Jack’s post. But public write is still disabled, so the only users who can edit the post are Jack and all users that belong to the admin role. Click “Save ACL” and now your post object will have all of these permissions set, summarized in its cell by “Public Read, 1 Role, 2 Users”.
post.ACL = acl;
[post saveInBackground];
An undefined ACL acts the same as an ACL with public read and write permissions enabled, so when you first click on a cell without an ACL defined, the editor will show the public read and write boxes checked. To add private permissions on a role or user, you must first uncheck public read and/or write.
In just over two years, Health & Parenting has hurtled to the top of the charts with its global hits, Pregnancy+ and Tiny Beats. As a team, Health & Parenting is focused on creating high-quality, innovative apps for new parents. Among its growing suite of products, Pregnancy+ and Tiny Beats are its flagship apps—Pregnancy+ is one of the most popular pregnancy apps in the world with over 2 million downloads per year. The app and its rich content have been translated into seven languages, and it is available across numerous platforms, including the App Store, Google Play Store, Windows Store, Amazon App Store, and more. Tiny Beats, a fetal heart rate monitor app, can be found in fourteen languages and has maintained a consistent spot in the top five rankings of the Medical section of the App Store.
With the use of Parse, the Health & Parenting team has been able to scale their family of apps to reach users around the world and to operate across a wide variety of device platforms. Parse is used comprehensively, from the sign-up process to cloud storage to push notifications and more. Parse Core is used to securely store app data, enabling users to seamlessly sync their activity across multiple devices and to restore their information if a device is lost or accidentally wiped. Using Parse has provided many benefits for the Health & Parenting team and their user base. As John Miles, CEO, notes:
Parse has been quick and relatively easy to integrate. On an ongoing basis, the dashboard gives us clear visibility of our customers’ engagement and retention.
The team’s favorite aspect of using Parse is the Parse Push product. Given the intertwined nature of their apps, many customers of the Pregnancy+ app would gain much value from the Tiny Beats app. Knowing this, the Health & Parenting team utilized Parse Push to announce the launch of Tiny Beats to all existing Pregnancy+ customers. In just a single weekend, Tiny Beats soared to number seven overall in the UK App Store’s paid charts. By using Parse Push, Miles found he was able to engage with customers more intimately than ever before:
Specifically, the segmentation of who you’re about to push to, and being able to preview how they will see the notification on their device, have been our favorite parts of using Parse Push.
Pregnancy+ and Tiny Beats are available for download across platforms. Find both apps and discover the rest of the Health & Parenting suite of apps here.