The Apex team is always looking for ways to reduce the number of limits that you have to think about when you are building applications on the Force.com platform. To that end, with the Summer ’13 release we are replacing the old asynchronous limits with a single limit for all of the different asynchronous processes.

Our hope is that these changes will help increase your productivity as a developer. This post should give you some context on why we have chosen to combine these limits, and some additional details on how the new limit will operate.

The Old Way

Up until Spring ’13, there were separate limits for @future calls and Batch Apex executions. To confuse things further, the two limits were calculated very differently. The number of @future calls you could make in a day was a multiple of the number of licenses in your org, while the number of batch executions was a fixed number regardless of how many licenses you had.

These differing calculations drove implementation decisions, and usually in a bad way. If your org had few licenses, you might use Batch Apex where it wasn’t really needed, because its daily limit was higher. If your org had lots of licenses, you might use @future for batch-worthy processing so that you didn’t go beyond the daily batch limit.

For ISVs, this was an even larger problem: you didn’t know, in advance, whether your subscriber would have few licenses or many. We received many cases where an installed package was over-eating from the @future buffet in smaller orgs, while the same package ran just fine when installed in larger orgs.

The fact is that our system handles these two things in more or less the same way. Each execution is a message on the queue, which reaches the front and is dispatched to the appropriate application server. There are differences in how the dequeue worker deals with the different job types, but the cost of handling them is roughly equal. The separate limits date from an earlier time when the implementations really were different; now that they’re similar, there wasn’t a compelling reason to retain two separate limits.

Sense and Sensibility

Rather than the oddity of two separate-but-unequal daily buckets, we are going to track a single daily asynchronous bucket that replaces the two existing limits. You can now make decisions based on the best tool for the job, not on whichever limit happens to be larger in your org. You can use both @future and Batch Apex and share the single limit between them.

The @future limit is gone.  The daily batch limit is gone.  There is now just the daily asynchronous limit.
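To make that concrete, here is a minimal sketch of the two options side by side. The class names, fields, and queries are hypothetical; the point is simply that a small fire-and-forget @future method and a large-volume batch job now draw from the same daily allotment.

// A small, fire-and-forget task: a natural fit for @future.
public class ContactScrubber {
    @future
    public static void scrubAsync(Set<Id> contactIds) {
        List<Contact> contacts = [SELECT Id, Email FROM Contact WHERE Id IN :contactIds];
        for (Contact c : contacts) {
            if (c.Email != null) {
                c.Email = c.Email.trim().toLowerCase();
            }
        }
        update contacts;
    }
}

// The same cleanup across a large data volume: a natural fit for Batch Apex.
global class ContactScrubBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id, Email FROM Contact WHERE Email != null');
    }
    global void execute(Database.BatchableContext bc, List<SObject> scope) {
        for (Contact c : (List<Contact>) scope) {
            c.Email = c.Email.trim().toLowerCase();
        }
        update scope;
    }
    global void finish(Database.BatchableContext bc) {
        // post-processing, notifications, etc.
    }
}

// Both of these now count against the same daily asynchronous limit:
// ContactScrubber.scrubAsync(contactIds);
// Database.executeBatch(new ContactScrubBatch());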

The new limit calculation for each org is going to be the larger of the two older limits.  This means that you’ll get at least 250,000 asynchronous calls per day regardless of how many licenses you have.  If you have more than 1,250 qualifying licenses, you will get the old @future limit, giving your org 200 asynchronous calls per qualifying license.
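As a quick illustration of the arithmetic (anonymous Apex, example numbers only):

// New daily asynchronous limit: the larger of 250,000 and 200 per qualifying license.
Integer qualifyingLicenses = 2000;                        // example org size
Integer perLicenseAllotment = qualifyingLicenses * 200;   // 400,000
Integer dailyAsyncLimit = Math.max(250000, perLicenseAllotment);
System.debug(dailyAsyncLimit);                            // 400,000 for this org
// A 500-license org would get Math.max(250000, 100000) = 250,000 instead.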

For smaller orgs, this means a huge increase in the number of @future calls you can make.  For large orgs, this means much less worry about the batch limit.

Fuzzy Math

Some of you are pulling out your calculators and figuring out how much of an increase this is for your org, and you’re finding that the new combined total is less than the old total.  That is correct.  You will have fewer calls, up to 250,000 fewer calls per day.  What?  An overall limit reduction? 

The reduction will not impact your org. Yes, you, dear reader; this won’t impact your particular org. I can say that – despite not being able to see you and know who you are – because this reduction will not impact any orgs. Before implementing this new limit, we did a lot of data mining and analysis on org behavior. We found no orgs using more calls, across all asynchronous job types, than the new limit would permit. It turns out the existing limits were, in aggregate, already more generous than anyone needed.

More Stuff In The Bucket

While we were consolidating the limits, we decided to begin tracking all types of asynchronous calls. Batch start and finish methods, as well as scheduled executions, will now count towards this consolidated limit.

In the old way, batch start and batch finish were not counted. This meant that a zero-item batch job would not count against the daily limit at all, despite pushing two messages for processing and consuming two threads. The start method is often a long-running query, so it most certainly should be tracked. The proper pattern is not to run empty batch jobs; they reduce the processing capacity available to batch jobs with actual work to do. Counting these executions will help make sure that your org’s batch jobs (which you didn’t write as zero-row jobs, since you wouldn’t do that) have sufficient processing capacity when they are ready to run.
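One simple way to avoid those wasted executions is to check for work before enqueueing the job. A sketch, reusing the hypothetical ContactScrubBatch from above:

// Skip the batch entirely when there is nothing to process, so the start and
// finish executions aren't spent on a zero-row job.
Integer pending = [SELECT COUNT() FROM Contact WHERE Email != null];
if (pending > 0) {
    Database.executeBatch(new ContactScrubBatch());
}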

Scheduled executions were never tracked before, and that allowed some ugly behavior to appear. Some orgs run thousands of these calls per day, which is not the intention of scheduled Apex. Again, since each of these jobs puts a message on the queue and uses up its share of resources, it makes sense to count them to ensure that there is enough processing capacity for all asynchronous jobs.

In response to this behavior, we are also increasing the number of jobs that can be scheduled in an org from 25 to 100. Hopefully this increase gives these deviant orgs enough capacity to operate without scheduling thousands of daily jobs; if not, the fact that these executions now count towards the asynchronous limit should help dissuade those orgs from consuming so much of the shared resource capacity.
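The intended shape of scheduled Apex looks more like the following sketch (class name and schedule are hypothetical): one recurring job that wakes up, hands the real work off to Batch Apex, and goes back to sleep, rather than thousands of scheduled executions per day.

global class NightlyContactScrub implements Schedulable {
    global void execute(SchedulableContext sc) {
        // This scheduled execution, and the batch start/execute/finish it
        // launches, all count against the consolidated asynchronous limit.
        Database.executeBatch(new ContactScrubBatch());
    }
}

// Schedule it once, e.g. every night at 2:00 AM:
// System.schedule('Nightly contact scrub', '0 0 2 * * ?', new NightlyContactScrub());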

Keeping Vigil 

We will be carefully monitoring how these changes are impacting different orgs.  If there are any problems, you can file a case with support and we’ll make sure you are not blocked from doing processing.

Asynchronous processing is good for your org and good for the service. For your org, it allows users to move on to the next task without waiting for related processing to finish. For the service, it improves our load balancing capability, smoothing out the peaks and troughs of service demand by shifting some processing out by a few seconds or minutes. The more processing that happens asynchronously, the lower the chance that we’ll reach peak capacity during a busy spike.

Since asynchronous is good, we want to encourage you to use it. The limits here should be generous enough for your business needs, yet comprehensive enough to keep runaway orgs from throttling your business processes. This is why we will keep a vigil on the new limit pattern, to ensure that it does nothing to dissuade anyone from utilizing the various asynchronous options we have baked into the Apex programming language.

Now you know more, and you are a better developer.  Happy coding!
