When Durable Functions first came out, I was ecstatic, because a function with state meant being able to create serverless actors, and holy Scheiße, how exciting that was. Unfortunately this wouldn’t work because of this bug.

So when I received a notification over the weekend that Chris Gillum had closed it, the prospect of the coming Monday suddenly brightened.

What is an actor?

The actor model is a conceptual model for concurrent computation. It represents the entities of a system as “actors”. An actor is very similar to an object in OOP, except that it’s distributed, long-lived, persistent, and processes one thing at a time. The direct consequence is that an actor is addressable and can work with other actors, maintains an internal state, and processes incoming messages sequentially. The keyword here is sequentially: actors are by design single-threaded, and this is a very desirable property of theirs.1

Actors are good at representing real-life systems with lots of discrete entities that need to maintain local consistency (atomicity during processing) while communicating through asynchronous mechanisms. Bank accounts are a good example: each bank account is its own actor, and making a transfer requires the originating actor to invoke the destination actor2.

  • On the sending side, you want to make sure that the account balance is high enough when initiating the transfer, and that no other operation debits too much from the account while you send the money.
  • On the receiving side, you want to make sure the transfer is correctly recorded into the balance, and that no other operation changes the balance while you update it.

Local consistency is maintained by processing each transfer in a single “thread” within each actor, as the sketch below illustrates.
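
To make this concrete, here is a minimal, framework-agnostic sketch of such an actor in C# (all names and types here are hypothetical, purely for illustration): a mailbox drained by a single loop, so the balance check and the debit can never interleave.

using System.Collections.Concurrent;

public abstract class Message { public decimal Amount; }
public class Debit : Message { public AccountActor Destination; }
public class Credit : Message { }

public class AccountActor
{
    // The mailbox: incoming messages are queued, never processed in parallel.
    private readonly BlockingCollection<Message> _mailbox = new BlockingCollection<Message>();
    private decimal _balance;

    public void Send(Message message) => _mailbox.Add(message);

    // A single consuming loop drains the mailbox one message at a time,
    // which is what guarantees local consistency.
    public void Run()
    {
        foreach (var message in _mailbox.GetConsumingEnumerable())
        {
            if (message is Debit debit && _balance >= debit.Amount)
            {
                _balance -= debit.Amount;
                debit.Destination.Send(new Credit { Amount = debit.Amount });
            }
            else if (message is Credit credit)
            {
                _balance += credit.Amount;
            }
        }
    }
}

Each actor would run its loop on its own thread or task; as we’ll see below, Durable Functions can give us that sequential loop for free.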

Durable Functions in a nutshell

Azure Functions is Azure’s3 implementation of serverless computing. The Durable Functions framework adds a state layer on top of it, using a combination of Storage Queues and Storage Tables4. It relies on two core concepts:

  • orchestrations call activities and maintain some form of state; they should be deterministic and idempotent5. Orchestrations have an id that is either generated by the framework or provided by you.
  • activities can call external systems, modify external state, record into databases or storage, retrieve data from external APIs, etc.

For example, imagine an orchestration calling external system A’s REST API, then external system B’s REST API with the result. The code would resemble this:

[FunctionName("RunOrchestrator")]
public static async Task RunOrchestrator(
    [OrchestrationTrigger] DurableOrchestrationContext context,
    ILogger log)
{
    var input = context.GetInput<InputClass>();
    // The generic overload is required to capture the activity's return value.
    var resultFromSystemA = await context.CallActivityAsync<ResultClass>("callSystemA", input);
    await context.CallActivityAsync("callSystemB", resultFromSystemA);
}
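
The matching callSystemA activity would be a plain function with an [ActivityTrigger] binding. Here is a sketch of what it could look like, assuming system A exposes a simple HTTP endpoint (the URL and the InputClass/ResultClass types are made up, and the PostAsJsonAsync/ReadAsAsync helpers come from the WebApi client package):

private static readonly HttpClient Client = new HttpClient();

[FunctionName("callSystemA")]
public static async Task<ResultClass> CallSystemA(
    [ActivityTrigger] InputClass input,
    ILogger log)
{
    // Activities are free to do non-deterministic I/O: here, an HTTP call.
    var response = await Client.PostAsJsonAsync("https://system-a.example.com/api/process", input);
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsAsync<ResultClass>();
}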


On the first call, the orchestration calls an activity that encapsulates the REST API call. It effectively triggers a function in the same Function App, called callSystemA, that calls a REST endpoint. The beauty of it is that the framework doesn’t actually wait for the method to return.

What actually happens is that the function gets killed, and when the callSystemA activity returns, the framework stores the result into storage, then re-invokes the orchestration. It restarts from the beginning, but this time, when it reaches await context.CallActivityAsync<ResultClass>("callSystemA", input), instead of actually calling the activity it retrieves the result from storage, then carries on to the next instruction.
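
This replay mechanism is why orchestrator code has to be deterministic: on every replay the code runs again from the top, so anything non-deterministic must go through the context. A small illustration:

// DON'T: this returns a different value on every replay.
// var now = DateTime.UtcNow;

// DO: the context provides a replay-stable timestamp.
var now = context.CurrentUtcDateTime;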

General idea

Leveraging the Durable Functions framework, we will create a new orchestration for each of our actors. The “initialization” of this orchestration is really just starting a new orchestration instance, using the actor’s identifier as the orchestration id.
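
Here is a minimal sketch of what such an actor could look like: an “eternal” orchestration that waits for external events and restarts itself with its updated state (the function, event, and type names are mine, not prescribed by the framework):

[FunctionName("BankAccountActor")]
public static async Task BankAccountActor(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // The actor's state travels along as the orchestration's input.
    var balance = context.GetInput<decimal>();

    // Block until someone raises an event against our instance id.
    var amount = await context.WaitForExternalEvent<decimal>("operation");
    balance += amount;

    // Restart with the new state; this also truncates the replay history.
    context.ContinueAsNew(balance);
}

Sending a message to the actor is then just a matter of addressing its instance id from any function with a DurableOrchestrationClient binding:

// Create the actor with an initial balance, then credit it.
await client.StartNewAsync("BankAccountActor", "account-42", 0m);
await client.RaiseEventAsync("account-42", "operation", 100m);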

Architectural Considerations

  • Performance
  • Availability
  • Security
  • Scalability
  • Maintainability
  • Extensibility

Conclusion

First, a word of caution: I haven’t tested the performance of this yet, and I don’t know how well it will age. Right off the bat, there are some concerns to consider: debugging might be complex, and monitoring even more so. Code changes require some way of handling version updates, and you need some form of backup strategy.

While this is a pretty exciting programming model, I wouldn’t use it as the only state store for the application; rather, I’d treat it as a running copy, retrieving the state from an external store at actor initialization and saving it back at the end of processing.
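
A possible shape for that, sticking with the hypothetical bank-account actor from above (loadState and saveState are activities you would write against whatever store you choose):

[FunctionName("BankAccountActor")]
public static async Task BankAccountActor(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    // Hydrate from the external store on first start, from the input afterwards.
    var balance = context.GetInput<decimal?>()
                  ?? await context.CallActivityAsync<decimal>("loadState", context.InstanceId);

    var amount = await context.WaitForExternalEvent<decimal>("operation");
    balance += amount;

    // Checkpoint back to the external store before looping.
    await context.CallActivityAsync("saveState", new { id = context.InstanceId, balance });
    context.ContinueAsNew(balance);
}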

Notes

  1. This is a pretty basic description of actors. The real thing is a mathematical model; a good introduction to it can be found here.

  2. I’m taking shortcuts: in real life you would probably not update the balance directly, but rather some form of ledger. You get the idea.

  3. Yes you can run Azure Functions outside of Azure, but no-one is doing that, soooo. 

  4. Its implementation is quite elegant; I invite you to have a look at the details.

  5. Given the same parameters, the same results should occur; orchestrations also usually get called several times, so they should tolerate being invoked multiple times without issues.