2. Evaluate not only scalability, but lights-out elasticity

The good news is that most integration platforms will scale for embedded integration needs. Whether your measure is API calls per second, transactional volume, inserts, upserts, or requests, you'll want to validate it and put the vendor through its paces on your specific metrics.

But if you're embedding at scale, rolling out integrations to hundreds, thousands, or even tens of thousands of end customers, pure scalability isn't enough. You also need to understand the operational cost of achieving that scale: how much overhead, including monitoring, provisioning, and ongoing sizing, you'll take on to meet end customer demand with your embedded integration platform.

Here's where different compute models come in. Older-generation integration platforms are server-centric, which works up to a point. Depending on the vendor, you'll be sizing and adding workers, vCores, or perhaps Atoms and Molecules to support end customer growth. Typically, you'll pay the vendor a fee for each worker, and you may end up paying more to cover anticipated or peak load.

The more significant issue is that monitoring server-based "workers" on older platforms can be painful. In many cases, the first indication that you're running out of resources is end customers starting to see performance problems. Other symptoms on older technology include slow response times, frequent timeouts, application restarts, and performance-related errors in the logs.

Modern serverless computing enables embedded integration at scale

You're probably familiar with serverless computing. If not, here's a quick primer: it's a cloud computing model in which the provider dynamically allocates the exact resources needed, on demand.
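To make the sizing overhead concrete, here is a rough back-of-the-envelope sketch comparing worker-based capacity, which must be provisioned for peak load, against pay-per-execution pricing. All prices, rates, and function names are hypothetical illustrations, not any vendor's actual pricing.

```python
import math

def worker_cost(peak_rps: float, rps_per_worker: float,
                monthly_fee_per_worker: float) -> float:
    """Server-centric model: workers are sized for peak load,
    so you pay for peak capacity all month, even when idle."""
    workers_needed = math.ceil(peak_rps / rps_per_worker)
    return workers_needed * monthly_fee_per_worker

def serverless_cost(monthly_executions: float,
                    price_per_million: float) -> float:
    """Serverless model: billing follows actual executions,
    so average load, not peak load, drives the bill."""
    return monthly_executions / 1_000_000 * price_per_million

# Hypothetical tenant: traffic peaks at 500 req/s but averages 50 req/s.
fixed = worker_cost(peak_rps=500, rps_per_worker=25,
                    monthly_fee_per_worker=200.0)
usage = serverless_cost(monthly_executions=50 * 60 * 60 * 24 * 30,
                        price_per_million=0.40)
```

With these illustrative numbers, the worker model pays for 20 workers around the clock while the serverless model pays only for the roughly 130 million requests actually served; the gap widens as the ratio of peak to average load grows, which is exactly the pattern bursty end customer integrations produce.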
In a true serverless architecture for an embedded integration platform, each step or task in an integration flow (such as a trigger, transformation, or insert) is a serverless function, and the architecture elastically allocates resources on demand. Because there are no persistent workers, there's no need to size or pay for anticipated demand, and no need for your ops team to continually monitor dashboards and logs for customer issues caused by under-sizing. As a result, serverless is a recipe for elasticity and end customer integrations at scale.
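As a minimal sketch of that idea, the flow below models each step as an independent, stateless function, the shape a serverless platform can fan out across as many concurrent invocations as demand requires. The function names, field mappings, and the list standing in for a destination database are all illustrative assumptions, not any platform's actual API.

```python
import json

def on_trigger(event: dict) -> dict:
    """Trigger step: parse a raw inbound event (e.g., a webhook payload)."""
    return json.loads(event["body"])

def transform(record: dict) -> dict:
    """Transformation step: map source fields onto the target schema."""
    return {"full_name": f"{record['first']} {record['last']}",
            "email": record["email"].lower()}

def insert(row: dict, store: list) -> None:
    """Insert step: write the transformed row to the destination
    (a plain list stands in for a database table here)."""
    store.append(row)

# Each step is stateless, so a serverless runtime can invoke any number of
# copies in parallel; nothing is provisioned ahead of demand.
destination: list = []
event = {"body": json.dumps({"first": "Ada", "last": "Lovelace",
                             "email": "Ada@Example.com"})}
insert(transform(on_trigger(event)), destination)
```

Because no state is shared between invocations, scaling to the next thousand end customers is the platform's scheduling problem rather than your sizing problem.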
