Hi Dina,
Laserfiche Cloud is a dynamically scaling multitenant environment. Individual customers do not have specific compute resource allocations. There's a post about that here. To give an example:
Cloud's frontend has a cluster of Web Client instances behind load balancers that serve all customers in the region. Depending on various scaling thresholds, such as the total number of connections or the average CPU utilization per instance, the cluster dynamically adds and removes Web Client instances to scale with the workload.
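If it helps to picture it, here's a rough sketch of that kind of threshold-based scaling decision. The metric names and threshold values are made up for illustration, not Laserfiche's actual settings:

```python
# Illustrative sketch only: hypothetical metrics and thresholds, not the real service's values.
from dataclasses import dataclass

@dataclass
class ClusterMetrics:
    active_connections: int      # total connections across the cluster
    avg_cpu_utilization: float   # 0.0 - 1.0, averaged per instance
    instance_count: int

# Hypothetical scaling thresholds for illustration.
MAX_CONNECTIONS_PER_INSTANCE = 500
SCALE_OUT_CPU = 0.70
SCALE_IN_CPU = 0.30
MIN_INSTANCES = 2

def desired_instance_count(m: ClusterMetrics) -> int:
    """Return how many instances the cluster should run, given current load."""
    # Enough instances to cover the connection count (ceiling division).
    needed_for_connections = -(-m.active_connections // MAX_CONNECTIONS_PER_INSTANCE)
    if m.avg_cpu_utilization > SCALE_OUT_CPU:
        needed_for_cpu = m.instance_count + 1   # add an instance when CPU runs hot
    elif m.avg_cpu_utilization < SCALE_IN_CPU:
        needed_for_cpu = m.instance_count - 1   # drop one when the cluster is idle
    else:
        needed_for_cpu = m.instance_count
    return max(MIN_INSTANCES, needed_for_connections, needed_for_cpu)
```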
Most Laserfiche Cloud service components have some form of dynamic scaling like this. The backend of Cloud Workflow/Rules/etc. scales up and down as needed to handle the millions of workflow activities our customers run each month. The high-level design is one where the system pushes events to evaluate and tasks to run onto a message queue service, and a dynamic pool of worker instances reads items off those queues and processes/executes them.
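In very simplified terms, the queue-and-worker pattern looks something like the sketch below. The real services use managed message queues and dynamically provisioned workers rather than in-process threads; this is just to show the shape of it:

```python
# Simplified, illustrative sketch of the producer / queue / worker-pool pattern described above.
import queue
import threading

task_queue: "queue.Queue[dict]" = queue.Queue()

def producer(events: list) -> None:
    """Push workflow events/tasks onto the queue as they arrive."""
    for event in events:
        task_queue.put(event)

def worker() -> None:
    """Pull items off the queue and process them until no work remains."""
    while True:
        try:
            task = task_queue.get(timeout=1)
        except queue.Empty:
            return  # no more work; a real worker pool would scale itself down here
        print(f"processing workflow activity: {task['activity']}")
        task_queue.task_done()

if __name__ == "__main__":
    producer([{"activity": "send email"}, {"activity": "update field"}])
    # A small fixed pool here; in the cloud service the pool grows and shrinks with queue depth.
    pool = [threading.Thread(target=worker) for _ in range(2)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
```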
If there are particular published service limits you're concerned about hitting with a legitimate use case, please reach out to Support. Many limits are guardrails against design anti-patterns and unintentional endless loops that can temporarily impact other customers by consuming too many resources before scaling kicks in. Often (but not always) we can adjust these limits on a per-account basis on request.