
Question

Workflow: increase SQL timeout

asked on January 11, 2022

Workflow seems to have a timeout period for SQL requests; when it is reached, the service shuts down. This happens every time we try to cancel a running workflow instance that has too many activities. How do we increase this timeout so that this does not happen?

    <Message>Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.</Message>
    <Source>.Net SqlClient Data Provider</Source>


Replies

replied on January 12, 2022

Hi Chad, Workflow uses the default .NET SqlClient execution timeout of 30 seconds. You can do a lot of SQL in 30 seconds, and a workflow "with too many activities" should not, on its own, be hitting that limit. This suggests the Workflow database may have highly fragmented indexes and outdated statistics that are hurting performance.
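
If you want to confirm that before changing anything, a quick fragmentation check against the Workflow database looks something like the sketch below. It's a generic DMV query, not Workflow-specific, and the database name [LFWorkflow] is just a placeholder for whatever yours is actually called:

    -- Sketch: list noticeably fragmented indexes in the current database
    -- using the standard DMV. [LFWorkflow] is a placeholder database name.
    USE [LFWorkflow];

    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
      AND ips.page_count > 1000              -- ignore trivially small indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;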

I highly recommend you run index and statistic database maintenance on the Workflow (and other Laserfiche) databases in this environment and see if the issue persists. Ideally set maintenance plans to run on at least a weekly schedule during off-hours. You can use SQL Server Management Studio's built-in maintenance plan designer and/or Ola Hallengren's highly regarded SQL Server Index and Statistics Maintenance scripts.
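
As a rough sketch only (it assumes Ola's MaintenanceSolution.sql is already installed in master, and the thresholds are his common defaults rather than Laserfiche-specific guidance), a typical IndexOptimize call for this scenario looks like:

    -- Sketch of a typical IndexOptimize call (Ola Hallengren's
    -- maintenance solution, installed in master by default).
    EXECUTE master.dbo.IndexOptimize
        @Databases              = 'USER_DATABASES',   -- or name the Workflow DB explicitly
        @FragmentationLow       = NULL,               -- leave lightly fragmented indexes alone
        @FragmentationMedium    = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh      = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1    = 5,                  -- % fragmentation where reorganize starts
        @FragmentationLevel2    = 30,                 -- % fragmentation where rebuild starts
        @UpdateStatistics       = 'ALL',              -- refresh index and column statistics
        @OnlyModifiedStatistics = 'Y';                -- skip statistics with no row changes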

replied on January 12, 2022

OK, I was thinking it was because the workflows simply had millions of activities. There must be some theoretical limit to how many activities a single workflow instance can have before you hit the timeout.

I will run Ola's "Rebuild or reorganize all indexes" and "Update Statistics" scripts during off-hours and see if that fixes the problem; if so, I will schedule them to run once a week.
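
For anyone finding this later: Ola's installer also creates SQL Server Agent jobs, so the weekly run can be scheduled on those directly. The job name below is the installer's default and may differ if it was renamed:

    -- Start Ola's default IndexOptimize Agent job on demand; a weekly
    -- schedule can be attached to the same job in SQL Server Agent.
    EXEC msdb.dbo.sp_start_job
        @job_name = N'IndexOptimize - USER_DATABASES';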

replied on January 12, 2022

Any limits would depend on hardware, how many of these instances you run, and what else is going on in the SQL Server at the time, so there is no hard-set number.

Workflows with millions of activities are a bad idea in all cases, because Workflow needs to keep track of all those activities and their resulting tokens for the entire duration of the instance. It's best to keep loops under 1,000 iterations.

If you run over a set of entries that all need the same type of processing, it's faster to spin the inner activities off into their own workflow that is invoked inside your For Each. That way you take advantage of Workflow's parallel architecture and increase your throughput. On the maintenance side, you end up with smaller instances, so there is less chance of timeouts when they are updated or cleaned up from the database.

replied on January 12, 2022

OK, I was going to recommend that once we got it cleaned up; I've just been focused on fixing the environment first. They designed a big looping workflow, tried to run it, and then every time they tried to stop it, the primary service would crash.

I assume SQL has a lot of work to do when a workflow changes state from running to stopped, perhaps reorganizing the log data for more permanent storage.
