
Question

Workflow databases growing out of control

asked on July 25, 2017

Hi, I have a customer that wants to do some housecleaning, as their Workflow database files are over 126 GB in size and their search DBs are over 300 GB. They've already reduced the maintenance times to 1 day and the deletion of old logs to 3 days. Can someone help, or should I open a case to troubleshoot further?

The question is: are these database sizes normal? Is there a Laserfiche size estimate per workflow?

1.png (18.6 KB)
2.png (7.93 KB)
3.png (5.02 KB)

Replies

replied on July 25, 2017

What do you mean by "search DBs"?

This is probably better handled through a support case. Have you checked that you don't have any runaway workflows (through a statistics report)? What was the data retention period set to before? Is having 43 million instances complete in that time interval expected?

Data is not instantly deleted from the tables when you change the retention period, so you should give it a couple of days or so to catch up on deleting 263 million rows.
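If you want to watch the delta yourself, here is a minimal sketch (assuming Python with pyodbc and a login that can read the Workflow database; the driver, server, and database names are placeholders, not values from this thread) that prints approximate row counts for workflow_reporting_log and the search_*_log tables. Running it once a day should show the counts trending down as cleanup catches up.

import pyodbc

# Placeholders -- point these at your own SQL Server instance and Workflow database.
CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YOUR_SQL_SERVER;"
    "DATABASE=YOUR_WORKFLOW_DATABASE;"
    "Trusted_Connection=yes;"
)

# Approximate row counts from sys.dm_db_partition_stats (no full table scans).
# Covers workflow_reporting_log and the search_*_log tables discussed above.
QUERY = """
SELECT t.name AS table_name, SUM(ps.row_count) AS approx_rows
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t ON t.object_id = ps.object_id
WHERE ps.index_id IN (0, 1)
  AND (t.name LIKE 'search[_]%[_]log' OR t.name = 'workflow_reporting_log')
GROUP BY t.name
ORDER BY approx_rows DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for table_name, approx_rows in conn.cursor().execute(QUERY):
        print(f"{table_name}: {approx_rows:,} rows")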

replied on July 25, 2017

Hi Miruna, sorry, I meant to say the search log files. I will have the customer check for runaway workflows and will report back. Which data retention period are you talking about?

replied on July 25, 2017

The lifetime values in your screenshots above. They specify how long the data in the search_X_log tables will be retained after the instance completes. "Reporting data" pertains to the "workflow_reporting_log" table while "instance data" is about the search_X_log tables.

Given that the delta is negative, cleanup is working. But we do limit the times when it runs so it doesn't impact performance.

replied on July 25, 2017

OK, that's good to know. So the search log sizes are normal? It looks a bit loaded.

replied on July 25, 2017

"Normal" is relative to the throughput expected on this server. search_instance_log shows 40 million instances completed. If that's expected load on the server for the length of time you had to keep completed instances (30 days by default), then it works out to 6-7 activities per instance on average, so that would be pretty normal.

If 40 million instances in 30 days sounds like too much, then either you have (or had) an infinite loop, or cleanup was off, or cleanup can't keep up with removing old instances, or all of the above. Cleanup falling behind could be because the loop is still going or because the server needs more resources.
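As a rough sanity check on those numbers, here is a back-of-the-envelope sketch in Python using the approximate figures from this thread (263 million log rows, 40 million completed instances, and the default 30-day retention):

# Rough figures from the discussion above.
log_rows = 263_000_000            # approximate rows in the search_*_log tables
completed_instances = 40_000_000  # approximate completed instances in search_instance_log
retention_days = 30               # default retention for completed instances

print(f"~{log_rows / completed_instances:.1f} activities per instance")              # ~6.6
print(f"~{completed_instances / retention_days:,.0f} instances completing per day")  # ~1,333,333

If that daily instance rate is far above what this customer's workflows should actually be producing, that points to a runaway loop or to cleanup falling behind.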

replied on July 27, 2017

OK, thanks for the update, Miruna. I'll let my customer know.

replied on July 28, 2017

Hi Miruna, the customer found this information beneficial, but they would like to open a case with Laserfiche about a proper cleanup process. Evidently, they have run with the settings in the attachments above for some time with no significant reduction to indicate it's working. Is this Pre-Sales or LF Support?

replied on July 28, 2017

Tech Support.

replied on July 28, 2017

OK, thanks.

