
Discussion


Workflow Subscriber (sometimes can't process all MSMQ messages in time)

posted on June 19, 2014

At one of my customers, I have seen circumstances where the Workflow Subscriber was not able to process/analyze all the MSMQ messages in the queue in a timely manner.


I would like to know the proper way to configure the Workflow Subscriber to process messages closer to real time. Is there a configuration setting (like the number of tasks per CPU for the Workflow service) that we can modify to increase the Subscriber's analysis capacity?


Also, I have seen a situation where a Workflow triggered and kept modifying a document in a loop. It took a day for the customer to notify us and for us to find the source. By that time, the document had been modified so many times that the Message Queuing (MSMQ) service had over 1,000,000 notifications in the queue. It took two days for the Workflow Subscriber service to process them all (even though those document modifications were no longer triggering any Workflow) and allow regular Workflows to start processing normally again.


To prevent such a situation, is there a way to have the Workflow engine monitor the size of the MSMQ queue and, if it grows beyond a specific number, send an email notification to the Laserfiche administrator?

replied on June 19, 2014

You can temporarily increase the number of threads in the Subscriber to handle the backlog. This setting can be found in the Advanced Server Options dialog in the WF Admin Console. You can change it per repository and it will take effect within a minute without needing to restart the subscriber. You can lower the values back when the backlog is cleared. You can also manually clear the queue if you don't want those messages processed.


Workflow doesn't have any monitoring for the size of the message queues, but Windows does. The message queues expose performance counters, and you can set up notifications off of them.
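To illustrate the idea above, here is a minimal sketch of a threshold-based alert. Note the assumptions: `get_queue_depth` is a hypothetical stub standing in for a read of the "MSMQ Queue \ Messages in Queue" performance counter (on the Workflow server you would wire this up via `typeperf`, `win32pdh`, or similar), and the SMTP host and email addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage


def get_queue_depth(queue_name: str) -> int:
    """Hypothetical stub: return the message count for a private MSMQ queue.

    On a real Workflow server this would read the MSMQ performance counter
    for the given queue (e.g. via win32pdh or the typeperf command).
    """
    raise NotImplementedError("read the MSMQ performance counter here")


def should_alert(depth: int, threshold: int) -> bool:
    """Return True when the queue backlog exceeds the configured threshold."""
    return depth > threshold


def send_alert(depth: int, threshold: int, smtp_host: str, admin_addr: str) -> None:
    """Email the Laserfiche administrator about the backlog (placeholder addresses)."""
    msg = EmailMessage()
    msg["Subject"] = f"Workflow MSMQ backlog: {depth} messages (limit {threshold})"
    msg["From"] = "workflow-monitor@example.com"  # hypothetical sender
    msg["To"] = admin_addr
    msg.set_content(
        f"The Workflow Subscriber queue currently holds {depth} messages, "
        f"which exceeds the alert threshold of {threshold}."
    )
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```

A scheduled task could run this check every few minutes; in the runaway-Workflow scenario described above (a backlog of over a million messages), an alert like this would have flagged the problem within minutes instead of a day.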

replied on August 18, 2017

Miruna,

What is the highest thread count you would recommend for this kind of situation? I noticed the highest it will go on the server I am working on is 32.

replied on August 18, 2017

Also, how do you manually clear the queue?

replied on August 18, 2017

I wouldn't go that high. I'd start with 2 and see what happens. We haven't really had cases over the years where a permanent increase was warranted.

Message Queuing is accessible through the Control Panel\Administrative Tools\Services and Applications\Message Queuing. Workflow's queues are under Private Queues. You can clear them from there.

replied on August 18, 2017

We are migrating about 1.7 million documents from one repository to another. Would it be of any benefit to increase it past 2?

replied on August 18, 2017

Are users using the repository and expecting workflows to start during this migration? Do these new documents need to start workflows as they land in the repository?

replied on August 18, 2017

Yes and yes. One workflow brings each document over to the new repository and then invokes a second workflow. After the second workflow completes, the documents are put into an incoming folder, where another workflow picks them up and files them. Users are also scanning new documents into that same incoming folder. The issue they are seeing is an Entry Locked error when trying to scan into the folder, because workflow has not yet caught up with the thousands of documents that were brought over, so thousands are still sitting in the incoming folder waiting to be processed.

replied on August 22, 2017

@████████, I have noticed that even after the documents have run through the workflow, there are still messages in the message queue, and it takes a while for those to clear out. It appears that when there are a lot of messages in the queue, the folder containing the documents that kicked off the workflow stays locked. Is there a way to prevent this, or to speed up clearing those messages out without having to manually purge the queue?
