
Question

Distributed Computing Cluster (DCC) and VMs (CPUs versus vCPUs)

asked on February 25, 2019

We have 2 issues we are trying to address.

First, there is a minimum hardware requirement for DCC of a 2.93 GHz or faster multi-core processor. We are working with VMware, and our available CPUs are 2.10 GHz vCPUs. What is Laserfiche's stance on supporting DCC with vCPUs, and what is the requirement regarding vCPU assignments?

Second, when we run DCC, it occasionally locks up: xocr32b.exe hangs the lfomniocr19.exe process. This can happen multiple times until our CPU maxes out at 99%, virtually killing our server. We found a KB article (KB 1013967) that addressed a similar lockup of lfomniocr19.exe by xocr32b.exe in the Import Agent product. Was the fix applied via KB 1013967 also applied to the DCC processes? Could the problem we are experiencing be the same issue that caused the lockups in the Import Agent processes?

We would like the second issue to be addressed, but we need to resolve the first issue as well, since we are getting pushback from Laserfiche and our channel partner that we do not meet the minimum CPU requirements.

Thank you

Roger Landers / City of Plano

 


Replies

replied on February 25, 2019

In my experience, DCC is not stable enough to run on a server. Those runaway LFOMNIOCR processes will betray you over and over. There are things you can do to mitigate this issue:

  1. Distribute the workload to workstations and schedule your jobs to run during off hours. That way, when you have a runaway job, it doesn't take the whole organization down. Test on your own machine until you nail down the offending entries.
  2. Use advanced search syntax to eliminate troublesome entries:
    1. Extremely large pages. I added -{LF:imagesize > 20000000} to my search and that helped a lot. We had huge scanned maps that would fail and hang the OCR processes every time.
    2. Cut-off documents: -{LFRM:ActualCutoffDate="*"}. There is no level of permission that can edit a cut-off document.
    3. Entry IDs that appear over and over in failed DCC jobs. I found five in my repository that DCC doesn't like and eliminated them with -{LF:ID=9999}.
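Putting those exclusions together, the appended filters might look like this as a single set of search terms (the size threshold and entry ID are just the example values from this thread; substitute your own, and append the terms to whatever base search selects your OCR candidates):

```
-{LF:imagesize > 20000000} -{LFRM:ActualCutoffDate="*"} -{LF:ID=9999}
```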
replied on February 25, 2019

Thank you for replying, Erik. These are some great tips and workarounds for us.

We currently use 3 Windows Server 2012 R2 VMs, separate from any other workloads, to distribute our DCC workload, so, as you mention, we don't take down the whole organization. Previously, some processes were shared and it was affecting WF, so we separated responsibilities.

We look forward to a response from Laserfiche on the KB article mentioned, as well as on the use of vCPUs to satisfy the minimum CPU requirements.

replied on February 25, 2019

Something else you can do: once an hour (or on whatever interval works for you), shut down all of the DCC workers using the provided PowerShell modules. Then check for any rogue lfomniocr19 processes and use PowerShell to kill those off as well. Once you're sure everything is under control, you can start queuing documents again.

It's kind of a dirty hack, but until DCC is more reliable, there aren't a lot of options.
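A minimal sketch of that cleanup step, using only built-in PowerShell cmdlets (the DCC worker shutdown and restart steps are placeholders for the vendor-provided module commands, which are not named in this thread):

```powershell
# Run this periodically (e.g. from Task Scheduler), after stopping the
# DCC workers with the vendor-provided PowerShell module.

# Find any OCR processes that survived the worker shutdown.
$rogue = Get-Process -Name 'lfomniocr19', 'xocr32b' -ErrorAction SilentlyContinue

if ($rogue) {
    # Force-kill the stuck OCR processes.
    $rogue | Stop-Process -Force
    Write-Output "Killed $($rogue.Count) rogue OCR process(es)."
}

# Then restart the DCC workers (again via the vendor module) and resume queuing.
```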
