
Question

QF deleting original documents without creating new ones

asked on January 16, 2015

We have the dreaded issue in which QF 8.3 is set to delete original documents after processing, but the session ends up deleting the originals and then not creating the new documents.

We have read other posts where it was suggested to do something with the original documents other than deleting them, such as moving or tagging them. We've thought about that, but those methods do not provide a mechanism to alert us that the new documents have not been created. And we end up consuming more storage space by moving or tagging the originals (thereby keeping two copies of each document).

This deletion of documents happens infrequently, but regularly.  Our users are getting restless.

We are running LF 8.3.1 and QF 8.3.  At present, we cannot upgrade to LF 9.x, but can we employ QF 9 with LF 8.3.1?  Would that help with this issue?

What would be ideal would be if QF could be engineered so that it never deletes a source document until the destination document has been created.  Any chance of that happening?

The text of the error from the log file follows (it's always the same):

<Error Time="01/12/2015 17:41:27" Type="DirectoryNotFoundException" Message="Could not find a part of the path 'C:\ProgramData\Laserfiche\Quick Fields\Files\d0760dbe-bf1e-4d6f-aa01-2991f03819a1\Queue\00000001.d'.">
  <HelpLink />
  <Trace>   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
   at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share)
   at System.Xml.XmlTextWriter..ctor(String filename, Encoding encoding)
   at Laserfiche.BatchProcessor.ComponentModel.StorageManager.WriteToXmlFile(String filename, String xml)
   at Laserfiche.BatchProcessor.ComponentModel.StorageManager.WriteDocumentToFile(String documentFile, String documentXml)
   at Laserfiche.BatchProcessor.ComponentModel.StorageManager.EnqueueObject(String objectXml, Boolean isDocument)
   at Laserfiche.BatchProcessor.ComponentModel.StorageManager.EnqueueDocument(String documentXml)
   at Laserfiche.QuickFields.Runtime.QFSessionProcessor.CreateDocument(String documentXml)</Trace>
</Error>


Replies

replied on January 19, 2015

Two things come to mind with this:

 

1) Is an antivirus program occasionally delaying the write of this document to this location? (Or is something else occasionally preventing the file from being created in time?)

2) Are you occasionally running out of room on the C: drive of the machine running QF?

 

Quick Fields pulls the data in from whatever location it's set to pull from, stores it in a temp location, deletes it from the source, and then accesses it from the temp location. It doesn't confirm that the temp data was written correctly.

 

Because of this, I've seen calls into our support center with issues very similar to yours, and one common cause is some sort of delay in writing. Since the write is delayed, QF can't find the document just a bit later (often a matter of milliseconds), so it fails. You can sometimes browse to the directory above with your favorite .tif viewer and find lost documents there.
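If you'd rather sweep that directory for stranded files programmatically than browse it by hand, a minimal sketch (assuming Python is available on the QF machine; the path is the default from the error log above) might look like this:

import os

# Default Quick Fields temp storage, taken from the error log above;
# adjust if your install uses a different location.
QF_FILES = r"C:\ProgramData\Laserfiche\Quick Fields\Files"

# Walk the session storage and report every leftover file so you can
# inspect potential lost documents before cleaning anything out.
for root, _dirs, files in os.walk(QF_FILES):
    for name in files:
        path = os.path.join(root, name)
        print(f"{path}  ({os.path.getsize(path)} bytes)")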

 

The first thing I would do is make sure it's not choking on a corrupt file somewhere in your system. If you have QF Agent, disable all of your sessions. Then, whether or not you have QF Agent, manually open up ALL of your Quick Fields sessions and make sure there is nothing "stuck" in them. Once you have confirmed there is nothing inside any of the sessions, you can manually clean out everything under C:\ProgramData\Laserfiche\Quick Fields\Files.
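If you'd rather not delete that folder outright, here's a minimal sketch that archives it instead, so a stranded document can still be recovered later (the archive location is my own assumption; run this only after confirming every session is empty, per the steps above):

import os
import shutil
import time

QF_FILES = r"C:\ProgramData\Laserfiche\Quick Fields\Files"

# Hypothetical archive destination -- pick any drive with space.
os.makedirs(r"C:\QFArchive", exist_ok=True)
ARCHIVE = os.path.join(r"C:\QFArchive", time.strftime("%Y%m%d-%H%M%S"))

# Move the whole session storage aside instead of deleting it, then
# recreate the empty folder so Quick Fields can keep working.
if os.path.isdir(QF_FILES):
    shutil.move(QF_FILES, ARCHIVE)
    os.makedirs(QF_FILES)
    print("Archived leftover session data to", ARCHIVE)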

 

If this doesn't help and you can't completely isolate the issue, you could upgrade to 9; it will work with 8.3, with the exception of the workflow-related pieces. However, I don't think that would solve your issue, as I've seen errors of this sort in 9 as well.

 

The most effective way I've ever dealt with mission-critical documents is to set up a failsafe workflow for your QF documents that only deletes the original once it has been processed all the way through to Laserfiche. Whenever this sort of thing happens, I've never seen a page in the middle of a document fail without the whole document after that point failing too.

 

So here's the workaround I use when the original document is a .tif. I create a workflow that inserts a pre-OCR'd one-page document as the last page of the .tif I'm working with. This page generally has the phrase ZZZ123ZZZENDOFDOCUMENTZZZ123ZZZ on it, and I also add a metadata field holding the original document's ID. Then, as part of my Quick Fields identification section, I identify this page and store it away someplace. For the identification you can use a whole-page text pattern match looking for that phrase, and it won't really add any time to your QF session. I also set QF so that it doesn't delete the original right away.

Once this ENDOFDOCUMENT page is stored back into Laserfiche, I know the original document is safe to delete, so a workflow process can delete the original and the newly separated marker page (using the document ID I stored earlier). For clients who run QF but don't have the identification module, I would store a token indicating the last page came through, then use workflow to remove the original and the last page of the newly stored document.
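To make the moving parts concrete, here is a sketch of that cleanup logic simulated against an in-memory stand-in for the repository. The real lookups and deletions would be Workflow activities or repository API calls, so none of the names below are actual Laserfiche functions:

# The marker phrase from the pre-OCR'd end-of-document page.
MARKER = "ZZZ123ZZZENDOFDOCUMENTZZZ123ZZZ"

# Stand-in repository: document id -> page text and metadata fields.
repo = {
    101: {"text": "original scanned document", "fields": {}},
    # The separated marker page QF stored away, carrying the
    # original document's ID in a metadata field.
    202: {"text": MARKER, "fields": {"Original Document ID": 101}},
}

def cleanup_after_qf(repo):
    # A stored marker page proves the document it ended made it all
    # the way through Quick Fields, so its original is safe to delete.
    marker_ids = [i for i, d in repo.items() if MARKER in d["text"]]
    for marker_id in marker_ids:
        original_id = repo[marker_id]["fields"]["Original Document ID"]
        repo.pop(original_id, None)  # delete the original
        repo.pop(marker_id)          # delete the separated marker page

cleanup_after_qf(repo)
print(repo)  # {} -- both the original and the marker page are gone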

 

This method is also great for alerting your users to problems with QF not processing: use a scheduled workflow to look for items that are still waiting for QF to finish and whose modified time is older than some threshold.
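As a rough illustration, that scheduled check could look like the following if your pending documents sit in a watch folder on disk (the folder path and threshold are assumptions; a real implementation would query the repository from a scheduled workflow):

import os
import time

WATCH = r"C:\QFInbox"   # hypothetical folder where items wait for QF
MAX_AGE_HOURS = 4       # the "X amount of time" threshold

cutoff = time.time() - MAX_AGE_HOURS * 3600
stale = [e.path for e in os.scandir(WATCH)
         if e.is_file() and e.stat().st_mtime < cutoff]

if stale:
    # Hook your real alerting (email, notification, etc.) in here.
    print("Quick Fields may be stuck; stale items:")
    for path in stale:
        print(" ", path)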

replied on March 5, 2015

Hello Pete, not sure if you've figured out your issue, but I have run into similar problems. In this case the LF Server and QF/Agent were all on the same box, and Agent ran several QF sessions non-stop. There was one particular session that would successfully delete the originals, but the processed files would be nowhere to be found.

After going in circles trying to figure it out, we finally saw patterns in the logs indicating the session was becoming non-responsive. There is a setting in Agent for a timeout interval, which was set at 5 minutes. So when the session had been non-responsive for 5 minutes, Agent killed it and moved on to the next session in the queue, and in this case that meant losing the processed docs after the originals had already been deleted. Increasing the timeout period resolved our issue. Not sure why that particular session was taking so long to process, but the client has since upgraded hardware and everything runs great now.

replied on January 20, 2015

Chris - thank you for the detailed information.  We have plenty of disk space, but we *do* run antivirus on the Quick Fields server.  Not sure we can avoid that.

replied on January 20, 2015

Are these documents coming from a location that has already been virus-checked (i.e., when they are put into Laserfiche)?

 

You might make an exception for *.tif images in those Quick Fields temp directories. 

replied on January 20, 2015

Yes - documents are first scanned to our main Laserfiche server, and then picked up for processing by a second server that just runs Quick Fields.  So in effect, a document is picked up from server A, moved to server B for Quick Fields processing, and then stored back to server A.
