Question

Search Catalog Failed to Start - SWAP Tag Corrupted

asked on May 7, 2014

Hello..... 

Around 3:00 a.m. this morning, we experienced issues with a storage cluster.  Several VMs, including the Laserfiche server, crashed.  After resolving the storage cluster issues, Laserfiche was restarted, but the Full Text search engine for one of the repositories will not start.  The following nastygrams were sent to me by the folks in Systems…

 

Log Name:      Application

Source:        LFFTS

Date:          5/7/2014 1:14:19 PM

Event ID:      32896

Description: The search catalog TAMUCC failed to start.

 

Log Name:      Application

Source:        LFFTS

Date:          5/7/2014 1:14:19 PM

Event ID:      33179

Description: SWAP tag corrupted.

 

If I try to view Indexing Properties in the Admin Console, I get:

 

Details:

Error Code: 9493

Error Message: The search catalog failed to start. The catalog settings are mis-configured. [9493]

 

------------ Technical Details ------------

 

LFSO:

    Additional Details:

        HRESULT: 0xc0042515 (LFSession::ProcessResponse, LFSession.cpp:3753)

         (LFSO/9.0.3.777)

LFAdmin.dll (9.0.3.798):

    Call Stack: (Current)

        CIndex2PropPage::DisplayNoiseList

    Call History:

        CIndex1PropPage::SetupCatalog

        CIndex2PropPage::DisplayNoiseList

 

 

  1. Will rebuilding the index solve this problem? 
  2. Are there any other procedures that must be performed to get this engine going again?
  3. What is a SWAP tag, and what does it do?

 

The other 2 repository search catalogs started without problems. 

 

All guidance appreciated.

 

Thank you.

 

Dennis


Answer

SELECTED ANSWER
replied on May 7, 2014

Back up the registry before making any changes.

 

Stop the Laserfiche Full Text Search and Indexing service. Then go into the registry to HKEY_LOCAL_MACHINE\SOFTWARE\Laserfiche\LFFTS\Database and delete the registry key that represents the search catalog for the repository that has the issue; you can tell which one it is by looking at its Name string. Once the key has been deleted, restart the Laserfiche Full Text Search and Indexing service. Before creating a new catalog, make sure that you delete the index files for the old catalog; these are normally in the SEARCH folder under the repository path.
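In case Systems wants to script the identification step, here is a minimal sketch, assuming Python 3 run as administrator on the LFFTS server. The registry path and the Name string come from the steps above; the short service name "LFFTS" passed to sc and the exact value layout under each catalog subkey are assumptions to verify, and the actual key deletion is deliberately left to reg export / reg delete so nothing is removed without a backup.

```python
# Sketch: list the LFFTS catalog registry keys so the one for the broken
# repository can be identified before it is backed up and deleted by hand.
# Assumptions: run as administrator; the "LFFTS" service name and the
# per-subkey value layout should be verified on the server first.
import subprocess
import winreg

LFFTS_DB_PATH = r"SOFTWARE\Laserfiche\LFFTS\Database"

def list_catalog_keys():
    """Print each catalog subkey and its Name value, if one exists."""
    # KEY_WOW64_64KEY ensures a 32-bit Python still reads the 64-bit hive.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LFFTS_DB_PATH, 0,
                        winreg.KEY_READ | winreg.KEY_WOW64_64KEY) as db:
        i = 0
        while True:
            try:
                sub = winreg.EnumKey(db, i)
            except OSError:
                break  # no more subkeys
            with winreg.OpenKey(db, sub) as key:
                try:
                    name, _ = winreg.QueryValueEx(key, "Name")
                except FileNotFoundError:
                    name = "<no Name value>"
            print(f"{sub}: {name}")
            i += 1

if __name__ == "__main__":
    # Stop the Full Text Search and Indexing service before touching the keys.
    subprocess.run(["sc", "stop", "LFFTS"], check=False)
    list_catalog_keys()
    # Back up and delete the matching key manually (reg export / reg delete),
    # remove the old index files from the SEARCH folder under the repository
    # path, then restart the service:
    # subprocess.run(["sc", "start", "LFFTS"], check=False)
```

Printing rather than deleting keeps the registry backup step first; after the key is removed and the old index files are cleaned up, the catalog can be recreated from the Administration Console.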


Replies

replied on May 7, 2014

I would suggest deleting the search catalog and creating a new one. If you can't access the index properties in the Laserfiche Administration Console to do this, then you would need to modify the registry to manually delete the catalog and then you can recreate it from the Administration Console. Please contact your Laserfiche reseller for assistance with this.

 

The error message basically means that the search catalog was corrupted as a result of the pagefile error that most likely resulted from the cluster issue you mentioned.

replied on May 7, 2014

Hi Alexander....

Thanks for the response.

I will pass your suggestion along to the Systems group....but contacting my Laserfiche reseller....well we no longer have one.....now we call Laserfiche personnel....

I used to call Andrew Kamar there at Laserfiche but I found out a couple of days ago that he left and we have a new contact....a Mr. Ken Meissner.... 

 

Tomorrow when I get back to work (I'm working from home right now) I'll give him a call.

 

Regards.

Dennis 

replied on May 7, 2014

Thank you Alexander....I have passed this information on to Systems.

 

Regards.

 

Dennis

replied on May 9, 2014

Hi Alexander....

I was able to delete the corrupted catalog using the Admin Console interface, so we didn't have to use the manual method you provided.  We were able to rebuild the catalog, and full-text search is available again for that repository.

 

I was surprised at how little time it took to re-index the documents....I thought it might take a couple of days...but this took about 4 hours to complete....and this was in the middle of the day...

That repository contains 

  1.  676,788 child folders
  2.  2,813,055 documents
  3.  5,303,648 images (1687.62 GB)
  4.  1,156,560 text files (6.63 GB)
  5.  1,529,837 electronic files (180.09 GB)

 

I know a lot depends on system load, network traffic, server power...etc....but is this about the "average" amount of time it takes to re-index a repository of this size?

 

Thanks again for your help.

 

Dennis

replied on May 12, 2014

Hi Dennis, the indexing speed also depends on the types of files:

  • For text files, indexing is fast. In our tests, it takes about 5 minutes to index 2 GB of text files.
  • For electronic files:

          For those that already have searchable text generated, LFFTS does not index the electronic file itself; it indexes the stored searchable text instead.
          For extensions for which text extraction is supported, LFFTS extracts the searchable text temporarily, indexes it, and then deletes the generated text. (Extracting text from electronic documents is time-consuming and depends on the installed iFilters; in our 9.0 tests, it takes about 30 minutes to index 1 GB of PDF documents and 15 minutes to index 1 GB of Word documents.)
          For extensions for which extraction is NOT supported, LFFTS does not index them.
          
So it is quite plausible that your repository finished indexing in about 4 hours; a rough back-of-envelope check against your numbers is sketched below.
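For illustration only, here is a rough estimate that applies the test rates quoted above to the sizes Dennis reported. The rates, and the assumption about how much of the electronic-file volume still needs text extraction, are the only inputs; real throughput also depends on hardware, load, and the iFilters in use.

```python
# Back-of-envelope estimate only: applies the quoted test rates to Dennis's
# reported sizes. How much of the 180 GB of electronic files still needs
# extraction is unknown, so both extremes are shown.
TEXT_GB = 6.63    # reported size of text files
EDOC_GB = 180.09  # reported size of electronic files

TEXT_MIN_PER_GB = 5 / 2      # ~5 minutes per 2 GB of plain text
EDOC_MIN_PER_GB = (15, 30)   # ~15 (Word) to ~30 (PDF) minutes per GB
                             # when text must be extracted

text_minutes = TEXT_GB * TEXT_MIN_PER_GB
edoc_hours_low = EDOC_GB * EDOC_MIN_PER_GB[0] / 60
edoc_hours_high = EDOC_GB * EDOC_MIN_PER_GB[1] / 60

print(f"Text files:                                ~{text_minutes:.0f} minutes")
print(f"Electronic files if ALL needed extraction: "
      f"~{edoc_hours_low:.0f}-{edoc_hours_high:.0f} hours")
```

On those rates the text files alone account for well under half an hour, while 180 GB of electronic files would take tens of hours if every one of them needed extraction. Finishing in about 4 hours therefore suggests that most of the electronic files already had searchable text stored, as described in the first sub-point above.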
