
Question

Laserfiche Client version 11 - General Database [Error 9008] while Searching

asked on December 4, 2021

We are facing a General Database Error [9008] on different computers while searching.
We are using Laserfiche version 11, and the database is SQL Server (SQL 2019 CU14 Standard).
The search criterion is a field search on the EmpID field, and the template fields are already set to indexed.


Additionally, whenever we reset the Client settings, searching works for the first 5-6 queries; after that, every query starts giving Error [9008]. The detailed error is as follows:

Error Code: 9008
Error Message: Error executing SQL command.
General database error. [9008]

------------ Technical Details: ------------

LF.exe (11.0.2103.223):
    Call History:
        CSearchView::OnSearchDone
        IdleTimerProc
        CFolderListCtrl::Refresh
         CFolderListCtrl::LoadContentsHelper
          CTocCacheWorkerThread::SetDatabase
        CSelectionProperties::GetFlags
        IdleTimerProc
        IsRepositoryLoggedIn

Laserfiche Client:
Error Code: 9008
Error Message: Error executing SQL command.
General database error. [9008]

------------ Technical Details: ------------

LF.exe (11.0.2103.223):
    Call Stack: (Current)
        CSearchView::RunSearch
        CSearchView::BeginSearch
    Call History:
          CLFClientAutomation::ExecuteAutomationCommand (GetWindowInfo)
           GetRepositoryProperties
            GetOptionString ([Settings]CloudCustomer-kcc)
          CLFClientAutomation::ExecuteAutomationCommand (GetInstanceInfo)
          CLFClientAutomation::ExecuteAutomationCommand (GetWindows)
          CLFClientAutomation::ExecuteAutomationCommand (GetWindowInfo)
           GetRepositoryProperties
            GetOptionString ([Settings]CloudCustomer-kcc)

1 0

Replies

replied on January 5

I came across this thread after having the same issue for a client of ours. I did an in-place upgrade for them from 10.4.3 to 11 and users started experiencing the general database error. After reading this thread I was reluctant to enable the LCE, so I asked some colleagues if they have enabled it before. Luckily enough, they ran into this issue and the following solved it for them and for my client. So the credit goes to them. 

 

When you do an in-place upgrade, it seems that the database driver doesn't update in the Administration Console. I'm not sure if this happens every time or just to us lucky ones. 

 

If you use the change database connection option in the Administration Console and just re-pick your database settings, it should update the driver:
Now it's showing driver 17 and the users are no longer getting the general database error. 

 

Hope this helps the next guy! 
 

8 0
replied on January 12

Thanks Shaun-Deprez; we made the suggested changes to all our instances. A few users are no longer facing this issue, but we will monitor for some more time in case it arises again.

0 0
replied on January 12

@████████, I spoke too soon. Even with that change, it appears that the issue has come back for some users, even myself. There must be more to it....

3 0
replied on January 12

Thanks @Shaun Deprez; we will continue to monitor and get help from LF. As of now we still haven't received any new suggestions or a solution from LF support on this.

0 0
replied on July 31

I'm just recapping what we've done in case anyone wants to compare notes:

Upgrading the ODBC driver didn't help us at all. We are running Laserfiche 11.02 build 338 on Windows Server 2022 and SQL Server 2019 on Windows Server 2019.

Enabling Legacy Cardinality Estimation might have helped a little but didn't make the problem go away.

It does not seem to be a hardware issue as neither processors nor RAM are ever maxed out.
We tried indexing some fields in our DEV environment but that didn't seem to help either.

I will admit that in our case the repository that throws this error the most often has a very large number of metadata fields and a large number of documents. If any repository of ours was going to have a problem like this then it would be this one.

We tried carefully defining the column profiles based on templates so users don't search for metadata that isn't present in certain documents. Seemed to help a little at first but now not so much.

The errors come and go. The repository can also be searched via WebLink. There are a lot of wildcard searches performed via WebLink and I don't doubt that the desktop user searches might be timing out while the WebLink searches hog resources but I can't prove that. Again, strictly from a hardware point of view, the server never seems to be overtaxed.

We've tried doing SQL profiling to see if that would reveal anything useful. We identified one search in particular that was running for a long time and sent it to LFC Support to examine. We've had a ticket open with them on this for over a year.

Our DBAs do have jobs that do indexing maintenance on a weekly basis. They follow the recommendations of a well-known SQL guru, but the name escapes me at the moment.

That's all I can recall off the top of my head.

 

-MB

5 0
replied on July 31

Thanks for the update. The SQL guru's name is likely Brent Ozar.

This is what stood out for me in that reply:
"We've had a ticket open with them on this for over a year about this."

^this is not OK

4 0
replied on July 31

Thank you for the update, Michael. I second everything you have said. We did what we could to narrow down the reason for this error too, but no luck. It mostly happens sporadically, but if we search a large amount of data with a lot of columns, the error is almost guaranteed. We have had to build column profiles that show the least amount of data just to get a result from a big search. But again, it's not a solution, it's a workaround. Having the "Total Document Size" column will guarantee the error, so we have learned not to include that column in any searches.

We also have a ticket with our service provider about this, and they too are waiting for news from Laserfiche. The last time we spoke with our provider, they said they had asked a senior Laserfiche manager about it but didn't get a proper or tangible answer. So we don't know what else to do; we'll just keep using the workaround for now as usual. It seems Laserfiche is too busy to investigate this issue properly and fix it.

It's really not okay; this thread has been going for almost 3 years now, and so many users are having the same issue.

We didn't have this issue when we were using 10.2; it all started after we updated to Laserfiche 11. If not sooner, I'm hoping it will be fixed in Laserfiche 12.

 

Thanks!

2 0
replied on September 3

I haven't seen any new comments on this thread from anyone. Wanted to check in and see if anyone has resolved it?

3 0
replied on September 3

I'm still seeing it intermittently when navigating around folders in the Client.

1 0
replied on September 6

When we open support cases, Laserfiche tells us not to increase the timeout but to increase the hardware speed. But hardware upgrades can be expensive and time-consuming, whereas waiting a little longer than 15 seconds is neither. We increase the timeout if the customer wants to be able to run the searches, and we recommend upgrading the hardware.

Reference case: 222080

0 0
replied on September 6

For proper context, see https://answers.laserfiche.com/questions/216556/What-would-be-performing-this-query-from-the-repository#216561

Typically, for issues related to SQL query timeouts, we first recommend performing database maintenance to see if that may help. One example we commonly suggest to customers is example B from https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html

If the query timeout still occurs, then the next suggestion is to examine which columns the affected user has enabled. Perhaps one or more of the enabled columns taxes the SQL Server more than the others, and it can be disabled from the listing if it isn't required.

If the issue still occurs after doing this, then a third suggestion is to increase the query timeout specifically for getting entry listings. In the Laserfiche repository database, you can insert the following option into the dboptions table

EntryListingQueryTimeout

and give it a value of 30 to start. Restart the Laserfiche Server service and see if that additional time is enough for the query to complete.

Note that a high timeout value may degrade the performance of other sessions and in the worst case it could cause the server to become unresponsive if it's set too high and there's a large number of very slow listing queries that the server starts to run. 30 is a safe starting point if going this route.
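Based on the two-column insert a later reply in this thread used for this option, the statement can be sketched as follows (the repository database name "LFRepository" is a placeholder; verify the dboptions layout in your environment before running it):

```sql
-- Sketch, assuming the two-column dboptions layout shown later in this thread;
-- 'LFRepository' is a placeholder for your repository database name.
USE LFRepository;
INSERT INTO dbo.dboptions
SELECT 'EntryListingQueryTimeout', 30;
```

Restart the Laserfiche Server service afterwards so the option is picked up.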

0 0
replied on September 6

Alex, I am curious why this error only started appearing in version 11. I have done a few different upgrades for customers from 10 to 11, and it has shown up in the logs after the upgrade. Is Laserfiche Server recording events differently than it used to? If so, that still doesn't explain why users were not experiencing the error when searching in version 10. Is there any light you can shed on that?

0 0
replied on September 6

So having a lower timeout value actually halts the query when the time is reached? With most timeouts, the query keeps running on the SQL server, but the client stops waiting for it, so performance is not impacted by the client simply waiting longer for the data.

0 0
replied on October 23, 2023

Adding this here in case anyone else is seeing the error for user searches in an existing repository with a large dataset.

The Laserfiche team recommended the steps below for further performance tuning:

 

  • Remove all columns from the Search View and test to see if error still occurs
  • Rebuild (not recreate) all indexes
  • Update Statistics
  • Shrink transaction log
  • Examples from this link can be used to rebuild indexes and update statistics - https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
  • Laserfiche recommended applying Example B from that page (towards the bottom; copied below for reference):

EXECUTE dbo.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'
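For reference, the individual maintenance steps in the list above can also be sketched in native T-SQL. This is a sketch, not the Laserfiche-recommended script: dbo.propval is a table name taken from the query dumps elsewhere in this thread, and the logical log file name is a placeholder.

```sql
-- Manual equivalents of the maintenance steps above (sketch only).
-- dbo.propval appears in the query dumps in this thread; the logical log
-- file name 'LFRepository_log' is a placeholder for your repository's.
ALTER INDEX ALL ON dbo.propval REBUILD;        -- rebuild (not drop/recreate)
UPDATE STATISTICS dbo.propval WITH FULLSCAN;   -- refresh statistics
DBCC SHRINKFILE (N'LFRepository_log', 1024);   -- shrink transaction log to ~1 GB
```

In practice the Ola Hallengren IndexOptimize call above is preferable, since it applies fragmentation thresholds per index instead of rebuilding everything.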

2 0
replied on November 8

Hello,

Did this seem to resolve the issue for you?

0 0
replied on December 6, 2021

Please post back if you get a resolution to the issue. 

We are seeing this as well. The workaround for us is to limit the search, for example by limiting the depth of folders and not searching text. If the search criteria are as narrow as possible, it won't error out.

1 0
replied on December 6, 2021

Hi Shawn,

I'm finding this issue sometimes also.

I hope Laserfiche can find a solution.

 

0 0
replied on December 20, 2021

I resolved this issue for one of our users by wiping out the account attributes. Once done, the search error went away.

1 0
replied on December 20, 2021

If that is using tools-options-reset in the client then it will likely just come back.  I reset my client last Thursday and searches began working again and were fast.  However, it is Monday today and the error is back. 

Another odd thing is I will get the error logging in to 2 different machines.   My laptop is one and the other is a virtual machine.  I use the same login credentials in both cases.  Once I reset the client on my laptop searches begin working for both the laptop and the VM. 

This only started happening to me once I moved to Laserfiche 11 client.  The server has also been upgraded as well. 

0 0
replied on December 20, 2021

I took it as LF Admin Console, user properties, attributes. It is also per Repository.

I tested this by removing all attributes from one of my users and the searches worked. Now it is going through them all to try and figure out which of them are the cause.

Edit:  You can view just the attributes related to Search from a drop down list.  I also had a weird problem with OCR on dragging and dropping files into LF that was fixed by removing the attributes for OCR.  Didn't even think about it until the Search removal worked and I saw OCR listed.  Both errors came up after updating to version 11.

0 0
replied on December 22, 2021

I'm that user ;)

You are correct - in the admin console. 

So far it is still working well.

But, when I restore my exported attributes, it still has the problem. So I removed the Search attributes and that ... didn't fix it!

By chance, I then removed the XmlColumnDisplay attributes next and it DID fix it!

For me, as a user sample as one, I narrowed it down to the attribute in

XmlColumnDisplay: 

Name: Data

Data: MULTI-ATTR:2

I've now restored that attribute twice and it causes the search to fail, while removing it fixes search!

Steve

XmlColumDisplayAttribute.jpg
2 0
replied on November 7

Hey Steve,

Has this continued to be the solution for you guys? Just curious after a few years if that seems to still have been the long-term solution. I've run into this enough times now and still don't have a concrete solution, but this one appears to be working at the moment.

0 0
replied on November 8

It does seem to help most of the time. When it doesn't I remove everything from the attributes.

Whether I'd call it a solution? I'm curious if it'll still be a problem in 12.

0 0
replied on August 19, 2022

I am having the same issue. Has anyone found a solution?

1 0
replied on October 12, 2023

What is the final resolution for this? We tried deleting the attribute but it's still happening. This started after we upgraded to V11.

1 0
replied on October 12, 2023

We opened multiple cases on this. When you get this error you must continue to narrow your search. After a lot of testing we found for most templates, on an average SQL server, if the result count is under 50k you can usually avoid the error.

If you want to sort by a column, the result count should be less than 10k.

3 0
replied on October 12, 2023

Thanks @████████, Curious to know what LF team suggested as a solution for this. Some users were reporting this error while browsing folders as well - not just searching.

1 0
replied on October 12, 2023

Not sure about the browsing timeout, but for searching they recommended narrowing what the search returns (fewer columns, fewer entries, etc.).

If your SQL Server does not have enough memory, it can seriously impact how long certain queries take to complete. Determining how much memory it needs for a given database remains something of a mystery based on numerous articles and discussions, but you can match memory to disk usage as a baseline.

E.g., if your database files are 10 GB, allow SQL Server access to 10 GB of dedicated memory.

Database index maintenance is also something you must do manually; the service does not do it for you.
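As a sketch of that memory baseline, the server-level cap can be set with sp_configure. The 10 GB figure is just the example from above; adjust for your server and leave headroom for the OS and other services.

```sql
-- Cap SQL Server's memory at ~10 GB (value is in MB); example figure only,
-- matching the 10 GB database size used as a baseline above.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 10240;
RECONFIGURE;
```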

0 0
replied on October 12, 2023

We had this issue with slowness both in traversing the tree structure and in Search. When we removed most of the columns it would be fast again. We added columns back to the Search display and found there was an issue with one column that caused the slowness. Not sure why, but removing it from the display solved it. Can you try that and post back?

0 0
replied on October 12, 2023

Some of our Laserfiche users are getting the 9008 error message while browsing or searching in Laserfiche.

Please advise

  • what is the cause of this issue?
  • how can we fix this issue?
1 0
replied on October 12, 2023

We still get this error occasionally. Our users have been trained to remove columns and then add them back after results are returned. Pretty lame decrease in functionality. Other times, if the error pops up and I wait, everything loads eventually and the error pop up disappears. So inconsistency there. 

3 0
replied on October 23, 2023

One new development is that on the hour we run multiple PowerShell scripts that absolutely kill the CPU usage. Not RAM, only CPU. These scripts are importing quite a bit into a SQL table. That seems to have been a contributing factor here. After moving the scripts off of the SQL server we have seen less of the above database error message. So we were crushing the SQL server. Whoopsey. 

 

1 0
replied on March 22

Another thought...

I can, more often than not, reproduce the database error with a specific column showing in the web client (as LF suggested, by narrowing the columns down to find the offending one). The SQL server identifies the statement below as the offending query (similar to the one @Oncur posted). But my query is different in that the offending column appears as an actual explicit join in the SQL statement (pvj27.prop_id=62 is the offending column). Why would the SQL statement get an explicit join for that column when the other columns do not? It's just a straight text field with nothing special in it.

select id,link,type,flags,sdctl,rfid,parentid,toc_uuid,toc_owner,isFrozen,recordState,hasacl,c0,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17,c18,c19,c20,c21,c22,c23,c24,c25,c26,c27 from (select toc.tocid as id,toc.linkto as link,toc.etype as type,toc.toc_flags as flags,toc.toc_sdctl as sdctl,toc.rfid as rfid,toc.parentid as parentid, toc.toc_uuid as toc_uuid, toc.toc_owner toc_owner,case when toc.tocid in (select tocid from frozen) then 1 else 0 end as isFrozen,case when toc.rfid is null then 0 when toc.isDispositioned = 1 then 4 when toc.iscutoff = 1 then 3 when toc.isclosed = 1 then 2 else 1 end as recordState,case when toc.toc_flags & 32 = 32 then 1 else 0 end as hasacl,case when linktoc.etype>=0 then 1 else 0 end as isfolder,toc.name as c0,toc.created as c1,toc.modified as c2,case linktoc.etype when -2 then linktoc.pagecount else null end as c3,linktoc.pset_id as c4,toc.tocid as c5,0 as c6,case when linktoc.etype = 0 and linktoc.rfid = linktoc.tocid then 320 when linktoc.etype = 0 and (linktoc.toc_flags &  1) = 1 then 257 when linktoc.etype = 0 then 256 when linktoc.etype = 1 then 384 when ((linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '')) and (linktoc.toc_flags &  1) = 1 then 17 when (linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '') then 16 when (linktoc.toc_flags &  1) = 1 then 1 else 0 end as c7,linktoc.vol_id as c8,linktoc.pset_id as c9,cast(null as varbinary(85)) as c10,toc.etype as c11,toc.linkto as c12,linktoc.etype as c13,linktoc.rfid as c14,case when exists (select * from frozen where frozen.tocid = linktoc.tocid) then 1 else 0 end as c15,linktoc.isdispositioned as c16,linktoc.pset_id as c17,toc.parentid as c18,linktoc.edoc_ext as c19,case when linktoc.edoc_storeid is null then null else case when linktoc.edoc_size is null then 0 else linktoc.edoc_size end end as c20,case when exists (select * from 
doc where doc.tocid = linktoc.tocid and doc.img_etag is not null and doc.pagenum >= 0) then 1 else 0 end as c21,case when (linktoc.toc_flags &  512) = 512 then 1 else 0 end as c22,case when (linktoc.toc_flags & 512 = 512 and linktoc.toc_flags & 1024 = 0) then (select top 1 ad.who from active_doc ad where ad.tocid = linktoc.tocid) else cast(null as varbinary(85)) end as c23,plock.owner  as c24,case when exists (select * from doc where doc.tocid = linktoc.tocid and doc.txt_etag is not null and doc.pagenum >= 0) then 1 else 0 end as c25,toc.name as c26,pvj27.str_val as c27, l.row_num from toc with (nolock) right outer join #tmplisting l on toc.tocid=l.tocid  left outer join toc linktoc   with (nolock)  on case when toc.etype = -1 then toc.linkto else toc.tocid end = linktoc.tocid left outer join (select plock.*, ROW_NUMBER() OVER (PARTITION BY tocid ORDER BY owner) AS rn from plock   with (nolock) ) plock on linktoc.tocid=plock.tocid and plock.rn = 1 left outer join propval pvj27 on pvj27.tocid=linktoc.tocid and pvj27.prop_id=62 and pvj27.pos=0 and (pvj27.pos2 is null or pvj27.pos2<=1) where ((toc.etype is null or ((toc.etype=-2 or toc.etype>=0) or (toc.etype = -1 and (linktoc.etype=-2 or linktoc.etype>=0)))))) as q1  where q1.row_num>=@P1 and q1.row_num<=@P2 order by q1.row_num

 

1 0
replied on May 31

Has anyone made any progress with this issue?

1 0
replied on May 31

We resolved it with this Setting: Legacy Cardinality Estimation=ON

We also increased Query Timeout value with below Insert:

insert LFWipRepository.dbo.dboptions
select 'EntryListingQueryTimeout', 90

We are using SQL Server 2022, but the setting can be used on lower versions of SQL Server as well.

 

2 0
replied on June 3

These didn't make a difference for me. I've started doing some queries directly in the database.

0 0
replied on June 11

This made a difference for us. We did set Legacy Cardinality Estimation back to off though as other things were happening. But the entrylistingquerytimeout setting appears to work well to not have Laserfiche give up on the SQL server so easily.

0 0
replied on July 2

Hey @████████ - I've been seeing this issue lately too, just while browsing around in folders via the Windows Client. I can often get the same folder to load shortly thereafter, so it's not the specific folder or the files in it. Event Viewer shows ODBC Driver 17, so it's not the other issue mentioned here about still being on driver 11.

Commenting here in case anyone responds with better ideas.

1 0
replied on September 3

I know upthread some people have said that particular column headings cause the issue in some cases. (I know headings that involve calculations, such as 'Total Document File Size', can cause it.) In our case it is our 'Fiscal Year' field, which is part of our Invoice template. I was wondering if there is anything peculiar about it that might cause the issue. Here is how the field is set up:

It's a "List" field which means it appears in a drop down box when doing searches. The 'Blank' option is checked. The field is used in four different templates.

One thing I noticed that is probably 'bad form' is that we left the Width of the field at 40 when it obviously doesn't need to be that long.

The other thing I noticed is that all the values are very similar to each other: They are always 201X or 202X where X is the last digit in the year. Could that matter?

 



The error pops up when browsing the repository but sometimes it takes a couple of tries. Even when the error doesn't pop up, searching feels very slow. If I remove the 'Fiscal Year' column then suddenly all searches are lightning fast. Almost instantaneous.

Document counts by template:

BOA Claim - 1330

Invoice - 43,480

Legal - 4460

Travel - 1397

So, that's a total of 50,667 documents. Not an astronomical number for the documents that use these four templates.

Does anybody see any commonalities between the fields that throw the error for them and our 'Fiscal Year' field here?

1 0
replied on December 6, 2021

Please open a support case.

1 2
replied on December 9, 2021

Here as well. If there is a solution, please post it!

My server is LF 11 and I've tried both the 11 and 10.4 clients. At the bottom of the error in the event log on the server (after all the query text) I see the timeout is set to 15 seconds?

...order by q1.row_num"; Timeout=15(sec).

I know I'm doing a query that might take a little time. I'm asking the search engine to traverse 636 folders and inspect 14,482 documents with 179,620 pages and return to me any that are over a certain size (or even worse, greater than nn and less than nnn!). 

But I'd rather not break it up into smaller chunks of search. I'm ok with it taking a minute or two.

Steve

0 0
replied on January 31, 2022

Does anyone know what the 9008 error indicates? Removing all attributes on the user only makes the error go away temporarily, by reducing the amount of data displayed on the screen (for example, extra columns).

I cannot find anyone who knows what this error indicates. Is it a SQL timeout?

0 0
replied on January 31, 2022

The details are recorded in the LFS event log. It is most often a timeout, but this is the generic error for any unexpected SQL failure.

0 0
replied on February 1, 2022

Ah, OK, so it is a timeout in my case; that explains a lot. The timeouts for large searches are very short, around 15 seconds. So SQL is getting the job done, but the app is impatient.

0 0
replied on June 21, 2022

Is there a setting for us to change the timeout period and forego some of these generic errors? thanks!

0 0
replied on June 21, 2022

From what we were told, no; the search results usually need to be limited to 50k entries on most SQL servers (not sure if some hardware tech exists out there that gives you more).

Also, you can only sort on columns with results under 10k entries.

0 0
replied on June 21, 2022

very useful info, thanks!

 

0 0

replied on August 19, 2022

In our case it was the Notes column... Removed that and it was fast again

0 0
replied on May 2, 2023

It's 2023 and we're seeing the same thing. Did anyone try upgrading to SQL Server 2022? Did that improve things?

0 0
replied on May 3, 2023

We migrated last December to all the newest server options, including SQL Server 2022. Unfortunately this has become a training point for users. But even when there are only a few small columns showing, it still happens.
 

When that happens the message becomes, "wait a minute and hit refresh".

Lame. 

2 0
replied on May 3, 2023

I mean, I'm tempted to just throw more processors at it but if it's a limitation of the software itself then that's not going to solve anything.

If we weren't running VMs then I would say maybe we could try faster processors but that's not an option for us. 

0 0
replied on May 3, 2023

From what I understand, you would want to throw the processing power at the SQL server and maybe some at the LFS server. You can't ever have too much SQL power, right?

 

1 0
replied on May 3, 2023

It's only money... We upped the RAM on our SQL server to 128 GB but we currently only have 8 cores total configured as 4 sockets with 2 cores each.

0 0
replied on May 3, 2023

Has anyone tried turning on Legacy Cardinality Estimation to see if that helps?

1 0
replied on May 3, 2023

I will suggest it to our DBAs. Although, is this effectively downgrading the version of the database? If Laserfiche has already upgraded the database to the latest version, is that going to cause problems?

0 0
replied on May 3, 2023

Not exactly. It is changing the estimator to use an older version, but the rest of SQL Server remains the same. In our case, we had slowness issues within the Windows Client showing folder results when certain columns were shown. Turning on LCE resolved our slowness issues and allowed us to upgrade to version 11 while we continue to look into the cause of the slowness.
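For reference, the database-scoped form of this setting (available in SQL Server 2016 and later) can be sketched as follows; "LFRepository" is a placeholder for the repository database name, and the same toggle is also exposed in SSMS under the database's Options page.

```sql
-- Turn on the legacy cardinality estimator for the repository database only;
-- 'LFRepository' is a placeholder name. The rest of SQL Server is unaffected.
USE LFRepository;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```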

2 0
replied on May 4, 2023

Okay, I did a cursory bit of googling and this seems like a good explanation:
 

https://bobcares.com/blog/sql-server-use-legacy-cardinality-estimation-on-specific-query/

 

The Legacy Cardinality Estimator in SQL Server is a result of histograms from indexes or statistics. In other words, SQL Server relies on constraint information as well as logical rewrites of queries to nail down cardinality.

The Cardinality Estimator is responsible for predicting the number of rows a query will return. Furthermore, it also determines the memory allocation of the query.

 

This makes sense. I could certainly see the error being the result of SQL not allocating enough memory to complete the query, especially if it's returning a large number of rows.

Unfortunately...

 

However, enabling Legacy Cardinality Estimation affects all queries. In some cases, it may also damage the query performance.

 

So turning it on for the entire repository database might not be a good idea. Apparently it's possible to set it on a query-by-query basis:

 

Adding FORCE_LEGACY_CARDINALITY_ESTIMATION to the query forces the query optimizer to use an older version of Cardinality Estimation.

 

But there's no way to do that in this instance, as the actual query in question is generated by the Laserfiche Desktop Client. So it seems to me it would have to be forced in the code.
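For reference, the per-query form looks like the sketch below. It is illustrative only, since the hint must be appended to the statement itself, which isn't possible when the client generates the SQL; dbo.toc and its columns are taken from the query dumps in this thread.

```sql
-- Illustrative only: forces the legacy estimator for this one statement.
-- dbo.toc / tocid / name come from the query dumps posted in this thread.
SELECT tocid, name
FROM dbo.toc
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```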

So for which database(s) did you turn it on? Did you see any other queries being negatively affected?

1 0
replied on May 4, 2023

We turned it on for the repository database only. We did not see any degradation of performance with other queries. 

2 0
replied on May 7, 2023

Hi @████████, curious whether error has recurred since turning it on for the Repo database, @████████ would your team have tried this, any news whether it helps? 

We are seeing it with LF Client 11 (11.0.2301.262) on Win 2022/SQL 2019. We have created a support case and will post here if any resolution is found.

In this case it is a timeout in the event logs, not just on searches but also on loading the repo or folders with large numbers of rows to return.

2 0
replied on May 8, 2023

We never received the error to begin with. I was only giving a suggestion. 

0 0
replied on May 8, 2023

We're sorting some other things out with our SQL server first before we try this but I will let you know if we do.

0 0
replied on May 8, 2023

Thanks for the suggestion @████████, thanks @████████, will keep an eye out. 

We have double checked with LaserFiche team and enabled it, will monitor.

1 0
replied on May 11, 2023

@████████, I am anxiously waiting to hear what you discover with your LF support ticket!

 

0 0
replied on May 11, 2023

It depends on your hardware, but for a couple of our environments, as long as we kept search results under 50k entries returned and only sorted columns with under 10k entries displayed, we could meet the timeout and prevent this error. I opened a support case, but no option was given to increase the timeout to support more entries.

1 0
replied on May 11, 2023

@████████, the LF team recommended increasing the timeout at the DB server level; it is already set to 0 (unlimited, no timeout), so we have gone back to see if there are any other suggestions.

 

Thanks @████████, we are finding the error hasn't been reported since applying Blake's suggestion to set Legacy Cardinality Estimation=ON.

Event Viewer is still logging the error with timeout=15s a few times a day; performance otherwise does not seem negatively impacted so far.

 

2 0
replied on May 12, 2023

How long have you had  Legacy Cardinality Estimation=ON?

Thanks for sharing your adventure with us. 

0 0
replied on May 12, 2023

We turned on LCE 3 months ago.

1 0
replied on February 9

This error is still ongoing. It happens when users use quick search; not all the time, but occasionally. Since we updated to LF 11, this error randomly appears. We opened a support case with our provider, but there is no tangible solution to it.

 

This is what it shows on the error log;


"Timed out when executing an SQL query.

Session ID=186968; Dialog ID=20989173; Repository="PFC"; Function=SysEntryOpenListingStreamEx; Message="[Microsoft][ODBC Driver 11 for SQL Server]Query timeout expired"; Statement="select id,link,type,flags,sdctl,rfid,parentid,toc_uuid,toc_owner,isFrozen,recordState,hasacl,c0,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17,c18,c19,c20,c21,c22,c23,c24,c25,c26,c27,c28,c29,c30,c30_count,c31,c31_count,c32,c33,c34,c35,c36,c37,c38,c39,c40,c41,c42,c43,c43_count,c44,c45,c46,c47,c47_count,c48 from (select toc.tocid as id,toc.linkto as link,toc.etype as type,toc.toc_flags as flags,toc.toc_sdctl as sdctl,toc.rfid as rfid,toc.parentid as parentid, toc.toc_uuid as toc_uuid, toc.toc_owner toc_owner,case when toc.tocid in (select tocid from frozen) then 1 else 0 end as isFrozen,case when toc.rfid is null then 0 when toc.isDispositioned = 1 then 4 when toc.iscutoff = 1 then 3 when toc.isclosed = 1 then 2 else 1 end as recordState,case when toc.toc_flags & 32 = 32 then 1 else 0 end as hasacl,case when linktoc.etype>=0 then 1 else 0 end as isfolder,toc.tocid as c0,toc.name as c1,linktoc.edoc_ext as c2,toc.parentid as c3,toc.etype as c4,case when linktoc.edoc_storeid is null then null else case when linktoc.edoc_size is null then 0 else linktoc.edoc_size end end as c5,linktoc.etype as c6,toc.etag as c7,toc.linkto as c8,case when linktoc.etype = 0 and linktoc.rfid = linktoc.tocid then 320 when linktoc.etype = 0 and (linktoc.toc_flags &  1) = 1 then 257 when linktoc.etype = 0 then 256 when linktoc.etype = 1 then 384 when ((linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '')) and (linktoc.toc_flags &  1) = 1 then 17 when (linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '') then 16 when (linktoc.toc_flags &  1) = 1 then 1 else 0 end as c9,toc.created as c10,case linktoc.etype when -2 then linktoc.pagecount else null end as c11,linktoc.vol_id 
as c12,linktoc.pset_id as c13,(select hc.hitcount from searchresult12 hc where hc.rsid = sr.rsid  and hc.tocid = toc.tocid) as c14,toc.name as c15,0 as c16,1 as c17,toc.toc_uuid as c18,plock.owner  as c19,cast(null as varbinary(85)) as c20,0 as c21,case when linktoc.final_date is not null or linktoc.isdispositioned = 0 then linktoc.final_date "

 

I really hope this will be solved one day.

0 0
replied on February 9

Have you tried updating your ODBC connection for that repository as mentioned previously in this post so it uses the ODBC 17 driver instead of 11?

0 0
replied on February 29

Hi Blake, I'm sorry for the late reply. Yes, we recently updated to ODBC Driver 17 for SQL Server. The search seems a bit faster than usual, but when we run searches that return a large number of files with many columns displayed, we still get the error. If one of the columns is "Total Document Size", the error is almost guaranteed. When we run those large searches with a minimal column display, we at least get a result.
 

0 0
replied on March 19

We had a similar event logged and I posted about it here. Laserfiche Support replied with an option to try.

What would be performing this query from the repository? - Laserfiche Answers

1 0
replied on March 14 Show version history

I upgraded our development repository yesterday from 10.3.1.479 to 11.0.2305.274 which is the latest version 11 available for download. I too am getting the General Database Error 9008. Before I start trying random things from this article, I wanted to post here and check.

 

Has there been any final resolution on this problem? 

 

And, do all of the improvements from this patch: List of Changes for Laserfiche 10.3.1 Update 2 - Knowledge Base exist in Version 11? That patch seems to have a lot of search related fixes.

 

 

0 0
replied on March 14

I have had to train users to start stripping columns of data out of the Client. In fact, we now have a saved column profile that shows no columns at all. They try that first to see if the problem persists. Almost always it does not, so the culprit is the metadata columns being displayed.

I would call this a big fat workaround, not a solution. The problem remains. Super annoying.  

1 0
replied on March 14

We tried working the Column Profiles angle, paring each department down to the minimum number of columns and setting up a column profile for that department, but the error still persists.

 

0 0
replied on March 14

Whenever you encounter "General database error. [9008]" please check the Laserfiche ContentRepository event viewer logs to get additional details about the error. It may tell you what function call encountered the issue and what the specific SQL error is.

If further assistance is needed, please reach out to your Laserfiche Solution Provider or support representative.

0 0
replied on March 19

This is the specific error message from the event viewer. 

Timed out when executing an SQL query. Session ID=7; Dialog ID=735; Repository="GAA-WIP11-TEST"; Function=SysSearchOpenListing; Message="[Microsoft][ODBC Driver 17 for SQL Server]Query timeout expired"; Statement="insert into #tmplisting (tocid, row_num) select toc.tocid as id, row_number() over (order by case when toc.etype>=0 or (toc.etype=-1 and exists (select * from toc l where l.tocid=toc.linkto and l.etype >= 0)) then 1 else 0 end,toc.created desc,toc.name desc) from toc join searchresult7 sr on toc.tocid = sr.tocid and toc.tocid<>2  left outer join toc linktoc  on case when toc.etype = -1 then toc.linkto else toc.tocid end = linktoc.tocid where sr.rsid=:rsid<int> and ((toc.etype=-2 or toc.etype>=0) or (toc.etype = -1 and (linktoc.etype=-2 or linktoc.etype>=0)))"; Timeout=15(sec).
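For anyone who wants to see this happening on the SQL Server side, a query like the following can catch the statement while it is running and show whether it is blocked or just slow. This uses only standard SQL Server dynamic management views, nothing Laserfiche-specific; the `#tmplisting` filter simply matches the temp table named in the timeout message above.

```sql
-- Run in SSMS against the repository database while a search is hanging.
-- Standard SQL Server DMVs; adjust the text filter to taste.
SELECT
    r.session_id,
    r.status,
    r.wait_type,            -- what the query is currently waiting on, if anything
    r.blocking_session_id,  -- non-zero means another session is blocking it
    r.total_elapsed_time,   -- ms elapsed so far; compare against the 15 s timeout
    t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%#tmplisting%'   -- the temp table named in the timeout message
ORDER BY r.total_elapsed_time DESC;
```

If `blocking_session_id` is zero and there is no meaningful wait type, the query is simply too slow for the 15-second timeout rather than being blocked.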

0 0
replied on March 19 Show version history

We are seeing a very similar error in the event viewer:

Log Name:      Laserfiche-ContentRepository-Service/Admin
Source:        Laserfiche-ContentRepository-Service
Date:          3/13/2024 6:00:43 AM
Event ID:      197
Task Category: None
Level:         Error
Keywords:      
User:          xxxx
Computer:      xxxx
Description:
Timed out when executing an SQL query. Session ID=329813; Dialog ID=6973241; Repository="Financials"; Function=SysSearchOpenListing; Message="[Microsoft][ODBC Driver 17 for SQL Server]Query timeout expired"; Statement="insert into #tmplisting (tocid, row_num) select toc.tocid as id, row_number() over (order by case when toc.etype>=0 or (toc.etype=-1 and exists (select * from toc l where l.tocid=toc.linkto and l.etype >= 0)) then 1 else 0 end,toc.created desc,toc.name desc) from toc join searchresult1 sr on toc.tocid = sr.tocid and toc.tocid<>2  left outer join toc linktoc  on case when toc.etype = -1 then toc.linkto else toc.tocid end = linktoc.tocid where sr.rsid=:rsid<int> and ((toc.etype=-2 or toc.etype>=0) or (toc.etype = -1 and (linktoc.etype=-2 or linktoc.etype>=0))) and ((toc.toc_flags & 1048576) = 0 or toc.tocid = 1 or not exists (select * from entry_tag et join tag t on et.tag_id = t.tag_id where t.is_secure = 1 and et.tocid <> 1 and et.tocid = toc.tocid and (toc.etype <> -1 or et.tocid = toc.linkto) and not exists (select * from sess_tag li where li.tag_id = et.tag_id and li.sess_id = :sessid1<int>))) and (toc.linkto is null or not exists (select * from entry_tag et join tag t on et.tag_id = t.tag_id where t.is_secure = 1 and et.tocid <> 1 and et.tocid = toc.linkto and not exists (select * from sess_tag li where li.tag_id = et.tag_id and li.sess_id = :sessid2<int>)))"; Timeout=15(sec).
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Laserfiche-ContentRepository-Service" Guid="{82FC23FE-4B0A-435C-82E6-01C7077098AF}" />
    <EventID>197</EventID>
    <Version>0</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x4000000000000000</Keywords>
    <TimeCreated SystemTime="2024-03-13T13:00:43.843084500Z" />
    <EventRecordID>5081692</EventRecordID>
    <Correlation />
    <Execution ProcessID="8228" ThreadID="6700" />
    <Channel>Laserfiche-ContentRepository-Service/Admin</Channel>
    <Computer>xxxx</Computer>
    <Security UserID="xxxx" />
  </System>
  <EventData>
    <Data Name="sessionID">329813</Data>
    <Data Name="dialogID">6973241</Data>
    <Data Name="repository">Financials</Data>
    <Data Name="func">SysSearchOpenListing</Data>
    <Data Name="message">[Microsoft][ODBC Driver 17 for SQL Server]Query timeout expired</Data>
    <Data Name="stmt">insert into #tmplisting (tocid, row_num) select toc.tocid as id, row_number() over (order by case when toc.etype&gt;=0 or (toc.etype=-1 and exists (select * from toc l where l.tocid=toc.linkto and l.etype &gt;= 0)) then 1 else 0 end,toc.created desc,toc.name desc) from toc join searchresult1 sr on toc.tocid = sr.tocid and toc.tocid&lt;&gt;2  left outer join toc linktoc  on case when toc.etype = -1 then toc.linkto else toc.tocid end = linktoc.tocid where sr.rsid=:rsid&lt;int&gt; and ((toc.etype=-2 or toc.etype&gt;=0) or (toc.etype = -1 and (linktoc.etype=-2 or linktoc.etype&gt;=0))) and ((toc.toc_flags &amp; 1048576) = 0 or toc.tocid = 1 or not exists (select * from entry_tag et join tag t on et.tag_id = t.tag_id where t.is_secure = 1 and et.tocid &lt;&gt; 1 and et.tocid = toc.tocid and (toc.etype &lt;&gt; -1 or et.tocid = toc.linkto) and not exists (select * from sess_tag li where li.tag_id = et.tag_id and li.sess_id = :sessid1&lt;int&gt;))) and (toc.linkto is null or not exists (select * from entry_tag et join tag t on et.tag_id = t.tag_id where t.is_secure = 1 and et.tocid &lt;&gt; 1 and et.tocid = toc.linkto and not exists (select * from sess_tag li where li.tag_id = et.tag_id and li.sess_id = :sessid2&lt;int&gt;)))</Data>
    <Data Name="timeout">15</Data>
  </EventData>
</Event>

 

1 0
replied on April 30

Hi everyone, any update on this issue?

0 0
replied on May 1

I am getting this issue as well. 

0 0
replied on July 30

Hello Team,

 

Is there any solution for the timeout issue when we use quick search?

Timed out when executing an SQL query. Session ID=519077; Dialog ID=8925961; Repository="HFRD"; Function=SysEntryOpenListingStreamEx; Message="[Microsoft][ODBC Driver 11 for SQL Server]Query timeout expired"; Statement="select id,link,type,flags,sdctl,rfid,parentid,toc_uuid,toc_owner,isFrozen,recordState,hasacl,c0,c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11,c12,c13,c14,c15,c16,c17,c18,c19,c20,c21,c22,c23,c24,c25,c26,c27,c28,c29 from (select toc.tocid as id,toc.linkto as link,toc.etype as type,toc.toc_flags as flags,toc.toc_sdctl as sdctl,toc.rfid as rfid,toc.parentid as parentid, toc.toc_uuid as toc_uuid, toc.toc_owner toc_owner,case when toc.tocid in (select tocid from frozen) then 1 else 0 end as isFrozen,case when toc.rfid is null then 0 when toc.isDispositioned = 1 then 4 when toc.iscutoff = 1 then 3 when toc.isclosed = 1 then 2 else 1 end as recordState,case when toc.toc_flags & 32 = 32 then 1 else 0 end as hasacl,case when linktoc.etype>=0 then 1 else 0 end as isfolder,toc.tocid as c0,toc.name as c1,linktoc.edoc_ext as c2,toc.parentid as c3,toc.etype as c4,case when linktoc.edoc_storeid is null then null else case when linktoc.edoc_size is null then 0 else linktoc.edoc_size end end as c5,linktoc.etype as c6,toc.etag as c7,linktoc.pset_id as c8,toc.linkto as c9,case when linktoc.etype = 0 and linktoc.rfid = linktoc.tocid then 320 when linktoc.etype = 0 and (linktoc.toc_flags &  1) = 1 then 257 when linktoc.etype = 0 then 256 when linktoc.etype = 1 then 384 when ((linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '')) and (linktoc.toc_flags &  1) = 1 then 17 when (linktoc.edoc_mime IS NOT NULL AND linktoc.edoc_mime <> '') or (linktoc.edoc_ext IS NOT NULL AND linktoc.edoc_ext <> '') then 16 when (linktoc.toc_flags &  1) = 1 then 1 else 0 end as c10,toc.created as c11,toc.modified as c12,case linktoc.etype when -2 then linktoc.pagecount else null end as c13,linktoc.vol_id as c14,linktoc.pset_id as 
c15,toc.name as c16,0 as c17,1 as c18,toc.toc_uuid as c19,plock.owner  as c20,cast(null as varbinary(85)) as c21,0 as c22,case when linktoc.final_date is not null or linktoc.isdispositioned = 0 then linktoc.final_date else rf_toc.final_date end as c23,(case when linktoc.edoc_storeid is null then 0 else (case when linktoc.edoc_size is null then 0 else linktoc.edoc_size end) end) + (select isnull(sizetemp1.sum1, 0) + isnull(sizetemp2.sum2, 0) + isnull(sizetemp3.sum3, 0) from (select sum(cast(img_size as bigint)) + sum(cast(txt_size as bigint)) + sum(cast(loc_size as bigint)) as sum1 from doc where pagenum >= 0 and linktoc.tocid = doc.tocid) sizetemp1, (select sum(cast(altedoc_size as bigint)) as sum2 from altedoc where altedoc.tocid = linktoc.tocid) sizetemp2, (select isnull(sum(cast(datalength(ann.ann_text) as bigint)), 0) + isnull(sum(cast(ann.attach_size as bigint)),0) + isnull(sum(cast(datalength(ann.bitmap) as bigint)), 0) as sum3 from ann where ann.page_id in (select page_id from doc where doc.tocid = linktoc.tocid and doc.pagenum >= 0)) sizetemp3) as c24,case when (linktoc.toc_flags &  512) = 512 then 1 else 0 end as c25,case when ((linktoc.toc_flags &  512) = 512 and (linktoc.toc_flags &  1024) = 0) then 1 else 0 end as c26,case when (linktoc.toc_flags & 512 = 512 and linktoc.toc_flags & 1024 = 0) then (select top 1 ad.who from active_doc ad where ad.tocid = linktoc.tocid) else cast(null as varbinary(85)) end as c27,linktoc.isdispositioned as c28,linktoc.pset_id as c29, l.row_num from searchresult13 sr inner loop join toc on toc.tocid = sr.tocid and sr.rsid = :rsetid<int>  right outer join #tmplisting l on toc.tocid=l.tocid  left outer join toc linktoc   with (nolock)  on case when toc.etype = -1 then toc.linkto else toc.tocid end = linktoc.tocid left outer join toc rf_toc   with (nolock)  on linktoc.rfid = rf_toc.tocid left outer join (select plock.*, ROW_NUMBER() OVER (PARTITION BY tocid ORDER BY owner) AS rn from plock   with (nolock) ) plock on 
linktoc.tocid=plock.tocid and plock.rn = 1 where ((toc.etype is null or ((toc.etype=-2 or toc.etype>=0) or (toc.etype = -1 and (linktoc.etype=-2 or linktoc.etype>=0)))))) as q1  where q1.row_num>=:begin<int> and q1.row_num<=:end<int> order by q1.row_num"; Timeout=15(sec).

 

Thanks,

Pratik

 

 

Error.png (170.88 KB)
0 0
replied on July 31 Show version history

This may be the same issue as some earlier comments on this post, which recommend upgrading from ODBC Driver 11 for SQL Server to version 17. The driver version is listed earlier in the text of your error message.

Some of the more recent comments on this post (mine included) mention still having the issue after upgrading, and not having a solution, so that may not resolve it.  But it is worth trying…

0 0
replied on September 6

I have 2 customers experiencing this issue also. Upgrading to ODBC 17 did not help. The issue seems to happen when nothing is entered in the search fields. So if you were just searching for all documents by template, the search would error out; but if you selected a field on the template and entered some information, the search completed.

0 0
replied on September 6

Have you tried increasing the timeout? You're likely pulling more information than can be returned within the 15-second time frame.
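Alongside (or before) raising the timeout, stale optimizer statistics on large tables are a common reason a previously fine query suddenly blows past a short timeout. Refreshing them is generic SQL Server maintenance, not a Laserfiche-documented fix, but it is cheap to try; the `toc` table targeted below is the one that appears in every timed-out statement in this thread.

```sql
-- Refresh optimizer statistics across the repository database.
-- Generic SQL Server maintenance; run during a quiet period.
EXEC sp_updatestats;

-- Or target the toc table specifically, since it appears in every
-- timed-out statement posted above:
UPDATE STATISTICS dbo.toc WITH FULLSCAN;
```

If the error disappears for a while after this and then returns, scheduling regular statistics maintenance may be worth considering.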

0 0
replied on October 4

One thing I've noticed during troubleshooting is that attributes on the user account can trigger the timeout error. In the past we've deleted and recreated the user account with the bare minimum attributes, and those accounts don't encounter the issue.

0 0
replied on October 4

Years ago I'd narrowed it down to the XmlColumnDisplay attribute. Next time try just deleting this one and see what happens. 

I wonder if setting up a SQL job to delete this attribute when it pops up would "fix" this?
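If anyone wants to experiment with that idea, a sketch might look like the following. Be warned that the table and column names here (`trustee_attributes`, `attr_name`) are pure guesses, not a documented Laserfiche schema; as the next reply notes, the attributes may not be stored in SQL directly at all. Verify where the attribute actually lives in your repository database, and back the database up, before deleting anything.

```sql
-- HYPOTHETICAL sketch only: trustee_attributes and attr_name are assumed
-- names, not a documented Laserfiche schema. Inspect first, delete later.
SELECT *
FROM dbo.trustee_attributes          -- hypothetical table name
WHERE attr_name = 'XmlColumnDisplay';

-- ...then, only if the rows look right, remove the attribute:
-- DELETE FROM dbo.trustee_attributes WHERE attr_name = 'XmlColumnDisplay';
```

A safer route than a raw SQL job would be deleting the attribute through the admin tools or the SDK, since that goes through the server rather than behind its back.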

2 0
replied on October 4

Yeah, that rings a bell - although I don't think the attributes are stored in SQL directly. I know we have an SDK script for setting up attributes.

0 0