
Question

Abnormal MDF growth

asked on March 21, 2014

Within a week, our MDF has grown from 17 GB to 120 GB. A DB shrink can only free up about 30 GB, which leaves about 70 GB of unexplained growth. My backups are running fine. What could be causing this?

 

Did a little research and found that Searchresult, SearchResult1, SearchResult2, ... have the most used space. They each have about 6.9 GB of used space. There are a total of 14 of these "Search" tables, which together account for about 70 GB. Is this correct?

 

LFTableSize.xlsx (15.56 KB)

Answer

APPROVED ANSWER
replied on July 1, 2014

The search results tables are used to store the active search results for any open session. Once the search is closed or the connection terminated, the rows get deleted. However, deleting rows from a table won't reclaim any used space. You can shrink the database to reclaim space, but note that there are various reasons not to (fragmentation, etc).
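For reference, here is a minimal T-SQL sketch of checking and then reclaiming the space (the database name LFRepository is just a placeholder for your repository database):

-- Placeholder database name; substitute your repository database.
-- Report current size and free space first.
EXEC sp_helpdb 'LFRepository';

-- Shrink, leaving 10 percent free space for future growth.
DBCC SHRINKDATABASE ('LFRepository', 10);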

 

As Sheldon indicated, you should look into what types of searches are being run on the system and how they are being run to see if there's an issue in that regard. Perhaps Workflow is searching for a large result set and is inadvertently looping.


Replies

replied on May 12, 2014

Hi Long,

 

I see you don't have a response yet on this question. Did you eventually resolve your issue?

 

I am also experiencing the same problem. I am running on SQL Express and my database size has reached its limit, yet I only have 200 000 entries in the system.

 

Checking the table sizes, the Searchresult tables are definitely the culprits. I don't understand, however, how a table with only 10 000 rows could use 500 MB of space. That's just one of the Searchresult tables I am referring to.

 

This doesn't sound right to me, since these tables have only 5 columns, each holding small bits of data.
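One way to see where that space sits is sp_spaceused, which splits a table's footprint into reserved, data, index and unused space (the table name below is just an example):

-- Example table name; check one of the search results tables.
EXEC sp_spaceused 'dbo.searchresult1';

-- The output shows rows, reserved, data, index_size and unused.
-- A large reserved figure with a small data figure means pages are
-- allocated to the table but mostly empty.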

 

How do we maintain these tables and ensure they are not using unnecessary space?

 

Cheers

Sheldon

replied on May 12, 2014

Hi Sheldon,

 

Not sure what the "SearchResults" tables are used for, but they seem like temporary holding tables for "Search" data. I'm guessing these tables are filled with data and then purged by the Laserfiche system.

 

It looks like the "Total" table space is not being freed after the table has been purged by the system. As data is inserted into these tables, instead of starting over at an initial size of 0, the table keeps adding onto its current size. In my case, many of my "Search" tables had row counts between 0 and 10 000, yet used over 9 GB.

 

To make a long story short, I found that rebooting the server released the Total Used space. Once it had released this space, I could then shrink the database to a smaller size.

 

I'm not sure how you schedule your backup or maintenance jobs with SQL Express, since it does not include SQL Server Agent, but with SQL Standard I am able to run this maintenance cycle automatically throughout the week (a sketch of the SQL steps follows the list):

 

- Reboot server

- Shrink DB

- Rebuild indexes
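As a rough sketch, the shrink and rebuild steps can be scripted in T-SQL (the database name is a placeholder; sp_MSforeachtable is an undocumented but widely used Microsoft procedure, so test it before scheduling):

-- Placeholder database name; run during a quiet window.
USE [LFRepository];

-- Shrink the database, leaving 10 percent free space.
DBCC SHRINKDATABASE ('LFRepository', 10);

-- Rebuild every index, since shrinking fragments them.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';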

 

Hope this helps.

 

replied on May 13, 2014

Hi Long,

 

Thank you very much for your reply. Here is what I have learned about the issue:

 

1. Restarting the LF Server service will kill all current sessions.

2. Killing the current sessions releases the search results and allows SQL's database SHRINK to free up the space, so you don't have to reboot the entire machine (a quick way to verify the freed space is sketched after this list).

3. I was getting high data usage in all my Searchresult tables because of my workflow design. I was triggering an instance of Workflow per entry, each of which ran a search using the Search Repository activity. I would have 100+ entries being created, and thus 100+ workflows being triggered, at a time, meaning a lot of different sessions producing searches. I have since changed my design to run only one search on a schedule going forward.

4. I have noticed that in my new WF design my search returns 701 entries; however, the Searchresult SQL table contains 129 463 rows, using over 100 MB of data. This seems highly suspect to me until I can understand what I'm seeing.
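To check how much reclaimable free space is sitting in the database files once the sessions are closed, something like this works (run it in the repository database):

-- Free space inside each database file, in MB.
SELECT name AS FileName,
    size / 128 AS FileSizeMB,
    size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS FreeSpaceMB
FROM sys.database_files;

The FreeSpaceMB figure is roughly what a shrink can give back to the operating system.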

 

I am still trying to get answers from LF Support as to why this is and will provide feedback.

 

Cheers

Sheldon

replied on May 14, 2014

Not sure if you have this issue, but my tempdb would not auto-shrink. On many occasions it grew to over 60 GB. To shrink tempdb I needed to restart the SQL service, so I decided to do a full reboot instead of just restarting the LF service.
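For what it's worth, you can see what is holding tempdb space, and whether a shrink has any chance, with this DMV query; internal objects and the version store often keep the space pinned until the service restarts:

-- Breakdown of tempdb space by consumer, in MB.
SELECT
    SUM(user_object_reserved_page_count) * 8 / 1024 AS UserObjectsMB,
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS InternalObjectsMB,
    SUM(version_store_reserved_page_count) * 8 / 1024 AS VersionStoreMB,
    SUM(unallocated_extent_page_count) * 8 / 1024 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage;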

 

 

replied on May 14, 2014

Hi Long,

 

I never noticed it before, but now that I look at it, my tempdb is also rather large, although it hasn't reached the 10GB limit of SQL Express. It makes sense, then, to restart SQL as well.

replied on June 26, 2014

Hi Sheldon

 

Have you got a definitive answer from LF Support on this issue (growing SEARCHRESULT* tables)?

Thank you

replied on July 1, 2014

Hi Sylvain,

 

I never really received any definitive answers to my questions on why the table sizes were so big, even though search results were so small. The case was closed and I never got a reply on that specific question.

 

However, LF Support did confirm that restarting the LF Service and ensuring all sessions were ended would release the reserved space in the SearchResults tables, as I found out above.

 

If your SearchResults tables are too large, try to understand what searches are being performed, whether by Workflow or by the LF Client/Web Access. In my case, looping searches in Workflow were contributing to the problem. Fixing that ultimately solved my problem, and I have not had the same issue since.

 

Regards,

Sheldon

replied on February 10, 2016

My client's MDF file is about 19 GB. I suspect the reason is the Search tables: SQL is not releasing that space back, so the MDF still reports 19 GB. For those of you who had large MDF files, did you end up running the Shrink command against the MDF, and did that release the space? I am curious because this client has fewer than 500K documents in their repository, so a 19 GB MDF seems a bit high to me. We upgraded them from 9.0.1 to 9.2.1 last week, and that is when we noticed how large the MDF was. I have no problem running the Shrink command; I just wanted to know whether that was the route everyone else took after figuring out which searches were causing those tables to grow.

Thanks

replied on February 12, 2016

Hi Derick,

First, open SQL Server Management Studio and right-click the DB -> Tasks -> Shrink -> Database. The dialog shows how much free space is available to be released; take note of that figure.

 

I wouldn't shrink the DB just for the sake of it. If the 19 GB is acceptable and there's enough free space on the disk, I'd recommend you leave it.

SQL automatically expands the MDF file to accommodate its need for additional data storage. Since the LF search results are really just temporary storage, I understand the temptation to free up the used space. However, if you adopt this approach you will find yourself doing it daily, because Laserfiche will need space to store search results over and over again.
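If the repeated autogrow events themselves are a concern, a fixed growth increment avoids both tiny percentage growths and constant shrinking (the database and logical file names below are assumptions; check sys.database_files for yours):

-- Placeholder database and logical file names.
ALTER DATABASE [LFRepository]
MODIFY FILE (NAME = N'LFRepository_Data', FILEGROWTH = 512MB);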

Typically, the SQL database grows to the maximum size it needed at some point during your system's usage. It is better to leave the 19 GB as is, since it is easier for SQL to reuse the already reserved space next time, and performance won't suffer.

Shrinking will also fragment your indexes and can cause performance issues down the line. Only shrink if there is a very good reason for it, for example if a runaway workflow was searching in a loop.
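If you do shrink, you can check how badly the indexes were fragmented by it with something like this (run in the repository database):

-- Indexes with more than 30 percent fragmentation in the current database.
SELECT OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
INNER JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

Anything heavily fragmented can then be rebuilt with ALTER INDEX ... REBUILD.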

You can investigate the space usage in more detail by running this script, which reports the allocated, used, and unused space for every table:



-- Space allocated, used, and unused per user table, in KB.
SELECT
    t.name AS TableName,
    s.name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,   -- pages are 8 KB each
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM sys.tables t
INNER JOIN sys.indexes i
    ON t.object_id = i.object_id
INNER JOIN sys.partitions p
    ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a
    ON p.partition_id = a.container_id
LEFT OUTER JOIN sys.schemas s
    ON t.schema_id = s.schema_id
WHERE t.name NOT LIKE 'dt%'      -- skip legacy database diagram tables
    AND t.is_ms_shipped = 0      -- user tables only
    AND i.object_id > 255        -- skip system objects
GROUP BY t.name, s.name, p.rows
ORDER BY t.name;

Cheers

Sheldon
