
Question

Looking for Audit Trail Best Practices

asked on November 6, 2024

We have a very large and active on-prem installation that is generating a substantial volume of Audit Trail logs. In exploring our options, we realized that we could not find any information on best practices. Hopefully, this thread will be able to document some good ideas and approaches.


In this case, we are generating between four and six 50 MB files per day, not using compression. Here are our questions:


What is the performance hit in reporting when compression is turned on? What do we gain in terms of space?

The "look-back" is 360 days.  One idea is to create a new folder for the logs every quarter, and then define a search catalog search catalog for that folder. The thinking is that the searches for more recent data might be faster if the catalog is smaller, or if the search engine does not need to span more data. Is that an accurate way of looking at this?

What is the performance hit if the Archive folder is moved to a network drive? (Approximate, realizing that there are many factors.)
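For concreteness, here is a rough sketch of the quarterly-folder idea in Python. All of the paths and the file pattern are placeholders (we do not know the actual rollover naming off-hand), and bucketing by file modification date is just one way to do it:

```python
# Rough sketch of the quarterly-folder idea. The paths and the
# "*.log" pattern are hypothetical placeholders -- adjust to the
# actual log location and rollover naming in your installation.
import shutil
from datetime import datetime
from pathlib import Path

LOG_DIR = Path(r"D:\AuditTrail\Logs")         # hypothetical current log folder
ARCHIVE_DIR = Path(r"D:\AuditTrail\Archive")  # hypothetical quarterly root

def quarter_name(timestamp: float) -> str:
    """Return a folder name like '2024-Q4' for a file timestamp."""
    d = datetime.fromtimestamp(timestamp)
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Bucket each rollover file into its quarter by modification date.
for log_file in LOG_DIR.glob("*.log"):
    dest = ARCHIVE_DIR / quarter_name(log_file.stat().st_mtime)
    dest.mkdir(parents=True, exist_ok=True)
    shutil.move(str(log_file), str(dest / log_file.name))
```

Each quarterly folder would then get its own search catalog.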


Replies

replied on December 9, 2024

Time waits for no man, so we started our own series of experiments. We are running a conversion, which lets us control the nature of the activity (basically importing documents) and also the volumes, for a real A/B test. First, compression: if you turn compression on, you get a 20:1 reduction in the size of the rollover files. A 10 GB audit trail log will compress down to 500 MB. It's dramatic.
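Putting that ratio together with the volumes from the original question gives a quick back-of-envelope estimate. The sketch below is plain arithmetic in Python; the file counts, the look-back window, and the 20:1 ratio are the numbers from this thread:

```python
# Back-of-envelope storage math from the numbers in this thread:
# 4-6 rollover files of 50 MB per day, a 360-day look-back, and the
# roughly 20:1 compression ratio observed on rollover files.
FILES_PER_DAY = (4, 6)      # low and high daily file counts
FILE_SIZE_MB = 50
LOOKBACK_DAYS = 360
COMPRESSION_RATIO = 20      # ~20:1 observed in our test

for n in FILES_PER_DAY:
    raw_gb = n * FILE_SIZE_MB * LOOKBACK_DAYS / 1024
    print(f"{n} files/day: {raw_gb:,.0f} GB raw, "
          f"{raw_gb / COMPRESSION_RATIO:,.1f} GB compressed")
# -> 4 files/day: 70 GB raw, 3.5 GB compressed
# -> 6 files/day: 105 GB raw, 5.3 GB compressed
```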


The heart of the Audit Trail system is the search catalog.

One big difference from earlier versions (10.4) is that the process for changing the amount of live data is different and a little awkward: basically, you either have to delete and re-create an existing catalog, or create a new one from scratch.

Reports will be offline until this rebuild process is finished.

Recommendation: Do NOT take the default of C:\.

There are circumstances when the files need to be moved to a new location but kept live (that is, still managed by the server). To set this up, go to Repositories:

At the bottom of the page, you can add additional locations: just hit the Plus sign to add a folder.

Warning! The search catalog can be quite large. This one was 80 GB for 60 days of log files that were themselves only 5 GB (a mix of uncompressed and compressed), a relatively small amount of data. It is probably a good idea to direct the catalog to a drive with a lot of free space, and the C drive is certainly not what you want to use.

The takeaway is that the catalog indexes the heck out of the data, giving up disk space in exchange for performance. Since disk space is inexpensive and your (or your client's) time is not, this is a good trade-off. Just be aware that this is how the setup works in V11.
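If you want to provision disk ahead of time, you can turn that one observation (an 80 GB catalog for 60 days of logs) into a ballpark. This sketch assumes the catalog grows roughly linearly with the look-back window, which a single data point cannot confirm, so treat it as an order-of-magnitude guide only:

```python
# Rough catalog sizing from the single data point above: an 80 GB
# catalog covering 60 days of logs. One sample is not a trend --
# this is a provisioning ballpark, not a formula.
TEST_CATALOG_GB = 80
TEST_WINDOW_DAYS = 60

def ballpark_catalog_gb(lookback_days: int) -> float:
    """Scale the observed catalog size linearly with the look-back window."""
    return TEST_CATALOG_GB * lookback_days / TEST_WINDOW_DAYS

print(f"360-day look-back: ~{ballpark_catalog_gb(360):.0f} GB of catalog")  # ~480 GB
```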


The last question is how all this impacts reporting performance. We are still testing, but it appears that there are only small differences between compressed and uncompressed files (once the catalog is built), and only small differences between reports that capture a lot of events and reports that capture far fewer. If the data is largely pre-staged in the search catalog, that makes a lot of sense.
