
Question

Volume Discussion

asked on June 4, 2024

I am seeking input/advice on how to proceed with improving our current volume setup.

 

The existing setup for the LF repository that I'm working with is as follows:

2 Logical Volumes (LVs):

  • LV1
  • LV2
     

Each LV has its own virtual drive. Drive capacity is limited to 4 TB each per organization policy.

  • LV1 -> E: (1 TB)
  • LV2 -> F: (4 TB)

*Physical volume rollovers occur at 5 GB.

We have 2 LVs because the original LV1 became full and could not be expanded, so LV2 was created and LV1 was left as is.

A workflow was created to brute-force update every folder in the repo to start using LV2 (which I believe migrated all the folders to LV2, but not the existing files inside the folders).


The issue

LV2 is approaching its 4 TB capacity. In anticipation of this, a new LV and virtual drive have been created.

  • LV3 -> G:

 

The plan was to continue the same pattern: create a new LV per 4 TB drive and update folders to default to the new LV.
 

But after reading other volume rollover discussions and documentation/white papers, it appears that this may not be the best approach.

It seems better to create a single master LV spanning all virtual drives and expand it (by changing the fixed path) onto a new 4 TB drive as needed. (Volume rollover would not handle this, since it would be more of a "drive" rollover and would need to be done manually, correct?)

Can someone confirm if this is the best approach?

If so, would the following outline of the steps be accurate?

  1. Create the master LV and a 4 TB drive.
  2. Use a workflow to default all folders to the master LV.
  3. Migrate files to the master LV.
    1. If full, create a new 4 TB drive and change the master LV fixed path to the new drive (a rough capacity-check sketch follows this list).
    2. Repeat as needed.
  4. Delete the old LVs when empty.
  5. Profit?
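For step 3.1, this is roughly the kind of check I had in mind for knowing when the current drive is getting close to the 4 TB policy limit. It is only a sketch: the drive letter and thresholds are placeholders, and the actual fixed-path change would still be a manual step.

```python
import shutil

# Placeholder values for illustration -- point these at the drive that
# currently backs the master LV and at the organization's 4 TB policy.
MASTER_LV_DRIVE = "G:\\"
POLICY_LIMIT_BYTES = 4 * 1024**4   # 4 TB cap per drive, per policy
ROLLOVER_THRESHOLD = 0.90          # warn at 90% of the cap

def master_drive_needs_rollover(drive: str = MASTER_LV_DRIVE) -> bool:
    """Return True when the drive backing the master LV nears the 4 TB cap."""
    usage = shutil.disk_usage(drive)
    cap = min(usage.total, POLICY_LIMIT_BYTES)
    used_fraction = usage.used / cap
    print(f"{drive} used {usage.used / 1024**4:.2f} TB "
          f"({used_fraction:.0%} of the policy cap)")
    return used_fraction >= ROLLOVER_THRESHOLD

if __name__ == "__main__":
    if master_drive_needs_rollover():
        # Manual step: provision the next 4 TB drive and change the
        # master LV's fixed path to point at it.
        print("Time to add a new 4 TB drive and repoint the master LV.")
```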

Would this be a viable solution, or would it just "kick the can down the road," since eventually even the master LV would get too big and need to be retired and replaced?

Thank you for reading this far!

 


Replies

replied on June 4, 2024

Personally, if your drives cannot be larger than 4 TB, I would just add a drive once you reach capacity and migrate all documents to the new drive, which would free up space again on the primary storage drive for new documents.

Depending on your infrastructure, the primary drive should be your fastest storage tier (e.g., S3 Express) if possible, and the drives holding the older data could be a more cost-effective tier (e.g., S3 Intelligent-Tiering).

However, there is one catch to keep in mind: pages and versions can be added to some documents, which means users can still add data to archive volumes. If that kind of thing happens in your repository, you need to plan for it and archive early, leaving some remaining space on the new archive drives.

The great thing about volumes in Laserfiche is that they let you move data around on the back end without any change on the front end. How you do it is up to you: you can move data manually at intervals, have Workflow move only the oldest documents by date, etc.
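If you go the "oldest documents first" route, the starting point is just a list of the oldest documents still sitting on the old volumes. The sketch below shows the general idea; the table and column names (doc, doc_id, doc_name, creation_date, vol_id), the volume IDs, and the connection string are placeholders rather than the actual Laserfiche schema, so map everything to your own database before using anything like this.

```python
import pyodbc

# Everything named here is a placeholder, NOT the real Laserfiche schema --
# substitute your own server, database, tables, and volume IDs.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlhost;DATABASE=repo;Trusted_Connection=yes")
OLD_VOLUME_IDS = (1, 2)   # hypothetical IDs for LV1 and LV2
BATCH_SIZE = 500          # documents to queue per migration pass

def oldest_documents_on_old_volumes():
    """Return the oldest documents still stored on the old volumes."""
    sql = """
        SELECT TOP (?) doc_id, doc_name, creation_date, vol_id
        FROM doc                       -- hypothetical table name
        WHERE vol_id IN (?, ?)
        ORDER BY creation_date ASC
    """
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        cursor.execute(sql, BATCH_SIZE, *OLD_VOLUME_IDS)
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for row in oldest_documents_on_old_volumes():
        # Feed these IDs to whatever actually performs the move
        # (Workflow, a scheduled job, or a manual pass in the client).
        print(row.doc_id, row.creation_date, row.vol_id)
```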

 

Add:

I should note that my recommendation also comes from my preference for fixed volumes over logical volumes, since I have had trouble updating path locations on logical volumes but not on fixed volumes. If you do want to use a logical volume, you can update the path in the root manually when your drive capacity alarm goes off and span it across drives, leaving space for growth from new pages and versions if necessary. If you work with logical volumes, though, you might want to make those edits directly in the database rather than in the admin console; I just find the console does not handle path changes well.
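For reference, when I do make that kind of change directly in the database, it boils down to a single UPDATE along the lines of the sketch below. The table and column names (vol, vol_id, fixed_path) are stand-ins rather than the real schema, so verify them against your own database and take a backup before running anything like this.

```python
import pyodbc

# Placeholder names only -- the real volume table and columns in your
# Laserfiche database will differ, so confirm them before touching anything.
CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=sqlhost;DATABASE=repo;Trusted_Connection=yes")

def repoint_volume(volume_id: int, new_path: str) -> None:
    """Change a volume's fixed path, e.g. after adding a new 4 TB drive."""
    conn = pyodbc.connect(CONN_STR)
    try:
        cursor = conn.cursor()
        cursor.execute(
            "UPDATE vol SET fixed_path = ? WHERE vol_id = ?",  # hypothetical schema
            new_path, volume_id,
        )
        conn.commit()  # nothing is written until the commit
    finally:
        conn.close()

# Example: point the master LV (hypothetical ID 3) at the new G: drive.
# repoint_volume(3, r"G:\Volumes\MasterLV")
```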
