
Question

Best practices for large storage requirements over 30 terabytes

asked on September 24, 2024

We have a Laserfiche environment that is currently about 15 TB. The data is spread across several logical volumes stored on several drives on the server. Our drives average about 5 TB each before we add another drive to the server for document storage. We want to expand capacity to about 50 TB over the next two years. We're wondering about best-practice strategies, and whether we might eventually run out of drive letters on the server. We'd be interested in best practices for planning for this upcoming storage growth.
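For rough planning purposes, here is a quick back-of-the-envelope sketch of the growth arithmetic (assuming roughly linear growth and sticking with our current ~5 TB-per-drive sizing; all figures are approximate):

    # Back-of-the-envelope growth estimate; figures are approximate.
    current_tb = 15        # current repository size in TB
    target_tb = 50         # projected size in ~2 years
    drive_size_tb = 5      # average capacity we provision per drive today

    additional_tb = target_tb - current_tb
    drives_needed = -(-additional_tb // drive_size_tb)   # ceiling division

    print(f"Additional storage needed: {additional_tb} TB")
    print(f"Additional ~{drive_size_tb} TB drives at current sizing: {drives_needed}")

So at our current sizing we would be adding on the order of seven more drives, which is what prompted the drive-letter concern.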


Replies

replied on September 25, 2024

Hi Don,

Have you seen this post about planning for huge repos? Jason is using 100 TB for 60 million entries, and Sam has summarised some great advice. I'm working on a system that will grow from 55M to 300M records over the next five years.

It would be great if you could review the post and add your queries! A single thread for huge repos would be very useful :)

Regards,

Ben

replied on September 26, 2024

Beat me to it. And I wouldn't worry about running out of drive letters. First, a full alphabet of drives at your current ~5 TB average would be about 125 TB, more than 8x your current usage. Second, you can always spin up a file server or six and store additional volumes on it/them via UNC paths like \\fileserver\repositories\myrepo\volumes\*. Having a storage expansion strategy is obviously good, but I really wouldn't worry about running out of logical disk space for repository volumes.
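As a side note, if you do end up spreading volumes across a mix of local drives and UNC shares, a rough capacity check like the Python sketch below can help you keep an eye on headroom. The paths are placeholders, not a recommended layout:

    # Rough free-space check across local and UNC-hosted volume locations.
    # The paths below are placeholders - substitute your actual volume roots.
    import shutil

    TB = 1024 ** 4

    volume_roots = [
        r"D:\LFVolumes",
        r"E:\LFVolumes",
        r"\\fileserver\repositories\myrepo\volumes",  # UNC path, no drive letter required
    ]

    for root in volume_roots:
        total, used, free = shutil.disk_usage(root)
        print(f"{root}: {used / TB:.1f} TB used of {total / TB:.1f} TB "
              f"({free / TB:.1f} TB free)")

shutil.disk_usage accepts UNC paths on Windows as well as drive letters, which is part of why drive letters aren't a hard limit here.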
