Question

Any way to view the size of a folder or view folders by size?

asked on August 10, 2020

I can't find a column for size to add to my search view. I would like to see which folders are taking up the most disk space in the repository, similar to how I would analyze my Windows disk with TreeSize.

 

0 0

Replies

replied on August 10, 2020

What is the end goal here?

The repository is taking up disk space by volume, not by folder. You can get sizes and entry counts for volumes in the LF Admin Console.

1 0
replied on August 11, 2020

This is from a user's perspective, if they want to see what is taking up space. They have two primary ways to catalog/label their files as they are imported: folders and templates.

Think of folders and templates as labels on boxes, and the repository as a garage. One day they can't park their car, so they look around at the size of the boxes and the labels and decide that the huge box labeled Christmas Ornaments has got to go.

I am trying to find a way to help them see the size of their boxes.

0 0
replied on August 11, 2020

Right, boxes here would be volumes. The size of a folder is not relevant if that folder contains documents across 5 volumes and only one of the volumes is on a disk that's running out of space.

0 0
replied on August 11, 2020

But volumes are not related to the contents. Volumes would be equivalent to different areas of the garage, which does not help with determining what is taking up space. For example, if I have the shelving, the attic, and the floor loaded with boxes, it doesn't help me at all to notice that any one of them is full.

It doesn't matter to me which volume I remove something from, since I can shift things around as needed. What matters more is the size of the boxes (the part they are missing) and the label (the folder name and/or template name).

0 0
replied on August 12, 2020

How about starting from this:

 

-- Total image (scanned page) size per parent folder
select t.parentid, p.name, sum(convert(bigint, d.img_size)) as total_img_size
  from doc as d
  join toc as t on d.tocid = t.tocid    -- the document's own entry
  join toc as p on t.parentid = p.tocid -- its parent folder
 where t.parentid <> 2
 group by t.parentid, p.name
 

You're going to have to add the edoc_size for toc entries with a non-null edoc_storeid.
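A rough sketch of how that addition might look, assuming edoc_size and edoc_storeid sit on the doc table alongside img_size (adjust if they live elsewhere in your version):

-- Sketch only: also count electronic document sizes where present
select t.parentid, p.name,
       sum(convert(bigint, isnull(d.img_size, 0))
         + convert(bigint, case when d.edoc_storeid is not null
                                then isnull(d.edoc_size, 0) else 0 end)) as total_size
  from doc as d
  join toc as t on d.tocid = t.tocid
  join toc as p on t.parentid = p.tocid
 where t.parentid <> 2
 group by t.parentid, p.name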

0 0
replied on August 12, 2020

Thanks James. Looks like this gives me some information from the backend, but I don't see my repository structure here; it just seems to be a bunch of random folders. How do I start at the root and drill down, like in the screenshot in the original post?

 

Edit: Or how do I group by template, maybe? That would help.
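For the drill-down part, one possible database-side approach is to roll every document's size up to the folder directly under the root that contains it. This is only a sketch, and it assumes the root entry has tocid 1 and that parentid forms the folder tree (both unverified here):

-- Sketch: document sizes rolled up to the folders directly off the root
;with tree as (
    -- anchor: entries directly under the root
    select tocid as top_tocid, name as top_name, tocid
      from toc
     where parentid = 1               -- assumed: the root entry has tocid 1
    union all
    -- recurse: everything underneath, tagged with its top-level ancestor
    select tr.top_tocid, tr.top_name, t.tocid
      from toc as t
      join tree as tr on t.parentid = tr.tocid
)
select tr.top_tocid, tr.top_name,
       sum(convert(bigint, d.img_size)) as total_img_size
  from doc as d
  join tree as tr on d.tocid = tr.tocid
 group by tr.top_tocid, tr.top_name
 order by total_img_size desc
 option (maxrecursion 0)              -- allow folder trees deeper than 100 levels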

0 0
SELECTED ANSWER
replied on August 13, 2020

Chad, be grateful that I was incredibly bored last night :)

4 0
replied on August 13, 2020

Wow, that certainly helps them see where the big folders are! Thanks for putting that together; somebody get this man a PlayStation!

0 0
replied on August 13, 2020

I went the WF route ;)

 

 

// Get the folder this workflow activity is bound to and read its total size
FolderInfo fi = (FolderInfo)this.BoundEntryInfo;
long total = fi.GetStatistics().TotalFileSize;   // size in bytes
double totalMB = total / (1024.0 * 1024.0);      // floating-point math so the fraction isn't truncated
SetTokenValue("FolderSizeBytes", total);
SetTokenValue("FolderSizeMB", totalMB);

This will give you a multi-value token with one row per folder off the root of the repository and the folder's size.

(I would run it during off-peak user activity times if the repository is in the terabyte range.)

2 0
replied on August 13, 2020

This does allow me to give them a useful interface. I set it up as a business process they can run on folders, and it posts the result to the metadata as Size on Disk. Thank you as well!

This is likely something a lot of customers will eventually need, especially when the cost of storage/backup/redundancy gets high.

1 0