Discussion

5% Memory Check still valid with Today's Windows Memory Management?

posted on June 29, 2022

Should the Workflow API still deny requests if the available memory shows less than 5%?

This is the error it throws, and it blocks all submissions to the repository if Windows doesn't show at least 5% available memory. But will a Windows file server EVER show that much available memory after several weeks of being online? See below.

The service '/Workflow/api' cannot be activated due to an exception during compilation.  The exception message is: Memory gates checking failed because the free memory (153640960 bytes) is less than 5% of total memory.  As a result, the service will not be available for incoming requests.  To resolve this, either reduce the load on the machine or adjust the value of 
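For reference, the numbers in that message line up with a straightforward percentage check; a quick sketch, assuming the 16GB total mentioned later in this thread:

```python
# WCF's memory gate refuses to activate a service when free physical
# memory drops below a percentage of total physical memory (default 5%).
free_bytes = 153_640_960      # value taken from the error message (~146 MB)
total_bytes = 16 * 1024**3    # assumed: 16 GB server, per later posts

free_pct = free_bytes / total_bytes * 100
print(f"free memory: {free_pct:.2f}% of total")  # ~0.89%, well under the gate
print("gate trips:", free_pct < 5)               # True
```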

I have done a lot of research, only to find that Windows will always show that there is no available memory on file servers, even when you have plenty of available memory.

This is because Microsoft now uses all free memory for file caching, ready to provide it back when a private process requests it. This cannot be turned off. On the servers where we see this message, there is only 5GB of memory being used by private processes, with 9GB available sitting in the mapped file. Task Manager will show that all memory is in use, but this is considered normal for Microsoft server operating systems by every Stack Overflow post I have reviewed.

RAMMap shows that the 9GB of memory is in the mapped file just as the community explained, and that Windows itself is not throwing any OOM errors.

Here is just one of the several forum posts as an example

https://forums.tomshardware.com/threads/mapped-file-using-a-lot-of-ram-possible-memory-leak.2602303/

replied on June 29, 2022

Chad, as explained before, this is not a Workflow setting; you can control the amount of free memory that triggers the behavior by editing the config file.

If you're asking whether we are considering changing the default value in the config file, the answer is no. Your case is one of the many possible scenarios where the protection is triggered, but we (and likely IIS/.Net) have no way of knowing whether we're in that case or not.
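For context, the setting being referred to appears to be WCF's `minFreeMemoryPercentageToActivateService` attribute on the `serviceHostingEnvironment` element; a sketch of what the override might look like in the service's web.config (the value shown is only an example, and lowering it trades away the low-memory protection):

```xml
<configuration>
  <system.serviceModel>
    <!-- Default is 5 (percent). Lowering it relaxes the memory gate check. -->
    <serviceHostingEnvironment minFreeMemoryPercentageToActivateService="2" />
  </system.serviceModel>
</configuration>
```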

replied on June 29, 2022

Thanks for the reminder on the config! (I can't remember all this stuff.) But I really think this check should be considered for removal. It prevents Forms from archiving to the repository when there is no reason to, and since what is happening is complex, many will have trouble even understanding it and will open a support ticket saying that Forms is simply not working.

I have seen this across countless servers now. They all show no available memory even when there is plenty; it is a Microsoft design decision.

replied on June 29, 2022

Can I inquire as to why you have Workflow and Forms instances on Windows Server machines with the File Services role? If I'm understanding correctly, this is an analogous scenario to why we (and Microsoft) strongly recommend against installing any other applications on a SQL Server instance - by design, SQL will claim nearly all available memory on the machine to use for caching etc. Applications will generally not perform well under that kind of memory pressure.

replied on June 29, 2022

I meant the files uploaded to the Laserfiche repository. In these environments SQL is not hosted on the OS.

This is not a case of running out of memory or not having plenty of memory for the services. The previous post Miruna linked to was that case, but this is regarding systems that have allocated plenty of extra memory.

replied on June 29, 2022

Right, I was responding to:

"But will a Windows file server EVER show this much available memory after several weeks of being online?" and "I have done a lot of research only to find that Windows will always show that there is no available memory on file servers, even when you have plenty available memory."

And took that to mean Windows Server instances being used with the File Services role. 

The Tom's Hardware forum post you linked describes a case where the person had a nearly 10 GB Outlook PST mapped and cached, something that should never be the case for any Laserfiche servers.

To summarize, it's still unclear to me in what scenario you're seeing this occur. I don't recall it ever being an issue in any system I've worked with over many years.

replied on June 29, 2022

The server I am looking at now has over a TB of files in the repository, there is plenty of data to be mapped and cached (although I have run into this with much smaller 100GB systems as well).

Here is the current state of the server. The total memory installed is 16GB, or 16,000,000K.

You can see that all private processes combined are currently using only 6,200,000K

There is 8,800,000K available, but currently in the Mapped File. Here is what Workflow is throwing in the logs every day.

So I naturally asked IT to stop using so much memory for Mapped File; that is where I was led to the fact that it is not an option. Windows uses all possible available memory for the mapped file, but the memory is still available for use by any process that needs it. Therefore, it is a mistake for Workflow to reject a request from Forms.

replied on June 29, 2022

Ah. From here:

Standby memory is data that has been cached into memory and has not been modified since (though it may have been read); it can be dropped if required and instantly freed up on demand. If the physical RAM were needed for anything else, the Standby memory would be dropped. It's kept in memory on the off-chance it'll be needed again, since it's quicker to fetch from memory than from disk.

I ran RAMMap on a customer system with a high-activity 8 TB repository on a 16 vCPU / 64 GB RAM server hosting LFS and Import Agent.

What's notably different from yours is that this server's "Active" Mapped File RAM use is 428 MB even though it has ~47 GB of Standby RAM.

Your server's Mapped File memory is almost all "Active". My understanding is that unlike "Standby" memory, Active memory is not immediately available for use by other processes. When .NET/IIS tells you "Memory gates checking failed because the free memory (153640960 bytes) is less than 5% of total memory.", it appears it's not wrong. There really isn't memory immediately available for IIS to grab.

LFS itself has fairly low memory consumption - on this server it's 470 MB. Miruna tells me Laserfiche does not explicitly use memory mapped files.

Total Process Private memory here is 2.6 GB vs your server's 6.2 GB. The corresponding Workflow Server process (on another server) is using 750 MB. Workflow shouldn't use much RAM most of the time.

I would start by figuring out what processes are using those 6.2 GB of Process Private memory. Without additional context, I would say that's quite high. You should report back on the process memory usage, which may have some relation to the Mapped File usage. If all the processes running on the server should be there and are using a normal amount of RAM for what they are, it would seem the customer needs to allocate more RAM to the server to relieve memory pressure.

replied on June 30, 2022

LFS and WF are not using a substantial amount of memory, maybe 500MB total at most times. The reason there is 6GB in use is that they run the Full Text Search Engine, which is a memory-intensive process. That still leaves over 11GB free, though, given all system processes, LFS, WF, and the Full Text processes, which is plenty.

I am not familiar with switching Mapped File memory from Active to Standby, where is that setting? Why would we ever want to use all our free memory to hold random files if it was not ready to give up for private processes?

 

Wait, why would they need to add more RAM if they are only using 6GB of 15.7GB?

replied on June 30, 2022

Ah, I think I see what you've misinterpreted. Here is my understanding.

Task Manager/Resource Monitor's memory utilization metrics only show Active memory. If you ever see those tools showing near 100% memory utilization, the server really is capped out on RAM. It is not "only using 6GB of 15.7GB" - looking only at Active Process Private memory and assuming everything else is/should be free on demand isn't accurate.

Looking at your RAMMap screenshot, it shows 16,405,728 K Active / 16,469,596 K Total. That's 99.6% Active memory utilization. There is no "free" memory available on this server and its overall performance is likely suffering from the OS having to engage in constant paging to avoid total memory exhaustion. (Please do not mess with your page file, that's not the problem or solution here.)

Contrast that to the RAMMAP screenshot from my system, which shows 17,779,564 K Active / 67,108,404 K Total for 26.5% memory utilization, which is reflected in Task Manager/Resource Monitor, even though there's 46 GB of Standby Mapped File.
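Those two utilization figures follow directly from the RAMMap numbers; a quick check:

```python
# Active / Total pairs (in KB) from the two RAMMap screenshots discussed above.
servers = {
    "problem server": (16_405_728, 16_469_596),
    "healthy server": (17_779_564, 67_108_404),
}

for name, (active_kb, total_kb) in servers.items():
    pct = active_kb / total_kb * 100
    print(f"{name}: {pct:.1f}% of RAM is Active")
# problem server: 99.6% of RAM is Active
# healthy server: 26.5% of RAM is Active
```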

When Mapped File memory is shown as Active, that means it is actively holding files in use by a process and cannot be released because something is holding those files open. That memory is in active use just like Active Process Private memory is.

The Windows caching behavior has to do with Standby Mapped File memory. This is what the line in your original post of "Microsoft now uses all free memory for file caching, ready to provide it back when a private process requests it" refers to. Here's a constructed, simplified example:

  1. I launch the Windows Photos app and open a 1 MB cat picture that was stored as a local file.
  2. The Windows Photo app uses a bit of Active Process Private memory and the 1 MB cat picture I have open uses ~1 MB of Active Mapped File memory. The Active Mapped File memory is part of the total physical memory usage metric in Task Manager (along with all other Active memory).
  3. After admiring the cat picture, I close the Windows Photo app. The Photos process drops its open handle on Mapped File of the cat pic.
  4. Windows keeps the cat pic in memory as a Mapped File and changes that ~1 MB from Active to Standby. It is no longer counted as consumed memory in Task Manager and is immediately available for other purposes when needed. Windows doesn't actively remove it from memory because, if the RAM is available and nothing else needs it right away, it might as well keep the file cached in case it's requested again.
  5. A moment later, I decide I want to look at the cat pic again. This time when I open the file in the Photos app, Windows fetches it from memory (much faster) instead of disk and switches those memory sectors from Standby back to Active.
  6. I close Photos again, releasing its Private Process memory and switching the cat pic Mapped File memory back to Standby again.
  7. Later, something else (say the Workflow Web Service IIS Worker process) needs RAM and because not enough completely unused (RAMMap "Free") memory is available, Windows allocates it some of the Standby Mapped File memory (including where my cat pic was).

 

You don't switch Mapped File memory from Active to Standby yourself or via any setting. The operating system does it automatically when the memory is no longer in active use and safe to mark as available.

Your server is actively using all the memory it says it's using. Active Mapped File memory isn't holding "random files"; it's holding files your processes are using, and Windows won't give that memory up to other processes because it's not free. Only Standby Mapped File memory is free, and your RAMMap screenshot shows there's maybe 25 MB of that on your server.

I suspect a fair amount of the 6 GB of Active Mapped Memory you see in use contains the LFFTS catalog files. Catalog file access performance is a large driver of LFFTS performance so if that is indeed what's using most of the RAM, that's what you'd want to see.

To summarize: your server is actively using all the memory it says it is. The absence of nearly any Standby Mapped File memory indicates that this memory is being claimed for other purposes almost immediately once released from Active use. This server is under heavy memory pressure.

Allocate it more RAM and/or move Workflow to a different server to avoid impacts from the resource contention.

I recommend a minimum of 24 or 32 GB of RAM for Production servers that will host LFFTS where the customer has ~1 TB or larger repositories these days. I also do not put Laserfiche applications other than Laserfiche Server or Audit Trail 11 on the same Production server as LFFTS.

replied on July 1, 2022

I think I understand everything you're saying here, and to summarize: the 9GB of extra memory (not used by processes) is being used for "actively open files," which are likely opened by FTS (since we do not have any files open that I know of).

As a test I just tried stopping FTS, sure enough 12GB of memory instantly freed up in task manager.

It was at 15/15 as usual and it dropped to 3/15. So the FTS private process usage of 3GB was only a portion of what was actually being used by FTS (because of open files).

For the moment I am going to leave it off but before we start guessing how much more memory we need:

Is there any option to set an upper limit on how many files FTS keeps actively open on the server? Can we have more visibility and control over this, rather than just asking IT to try 32, then 64, then 128?

Thank you for your help and detailed explanations!

replied on July 1, 2022

Glad that helped =)

I would check the sizes of all the "\SEARCH" folders for all the repositories (if multiple), add them up, and then add 20% as a buffer for any smaller temp files it might create. That will likely give you a good indication of how much Active Mapped File memory FTS wants to use.
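That sizing rule is easy to script; a sketch, assuming local filesystem access to the SEARCH folders (whose paths you would supply yourself):

```python
import os

def folder_size_bytes(path):
    """Total size of all files under path, walking the directory tree."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def fts_memory_estimate(search_folders):
    """Sum the SEARCH folder sizes and add a 20% buffer for temp files."""
    return int(sum(folder_size_bytes(p) for p in search_folders) * 1.2)

# Hypothetical usage -- substitute your repositories' actual SEARCH folders:
# estimate = fts_memory_estimate([r"D:\Repo1\SEARCH", r"D:\Repo2\SEARCH"])
# print(f"~{estimate / 1024**3:.1f} GB of Mapped File memory expected")
```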

You can use the Search Engine Configuration Utility to set a max memory threshold for FTS, though I believe that only affects the process memory itself (not the Mapped Files), and I've anecdotally heard that FTS treats that parameter as more of a target than a hard limit.

replied on July 6, 2022

Interesting, SEARCH is 14GB. That means doing full text searches needs around 18GB alone, since the service uses memory too.

replied on July 6, 2022

Any way to use a disk drive instead of RAM for full text search, or is it simply too slow?

The gap between disk speed and RAM speed is decreasing every few years.
