Quick Fields Processing Documents with 9000+ pages

I've got a PDF that is 81.8 MB and 9,700 pages. When it's run through a Quick Fields session, only the first 841 pages are scanned in before processing stops with no obvious error. If the file is broken up into chunks there doesn't seem to be an issue, but we're trying to avoid that. We've tried scanning with both the Laserfiche Capture Engine and the Universal Capture engine, but no luck. Any suggestions to get this to work without having to break the file up?
If you don't mind me asking (as a Capture Team developer)...
Where did this file come from?
How did it get so large?
And why don't you want to break it up?
Unfortunately, you're going to encounter all sorts of timeouts trying to process something like that (esp. with LFCE), timeouts that exist for good reasons.
The typical solution for this situation is to break the document up, either with some external process or with a separate Quick Fields session that pre-processes the document and stores its parts in the repository, where another Quick Fields session can pick them up via LFCE. Honestly, though, even a separate Quick Fields session might not work: because it's a PDF, you'd at the least need to run page generation (which might take too long; I'm not sure without trying it) before you could identify where to split the document.