Can Laserfiche handle 30,000 documents (1 to 2 page documents with an average size of 150 KB) imported using
1. SDK Code OR
2. Import Agent
within 24 hours?
If yes, then which one would be the quicker option, SDK or Import Agent?
I would say yes, as long as the server setup is configured to handle it. For optimal performance you should have the LF Server installed on its own server, with SQL and Import Agent each on their own servers as well, each set up according to its own documentation. For the SQL server, you should also have the database and log files on separate drives.
Blake,
When you say 'as long as the server setup is configured to handle it', what does that mean? What kind of configuration, and where?
Also, I know one way is to test it, but do you have any idea how many documents per minute we could import into Laserfiche using the LF API or using Import Agent? I need to know which one is the quicker route, as this will be an everyday task of importing 30,000 (give or take) documents.
Thanks
Hardware will be the major factor in performance, so when choosing between the SDK and Import Agent, it is really a question of functionality and ease of configuration.
The SDK can be made to act like Import Agent, but it will require additional time to script and test. For that reason, Import Agent would be the better option, provided no additional functionality is required. Import Agent 9.0 was just released; here is a link to the Quick Start Guide, which covers many of the product's features. If all your needs can be met out of the box with Import Agent, I suggest moving forward with this product.
Also, make sure to take the resources for processing documents (whether by Workflow or Quick Fields) into consideration when configuring your servers. Test and monitor performance; there are a number of different places where the process might bottleneck due to disk speed or network traffic. The key to performance is eliminating bottlenecks.
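If you want a rough, ongoing measure of how quickly documents are being picked up, something as simple as the sketch below can help spot bottlenecks early. It assumes Import Agent is configured to remove files from its watched folder after a successful import, the folder path is a placeholder for your environment, and the count will be off during intervals when new files are still arriving.

```python
# Minimal sketch: estimate import throughput by watching how quickly files
# disappear from the folder Import Agent monitors. Assumes Import Agent
# removes files from this folder once they are imported; the path below is
# a placeholder. Files arriving during an interval will skew the estimate.
import os
import time

WATCH_FOLDER = r"\\fileserver\scans\incoming"   # hypothetical path
INTERVAL_SECONDS = 60

previous = len(os.listdir(WATCH_FOLDER))
while True:
    time.sleep(INTERVAL_SECONDS)
    current = len(os.listdir(WATCH_FOLDER))
    imported = max(previous - current, 0)        # files that left the folder
    print(f"{time.strftime('%H:%M:%S')}  ~{imported} documents/minute "
          f"({current} still queued)")
    previous = current
```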
The ability of the LF Server to handle large amounts of document/data input is related more to your hardware configuration than to the LF Server software. The LF Server needs throughput not only to accept the data from the clients, but also to communicate with SQL and, if used, network storage. So you need to configure your servers and network to allow for large data transfers. These are much more limiting factors than the LF Server software, and in large, busy systems they need to be planned for and monitored closely to ensure optimal performance.
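To put the stated volume in perspective, here is a rough back-of-envelope calculation (a sketch only, using the figures from the original question). Spread over a full 24-hour window the sustained rate is modest; the practical question is how much headroom your hardware and network leave for peaks and for Workflow/Quick Fields processing.

```python
# Rough back-of-envelope numbers for the stated load: 30,000 documents/day
# at an average of 150 KB each (values taken from the question).
DOCS_PER_DAY = 30_000
AVG_SIZE_KB = 150

total_mb = DOCS_PER_DAY * AVG_SIZE_KB / 1024                        # ~4,395 MB/day
docs_per_minute = DOCS_PER_DAY / (24 * 60)                          # ~20.8 docs/min
avg_throughput_kbps = DOCS_PER_DAY * AVG_SIZE_KB / (24 * 60 * 60)   # ~52 KB/s

print(f"Total volume:     {total_mb:,.0f} MB/day")
print(f"Required rate:    {docs_per_minute:.1f} documents/minute")
print(f"Average transfer: {avg_throughput_kbps:.0f} KB/s sustained")
```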
Thanks, Bert.
Which do you think is the better option for handling such a situation, SDK or Import Agent?
I personally would think that the Import Agent product should be as efficient as anything you could produce through the SDK, and Import Agent has the benefit of being maintained and supported by Laserfiche.
Thanks everyone.
I should have mentioned this before, but the reason for using the SDK to build our own import utility is the type of metadata files created by another system, which serve as input to Laserfiche. Those metadata files (.idx) are not supported by IA, so we built our own custom utility (which is already operational), as changing the metadata files would be an expensive option.
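For anyone else reading with a similar setup, a small converter along these lines is one way to bridge unsupported metadata files. The layout below is purely hypothetical (the thread doesn't describe the actual .idx format), as are the paths and the assumption that each .idx file shares its base name with a .tif image.

```python
# Hypothetical sketch only: assumes a simple "FieldName=Value" line format
# for the .idx files. The idea is to pair each .idx file with its image and
# hand the parsed metadata to whatever does the import (a custom SDK
# utility, or a converter that writes a format Import Agent / Quick Fields
# can read).
from pathlib import Path

def parse_idx(idx_path: Path) -> dict[str, str]:
    """Read one metadata file into a field-name -> value mapping."""
    fields = {}
    for line in idx_path.read_text(encoding="utf-8").splitlines():
        if "=" in line:
            name, value = line.split("=", 1)
            fields[name.strip()] = value.strip()
    return fields

def pending_imports(folder: str):
    """Yield (image, metadata) pairs for every .idx file in the folder."""
    for idx_file in Path(folder).glob("*.idx"):
        image = idx_file.with_suffix(".tif")     # assumes matching base names
        if image.exists():
            yield image, parse_idx(idx_file)

for image, metadata in pending_imports(r"\\fileserver\scans\incoming"):
    print(image.name, metadata)
```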
I know this post is fairly old and you probably have a solution already, but Quick Fields and/or Quick Fields Agent is also a solution for this type of scenario. It might even eliminate the need for the SDK work depending on what the metadata looks like.