I use ChatGPT with Laserfiche Cloud (LF Cloud). It is just a simple web request rule: once you deposit some money in the account and get your authorization information (an API key), you can send questions to ChatGPT machine to machine.
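As a minimal sketch of that machine-to-machine web request, here is how a Node.js script (Node 18+, built-in `fetch`) could call OpenAI's Chat Completions endpoint. The endpoint and request shape are OpenAI's documented ones; the model name and the `OPENAI_API_KEY` environment variable are assumptions you would adjust to your account.

```javascript
// Build the web request for the OpenAI Chat Completions API.
// The API key is your "authorization information" from the account.
function buildChatRequest(question, apiKey) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // example model name -- pick the one you use
        messages: [{ role: "user", content: question }],
      }),
    },
  };
}

// Send the question and return the model's reply text.
async function askChatGPT(question) {
  const { url, options } = buildChatRequest(question, process.env.OPENAI_API_KEY);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The same request shape can be reproduced in a Laserfiche Cloud web request rule: the URL, the `Authorization: Bearer` header, and the JSON body are all that the rule needs to carry.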
Alternatively, you can run more specific models called NIMs (NVIDIA Inference Microservices) hosted on a local server at no cost. This is also a good option if your information is confidential. You can test these models on build.nvidia.com, then install and run them on local NVIDIA hardware. By installing Node.js you can use a script agent to hook them up to Laserfiche Cloud, and NVIDIA provides Node.js script examples for each model. This may be more practical for business applications going forward, since the models can be focused on returning data specific to your needs and the information stays secure.
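For the local NIM route, the request looks almost identical, because NIM containers expose an OpenAI-compatible HTTP API. A hedged sketch follows: the port (8000), the model name, and `max_tokens` value are assumptions; check the model card on build.nvidia.com for the values your NIM actually uses.

```javascript
// Build a request for a locally hosted NIM's OpenAI-compatible endpoint.
// No API key is needed for a default local deployment, and no data
// leaves your network.
function buildNimRequest(question) {
  return {
    url: "http://localhost:8000/v1/chat/completions", // assumed default port
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "meta/llama-3.1-8b-instruct", // example model -- use yours
        messages: [{ role: "user", content: question }],
        max_tokens: 256,
      }),
    },
  };
}

// Query the local NIM and return the reply text.
async function askLocalNim(question) {
  const { url, options } = buildNimRequest(question);
  const res = await fetch(url, options);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the endpoint is OpenAI-compatible, the same Node.js script agent can switch between the cloud and the local model mostly by changing the URL, which is what makes the confidential-data case straightforward.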
https://www.nvidia.com/en-us/ai/