GenAI Models
To incorporate a Large Language Model (LLM) into your code, initialize it by importing from the mtllm.llms
module built into the language.
To download jac-lang with all required python dependencies to use llms:
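The install command itself is not shown in this section; as a sketch, assuming jac-lang and the MTLLM plugin are published on PyPI under the package names `jaclang` and `mtllm`, the installation might look like:

```shell
# Install jac-lang together with the MTLLM plugin
# (package names are assumptions; check the official docs for extras)
pip install jaclang mtllm
```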
Here is a list of models and model providers that are available out of the box with jac-lang.
Cloud Hosted LLMs (API Clients)
Note:
- These LLMs require an API key and the relevant Python libraries to be installed.
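As a sketch of what a cloud-hosted setup might look like (the import form, the `OpenAI` class name, and the `model_name` argument are assumptions and may differ across jac-lang versions):

```jac
import from mtllm.llms { OpenAI }

# Assumes an API key (e.g. OPENAI_API_KEY) and the provider's
# Python client library are already set up
glob llm = OpenAI(model_name="gpt-4o");
```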
Running Local LLMs
- Download Ollama from their website, install it, and start the server by running `ollama serve`. Then pull and run your model of choice by executing `ollama run <model_name>` in a new terminal.
- Download and run open-source LLMs from the plethora of models available on the Hugging Face website.
Note:
- Running local LLMs can be demanding on your hardware: the model may fail to run at all, or inference performance may suffer. Check that your system meets the requirements before running local LLMs.
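Once the Ollama server is running, a local model can be referenced from Jac in much the same way as a cloud model; this sketch assumes an `Ollama` class in `mtllm.llms` and a `model_name` parameter:

```jac
import from mtllm.llms { Ollama }

# Points at the locally running Ollama server;
# the model must already have been pulled with `ollama run`
glob llm = Ollama(model_name="llama3");
```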
In a Jac program where you want to run LLM inference, follow the template code snippets below.
Each example defines an llm model, which can be initialized with specific attributes.
Note:
- To visualize the prompts during inference, enable verbose mode by adding `verbose=True` as an argument when defining the LLM.
This approach initializes the desired model as a model code construct with a specific name (in this case, `llm`), facilitating its integration into the code.
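Putting the pieces together, a full template might look like the following; the `by llm()` delegation syntax, the function shape, and the `verbose` parameter placement are assumptions based on the description above, and may differ across jac-lang versions:

```jac
import from mtllm.llms { OpenAI }

# verbose=True prints the prompts sent to the model during inference
glob llm = OpenAI(model_name="gpt-4o", verbose=True);

# Delegate this function's implementation to the LLM
def summarize(text: str) -> str by llm();

with entry {
    print(summarize("Jac is a programming language that supersets Python."));
}
```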