Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics like GPU usage when running models
* Query llamafile APIs with Python to process generated text
* Experience real-time inference through hands-on examples
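For the Python querying step, a llamafile started in server mode exposes a local, OpenAI-compatible HTTP endpoint that a short script can call. The sketch below is a minimal example using only the Python standard library; it assumes the server is listening on the default http://localhost:8080 and that the model name is a placeholder (adjust the URL, port, and payload for your particular llamafile).

```python
import json
import urllib.request

# Assumes a llamafile is already running locally in server mode and
# serving its OpenAI-compatible API on the default port (adjust if yours differs).
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    # Placeholder model name; the local server answers with whatever model it has loaded.
    "model": "local-llamafile",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what a llamafile is in one sentence."},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Send the request and parse the JSON response, which follows the
# OpenAI chat-completions schema.
with urllib.request.urlopen(request) as response:
    body = json.load(response)

print(body["choices"][0]["message"]["content"])
```

Running a script like this while watching nvidia-smi (or Activity Monitor / Task Manager) is one way to observe the GPU and memory usage the course asks you to monitor during inference.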

Created by: Duke University


Related Online Courses

Paleontology: Ancient Marine Reptiles is a four-lesson course providing a comprehensive overview of the evolutionary changes that occur when air-breathing terrestrial animals return to water. This...
The Data Mining Specialization teaches data mining techniques for both structured data, which conform to a clearly defined schema, and unstructured data, which exist in the form of natural language...
The movement of bodies in space (like spacecraft, satellites, and space stations) must be predicted and controlled with precision in order to ensure safety and efficacy. Kinematics is a field that...
Leaders must have the ability to develop and deploy effective strategies. This specialisation will prepare you to be the strategic change-maker capable of enabling your organisation to compete into...
In this course, "Architecting with Google Kubernetes Engine: Workloads," you learn about performing Kubernetes operations; creating and managing deployments; the tools of GKE networking; and how...
