Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage when running models
* Query llamafile APIs with Python to process generated text
* Experience real-time inference through hands-on examples
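As a taste of the API-querying step, the sketch below shows one way to call a llamafile from Python. It assumes a llamafile is already running in server mode on its default port (8080) and uses the OpenAI-compatible chat-completions endpoint that the embedded llama.cpp server exposes; the prompt text and the `ask`/`build_request` helper names are illustrative, not part of the course materials.

```python
# Sketch: query a locally running llamafile's OpenAI-compatible API.
# Assumes a llamafile server is listening on localhost:8080 (the default).
import json
import urllib.request

LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request carrying a single-turn chat prompt."""
    payload = {
        # The server answers with whichever model the llamafile bundles,
        # so the model name here is essentially a placeholder.
        "model": "local",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        LLAMAFILE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local llamafile and return the generated text."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example usage (requires a running llamafile server):
# print(ask("In one sentence, what is a llamafile?"))
```

Because the endpoint follows the OpenAI chat-completions shape, the same code works unchanged against any llamafile, regardless of which model it packages.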

Created by: Duke University


Related Online Courses

Whether you're writing paychecks or wondering where yours comes from, this course is for you! We begin by asking: "To succeed, what kind of a person does your organization need to attract,... more
In this 2-hour project-based course, you will learn how to import data into Pandas, create embeddings with SentenceTransformers, and build a retrieval augmented generation (RAG) system with your... more
The Juniper Networks Security Fundamentals specialization provides students with a brief overview of cybersecurity problems and how Juniper Networks approaches a complete security solution with Juniper... more
This Specialization is intended for anyone seeking to develop online, internet researching skills along with advanced knowledge of multimedia for the creation of digital objects such as... more
This is a self-paced lab that takes place in the Google Cloud console. In this lab, you will learn how to use the Gemini API context caching feature in Vertex AI. Created by: Google Cloud
