Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage when running models
* Query llamafile APIs with Python to process generated text
* Experience real-time inference through hands-on examples
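As a taste of the API-querying objective above, here is a minimal Python sketch. It assumes a llamafile is already running in server mode on the same machine and exposing its OpenAI-compatible chat endpoint at the default local address; the model name, port, and prompt are illustrative placeholders, not course-specified values.

```python
import json
import urllib.request

# Assumption: a llamafile server is running locally and serving the
# OpenAI-compatible chat-completions endpoint at its default address.
API_URL = "http://localhost:8080/v1/chat/completions"


def build_payload(prompt, max_tokens=128):
    """Build the JSON body for an OpenAI-style chat-completion request."""
    return {
        "model": "local-model",  # placeholder; the local server serves one model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def extract_reply(response):
    """Pull the generated text out of an OpenAI-style response dict."""
    return response["choices"][0]["message"]["content"]


def query_llamafile(prompt):
    """Send a prompt to the local llamafile server and return its reply."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))


# Example usage (requires a running llamafile server):
#   reply = query_llamafile("In one sentence, what is a llamafile?")
```

Because the endpoint follows the OpenAI wire format, the same request/response handling carries over to other local inference servers with little change.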

Created by: Duke University


