Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage while running models
* Query llamafile APIs with Python to process generated text (a sketch follows below)
* Experience real-time inference through hands-on examples
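A running llamafile exposes an OpenAI-compatible HTTP API (by default on http://localhost:8080), so generated text can be queried from ordinary Python code. The snippet below is a minimal sketch, assuming a llamafile is already running locally on its default port and that the `requests` library is installed; the model name and prompt are placeholders, not values from the course.

```python
import requests

# Assumes a llamafile is already running locally; by default it listens on
# http://localhost:8080 and serves an OpenAI-compatible chat endpoint.
API_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "LLaMA_CPP",  # placeholder name; the local server typically ignores it
    "messages": [
        {"role": "user", "content": "Summarize what a llamafile is in one sentence."}
    ],
    "temperature": 0.7,
    "max_tokens": 128,
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()

# Extract the generated text from the OpenAI-style response structure.
reply = response.json()["choices"][0]["message"]["content"]
print(reply)
```

The same request can be made with the official `openai` Python client pointed at the local base URL, which is convenient if you later want to swap between local and hosted models without changing application code.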

Created by: Duke University

