Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics like GPU usage when running models
* Query llamafile APIs with Python to process generated text
* Experience real-time inference through hands-on examples
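As a sketch of the Python querying step above: a running llamafile serves an OpenAI-compatible API on `http://localhost:8080` by default, so generated text can be fetched with only the standard library. The model name, prompt, and helper names below are illustrative, not taken from the course materials.

```python
import json
import urllib.request

# Default address of a llamafile running in server mode (assumption: default port 8080).
LLAMAFILE_URL = "http://localhost:8080/v1/chat/completions"

def build_payload(prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": "LLaMA_CPP",  # with a single loaded model, the name is informational
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def extract_text(response: dict) -> str:
    """Pull the generated text out of an OpenAI-style response."""
    return response["choices"][0]["message"]["content"]

def query(prompt: str) -> str:
    """POST the prompt to the local llamafile server and return the reply."""
    req = urllib.request.Request(
        LLAMAFILE_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_text(json.load(resp))

# With a llamafile server running, query("Summarize llamafile in one sentence.")
# would return the model's generated text.
```

Because the request and response follow the OpenAI chat format, the same code works against any model packaged as a llamafile without modification.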

Created by: Duke University


Related Online Courses

Digital transformation is a hot topic--but what exactly is it and what does it mean for companies? In this course, developed at the Darden School of Business at the University of Virginia, and led... more
Advance your strategic analysis skills in this follow-up to Foundations of Business Strategy. In this course, developed at the Darden School of Business at the University of Virginia, you'll learn... more
This course aims to provide participants with a comprehensive understanding of incident response processes and workflows. The course covers various aspects of automating incident response... more
This course delves into advanced data structures in Python, focusing on the powerful capabilities of the NumPy and Pandas libraries. It introduces the ndarray, a multidimensional array object... more
This intermediate-level course equips learners with a comprehensive understanding of environmental, social, and governance (ESG) principles and practical mastery of applying generative AI (GenAI)... more