Local LLMs with llamafile

About this Course

In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage while running models
* Query llamafile APIs with Python to process generated text (see the sketch after this list)
* Experience real-time inference through hands-on examples
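
As a preview of the Python portion of the project, the sketch below shows one way to query a llamafile's built-in HTTP server from Python. It assumes a llamafile has already been launched locally and is serving its OpenAI-compatible API on the default http://localhost:8080; the endpoint path, model name, and prompt here are illustrative assumptions, not part of the course materials.

    # Minimal sketch: query a locally running llamafile and print the generated text.
    # Assumes the llamafile's built-in server is listening on the default port 8080
    # and exposes an OpenAI-compatible chat completions endpoint.
    import requests

    API_URL = "http://localhost:8080/v1/chat/completions"  # assumed default endpoint

    payload = {
        "model": "local-model",  # placeholder; the server uses whichever model it bundles
        "messages": [
            {"role": "user", "content": "Explain in one sentence what a llamafile is."}
        ],
        "temperature": 0.7,
    }

    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()

    # The generated text is in the first choice's message content.
    print(response.json()["choices"][0]["message"]["content"])

Run against a live llamafile, this returns the model's reply as plain text, which can then be processed further in Python in the spirit of the course's API exercise.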

Created by: Duke University


