Local LLMs with llamafile
About this Course
In this 1-hour project-based course, you will learn to:
* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics such as GPU usage while running models
* Query a llamafile's API with Python to process generated text (see the sketch after this list)
* Experience real-time inference through hands-on examples

Created by: Duke University
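The API-querying step can be previewed with a short script. This is a minimal sketch, not the course's own code: it assumes a llamafile is already running its built-in, OpenAI-compatible HTTP server on the default address http://localhost:8080, and the model name and prompt shown are placeholders.

```python
"""Minimal sketch: query a locally running llamafile over its
OpenAI-compatible HTTP API (assumes the default http://localhost:8080)."""
import requests

# Placeholder request body in the OpenAI chat-completions shape.
payload = {
    "model": "LLaMA_CPP",  # placeholder model name; the local server serves whichever model the llamafile bundles
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a llamafile is in one sentence."},
    ],
    "temperature": 0.7,
}

# POST to the chat-completions endpoint served by the llamafile process.
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json=payload,
    timeout=120,
)
response.raise_for_status()

# Extract and print the generated text from the OpenAI-style response body.
print(response.json()["choices"][0]["message"]["content"])
```

Run the script while the llamafile server is up and it prints the model's reply; because the endpoint is OpenAI-compatible, the same request shape also works with the official `openai` Python client pointed at the local base URL.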
