Local LLMs with llamafile
About this Course
In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics, such as GPU usage, while running models
* Query llamafile APIs with Python to process generated text
* Experience real-time inference through hands-on examples

Created by: Duke University
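The "query llamafile APIs with Python" objective can be sketched as follows. A llamafile run in server mode exposes an OpenAI-compatible HTTP API, by default on `localhost:8080`. The base URL, the placeholder model name, and the helper function names below are illustrative assumptions, not part of the course materials:

```python
import json
import urllib.request

# Assumption: a llamafile server is running locally on its default port.
BASE_URL = "http://localhost:8080"


def build_chat_payload(prompt, max_tokens=128):
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": "LLaMA_CPP",  # placeholder; llamafile serves its bundled model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def extract_text(response_body):
    """Pull the generated text out of an OpenAI-style response dict."""
    return response_body["choices"][0]["message"]["content"]


def query_llamafile(prompt):
    """POST a prompt to the locally running llamafile server and return its reply."""
    request = urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return extract_text(json.load(response))
```

With a llamafile started locally (e.g. `./model.llamafile --server`), calling `query_llamafile("Summarize this text: ...")` would return the model's generated text as a plain string, which can then be post-processed like any other Python string.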
