Local LLMs with llamafile
About this Course
In this 1-hour project-based course, you will learn to:

* Package open-source AI models into portable llamafile executables
* Deploy llamafiles locally across Windows, macOS, and Linux
* Monitor system metrics like GPU usage when running models
* Query llamafile APIs with Python to process generated text (see the sketch after this list)
* Experience real-time inference through hands-on examples

Created by: Duke University
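The API bullet is the most code-facing part of the course. As a rough illustration (not taken from the course materials), here is a minimal sketch of querying a locally running llamafile through the OpenAI-compatible chat completions endpoint its built-in server exposes, assuming the default port 8080 and using only the Python standard library; the model name and prompt are placeholders.

```python
import json
import urllib.request

# Assumes a llamafile is already running and serving its OpenAI-compatible
# API on the default port 8080; adjust the URL if you launched it with a
# different --port.
API_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "LLaMA_CPP",  # placeholder model name; the local server serves whatever model the llamafile bundles
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a llamafile is in one sentence."},
    ],
    "temperature": 0.7,
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Send the request and parse the JSON response.
with urllib.request.urlopen(request) as response:
    body = json.load(response)

# Extract the generated text from the first completion choice.
print(body["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat completions format, the same request could also be made with an OpenAI-compatible client library pointed at the local base URL instead of raw HTTP.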