Published 10/2024
Created by Paulo Dichone | Software Engineer, AWS Cloud Practitioner & Instructor
MP4 | Video: h264, 1280x720 | Audio: AAC, 44.1 KHz, 2 Ch
Genre: eLearning | Language: English | Duration: 43 Lectures ( 3h 13m ) | Size: 2.1 GB
Run custom LLMs privately on your system | Use a ChatGPT-like UI | Hands-on projects | No cloud or extra costs required
What you'll learn
Install and configure Ollama on your local system to run large language models privately.
Customize LLMs to suit specific needs using Ollama's model options and command-line tools.
Execute all terminal commands necessary to control, monitor, and troubleshoot Ollama models.
Set up and manage a ChatGPT-like interface, allowing you to interact with models locally.
Utilize different model types—including text, vision, and code-generating models—for various applications.
Create custom models from a Modelfile and integrate them into your applications.
Build Python applications that interface with Ollama models using its native library and OpenAI API compatibility.
Develop Retrieval-Augmented Generation (RAG) applications by integrating Ollama models with LangChain.
Implement tools and function calling to enhance model interactions for advanced workflows.
Set up a user-friendly UI frontend that lets users chat with different Ollama models.
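To give a feel for the Modelfile customization covered above, here is a minimal sketch. `FROM`, `PARAMETER`, and `SYSTEM` are standard Modelfile instructions; the base model name, parameter values, and system prompt are illustrative choices, not the course's exact example:

```
# Modelfile — builds a custom model on top of a pulled base model
FROM llama3.2

# Sampling parameters (values here are illustrative)
PARAMETER temperature 0.7

# System prompt baked into the custom model
SYSTEM You are a concise assistant that answers in plain English.
```

You would then register it with `ollama create my-assistant -f Modelfile` and run it with `ollama run my-assistant`.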
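As a taste of the Python integration topics, the sketch below builds the JSON body that Ollama's local REST API accepts on `POST /api/generate` (Ollama listens on port 11434 by default). The model name and prompt are placeholder values; this only constructs the request, it does not contact a running server:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    """Serialize a request body for Ollama's /api/generate endpoint.

    `stream=False` asks for a single JSON response instead of a
    stream of partial chunks.
    """
    payload = {"model": model, "prompt": prompt, "stream": stream}
    return json.dumps(payload)

# Placeholder model/prompt for illustration
body = build_generate_request("llama3.2", "Why is the sky blue?")
print(body)
```

In a real application you would send `body` to `http://localhost:11434/api/generate`, or skip the manual request entirely and use the `ollama` Python package or an OpenAI-compatible client pointed at the local server.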
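The tool/function-calling bullet can be previewed with the application side of the loop: the model replies with a tool name plus JSON-encoded arguments, and your code looks up and runs the matching Python function. The tool name, its stub implementation, and the shape of `tool_call` here are illustrative assumptions, not a specific model's output:

```python
import json

def get_current_time(timezone: str = "UTC") -> str:
    # Stub for illustration; a real tool would consult a clock library.
    return f"12:00 {timezone}"

# Registry mapping tool names (as advertised to the model) to functions
TOOLS = {"get_current_time": get_current_time}

def dispatch(tool_call: dict) -> str:
    """Execute the function a model's tool call asks for."""
    func = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    return func(**args)

# Simulated tool call of the kind a model might emit
result = dispatch({"name": "get_current_time", "arguments": '{"timezone": "UTC"}'})
print(result)  # 12:00 UTC
```

The dispatch result is then sent back to the model as a tool message so it can compose its final answer.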