Generative AI Architectures with LLM Prompt RAG Vector DB
Published 11/2024
MP4 | Video: h264, 1920x1080 | Audio: AAC, 44.1 KHz
Language: English | Size: 2.04 GB | Duration: 5h 27m
Design and Integrate AI-Powered S/LLMs into Enterprise Apps using Prompt Engineering, RAG, Fine-Tuning and Vector DBs
What you'll learn
Generative AI Model Architectures (Types of Generative AI Models)
Transformer Architecture: Attention Is All You Need
Large Language Models (LLMs) Architectures
Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation
Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)
Function Calling and Structured Outputs in Large Language Models (LLMs)
LLM Providers: OpenAI, Meta AI, Anthropic, Hugging Face, Microsoft, Google and Mistral AI
LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
SLM Models: OpenAI GPT-4o mini, Meta Llama 3.2, Google Gemma, Microsoft Phi-3.5
How to Choose LLM Models: Quality, Speed, Price, Latency and Context Window
Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3
Installing and Running Llama and Gemma Models Using Ollama
Modernizing Enterprise Apps with AI-Powered LLM Capabilities
Designing the 'EShop Support App' with AI-Powered LLM Capabilities
Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought (CoT)
Design Advanced Prompts for Ticket Detail Page in EShop Support App w/ Q&A Chat and RAG
The RAG Architecture: Ingestion with Embeddings and Vector Search
End-to-End Workflow of Retrieval-Augmented Generation (RAG) - The RAG Workflow
End-to-End RAG Example for EShop Customer Support using OpenAI Playground
Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer Learning
End-to-End Fine-Tuning of an LLM for EShop Customer Support Using the OpenAI Playground
Choosing the Right Optimization - Prompt Engineering, RAG, and Fine-Tuning
Requirements
Basics of Software Architectures
Description
In this course, you'll learn how to design Generative AI architectures by integrating AI-powered S/LLMs into an EShop Support enterprise application using Prompt Engineering, RAG, Fine-Tuning and Vector DBs.

We will design Generative AI architectures with the following components:
Small and Large Language Models (S/LLMs)
Prompt Engineering
Retrieval-Augmented Generation (RAG)
Fine-Tuning
Vector Databases

We start with the basics and progressively dive deeper into each topic. We also follow the LLM Augmentation Flow, a framework that improves LLM results by applying Prompt Engineering, RAG and Fine-Tuning in sequence.

Large Language Models (LLMs) module:
How Large Language Models (LLMs) work
Capabilities of LLMs: Text Generation, Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search, Code Generation
Generate Text with ChatGPT: Understand Capabilities and Limitations of LLMs (Hands-on)
Function Calling and Structured Outputs in Large Language Models (LLMs)
LLM Models: OpenAI ChatGPT, Meta Llama, Anthropic Claude, Google Gemini, Mistral Mixtral, xAI Grok
SLM Models: OpenAI GPT-4o mini, Meta Llama 3.2, Google Gemma, Microsoft Phi-3.5
Interacting with Different LLMs via Chat UI: ChatGPT, Llama, Mixtral, Phi-3
Interacting with the OpenAI Chat Completions Endpoint through Code
Installing and Running Llama and Gemma Models Using Ollama to Run LLMs Locally
Modernizing and Designing the EShop Support Enterprise App with AI-Powered LLM Capabilities

Prompt Engineering module:
Steps of Designing Effective Prompts: Iterate, Evaluate and Templatize
Advanced Prompting Techniques: Zero-shot, One-shot, Few-shot, Chain-of-Thought, Instruction-based and Role-based
Design Advanced Prompts for EShop Support: Classification, Sentiment Analysis, Summarization, Q&A Chat, and Response Text Generation
Design Advanced Prompts for the Ticket Detail Page in the EShop Support App w/ Q&A Chat and RAG

Retrieval-Augmented Generation (RAG) module:
The RAG Architecture Part 1: Ingestion with Embeddings and Vector Search
The RAG Architecture Part 2: Retrieval with Reranking and Context Query Prompts
The RAG Architecture Part 3: Generation with Generator and Output
End-to-End Workflow of Retrieval-Augmented Generation (RAG) - The RAG Workflow
Design EShop Customer Support Using RAG
End-to-End RAG Example for EShop Customer Support Using the OpenAI Playground

Fine-Tuning module:
Fine-Tuning Workflow
Fine-Tuning Methods: Full, Parameter-Efficient Fine-Tuning (PEFT), LoRA, Transfer Learning
Design EShop Customer Support Using Fine-Tuning
End-to-End Fine-Tuning of an LLM for EShop Customer Support Using the OpenAI Playground

Lastly, we will discuss Choosing the Right Optimization: Prompt Engineering, RAG, or Fine-Tuning.

This course is more than just learning Generative AI; it's a deep dive into designing advanced AI solutions by integrating LLM architectures into enterprise applications. You'll get hands-on experience designing a complete EShop Customer Support application, including LLM capabilities such as Summarization, Q&A, Classification, Sentiment Analysis, Embedding Semantic Search and Code Generation.
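To give a flavor of the "Chat Completions through Code" and few-shot prompting lessons, here is a minimal sketch (not taken from the course materials) that sends a few-shot ticket-classification prompt to the OpenAI Chat Completions endpoint using the official openai Python SDK. The model name and the example tickets are placeholders chosen for illustration.
Code:
# Minimal few-shot ticket classification against the OpenAI Chat Completions endpoint.
# Requires: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

few_shot_messages = [
    {"role": "system", "content": "You classify EShop support tickets as Billing, Shipping, or Technical."},
    {"role": "user", "content": "Ticket: I was charged twice for order #1234."},
    {"role": "assistant", "content": "Billing"},
    {"role": "user", "content": "Ticket: My package tracking has not updated in a week."},
    {"role": "assistant", "content": "Shipping"},
    {"role": "user", "content": "Ticket: The mobile app crashes when I open my cart."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder; use whichever model you have access to
    messages=few_shot_messages,
    temperature=0,
)
print(response.choices[0].message.content)  # expected: "Technical"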
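The function-calling lessons cover letting the model trigger your own backend functions. The sketch below is an assumption of how that might look for the EShop scenario: get_order_status is a hypothetical function, not part of the course code, and only the tool-call request is shown (executing the function and returning its result to the model is a further step).
Code:
# Minimal function-calling sketch: the model asks to call a (hypothetical) get_order_status tool.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",   # hypothetical EShop backend function
        "description": "Look up the shipping status of an order by its id.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is my order 5521?"}],
    tools=tools,
)

call = response.choices[0].message.tool_calls[0]   # assumes the model chose to call the tool
print(call.function.name)                          # "get_order_status"
print(json.loads(call.function.arguments))         # {"order_id": "5521"}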
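The Ollama lessons run Llama and Gemma locally. One convenient way to script against a local Ollama instance is its OpenAI-compatible endpoint, so the same SDK works with a different base_url. This is a sketch under the assumption that Ollama is already installed and a model tag such as llama3.2 has been pulled; it is not necessarily the exact setup used in the course.
Code:
# Talk to a locally running Ollama server through its OpenAI-compatible API.
# Assumes Ollama is installed and a model has been pulled, e.g.:  ollama pull llama3.2
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")  # api_key is required by the SDK but unused

reply = local.chat.completions.create(
    model="llama3.2",  # any locally available model tag, e.g. "gemma2"
    messages=[{"role": "user", "content": "Summarize: the customer received the wrong size and wants an exchange."}],
)
print(reply.choices[0].message.content)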
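The RAG module's ingestion-retrieval-generation flow can be approximated in a few lines: embed a small FAQ corpus, retrieve the closest entry by cosine similarity, and pass it as context to the generator. This is a simplified sketch assuming the OpenAI embeddings endpoint and an in-memory list standing in for a real vector database; the FAQ texts and question are invented for illustration.
Code:
# Simplified RAG flow: ingest (embed) -> retrieve (cosine similarity) -> generate (chat completion).
import numpy as np
from openai import OpenAI

client = OpenAI()

faq_docs = [
    "Refunds are processed within 5 business days after the returned item is received.",
    "Standard shipping takes 3-7 business days; express shipping takes 1-2 business days.",
    "You can reset your password from the account settings page.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(faq_docs)  # ingestion: store document embeddings

def retrieve(question, k=1):
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [faq_docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How long until I get my money back for a return?"
context = "\n".join(retrieve(question))

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)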
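For the Fine-Tuning module, OpenAI's chat fine-tuning expects training data as JSONL with one messages array per line. The sketch below shows how a couple of EShop-style support exchanges might be written to such a file; the conversations are made up, and a real fine-tuning run would need a much larger curated dataset.
Code:
# Write a tiny chat-format fine-tuning dataset (JSONL: one {"messages": [...]} object per line).
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are an EShop customer support assistant."},
        {"role": "user", "content": "Where is my order #5521?"},
        {"role": "assistant", "content": "Order #5521 shipped yesterday and should arrive within 3-5 business days."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are an EShop customer support assistant."},
        {"role": "user", "content": "I want to return a damaged item."},
        {"role": "assistant", "content": "I'm sorry about that. I've emailed you a prepaid return label; your refund will be issued once we receive the item."},
    ]},
]

with open("eshop_finetune.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
# The resulting file can then be uploaded for fine-tuning, e.g. via the OpenAI dashboard or API.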
Code:
https://fikper.com/eH4ga8xqFL/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part1.rar.html
https://fikper.com/qLTagWnahx/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part2.rar.html
Code:
https://fileaxa.com/qfxpiv2xdllj/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part1.rar
https://fileaxa.com/l81mobfh4mh0/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part2.rar
Code:
https://rapidgator.net/file/0182d6f6e91bf968a5099dfd32684e33/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part1.rar
https://rapidgator.net/file/4ffb8b039f4ca249f0b85d454e9d5545/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part2.rar
Code:
https://turbobit.net/j6uxt3o8k8kg/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part1.rar.html
https://turbobit.net/eoqdn55nif8y/Udemy_Generative_AI_Architectures_with_LLM_Prompt_RAG_Vector_DB_2024-11.part2.rar.html