Learn how to use DeepSeek-R1 in this crash course for beginners. Discover the innovative reinforcement learning approach that powers DeepSeek-R1 and how it achieves performance comparable to industry giants like OpenAI’s o1 at a fraction of the cost. You’ll learn about its architecture and practical applications, and how to deploy this model to leverage its advanced reasoning skills for your own projects.
✏️ Course developed by Andrew Brown from @ExamProChannel.
❤️ Support for this channel comes from our friends at Scrimba – the coding platform that’s reinvented interactive learning: https://scrimba.com/freecodecamp
⭐️ Contents ⭐️
00:00 Introduction
01:01 DeepSeek Overview
06:13 DeepSeek.com V3
15:36 DeepSeek R1 via Ollama
15:36 DeepSeek R1 via LMStudio
52:12 DeepSeek via Hugging Face Transformers
1:26:06 Thoughts and Conclusions
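For the Ollama route covered at 15:36, here is a minimal sketch of talking to a locally served model over Ollama's REST API. This assumes Ollama is installed, `ollama serve` is running on its default port, and the `deepseek-r1:1.5b` tag has already been pulled; the model tag and prompt are placeholders, not values from the video.

```python
import json
import urllib.request

# Ollama's default local REST endpoint (assumes `ollama serve` is running
# and the model tag below has already been pulled with `ollama pull`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload that Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return the completion text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running server): ask("deepseek-r1:1.5b", "Why is the sky blue?")
```

With `"stream": False` the server returns one JSON object with the full completion; omit it to receive a stream of partial responses instead.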
🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual
—
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
Hey, ChatGPT is still good 👍 … DeepSeek isn't working well for me…
This is the dumbest video; I can just ask DeepSeek to teach me how to use itself. All these tech tutorials are going to disappear soon; the tech industry is doomed.
Running on terminal was a smooth walk
The only R1 I knew before the course was the Yamaha R1 🗿
We went from Will Smith slaps to AI battles—what a timeline to live in
dude must have a time machine to drop this course that fast
Man, your nvidia-smi says you have an NVIDIA 4060 installed, not a 4080.
But anyway, I launched DeepSeek on my home PC with just a 1080, and it ran pretty fast.
Can they run on CPU devices?
I don’t think Ollama downloads models from Hugging Face; it pulls them from its own registry.
One man's stock crash is another man's crash course. Interesting times 😂
Really….so soon….🤷🏻♀️😱…
Omg….Thank you
Nice
How can one install DeepSeek locally on Linux Mint or any other Linux system?
How can DeepSeek help in data science and other data-processing tasks beyond text?
Thank you.❤
00:03 – Introduction to DeepSeek and its functionalities.
02:12 – DeepSeek R1 excels in text generation with significantly reduced costs.
06:36 – Overview of DeepSeek-R1's functionality and setup for language learning.
08:58 – The process of downloading and uploading files in DeepSeek-R1.
13:37 – DeepSeek-R1 effectively transcribes Japanese text from images.
15:42 – Introduction to DeepSeek's hardware capabilities and setup process.
19:32 – Discusses hardware requirements for running AI models effectively.
21:13 – Important considerations for running large models include RAM and GPU requirements.
24:36 – Overview of different ways to utilize AI models like DeepSeek-R1.
26:20 – Installing and configuring the DeepSeek model in LM Studio.
30:05 – The model effectively assists language learning with built-in reasoning.
31:42 – The agent demonstrates reasoning capabilities during tasks.
35:03 – Loading Llama distilled model shows resource management challenges.
36:42 – Demonstration of DeepSeek-R1 model adjustments and resource management challenges.
40:19 – Exploring GPU offloading for better model performance.
42:19 – Understanding the DeepSeek-R1 model and its comparisons to other AI models.
46:15 – Understanding hardware monitoring and optimization for local machines.
48:08 – Configuring GPU and CPU usage for deep learning models.
51:36 – Optimizing AI models for both GPU and non-GPU setups.
53:19 – Exploring DeepSeek's models and GPU performance limitations.
56:49 – Overview of setting up DeepSeek-R1 with Transformers in VS Code.
58:34 – Setting up a new IPython environment in Jupyter Notebooks.
1:02:13 – Installing necessary packages for DeepSeek setup.
1:03:52 – Installing necessary components for Transformers and DeepSeek setup.
1:07:20 – Installing TensorFlow and PyTorch for DeepSeek usage.
1:09:21 – Installing TensorFlow and PyTorch for DeepSeek-R1 integration.
1:13:08 – Model inference requires a Hugging Face API key.
1:15:10 – Troubleshooting HF token integration for downloading models.
1:18:59 – Managing resource exhaustion while using DeepSeek-R1 tools.
1:21:40 – Challenges in running models with limited resources.
1:25:18 – Understanding memory usage and optimization for DeepSeek-R1 models.
1:27:04 – Challenges of running deep learning models on consumer hardware.
1:30:50 – Future improvements in computing may reduce costs.
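A recurring theme in the chapters above (19:32, 21:13, 40:19) is whether a given model fits in RAM or VRAM. As a back-of-the-envelope sketch, weight memory scales with parameter count times bits per weight; the function below and its 20% overhead factor are assumptions for illustration, not figures from the video.

```python
def estimate_memory_gb(params_billion: float, bits_per_weight: float,
                       overhead: float = 1.2) -> float:
    """Rough GB needed to hold the weights, padded ~20% for KV cache and
    runtime overhead (the 1.2 factor is an assumption, not a measured value)."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1024**3

# A 14B distill at 4-bit quantization: ~7.8 GB, a tight fit on an 8 GB GPU.
print(f"{estimate_memory_gb(14, 4):.1f} GB")
# A 1.5B distill at 8-bit: ~1.7 GB, comfortable even on CPU-only machines.
print(f"{estimate_memory_gb(1.5, 8):.1f} GB")
```

This is why the video spends so long on GPU offloading: when the estimate exceeds VRAM, runtimes like LM Studio split layers between GPU and system RAM, trading speed for the ability to run at all.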
Bruh, this is just a model integration course, nothing more. And here I thought they'd teach us how to build an LLM like DeepSeek, or maybe give a DeepSeek optimization tutorial, bruh…
Damn. YOU GUYS ARE FAST AF !! 😭😭🫡
So I got DeepSeek R1-1.5B to run (albeit a pretty subpar experience) on my 4-year-old Surface Pro in about 20 minutes. Why did I just watch this man struggle for over an hour, with a much better setup, and fail to do anything beyond what I accomplished? Sometimes videos are just scrap, man.
create a course on project IDX pls
FreeCodeCamp 🪖 soldier salute 🫡🫡
OpenAI pays 95% of the costs, DeepSeek pays the rest 😅
good learning!
Ok umm what is this ? Lol im interested in learning it but i have no idea where to start
Liang Wenfeng, Jensen Huang, and Terry Gou hold no hatred for one another; the hatred comes from people who want to exploit them to make a fortune, and those same people are stirring up anti-China sentiment.
This is the worst tech tutorial ever. In the last part (jupyter notebooks), you have no idea what you are talking about, you spend dozens of minutes trying to understand basic error messages and you keep it in the final cut. At the end it's still not working, you just give up and end the video as is. Is this a joke video?
Here's a crash course:
"I was wondering about the 1989 Tiananmen Square protests and massacre. I would like some reliable information"
"Sorry, that's beyond my current scope. Let’s talk about something else."
Instead of LM Studio, use Msty; it's fast and cool, and it uses any local LLM you've previously downloaded.
insufficient training data
Just open it from the directory; why waste time downloading it again? 29:20
I have been using the R1 14B on an MSI GP66 Leopard via the Chatbox app, and it never gives me any hassle there. It's smooth; looks like the Intel machine is not so good for this.
"How do I run Deepseek on the cheap?" <thinking> </thinking>