DeepSeek-R1 Crash Course



Learn how to use DeepSeek-R1 in this crash course for beginners. It covers the innovative reinforcement learning approach that powers DeepSeek-R1 and how the model achieves performance comparable to industry giants like OpenAI’s o1 at a fraction of the cost. You’ll also learn about its architecture, practical applications, and how to deploy the model to leverage its advanced reasoning skills in your own projects.

✏️ Course developed by Andrew Brown from @ExamProChannel.

❤️ Support for this channel comes from our friends at Scrimba – the coding platform that’s reinvented interactive learning: https://scrimba.com/freecodecamp

⭐️ Contents ⭐️
00:00 Introduction
01:01 DeepSeek Overview
06:13 DeepSeek.com V3
15:36 DeepSeek R1 via Ollama
15:36 DeepSeek R1 via LM Studio
52:12 DeepSeek via Hugging Face Transformers
1:26:06 Thoughts and Conclusions
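A recurring question in the local-deployment chapters (Ollama, LM Studio) is whether a given R1 variant will even fit in your RAM or VRAM. As a rough back-of-envelope rule (my own sketch, not a figure from the course), required memory is roughly parameter count times bytes per weight at the chosen quantization, plus some overhead for the KV cache and runtime buffers:

```python
def estimated_memory_gb(params_billions, quant_bits=4, overhead=1.2):
    """Back-of-envelope memory needed to load a model locally.

    params_billions: model size in billions of parameters
    quant_bits: bits per weight (16 = fp16, 8 or 4 = common quantizations)
    overhead: assumed fudge factor for KV cache and runtime buffers
    """
    weights_gb = params_billions * quant_bits / 8  # 1B params at 8 bits ~= 1 GB
    return weights_gb * overhead

# A 7B distill at 4-bit quantization: ~4.2 GB, workable on most laptops.
print(round(estimated_memory_gb(7, 4), 1))    # 4.2
# The full 671B R1 at 4-bit: ~400 GB, far beyond consumer hardware.
print(round(estimated_memory_gb(671, 4), 1))  # 402.6
```

This is why the course (and most commenters) stick to the distilled 1.5B–14B variants for local runs.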

🎉 Thanks to our Champion and Sponsor supporters:
👾 Drake Milly
👾 Ulises Moralez
👾 Goddard Tan
👾 David MG
👾 Matthew Springman
👾 Claudio
👾 Oscar R.
👾 jedi-or-sith
👾 Nattira Maneerat
👾 Justin Hual

Learn to code for free and get a developer job: https://www.freecodecamp.org

Read hundreds of articles on programming: https://freecodecamp.org/news


33 thoughts on “DeepSeek-R1 Crash Course”

  1. This is the dumbest video; I can just ask DeepSeek to teach me how to use itself. All these tech tutorials are going to disappear soon; the tech industry is doomed.

  2. 00:03 – Introduction to DeepSeek and its functionalities.
    02:12 – DeepSeek R1 excels in text generation with significantly reduced costs.
    06:36 – Overview of DeepSeek-R1's functionality and setup for language learning.
    08:58 – The process of downloading and uploading files in DeepSeek-R1.
    13:37 – DeepSeek-R1 effectively transcribes Japanese text from images.
    15:42 – Introduction to DeepSeek's hardware capabilities and setup process.
    19:32 – Discusses hardware requirements for running AI models effectively.
    21:13 – Important considerations for running large models include RAM and GPU requirements.
    24:36 – Overview of different ways to utilize AI models like DeepSeek-R1.
    26:20 – Installing and configuring the DeepSeek model in LM Studio.
    30:05 – The model effectively assists language learning with built-in reasoning.
    31:42 – The agent demonstrates reasoning capabilities during tasks.
    35:03 – Loading Llama distilled model shows resource management challenges.
    36:42 – Demonstration of DeepSeek-R1 model adjustments and resource management challenges.
    40:19 – Exploring GPU offloading for better model performance.
    42:19 – Understanding the DeepSeek-R1 model and its comparisons to other AI models.
    46:15 – Understanding hardware monitoring and optimization for local machines.
    48:08 – Configuring GPU and CPU usage for deep learning models.
    51:36 – Optimizing AI models for both GPU and non-GPU setups.
    53:19 – Exploring DeepSeek's models and GPU performance limitations.
    56:49 – Overview of setting up DeepSeek-R1 with Transformers in VS Code.
    58:34 – Setting up a new IPython environment in Jupyter Notebooks.
    1:02:13 – Installing necessary packages for DeepSeek setup.
    1:03:52 – Installing necessary components for Transformers and DeepSeek setup.
    1:07:20 – Installing TensorFlow and PyTorch for DeepSeek usage.
    1:09:21 – Installing TensorFlow and PyTorch for DeepSeek-R1 Integration
    1:13:08 – Model inference requires a Hugging Face API key.
    1:15:10 – Troubleshooting HF token integration for downloading models.
    1:18:59 – Managing resource exhaustion while using DeepSeek-R1 tools.
    1:21:40 – Challenges in running models with limited resources.
    1:25:18 – Understanding memory usage and optimization for DeepSeek-R1 models.
    1:27:04 – Challenges of running deep learning models on consumer hardware.
    1:30:50 – Future improvements in computing may reduce costs.
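The "reasoning" these timestamps keep referring to shows up in the raw model output: the distilled R1 models typically emit their chain of thought inside a `<think>…</think>` block before the final answer. If you script against a local R1 (via Ollama's API, LM Studio's server, or Transformers), a small helper like this (my own sketch, not from the video) separates the two:

```python
import re

THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw):
    """Split an R1-style completion into (reasoning, answer).

    Returns an empty reasoning string if no <think> block is present.
    """
    m = THINK_RE.search(raw)
    if not m:
        return "", raw.strip()
    reasoning = m.group(1).strip()
    answer = (raw[:m.start()] + raw[m.end():]).strip()
    return reasoning, answer

sample = "<think>2 + 2 is 4.</think>The answer is 4."
print(split_reasoning(sample))  # ('2 + 2 is 4.', 'The answer is 4.')
```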

  3. Bruh, this is just a model-integration course, nothing more than that. And here I thought they would teach us how to build an LLM like DeepSeek, or maybe a DeepSeek optimization tutorial. Bruh…

  4. So I got DeepSeek R1-1.5B to run (albeit a pretty subpar experience) on my four-year-old Surface Pro in about 20 minutes… Why did I just watch this man struggle for over an hour, on a much better setup, and fail to do anything beyond what I accomplished? Sometimes videos are just scrap, man.

  5. This is the worst tech tutorial ever. In the last part (Jupyter notebooks), you have no idea what you are talking about; you spend dozens of minutes trying to understand basic error messages and keep it all in the final cut. At the end it's still not working, you just give up and end the video as is. Is this a joke video?

  6. Here's a crash course:
    "I was wondering about the 1989 Tiananmen Square protests and massacre. I would like some reliable information"
    "Sorry, that's beyond my current scope. Let’s talk about something else."

  7. I have been using R1 14B on an MSI GP66 Leopard through the Chatbox app, and it never gives me any hassle there. It is smooth; looks like an Intel machine is not so good for this.

