Model Parallelism: Building and Deploying Large Neural Networks (MPBDLNN)


Course Overview

Very large deep neural networks (DNNs), whether applied to natural language processing (e.g., GPT-3), computer vision (e.g., large Vision Transformers), or speech AI (e.g., wav2vec 2.0), have properties that set them apart from their smaller counterparts. As DNNs grow larger and are trained on progressively larger datasets, they can adapt to new tasks with only a handful of training examples, accelerating progress toward artificial general intelligence. Training models with tens to hundreds of billions of parameters on vast datasets is not trivial and requires a unique combination of AI, high-performance computing (HPC), and systems knowledge.

Please note that once a booking has been confirmed, it is non-refundable: a confirmed seat cannot be cancelled and no refund will be issued, regardless of attendance.

Prerequisites

  • Good understanding of PyTorch
  • Good understanding of deep learning and data parallel training concepts
  • Hands-on experience with deep learning and data parallel training is useful, but optional

Price & Delivery Methods

Online training

Duration
1 day

Price
  • On request

Classroom training

Duration
1 day

Price
  • On request

This training is currently not available in the open schedule. However, there is a good chance that we can still offer you a suitable solution. We would be glad to hear your specific requirements. You can reach us at 030 658 2131 or info@flane.nl. We are happy to help!