
TCSV100MPCIE-PB PNY NVIDIA Tesla V100 - GPU computing processor

3,664,895 HUF (2,885,744 HUF + VAT)
Manufacturer part number: TCSV100MPCIE-PB
SKU: TCSV100MPCIE-PB
Availability: Active product
Average rating: Not yet rated
Manufacturer: PNY

Description and Specifications

NVIDIA Tesla V100, 16 GB CoWoS HBM2, 4096-bit, 5120 CUDA, PCI Express 3.0 x16, 8-pin, 111.15x267.7 mm

PNY TCSV100MPCIE-PB. Graphics processor family: NVIDIA, Graphics processor: Tesla V100. Discrete graphics adapter memory: 16 GB, Graphics adapter memory type: High Bandwidth Memory 2 (HBM2), Memory bus: 4096 bit. Interface type: PCI Express x16 3.0. Cooling type: Passive
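The 4096-bit HBM2 bus listed above determines the card's theoretical memory bandwidth. As a rough sketch, assuming an HBM2 per-pin data rate of about 1.75 Gbit/s (an assumption; the listing does not state the memory clock), the peak bandwidth works out to roughly 900 GB/s:

```python
# Theoretical peak memory bandwidth from the listed specs.
# The 4096-bit bus width is from the product listing; the
# ~1.75 Gbit/s per-pin HBM2 data rate is an assumption.
BUS_WIDTH_BITS = 4096   # from the specs above
DATA_RATE_GBPS = 1.75   # assumed HBM2 per-pin rate (Gbit/s)

bandwidth_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS / 8  # bits -> bytes
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 896 GB/s
```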

NVIDIA Tesla V100 is the world's most advanced data center GPU, built to accelerate AI, HPC, and graphics.

Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.
Tesla V100 is the flagship product of the Tesla data center computing platform for deep learning, HPC, and graphics. The Tesla platform accelerates over 450 HPC applications and every major deep learning framework. It is available everywhere from desktops to servers to cloud services, delivering both dramatic performance gains and cost-savings opportunities.

PNY provides unsurpassed service and commitment to its professional graphics customers, offering a 3-year warranty, pre- and post-sales support, dedicated Quadro Field Application Engineers, and direct tech-support hotlines.

  • AI training
    Tesla V100 is a GPU designed to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink connects multiple V100 GPUs to create powerful computing servers.
  • AI inference
    Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, the Tesla V100 GPU delivers higher inference performance than a CPU server. This giant leap in throughput and efficiency will make the scale-out of AI services practical.
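For context on the figures above: the 100+ TFLOPS number refers to Tensor Core deep-learning throughput, while standard FP32 throughput follows from the 5120 CUDA cores in the specs. A minimal sketch, assuming a boost clock of about 1.38 GHz for the PCIe variant (an assumption; the clock is not stated in the listing):

```python
# FP32 peak throughput: cores x 2 FLOPs/cycle (fused multiply-add) x clock.
CUDA_CORES = 5120        # from the product specs
BOOST_CLOCK_GHZ = 1.38   # assumed boost clock of the PCIe V100

fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_GHZ / 1000  # GFLOPS -> TFLOPS
print(f"FP32 peak: {fp32_tflops:.1f} TFLOPS")  # ~14.1 TFLOPS
```

The much higher deep-learning figure comes from the Tensor Cores, which execute mixed-precision matrix multiply-accumulate operations at far greater throughput than the FP32 pipeline.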

