gloo-cuda 0.0~git20230519.597accf-2build1 source package in Ubuntu

Changelog

gloo-cuda (0.0~git20230519.597accf-2build1) noble; urgency=medium

  * No-change rebuild against libopenmpi3t64

 -- Steve Langasek <email address hidden>  Wed, 13 Mar 2024 18:25:52 +0000

Upload details

Uploaded by:
Steve Langasek
Uploaded to:
Noble
Original maintainer:
Ubuntu Developers
Architectures:
any
Section:
misc
Urgency:
Medium

Downloads

File Size SHA-256 Checksum
gloo-cuda_0.0~git20230519.597accf.orig.tar.xz 180.8 KiB 68f7bb91c706808d653cb4c9f81537f7c8a732544f76f992bf4e8d7be29c803a
gloo-cuda_0.0~git20230519.597accf-2build1.debian.tar.xz 5.3 KiB 1dea0f896b860bdf65f6b73f0559cb1b268d59756a215ab270690bae6f32241d
gloo-cuda_0.0~git20230519.597accf-2build1.dsc 2.4 KiB 034e262e9732aa8015ea941d1def020a9d8bd2a14126624018ed5f284fc55318

Binary packages built by this source

libgloo-cuda-0: Collective communications library (shared object)

 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 When the InfiniBand transport is used, GPUDirect can be used to
 accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with system
 memory buffers, and one that works with NVIDIA GPU memory buffers. In the
 latter case, it is not necessary to copy memory between host and device;
 this is taken care of by the algorithm implementations.
 .
 This package ships the shared object for Gloo.
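
 For orientation, the following is a minimal sketch of a host-memory
 allreduce with Gloo, loosely following the upstream rendezvous example.
 The rank, world size, network interface, and store path are placeholders,
 and exact header or signature details may differ between Gloo versions.

#include <array>
#include <memory>
#include <vector>

#include <gloo/allreduce_ring.h>
#include <gloo/rendezvous/context.h>
#include <gloo/rendezvous/file_store.h>
#include <gloo/transport/tcp/device.h>

int main() {
  const int rank = 0;  // placeholder: this process's rank within the group
  const int size = 2;  // placeholder: total number of participating processes

  // TCP transport; an InfiniBand deployment would create an ibverbs
  // device instead (gloo/transport/ibverbs/device.h).
  gloo::transport::tcp::attr attr;
  attr.iface = "eth0";  // placeholder network interface
  auto dev = gloo::transport::tcp::CreateDevice(attr);

  // Rendezvous via a filesystem path shared by all participants (placeholder).
  auto store = gloo::rendezvous::FileStore("/tmp/gloo");

  auto context = std::make_shared<gloo::rendezvous::Context>(rank, size);
  context->connectFullMesh(store, dev);

  // Sum a small system-memory buffer across all participants, in place.
  std::array<float, 4> data = {1.f, 2.f, 3.f, 4.f};
  std::vector<float*> ptrs = {data.data()};
  gloo::AllreduceRing<float> allreduce(context, ptrs, static_cast<int>(data.size()));
  allreduce.run();
  return 0;
}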

libgloo-cuda-0-dbgsym: debug symbols for libgloo-cuda-0
libgloo-cuda-dev: Collective communications library (development files)

 Gloo is a collective communications library. It comes with a number of
 collective algorithms useful for machine learning applications. These
 include a barrier, broadcast, and allreduce.
 .
 Transport of data between participating machines is abstracted so that
 IP can be used at all times, or InfiniBand (or RoCE) when available.
 When the InfiniBand transport is used, GPUDirect can be used to
 accelerate cross-machine GPU-to-GPU memory transfers.
 .
 Where applicable, algorithms have an implementation that works with system
 memory buffers, and one that works with NVIDIA GPU memory buffers. In the
 latter case, it is not necessary to copy memory between host and device;
 this is taken care of by the algorithm implementations.
 .
 This package ships the development files.
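
 The CUDA-aware counterparts of these algorithms accept device pointers
 directly. The sketch below assumes gloo/cuda_allreduce_ring.h provides
 CudaAllreduceRing with the same (context, ptrs, count) shape as the host
 variant, and that the context has already been connected exactly as in
 the earlier example; error checking is omitted for brevity.

#include <memory>
#include <vector>

#include <cuda_runtime.h>
#include <gloo/cuda_allreduce_ring.h>
#include <gloo/rendezvous/context.h>

// Runs an in-place sum over a GPU-resident buffer. `context` is assumed to
// be a fully connected Gloo context (see the host-memory example above).
void allreduceOnDevice(const std::shared_ptr<gloo::rendezvous::Context>& context) {
  const int count = 4;

  // Allocate and fill a device buffer. The algorithm is handed raw device
  // pointers; the caller never stages copies between host and device.
  float* devPtr = nullptr;
  cudaMalloc(reinterpret_cast<void**>(&devPtr), count * sizeof(float));
  const float host[4] = {1.f, 2.f, 3.f, 4.f};
  cudaMemcpy(devPtr, host, sizeof(host), cudaMemcpyHostToDevice);

  std::vector<float*> ptrs = {devPtr};
  gloo::CudaAllreduceRing<float> allreduce(context, ptrs, count);
  allreduce.run();

  cudaFree(devPtr);
}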