Debian Bug report logs - #1038326
ITP: transformers -- State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow (it ships LLMs)

Package: wnpp; Maintainer for wnpp is wnpp@debian.org;

Affects: src:transformers

Reported by: "M. Zhou" <lumin@debian.org>

Date: Sat, 17 Jun 2023 04:39:01 UTC

Owned by: Mo Zhou <lumin@debian.org>

Severity: wishlist



Report forwarded to debian-bugs-dist@lists.debian.org, debian-devel@lists.debian.org, debian-ai@lists.debian.org, wnpp@debian.org, Mo Zhou <lumin@debian.org>:
Bug#1038326; Package wnpp. (Sat, 17 Jun 2023 04:39:03 GMT)


Acknowledgement sent to "M. Zhou" <lumin@debian.org>:
New Bug report received and forwarded. Copy sent to debian-devel@lists.debian.org, debian-ai@lists.debian.org, wnpp@debian.org, Mo Zhou <lumin@debian.org>. (Sat, 17 Jun 2023 04:39:03 GMT)


Message #5 received at submit@bugs.debian.org:

From: "M. Zhou" <lumin@debian.org>
Cc: Debian Bug Tracking System <submit@bugs.debian.org>
Subject: ITP: transformers -- State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow (it ships LLMs)
Date: Sat, 17 Jun 2023 00:35:27 -0400
Package: wnpp
Severity: wishlist
Owner: Mo Zhou <lumin@debian.org>
X-Debbugs-Cc: debian-devel@lists.debian.org, debian-ai@lists.debian.org

* Package name    : transformers
  Upstream Contact: HuggingFace
* URL             : https://github.com/huggingface/transformers
* License         : Apache-2.0
  Description     : State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow

I've been using this package for a while.

This package provides a convenient way for people to download and run an LLM locally.
Basically, to run an instruction fine-tuned large language model with 7B parameters,
you will need at least 16 GB of CUDA memory for inference in half/bfloat16 precision.
I have not tried to run any LLM with > 3B parameters on the CPU ... that can be slow.
llama.cpp is a good choice for running LLMs on the CPU, but that library supports fewer
models than this one, and it only supports inference.
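The 16 GB figure above follows from simple arithmetic; as a back-of-the-envelope sketch
(not part of the package itself), the weights of a 7B-parameter model in float16/bfloat16
take 2 bytes per parameter, i.e. about 13 GiB before activations and the KV cache:

```python
def weight_memory_gib(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the model weights, in GiB.

    bytes_per_param: 2 for float16/bfloat16, 4 for float32.
    """
    return n_params * bytes_per_param / 2**30

# A 7B-parameter model in half precision: ~13 GiB for weights alone,
# which is why a 16 GB CUDA device is roughly the practical minimum.
print(round(weight_memory_gib(7e9), 1))
```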

I don't know how many dependencies are still missing, but there should not be too many.
JAX and TensorFlow are optional dependencies, so they can remain absent from our archive.
Anyway, I think running a large language model locally with Debian packages will
be interesting. The CUDA version of PyTorch is already in the NEW queue.

That said, this is actually a very comprehensive library, which provides far more
functionality than just running LLMs.

Thank you for using reportbug




Added indication that 1038326 affects src:transformers. Request was from Chris Hofstaedtler <zeha@debian.org> to control@bugs.debian.org. (Sat, 29 Nov 2025 16:52:58 GMT)




Debian bug tracking system administrator <owner@bugs.debian.org>. Last modified: Fri Jan 2 04:12:01 2026; Machine Name: berlioz
