
Condition: Used
2× Tesla P100 (16 GB HBM2, ECC): ideal for training and fine-tuning heavy models in PyTorch and TensorFlow/JAX. They support quantization (PTQ, QAT, and int8) to reduce model size and fit models into VRAM without significantly sacrificing performance.

2× Tesla P40 (24 GB GDDR5, ECC): well suited to inference and to virtualizing production workloads. The large memory capacity enables bigger batch sizes, multiple VMs, and complex RAG and embedding pipelines. They can run LLMs and chatbots locally, with no cloud costs, using tools such as Ollama, Hugging Face Transformers, and Tokenizers. Build your own LLM pipeline with embeddings, indexing, and RAG without cloud expenses (see the minimal sketch below).

Special price (complete bundle): €1,800. Priority given to buyers purchasing all four cards at once. Hand delivery available in Lisbon and Santarém, or shipping via postal service at the buyer's expense.
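A minimal sketch of the kind of local setup these cards target: it queries a locally running Ollama server over Ollama's default HTTP endpoint. The model name "llama3" and the prompt are illustrative assumptions, not part of the listing; the sketch assumes Ollama is installed and that model has already been pulled.

# Minimal sketch: chat with a locally hosted model through Ollama's HTTP API.
# Assumes `ollama serve` is running and `ollama pull llama3` has been done;
# the model name is an assumption for illustration only.
import json
import urllib.request

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("In one sentence, what is retrieval-augmented generation?"))

Swapping the endpoint for a Hugging Face Transformers pipeline, or adding a local embedding index for RAG, follows the same pattern: everything runs on the cards themselves, so there are no per-token cloud charges.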
