<!--
.. title: Peer-to-Peer Package Managers can Solve Open Infrastructure's Problems
.. slug: sustainable-open-source-package-management-with-peer-to-peer
.. date: 2026-01-06 04:29:45 UTC
.. tags: 
.. category: 
.. link: 
.. description:
.. type: text
-->

The OpenSSF [published a joint statement](https://openssf.org/blog/2025/09/23/open-infrastructure-is-not-free-a-joint-statement-on-sustainable-stewardship/) in collaboration with numerous public package registries addressing the problem of rising costs without a matching increase in funding or support. These rising costs (largely attributed to the rise of generative, agentic AI) raise serious issues for the current model of public package registries like [crates.io](https://crates.io) and [PyPI](https://pypi.org). The OpenSSF statement advocates commercialization and private partnerships to bolster the current structure. This is risky and out of line with the open source ethos. We can better solve these problems with a peer-to-peer package manager, specifically one built on [Hypercore](https://github.com/datrs/hypercore), written in Rust.

## What's Wrong in Package Registry World

Modern programming languages are expected to have package registries that are fast, open, secure, and (very importantly) free. These package registries are an interesting intersection of vital infrastructure, community governance, and commercial incentives.
Funding comes from commercial entities or from foundations like the [Rust Foundation](https://foundation.rust-lang.org), which manages [crates.io](https://crates.io), or the [Python Software Foundation](https://www.python.org/psf-landing/), which manages [PyPI](https://pypi.org). However, these foundations also depend on commercial philanthropy for their survival. There is no direct profit incentive for a firm to fund a registry, because, after all, registries are expected to be free. A registry outage, however, can be extremely costly.
So foundations (like the Rust Foundation) solicit funding in the form of memberships from firms. The income helps the foundation fund registry maintenance, and in exchange the firm gets influence over the foundation (up to getting a seat on the board of directors).

Despite this funding, it isn't enough.
The management of package registries depends heavily on volunteers; the paid roles that exist are mostly contract work, often paid below the current (US) market rate.
So underpaid, understaffed communities have been carrying the burden of running these registries, and they’ve recently been confronted with higher demand, along with less "open" use, as the OpenSSF statement details:

 > Automated CI systems, large-scale dependency scanners, and ephemeral container builds, which are often operated by companies, place enormous strain on infrastructure. These commercial-scale workloads often run without caching, throttling, or even awareness of the strain they impose. The rise of Generative and Agentic AI is driving a further explosion of machine-driven, often wasteful automated usage, compounding the existing challenges.


 > On top of the increased demand, the nature of the content distributed has been changing: instead of free and open source software, registries are increasingly being used to distribute binary blobs for for-profit proprietary software.

In other words, private companies are exploiting public registries for their own financial gain, while driving up hosting costs for the non-profits that run them.

## OpenSSF Proposes More Commercialization

To solve this problem, the OpenSSF advocates commercializing registries, proposing tiered access models and paywalled features that would give corporate benefactors further influence over package registries.
We don't need to do this! Peer-to-peer is such a perfect fit for these problems, and it avoids further entangling community and commercial interests.
We can fix this by building a hybrid peer-to-peer / server-client package manager, built on [Hypercore](https://github.com/datrs/hypercore), written in [Rust](https://www.rust-lang.org).
Almost all of the “modern expectations” for package registries listed by the OpenSSF could be addressed by this.
Let me explain by going through some of the “expectations” listed by the OpenSSF, explaining what this approach would and wouldn’t do.

1. `Dependency resolution and distribution must be fast, reliable, and global.`

    Peer-to-peer applications are well known for quickly and reliably delivering high-demand content; in fact, the higher the demand, the more efficient the delivery. But what about content that isn’t “high-demand”? We can use a hybrid approach: package registries act as long-term seeders, always available to download a package from (the same way they are now).
    Basically, we would fall back to centralized package management for less popular content.

1. `Publishing must be verifiable, signed, and immutable.`

    This is a [first-class feature](https://github.com/holepunchto/hypercore#features) of Hypercore, which is built on public-key cryptography. When the owner of the secret key appends data, each entry includes an ed25519-signed blake2b hash, stored in a Merkle-tree-like structure.

1. `Continuous integration (CI) pipelines expect deterministic builds with zero downtime.`

    Hypercore has no single point of failure to cause downtime, and the immutability of data provides the determinism.


1. `Security tooling expects an immediate response from public registries.`

    Things like vulnerability advisories, yanked or deprecated versions, and key revocations can all be distributed quickly and easily, whether in a centralized or peer-to-peer way.


1. `Governments and enterprises demand continuous monitoring, traceability, and auditability of systems.`

    Hypercore, being an immutable, cryptographically signed data structure, covers a lot of this. But more interesting are things like attributing a key to a legal entity. The next item is related.


1. `New regulatory requirements, such as the EU Cyber Resilience Act (CRA), are further increasing compliance obligations and documentation demands, adding overhead for already resource-constrained ecosystems.`

    More regulatory requirements. There are many ways you could layer compliance onto a peer-to-peer system. There is still work to do to figure this out, so I'm leaving this up to future implementors.

1. `Infrastructure must be responsive to other types of attacks, such as spam and increased supply chain attacks involving malicious components that need to be removed.`

    These security issues require a centralized authority to address: the registry. It would manage the global package namespace, and so it would handle package removal and typosquatting.

So Hypercore could address some major needs while simultaneously cutting server costs and maintenance work. Addressing compliance, security, and social concerns is up to the community, and Hypercore is a tool in whatever solution they choose.
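To make the verification story concrete, here is a minimal, self-contained Rust sketch of an append-only log where each entry carries a signed hash. This is a toy model of the idea, not the Hypercore API: the hashing and "signing" use the standard library's `DefaultHasher` as a stand-in for blake2b and ed25519, and the `AppendOnlyLog`, `toy_hash`, and `toy_sign` names are invented for illustration. Nothing here is real cryptography.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for blake2b (Hypercore uses blake2b hashes).
fn toy_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// A "signature" here is just a hash over (key, entry hash) -- a
// symmetric placeholder for an ed25519 signature, NOT real crypto.
fn toy_sign(key: u64, entry_hash: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (key, entry_hash).hash(&mut h);
    h.finish()
}

struct Entry {
    data: Vec<u8>,
    hash: u64,
    signature: u64,
}

struct AppendOnlyLog {
    key: u64, // held by the writer; in the real scheme, the secret key
    entries: Vec<Entry>,
}

impl AppendOnlyLog {
    fn new(key: u64) -> Self {
        Self { key, entries: Vec::new() }
    }

    // Only the key holder can append; each entry carries a signed
    // hash, so readers can verify integrity and authorship.
    fn append(&mut self, data: &[u8]) {
        let hash = toy_hash(data);
        let signature = toy_sign(self.key, hash);
        self.entries.push(Entry { data: data.to_vec(), hash, signature });
    }

    // Readers check every entry's hash and signature. (In the real
    // scheme verification uses only the writer's *public* key.)
    fn verify(&self, key: u64) -> bool {
        self.entries.iter().all(|e| {
            toy_hash(&e.data) == e.hash && toy_sign(key, e.hash) == e.signature
        })
    }
}

fn main() {
    let mut log = AppendOnlyLog::new(42);
    log.append(b"tokio 1.48.0");
    log.append(b"tokio 1.48.1");
    assert!(log.verify(42));

    // Tampering with stored data is detectable.
    log.entries[0].data = b"evil payload".to_vec();
    assert!(!log.verify(42));
}
```

The point of the sketch is the shape of the guarantee: because every entry is hashed and signed at append time, any later mutation of the history is detectable by any reader.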


## What would a peer-to-peer package manager look like?

Take Rust’s cargo as a concrete example; here is a simplified version of how it **could** work.

**Publishing** works the same from the developer’s perspective. Under the hood, the registry maintains a Hyperbee (a key/value B-Tree built on Hypercore) keyed by package name. Each entry in that namespace points to a second Hyperbee owned and controlled by the developer, keyed by version. Claiming a new package name means the registry adds an entry pointing to the developer’s public key. Publishing a new version means the developer appends to their own Hyperbee.

Because each Hyperbee is an append-only, cryptographically signed log, every entry is immutable and verifiable by anyone. The registry holds the secret key for the namespace, giving it authority to remove a package name to handle typosquatters or malicious actors. But it can’t alter a developer’s version history, since that Hyperbee belongs to the developer.

**Resolving a dependency** like `tokio = "1.48.0"` means looking up `tokio` in the registry’s namespace Hyperbee to get the developer’s public key, then using that key to find their version Hyperbee, traversing it for `1.48.0`, and downloading the package data from peers.
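The two-level lookup described above can be sketched with plain `BTreeMap`s standing in for Hyperbees. Everything below is illustrative: the keys, package names, and the `resolve` function are invented for this example and correspond to no real registry data or Hypercore API.

```rust
use std::collections::BTreeMap;

// Toy public key; in Hypercore this would be a 32-byte ed25519 key.
type PubKey = u32;

// Registry-owned namespace: package name -> developer's public key.
// Stands in for the registry's Hyperbee keyed by name.
fn registry_namespace() -> BTreeMap<&'static str, PubKey> {
    BTreeMap::from([("tokio", 1001), ("serde", 1002)])
}

// Developer-owned version trees: public key -> (version -> package data).
// Each inner map stands in for a developer's Hyperbee keyed by version.
fn version_trees() -> BTreeMap<PubKey, BTreeMap<&'static str, &'static str>> {
    BTreeMap::from([
        (1001, BTreeMap::from([
            ("1.47.0", "tokio-1.47.0 bytes"),
            ("1.48.0", "tokio-1.48.0 bytes"),
        ])),
        (1002, BTreeMap::from([("1.0.200", "serde bytes")])),
    ])
}

// Resolve `name = version`: namespace lookup for the developer's key,
// then a version lookup in that developer's tree.
fn resolve(name: &str, version: &str) -> Option<&'static str> {
    let key = *registry_namespace().get(name)?;
    version_trees().get(&key)?.get(version).copied()
}

fn main() {
    assert_eq!(resolve("tokio", "1.48.0"), Some("tokio-1.48.0 bytes"));
    assert_eq!(resolve("tokio", "9.9.9"), None);    // unknown version
    assert_eq!(resolve("left-pad", "1.0.0"), None); // unclaimed name
}
```

The split between the two maps mirrors the split in authority: the registry controls only the outer map (the namespace), while each developer controls their own inner map (the version history).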

![Diagram showing the registry’s namespace Hyperbee pointing to developer-owned Hyperbees keyed by version](/hypercore-registry-structure.svg)

**Low-demand packages** are the harder case. P2P only works when peers are online, and an obscure package may have none. The solution is for the registry to seed all packages as a fallback, which is effectively what crates.io already does. Popular packages get delivered peer-to-peer for free; unpopular ones fall back to the registry.

**Hostile networks** are the other edge case. Many corporate networks block p2p traffic entirely. In that case the client falls back to the existing centralized client, plain `cargo` in this example. Any real implementation would need to handle this gracefully, making the hybrid nature explicit rather than a last resort.
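Putting the two fallback cases together, the client's source-selection logic might look like the following sketch. The `Network` struct and `choose_source` function are hypothetical names for this illustration, not part of any existing client.

```rust
// Hybrid download strategy: try peers first, fall back to the
// centralized registry when no peer has the package or the local
// network blocks p2p traffic.

#[derive(Debug, PartialEq)]
enum Source {
    Peers,
    Registry,
}

struct Network {
    p2p_blocked: bool,    // e.g., a corporate firewall
    seeding_peers: usize, // peers currently online with this package
}

fn choose_source(net: &Network) -> Source {
    if net.p2p_blocked || net.seeding_peers == 0 {
        // Low-demand or hostile-network case: the registry always seeds.
        Source::Registry
    } else {
        // Popular package on an open network: fetch from peers.
        Source::Peers
    }
}

fn main() {
    let popular = Network { p2p_blocked: false, seeding_peers: 250 };
    let obscure = Network { p2p_blocked: false, seeding_peers: 0 };
    let corporate = Network { p2p_blocked: true, seeding_peers: 250 };

    assert_eq!(choose_source(&popular), Source::Peers);
    assert_eq!(choose_source(&obscure), Source::Registry);
    assert_eq!(choose_source(&corporate), Source::Registry);
}
```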

## Privacy and Money Making

The OpenSSF proposed "value-added capabilities" and "tiered access models" as ways to generate revenue for package registries, but this risks giving corporate benefactors further influence over the registry, which is precisely the conflict of interest we should be trying to avoid. A peer-to-peer hybrid model offers a cleaner path: charge for the non-p2p option, and keep the p2p path free and open by default.

And they will pay. The biggest reason is privacy: Hypercore is secure but not private, and peers can associate package downloads with IP addresses.
For companies, that's a meaningful exposure: download patterns reveal what dependencies you're pulling, and adversaries could use that to infer what you're building, track your release cadence, or map your internal toolchain.
On top of that, many corporate networks block p2p traffic outright, and compliance teams often require auditable, point-to-point download logs.

This creates a sustainable revenue model without compromising open access: individual developers and open source projects get fast, free, decentralized distribution, while companies that need privacy and auditability pay for centralized access.

## Why we Need Rust

The de facto Hypercore implementation is written in JavaScript.
That works for integrating with npm, but not for something like cargo: you can't practically call into a JavaScript library from Rust.
More broadly, JavaScript has no reliable path to being language-agnostic.
Rust does: it compiles to a C library, and C libraries can be called from virtually any language.
A Rust implementation of Hypercore could be a foundational building block for integrating Hypercore into any existing package ecosystem.
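A minimal illustration of that C-ABI path: with `crate-type = ["cdylib"]` in `Cargo.toml`, a Rust function declared `extern "C"` exports a symbol that C, Python (via `ctypes`), Ruby, or any FFI-capable language can call. The `hc_add` function below is a made-up example, not part of any real Hypercore API.

```rust
// In Cargo.toml:
//   [lib]
//   crate-type = ["cdylib"]
//
// `#[no_mangle]` keeps the symbol name `hc_add` so foreign callers
// can find it; `extern "C"` gives it the C calling convention.

#[no_mangle]
pub extern "C" fn hc_add(a: u64, b: u64) -> u64 {
    a + b
}

fn main() {
    // The same function is also callable as plain Rust.
    assert_eq!(hc_add(2, 3), 5);
}
```

From Python, for instance, the compiled library could be loaded with `ctypes.CDLL("libexample.so")` and `hc_add` called directly; this is the mechanism that would let any package manager's ecosystem bind to a Rust Hypercore.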

For the past few years there has been an effort to rewrite Hypercore from the ground up in Rust, and I've been leading that work. As of early 2026, the major pieces are in place: peer discovery, peer-to-peer replication, and Hypercore itself.
We're now in the process of wiring them together into a cohesive whole, so we can provide a stable, ergonomic API which will feel familiar to developers coming from the JavaScript implementation.

The open source infrastructure that the broader ecosystem depends on is at a crossroads.
If this work matters to you, please get involved: check out [github.com/datrs](https://github.com/datrs) and reach out. If you'd like to contribute financially, consider [sponsoring on GitHub](https://github.com/sponsors/cowlicks).
This is one concrete path forward, and it needs people to build it.
