After more than ten years, I’m discontinuing Scrapoxy. The first commit dates back to August 31, 2015, and the first public demo followed a week later at HumanTalks Paris. It’s been a long run.
I want to thank everyone who used Scrapoxy over the years, and especially those who contributed to it. This project sparked great conversations, took me to give talks in more than fifteen countries, and genuinely helped teams operate web scraping at scale. I’m proud of what it became.
What happened
Scrapoxy started as a personal open‑source project. Over time, it grew into production‑critical infrastructure for many companies. With that growth came ongoing demands: feature requests, user support, operational responsibility, and recurring infrastructure costs.
To give you a sense of scale: over its lifetime, Scrapoxy served 1,742 users, tracked 19 million IPs, handled 115 billion requests, and processed nearly 12 petabytes of data across every country in the world and over 122,000 cities. Behind all of that: 387,000 lines of code written over the years, and backend infrastructure I paid for out of pocket.
Running Scrapoxy was never just about maintaining code. The backend services, GeoIP resolution, proxy validation, and monitoring all incurred real, recurring expenses. For years, I covered those costs entirely myself.
When it became clear that this was no longer sustainable as a side project, I introduced paid enterprise support. Later, I restricted the source code after parts of the project were reused verbatim by commercial products, while Scrapoxy itself remained free to use. Only support and dedicated infrastructure were paid.
The outcome was simple: very few users were willing to pay. Despite Scrapoxy being used in production and embedded in critical pipelines, most companies treated it as free infrastructure they could depend on indefinitely. If you’ve seen the xkcd comic about all of modern digital infrastructure resting on a project thanklessly maintained by one random person, that was Scrapoxy.

This is not an accusation. It is a common structural reality of open source when a single maintainer also bears recurring operational costs.
The market reality
Looking back, I also have to be honest about the market itself. You might think web scraping is booming with the rise of AI, but the reality is different. Companies building LLMs and large‑scale AI models run their own scraping infrastructure internally; they don’t need external proxy management tools. The high‑volume scraping market as we knew it, crawling entire websites at scale, is a niche. The total addressable market for proxy management software is small, and the economics are tough: companies already pay heavily for the proxies themselves, so adding another layer of infrastructure cost on top is a hard sell. Large organizations either build their own internal tools or use open source without paying. That’s not a complaint; it’s just how this market works.
Web scraping has always been a passion for me. One of the clearest lessons from this journey is that passion doesn’t always convert into a sustainable business, no matter how good the product is or how much people rely on it. The market has to be there, and for proxy management tooling, it wasn’t, at least not at a scale that could support a solo maintainer covering real infrastructure costs.
What’s next
I’m moving on to other projects and opportunities. This is not a pause, a transition, or a handoff. Scrapoxy is being discontinued.
For existing enterprise support subscribers, nothing changes until the end of your contract. You will continue to have access to your dedicated version, including its backend infrastructure, and will receive support under the same terms. After that, Scrapoxy shuts down for good.
For everyone else, you should plan to migrate. Docker images have been removed from public registries, public documentation has been taken offline, and shared infrastructure has been shut down for non‑paying users. A separate Q&A details the operational implications.
There is no community takeover planned, no new maintainer, and no replacement service provided by me.
Thank you to everyone who used, discussed, improved, or supported Scrapoxy over the years. I wish you the best going forward.
Fabien, February 6, 2026