Reddit’s API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn’t touch Reddit’s servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.

API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.

Self-hosting options:

  • USB drive / local folder (just open the HTML files)
  • Home server on your LAN
  • Tor hidden service (2 commands, no port forwarding needed)
  • VPS with HTTPS
  • GitHub Pages for small archives
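The Tor option above corresponds to a standard onion-service configuration; a sketch, assuming the archive is served by a local web server on port 8080 (the directory path and ports are illustrative, not the project's defaults):

```
# torrc additions — expose a local web server as an onion service
HiddenServiceDir /var/lib/tor/redd-archiver/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated hostname file in the HiddenServiceDir contains the .onion address; no port forwarding or public IP is needed.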

Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is “trust but verify” – it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/
GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d88be8bed7b6afddd4

  • K3CAN@lemmy.radio · 3 hours ago

    Can anyone figure out what the minimum process is to just use the SSG function? I’m having a really hard time trying to understand the documentation.

  • offspec@lemmy.world · 9 hours ago

    It would be neat for someone to migrate this data set to a Lemmy instance

    • communism@lemmy.ml · 3 hours ago

      This is just an archive. No different from using the wayback machine or any other archive of web content.

  • a1studmuffin@aussie.zone · 18 hours ago

    This seems especially handy for anyone who wants a snapshot of Reddit from pre-enshittification and AI era, where content was more authentic and less driven by bots and commercial manipulation of opinion. Just choose the cutoff date you want and stick with that dataset.

  • breakingcups@lemmy.world · 23 hours ago

    Just so you’re aware, it is very noticeable that you also used AI to help write this post and its use of language can throw a lot of people off.

    Not to detract from your project, which looks cool!

    • 19-84@lemmy.dbzer0.com (OP) · 23 hours ago

      Yes I used AI, English is not my first language. Thank you for the kind words!

      • Melvin_Ferd@lemmy.world · 16 hours ago

You’re awesome. AI is fun and there’s nothing wrong with using it, especially how you did. Lemmy was hit hard with AI hate propaganda. China is probably trying to stop its growth and development in other countries, or some stupid shit like that. But you’re good. Fuck them

  • frongt@lemmy.zip · 23 hours ago

    And only a 3.28 TB database? Oh, because it’s compressed. Includes comments too, though.

      • muusemuuse@sh.itjust.works · 14 hours ago

If only I had the space and bandwidth. I would host a mirror via Lemmy and drag the traffic away.

        Actually, isn’t there a way to decentralize this so it can be accessed from regular browsers on the internet? Live content here, archive everywhere.

        • psycotica0@lemmy.ca · 10 hours ago

Someone could format it into essentially static pages and publish it on IPFS. That would probably be the easiest “decentralized hosting” method that remains browsable.

  • 1984@lemmy.today · 8 hours ago

I don’t know if historic data is very interesting. It’s the new content we are interested in…

  • SteveCC@lemmy.world · 23 hours ago

    Wow, great idea. So much useful information and discussion that users have contributed. Looking forward to checking this out.

    • muusemuuse@sh.itjust.works · 14 hours ago

You know what would be a good way to do it? Take all that content and throw it on a federated service like ours. Publicly visible. No bullshit. And no reason to visit Reddit to get that content. Take their traffic away.

  • 19-84@lemmy.dbzer0.com (OP) · 22 hours ago

    PLEASE SHARE ON REDDIT!!! I have never had a reddit account and they will NOT let me post about this!!

    • Bazell@lemmy.zip · edited · 8 hours ago

We can’t share this on Reddit, but we can share it on other platforms. Basically, what you have done is scrape tons of data for AI learning. Something like “create your own AI Redditor”. And greedy Reddit management will dislike it very much, even if you tell them it is for cultural inheritance. Your work is great anyway. Sadly, I do not have enough free space to download and store all this data.

  • Tanis Nikana@lemmy.world · 23 hours ago

    Reddit is hot stinky garbage but can be useful for stuff like technical support and home maintenance.

    Voat and Ruqqus are straight-up misinformation and fascist propaganda, and if you excise them from your data set, your data will dramatically improve.