Scraper policy
Peptide Intelligence is a research-peptide price comparison and editorial site. To keep our pricing data current we run a small number of automated scrapers that read publicly visible product pages on vendor websites on a fixed per-vendor schedule, typically once a week (see How often we read, below).
This page describes exactly what those scrapers do. Vendors who would prefer not to be included can opt out using the robots.txt rule described under Opting out, below; no email or manual contact is required.
Who we are
Operator: Eugene Bidchenco. Project repository: github.com/WhiteShoeBanker.
User-Agent
All production scraper traffic from this site is sent with the following User-Agent string:
peptide-intel/1.0 (+https://peptide-intel-omega.vercel.app/scraper-policy) - research peptide price comparison
Traffic from this project that does not carry that string is either occasional, manually issued reconnaissance (its User-Agent is prefixed peptide-intel-recon/) or, if it carries some other identifier, not from us.
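For illustration, here is a minimal sketch of how our requests might carry this header, using the third-party requests client (the fetch helper is hypothetical; any HTTP client would work the same way):
import requests

USER_AGENT = (
    "peptide-intel/1.0 (+https://peptide-intel-omega.vercel.app/scraper-policy)"
    " - research peptide price comparison"
)

def fetch(url: str) -> requests.Response:
    # Every production request carries the same identifying User-Agent,
    # so vendors can filter or rate-limit us by that string alone.
    return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30)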
What we read
Only publicly visible product information from vendor product detail pages and product index pages: product names, sizes, prices, currency, and stock status. Where vendors expose structured data (schema.org Product / ProductGroup JSON-LD) we read that; otherwise we parse the same HTML a customer's browser receives.
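Here is a minimal sketch of the structured-data path, using the third-party beautifulsoup4 parser (the helper name is ours, not a published API; real pages sometimes nest Product nodes inside an @graph wrapper, which the sketch unwraps):
import json
from bs4 import BeautifulSoup  # third-party (beautifulsoup4); any HTML parser works

def extract_products(html: str) -> list[dict]:
    # Prefer schema.org JSON-LD when the vendor exposes it; otherwise we
    # fall back to ordinary HTML parsing (fallback not shown here).
    products = []
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict):
            data = data.get("@graph", [data])  # unwrap @graph if present
        if not isinstance(data, list):
            continue
        for node in data:
            if isinstance(node, dict) and node.get("@type") in ("Product", "ProductGroup"):
                products.append(node)
    return products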
We do not read pages behind login. We do not bypass paywalls or access controls. We do not run browser automation to defeat anti-bot measures. We do not request URLs that the vendor's robots.txt disallows for User-agent: * or for User-agent: peptide-intel/1.0.
How often we read
Scrape cadence is per vendor:
- Pure Rawz: daily at 03:00 UTC.
- Core Peptides: weekly on Sundays at 04:30 UTC and 05:00 UTC (split into two crons for runtime budget).
- SwissChems: weekly on Sundays at 05:30 UTC (peptides category only — SARMs, nootropics, PCT, bioregulators, bundles, and powders are not scraped).
- Ascension Peptides: weekly on Sundays at 06:00 UTC and 06:30 UTC (split into two crons for runtime budget; we honour their robots.txt Crawl-delay of 10 seconds).
- Future vendors: weekly unless otherwise noted.
One run reads the product index (paginating only as far as needed to enumerate products) plus at most one detail page per product. We do not crawl the full site and do not follow tag, category, or filter links.
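For readers who think in cron expressions, the schedule above maps onto standard five-field entries along these lines (a sketch only; the job names are hypothetical, times are UTC, and day-of-week 0 is Sunday):
# Per-vendor schedule as standard five-field cron expressions
# (minute hour day-of-month month day-of-week), all in UTC.
VENDOR_CRONS = {
    "pure-rawz":           "0 3 * * *",   # daily at 03:00
    "core-peptides-run-1": "30 4 * * 0",  # Sundays at 04:30
    "core-peptides-run-2": "0 5 * * 0",   # Sundays at 05:00
    "swisschems":          "30 5 * * 0",  # Sundays at 05:30
    "ascension-run-1":     "0 6 * * 0",   # Sundays at 06:00
    "ascension-run-2":     "30 6 * * 0",  # Sundays at 06:30
}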
Throttle
We wait at least 5 seconds between requests to the same domain. If a vendor's robots.txt declares a longer Crawl-delay, we honour the larger value. We do not run requests in parallel against the same vendor.
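A minimal sketch of that pacing logic, assuming requests to a given vendor are issued from a single worker (the names are ours, not a published API):
import time

MIN_DELAY_SECONDS = 5.0
_last_request: dict[str, float] = {}  # domain -> monotonic time of last request

def throttle(domain: str, crawl_delay: float | None) -> None:
    # The effective delay is our 5-second floor or the vendor's robots.txt
    # Crawl-delay, whichever is larger. Requests to a domain are serialized,
    # so this sleep is the only pacing mechanism needed.
    delay = max(MIN_DELAY_SECONDS, crawl_delay or 0.0)
    wait = _last_request.get(domain, 0.0) + delay - time.monotonic()
    if wait > 0:
        time.sleep(wait)
    _last_request[domain] = time.monotonic()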
Robots.txt
We re-fetch robots.txt at the start of every scrape run and respect its Disallow and Crawl-delay directives, both for User-agent: * and for any rule block specifically targeting peptide-intel/1.0. A robots.txt change takes effect on our next scheduled run.
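A minimal sketch of that check using Python's standard-library robotparser (the domain and URL are placeholders). One caveat worth noting: the stdlib matches rule blocks against the product token, so a block addressed to peptide-intel/1.0 exactly may need a stricter custom matcher than shown here:
from urllib.robotparser import RobotFileParser

AGENT = "peptide-intel"  # stdlib matches robots.txt blocks by product token

def load_robots(domain: str) -> RobotFileParser:
    # Fetched fresh at the start of every scrape run, so a vendor's
    # robots.txt change takes effect by the next scheduled run at the latest.
    rp = RobotFileParser(f"https://{domain}/robots.txt")
    rp.read()
    return rp

rp = load_robots("example-vendor.com")
if rp.can_fetch(AGENT, "https://example-vendor.com/products/example"):
    delay = rp.crawl_delay(AGENT)  # None unless a Crawl-delay is declared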
Opting out
Vendors who do not wish to be scraped can add the following two lines to their robots.txt:
User-agent: peptide-intel/1.0
Disallow: /
Our scraper detects this on its next scheduled run and stops fetching pages from that domain. No email or manual contact is needed; the file is the source of truth.
Data we keep
We store a snapshot of each product's public price, size, currency, and stock status, plus the URL we fetched and a timestamp. We do not store, redistribute, or republish vendor content (descriptions, images, formulas, or other prose) beyond the structured fields needed for price comparison.
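As a sketch, the retained record amounts to a flat structure like the following (field names are illustrative, not a published schema):
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class PriceSnapshot:
    # The complete set of fields we retain per product per run; no vendor
    # prose, images, or formulas are stored.
    vendor: str
    product_name: str
    size: str             # e.g. "10mg"
    price: float
    currency: str         # ISO 4217 code, e.g. "USD"
    in_stock: bool
    source_url: str       # the URL we fetched
    fetched_at: datetime  # UTC timestamp of the fetch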