The short version
MagniSpore is building a long-term knowledge base about mushroom cultivation. The best way to make that knowledge base accurate is to learn from what growers like you actually do — your photos, observations, substrate recipes, and outcomes. If you opt in, we include your de-identified cultivation data in that internal dataset. If you opt out, we don’t. Either way, nothing changes about what the app does for you.
We don’t sell this data, license it to data brokers, or give third-party AI providers a copy for their own training. See “What we will never do” below for the specifics.
What we collect (if you opt in)
Four categories, each independently toggleable in Settings → Privacy:
- Community posts — photos, comments, and reactions you share publicly to the feed. These are visible to other growers regardless of your contribution setting; opting in here specifically lets us use them for internal analysis and training of our machine-learning models.
- Grow journal — logs, observations, and photos from your personal grows. This is your private content; opting in lets us process it through a de-identified pipeline.
- Lab cultures — agar plates, liquid cultures, and grain spawn records from the Lab section.
- Yield & harvest data — weights, timing, biological efficiency, and outcome records.
For photos and profile avatars, we re-encode the image server-side and strip all EXIF, IPTC, and XMP metadata before it reaches storage. What we keep is coarse: capture timestamp, image orientation, color profile, and a device-class label from a fixed set (smartphone, dslr, point-and-shoot, or unknown). GPS coordinates, device serial numbers, and model-specific identifiers (“iPhone 15 Pro,” “Canon EOS R5”) are discarded entirely. If the sanitization pipeline cannot process an image, the upload is rejected rather than stored with metadata intact.
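To make the keep/discard split concrete, here is a minimal sketch of the whitelist logic described above. The field names, classification rules, and function names are illustrative assumptions, not our actual internal code:

```python
# Sketch of the metadata policy: from a parsed metadata dict, keep only
# the coarse whitelisted fields and collapse the camera model to a
# fixed device-class label. All names here are hypothetical.

ALLOWED_FIELDS = {"capture_timestamp", "orientation", "color_profile"}
DEVICE_CLASSES = ("smartphone", "dslr", "point-and-shoot", "unknown")

# Hypothetical substring rules for mapping a model string to a class.
_CLASS_HINTS = {
    "iphone": "smartphone", "pixel": "smartphone", "galaxy": "smartphone",
    "eos": "dslr", "nikon d": "dslr",
    "powershot": "point-and-shoot", "coolpix": "point-and-shoot",
}

def classify_device(model):
    """Map a raw camera-model string to one of the coarse DEVICE_CLASSES."""
    if model:
        lowered = model.lower()
        for hint, device_class in _CLASS_HINTS.items():
            if hint in lowered:
                return device_class
    return "unknown"

def sanitize_metadata(raw):
    """Drop everything except the whitelisted coarse fields.

    GPS coordinates, serial numbers, and the exact model string never
    appear in the returned dict -- only the device-class label does.
    """
    kept = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    kept["device_class"] = classify_device(raw.get("camera_model"))
    return kept
```

The point of the fixed `DEVICE_CLASSES` set is that the output can never be more specific than those four labels, no matter what the camera reports.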
How we use it
- Improve the product. Spot patterns across grows to surface better defaults, better strain recommendations, better contamination warnings.
- Build a cultivation knowledge base. The aggregate community insights on strain pages (average time-to-pins, common substrates, typical BE%) come from this. We only display an aggregate when at least 5 distinct growers have contributed to the underlying calculation; below that threshold we show “insufficient data” rather than a potentially identifying number.
- Train future models. Eventually, we plan to train AI that can diagnose contamination from a photo, recommend next actions from your grow state, and answer questions grounded in real outcomes. Consented data is what makes that possible. Each training dataset we build is versioned and immutable; we keep internal records of which consent state applied to each export, so we can audit what went into any model we shipped.
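The 5-grower threshold described above can be sketched in a few lines. This is a hypothetical helper over illustrative record shapes, not our production query:

```python
MIN_DISTINCT_GROWERS = 5  # below this, show "insufficient data"

def aggregate_be(records):
    """Average BE% across records, but only with enough distinct growers.

    `records` are hypothetical rows like {"grower_id": ..., "be_pct": ...}.
    The check counts distinct growers, not rows, so one prolific grower
    cannot satisfy the threshold alone.
    """
    growers = {r["grower_id"] for r in records}
    if len(growers) < MIN_DISTINCT_GROWERS:
        return "insufficient data"
    avg = sum(r["be_pct"] for r in records) / len(records)
    return f"{avg:.1f}%"
```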
What we will never do
- Sell your data. No data licensing, no ad networks, no data-broker relationships. The business model is the freemium subscription — full stop.
- Give third parties a copy for their use. No downstream buyers, no external data-science vendors who keep a copy. Raw identifiable content stays internal. Aggregate statistics may be published or shared, and a named expert reviewer (for example, a mycologist consulted on dataset labeling) may see content under a written confidentiality agreement — but they don’t get a copy to take home.
- Let third-party AI providers train on your data. Today, we don’t transmit user-contributed content to any third-party AI provider. If that changes in the future, we will only use providers whose terms prohibit training on submitted data — and if that arrangement ever became impossible, we would disable the affected feature or move it to an internally hosted model before continuing.
- Expose your private grows. Private content stays private to you and invited collaborators, subject to standard limits: we may disclose content in response to valid legal process, to investigate suspected abuse of the Service, or in connection with a security incident where disclosure is necessary to protect users. Only content you explicitly post to the community feed is visible to other users.
- Let outside AI crawlers scrape the site. Public community posts are not indexable by commercial AI-training crawlers (for example, GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider). Our robots.txt disallows these crawlers specifically, and we add new ones as they appear.
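The crawler blocking described above looks roughly like this in robots.txt (the live file may list more user agents):

```txt
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Bytespider
Disallow: /
```

robots.txt is a request, not a lock — reputable crawlers honor it, and it is one layer among the server-side controls, not the only one.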
How you control it
- Flip any of the four categories on or off at any time in Settings → Privacy. Each toggle is recorded as a dated event in an append-only history you can view on the same page. This history is retained while your account exists and is deleted when your account is deleted.
- Per-item deletion. Deleting an individual photo, post, or observation from within the app removes it from future training eligibility immediately, the same way revoking the whole category does. You don’t need to revoke a whole category to remove a specific item.
- Signup defaults to opt-in when your IP address at signup geolocates to the United States, and to opt-out everywhere else. You can change this immediately after signup.
- Deleting your account cascades to your content per the retention policy in the Privacy Policy.
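The append-only toggle history can be pictured as a list of dated events per category, where the latest event for a category determines the current state. This is a sketch with illustrative names; the default here is opted out, while the real signup default depends on region as noted above:

```python
from datetime import datetime, timezone

def record_toggle(history, category, enabled):
    """Append a dated consent event; existing entries are never mutated."""
    history.append({
        "category": category,
        "enabled": enabled,
        "at": datetime.now(timezone.utc),
    })

def current_consent(history, category):
    """The latest event for a category wins.

    No event means opted out in this sketch; in production the starting
    state comes from the region-based signup default.
    """
    state = False
    for event in history:  # history is already in append order
        if event["category"] == category:
            state = event["enabled"]
    return state
```

Because revoking consent appends a new event rather than deleting an old one, the history page can always show exactly when each toggle changed.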
One honest caveat about past training
Suppose you opt in today, your photos are used in a training run that produces a model checkpoint we retain, and you opt out a month later. We will stop using your content in future training runs, and any content that was merely staged for export but not yet consumed by a run is still fully revocable. But a model that already incorporated your content cannot have your specific contribution surgically removed. This is a limitation of how neural networks work, not a policy choice.
Practically: your forward-looking consent is always respected immediately. Retroactive removal from models that already shipped is not feasible. If that trade-off is uncomfortable, leave the contribution toggles off and we’ll never include your content in the first place.