Trust is earned, not assumed.
When you’re feeding sensitive code, business ideas, or personal thoughts into an AI, you need to know: where does this data go?
I went down the rabbit hole of Anthropic’s Terms & Conditions. Here’s what I found.
The Core Question
Is my data being used to train Claude?
The answer isn’t a simple yes or no. It depends on your choices—and a few exceptions.
In most cases, though, the answer leans toward no.
When Your Data IS Used for Training
Anthropic will use your conversations if:
- You explicitly opt in — Model improvement is a toggle in your privacy settings. Don't assume it's off by default; go check.
- Safety flags trigger — If their classifiers flag your conversation, it may be used to improve safety models and enforce usage policies. This happens regardless of your settings.
- Special programs — Participation in things like the Trusted Tester Program grants training access.
What gets included? The entire conversation. Any content, custom styles, or preferences you’ve set. Claude for Chrome usage data too.
Worth noting: raw connector content from Google Drive or MCP servers is excluded—unless you directly paste it into the chat.
When Your Data Is NOT Used
Here’s the seemingly good news.
Incognito chats are protected. Even if you have Model Improvement enabled, incognito conversations won’t train future models.
Feedback via thumbs up/down buttons? Stored separately for up to 5 years, de-linked from your user ID.
But if the safety classifiers flag your content, it will still be retained.
How those classifiers are built, though, is another question.
If you insist on using Anthropic, keep on reading.
How to Control Your Settings
On Desktop:
- Click your name in the settings menu
- Navigate to Settings → Privacy
- Toggle “Help Improve Claude” on or off
Direct link: claude.ai/settings/data-privacy-controls
On Mobile:
- Open settings
- Select Settings → Privacy
- Same toggle
Credit where it's due: this is simple. No dark patterns, no buried menus.
The Catch
When you disable training, it only affects new conversations.
Previously stored data and anything already used in training? That ship has sailed.
The Safety Exception
Even with training disabled, safety-flagged conversations may still be reviewed. Anthropic reserves the right to use this data for:
- Internal trust and safety models
- Harmful content detection
- Policy enforcement
- Safety research
This isn’t surprising. Every AI company does this. But transparency matters, and they’re upfront about it.
My Take
Can we trust Anthropic?
Here’s what I see:
The positive:
- Clear documentation of what’s used and what isn’t
- Accessible privacy controls
- Incognito mode actually means something
- They don’t hide behind legal jargon
The concerns:
- Safety exceptions create ambiguity around “disabled” training
- Past data can’t be retroactively excluded
- Default settings matter—and most users never change them
Trust isn’t binary. It’s a spectrum.
Anthropic sits on the more transparent end compared to competitors. They explain their policies in plain language. They give you real controls.
But “trust” and “verify” should always go together.
Practical Recommendations
If you’re using Claude for sensitive work:
- Check your settings — Visit claude.ai/settings/data-privacy-controls right now
- Use incognito for sensitive conversations — It’s the only guaranteed exclusion
- Assume safety reviews happen — Don’t rely on privacy settings for edge cases
- Read the actual policies — Don’t take my word for it
The Bottom Line
Anthropic isn’t perfect. No company is.
But they’ve built a system where informed users can make real choices about their data. That’s more than most can say.
The question isn’t whether Anthropic is trustworthy.
The question is: have you taken the time to configure your trust level?
Sources:
This entire post was dictated (that is, typed on my behalf by the computer) using a tool I made called Privisay.
Check out the demo of the automated post I created for LinkedIn.
Looking for people to try it out.
Hey fellow builders,
I’m looking for some honest feedback/roasting on the core UX of my project, cvcomp.
The Hypothesis
I started building this because I felt that standard resume scanners were optimising for vanity metrics. They give users a "95/100" score based on formatting, even if the content is totally irrelevant to the job they are applying for.
I decided to build a tool that forces Context. It compares the resume strictly against a specific Job Description (JD). If the resume is well-written but misses the specific semantic requirements of the JD, the score drops.
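To make the idea concrete, here is a minimal sketch of one way JD-aware scoring could work. This is purely illustrative and is not cvcomp's actual logic; it uses crude keyword overlap where a real tool would use semantic matching, but it shows why a polished-but-irrelevant resume scores low against a specific JD.

```python
# Hypothetical sketch of JD-aware scoring (NOT cvcomp's real implementation):
# score the resume only against terms drawn from the job description,
# so formatting alone can never earn points.
import re

def jd_aware_score(resume: str, jd: str) -> float:
    """Return the percentage of distinctive JD terms covered by the resume."""
    tokenize = lambda text: re.findall(r"[a-z0-9+#]+", text.lower())
    # Keep only longer tokens as a crude stop-word filter.
    jd_terms = {t for t in tokenize(jd) if len(t) > 3}
    if not jd_terms:
        return 0.0
    covered = jd_terms & set(tokenize(resume))
    return round(100 * len(covered) / len(jd_terms), 1)

jd = "Seeking engineer with Python, Kubernetes and Terraform experience"
relevant = "Built Python services deployed on Kubernetes with Terraform pipelines"
polished_but_off = "Award-winning graphic designer skilled in branding and Figma"

print(jd_aware_score(relevant, jd))          # high: covers most JD terms
print(jd_aware_score(polished_but_off, jd))  # low: well-written but off-target
```

A real implementation would compare embeddings or extracted requirements rather than raw tokens, but the scoring principle (the JD defines the denominator) is the same.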
The UX Challenge (Where I need help)
The biggest friction point in this space is the "Edit Loop." Usually: User scans resume -> Finds gaps -> Opens Word/Canva -> Edits -> Exports PDF -> Re-uploads.
To fix this, I built a Live Editor directly in the browser. You can click "Accept" on AI suggestions or double-click to edit manually, then download the PDF immediately.
What I’d love feedback on:
The Onboarding: Does the requirement to upload a JD immediately feel like too much friction, or does it make sense given the value prop?
The Live Editor: Does the "Accept/Decline" flow feel intuitive, or does it feel like you lack control?
The Tone: Does the "Gap Analysis" feel helpful, or is it too discouraging compared to standard scanners?
I’m trying to validate if this "tough love" approach is better than the "feel good" approach of competitors.
Here is the link: cvcomp.com
Don't hold back: if the UI is confusing or the logic seems off, I want to hear it.
Thanks,
Prashatha Jain