Ari Nakos
@ari
8 hours ago
Thoughts on Anthropic

Trust is earned, not assumed.

When you’re feeding sensitive code, business ideas, or personal thoughts into an AI, you need to know: where does this data go?

I went down the rabbit hole of Anthropic’s Terms & Conditions. Here’s what I found.

The Core Question

Is my data being used to train Claude?

The answer isn't a simple yes or no. It depends on your choices, plus a few exceptions.

Mostly, though, it seems to be a no.

When Your Data IS Used for Training

Anthropic will use your conversations if:

  1. You explicitly opt in. Model improvement is a toggle in your privacy settings. Don't assume it's off by default; go and check.
  2. Safety flags trigger. If their classifiers flag your conversation, it may be used to improve safety models and enforce usage policies. This happens regardless of your settings.
  3. Special programs. Participation in things like the Trusted Tester Program grants training access.

What gets included? The entire conversation. Any content, custom styles, or preferences you’ve set. Claude for Chrome usage data too.

Worth noting: raw connector content from Google Drive or MCP servers is excluded—unless you directly paste it into the chat.

When Your Data Is NOT Used

Here’s the seemingly good news.

Incognito chats are protected. Even if you have Model Improvement enabled, incognito conversations won’t train future models.

Feedback via thumbs up/down buttons? Stored separately for up to 5 years, de-linked from your user ID.

But if the safety classifiers flag your content, that content will be kept.

How are those classifiers built, though?

If you insist on using Anthropic, keep reading.
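
Anthropic doesn't publish the internals of its classifiers or retention pipeline, so treat the following as nothing more than a reading aid: a toy Python sketch of the rules covered so far, with every name invented by me.

```python
# Toy summary of the policy rules described in this post. This is NOT
# Anthropic's code or API; the names and logic are just my reading of
# their documentation.

def used_for_training(
    opted_in: bool,        # "Help Improve Claude" toggle (rule 1)
    safety_flagged: bool,  # a safety classifier flagged the chat (rule 2)
    trusted_tester: bool,  # special programs, e.g. Trusted Tester (rule 3)
    incognito: bool,       # incognito chats don't train future models
) -> bool:
    if safety_flagged:
        # Rule 2 applies "regardless of your settings". Whether it also
        # overrides incognito isn't spelled out, as far as I can tell.
        return True
    if incognito:
        return False  # incognito wins even if the toggle is on
    return opted_in or trusted_tester
```

On this reading, the only guaranteed exclusion is an unflagged incognito chat, which is exactly where the recommendations further down land.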

How to Control Your Settings

On Desktop:

  1. Click your name to open the menu
  2. Navigate to Settings → Privacy
  3. Toggle “Help Improve Claude” on or off

Direct link: claude.ai/settings/data-privacy-controls

On Mobile:

  1. Open settings
  2. Select Settings → Privacy
  3. Same toggle

This is simple. No dark patterns, no buried menus. The only thing you need to bring is your own suspicion.

The Catch

When you disable training, it only affects new conversations.

Previously stored data and anything already used in training? That ship has sailed.

The Safety Exception

Even with training disabled, safety-flagged conversations may still be reviewed. Anthropic reserves the right to use this data for:

  • Internal trust and safety models
  • Harmful content detection
  • Policy enforcement
  • Safety research

This isn't surprising; most AI companies do something similar. But transparency matters, and Anthropic is upfront about it.

My Take

Can we trust Anthropic?

Here’s what I see:

The positive:

  • Clear documentation of what’s used and what isn’t
  • Accessible privacy controls
  • Incognito mode actually means something
  • They don’t hide behind legal jargon

The concerns:

  • Safety exceptions create ambiguity around “disabled” training
  • Past data can’t be retroactively excluded
  • Default settings matter—and most users never change them

Trust isn’t binary. It’s a spectrum.

Anthropic sits on the more transparent end compared to competitors. They explain their policies in plain language. They give you real controls.

But “trust” and “verify” should always go together.

Practical Recommendations

If you’re using Claude for sensitive work:

  1. Check your settings — Visit claude.ai/settings/data-privacy-controls right now
  2. Use incognito for sensitive conversations — It’s the only guaranteed exclusion
  3. Assume safety reviews happen — Don’t rely on privacy settings for edge cases
  4. Read the actual policies — Don’t take my word for it

The Bottom Line

Anthropic isn’t perfect. No company is.

But they’ve built a system where informed users can make real choices about their data. That’s more than most can say.

The question isn’t whether Anthropic is trustworthy.

The question is: have you taken the time to configure your trust level?

Ari Nakos
@ari
9 hours ago
A dictation tool for macOS

This entire message was dictated (typed on my behalf by the computer) using a tool I made called Privisay.

Check out the demo for the automated post I created for LinkedIn.

Looking for people to try it out.

Vincent
@vincent 🇧🇪
9 months ago
Promoted #showcases
7,458 Startup Founders Will See Your Product This Week | Advertise on Huzzler

Reach thousands of active founders looking for tools to solve their problems. Our Featured Product placement guarantees premium visibility, with 7,458 weekly impressions for post ads (like the one you are reading right now).

Get direct access to your perfect target audience - people actively building, launching, and growing startups who are ready to invest in solutions like yours. Limited weekly slots available.

Reserve yours now at huzzler.so/advertise

Prashastha Jain
@prashastha-jain
6 days ago
Feedback Request: I ditched "Generic Scores" for strict "JD Context"

Hey fellow builders,

I’m looking for some honest feedback/roasting on the core UX of my project, cvcomp.

The Hypothesis

I started building this because I felt that standard resume scanners were optimising for vanity metrics. They give users a "95/100" score based on formatting, even if the content is totally irrelevant to the job they are applying for.

I decided to build a tool that forces Context. It compares the resume strictly against a specific Job Description (JD). If the resume is well-written but misses the specific semantic requirements of the JD, the score drops.
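
To make the idea concrete, here is a simplified sketch of the kind of scoring I mean. This is not cvcomp's production code; it leans on the open-source sentence-transformers library and a generic embedding model.

```python
# Sketch of "JD context" scoring: semantic overlap with a specific job
# description, not formatting. Not cvcomp's real implementation.
# Assumes: pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

def jd_context_score(resume_text: str, jd_text: str) -> float:
    """Score a resume against a JD's semantic content, 0..100."""
    # Treat each non-empty JD line as one requirement to cover.
    requirements = [ln for ln in jd_text.splitlines() if ln.strip()]
    if not requirements:
        return 0.0
    req_emb = model.encode(requirements, convert_to_tensor=True)
    resume_emb = model.encode(resume_text, convert_to_tensor=True)

    # Cosine similarity between every requirement and the resume.
    sims = util.cos_sim(req_emb, resume_emb).squeeze(-1)

    # Average coverage, rescaled. A beautifully formatted resume that
    # ignores the JD scores low here, which is the whole point.
    return float(sims.mean()) * 100
```

The same per-requirement similarities can drive the gap analysis: the lowest-scoring JD lines are exactly the gaps worth surfacing to the user.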

The UX Challenge (Where I need help)

The biggest friction point in this space is the "Edit Loop." Usually: User scans resume -> Finds gaps -> Opens Word/Canva -> Edits -> Exports PDF -> Re-uploads.

To fix this, I built a Live Editor directly in the browser. You can click "Accept" on AI suggestions or double-click to edit manually, then download the PDF immediately.

What I’d love feedback on:

The Onboarding: Does the requirement to upload a JD immediately feel like too much friction, or does it make sense given the value prop?

The Live Editor: Does the "Accept/Decline" flow feel intuitive, or does it feel like you lack control?

The Tone: Does the "Gap Analysis" feel helpful, or is it too discouraging compared to standard scanners?

I’m trying to validate if this "tough love" approach is better than the "feel good" approach of competitors.

Here is the link: cvcomp.com

Don't hold back. If the UI is confusing or the logic seems off, I want to hear it.

Thanks,

Prashastha Jain

Jst Tan
@jst-tan
2 weeks ago
The death of SaaS, the rise of AaaS?

Hello everyone! With the rise of vibe coding, many people are now building their own software, apps, and websites instead of paying for subscriptions or hiring developers. I've seen many people prefer to use AI to create a website rather than pay devs hundreds of dollars to create one.

However, as we all know, generic AI output is just that: generic, with many faults and issues, including the infamous purple design, gradient and glowing effects, and plenty of inaccurate info. Most vibe coders just ignore these problems, since they don't know how to fix them.

Because of this, many people are afraid they will be replaced by AI any time now, since it can be used to develop almost anything, and corporations will prefer paying $20 a month for an AI that doesn't complain and doesn't sleep over paying thousands for a single employee.

But I believe this is the moment for a change: instead of developing the software and websites ourselves, we develop the rules, agent skills, and MCP servers that create them, and sell those instead, so anyone interested can use them to build their own product to their liking. As an example, we could create agent skills and an MCP server that let users generate a productivity SaaS, so instead of generic tools, users produce high-quality tools for their internal use. We could also build databases of knowledge on a topic, so vibe-coded websites have somewhere to go for accurate info.
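
To make this concrete, here is a minimal sketch of what such a sellable MCP server could look like, using the official MCP Python SDK; the blueprint tool itself is an invented example, not a real product.

```python
# Minimal MCP server sketch using the official MCP Python SDK
# (pip install "mcp[cli]"). The tool below is a hypothetical example of
# a sellable "skill": it hands an agent an opinionated starting point
# instead of letting it generate generic boilerplate.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("saas-starter")  # hypothetical server name

@mcp.tool()
def productivity_saas_blueprint(app_name: str, niche: str) -> str:
    """Return an opinionated build plan for a productivity SaaS."""
    return (
        f"# {app_name}: a {niche} productivity tool\n"
        "1. Auth: magic-link email login, no passwords.\n"
        "2. Core loop: capture -> triage -> schedule, one screen each.\n"
        "3. Design: neutral palette, no purple gradients or glow effects.\n"
        "4. Billing: one flat plan until 100 paying users.\n"
    )

if __name__ == "__main__":
    mcp.run()  # serves over stdio so any MCP client can connect
```

The value here isn't the code; it's the opinions encoded in it, and that is what a buyer of the skill would be paying for.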

I am not here to sell a service or a course; I want to hear others' feedback on this topic.

Krzysztof
@Krzysztof
4 weeks ago
I realized I started indie hacking about a year ago

I realized I started indie hacking about a year ago, and from my experience, building a web tool from scratch teaches you a lot more than just copying code from the internet or following a YouTube tutorial.

Vincent
@vincent 🇧🇪
4 weeks ago
New Testimonial for Huzzler Advertising!

There's no better feeling than receiving a positive testimonial!

Thanks so much for sharing this, Matthew 🙏

Glad to hear the Huzzler ads are working well 😎
