That helpful AI suggesting code completions may have access to more context than
you expect. Depending on the product and settings, prompts, accepted
suggestions, open files, and telemetry can leave your machine.

## The Telemetry Problem

AI coding assistants like GitHub Copilot may collect:

- Code snippets you accept and reject
- File contents in your editor
- Comments and documentation you write
- Environment variables visible in your workspace
- Repository metadata and structure
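To make that scope concrete, here is a minimal sketch of what "visible from your workspace" can mean in practice. It is plain Python with no assistant-specific API; the list of sensitive filenames is an illustrative assumption, not any vendor's actual behavior.

```python
import os

# Illustrative heuristic only: filenames that commonly hold secrets.
SENSITIVE_NAMES = {".env", ".env.local", "id_rsa", "credentials.json"}

total_files = 0
total_bytes = 0
flagged = []

for root, dirs, files in os.walk("."):
    # Skip directories that editors usually ignore anyway.
    dirs[:] = [d for d in dirs if d not in {".git", "node_modules"}]
    for name in files:
        path = os.path.join(root, name)
        try:
            total_bytes += os.path.getsize(path)
        except OSError:
            continue  # broken symlink or permission issue
        total_files += 1
        if name in SENSITIVE_NAMES:
            flagged.append(path)

print(f"{total_files} files ({total_bytes / 1e6:.1f} MB) reachable from this workspace root")
for path in flagged:
    print("potentially sensitive:", path)
```

Run it from a project root: everything it counts is, in principle, context a workspace-scoped tool could read.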
## The .env File Risk

Developers often have .env files with:

- API keys
- Database credentials
- Secret tokens
- Encryption keys

If your AI assistant can see these files, that data may be:

- Logged for "quality improvement"
- Used to train future models
- Stored on third-party servers
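One way to shrink this exposure is to keep secrets out of workspace files entirely and read them from the process environment, set by your shell, CI system, or secret manager. A minimal sketch of the pattern; the variable names are hypothetical:

```python
import os

def require_env(name: str) -> str:
    """Fetch a required secret from the environment, failing loudly if unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# Hypothetical secrets for illustration -- nothing is written to disk,
# so there is no .env file in the workspace for a tool to read.
DATABASE_URL = require_env("DATABASE_URL")
PAYMENTS_API_KEY = require_env("PAYMENTS_API_KEY")
```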
## The Terms of Service

GitHub Copilot's Terms (as of 2026):

- Individual users: Code snippets may be used to improve models
- Business/Enterprise: Opt-out available, but defaults matter

The Pattern:

1. Launch with aggressive data collection
2. Add opt-out after backlash
3. Most users never find the setting

## The Intellectual Property Problem

If an AI assistant sends proprietary code or secrets to a remote service:

- Sensitive context can be retained in logs
- Similar suggestions may surface elsewhere
- Trade-secret and compliance questions get harder to answer

## Steps You Can Take

Immediate Actions:

- Check your AI assistant's privacy settings
- Disable telemetry where possible (a settings sketch follows this list)
- Never put secrets in files AI can read
- Use secret management tools (Vault, etc.; see the second sketch below)
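For the telemetry item, here is a sketch that merges `"telemetry.telemetryLevel": "off"` (a real VS Code setting) into your user settings.json. The path below assumes Linux; macOS and Windows use different locations, and assistant-specific opt-outs (such as Copilot's snippet-collection preference) live in the product's own settings or your account dashboard, so check its docs.

```python
import json
from pathlib import Path

# User settings location on Linux; macOS uses
# ~/Library/Application Support/Code/User/settings.json.
settings_path = Path.home() / ".config" / "Code" / "User" / "settings.json"

# Note: this assumes the file is plain JSON; VS Code also accepts
# comments in settings.json, which json.loads() would reject.
settings = {}
if settings_path.exists():
    settings = json.loads(settings_path.read_text())

# "telemetry.telemetryLevel" is VS Code's umbrella telemetry setting.
settings["telemetry.telemetryLevel"] = "off"

settings_path.parent.mkdir(parents=True, exist_ok=True)
settings_path.write_text(json.dumps(settings, indent=2))
print(f"telemetry disabled in {settings_path}")
```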
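And for the secret-management item, a minimal sketch using hvac, the Python client for HashiCorp Vault. The server address, token source, and secret path are all hypothetical, and it assumes the KV v2 secrets engine at the default mount:

```python
import os

import hvac  # pip install hvac

# Hypothetical Vault address; the token comes from the environment here,
# but any Vault auth method (AppRole, OIDC, ...) works.
client = hvac.Client(
    url="https://vault.example.com:8200",
    token=os.environ["VAULT_TOKEN"],
)

# Read a secret from the KV v2 engine (default mount point "secret").
response = client.secrets.kv.v2.read_secret_version(path="myapp/database")
db_password = response["data"]["data"]["password"]  # KV v2 nests data twice
```

The point of either approach is the same: the secret lives in a dedicated store and reaches your process at runtime, never sitting in a file your assistant can index.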
Long-Term:

- Audit what data your tools collect (a starting point is sketched below)
- Consider self-hosted AI alternatives
- Keep sensitive code in isolated environments
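As a small first step for the audit item, you can at least inventory which editor extensions are installed and could be sending data off-machine. This sketch assumes VS Code's `code` CLI is on your PATH; `--list-extensions` and `--show-versions` are real flags.

```python
import subprocess

# Assumes the VS Code "code" CLI is on PATH.
result = subprocess.run(
    ["code", "--list-extensions", "--show-versions"],
    capture_output=True,
    text=True,
    check=True,
)

# Each line is "publisher.extension@version"; review anything that can
# read your workspace or phone home.
for line in sorted(result.stdout.splitlines()):
    print(line)
```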
Bottom line: treat AI coding tools like any other service with workspace
access. Review the settings, isolate secrets, and keep sensitive repositories on
stricter policies.