Secrets Management in Go Applications: From Environment Variables to Vault
I once inherited a Go microservice where database passwords were hardcoded in constants, API keys were passed as command-line arguments, and JWT signing keys were stored in git. If this sounds familiar, you’re not alone. In my years as a DevSecOps engineer, I’ve seen this pattern more times than I care to count, and it’s usually discovered at the worst possible moment - right after a security incident.
The irony is that Go makes building secure applications relatively straightforward, but its simplicity can be deceptive when it comes to secrets management. There’s no built-in framework telling you the “right way” to handle sensitive data, which means developers often fall into common traps that can compromise entire systems.
The Mistakes That Keep Me Up at Night
Let me start with what not to do, because understanding the failure modes is crucial to building secure systems. I’ve seen production applications with patterns like this:
const (
	DatabasePassword = "super_secret_password"
	APIKey           = "sk-1234567890abcdef"
)

type Config struct {
	DatabaseURL string `json:"database_url"`
	APISecret   string `json:"api_secret"` // This gets logged everywhere
}

// Command line secrets - visible in process lists
flag.StringVar(&apiKey, "api-key", "", "API key for external service")
These approaches fail in predictable ways. Process lists expose command-line arguments to anyone who can run ps aux. JSON marshalling means your secrets end up in logs, error messages, and debugging output. Hardcoded values persist in git history forever, even after you “fix” them. I once spent a weekend rotating every credential in a system because someone had pushed API keys to a public repository two years earlier.
The more subtle issue is memory security. Go’s garbage collector means you can’t reliably control when sensitive data gets cleared from memory. If your application crashes and generates a core dump, those “deleted” secrets might still be sitting in memory, waiting for someone to extract them.
Environment Variables: The Right Starting Point
Most Go applications start with environment variables for configuration, and that’s actually a reasonable approach if done correctly. The key is understanding that environment variables aren’t inherently secure - they’re just less insecure than the alternatives I showed above.
Here’s how I structure secret loading in Go applications:
type SecretConfig struct {
	databasePassword string // unexported fields are crucial
	apiKey           string
	jwtSecret        []byte
}

func LoadFromEnv() (*SecretConfig, error) {
	dbPass := os.Getenv("DATABASE_PASSWORD")
	if dbPass == "" {
		return nil, errors.New("DATABASE_PASSWORD is required")
	}

	apiKey := os.Getenv("API_KEY")
	if apiKey == "" {
		return nil, errors.New("API_KEY is required")
	}

	jwtSecret := os.Getenv("JWT_SECRET")
	if jwtSecret == "" {
		return nil, errors.New("JWT_SECRET is required")
	}

	// Clear the environment variables after reading
	os.Unsetenv("DATABASE_PASSWORD")
	os.Unsetenv("API_KEY")
	os.Unsetenv("JWT_SECRET")

	return &SecretConfig{
		databasePassword: dbPass,
		apiKey:           apiKey,
		jwtSecret:        []byte(jwtSecret),
	}, nil
}

// Safe accessor methods only
func (c *SecretConfig) DatabasePassword() string {
	return c.databasePassword
}

func (c *SecretConfig) APIKey() string {
	return c.apiKey
}
The unexported fields are critical here. They prevent accidental serialization and make it much harder for secrets to leak through reflection or debugging tools. I always validate that required secrets are present at startup - failing fast is much better than discovering missing credentials when you’re trying to connect to a database under load.
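A quick way to convince yourself: encoding/json simply cannot see unexported fields, so even a careless json.Marshal on the config produces an empty object. A minimal demonstration:

package main

import (
	"encoding/json"
	"fmt"
)

type leakyConfig struct {
	APIKey string `json:"api_key"` // exported: gets serialized
}

type safeConfig struct {
	apiKey string // unexported: invisible to encoding/json
}

func main() {
	leaked, _ := json.Marshal(leakyConfig{APIKey: "sk-123"})
	safe, _ := json.Marshal(safeConfig{apiKey: "sk-123"})
	fmt.Println(string(leaked)) // {"api_key":"sk-123"}
	fmt.Println(string(safe))   // {}
}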
Clearing environment variables after reading them helps limit exposure, though it’s not foolproof. On Linux, the environment a process started with stays visible in /proc/&lt;pid&gt;/environ (to the same user and to root) regardless of what you unset later, and other processes running as the same user could read the variables before you clear them. It’s still better than leaving them around indefinitely.
For memory security, I implement a cleanup pattern:
func (c *SecretConfig) Clear() {
	// Zero out sensitive fields. Byte slices can be wiped in place;
	// strings are immutable, so reassigning only drops the reference
	// and the old bytes linger until the GC reclaims them.
	for i := range c.jwtSecret {
		c.jwtSecret[i] = 0
	}
	c.databasePassword = ""
	c.apiKey = ""
}

// Always use defer to ensure cleanup
func main() {
	config, err := LoadFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	defer config.Clear()

	// Rest of application
}
This approach works well for simpler applications, but it starts to break down as systems become more complex. What happens when you need secret rotation? How do you handle different secrets for different environments? What about audit logging for secret access?
File-Based Secrets: The Docker Evolution
As applications moved to containerized environments, file-based secrets became more common. Docker Secrets, Kubernetes mounted secrets, and similar patterns all follow this approach. The idea is that sensitive data gets mounted into containers as files, usually with restricted permissions.
I’ve found this pattern particularly useful for database credentials and TLS certificates:
func readSecretFile(path string) (string, error) {
	// Validate file permissions first
	info, err := os.Stat(path)
	if err != nil {
		return "", fmt.Errorf("failed to stat secret file %s: %w", path, err)
	}

	mode := info.Mode()
	if mode&0077 != 0 { // Check if group/other have any permissions
		return "", fmt.Errorf("secret file %s has overly permissive permissions: %v", path, mode)
	}

	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("failed to read secret file %s: %w", path, err)
	}

	// Trim whitespace that often gets added to secret files
	return strings.TrimSpace(string(data)), nil
}

func loadDockerSecrets() (*SecretConfig, error) {
	const secretsDir = "/run/secrets/"

	dbPass, err := readSecretFile(filepath.Join(secretsDir, "db_password"))
	if err != nil {
		return nil, err
	}

	apiKey, err := readSecretFile(filepath.Join(secretsDir, "api_key"))
	if err != nil {
		return nil, err
	}

	return &SecretConfig{
		databasePassword: dbPass,
		apiKey:           apiKey,
	}, nil
}
File permission validation is crucial here. I’ve seen too many deployments where secret files were world-readable, completely defeating the purpose of the security model. The permission check catches these misconfigurations early.
One challenge with file-based secrets is handling updates. Unlike environment variables that are set at process startup, files can change while your application is running. This led me to implement a file watcher pattern for applications that need to handle secret rotation:
func watchSecretFile(path string, updateFunc func(string)) {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Printf("Failed to create file watcher: %v", err)
		return
	}
	defer watcher.Close()

	if err := watcher.Add(path); err != nil {
		log.Printf("Failed to watch file %s: %v", path, err)
		return
	}

	for {
		select {
		case event := <-watcher.Events:
			if event.Op&fsnotify.Write == fsnotify.Write {
				if newValue, err := readSecretFile(path); err == nil {
					updateFunc(newValue)
				} else {
					log.Printf("Failed to read updated secret file: %v", err)
				}
			}
		case err := <-watcher.Errors:
			log.Printf("File watcher error: %v", err)
		}
	}
}
This pattern works well when secret updates rewrite the watched file in place, but it adds complexity to the application logic. One Kubernetes-specific wrinkle: the kubelet updates mounted secrets by atomically swapping symlinks rather than writing to the file, so in practice you often need to handle Remove and Rename events and re-add the watch. You also need to handle concurrent access to secrets and ensure that updates don’t break ongoing operations.
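For the concurrent-access half of that problem, a small mutex-guarded holder is usually enough. Here’s a minimal sketch that pairs with the updateFunc callback above; the type name is mine, not from any library:

// secretHolder provides goroutine-safe reads and updates of a single
// secret value, suitable as the target of a file-watcher callback.
type secretHolder struct {
	mu    sync.RWMutex
	value string
}

func (h *secretHolder) Get() string {
	h.mu.RLock()
	defer h.mu.RUnlock()
	return h.value
}

func (h *secretHolder) Set(v string) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.value = v
}

Wiring it up is then just go watchSecretFile("/run/secrets/db_password", holder.Set), and readers call holder.Get() on every use instead of caching the value themselves.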
Building a Configuration Management Layer
As systems grow more complex, the ad-hoc approaches start to break down. You end up with secrets scattered across environment variables, files, and maybe some cloud provider APIs. Different teams use different patterns. Debugging becomes a nightmare because you’re never sure where a particular secret is supposed to come from.
This is where I learned to build an abstraction layer. Instead of hardcoding specific secret sources throughout the application, create an interface that can handle multiple backends:
type SecretSource interface {
	GetSecret(ctx context.Context, key string) (string, error)
	ListSecrets(ctx context.Context) ([]string, error)
}

type ConfigManager struct {
	sources   []SecretSource
	cache     map[string]cachedSecret
	listeners map[string][]func(string) // rotation callbacks, used later
	mu        sync.RWMutex
}

type cachedSecret struct {
	value     string
	expiresAt time.Time
}

func (cm *ConfigManager) GetSecret(ctx context.Context, key string) (string, error) {
	// Check cache first
	cm.mu.RLock()
	if cached, exists := cm.cache[key]; exists && time.Now().Before(cached.expiresAt) {
		cm.mu.RUnlock()
		return cached.value, nil
	}
	cm.mu.RUnlock()

	// Try each source in order
	for _, source := range cm.sources {
		if value, err := source.GetSecret(ctx, key); err == nil {
			cm.cacheSecret(key, value)
			return value, nil
		}
	}

	return "", fmt.Errorf("secret %s not found in any source", key)
}

func (cm *ConfigManager) cacheSecret(key, value string) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	cm.cache[key] = cachedSecret{
		value:     value,
		expiresAt: time.Now().Add(5 * time.Minute), // Configurable TTL
	}
}
This abstraction allows you to implement different secret sources while keeping the same interface. Environment variables become one source, files become another, and cloud APIs become a third. The caching layer reduces latency and provides resilience when external secret stores are temporarily unavailable.
The ordering of sources matters. I typically configure them from most specific to most general: file-based secrets for local development, then cloud-specific secret stores for production, then environment variables as a fallback.
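To make that concrete, here’s a sketch of an environment-variable source and the manager wiring. EnvSource is illustrative rather than from any library, and fileSource and cloudSource stand in for whatever backends you’ve implemented:

// EnvSource is a hypothetical SecretSource backed by environment
// variables; the key "db_password" maps to DB_PASSWORD, and so on.
type EnvSource struct{}

func (EnvSource) GetSecret(ctx context.Context, key string) (string, error) {
	if v := os.Getenv(strings.ToUpper(key)); v != "" {
		return v, nil
	}
	return "", fmt.Errorf("env var for %s not set", key)
}

func (EnvSource) ListSecrets(ctx context.Context) ([]string, error) {
	return nil, nil // enumeration is not meaningful for env vars
}

// fileSource and cloudSource are assumed implementations of SecretSource.
// Most specific source first, env vars as the final fallback.
cm := &ConfigManager{
	sources: []SecretSource{fileSource, cloudSource, EnvSource{}},
	cache:   make(map[string]cachedSecret),
}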
Implementing hot reloading of secrets was one of the more complex challenges I faced:
func (cm *ConfigManager) StartSecretWatcher(ctx context.Context) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			cm.refreshExpiredSecrets(ctx)
		}
	}
}

func (cm *ConfigManager) refreshExpiredSecrets(ctx context.Context) {
	// Note: this holds the write lock across network calls to the
	// sources, which is fine for a sketch; under load you would
	// snapshot the expired keys first and refresh outside the lock.
	cm.mu.Lock()
	defer cm.mu.Unlock()

	for key, cached := range cm.cache {
		if time.Now().After(cached.expiresAt) {
			// Try to refresh from sources
			for _, source := range cm.sources {
				if newValue, err := source.GetSecret(ctx, key); err == nil {
					cm.cache[key] = cachedSecret{
						value:     newValue,
						expiresAt: time.Now().Add(5 * time.Minute),
					}
					break
				}
			}
		}
	}
}
The periodic refresh pattern ensures that secret updates eventually propagate to the application without requiring restarts. The tradeoff is complexity - you need to handle concurrent access carefully and decide what to do when secret refreshes fail.
Cloud Provider Integration
Moving to cloud environments opened up new possibilities for secret management. Each major cloud provider offers managed secret services that handle encryption, access control, and audit logging. The challenge is integrating with these services while maintaining the flexibility to run in different environments.
For AWS Secrets Manager, I typically implement something like this:
type AWSSecretsSource struct {
	client *secretsmanager.Client
	region string
}

func NewAWSSecretsSource(ctx context.Context, region string) (*AWSSecretsSource, error) {
	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion(region))
	if err != nil {
		return nil, fmt.Errorf("failed to load AWS config: %w", err)
	}

	return &AWSSecretsSource{
		client: secretsmanager.NewFromConfig(cfg),
		region: region,
	}, nil
}

func (a *AWSSecretsSource) GetSecret(ctx context.Context, secretName string) (string, error) {
	input := &secretsmanager.GetSecretValueInput{
		SecretId: aws.String(secretName),
	}

	result, err := a.client.GetSecretValue(ctx, input)
	if err != nil {
		return "", fmt.Errorf("failed to get secret %s: %w", secretName, err)
	}

	if result.SecretString != nil {
		return *result.SecretString, nil
	}

	// Handle binary secrets
	return string(result.SecretBinary), nil
}
The AWS SDK handles authentication through IAM roles, which works well in containerized environments. The key insight I learned is to always use context for cancellation and timeouts. Secret API calls can be slow, and you don’t want a hung secret lookup to bring down your entire application.
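In practice that means bounding every lookup with its own deadline at the call site. A minimal sketch, where the two-second budget is an arbitrary assumption:

func getSecretWithTimeout(parent context.Context, src SecretSource, key string) (string, error) {
	// Never let a slow secrets API hold up the caller indefinitely.
	ctx, cancel := context.WithTimeout(parent, 2*time.Second)
	defer cancel()
	return src.GetSecret(ctx, key)
}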
Azure Key Vault follows a similar pattern but with different authentication mechanisms:
type AzureKeyVaultSource struct {
	client   *azsecrets.Client
	vaultURL string
}

func NewAzureKeyVaultSource(vaultURL string) (*AzureKeyVaultSource, error) {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create Azure credential: %w", err)
	}

	client, err := azsecrets.NewClient(vaultURL, cred, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to create Key Vault client: %w", err)
	}

	return &AzureKeyVaultSource{
		client:   client,
		vaultURL: vaultURL,
	}, nil
}

func (a *AzureKeyVaultSource) GetSecret(ctx context.Context, name string) (string, error) {
	resp, err := a.client.GetSecret(ctx, name, "", nil)
	if err != nil {
		return "", fmt.Errorf("failed to get secret %s from Key Vault: %w", name, err)
	}

	if resp.Value == nil {
		return "", fmt.Errorf("secret %s has no value", name)
	}

	return *resp.Value, nil
}
One challenge with cloud-based secret services is handling different error conditions. Network timeouts, authentication failures, and missing secrets all require different handling strategies. I learned to implement retry logic with exponential backoff:
func (a *AWSSecretsSource) GetSecretWithRetry(ctx context.Context, secretName string) (string, error) {
	var lastErr error

	for attempt := 0; attempt < 3; attempt++ {
		if attempt > 0 {
			// Exponential backoff: 1s, then 2s
			waitTime := time.Duration(1<<(attempt-1)) * time.Second
			select {
			case <-time.After(waitTime):
			case <-ctx.Done():
				return "", ctx.Err()
			}
		}

		value, err := a.GetSecret(ctx, secretName)
		if err == nil {
			return value, nil
		}
		lastErr = err

		// Don't retry on permission errors
		if strings.Contains(err.Error(), "AccessDenied") {
			break
		}
	}

	return "", fmt.Errorf("failed to get secret after 3 attempts: %w", lastErr)
}
The retry logic helps with transient network issues, but you need to be careful not to retry on errors that won’t be resolved by waiting. Permission errors, for example, require configuration changes and won’t be fixed by retrying.
HashiCorp Vault: The Enterprise Solution
For organizations that need more control over secret management, HashiCorp Vault provides a comprehensive solution. Vault offers features like secret rotation, dynamic secrets, and detailed audit logging. However, it also adds operational complexity that smaller teams might not need.
Integrating with Vault requires understanding its authentication model. Unlike cloud providers that use IAM roles, Vault has its own authentication methods:
type VaultSource struct {
	client *api.Client
	path   string
}

func NewVaultSource(address, path string) (*VaultSource, error) {
	config := api.DefaultConfig()
	config.Address = address

	client, err := api.NewClient(config)
	if err != nil {
		return nil, fmt.Errorf("failed to create Vault client: %w", err)
	}

	return &VaultSource{
		client: client,
		path:   path,
	}, nil
}

func (v *VaultSource) authenticateWithK8s(role string) error {
	tokenBytes, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		return fmt.Errorf("failed to read service account token: %w", err)
	}

	data := map[string]interface{}{
		"jwt":  string(tokenBytes),
		"role": role,
	}

	resp, err := v.client.Logical().Write("auth/kubernetes/login", data)
	if err != nil {
		return fmt.Errorf("failed to authenticate with Vault: %w", err)
	}
	if resp.Auth == nil {
		return fmt.Errorf("no auth information returned from Vault")
	}

	v.client.SetToken(resp.Auth.ClientToken)
	return nil
}

func (v *VaultSource) GetSecret(ctx context.Context, key string) (string, error) {
	secret, err := v.client.Logical().ReadWithContext(ctx, v.path+"/"+key)
	if err != nil {
		return "", fmt.Errorf("failed to read secret from vault: %w", err)
	}
	if secret == nil || secret.Data == nil {
		return "", fmt.Errorf("secret %s not found", key)
	}

	// KV v2 nests the key/value pairs under a "data" field;
	// KV v1 returns them at the top level. Handle both.
	data := secret.Data
	if nested, ok := secret.Data["data"].(map[string]interface{}); ok {
		data = nested
	}

	value, exists := data["value"]
	if !exists {
		return "", fmt.Errorf("secret %s has no 'value' field", key)
	}

	strValue, ok := value.(string)
	if !ok {
		return "", fmt.Errorf("secret %s value is not a string", key)
	}

	return strValue, nil
}
The Kubernetes authentication method works well in containerized environments. Vault validates the service account token and returns a Vault token that can be used for subsequent requests. Token renewal is another consideration - Vault tokens have limited lifetimes and need to be refreshed periodically.
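A minimal renewal loop might look like the sketch below; the interval, increment, and role name are assumptions, and production code would track the token’s actual TTL or use the SDK’s lifetime watcher instead:

func (v *VaultSource) keepTokenFresh(ctx context.Context) {
	ticker := time.NewTicker(15 * time.Minute)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// Renew our own token for another hour; fall back to
			// re-authenticating if the renewal fails.
			if _, err := v.client.Auth().Token().RenewSelf(3600); err != nil {
				log.Printf("token renewal failed, re-authenticating: %v", err)
				// "my-role" is a placeholder for your configured Vault role.
				if err := v.authenticateWithK8s("my-role"); err != nil {
					log.Printf("re-authentication failed: %v", err)
				}
			}
		}
	}
}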
One feature I particularly appreciate about Vault is dynamic secrets. Instead of storing static database passwords, Vault can generate short-lived credentials on demand:
type DBCredentials struct {
	Username      string
	Password      string
	LeaseDuration time.Duration
}

func (v *VaultSource) GetDynamicDBCredentials(ctx context.Context, role string) (*DBCredentials, error) {
	path := fmt.Sprintf("database/creds/%s", role)

	secret, err := v.client.Logical().ReadWithContext(ctx, path)
	if err != nil {
		return nil, fmt.Errorf("failed to get dynamic credentials: %w", err)
	}
	if secret == nil || secret.Data == nil {
		return nil, fmt.Errorf("no credentials returned for role %s", role)
	}

	// Failed type assertions fall through to empty strings here;
	// stricter code would check the ok values.
	username, _ := secret.Data["username"].(string)
	password, _ := secret.Data["password"].(string)

	return &DBCredentials{
		Username:      username,
		Password:      password,
		LeaseDuration: time.Duration(secret.LeaseDuration) * time.Second,
	}, nil
}
Dynamic secrets eliminate the need for credential rotation since each credential has a short lifetime. However, they require more sophisticated application logic to handle credential refresh and connection pooling.
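Here’s a sketch of what that refresh logic can look like; the two-thirds-of-lease refresh point and the onNewCreds callback are my conventions, not a standard API:

func (v *VaultSource) refreshDBCredentials(ctx context.Context, role string, onNewCreds func(*DBCredentials)) {
	for {
		creds, err := v.GetDynamicDBCredentials(ctx, role)
		if err != nil {
			log.Printf("failed to fetch dynamic credentials: %v", err)
			select {
			case <-time.After(10 * time.Second):
				continue
			case <-ctx.Done():
				return
			}
		}
		onNewCreds(creds) // e.g. rebuild the database connection pool

		// Fetch fresh credentials at two thirds of the lease,
		// leaving headroom before the old ones expire.
		select {
		case <-time.After(creds.LeaseDuration * 2 / 3):
		case <-ctx.Done():
			return
		}
	}
}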
Testing Secret Management Code
Testing secret management code presents unique challenges. You can’t use real secrets in tests, but you need to verify that the integration logic works correctly. I typically use a combination of mocking and integration testing.
For unit tests, I create mock implementations of the SecretSource interface:
type MockSecretSource struct {
	secrets map[string]string
	errors  map[string]error
	callLog []string
}

func (m *MockSecretSource) GetSecret(ctx context.Context, key string) (string, error) {
	m.callLog = append(m.callLog, key)

	if err, exists := m.errors[key]; exists {
		return "", err
	}
	if value, exists := m.secrets[key]; exists {
		return value, nil
	}
	return "", fmt.Errorf("secret %s not found", key)
}

// ListSecrets is needed so the mock satisfies the SecretSource interface.
func (m *MockSecretSource) ListSecrets(ctx context.Context) ([]string, error) {
	keys := make([]string, 0, len(m.secrets))
	for k := range m.secrets {
		keys = append(keys, k)
	}
	return keys, nil
}

func TestConfigManager(t *testing.T) {
	mock := &MockSecretSource{
		secrets: map[string]string{
			"db_password": "test_password",
			"api_key":     "test_key",
		},
	}

	cm := &ConfigManager{
		sources: []SecretSource{mock},
		cache:   make(map[string]cachedSecret),
	}

	value, err := cm.GetSecret(context.Background(), "db_password")
	assert.NoError(t, err)
	assert.Equal(t, "test_password", value)

	// Verify caching works
	value2, err := cm.GetSecret(context.Background(), "db_password")
	assert.NoError(t, err)
	assert.Equal(t, "test_password", value2)
	assert.Equal(t, 1, len(mock.callLog)) // Should only call source once
}
For integration testing, I use testcontainers to spin up real instances of secret stores:
func TestVaultIntegration(t *testing.T) {
	ctx := context.Background()

	req := testcontainers.ContainerRequest{
		Image:        "hashicorp/vault:latest", // the old "vault" image is deprecated
		ExposedPorts: []string{"8200/tcp"},
		Env: map[string]string{
			"VAULT_DEV_ROOT_TOKEN_ID":  "test-token",
			"VAULT_DEV_LISTEN_ADDRESS": "0.0.0.0:8200",
		},
		WaitingFor: wait.ForHTTP("/v1/sys/health").WithPort("8200/tcp"),
	}

	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	require.NoError(t, err)
	defer container.Terminate(ctx)

	host, err := container.Host(ctx)
	require.NoError(t, err)
	port, err := container.MappedPort(ctx, "8200")
	require.NoError(t, err)

	vaultURL := fmt.Sprintf("http://%s:%s", host, port.Port())

	// Test actual Vault operations
	source, err := NewVaultSource(vaultURL, "secret/data")
	require.NoError(t, err)
	source.client.SetToken("test-token")

	// Put a secret (KV v2 nests the payload under "data")
	_, err = source.client.Logical().Write("secret/data/test", map[string]interface{}{
		"data": map[string]interface{}{
			"value": "test-secret-value",
		},
	})
	require.NoError(t, err)

	// Retrieve the secret
	value, err := source.GetSecret(ctx, "test")
	require.NoError(t, err)
	assert.Equal(t, "test-secret-value", value)
}
Integration tests with real secret stores catch issues that mocks miss, like authentication problems, network timeouts, and API changes. However, they’re slower and more complex to set up, so I use them sparingly.
Production Considerations
Running secret management in production requires thinking about observability, error handling, and performance. I learned these lessons through various incidents and near-misses.
Monitoring secret access is crucial for security auditing:
import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// promauto registers the collectors with the default registry;
	// metrics created with plain prometheus.NewCounterVec won't show
	// up on /metrics until explicitly registered.
	secretRetrievalCounter = promauto.NewCounterVec(
		prometheus.CounterOpts{
			Name: "secret_retrievals_total",
			Help: "Total number of secret retrieval attempts",
		},
		[]string{"source", "secret_name", "status"},
	)

	secretRetrievalDuration = promauto.NewHistogramVec(
		prometheus.HistogramOpts{
			Name: "secret_retrieval_duration_seconds",
			Help: "Time spent retrieving secrets",
		},
		[]string{"source"},
	)
)

func (cm *ConfigManager) GetSecretWithMetrics(ctx context.Context, key string) (string, error) {
	start := time.Now()
	defer func() {
		secretRetrievalDuration.WithLabelValues("combined").Observe(time.Since(start).Seconds())
	}()

	value, err := cm.GetSecret(ctx, key)

	status := "success"
	if err != nil {
		status = "error"
	}
	secretRetrievalCounter.WithLabelValues("combined", key, status).Inc()

	return value, err
}
These metrics help identify performance issues and potential security problems. Unusual spikes in secret access might indicate a compromised application or misconfigured retry logic.
Audit logging is equally important:
type SecretAuditLogger struct {
	logger *slog.Logger
}

func (a *SecretAuditLogger) LogSecretAccess(ctx context.Context, operation, secretName, source string, success bool) {
	// getUserFromContext and getTraceIDFromContext are application-specific
	// helpers that pull identity and tracing metadata off the request context.
	a.logger.InfoContext(ctx, "secret access",
		slog.String("operation", operation),
		slog.String("secret_name", secretName),
		slog.String("source", source),
		slog.Bool("success", success),
		slog.String("user", getUserFromContext(ctx)),
		slog.String("trace_id", getTraceIDFromContext(ctx)),
	)
}
The audit log helps with compliance requirements and incident investigation. I always include trace IDs to correlate secret access with specific requests.
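For completeness, here’s one hypothetical shape those helpers could take; the context keys and fallback values are assumptions, and a real service would lean on its auth middleware and tracing library instead:

type ctxKey string

const (
	userKey  ctxKey = "user"
	traceKey ctxKey = "trace_id"
)

// getUserFromContext returns the authenticated principal, or "unknown"
// if the middleware never populated it.
func getUserFromContext(ctx context.Context) string {
	if u, ok := ctx.Value(userKey).(string); ok {
		return u
	}
	return "unknown"
}

func getTraceIDFromContext(ctx context.Context) string {
	if id, ok := ctx.Value(traceKey).(string); ok {
		return id
	}
	return ""
}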
Performance became an issue as applications grew in complexity. Secret retrieval can be a bottleneck, especially when using remote secret stores. Caching helps, but cache invalidation is tricky. I implemented a pattern where secrets have explicit TTLs, and applications gracefully handle stale data:
func (cm *ConfigManager) GetSecretWithFallback(ctx context.Context, key string) (string, error) {
	// Try to get fresh secret
	value, err := cm.GetSecret(ctx, key)
	if err == nil {
		return value, nil
	}

	// Fall back to cached value if available, even if expired
	cm.mu.RLock()
	if cached, exists := cm.cache[key]; exists {
		cm.mu.RUnlock()
		log.Printf("Using cached value for %s due to error: %v", key, err)
		return cached.value, nil
	}
	cm.mu.RUnlock()

	return "", fmt.Errorf("no secret available for %s: %w", key, err)
}
This pattern prioritizes availability over freshness, which is usually the right tradeoff for secret access.
Security Best Practices
Throughout this journey, I learned several security best practices that aren’t always obvious. Memory security is one area where Go applications can be vulnerable. While Go’s garbage collector prevents many memory safety issues, it doesn’t help with data exposure in memory dumps or swap files.
For highly sensitive data, I implement a secure string type:
type SecureString struct {
	data []byte
}

func NewSecureString(s string) *SecureString {
	return &SecureString{
		data: []byte(s),
	}
}

// Value returns the underlying secret; call it only at the point of use.
func (s *SecureString) Value() string {
	return string(s.data)
}

// String implements fmt.Stringer so that %v and %s print a redacted
// placeholder instead of the secret. Returning the real value here
// would leak it through every fmt call.
func (s *SecureString) String() string {
	return "[REDACTED]"
}

func (s *SecureString) Clear() {
	for i := range s.data {
		s.data[i] = 0
	}
}

func (s *SecureString) MarshalJSON() ([]byte, error) {
	return []byte(`"[REDACTED]"`), nil
}

func (s *SecureString) GoString() string {
	return "[REDACTED]"
}
This approach allows explicit memory clearing and prevents accidental serialization of sensitive data. The custom JSON marshalling ensures that secrets don’t leak through API responses, and redacting String and GoString keeps them out of fmt-formatted logs and debug output. The real value is only reachable through an explicit Value() call, which makes leaks easy to grep for in code review.
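A quick demonstration of the redaction behavior:

secret := NewSecureString("hunter2")
defer secret.Clear()

fmt.Println(secret)         // [REDACTED]
fmt.Printf("%#v\n", secret) // [REDACTED]

out, _ := json.Marshal(struct {
	Token *SecureString `json:"token"`
}{Token: secret})
fmt.Println(string(out)) // {"token":"[REDACTED]"}

_ = secret.Value() // the real value, only at the point of use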
Another important consideration is secret rotation. Applications need to handle credential changes gracefully without downtime:
func (cm *ConfigManager) RotateSecret(ctx context.Context, key, newValue string) error {
	cm.mu.Lock()
	// Update cache with new value
	cm.cache[key] = cachedSecret{
		value:     newValue,
		expiresAt: time.Now().Add(5 * time.Minute),
	}
	// Snapshot listeners so callbacks run outside the lock and
	// can safely call back into the ConfigManager.
	callbacks := append([]func(string){}, cm.listeners[key]...)
	cm.mu.Unlock()

	// Notify listeners about the change
	for _, notify := range callbacks {
		notify(newValue)
	}
	return nil
}

func (cm *ConfigManager) RegisterSecretChangeListener(key string, callback func(string)) {
	cm.mu.Lock()
	defer cm.mu.Unlock()

	if cm.listeners == nil {
		cm.listeners = make(map[string][]func(string))
	}
	cm.listeners[key] = append(cm.listeners[key], callback)
}
The listener pattern allows different parts of the application to respond to secret changes. Database connection pools can refresh their credentials, HTTP clients can update authentication headers, and so on.
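Registering a listener is then a one-liner per consumer. A sketch, assuming a pool value with a SetPassword method (not a real library API):

cm.RegisterSecretChangeListener("db_password", func(newPassword string) {
	// Hypothetical connection-pool API: new connections pick up the
	// rotated password; existing connections drain naturally.
	pool.SetPassword(newPassword)
	log.Println("database credentials rotated")
})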
Lessons Learned and What’s Next
Building secret management systems in Go taught me that security is as much about operational practices as it is about code. The best cryptographic implementation won’t help if secrets are logged in plaintext or stored in environment variables that get dumped to container orchestration systems.
The key insights I’d share with other developers are:
Start simple but design for complexity. Environment variables work fine for small applications, but build abstractions that can grow with your system. Don’t try to implement Vault integration on day one, but don’t hardcode assumptions that make it impossible later.
Security and usability are often in tension, but they don’t have to be. Good secret management makes development easier, not harder. When developers have to jump through hoops to access credentials, they find workarounds that compromise security.
Observability is crucial. You can’t secure what you can’t see. Implement logging and metrics from the beginning, not as an afterthought when you’re debugging a production incident.
Test the failure modes. Your secret store will go down at the worst possible time. Design for graceful degradation and practice your incident response procedures.
Looking ahead, the landscape continues to evolve. Kubernetes operators are making secret rotation more automated. Service meshes are handling more of the certificate lifecycle. WebAssembly is creating new sandboxing possibilities for secret access.
The fundamental principles remain the same, though. Minimize secret exposure, design for failure, and always assume that your current approach will need to evolve. The goal isn’t to build the perfect secret management system - it’s to build one that can adapt as requirements change and threats evolve.
The code examples in this article represent patterns I’ve used in production systems, but they’re starting points rather than complete solutions. Every environment has unique requirements and constraints. The key is understanding the tradeoffs and making conscious decisions about security, complexity, and operational overhead.
If you’re building Go applications that handle sensitive data, start with the basics and evolve incrementally. Don’t let perfect be the enemy of good, but don’t let expedient be the enemy of secure either. The investment in proper secret management pays dividends in reduced incident response, easier compliance audits, and better sleep at night.