The first 90 days as a security engineer in a growing company are often spent producing documents nobody ships from. Risk registers grow, slide decks multiply, and vendors pitch dashboards that promise visibility without changing how code reaches production. That pattern feels productive because it produces artifacts. It rarely reduces risk until something concrete lands in identity systems, pipelines, and runtime defaults.
This guide is written for anyone who needs to build a security program early with execution bias. It assumes you are responsible for platform-level outcomes: shared delivery systems, baseline identity posture, secrets handling, and the minimum telemetry that makes incidents and audits tractable. If you are in your first 90 days as a security engineer (or leading platform security from engineering), treat this as a shipping roadmap, not a maturity model checklist.
If you have not yet written down how your team engages the rest of engineering, start with The Platform Security Team Charter (Copy/Paste Template). Ownership boundaries between AppSec, cloud, and platform are in Platform Security vs AppSec vs Cloud Security: Who Owns What?.
Why “Assess First” Often Fails in the First 90 Days
Assessment without delivery creates a dangerous gap. Leadership hears that you are “doing a baseline,” while engineers experience no change to how they authenticate, how secrets flow, or how merges reach production. Attackers do not wait for your roadmap workshop.
The alternative is not recklessness. It is sequencing: ship a small set of controls that scale across teams, measure adoption, and expand. Each deliverable should answer whether a typical engineer’s default path is safer than it was last month. If the answer is no, you are still in presentation mode.
Broader program framing for AppSec and product risk lives in Building an AppSec Program. This post stays focused on what to ship in platform security’s first quarter.
First 90 Days as a Security Engineer: The Operating Principle
Ship controls that change default behavior for many teams at once. Prefer enforced baselines over optional guidance. Prefer one well-integrated pattern over three overlapping tools. Prefer measurable adoption over narrative confidence.
Your calendar should reflect that principle. Week one should include talking to platform and identity owners and writing a one-page charter or operating agreement, not selecting a vendor for a category you have not defined. By day thirty, something enforceable should exist in at least one of: workforce identity, CI/CD, or secrets. By day ninety, those threads should connect so an incident or audit question has a defensible answer without heroic manual work.
The diagram below is a simple mental model for how early platform work should flow: narrow intake, repeated shipping, and explicit “what we did not buy.”
```
                     [Charter + ownership]
                               |
            +------------------+------------------+
            |                  |                  |
       [Identity]      [CI/CD + secrets]   [Logging baseline]
            |                  |                  |
            +------------------+------------------+
                               |
                    [Metrics + exceptions]
                               |
                    [Next quarter roadmap]
```
Days 1–15: Charter, Allies, and the First Enforced Baseline
The opening weeks set whether you are seen as a partner or a bottleneck. Spend time with whoever owns identity (IT, IT security, or cloud IAM), platform engineering, and release engineering. Your goal is to align on one or two non-negotiable baselines and who implements them.
Publish a minimal charter or team agreement even if it is imperfect. Scope, intake channel, and “what we do not do” matter more than polish. Reuse the structure from The Platform Security Team Charter (Copy/Paste Template) and trim to what your org can actually staff.
SSO and MFA enforcement as the first scaling win
Centralized workforce authentication with phishing-resistant or strong MFA is one of the highest-leverage controls you can ship early. It is not glamorous, but it reduces account takeover blast radius across SaaS, VPN, and often cloud consoles. The work is usually policy and rollout, not custom code.
Define what “enforced” means. MFA for all human identities in the IdP is a common baseline. Exceptions should be time-bound, named, and approved. Service accounts and break-glass accounts need explicit handling so they do not become the path of least resistance.
Roll out in phases if you must: admins and production-adjacent roles first, then general staff. Communicate the date enforcement turns on and what to do if someone is blocked. The secure path should be easier than workarounds; if engineers routinely bypass SSO with shared local accounts, you have a usability and governance problem, not just a policy gap.
Document how machine identities and automation authenticate separately from humans. CI jobs, Terraform operators, and runtime service accounts should not inherit “MFA for everyone” as a lazy substitute for proper workload identity. The goal in the first two weeks is not to solve every edge case; it is to eliminate the obvious human-account gap while naming who will own non-human identity follow-up in the next sprint.
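One way to make "enforced" concrete is to capture the baseline as policy-as-data, so exceptions and break-glass handling are visible in review rather than buried in IdP settings. The sketch below is illustrative; the field names are hypothetical and not tied to any specific IdP:

```yaml
# Illustrative policy record — field names are hypothetical, not a real IdP schema
workforce_authentication_baseline:
  sso_required: true
  mfa:
    required_for: all_human_identities
    preferred_factors: [webauthn, platform_authenticator]  # phishing-resistant first
    fallback_factors: [totp]                               # allowed, reviewed quarterly
  exceptions:
    - identity: "legacy-scanner@example.com"      # example entry
      reason: "vendor appliance cannot do SSO"
      compensating_control: "network-restricted, unique credential, rotated monthly"
      approved_by: "head-of-it"
      expires: "2025-03-31"                        # time-bound, named, approved
  break_glass:
    accounts: ["breakglass-1", "breakglass-2"]
    storage: "sealed credentials, offline"
    use_generates_alert: true
  non_human_identity_owner: "platform-eng"         # named follow-up, next sprint
```

Checking this file into the same repository as the charter keeps the exception list auditable without extra tooling.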
Start inventory without boiling the ocean
In parallel, build a lightweight inventory of where code lives, which CI system builds production artifacts, and where secrets are referenced. You do not need a full CMDB. You need enough truth to prioritize pipeline and secrets work in weeks three through six.
List production deploy paths explicitly: which repository triggers which workflow, which registry or artifact store receives images or packages, and which credentials those workflows use. One accurate diagram of “commit to prod” beats ten interviews where everyone remembers a different shortcut. That diagram becomes the backlog for the middle phase of the quarter.
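One way to keep that "commit to prod" picture reviewable is a small inventory file with one record per production deploy path. Everything below is an example to adapt, not a prescribed schema:

```yaml
# Hypothetical commit-to-prod inventory — one record per production deploy path
deploy_paths:
  - service: "payments-api"
    repo: "github.com/example-org/payments-api"
    trigger: "merge to main"
    ci_workflow: ".github/workflows/deploy.yml"
    artifact_store: "registry.example.com/payments-api"
    credentials_used:
      - name: "AWS_DEPLOY_ROLE"
        type: "oidc-assumed-role"    # preferred: short-lived
      - name: "REGISTRY_TOKEN"
        type: "static-secret"        # flag for secrets migration in weeks 3–6
    owner: "payments-team"
```

The `static-secret` entries become the prioritized backlog for the secrets work that follows.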
Days 16–45: Secrets Management, CI Guardrails, and Baseline Logging
This middle phase is where many programs either accelerate or stall. The failure mode is buying three products before branch protection exists. The success mode is shipping guardrails that every team hits when they merge and deploy.
Secrets management: architecture first, then migration
Secrets are a platform problem because leakage in CI or shared runners affects every service. Your first deliverable should be a clear pattern: no long-lived production secrets in repo plaintext, no shared “deploy keys” in chat, and a single approved store or broker for production credentials (vault, cloud secret manager, or equivalent) with rotation expectations.
Inventory secret-like material in repositories and CI variables. Prioritize pipelines that can reach production or sensitive data. For GitHub Actions and similar systems, pipeline design directly affects whether secrets can be exfiltrated; technical context matters, as discussed in GitHub Actions Secrets.
Ship reference implementations for one language or one service template so teams can copy a working pattern. Security wins when the easiest way to deploy is also the compliant way.
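As one hedged sketch of such a reference pattern, the GitHub Actions deploy job below exchanges an OIDC token for short-lived cloud credentials and reads the production secret from the approved store at deploy time, so nothing long-lived sits in repo or CI variables. The role ARN, secret id, and script name are placeholders for your environment:

```yaml
# Sketch: OIDC-based deploy with secrets fetched at deploy time.
# Role ARN, secret id, and deploy script are placeholders.
name: deploy
on:
  push:
    branches: [main]
permissions:
  id-token: write     # lets the job request an OIDC token from GitHub
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4    # pin to a full commit SHA per your pinning policy
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy   # placeholder
          aws-region: us-east-1
      - name: Fetch production secret from the approved store
        run: |
          export DB_PASSWORD="$(aws secretsmanager get-secret-value \
            --secret-id prod/payments/db --query SecretString --output text)"
          ./deploy.sh   # consumes DB_PASSWORD from the environment, never from the repo
```

Teams copying this template inherit short-lived credentials by default, which is exactly the "easiest path is the compliant path" property you want.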
Rotation policy should be written in plain language: which secrets rotate on a schedule, which rotate on incident, and how applications pick up new material without downtime surprises. Early programs often over-focus on storage and under-focus on distribution and restart behavior. A secret that is “in the vault” but still baked into a twelve-month-old container image is not really under control.
CI/CD guardrails that scale
Branch protection on default branches is a baseline, not a debate. Require review for merges that affect production paths. Restrict who can approve changes to pipeline definitions and deployment workflows when your VCS supports it.
Introduce incremental checks rather than a big-bang “security gate” that everyone learns to bypass. Static analysis or dependency scanning can start as informational, then graduate to blocking once noise is manageable. For higher maturity targets, align on provenance: builds that produce artifacts should be traceable to a known pipeline run; signing and verification can be a later-quarter goal if baseline hygiene is still missing.
Platform engineering usually implements workflow changes; platform security defines requirements and exceptions. If that boundary is fuzzy, revisit the ownership article above.
Treat third-party CI actions and marketplace dependencies as part of your supply chain. Pinning versions, requiring review for workflow file changes, and limiting use of opaque actions are all “boring” controls that prevent spectacular failures. You do not need perfection on day forty-five; you need a published rule and a path to compliance for the repositories that matter most.
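A published rule for the repositories that matter most can be as small as the sketch below. The schema is illustrative; map each field to your VCS's native branch protection or ruleset settings:

```yaml
# Illustrative guardrail policy — map each field to your VCS's native settings
repo_guardrails:
  applies_to: "repos tagged production-critical"
  default_branch:
    require_pull_request_review: true
    required_approvals: 1
    dismiss_stale_approvals: true
    block_force_push: true
  workflow_files:                     # .github/workflows/ or equivalent
    changes_require_codeowner_review: true
  third_party_actions:
    pin_to_commit_sha: required       # tags can be moved; commit SHAs cannot
    allow_opaque_unreviewed_actions: false
  checks:
    dependency_scan: informational    # graduate to blocking once noise is manageable
    static_analysis: informational
```

Starting checks as `informational` and documenting the graduation criterion avoids the big-bang gate everyone learns to bypass.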
Baseline logging and telemetry
You cannot respond to what you cannot see. Early logging work is not about buying a SIEM first. It is about ensuring authoritative sources exist and are retained: IdP sign-in and admin logs, cloud control-plane audit logs, VCS and CI audit events, and Kubernetes or runtime audit where applicable.
Centralize into one aggregation point if possible, even if analysis is minimal at first. Define retention that matches incident and compliance needs. Ensure break-glass and admin actions produce events someone will actually review during a tabletop.
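The source-and-retention agreement is worth writing down explicitly so "centralized" is verifiable. The sketch below uses example sources and retention numbers; adjust both to your incident and compliance needs:

```yaml
# Example logging baseline — sources and retention are illustrative
log_baseline:
  destination: "central-log-store"      # one aggregation point, even if analysis is minimal
  sources:
    - name: idp_signin_and_admin
      required: true
      retention_days: 365
    - name: cloud_control_plane_audit   # e.g., provider audit/activity logs
      required: true
      retention_days: 365
    - name: vcs_and_ci_audit
      required: true
      retention_days: 180
    - name: kubernetes_audit
      required: where_applicable
      retention_days: 90
  review:
    break_glass_use: "paged to on-call, reviewed in weekly triage"
```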
A CTO-oriented checklist for cloud foundations overlaps here; see CTO Cloud Security Checklist for complementary framing.
Alerting in the first quarter should stay modest. A handful of high-signal detections on identity anomalies, org-level admin changes, and pipeline tampering beats hundreds of noisy rules that get disabled. You are building the plumbing and trust in data quality; sophisticated detection programs come after logs are complete and time-synced.
Days 46–90: Connect the Threads, Metrics, and Exceptions
The final month of the quarter should make the program legible to leadership and engineers. You are proving that the first 90 days as a security engineer produced durable controls, not a folder of recommendations.
Metrics that reflect shipped reality
Prefer a small set of outcome-oriented metrics. Examples include percentage of workforce users under enforced MFA with zero overdue exceptions, percentage of production deploy pipelines using the approved secrets pattern, count and age of open critical findings in platform-owned systems, and coverage of central audit log ingestion for defined sources.
Avoid celebrating “number of scans run” if critical issues remain open. Activity metrics are useful for capacity planning; they are dangerous as success criteria.
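To keep outcome metrics honest, define each one with the system of record it is computed from, so the number can be recomputed rather than asserted. A sketch, with hypothetical targets:

```yaml
# Illustrative metric definitions — each must be recomputable from a system of record
metrics:
  - id: mfa_enforced_pct
    definition: "human identities under enforced MFA / all human identities"
    source: "IdP API export"
    target: "100% with zero overdue exceptions"
  - id: approved_secrets_pattern_pct
    definition: "production pipelines on the approved store / all production pipelines"
    source: "deploy-path inventory"
    target: ">= 80% by end of quarter"
  - id: critical_findings_open
    definition: "count and median age of open criticals in platform-owned systems"
    source: "ticketing system"
    target: "trending down; none older than SLA"
  - id: audit_log_coverage_pct
    definition: "defined log sources ingesting / defined log sources"
    source: "log pipeline health checks"
    target: "100%"
```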
Exception process as a product
If you enforced MFA and CI policies, exceptions will appear. Treat the exception process as a first-class workflow: owner, business justification, compensating controls, expiry date, and renewal review. Exceptions that never expire become permanent vulnerabilities.
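A minimal exception record can carry exactly those fields. The example below is illustrative; the key property is that every field is mandatory and expiry is never "none":

```yaml
# Hypothetical exception record — every field mandatory; expiry is never "none"
exception:
  id: EXC-2025-014
  control: "mfa_enforced_for_humans"
  owner: "jane.doe"
  business_justification: "vendor appliance lacks SAML support"
  compensating_controls:
    - "network-restricted to one subnet"
    - "credential rotated monthly"
  approved_by: "security-lead"
  created: "2025-01-15"
  expires: "2025-04-15"        # max 90 days, then renewal review or removal
```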
First quarterly roadmap handoff
End day ninety with a written plan for the next quarter that builds on what shipped. Examples: expand provenance and signing, tighten Kubernetes admission defaults, formalize vulnerability SLAs for platform images, or deepen detection use cases on the logging foundation you built.
Run one short tabletop that uses only the telemetry you actually have. If the exercise stalls because “we would need to ask someone to pull logs,” your logging work is not finished. If you can trace a hypothetical stolen laptop or leaked token through IdP and cloud audit trails, you have earned the right to invest in fancier tooling.
Anti-Patterns in the First 90 Days
Assessment-only roadmaps that delay any enforcement until “after the strategy offsite” train the organization to ignore security until a contract or incident forces action.
Buying a broad platform before identity and CI baselines are stable multiplies cost and integration debt. You end up paying for features you cannot enforce.
Optional policies without defaults create uneven risk. Teams that care will comply; teams under delivery pressure will not, and you will not know which is which.
Security operating as a critic without implementation partners burns goodwill. Platform security needs explicit engineering owners for changes to pipelines, clusters, and identity configs.
Metrics that reward ticket volume or meeting count incentivize busywork over risk reduction.
“Shadow security” initiatives where well-meaning teams deploy overlapping scanners without a single triage queue create fatigue and duplicate findings. Centralize expectations even when execution stays distributed.
How to Build a Security Program Early and Avoid Tool Sprawl
Tool sprawl usually comes from unclear ownership of categories, duplicate buying across departments, and pilots that never retire. Build your security program early around a simple rule: one primary system per capability domain unless you have a written reason to split (for example, a regulated data boundary).
Before adding a tool, answer five questions. What control does this enforce or measure? Who will operate it day to day? What is the default for engineers who do nothing? What decommissions when this lands? What happens to exceptions?
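Those five questions translate directly into an intake record that can gate every purchase or pilot. A sketch to adapt:

```yaml
# Illustrative tool-intake record — one per proposed purchase or pilot
tool_intake:
  proposed_tool: "example-dependency-scanner"
  control_enforced_or_measured: "known-vulnerable dependencies flagged at merge"
  day_to_day_operator: "platform-security"
  default_for_engineers_who_do_nothing: "informational PR annotation, blocking later"
  decommissions_on_landing: "ad-hoc per-team scanner scripts"
  exception_path: "standard exception process, 90-day max"
  pilot_end_date: "2025-06-30"   # pilots that never retire become sprawl
```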
Prefer consolidation. If your IdP, cloud provider, and VCS already ship audit logs, ingest those before adding another agent everywhere. If dependency scanning exists in CI, justify a second scanner on overlap reduction or coverage gap, not on vendor demos.
When you do need external help, choose depth and outcomes over checkbox reports. How to Choose a Security Company applies to tooling and services alike: clarity on what problem you are solving prevents shelfware.
Copy/Paste: 90-Day Control Baseline Policy (YAML Sketch)
Adapt fields to your risk register and ticketing system. The point is to make “shipped” and “enforced” auditable.
```yaml
platform_security_first_90_days:
  identity:
    workforce_sso_enforced: true
    mfa_enforced_for_humans: true
    exception_max_days: 90
    break_glass_accounts_documented: true
  secrets:
    no_plaintext_production_secrets_in_repos: true
    approved_secret_store: "ORG_SECRET_MANAGER"
    ci_secrets_audit_completed: true
  cicd:
    default_branch_protection: true
    production_pipeline_change_requires_review: true
    third_party_actions_pinning_policy: "required"
  logging:
    idp_audit_logs_centralized: true
    cloud_audit_logs_centralized: true
    vcs_and_ci_audit_logs_centralized: true
    minimum_retention_days: 90
  governance:
    charter_published: true
    weekly_triage_with_platform_eng: true
    monthly_metrics_review: true
```
Copy/Paste: End-of-90-Days Review Checklist
Use this in a doc or ticket before you declare the quarter “done.”
## Platform Security — First 90 Days Completion Review
- [ ] Charter (or team agreement) published with scope and intake path
- [ ] MFA/SSO enforcement status documented; exceptions listed with owners and expiry
- [ ] Production secrets pattern documented; one reference implementation exists
- [ ] Default branch protection and pipeline change review rules verified on critical repos
- [ ] IdP, cloud, and VCS/CI logs flowing to a central store with agreed retention
- [ ] Top platform risks from incidents or assessments have owners and due dates
- [ ] Next-quarter roadmap written with 3–5 shippable initiatives, not vague “improvements”
Conclusion
The first 90 days as a security engineer set the tone for whether security is a delivery function or a parallel bureaucracy. Build the program early by shipping SSO/MFA enforcement, a real secrets and CI baseline, and centralized logging that you can query when something breaks. Say no to tool sprawl and assessment-only phases long enough to get one enforced control live per major surface: identity, delivery, and observability.
When those defaults exist, deeper work (provenance, hardening, advanced detection) compounds. Without them, everything else is decoration.