The next two sections relate to security engineering. The work done in security engineering can be described with a building and breaking model. Building develops and maintains defenses, and breaking seeks out the risks that remain.
Offense and defense. Red and Blue. Builders and Breakers.
Builders and breakers are not a recommended team structure. Rather, the split is a way to think about how work is balanced across a security team. Some security teams take opinionated approaches to how they weight work between breakers and builders.
Builders insert themselves into active projects or development work with partner engineering teams. These efforts are difficult when partnerships are unhealthy or don’t exist at all.
Security perceived as friction
As we discussed in foundations, a security engineering team is driven by reducing risks to an overall mission. This may put the security team’s goals at odds with partner engineering teams. Outside influence is often perceived as friction. When security builders are not viewed as partners, security teams tend to exert unhealthy influence on engineering decisions.
This influence from security can quickly feel like a slowdown for engineering. As a result, the relationship degrades, security teams become excluded, and harmful organizational patterns form from the competing incentives.
These influences fester when technical, mission-adjacent teams are formed outside of normal engineering channels. The security team’s work becomes increasingly distant from mitigation, laser focused on generating influence, and invested in areas with low risk to the engineering organization’s goals. When security does attempt to contribute to core engineering efforts, it might not be celebrated by anyone but themselves.
This pattern may be more familiar by another name. In IT, it is described as Shadow IT. A team left unsupported by engineering resources often finds itself hiring contractors, managing its own development, leaving employees to fend for themselves with expensed SaaS products, and self-hosting efforts like marketing sites.
Another well known example is the pattern of isolated Dev and Ops organizations, famously remodeled by Google’s approach to SRE, the DevOps movement, and migration to more efficient IaaS tooling. The early organizational anti-pattern of conflicting incentives was reworked and refocused into a more aligned model.
We’ll call this phenomenon Shadow Engineering going forward. It is a troubling observation that security teams may fall into this pattern (while also trying to prevent it!).
Security from an engineering identity
Decentralized engineering teams create challenges for an organization if they develop into the Shadow Engineering pattern. The growth of a shadow engineering team has clear indicators:
- Hired into a non-engineering reporting chain
- Exclusion from engineering roadmap discussions
- Unique technology choices and separate source code management
- Lower and inconsistent hiring and interviewing standards
- Lower and inconsistent development and deployment standards (style, peer review, testing)
Ironically, security engineering teams are often vocal against the formation of such teams: shadow teams are considered a source of risk. Yet security teams are forced into these less impactful strategies when an engineering organization resists the security team’s mission.
We want to avoid building a security engineering team that has these “shadow” indicators. We want security to be part of the whole engineering identity. Nearly indistinguishable. Security, like quality, is championed as an ideal of the whole team rather than a competing force pushed by a disparate team.
Founding members of security engineering teams should make collaboration with engineering a priority.
- Security specialists report into engineering leadership.
- Planning and roadmapping are inclusive of security efforts.
- Security shares the organization’s technology standards and development practices.
- Security hiring maintains and contributes to the shared hiring bar.
- Mitigations are rolled out under the same product launch and deployment standards.
These are crucial components of successful co-existence with organization wide engineering efforts. A culture that values security displays these indicators.
The next section, Breaking, has a “chicken and egg” relationship with this current section, Building. Once flaws, vulnerabilities, or bugs are found… who fixes them? Does the security engineering team fix all of them as a service to the organization? Or, does the security team find bugs, triage them to engineering, and project manage their mitigation?
The answers largely depend on culture and leadership. There are some common patterns, with no right answers. Consider a bug under normal circumstances: one that breaks production or causes user complaints to skyrocket. These types of issues have an easier time getting fixed - they are loud. They attract discussion, investigation, and pull requests.
Security debt is silent. Some findings from a security team will be one-off tasks. Other findings will be breaking changes that may involve product or customer communication, but without any clear customer benefit. For instance, a breaking change that forces customers to change how they interact with a platform or API: collaboration on the fix will be broad and require a lot of contribution… but with no customer upside, it won’t be pleasant to communicate.
Frameworks may also split responsibility across teams. There may be effort to build a framework that avoids a class of risks, while also an effort to migrate old code into the new approach.
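As an illustration of a framework that avoids a class of risks, consider SQL injection. A minimal sketch, with hypothetical helper names not drawn from any particular organization: a thin query wrapper that makes the unsafe pattern (inlining values into SQL strings) impossible to express, so the migration work described above becomes moving old call sites onto the wrapper.

```python
import sqlite3

def run_query(conn, sql, params=()):
    """Hypothetical helper: execute only parameterized queries.

    Rejects inlined string literals so values must flow through
    params - eliminating SQL injection as a bug class at call sites.
    """
    if "'" in sql or '"' in sql:
        raise ValueError("inline literals forbidden; pass values via params")
    return conn.execute(sql, params).fetchall()

# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("mallory",))

# Safe call site: the value travels as a bound parameter.
rows = run_query(conn, "SELECT name FROM users WHERE name = ?", ("mallory",))
```

The split responsibility shows up naturally here: security might own the wrapper and its checks, while partner engineering teams own migrating their queries onto it.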
We’ve discussed some variability in how bugs are fixed and who fixes them, but haven’t discussed great rules of thumb to follow. Guidelines for this problem may not be very reliable.
Rather, there are at least some anti-patterns to avoid.
First, security engineering is not a dumping ground for bugs and tasks labeled “security”. Security teams cannot operate efficiently if expected to fix everything they find, nor does that model support validated learning within an organization. Security teams may have the least context to delicately roll out a breaking change or to know whether a framework is usable by developers. There must, at least, be examples of collaboration on security tasks.
On the other end, there are times when no owner exists to fix a bug. A bug may be sophisticated enough that security subject matter expertise is needed to mitigate it. Security may end up needing to fix classes of issues on their own with specialized knowledge of an attack method or threat actor. These cases may require more, or total, ownership from security teams.
Task ownership presents a challenge to security teams. The challenge is whether they are truly different from engineering teams. As mentioned earlier, the strongest engineering organizations share a whole-team ideal that pursues multiple goals at once, including security. Strict lines shouldn’t appear. The overlaps in how tasks are owned are a matter of culture and leadership judgement that can’t be fully answered or eliminated.