The next two sections relate to security engineering. The work done in security engineering can be described with a building and breaking model: building develops and maintains defenses, and breaking seeks out the risks that remain afterward.

  • Offense 🔪 and Defense 🛡
  • Red 🔴 and Blue 🔵
  • Building 🏗 and Breaking 💥

Builders and breakers are not advisable team structures. Rather, building and breaking (verbs) help discuss how work is balanced across a security team. Some parts of the security community have caught on to this balance and named it “purple teams”, describing an optimal state between these interests within security. An even wider view of purple teams includes how security operates with the larger engineering organization as well.

A healthy balance is grown - not bought, declared, or re-org’d. We’ll discuss the balancing act from here on out.

Security perceived as friction

As we discussed in foundations, a security engineering team is driven by reducing risks to an overall mission. This may put the security team’s goals at odds with partner engineering teams. Outside influence is often perceived as friction. When security builders are not viewed as partners, security teams tend to exert unhealthy influence on engineering decisions.

This influence from security can quickly feel like a slowdown to engineering. As a result, the relationship degrades, security teams become excluded, and harmful organizational patterns begin to form from competing incentives.

These influences fester when technical, mission-adjacent teams form outside of normal engineering channels. The security team’s work becomes increasingly distant from mitigation, laser focused on generating influence, and invested in areas of low risk to the engineering organization’s goals. When security does attempt to contribute to core engineering efforts, the work might not be celebrated, except by the security team itself.

This pattern may be more familiar by another name: in IT, it is described as Shadow IT. Any team left unsupported by engineering resources often finds itself hiring contractors, managing its own development, leaving employees to fend for themselves with expensed SaaS products, and self-hosting marketing efforts.

Another well-known example is the pattern of isolated Dev and Ops organizations, which was famously remodeled by Google’s approach to SRE, the DevOps movement, and the migration to more efficient IaaS tooling. The early organizational anti-patterns of conflicting incentives were reworked and refocused into a more aligned model.

We’ll call this phenomenon Shadow Engineering going forward. It is a troubling observation that security teams may fall into this pattern (while also trying to prevent it!).

Security from an engineering identity

Decentralized engineering teams create challenges for an organization if they develop into the Shadow Engineering pattern. The growth of a shadow engineering team has clear indicators:

  • Hired into a non-engineering reporting chain
  • Exclusion from engineering roadmap discussions
  • Unusual technology choices and separate source code management
  • Lower and inconsistent hiring and interviewing standards
  • Lower and inconsistent development and deployment standards (style, peer review, testing)

Ironically, security engineering teams are often vocal against the formation of such teams, since shadow teams are considered a source of risk.

We want to avoid building a security engineering team that has these “shadow” indicators. We want security to be part of the whole engineering identity. Nearly indistinguishable. Security, like quality, should be championed as an ideal of the whole team rather than a competing need pushed by disparate teams.

Founding members of security engineering teams should make collaboration with engineering a priority.

  • Security specializations report to engineering leadership.
  • Planning and roadmapping include security efforts.
  • Technology standards and development practices are shared.
  • Security maintains and contributes to the hiring bar.
  • Mitigations roll out under the same product launch / deployment standards.

These are crucial components of successful co-existence with organization-wide engineering efforts. A culture that values security displays these indicators.

Task ownership

The next section, Breaking, has a “chicken and egg” relationship with this current section, Building. Once flaws, vulnerabilities, or bugs are found… who fixes them? Does the security engineering team fix all of them as a service to the organization? Or, does the security team find bugs, triage them to engineering, and project manage their mitigation?

The answers largely depend on culture and leadership. There are some common patterns, but no right answers. Consider a bug under normal circumstances: one that breaks production or sends user complaints skyrocketing. These types of issues have an easier time getting fixed - they are loud. They attract discussion, investigation, and pull requests.

Security debt is silent.

Some risk findings from a security team will be simple tasks. Others may be breaking changes no customer has asked for.

For instance, a breaking change may force customers to alter how they interact with a platform or API. These changes are difficult to prioritize and communicate to a customer without a clear upside, and they are likely to generate pushback against the security team that found the issue.

Frameworks may also split responsibility across teams. There may be an effort to build a framework that avoids a class of risks, another to migrate old code to the new approach, and yet another to advocate for adoption of the new framework. How are these efforts distributed between a security team and an engineering team?
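To make that concrete, here is a minimal sketch (in Python, with hypothetical names like run_query) of what “a framework that avoids a class of risks” could look like: a single sanctioned query helper that keeps SQL injection out of reviewed code by construction, because callers never concatenate input into SQL text.

```python
import sqlite3
from typing import Any, Sequence

def run_query(conn: sqlite3.Connection,
              statement: str,
              params: Sequence[Any] = ()) -> list[tuple]:
    # The only sanctioned way to execute SQL in this hypothetical codebase.
    # Values are bound separately, so the injectable pattern (string
    # concatenation into SQL) never appears in reviewed code.
    return conn.execute(statement, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# User input travels as a bound parameter, never as SQL text.
print(run_query(conn, "SELECT name FROM users WHERE id = ?", (1,)))  # [('alice',)]
```

Building a helper like this is the small part. Migrating old call sites and persuading teams to adopt it are the efforts most likely to straddle the boundary between security and engineering.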

So far we’ve discussed sources of uncertainty around how bugs are fixed and who fixes them, but haven’t offered rules of thumb to follow. Guidelines for these problems rarely transfer reliably across companies and team cultures.

Rather, we can start with some anti-patterns to avoid.

First, security engineering is not a dumping ground for bugs and tasks labeled “security”. Security teams cannot operate efficiently if expected to fix everything they find, nor does that model support validated learning within an organization. A security team may have the least context to delicately roll out a breaking change, or to know whether a framework is usable by developers. There must, at least, be examples of collaboration on security tasks.

On the other end, there are times when no clear owner exists to fix a bug. A bug may be sophisticated enough that security subject matter expertise is needed to mitigate it. Security may end up fixing classes of issues on their own, applying specialized knowledge of an attack method or threat actor. These cases may require more, or even total, ownership from the security team until a rollout happens with engineering.

The strongest engineering organizations share a whole-team ideal that pursues multiple goals at once, including security. Strict lines of ownership shouldn’t appear. The overlaps in how tasks are owned are a matter of culture and leadership judgment that can’t be fully settled or eliminated. Sharp edges, by contrast, are likely to breed political toxicity. A culture that collaborates on risk as a whole team is superior, but difficult to maintain.