The next two sections relate to security engineering. The work of security engineering can be described with a building and breaking model: building develops and maintains defenses, while breaking seeks out the risks that remain.

  • Offense 🔪 and Defense 🛡
  • Red 🔴 and Blue 🔵
  • Building 🏗 and Breaking 💥

Builders and breakers are not advisable team structures; rather, this view helps us discuss how work is balanced across a security team. Parts of the security community have caught onto this balance and call it “purple teaming”: an optimal state between these interests within security. An even wider view of purple teams includes how security operates with the larger engineering organization as well.

A healthy balance is grown, not bought, declared, or re-org’d. We’ll discuss the balancing act from here on out.

Security perceived as friction

As we discussed in foundations, a security engineering team is driven by reducing risks to an overall mission. This may put the security team’s goals at odds with those of partner engineering teams, and outside influences on engineering are easily perceived as friction. When security builders are not viewed as engineers or partners, those security teams tend to exert unhealthy influence on engineering decisions.

As a result, relationships degrade. Security teams become excluded. The organization can begin to form harmful patterns from competing incentives between engineering and security.

These influences fester when technical and mission-adjacent teams are formed outside of normal engineering channels.

This pattern may be more familiar by another name: in IT, it is described as Shadow IT. Any team left unsupported by engineering resources often finds itself hiring contractors, managing its own development, leaving employees to fend for themselves with expensed SaaS products, and self-hosting marketing efforts on strange platforms.

Another well-known example is the pattern of isolated Dev and Ops organizations, which was famously remodeled by Google’s approach to SRE, the DevOps movement, and the migration to more efficient IaaS tooling. The early organizational anti-patterns of conflicting incentives were reworked and refocused into a more aligned model.

We’ll call this phenomenon Shadow Engineering going forward. It is a troubling observation that security teams may fall into this pattern (while also trying to prevent it!).

Security from an engineering identity

Look for these indicators of shadow engineering when security operates within an engineering environment:

  • Hired into a non-engineering reporting chain
  • Exclusion from engineering roadmap discussions
  • Unusual technology choices and separate source code management
  • Lower and inconsistent hiring and interviewing standards
  • Lower and inconsistent development and deployment standards (style, peer review, testing)
  • Forced hiring, bypassing hiring panels

Ironically, security engineering teams are often vocal against the formation of such teams, since shadow teams are considered a source of risk.

We want to avoid building a security engineering team that has these “shadow” indicators. We want security to be part of the whole engineering identity, nearly indistinguishable from it. Security, like quality, is championed as an ideal of the whole team rather than a competing need pushed by a disparate team.

Founding members of security engineering teams should make collaboration with engineering a priority:

  • Report security specializations into engineering leadership.
  • Include security efforts in planning and roadmapping.
  • Share technology standards and development practices.
  • Maintain and contribute to the hiring bar.
  • Roll out mitigations with product launch and deployment standards.

These are crucial components of successful coexistence with organization-wide engineering efforts. A culture that values security displays these indicators.

Task ownership

The next section, Breaking, has a “chicken and egg” relationship with this current section, Building. Once flaws, vulnerabilities, or bugs are found… who fixes them? Does the security engineering team fix all of them as a service to the organization? Or does the security team find bugs, triage them to engineering, and project-manage their mitigation?

The answers largely depend on culture and leadership. There are some common patterns, with no right answers. Consider a bug under normal circumstances: one that breaks production or causes user complaints to skyrocket. These types of issues have an easier time getting fixed because they are loud. They attract discussion, investigation, and pull requests.

Security debt is silent.

Some risk findings from a security team will be simple tasks. Others may be breaking changes no customer has asked for.

For instance, a breaking change may force customers to interact with a platform or API in a new way. These changes are difficult to prioritize and communicate to a customer without a clear upside, and they are likely to generate pushback against the security team that found the original issue.

Frameworks may also split responsibility across teams. An effort to build a framework that avoids a class of risks might require migrating old code into the new approach, and further effort is needed to advocate for usage of the new framework. How are these efforts distributed across a security team and an engineering team?
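To make this concrete, here is a minimal sketch in Python of what such a framework might look like. The `run_query` helper is hypothetical, standing in for any safe-by-default internal API: by making parameterized statements the only supported path, it removes string-built SQL injection as a bug class, yet every legacy call site still has to be migrated by someone.

```python
import sqlite3

# Hypothetical safe-by-default helper standing in for a "framework that
# avoids a class of risks": every value must be bound as a parameter, so
# string-built SQL (and the injection bugs that come with it) has no
# supported path through this API.
def run_query(conn: sqlite3.Connection, sql: str, params: tuple = ()) -> list:
    if sql.count("?") != len(params):
        raise ValueError("bind every value with a '?' placeholder")
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))

# Framework path: attacker-controlled input is bound as inert data.
print(run_query(conn, "SELECT name FROM users WHERE id = ?", (1,)))

# Legacy pattern the migration has to find and replace, one call site
# at a time (the ownership question raised above):
#   conn.execute(f"SELECT name FROM users WHERE id = {user_input}")
```

The helper itself is trivial; the organizational question is who owns hunting down the legacy call sites and advocating for the new path.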

So far we’ve discussed sources of uncertainty around how bugs are fixed and who fixes them, but we haven’t offered rules of thumb to follow. Guidelines for these problems don’t transfer reliably across companies and team cultures.

Rather, we can start with some anti-patterns to avoid.

First, security engineering is not a dumping ground for bugs and tasks labeled “security”. Security teams cannot operate efficiently if expected to fix everything they find, nor does doing so support validated learning within an organization. Security teams may have the least context to delicately roll out a breaking change or to know whether a framework is usable by developers. There must, at least, be examples of collaboration on security tasks.

On the other end, there are times when no owner exists to fix a bug. A bug may be sophisticated enough that security subject matter expertise is needed to mitigate it, and security may end up fixing classes of issues on their own with specialized knowledge of an attack method or threat actor. These cases may require more, or even total, ownership from security teams until a rollout happens with engineering.

The strongest engineering organizations share a whole-team ideal that pursues multiple goals at once, including security. Strict lines of ownership shouldn’t appear; the overlaps in how tasks are owned are a matter of culture and leadership judgment that can’t be fully answered or eliminated. Sharp edges, by contrast, will likely contribute to political toxicity. A culture that collaborates on risk as a whole team is superior but difficult to maintain.