Breaking is how we find weak spots. Breakers on a security team expect to hack everything and reveal these weak spots. Some security teams are initially formed by hiring a hacker to do just that. This founding security employee expects to show the organization where they are insecure.

Notice that the demand to hire someone for breaking also reveals a desire for leadership. The hacker’s output answers a question: What should we work on?

Breaking describes many vertical skillsets in security. Offensive specialists can apply aggressive methods against products, applications, infrastructure, networks, physical locations, employees, or anything else.

The output of breaking goes by many names: risks, findings, vulnerabilities, gaps, exposures, and many others.

The relationship of breaking with building is more complicated than a manufacturing pipeline where we find things and fix them. Anti-patterns form immediately in the breaker/builder relationship, even at the scale of a one-person security team.

Breaking weighed against building

Security engineers look to discover new risks in addition to building mitigations against known risks. This relationship of building and breaking creates a notable flywheel of work activity:

Find risks, mitigate them, then find some more.

Similar problems appear in other fields as the Explore / Exploit Tradeoff.

Security engineers believe that there is an optimal ratio of:

  • Earned knowledge about risks gained from breaking.
  • Building to prevent the risks you’ve discovered.
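This ratio mirrors the explore/exploit tradeoff mentioned above, and a toy epsilon-greedy policy makes the analogy concrete. Everything in this sketch is illustrative: the function name, the epsilon value, and the mapping of "explore" to breaking and "exploit" to building are assumptions made for the analogy, not any real security tool or process.

```python
import random

def choose_activity(known_risks, epsilon=0.3, rng=random):
    """Epsilon-greedy analogy (illustrative only): with probability
    epsilon, 'break' (explore for new risks); otherwise 'build'
    (mitigate a risk we already know about)."""
    if not known_risks or rng.random() < epsilon:
        return "break"   # explore: go earn new knowledge about risks
    return "build"       # exploit: mitigate what breaking has found

# Simulate a quarter of weekly decisions: breaking adds a finding to
# the backlog, building clears one.
rng = random.Random(0)  # seeded so the simulation is repeatable
backlog = []
for week in range(12):
    if choose_activity(backlog, epsilon=0.3, rng=rng) == "break":
        backlog.append(f"finding-{week}")
    else:
        backlog.pop(0)
```

Tuning epsilon is the leadership decision this chapter describes: a team with no backlog is forced to explore, while a team drowning in findings should spend nearly every week building.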

Breaking is a source of knowledge. The lessons from breaking provide a direction for security leadership. Without the evidence that comes from breaking, we have difficulty prioritizing where we should spend our time.

However, not all breaking efforts will produce the same amount of valuable direction. Early security teams might not need heavy investments in breaking. The most fundamental security work is often low-hanging fruit that does not require much evidence to justify (as discussed in fundamentals).

Similarly, eating nutritious food does not require firsthand experimentation to prove the health benefits. We believe in a best practice because of its role in mitigating many risks at once. We trust certain authorities, like doctors, to help us expedite mitigation work without building firsthand evidence ourselves.

The breaking and building ratio relates to our discussion in Foundations about risk-based security. Some best practices are beneficial in mitigating so many risks that it becomes inefficient to argue for them all individually. Logging is an excellent example of this. An organization does not need to spend engineering time discovering a risk that would warrant a logging system; those risk scenarios surround us. Logging has immense value that rarely needs wholesale justification. Centralized logging, for these reasons, is often suggested as a best practice.

As discussed in fundamentals: Low-hanging mitigation fruit comes under different names: best practices, checklists, maturity models, or standards. Early security teams will often look to these as sources of work instead of discovering evidence and proving risks.

However, all organizations evolve. The risks become increasingly unknown. These early shortcuts eventually diminish in value. Suddenly, the fundamentals aren’t enough.

Unique and innovative organizations quickly run out of shortcuts: Blockchain, Self-Driving, AI, Social, etc. Additionally, some organizations become so complex through acquisitions or growth that uncertainty spikes about what exists, how things work, and what risks are out there. Simple questions like “where should we start?” require significant guidance for prioritization.

Under these conditions, a little bit of breaking becomes more useful for surfacing and prioritizing risks.

Unfortunately, breaking can be addictive in this way: who doesn’t like finding vulnerabilities? While exciting, the trend does not last long, and over-investment in breaking eventually dips into an organizational anti-pattern.

Respecting the Law of the Lever

Security teams should be built with a balance of building and breaking in mind. Breaking will out-leverage building in terms of the work it generates for others to complete. It is not uncommon for a security engineer to discover a vulnerability with a few hours of effort. That vulnerability might require weeks of engineering hours to fix with extensive coordination.
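Some back-of-the-envelope arithmetic makes this leverage concrete. The hour figures below are hypothetical, chosen only to illustrate the amplification the paragraph above describes.

```python
# Hypothetical effort figures: a few hours to find a vulnerability,
# weeks of coordinated engineering to fix it.
discovery_hours = 4      # one breaker, part of an afternoon
fix_hours = 2 * 40       # two engineer-weeks of remediation work

# Each hour of breaking generates this many hours of work for builders.
amplification = fix_hours / discovery_hours
print(amplification)  # 20.0

# A single full-time breaker at this ratio floods the backlog:
weekly_findings = 40 / discovery_hours           # findings per breaker-week
generated_hours = weekly_findings * fix_hours    # remediation hours created
print(generated_hours)  # 800.0 hours of fix work per breaker-week
```

At these (made-up) rates, one breaker-week creates roughly twenty engineer-weeks of remediation, which is why breaking must be rationed against building rather than scaled in isolation.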

As discussed in Building, bug fixing can spiral into large amounts of collaborative work with customer impacts. A widely repeated belief by experienced security engineers is that everything has bugs. So, where does the breaking end? It shouldn’t end, but it should be rationed against building.

A surplus of findings from breaking activity will increasingly generate churn and burnout. Both the breakers who find risks and the individuals trying to keep up will experience toil.

The suffering comes down to a desire for impact. Everyone works to have an impact. Security teams are no exception. Security teams prefer to see their discoveries from breaking mitigated. Fixes are an indication that breaking work has had an impact; it has come full circle. Breaking has early leverage in this way, though with diminishing returns.

Breaking is also rarely interrupted. Once a breaker has triaged a finding, they can continue breaking. This is not the case if they are also invested in the mitigation: the breaker is interrupted to assist it.

Engineering leadership becomes crucial in detecting these nuanced imbalances. How are findings fixed? Do we invest more hours into finding more issues? Do we scale and diversify breaking activities with a long-term headcount? Do we change gears and begin building to avoid classes of problems? Do we incentivize these activities differently?

Often, the perception is that findings from breaking are “unplanned work.” Engineers may view unplanned work as non-incentivized. Is the output of breaking viewed as a distraction? Will it contribute to the incentive structure for engineers to work on risks?