Saturday, April 4, 2026

When “Vulnerable” Isn’t Vulnerable: A False Positive in Nuclei External Interaction Checks

 

In November 2025, I came across an interesting behavior while using Nuclei—one that highlights an important lesson in application security: not every finding is a real vulnerability.

This write-up documents a false positive scenario I reported, how it happens, and why understanding your tools is just as important as running them.


The Scenario

While testing for external service interaction (commonly used in detecting SSRF or out-of-band vulnerabilities), I used the template:

external-service-interaction.yaml

The expectation was straightforward:

  • The target application should trigger an outbound request
  • That request should hit an OOB (out-of-band) service like Interactsh
  • If observed → potential vulnerability

However, the results didn’t align with that assumption.


The Unexpected Behavior

The scan returned:

[info] external-service-interaction

At first glance, this suggests that:

  • The target server made an outbound request
  • The system might be vulnerable

But upon deeper inspection, that wasn’t actually happening.


What Was Really Happening

Instead of the target initiating the interaction, the behavior was caused by the scanner itself.

Here’s what the template effectively validated:

  • The Interactsh service was reachable
  • Nuclei could resolve the generated domain
  • DNS/HTTP interaction occurred between the scanner and the OOB service

Crucially missing:

  • No outbound request from the target server
  • No real SSRF or external interaction vulnerability

This means the “positive” result could mislead testers into thinking a system is vulnerable when it isn’t.


Root Cause Insight

The issue stemmed from how DNS resolution and interaction checks were handled.

The template did not strictly validate whether:

  • The interaction originated from the target
  • Or from the scanning environment itself

Because of this, self-generated interactions could be interpreted as valid findings.
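One way to tighten a check like this — shown here as a hypothetical sketch, not ProjectDiscovery's actual patch — is to stop treating bare DNS resolution as a finding, since the scanning environment's own lookup of the generated domain can trigger it, and instead require a protocol that only the target connecting back would produce:

```yaml
# Hypothetical tightening of the matcher — NOT the actual fix.
# Matching only on an HTTP callback avoids counting DNS lookups
# performed by the scanning environment itself as findings.
matchers:
  - type: word
    part: interactsh_protocol
    words:
      - "http"  # require a full HTTP request back from the target,
                # not just DNS resolution of the payload domain
```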


Reporting and Resolution

I reported this issue to the ProjectDiscovery team, and the response was quick and collaborative. After testing the patch locally, the false positive no longer appeared.


Key Takeaways

1. Tools Can Mislead

Automated scanners are powerful—but they’re not infallible. Blind trust can lead to incorrect conclusions.

2. Validate the Source of Interaction

For OOB testing, always confirm:

  • Who initiated the request?
  • Is it truly coming from the target?

3. Understand Template Logic

Templates define behavior. A small logic gap can create large testing inaccuracies.

4. Debug Mode Is Your Friend

Running with:

-debug -vv

helped reveal what was actually happening behind the scenes.


Final Thoughts

This wasn’t a vulnerability in a target—it was a gap in detection logic. But addressing it improves the reliability of results for everyone using the tool.

Kudos to the ProjectDiscovery maintainers for the quick turnaround and fix.

False positives are often overlooked, but they matter. Reducing them improves signal quality and, ultimately, leads to better security decisions.

