
Precision Over Assumption: Mastering PUT & WebDAV Recon
An OPTIONS-enabled response in Kioptrix can waste more time than it saves when you read it too confidently. One visible PUT token in an Allow header is enough to make beginners sprint toward “write confirmed,” even when the server has only handed them a clue wearing a very convincing costume.
The real friction is not spotting interesting HTTP methods. It is separating advertised methods, path-specific behavior, and actual WebDAV or PUT validation without turning a clean lab into a noisy scrapbook of half-proven claims. That gap is where bad notes, false positives, and overconfident reporting tend to bloom.
“Keep guessing, and you do not just lose time. You train yourself to trust the wrong evidence.”
This guide helps you confirm PUT/WebDAV behavior safely in a lab, narrow your claims to what the server actually proves, and document findings in a way that survives both scrutiny and hindsight. The method is deliberately modest: capture headers first, test the exact path, use harmless objects only if scope allows, and treat retrieval as the final referee.
It is built on an evidence-first workflow: OPTIONS, header capture, DAV clues, retrieval checks, cleanup, and confidence language that does not outrun the facts.
The Path to Precision:
- Separate what the server says from what it actually does.
- Look for the WebDAV fingerprints that matter.
- Prove the smallest true thing, and stop there.
Because in this corner of recon, the quiet win is not excitement. It is precision. And that changes everything.
Fast Answer
If an OPTIONS response suggests PUT or WebDAV-related methods, do not jump straight into uploading files. In an authorized Kioptrix-style lab, the safer workflow is to confirm the exact allowed methods, check whether WebDAV headers are present, test with harmless non-executable files, document every response, and stop the moment the behavior becomes ambiguous. The goal is evidence, not noise.
Allow: is a lead, not a verdict.
- Capture the response before you interpret it
- Confirm behavior on the exact path, not the whole host
- Use harmless content before you write anything stronger
Apply in 60 seconds: Create a note with seven fields: path, method, headers, status, body, retrieval result, confidence level.
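That seven-field note can be sketched as a tiny Python template. The field names here are my own suggestion, not a formal standard; adapt them to your own note system:

```python
# Hypothetical seven-field evidence note for one HTTP method test.
# Field names are illustrative, not a formal standard.
def new_evidence_note(path):
    return {
        "path": path,
        "method": None,            # e.g. "OPTIONS", "PUT"
        "headers": {},             # response headers as captured
        "status": None,            # numeric status code
        "body": "",                # response body excerpt
        "retrieval_result": None,  # "yes" / "no" / "unclear"
        "confidence": "observed only",
    }

note = new_evidence_note("/dav/")
note["method"] = "OPTIONS"
note["status"] = 200
```

The point of the template is that an empty field is visible. A missing "retrieval_result" screams at you in a way a forgotten sentence never does.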

Who this is for / not for
This is for you if you are:
- Practicing on authorized Kioptrix-style labs
- Learning how to verify risky HTTP methods without creating avoidable mess
- Writing cleaner pentest notes and evidence logs
- Trying to distinguish “OPTIONS says yes” from “server actually accepts safe PUT”
This is not for you if you need:
- Instructions for real-world targets you do not own or administer
- A guide to weaponizing WebDAV misconfigurations
- A bypass playbook for production systems
- Legal advice about scanning or upload testing
I learned this distinction the slightly embarrassing way. Early on, I treated any interesting verb like a lottery ticket. The notes looked dramatic, but not reliable. A good mentor crossed out half my adjectives and said, “Show me what happened on this path, with this object, at this time.” Brutal. Useful. Instantly better. That same discipline pairs well with a repeatable Kioptrix recon routine because the quieter your workflow is, the easier it becomes to spot when one result truly deserves attention.
Eligibility checklist
Use this workflow only if all answers are Yes.
- Do you own the system or have explicit written authorization?
- Is file-write validation within the lab’s allowed scope?
- Can you keep the test object harmless and reversible?
- Can you document every request and every result?
Neutral next action: If any answer is No, stop at observation and record the ambiguity instead of escalating the test.
Start with OPTIONS, not assumptions
Why Allow: is a clue, not a verdict
OPTIONS is useful because it asks the server which communication options are available for a given URL. MDN describes it as requesting the permitted communication options for a URL or for the server as a whole, and OWASP frames it as the most direct starting point for identifying supported methods.
But the important phrase is for a given URL. That tiny detail is where many beginner mistakes begin. People see PUT once, mentally promote it to “host is writable,” and skip the careful middle. That is like hearing one violin warm up backstage and declaring the whole orchestra in tune.
In practice, several things can blur the picture:
- Reverse proxies may advertise methods differently from the origin
- Applications may expose authoring behavior only on specific routes
- Legacy modules may leave method traces without a clean, usable write path
- Authentication and ACLs may exist even when the method is visible
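To keep "advertised" claims honest, it helps to parse the Allow header mechanically rather than by eye. A minimal sketch, where the raw header value is a made-up example of what a capture might contain:

```python
# Parse an Allow header value into a normalized set of methods.
def parse_allow(header_value):
    return {m.strip().upper() for m in header_value.split(",") if m.strip()}

# Example value from a hypothetical captured OPTIONS response:
allow = parse_allow("GET, HEAD, POST, OPTIONS, PUT")
print("PUT" in allow)  # advertised on this path; nothing confirmed yet
```

Normalizing case and whitespace matters more than it looks: "put" and "PUT " should land in the same bucket, or your path-by-path comparisons drift.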
What to capture before doing anything else
- Full response headers
- Status code
- Exact request path tested
- Server banner and date if present
- Any variation between /, /dav/, /webdav/, or app-specific paths
A small ritual helps here. I write the path on a scratch line before I send the request. Not because I’m noble. Because after three tabs and two cups of coffee, everything starts to look like “that one route near the login page,” and that is how evidence turns into soup. If you are still building that habit, a structured HTTP enumeration workflow keeps this first step from drifting into improvisation.
The first fork: method enabled vs path writable
Why path specificity changes everything
A server may reject PUT at / but accept it in a subdirectory. A legacy CMS path may behave differently from the homepage. A virtual host can expose different method handling on the same IP. This is why the most disciplined question is not “Is PUT enabled?” but “On which exact path, under which exact conditions, did we observe what behavior?”
That question is slower. It is also infinitely more reportable.
Safer confirmation questions to answer
- Is the method advertised globally or only on one route?
- Does the response differ by path?
- Is authentication required before write attempts?
- Does the server normalize, rewrite, or redirect the target path?
- Does the server appear to store, reject, or fake-accept the object?
Here is the quiet truth most beginners miss: sometimes the most useful finding is not where PUT works. It is where PUT fails cleanly and consistently. That narrows the real surface area. In a report, that matters more than swagger. A neat “405 at root, 401 at authoring path, no anonymous write confirmed” can be more valuable than a noisy pile of guesses.
- One host can show different behavior across routes
- Auth-gated PUT is not the same as anonymous write
- Path rewrites can create false confidence
Apply in 60 seconds: Build a two-column note: “advertised” vs “confirmed.” Put every observation in the right box.
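The two-column habit can be sketched as a small ledger that refuses to let one observation sit in both boxes. The structure is illustrative, not a standard format:

```python
# Keep "advertised" and "confirmed" observations in separate buckets,
# so a claim can never quietly live in both.
ledger = {"advertised": [], "confirmed": []}

def record(ledger, claim, confirmed=False):
    bucket = "confirmed" if confirmed else "advertised"
    ledger[bucket].append(claim)

record(ledger, "PUT listed in Allow at /dav/")          # OPTIONS said so
record(ledger, "401 on anonymous PUT at /dav/", True)   # behavior actually observed
```

Notice that a 401 belongs in "confirmed": the confirmed thing is the rejection, which is exactly the kind of narrow, defensible fact this section is arguing for.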
WebDAV fingerprints that matter before any upload
Headers and behaviors worth checking
PUT alone is not the same thing as WebDAV support. The IETF’s WebDAV specification defines additional methods, headers, and semantics for resource properties, collections, namespace operations, and locking. In other words, WebDAV is a larger authoring framework, not just a single write verb wearing a clever hat.
Before any upload attempt, look for fingerprints such as:
- DAV: response headers
- MS-Author-Via indicators
- Behavior that suggests PROPFIND handling
- 207 Multi-Status responses, which RFC 4918 defines for situations where multiple status values need to be returned for independent operations
- Depth-related quirks on DAV-aware methods
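A hedged sketch of checking a captured response for these fingerprints. The header names come from RFC 4918 and common DAV-aware servers; the input dict is an invented capture, not live traffic:

```python
# Scan a captured response for WebDAV clues: DAV and MS-Author-Via
# headers, plus a 207 Multi-Status status code.
def dav_clues(status, headers):
    h = {k.lower(): v for k, v in headers.items()}  # header names are case-insensitive
    clues = []
    if "dav" in h:
        clues.append(f"DAV header: {h['dav']}")
    if "ms-author-via" in h:
        clues.append(f"MS-Author-Via: {h['ms-author-via']}")
    if status == 207:
        clues.append("207 Multi-Status response")
    return clues

print(dav_clues(200, {"DAV": "1,2", "Server": "Apache"}))  # ['DAV header: 1,2']
```

Returning a list of strings, rather than a yes/no, matches the advice above: each clue goes into its own note line instead of collapsing into one "DAV: yes" verdict.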
Why WebDAV clues deserve separate notes
Because they change interpretation. A plain PUT exposure can be serious, but WebDAV indicators may suggest broader authoring intent, legacy administration surface, or different risk around collections and properties. Blending these into one bucket weakens your analysis. Separate them. One note for “method exposure.” Another for “DAV capability indicators.” That little bookkeeping habit makes your report feel like a clean laboratory notebook instead of a thriller draft written at 2:13 a.m. It also fits naturally beside careful Apache recon and PHP-focused web stack recon, where the smallest header or module clue can quietly reframe the entire surface.
Show me the nerdy details
RFC 4918 extends HTTP/1.1 with WebDAV-specific methods and semantics, including property handling, namespace manipulation, and locking. That is why a 207 response or a visible DAV header tells you more than “some write thing happened.” It suggests the server may be speaking a richer authoring dialect. Even then, the presence of DAV-related behavior still does not prove anonymous, path-specific write success. Keep your claims narrow.

Safe confirmation workflow: prove write behavior without proving too much
Use a harmless test object
Choose a tiny, non-executable text file. Give it a distinctive filename that makes cleanup easy and avoids collisions. Avoid script extensions, archives, or anything a server might interpret, transform, or execute. Even in a lab, skipping straight to executable content collapses the line between verification and exploitation. The cleaner habit is slower by about three minutes and smarter by about three years.
Confirm in the least invasive order
- Capture the path-specific OPTIONS response
- Record DAV-related headers if present
- Test existence behavior first on the exact target path
- Attempt a minimal harmless upload only if the lab scope explicitly permits it
- Use GET to verify whether the object is actually retrievable
- Delete the object only if cleanup is authorized and documented
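The ordering above can be encoded as a simple gate that only releases the next step once the previous ones are documented, and that refuses the upload step when scope does not permit it. The step names are mine, not a standard:

```python
# Return the next permitted step given what has been documented so far.
# Ordering mirrors a least-invasive-first workflow; names are illustrative.
STEPS = [
    "capture_options",
    "record_dav_headers",
    "test_existence",
    "harmless_upload",   # only if scope explicitly permits
    "verify_with_get",
    "cleanup",           # only if authorized and documented
]

def next_step(done, scope_allows_upload):
    for step in STEPS:
        if step in done:
            continue
        if step == "harmless_upload" and not scope_allows_upload:
            return "stop: scope does not permit upload testing"
        return step
    return "done"

print(next_step({"capture_options"}, scope_allows_upload=False))
```

The useful property is the hard stop: when scope says no, the function refuses to suggest anything past observation, which is exactly the decision card below.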
Evidence to log for each step
- Request method
- Request path
- Status code
- Response headers
- Whether the file is retrievable
- Whether the content type changed on storage
- Whether cleanup succeeded
I still remember a lab where the server returned what looked like success for a harmless object, but the retrieval path behaved differently because of rewriting. The upload note said “confirmed.” The retrieval note said “unclear.” Only one of those sentences deserved to live. Since then, retrieval has been my lie detector. If your notes tend to blur at exactly this stage, a clean pentest report template or a better pentesting note-taking system can prevent the evidence from dissolving into mood.
Decision card: When A vs B
A. Stop at observation
Use this when OPTIONS advertises PUT but the scope does not clearly allow upload testing, auth appears required, or behavior is inconsistent.
B. Perform minimal harmless validation
Use this only when the lab scope allows a reversible write test and you can document retrieval and cleanup.
Trade-off: A is lower noise and lower certainty. B gives stronger evidence but only when tightly constrained.
Neutral next action: Pick the narrowest step that answers one precise question.
- Status codes are only part of the story
- Fake acceptance and rewritten paths happen
- Harmless content keeps the finding narrow and defensible
Apply in 60 seconds: Add a required note field called “retrieval confirmed: yes/no/unclear.”
Response codes that look simple but are not
405 Method Not Allowed
This usually suggests the method is blocked for that path. MDN notes that the Allow header lists methods supported by a resource and must be sent with a 405 Method Not Allowed response. That makes 405 more informative than many people realize. It is not glamorous, but it is wonderfully specific.
401 or 403
These may indicate the method exists behind authentication or ACL controls. That is materially different from public writable exposure. Do not flatten those into the same sentence. Your future self, reading the report at midnight before delivery, will thank you.
201 Created or 204 No Content
These are stronger indicators of write behavior, but they still do not finish the story. You need retrieval confirmation to distinguish storage from path rewriting, storage to a different location, or a response emitted by an intermediary.
207 Multi-Status
RFC 4918 defines 207 Multi-Status for cases where multiple independent operations need individual status reporting. If you see it, treat it as a significant DAV clue and capture it carefully because it changes interpretation.
500 or inconsistent errors
These often mark the cliff edge where good testers stop performing theater. A backend module may be half-configured, misrouted, or speaking through a proxy in a way that makes simple conclusions unsafe. That is not failure. That is the exact moment to document ambiguity rather than decorate it.
Mini confidence calculator
Give yourself 1 point for each of these:
- OPTIONS showed PUT on the exact path tested
- A harmless write attempt returned a success-like response
- The object was retrievable with the expected content
0–1 points: observation only. 2 points: promising but incomplete. 3 points: strong path-specific write evidence.
Neutral next action: Report the score and the missing piece instead of overstating certainty.
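The calculator is simple enough to make executable, which keeps the scoring consistent between notes instead of drifting with your mood:

```python
# Score write-behavior confidence: one point per independent piece of evidence.
def confidence_score(options_showed_put, success_like_response, retrievable):
    score = sum([options_showed_put, success_like_response, retrievable])
    labels = {
        0: "observation only",
        1: "observation only",
        2: "promising but incomplete",
        3: "strong path-specific write evidence",
    }
    return score, labels[score]

print(confidence_score(True, True, False))  # (2, 'promising but incomplete')
```

Two points with retrieval missing is the classic trap from the earlier anecdote: a success-like response that cannot be fetched back stays "promising but incomplete."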
Path hunting without turning the lab into confetti
Smarter places to check
Path selection should follow evidence, not adrenaline. Favor routes already hinted at by prior recon: content directories, legacy authoring paths, redirects, HTML comments, and names suggested by the app stack. OWASP’s guidance recommends verifying method support by issuing requests using different methods rather than trusting one response alone. That supports a focused, evidence-led matrix rather than broad, noisy guessing.
Low-noise logic for route selection
- Follow paths discovered from earlier recon
- Prefer high-probability routes over blind wordlists
- Compare behavior on a small number of candidate paths
- Document identical requests that produce different responses
A focused path matrix often teaches more than a loud scan ever will. One lab notebook of mine has only four routes on a page, each with status, headers, and retrieval notes. That tiny grid solved the puzzle. Another notebook from my noisier era had twenty-seven routes and the intellectual fragrance of a tipped-over toolbox. For this stage, wget mirroring for recon, curl-only recon, or even careful wordlist tuning can help you narrow paths without turning route discovery into confetti.
Sample low-noise matrix fields
- Path
- OPTIONS result
- DAV header present?
- Auth required?
- Harmless PUT attempted?
- GET retrieval result
- Cleanup result
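Those fields translate directly into rows you can diff across a handful of routes. A sketch, with invented routes and values; unknowns default to a visible placeholder so gaps never hide:

```python
# A low-noise path matrix: one row per candidate route, identical fields.
FIELDS = ["path", "options", "dav_header", "auth_required",
          "put_attempted", "get_result", "cleanup"]

def matrix_row(**kwargs):
    # Unknowns default to "-" so gaps are visible rather than silent.
    return {f: kwargs.get(f, "-") for f in FIELDS}

rows = [
    matrix_row(path="/", options="405", dav_header="no"),
    matrix_row(path="/dav/", options="200", dav_header="yes", auth_required="401"),
]
for r in rows:
    print(" | ".join(str(r[f]) for f in FIELDS))
```

Four rows like these, fully filled in, tell a clearer story than twenty-seven half-filled ones, which is the whole argument of this section.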
- Capture OPTIONS, headers, path, date, and server details.
- Split "method advertised" from "write confirmed."
- Check for DAV, 207, PROPFIND-style behavior, auth gates.
- Use one harmless object only if scope allows.
- Confirm storage, path, and content integrity.
- State only what was observed. Mark ambiguity clearly.
Common mistakes
Confusing advertised methods with confirmed capability
OPTIONS is a lead, not proof. That distinction sounds obvious until a flashy verb appears and the brain begins composing victory music.
Testing too many routes too quickly
You lose the thread of causality. When five things changed between request A and request B, your notes cannot explain the result. Analysis becomes astrology with status codes.
Skipping retrieval validation
A successful upload response is incomplete evidence on its own. Retrieval tells you whether storage happened where you think it happened and whether the object is actually accessible.
Ignoring auth boundaries
A protected write path is not the same risk as anonymous upload. OWASP’s guidance emphasizes verifying supported methods but does not grant permission to blur authorization context into a stronger claim than the evidence supports.
Forgetting cleanup
Even labs deserve tidy footprints. A harmless object with a distinctive name is easy to find and easier to remove when cleanup is authorized.
Mixing verification with exploitation in one step
This is the classic beginner shortcut. It feels efficient. It also makes the result harder to explain, harder to defend, and harder to teach from. First prove write behavior. Then, if the lab scope and exercise truly require more, let that become a separate phase, not a blurred accident. Many of these stumbles live in the same family as other Kioptrix enumeration mistakes and broader Kali Linux mistakes in Kioptrix labs, where eagerness outruns evidence by half a step and the notes pay the price.
- Limit the number of variables you change
- Keep auth context explicit in every note
- Treat cleanup as part of the test, not housekeeping afterward
Apply in 60 seconds: Add a final line to every test note: “What exactly does this prove, and what does it not prove?”
Reporting angle: what actually belongs in your notes
Strong report language
- “Server advertises PUT in OPTIONS response at [path].”
- “Harmless file upload behavior was / was not confirmed.”
- “WebDAV indicators were / were not observed.”
- “Behavior appears path-specific / auth-gated / inconsistent.”
Evidence that makes your report credible
- Header excerpts
- Status code sequence
- Retrieval result
- Scope note showing lab authorization
- Clear distinction between observation and inference
This is where a lot of junior notes wobble. They say too much in the first sentence and too little in the second. A clean finding often sounds modest: “PUT advertised at /dav/; anonymous harmless write not confirmed due to 401; DAV header observed.” That is not timid. That is professional. It tells the truth at the resolution your evidence supports.
Three real entities are useful to know here because they shape the mental model. MDN is helpful for method and header semantics. OWASP provides testing guidance and workflow framing. The IETF defines the WebDAV protocol in RFC 4918. Those are not decorations. They are the furniture in the room. And if you want that furniture arranged into something readable for others, learning how to read a penetration test report can sharpen how you write one.
Report-prep list
- Exact route tested
- Date and time of observation
- Headers that matter: Allow, DAV, auth-related details
- Status code progression
- Retrieval and cleanup outcome
- Confidence statement: observed, likely, or unconfirmed
Neutral next action: Turn the strongest path into a one-paragraph finding and leave weaker paths in the appendix or notes.
Short Story: the upload that taught the wrong lesson
Years ago, on a training box, I saw PUT in an OPTIONS response and felt the kind of excitement that makes your chair suddenly seem too small. I uploaded a harmless file, got a success-looking response, and wrote “write confirmed” in bold, like I was engraving a monument. Then a mentor asked me to retrieve it. I could not. The path redirected.
A second route behaved differently. A third returned a header I had ignored. My triumphant finding collapsed into three narrower truths: the method was advertised, one route handled the request oddly, and actual retrievable storage was unconfirmed.
That correction stung for about ten minutes and improved my work for years. Since then, I do not trust the first green light. I trust a chain: path, method, response, retrieval, cleanup, scope. It is less cinematic. It is also how you avoid teaching yourself the wrong lesson in a lab where the whole point is to learn clean habits before the stakes get taller.
Safety / Disclaimer
Lab-only boundary
Use this workflow only on systems you own, administer, or are explicitly authorized to test. Keep validation narrow, reversible, and documented. Prefer harmless objects and minimal requests over exploratory escalation.
What this guide is deliberately not doing
- It does not provide a production-target workflow
- It does not provide payload guidance for code execution
- It does not encourage bypassing authentication or controls
- It does not turn WebDAV confirmation into exploitation advice
This boundary matters for another reason too: good labs teach restraint. Anyone can make noise. The craft is making a useful observation, leaving a tidy footprint, and writing notes another operator can actually trust.
Next step
Build a one-page validation checklist. That is the fastest upgrade most learners can make in under 15 minutes. Create a small note template with: path tested, methods advertised, DAV indicators, harmless file attempt, retrieval result, cleanup result, and final confidence level.
Suggested confidence language
- Observed only: Method advertised, no safe write validation performed
- Partially confirmed: Success-like response seen, retrieval unclear
- Confirmed write behavior: Harmless object stored and retrievable on the tested path
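That three-tier language maps cleanly onto the evidence fields from earlier notes. A sketch that forces the wording to follow the evidence, using the tier names from the list above:

```python
# Choose report language from evidence, so prose cannot outrun the facts.
def confidence_language(write_attempted, success_like, retrievable):
    if not write_attempted:
        return "Observed only: method advertised, no safe write validation performed"
    if success_like and retrievable:
        return "Confirmed write behavior: harmless object stored and retrievable"
    if success_like:
        return "Partially confirmed: success-like response seen, retrieval unclear"
    return "Observed only: write attempt rejected or inconclusive"

print(confidence_language(True, True, False))
```

The branch order matters: retrieval is checked before the "partially confirmed" wording can fire, mirroring the rule that retrieval is the final referee.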
If you do only one thing after reading this, do this: make a template that forces you to separate observation from inference. It is the simplest antidote to noisy testing and overconfident prose. For some people that template becomes even stronger when paired with an Obsidian enumeration template or an OSCP host note template, because good structure keeps small truths from slipping through the floorboards.

FAQ
Is OPTIONS enough to prove PUT is enabled?
No. OPTIONS is strong reconnaissance, and both MDN and OWASP describe it as a direct way to learn allowed or supported methods for a given URL, but it still does not prove path-specific, retrievable write behavior on its own.
What is the difference between PUT support and WebDAV support?
PUT is an HTTP method. WebDAV is a broader extension to HTTP defined by the IETF that adds methods, headers, properties, collections, and locking behavior. Seeing PUT does not automatically mean you have full DAV capability.
If I get 201 Created, is that enough to call it confirmed?
It is stronger evidence than an advertised method alone, but it is still incomplete until you verify retrieval on the expected path and confirm what was actually stored.
Why should I upload a text file instead of a script in a lab?
Because it keeps the validation narrow, reversible, and easier to defend in notes. You are proving write behavior, not leaping into execution. That distinction improves both safety and report quality.
What does DAV: in the headers actually tell me?
It is a clue that the server may support WebDAV semantics. Treat it as an indicator worth documenting, not as universal proof of anonymous writable authoring.
Can a server advertise PUT but still block real uploads?
Yes. The advertisement may be path-specific, auth-gated, inconsistent, or influenced by upstream components. That is why retrieval and exact-path testing matter so much.
Does a 403 mean the method is not vulnerable?
Not necessarily. It may mean the method exists but is blocked by authorization or ACL controls. That is different from saying there is no relevant behavior at all.
How do I tell whether the behavior is path-specific?
Test only a small number of high-probability routes discovered from recon, and compare identical steps across them. If one path advertises or handles the method differently, record the difference rather than generalizing to the host.
What should I record in my notes for a pentest report?
At minimum: exact path, request method, headers, status code, DAV indicators, retrieval result, cleanup result, and a confidence statement separating observation from inference.
When should I stop testing and document ambiguity instead?
Stop when you see inconsistent responses across identical requests, signs of auth-required functionality outside scope, unclear storage behavior that could alter lab state, or unexpected server-side processing beyond simple file placement.
Conclusion
The curiosity loop from the beginning closes here: the exciting part is not that PUT appeared. The exciting part is that you now know how not to be fooled by it. OPTIONS can open the door, but disciplined validation tells you whether there is actually a room behind it.
The best lab habit in this corner of web testing is wonderfully unglamorous. Capture first. Separate method exposure from write confirmation. Note DAV clues separately. Validate with a harmless object only if scope allows. Retrieve. Clean up. Then report exactly what you observed, no more and no less.
In the next 15 minutes, build that one-page checklist and use it on one route only. That tiny pilot will sharpen your notes faster than ten louder experiments ever will.
Last reviewed: 2026-03.