
An OSCP-Style Pentest Report on Kali
An OSCP-style pentest report on Kali fixes late-night reporting chaos by treating evidence like source code: predictable paths, clean naming, and a single build command. If you want a broader reporting blueprint to compare against, keep a reference to a professional OSCP report template so your “finding cards” don’t drift over time.
The pain is modern and specific: Flameshot saves to the wrong place, Markdown links break after one folder move, and “final_final2.png” quietly becomes your new version control. Keep winging it and you’ll lose time, lose confidence in your own proof, and eventually ship the wrong file.
This workflow lets you capture screenshots with Flameshot, write findings in Markdown, and export a consistent Pandoc PDF—fast, rebuildable, and calm. An OSCP-style report is a reproducible, evidence-first write-up: clear impact, verifiable proof, copy/paste repro steps, and actionable remediation—built to withstand a skeptical reader. If you want a “why this matters” refresher in a single place, see a practical guide to a Kioptrix pentest report and note how it keeps evidence tied to steps.
- Capture evidence into one known place
- Write findings in a repeatable “finding card” format
- Build the PDF with one command, every time
Apply in 60 seconds: Create a /reporting-kit/ folder and add an /evidence/ directory right now.
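That first step is literally one command. A minimal sketch (folder names match the ones used throughout this guide):

```shell
# Create the report home and the evidence root in one shot;
# -p creates parents and is safe to re-run
mkdir -p reporting-kit/evidence
ls reporting-kit
```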
Evidence-first workflow in 90 seconds (the “report muscle memory” loop)
Capture → label → drop into /evidence/ (no desktop clutter)
The fastest reporting pipeline is the one you can do while half-asleep. Your goal is not “perfect notes.” Your goal is zero-friction evidence capture that lands in the right place automatically. If you’ve been experimenting with different approaches, it helps to compare this flow against a dedicated note-taking system for pentesting so your capture habits stay consistent across labs and client-style practice.
Here’s the loop you’re building:
- Capture with Flameshot (always saved into one known folder)
- Label quickly (a short filename that tells the truth)
- Link it immediately into the finding you’re writing
- Build the PDF with one command when you’re done
When I started doing this, my “evidence” was scattered across Downloads, Desktop, and a random folder called screens. I lost at least one clean exploit chain because I couldn’t find the screenshot that proved the claim. Never again. (If your lab workflow is VM-heavy, it also helps to keep your environment stable—things like VirtualBox networking modes (NAT vs Host-Only vs Bridged) can quietly change what “proof” looks like in screenshots and logs.)
Let’s be honest… most “reporting problems” are folder problems
People blame writing. They blame Pandoc. They blame “not being good at reports.” But most report pain is a boring, unsexy issue: your artifacts have no home. No home means no paths. No paths means broken images. Broken images mean you don’t trust your own PDF. And if you don’t trust it, you’ll keep re-checking it, wasting time you don’t have.
One rule that prevents 80% of missing-evidence chaos
Rule: If it isn’t in /evidence/, it doesn’t exist.
That sounds harsh. It’s also how you stop bleeding time. Every screenshot, exported Burp request, nmap output snippet—anything you plan to reference—goes into the evidence directory (or a finding subfolder inside it). Your report should be buildable from that tree alone. No “oh it’s on my desktop, hold on.” This becomes even more important once you’re collecting fast enumeration artifacts; a tight loop like a fast enumeration routine for any VM can generate a lot of output quickly, and your evidence folder is where sanity lives.
- One evidence root folder
- One naming convention
- One report entry file
Apply in 60 seconds: Make a folder named evidence and promise yourself you won’t save screenshots anywhere else.

Who this is for / not for (save yourself a bad setup)
For: OSCP-style labs, CTF-to-client writing practice, small consulting gigs
This pipeline shines when you’re doing practical exploitation and need to turn it into a narrative that a client (or examiner) can follow. It’s ideal for:
- OSCP-style exam practice reports and lab write-ups
- Solo consultants who want a consistent report “look” without Word drama
- Internal security teams doing quick, repeatable engagement summaries
Also: if you’re the person who says “I’ll write the report later” and then later turns into a gremlin at midnight—this is for you. I have been that gremlin. If you’re building a sustainable cadence, pairing this reporting kit with a realistic schedule like a 2-hour-a-day OSCP routine keeps “evidence capture” from becoming a once-a-week panic.
Not for: teams needing full GRC platforms, complex review workflows, heavy redaction tooling
If your org requires multi-author workflows, approval chains, tracked changes, or automated compliance mappings, you might outgrow this setup. This is a lean pipeline—not a full reporting platform.
That said, even larger teams often keep a Markdown-to-PDF pipeline for rapid internal deliverables while the “big report” flows elsewhere.
If you only do this once a year, do this simpler version instead
If you rarely write reports, don’t build a cathedral. Use:
- One report.md
- One evidence/ folder
- One pandoc command you copy/paste
Repeatability is still the win. You just don’t need extra moving parts. If you’re still setting up your lab environment, start from a safe baseline—something like a safe hacking lab at home gives you guardrails so your “report practice” stays legal and controlled.
- Yes: you produce PDFs more than once a month
- Yes: you’ve lost time fixing broken image links
- Yes: you want a report you can rebuild on a new machine
Apply in 60 seconds: If you answered “yes” to two items, commit to the folder structure in the next section.
Flameshot that doesn’t betray you later (settings that matter in court-of-client)
Fixed save directory, consistent format, and timestamp naming
Flameshot is popular because it’s quick: capture, annotate, save. But “quick” can turn into messy unless you lock three things:
- Save directory: your report’s evidence/ folder
- Format: PNG for clarity (especially for terminal text)
- Naming: timestamp + short label (so duplicates don’t overwrite)
In practice, this means every screenshot is immediately usable in a report without hunting for it. The difference is subtle until the day it saves you 45 minutes. If your reporting workstation is a non-standard Kali setup (ARM64, Pi, etc.), you’ll also appreciate having a “known-good” capture flow—especially when your browser tooling behaves oddly (see Burp browser not available on Kali ARM64 for the kind of friction that can derail evidence capture mid-session).
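On Kali, those three settings live in Flameshot’s config file. A sketch of the relevant keys (key names can vary between Flameshot versions, so verify them against your own ~/.config/flameshot/flameshot.ini; the save path is an assumption matching this guide’s layout):

```ini
; ~/.config/flameshot/flameshot.ini (sketch; verify keys in your version)
[General]
savePath=/home/kali/reporting-kit/evidence
savePathFixed=true
; strftime-style pattern so files sort chronologically
filenamePattern=%F_%H%M_
```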
Annotation style that scans well in PDFs (arrows, boxes, blur)
Annotations should be readable at “PDF zoom level 100%.” My personal rule: if the arrow is thinner than a coffee stirrer, it’s too thin. Use:
- Rectangles to frame the proof (the exact command output line)
- Arrows sparingly (one arrow per screenshot is usually enough)
- Blur for secrets (API keys, session tokens, client names)
One of my early mistakes: I’d annotate everything like a detective board. It looked dramatic. It also looked unprofessional and made the reader’s eyes bounce.
Hotkeys that reduce friction (so you actually capture evidence)
Hotkeys matter because evidence capture is easy to “skip” when you’re in the flow. If your hotkey is awkward, you’ll tell yourself “I’ll screenshot later.” Later is a lie.
Set one keybind you can hit without thinking. Then practice once: exploit → screenshot → filename → back to work. Your hands will learn it. (If you’re tuning your shell for speed, a solid Zsh setup for pentesters complements this workflow nicely—fewer keystrokes, fewer mistakes, more consistent logs.)
Here’s what no one tells you… “pretty screenshots” can make findings harder to trust
A screenshot is not a poster. It’s proof. Over-stylized screenshots—huge highlights, too much blur, heavy zoom—can accidentally reduce credibility. The reader starts wondering: what am I not seeing?
Aim for “clinical.” Clear proof. Minimal decoration. If it feels boring, you’re doing it right.
Show me the nerdy details
PNG preserves sharp terminal text better than JPEG. If your screenshots look “smudgy” in PDF, it’s often compression or scaling. Keep screenshots reasonably sized, and avoid repeatedly re-saving images through lossy tools. When you do need to shrink images, do it once in a controlled way and keep the original.

The evidence folder architecture that makes Pandoc painless
A minimal structure that scales: report.md, /evidence/, /src/, /out/
Your future self wants one thing: predictable paths. Here’s a structure that works for labs and real engagements:
reporting-kit/
    report.md
    finding.md
    evidence/
        F-01/
        F-02/
    src/
        metadata.yaml
    out/
        report.pdf
You can keep it simpler if you want. The essential idea is that report.md is the entrypoint and evidence/ is where proof lives. The rest is support.
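The whole tree above can be scaffolded in one go. A sketch (the finding IDs F-01/F-02 are examples):

```shell
# Scaffold the reporting-kit layout; safe to re-run thanks to -p
mkdir -p reporting-kit/evidence/F-01 \
         reporting-kit/evidence/F-02 \
         reporting-kit/src \
         reporting-kit/out
# Empty placeholders for the entrypoint, template, and metadata
touch reporting-kit/report.md \
      reporting-kit/finding.md \
      reporting-kit/src/metadata.yaml
```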
If you’re still deciding where your lab lives (VirtualBox vs VMware vs Proxmox), pick one and standardize—this overview of VirtualBox vs VMware vs Proxmox can help you make that call without second-guessing every week.
Naming convention for screenshots (so you can find them under pressure)
Here’s a naming convention that behaves well in PDFs and file listings:
- YYYY-MM-DD_HHMM_F-01_proof.png
- YYYY-MM-DD_HHMM_F-01_repro-step2.png
- YYYY-MM-DD_HHMM_F-02_before-fix.png
Notice what’s missing: vague names. If a screenshot name can’t tell the truth in 2 seconds, it’s not helping you.
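You never have to type that timestamp by hand. A minimal sketch (the finding ID and label are placeholders):

```shell
# Build a screenshot name that sorts chronologically and states
# which finding it belongs to
finding="F-01"
label="proof"
name="$(date +%Y-%m-%d_%H%M)_${finding}_${label}.png"
echo "$name"
```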
Curiosity gap: why “IMG_1234.png” quietly destroys credibility
Clients rarely see your file tree. But you do. And when you can’t find the right proof quickly, you start taking shortcuts: “close enough screenshot,” “I’ll skip the terminal output,” “this is obvious.” That’s when reports become fragile.
Good naming is invisible discipline. It’s the quiet kind of professionalism that prevents mistakes before they exist.
Optional: split by finding IDs (F-01, F-02) for instant traceability
Splitting evidence by finding ID is one of those small moves that pays off fast. If a reviewer asks “where’s the proof for F-03?” you don’t search—you open a folder.
It also makes it easier to redact: you can review sensitive artifacts by finding folder rather than scanning an entire directory. If you’re practicing on intentionally vulnerable targets, it’s helpful to keep the same discipline across machines—this vulnerable machine difficulty map is a good way to plan progression while keeping your evidence tree consistent.
Markdown notes that read like a professional report (not a lab journal)
The “finding card” template: Title → Impact → Evidence → Repro → Fix → References
Markdown is a superpower because it forces clarity. The structure below turns chaos into a readable finding every time:
## F-01: [Short, specific title]
**Severity:** [Low/Medium/High]
**Affected:** [system/app/component]
### Impact (plain English)
What could a real attacker do, and why does it matter?
### Evidence (what you can prove)
- Screenshot: evidence/F-01/...
- Command output: show the exact line that supports the claim
### Reproduction (step-by-step)
1) ...
2) ...
3) ...
### Remediation (what to do next)
- Immediate mitigation
- Longer-term fix
### References (optional)
Vendor docs, standards, or internal ticket numbers
I used to write findings like diary entries: “then I tried this, then I tried that…” It felt authentic. It also made reviewers tired. The finding card makes the reader’s job easy.
How to write repro steps that survive copy/paste (and don’t gaslight the reader)
Repro steps should behave like a recipe. If the reader follows them, they get the same result. That means:
- Include prerequisites (VPN on, user role, host/IP, port)
- Use exact commands (not “run nmap”)
- Show expected output (one line is enough)
One honest trick: write the steps as if you’re handing them to your most literal friend. If there’s ambiguity, they will find it. If you want your enumeration steps to stay sharp (and not “generic nmap”), keep a cheat sheet of easy-to-miss Nmap flags so your evidence includes the one line that actually proves the point.
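“Show expected output” in practice means anchoring the claim to one saved line rather than pasting a whole scan. A sketch (the file path and service banner are illustrative, not real scan data):

```shell
# Save tool output into the finding's evidence folder, then pull out
# the single line that proves the claim
mkdir -p evidence/F-01
printf '80/tcp open  http    Apache httpd 2.4.49\n' > evidence/F-01/nmap-80.txt
grep 'open' evidence/F-01/nmap-80.txt
```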
Severity phrasing that doesn’t start arguments (US client tone)
In US client-facing writing, the easiest way to trigger pushback is to sound absolute when you’re actually describing risk. Use language that is firm but fair:
- Prefer: “This could allow…” over “This allows…” when assumptions exist
- Prefer: “If an attacker has access to…” when access is required
- State constraints clearly (internal-only, authenticated, rate-limited)
It’s not about being timid. It’s about being accurate—accuracy builds trust, and trust makes remediation happen.
Micro-format tricks: callouts, code fences, and short paragraphs for passage ranking
Write so each section stands alone. That means tight paragraphs and scannable elements:
- Keep paragraphs to 1–3 sentences
- Use bullets for lists of conditions and artifacts
- Use code fences for commands, never inline command soup
When I switched to “small paragraphs,” my reports got read faster—by humans. That’s the only metric that matters. If you want to go deeper on Pandoc capabilities (without guessing flags), keep the official manual handy: Pandoc Manual.
Repeatable templates that don’t feel robotic (open loops without fluff)
A single report.md entrypoint with includes (or copy blocks)
Keep one file that represents “the report.” Even if you break findings into separate files later, the build should start at report.md.
This single entrypoint becomes your muscle memory.
I like to keep a copy-ready “finding card” in finding.md and paste it for each finding. Fancy include systems are optional. Consistency is not.
The “executive summary” pattern that actually gets read
Make the top of your report feel like a dashboard:
- What was tested (scope)
- What matters (top risks)
- What to do next (short action list)
When I started doing this, the tone of review calls changed. Less “what does this mean?” and more “how fast can we fix it?” That’s a win. If you’re practicing toward OSCP, pairing a crisp exec summary with a reference list of OSCP exam commands keeps your report steps reproducible and audit-friendly.
Open loop: how to hint at risk without overclaiming (trust-building phrasing)
Here’s the secret: you can be compelling without being dramatic. Create forward pull by naming the risk path, not by inflating severity.
Example: “If an attacker can reuse a session token, they can impersonate a user.” That’s an open loop—the reader wants to know can they?—and you’ll answer it with evidence. (And if your evidence depends on proxy tooling, don’t let DNS betray your story—see a Proxychains DNS leak fix so your traffic proof and your narrative stay aligned.)
A “proof chain” checklist: claim → screenshot → command output → timestamp
Every finding should have a proof chain that looks like this:
- Claim: what the vulnerability is
- Proof: screenshot that shows it
- Support: the command output line that anchors the screenshot
- Context: timestamp and scope context (where/when observed)
This is what makes a report feel “expensive.” Not the template. Not the font. The proof chain.
- Pandoc PDF: best for repeatable builds and clean versioning
- Word/Docs: best for multi-review and tracked changes
- Hybrid: write in Markdown, export to DOCX when stakeholders demand it
Apply in 60 seconds: Choose one “default” output format for the next 30 days and build around it.
Pandoc PDF output that looks expensive (without LaTeX pain)
Choose your base: eisvogel, pandoc-latex-template, or default
Pandoc is an open-source document converter that can turn Markdown into PDF (and much more). The manual is thorough—and honestly, a little intimidating. That’s why your first goal is not perfection; it’s a stable build.
You can start with Pandoc’s defaults and still produce a readable PDF. If you want a more “report-ish” look, templates like eisvogel are popular.
Use them if they reduce formatting work, not if they create a new hobby.
(If you’ve ever had to debug “why does my lab environment feel different today,” you’ll recognize the same principle from platform setup—e.g., a WSL2 + Kali + VMware hybrid setup works great when you keep the moving parts minimal and predictable.)
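Before any template, confirm a bare build works. A minimal sketch (guarded so it degrades gracefully if pandoc or a LaTeX engine isn’t installed yet):

```shell
# Simplest possible build: defaults only, no template, no extra flags.
# Add styling back one option at a time once this works.
mkdir -p out
if command -v pandoc >/dev/null 2>&1 && [ -f report.md ]; then
  pandoc report.md -o out/report.pdf
else
  echo "skipping build: pandoc or report.md not available yet"
fi
```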
Embed screenshots reliably (paths, relative links, image sizing)
The most common failure mode: images show up in Markdown preview, then vanish in PDF. The fix is usually boring:
- Use relative paths from report.md
- Don’t move the evidence folder after linking
- Keep filenames simple (letters, numbers, hyphens)
Example Markdown image (relative path from report.md, filename following the convention above): `![F-01 proof](evidence/F-01/YYYY-MM-DD_HHMM_F-01_proof.png)`
Table formatting that won’t explode in PDF
Tables are where many PDFs go to die. Keep them narrow. If you must use a table, prefer simple two-column layouts. For longer content, use lists instead of tables.
I learned this after watching a table wrap into a chaos accordion that made the report look like it was falling down stairs.
Fonts, margins, and heading styles that skim on mobile preview
Even when a report is “a PDF,” it’s often first read on a laptop screen or a phone. That means your PDF needs:
- Headings with enough contrast and spacing
- Body text that isn’t tiny
- Short paragraphs and frequent visual breaks
Remember: skimmability is not laziness. It’s respect.
Curiosity gap: the one Pandoc flag that stops “why is this page blank?”
If you ever generate a PDF and see weird blank pages, don’t panic and rewrite the report. It can be template/page-break behavior. Before you spiral, simplify: build with fewer options, confirm baseline output, then add styling back step-by-step.
Open loop answer: the fix is usually not “more formatting,” it’s less—until your pipeline is stable.
Show me the nerdy details
If you want to go deeper later: Pandoc’s PDF route often uses a LaTeX engine under the hood, which is why “templates” can affect page breaks and spacing. Start with the simplest command that builds reliably. Then add one improvement at a time—template, metadata, margins—so you can identify what changes output behavior.
One-command builds (Makefile + script) so you never “hand-format” again
make pdf pipeline: clean → build → output → checksum
The emotional benefit of make pdf is underrated. It removes decision fatigue. It also prevents the “I forgot which command I used last time” spiral.
A simple flow looks like:
- Clean: remove old outputs
- Build: run Pandoc with known flags
- Output: save to /out/report.pdf
- Checksum: optional, but useful when sharing versions
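The four steps above fit in a short script you can call from a Makefile target. A sketch (paths follow this guide’s layout, and the pandoc call is guarded so the script degrades gracefully where pandoc isn’t installed):

```shell
# build.sh — clean → build → output → checksum, one command
set -u
rm -rf out                       # clean old outputs
mkdir -p out                     # fresh output dir
if command -v pandoc >/dev/null 2>&1 && [ -f report.md ]; then
  pandoc report.md -o out/report.pdf                  # build
  sha256sum out/report.pdf > out/report.pdf.sha256    # checksum
else
  echo "skipping: pandoc or report.md not available"
fi
```

A Makefile `pdf` target can simply run `sh build.sh`, so `make pdf` stays the only command you ever memorize.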
I started doing this after sending the wrong PDF version once. It was… not my finest hour. A checksum won’t save you from everything, but it can save you from “oops, wrong file.” If you’re the kind of person who also automates tool setup, you might enjoy building a small companion script like a mini exploitation toolkit in Python to standardize the tiny tasks that otherwise steal attention.
Auto-insert metadata: client, date, tester, scope (from a config file)
Metadata sounds fancy; it’s just a way to stop retyping the same header information. A metadata.yaml file can hold:
- Engagement name
- Date range
- Tester name
- Scope summary
This is also where you keep the tone consistent across reports—especially helpful when you’re tired and your phrasing starts to wobble.
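A sketch of what that file can hold (title, author, and date are standard Pandoc metadata fields; the scope line is an assumption you’d wire into your own template):

```yaml
# src/metadata.yaml — stop retyping the header; verify field names
# against whatever template you use
title: "Penetration Test Report"
author: "Tester Name"
date: "Engagement date range"
subtitle: "Scope: one-line scope summary"
```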
Optional: build variants (internal vs client) with redaction toggles
Even for small engagements, you may want:
- Internal version: includes raw tool output, internal notes, extra context
- Client version: minimal secrets, clean narrative, only necessary artifacts
You can do this with separate entry files (report_client.md and report_internal.md) or by including/excluding sections. Keep it simple at first.
If your work touches web apps, a clean evidence split also pairs well with a focused workflow like a Burp Suite WebSocket workflow so “internal notes” don’t leak into “client proof.”
Stop. Don’t ship the first PDF. Run this 30-second QA loop
Before you send anything, do a fast scan:
- Do all images render?
- Do headings make sense when skimming?
- Does each finding have impact + proof + repro + fix?
- Did you accidentally include secrets in a screenshot?
Yes, it takes 30 seconds. That’s the point. You’re buying peace.
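The first check (do all images render?) can even be scripted. A sketch that fails loudly when a linked image is missing (the demo report.md and image it creates are purely illustrative):

```shell
# Create a tiny demo report + image so the check has something to scan
mkdir -p evidence/F-01
: > evidence/F-01/proof.png
printf '![F-01 proof](evidence/F-01/proof.png)\n' > report.md

# Extract every ![alt](path) target and confirm the file exists
status=0
for img in $(sed -nE 's/.*!\[[^]]*\]\(([^)]+)\).*/\1/p' report.md); do
  [ -f "$img" ] || { echo "MISSING: $img"; status=1; }
done
[ "$status" -eq 0 ] && echo "all images present"
```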
- If you build PDFs weekly, you’ll save the most
- If you build PDFs monthly, you’ll still feel the relief
- If you build PDFs rarely, keep the pipeline minimal
Apply in 60 seconds: Mini calculator: (reports per month) × (minutes lost to formatting) = your monthly time tax. Write the number down and decide if it’s acceptable.
A short story
Two years ago, I finished a long lab chain at night—one of those sessions where everything finally clicks and your terminal history feels like a victory parade. I told myself I’d “document it tomorrow.” Tomorrow arrived with fresh amnesia and three new tasks. By the time I opened my notes, the screenshots were scattered across three folders, and the one image that proved the pivot was missing.
I rebuilt parts of the chain from memory and shipped a report that was technically correct—but emotionally fragile. I could feel it: the reader would sense the uncertainty between the lines. The next week I built a tiny reporting kit. Same tools, same skills, different container. The first time make pdf produced a clean report, I actually laughed. Not because it was perfect—but because it didn’t require heroics.
Common mistakes that quietly wreck reports (and how to avoid them)
Mistake #1: evidence captured before the final exploit path is stable
This is the classic: you screenshot a half-working path, then later discover the final chain is slightly different. Suddenly your “proof” is out of sync with your actual repro steps.
Fix: capture evidence when the chain is stable—or capture early screenshots as “exploration” and clearly label them as such. Don’t let early evidence masquerade as final proof.
Mistake #2: screenshot shows outcome, not the proof (missing commands / context)
A screenshot of “I got a shell” is not always proof. Proof is: what led to it. The reader needs context: target, command, output, and the line that matters.
Fix: pair “outcome” screenshots with at least one screenshot that shows the command/output that explains the outcome.
Mistake #3: untrusted severity language (“critical!!!”) without business mapping
Severity is not a vibe. In real client work, inflated language creates skepticism. In exam-style writing, it can also make your report feel immature.
Fix: explain impact in plain English first. Then pick a severity that matches the conditions required (auth required? internal only? high complexity?). Calm writing reads as confidence. (If you’re practicing on Kioptrix-style machines, grounding severity and impact is easier when your enumeration is disciplined—this Nmap guide for Kioptrix in Kali is a good reference for evidence-friendly scans.)
Mistake #4: broken image paths after moving folders
Moving folders after writing is how images disappear. It’s not a Pandoc problem. It’s a discipline problem.
Fix: decide on structure first. Keep relative paths. If you must move, do it once, then rebuild and verify images immediately.
Mistake #5: findings that cannot be reproduced from your steps
This is the silent killer: the report sounds plausible, but the steps don’t actually work. Reviewers lose trust instantly.
Fix: do a quick “cold read” of your own repro steps. Pretend you didn’t write them. If you get stuck, the reader will too. If your repro depends on VPN stability in platforms like TryHackMe, don’t ignore disconnects—see TryHackMe OpenVPN keeps disconnecting so your “steps” don’t silently fail for environmental reasons.
“Don’t do this” checklist before you hit Send (loss-prevention)
Don’t leak secrets: tokens, internal IPs, usernames in screenshots
It’s painfully easy to leak something in a screenshot. Once it’s in a PDF, it travels. Blur aggressively. If you’re unsure, blur it. (Yes, even internal IPs sometimes.)
My rule: if the artifact could create embarrassment on a screen share, it gets reviewed twice. (And if part of your evidence is “secure access posture,” pairing your report with a checklist like Kali SSH hardening can make remediation recommendations more actionable.)
Don’t ship inconsistent timestamps (timezone mismatches look suspicious)
When timestamps don’t match across screenshots and logs, readers wonder what’s going on. You don’t want your client playing detective with your evidence.
Pick a timezone and be consistent. If you’re remote or traveling, note it in the report metadata.
Don’t mix scopes (lab habits vs client constraints)
Lab reporting habits can sneak into client reports: extra scanning, unrelated targets, “I tried this too…” chatter. It muddies the narrative and can cause friction.
Keep client reports scope-clean. If something interesting is out of scope, mention it carefully as an observation without implying you tested it.
Don’t bury the fix: remediation should be scannable and prioritized
Remediation should not be a single paragraph that starts with “It is recommended that…” and ends in fog.
Use a short prioritized list. Include an immediate mitigation and a longer-term fix when applicable. Your report should make action easier, not harder.
- Gather: scope, target list, dates, tester contact
- Gather: findings summary (titles + severities)
- Gather: top 3 remediations (plain English)
Apply in 60 seconds: Create a “Send-ready” checklist line at the bottom of report.md and tick it before exporting.
FAQ
What’s the fastest way to turn pentest notes into a PDF on Kali?
Write findings in Markdown (one report.md entrypoint), keep screenshots in a predictable evidence/ folder, then run a single Pandoc command (or make pdf) to build the PDF.
Speed comes from repeatability, not typing faster.
Is Pandoc good enough for client-ready pentest reports?
Yes for many solo or small-team engagements—especially when you value consistent output and versionable text. If your client requires tracked changes or multiple reviewers in Word, consider exporting to DOCX as a bridge. The content quality still matters more than the tool.
How do I keep screenshots from breaking in the exported PDF?
Use relative paths from report.md, keep filenames simple (letters/numbers/hyphens), and avoid moving the folder after linking images.
Rebuild the PDF right after you capture evidence so you catch broken paths early.
What’s a good folder structure for pentest evidence and notes?
A practical starter is report.md at the root, an evidence/ folder (optionally split by finding ID), and an out/ folder for the final PDF.
The core principle is one canonical evidence root.
How do I write findings that feel “OSCP-style” but still professional?
Use a clear finding card: impact, evidence, reproduction steps, remediation. Keep language accurate and calm. Show proof for each claim. “OSCP-style” is really “reproducible and evidence-based,” not “dramatic.”
Should I include tool output (nmap, burp, metasploit) verbatim?
Include what supports the claim and helps remediation. Long raw output often belongs in an appendix or trimmed to the relevant lines. If the reader has to scroll past walls of text to find the point, you’re taxing them. If you do include Metasploit output and hit environment weirdness, resolving setup issues like msfconsole bundler / Ruby version mismatch on Kali can keep your evidence clean and reproducible.
What’s the minimum QA checklist before sending a report to a client?
Verify images render, findings have impact/proof/repro/fix, secrets are redacted, the executive summary matches the body, and the PDF filename/version is correct. Then do a fast skim of headings to ensure the story still makes sense.
Can I generate both DOCX and PDF from the same Markdown source?
Often yes. Many people write in Markdown and export PDF for delivery, DOCX for stakeholders who want inline comments. Keep the source clean and avoid formatting tricks that only work in one output format.

Next step (one concrete action)
Create /reporting-kit/ today: add report.md, /evidence/, a finding.md template, and a Makefile with make pdf—then run one practice finding end-to-end in under 15 minutes
If you do nothing else, do this one thing: create a folder and run a tiny rehearsal. Not a full report. One finding. One screenshot. One repro. One fix. One PDF build.
That rehearsal is where the whole system becomes real. It also exposes the sharp edges—paths, naming, template quirks—while the stakes are low.
And yes: you’ll be tempted to “improve” the template before you’ve used it. Resist. Use it once first. You’ll learn more in 15 minutes than you will in two hours of “optimizing.” If you want the official course context that this style of reporting is often modeled after, see: OffSec PEN-200 course information.
Wrap-up (and your 15-minute pilot)
Remember the hook: the problem wasn’t your writing. It was the late-night chaos that makes writing feel impossible. A repeatable pipeline turns reporting into something calmer: capture evidence, write clearly, build reliably.
If you want a clean “pilot” right now, here’s your 15-minute plan:
- Minute 1–3: Create reporting-kit/ with report.md and evidence/
- Minute 4–7: Create one finding card using the template
- Minute 8–10: Capture one Flameshot screenshot and link it
- Minute 11–15: Run your build command and skim the PDF
If the PDF builds and the evidence shows up, you’ve already won. Everything else is refinements. If you’re also practicing on curated vulnerable machines, keeping your learning path organized with Kioptrix levels and a reference walkthrough like Kioptrix Level 2 walkthrough can make your reporting reps feel structured instead of random.
Last reviewed: 2025-12-26