OSCP+ Pivoting Tool Choice: TUN (Ligolo-NG) vs SOCKS (Chisel) vs Transparent Proxy (sshuttle) - Which Fits Your Target Mix?


Mastering the OSCP+ Pivot: Precision Over Guesswork

The fastest way to lose half a day in an OSCP+ lab isn't failing an exploit; it's building a pivot that "works" for a browser and quietly breaks everything else. That pain has a shape: mixed traffic (HTTP + SMB/AD + RDP/WinRM), proxy-ignorant tools, and DNS behaving like a polite saboteur.

Keep guessing, and you don't just waste time; you burn attempts, confidence, and the clean narrative your notes are supposed to protect.

This post gives you an OSCP+ pivoting tool choice you can make under stress: when to go TUN (Ligolo-NG), when SOCKS (Chisel) is enough, and when a transparent proxy (sshuttle-style) is the low-friction middle path, based on your target mix and constraints, not preferences.


Quick Definition: Pivoting is routing or proxying your traffic through an intermediate host so your tools can reach an otherwise inaccessible network segment.

  • 🔹 TUN: Acts like you "have an interface" directly in the target network.
  • 🔹 SOCKS: Detours traffic app-by-app (requires proxychains or app settings).
  • 🔹 Transparent Proxy: Captures traffic with fewer per-app settings, but each fails differently.

The Repeatable Validation Method:

Reachability → DNS → Service Behavior

This workflow cut my pivot-related "mystery time" roughly in half.

Here's the calm way to pick. Then prove it's real. Then move.


1) Who this is for / not for (so you don't waste a weekend)

Safety note up front: This article is for authorized lab work and in-scope penetration testing only. Pivoting is a lateral movement skill; misusing it can cause harm. If you don't have written permission and clear scope, stop here.

Who it's for

  • OSCP/OSCP+ style learners who pivot across subnets weekly, not yearly
  • Consultants who want a repeatable "first pivot" decision rule that survives stress
  • Anyone juggling Windows + Linux targets and mixed protocols (HTTP + SMB + RDP/WinRM)

Who it's not for

  • Anyone without explicit authorization and scope (seriously)
  • "One-host only" tests where direct access is enough
  • Environments where installing agents/binaries is prohibited (policy-first wins)

Takeaway: The "best" pivot tool is the one that matches your traffic shape and your constraints, not the one with the loudest fans.
  • Start with authorization + constraints
  • Then match tool to protocol mix
  • Then validate quickly (don't hope)

Apply in 60 seconds: Write down your top 3 protocols today (e.g., HTTP, SMB, RDP) before you pick a pivot method.

Quick lived-experience confession: the first time I "learned pivoting," I didn't learn pivoting. I learned how to stare at a terminal with growing dread while my tools lied to me. That's normal. The fix is a decision rule you can follow when you're tired, hungry, and 40 minutes behind schedule.


2) The real question: what kind of traffic are you trying to move?

The most expensive pivot mistake is treating every packet like it's the same kind of packet. It's not. Your target mix usually includes:

  • Web traffic (HTTP/HTTPS, browsers, APIs)
  • Windows/AD traffic (SMB, LDAP, Kerberos, RPC-ish behaviors)
  • Remote admin (RDP, WinRM, SSH)

Pivoting mental model in 60 seconds (TUN vs SOCKS vs transparent)

TUN makes your machine behave like it has an interface on the far network. Tools that expect "real networking" tend to calm down.

SOCKS is an application-level detour. If your app is proxy-aware, it's fast and clean. If not, it's a brick wall with a smiley sticker.

Transparent proxy (like sshuttle-style behavior) tries to capture traffic without asking each app to cooperate. It's often low-friction, but assumptions matter (TCP focus, DNS quirks, egress rules).

โ€œTarget mixโ€ checklist: protocols, tools, and pain points

  • Which protocols must work reliably? (Pick two that matter most.)
  • Which tools are you actually using? (Nmap, Burp Suite with an external browser setup in Kali, Impacket-style tooling, RDP clients, etc.)
  • Do you need scanning/port discovery through the pivot, or only known services?
  • Do you need name resolution inside the remote network (AD DNS), or can you live on IPs?

Curiosity gap: Why the same pivot "works" for HTTP but fails for AD

Because HTTP tools often tolerate proxies and retries. AD tooling often expects a more "native" network presence: multiple ports, service discovery behaviors, name resolution dependencies, and client libraries that don't politely ask your SOCKS proxy for permission. That mismatch is why your browser sings while your AD tooling sulks.

Takeaway: Before choosing a pivot tool, confirm you're allowed to tunnel the way you plan to tunnel.
  • Yes/No: Do you have written authorization for lateral movement?
  • Yes/No: Does scope allow deploying an agent/binary on an intermediate host?
  • Yes/No: Are there egress restrictions you must respect (only 80/443, proxy required, etc.)?

Apply in 60 seconds: Write one sentence: "My pivot must work even if outbound is restricted to ____."

Small but timely truth: most real environments have some form of monitoring and policy around tunneling and lateral movement. That doesn't mean "don't pivot" (you may be explicitly contracted to). It means your choices should be auditable, explainable, and aligned to the rules of engagement, especially if you're working under a formal methodology like what NIST describes for technical security testing and assessment.



3) TUN mode (Ligolo-NG): when you need "I'm basically on that network"

TUN is the choice you make when your toolchain needs the network to feel "real." Not "real-ish." Real.

What TUN gives you (and why scanners suddenly behave)

TUN creates a routed path that looks like an interface. That matters because a lot of tooling, especially anything doing discovery, negotiation, or multi-port behavior, assumes it can speak normally at the network layer.

Where you'll feel the difference:

  • Port discovery feels less "mysteriously incomplete"
  • Multi-port protocols stop acting like they're behind a curtain
  • Clients that ignore proxy settings suddenly become usable

Where TUN shines in OSCP-style labs (multi-service + mixed tooling)

If your day includes even one of these, TUN is a strong default:

  • AD-heavy segments (where name resolution and multiple services matter)
  • Mixed Linux/Windows clients where some apps refuse to proxy
  • "I need it to behave like I'm there" situations (RDP/WinRM alongside SMB/LDAP)

Personal note: I used to fight with "why won't this client respect my proxy?" until I accepted a boring truth: some tools aren't going to cooperate, especially under time pressure. TUN is how you stop negotiating with them.

Tradeoffs you'll feel: setup overhead, routing, and visibility

  • More moving parts: routes, interfaces, and debugging network layers
  • More responsibility: you can accidentally route traffic you didn't intend
  • More payoff: once stable, it supports more of your toolchain

Let's be honest: TUN feels like magic until DNS bites

DNS is the sabotage artist that wears a polite suit. Your tunnel can be perfect and your experience still fails if name resolution points to the wrong place, leaks to the wrong resolver, or doesn't match how the remote network expects queries to behave.

Show me the nerdy details

When a pivot "feels real," it usually means your traffic is routed in a way that your OS and libraries treat as ordinary networking. That changes how discovery, negotiation, and fallback behaviors work. The flip side is you now own routing choices: what goes through the tunnel, what stays local, and how name resolution is handled.

Takeaway: Pivoting costs time; budget it like you budget enumeration.
  • Input 1: How many tools must work through the pivot? (e.g., 3 vs 10)
  • Input 2: How often will you switch subnets today? (1 vs 4)
  • Input 3: How proxy-aware is your toolchain? (low/medium/high)

Apply in 60 seconds: If you need many tools and expect multiple subnet shifts, default to a more "network-native" approach (often TUN).


4) SOCKS (Chisel): the fast lane for app-by-app proxying

SOCKS is the tool you reach for when you want speed, control, and a smaller blast radius, especially if your workflow is mostly "a few proxy-aware apps."

When SOCKS is exactly enough (browser + API calls + single toolchain)

  • You're doing web-heavy work: browser + Burp Suite + curl-like workflows
  • You already know the target service ports (less scanning through pivot)
  • You want a pivot that's quick to stand up and quick to tear down

I've had days where SOCKS was the hero because the engagement was basically "web app + one database port." Anything more would've been busywork.

Where SOCKS surprises you (tools that ignore proxy settings)

This is the classic trap: your proxy works, so you assume your tool will use it. But some tools:

  • don't support proxies at all
  • support proxies only in certain modes
  • support HTTP proxies but not SOCKS (or vice versa)
  • silently fall back to direct connections (the worst kind of "help")
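One way to catch the worst failure mode above (silent direct fallback) is a "dead canary proxy": point the client at a proxy that cannot possibly work and see whether it still reaches the target. If it does, it went direct. A minimal sketch using only Python's standard library; the local test server and ports are illustrative assumptions, not a recipe for any particular tool:

```python
# Sketch: detect "silent fallback to direct" with a dead canary proxy.
# A well-behaved proxy-aware client fails loudly when its proxy is dead;
# a client that still reaches the target quietly went direct.
import socket
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"direct")
    def log_message(self, *args):  # keep the demo quiet
        pass

def free_port():
    """Grab a port nothing is listening on (our dead canary proxy)."""
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    return port

def check_respects_proxy(target_url, dead_proxy_port):
    """True if the client errors out when the proxy is dead (it really
    used the proxy); False if it silently connected direct anyway."""
    handler = urllib.request.ProxyHandler(
        {"http": f"http://127.0.0.1:{dead_proxy_port}"})
    opener = urllib.request.build_opener(handler)
    try:
        opener.open(target_url, timeout=3)
        return False  # reached the target anyway: silent direct fallback
    except OSError:
        return True   # proxy failure surfaced: the client respected it

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), Hello)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{srv.server_address[1]}/"
    print(check_respects_proxy(url, free_port()))  # urllib: True
    srv.shutdown()
```

The same trick works at the command line: export a proxy variable pointing at a closed port and see whether your tool fails (good) or succeeds (it ignored you).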

The "proxy-aware tooling" rule: who plays nice, who doesn't

Plays nice: browsers, many CLI web tools, many package managers, some API clients.

Often doesn't: lower-level scanners, some protocol-specific clients, tools that spawn multiple subprocesses without inheriting proxy settings.

Curiosity gap: The one setting that silently turns SOCKS into a time sink

It's not one setting so much as one assumption: "my whole toolchain is proxy-aware." The moment that assumption fails, your time disappears into troubleshooting. That's why SOCKS works best when your workflow is intentionally app-limited.

Show me the nerdy details

SOCKS is powerful because it's precise: you can decide which applications and which traffic should take the detour. It's fragile when your workflow includes tools that operate below the application layer or rely on libraries that don't expose proxy controls cleanly.

Takeaway: SOCKS wins when your work is web-heavy and your toolchain is proxy-aware by design.
  • Fewer tools to support = fewer surprises
  • App-by-app control reduces accidental traffic
  • Best for "known services," not "scan everything" days

Apply in 60 seconds: List your top 5 tools and mark each: "proxy-aware: yes/no/unsure." If "unsure" dominates, consider TUN.
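If you want that 60-second audit as a scratch script, here is a tiny sketch. The tool names and yes/no/unsure values below are placeholders - substitute your own toolchain:

```python
# Sketch of the "top 5 tools" proxy-awareness audit from the tip above.
from collections import Counter

def audit(toolchain):
    """toolchain: dict of tool name -> 'yes' / 'no' / 'unsure'.
    Applies the article's rule: if 'unsure' or 'no' dominates,
    lean TUN; otherwise SOCKS stays viable."""
    counts = Counter(toolchain.values())
    risky = counts["unsure"] + counts["no"]
    return "lean TUN" if risky >= counts["yes"] else "SOCKS viable"

my_tools = {            # example values only - audit your own list
    "firefox": "yes",
    "burp": "yes",
    "nmap": "unsure",
    "impacket": "unsure",
    "rdp-client": "no",
}
print(audit(my_tools))  # → lean TUN
```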


5) Transparent proxy (sshuttle): the "just make it work" middle path

sshuttle-style transparent proxying is the option people fall in love with because it feels like cheating, in the good way. When it works, you get that "why isn't everything like this?" glow.

Why sshuttle feels effortless (especially in Linux-heavy stacks)

It often fits nicely when:

  • you have reliable SSH reachability
  • your environment is Linux-first (or at least Linux-friendly)
  • you want a "route-like" experience without full VPN complexity

Real talk: the first time I used a transparent-ish approach successfully, I got cocky. Then I met an environment with strict egress policies and learned humility: quickly, loudly, and in front of my own notes.

What "transparent" really means (and what it does not cover)

"Transparent" typically means your applications don't need individual proxy settings to benefit. But it doesn't mean:

  • every protocol is covered equally
  • non-TCP traffic is handled the way you expect
  • DNS behavior will magically match the target environment

Egress assumptions: SSH reachability, latency, and reliability

This approach often assumes SSH is allowed and stable. If the network is "opinionated" (tight firewalls, inspection, timeouts), your experience becomes inconsistent: fast wins followed by confusing failures.

Curiosity gap: Why sshuttle can be perfect… until the network gets opinionated

Because it relies on the network letting you be clever. Some networks do. Some networks would prefer you weren't there.

Show me the nerdy details

Transparent proxy approaches typically hook into routing/packet handling so apps don't need explicit proxy settings. That reduces friction, but it also means your troubleshooting must consider OS behavior (routes, resolver choices, interface priorities) rather than only app configuration.

Short Story: The day "it connects" lied to me

I once had a pivot that looked flawless on paper. The tunnel came up. The "simple test" succeeded. I even wrote the victory line in my notes, because I'm dramatic and I like closure. Then I tried the actual work: a mix of web browsing, SMB enumeration, and an RDP hop. The browser behaved. SMB acted like it was listening through a wall.

RDP connected once, then never again. I spent 90 minutes blaming the target, then the firewall, then my coffee intake. The culprit was boring: name resolution and route priority weren't aligned with the traffic I was pushing. I hadn't validated the pivot as a system, only as a single connection. That day taught me a rule I still follow: if a pivot is real, it should pass three proofs (reachability, DNS, service behavior) without needing excuses.


6) Decision matrix: pick in 90 seconds (by constraints, not preference)

This is the section you come back to when you're under exam pace or client pressure. Read it like a checklist you'd trust when you're tired.

If your target mix is AD-heavy (SMB/LDAP/Kerberos/RDP)

  • Default leaning: TUN-style approach
  • Why: AD workflows often need "network-native" behavior across multiple services
  • Exception: If your tasks are narrowly scoped to one proxy-aware app, SOCKS can be enough

If your target mix is web-heavy (HTTP/HTTPS APIs + browser workflows)

  • Default leaning: SOCKS-style approach
  • Why: web tooling is usually proxy-friendly, and you get fast setup + small blast radius
  • Exception: If you must scan broadly or use stubborn clients, TUN becomes attractive

If your constraint is egress (only 80/443, tight firewalling, flaky paths)

  • Default leaning: choose the approach that fits your allowed transport and policy
  • Why: reliability beats elegance; a fragile pivot is a productivity sink
  • Operator move: plan for a fallback option before you start "debugging feelings"

If your constraint is time (exam pace vs consulting pace)

  • Exam pace: pick the method that reduces unpredictable troubleshooting
  • Consulting pace: pick the method that's easiest to explain and document for a report

โ€œRed flagโ€ rules: when a tool choice will backfire later

  • If you need scanning + AD + RDP and pick SOCKS as your only plan
  • If your environment forbids binaries/agents and you pick a method that requires them
  • If you haven't decided how DNS should behave and you choose a "magic-feeling" setup

Takeaway: Decide by constraints, not preference.
  • Choose TUN when you need broad tool compatibility and "I'm on that subnet" behavior.
  • Choose SOCKS when you're web-heavy and can keep the workflow proxy-aware.
  • Choose sshuttle-style when SSH is reliable and you want low-friction, but keep an escape hatch.

Apply in 60 seconds: Write your one-liner: "My pivot must support ____ (protocols) with ____ (constraints)." Then pick.
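The matrix above is mechanical enough to restate as a small helper. This is a sketch of the section's default leanings, not an official rule; the protocol labels and return strings are my own:

```python
# Sketch of the constraint-first decision matrix as a tiny function.
def pick_pivot(protocols, proxy_aware_toolchain, need_scanning, ssh_reliable):
    """protocols: a set like {"http", "smb", "rdp"}. Returns a default
    leaning per the matrix; constraints can still override it."""
    ad_heavy = bool(protocols & {"smb", "ldap", "kerberos", "rdp", "winrm"})
    if ad_heavy or need_scanning:
        return "TUN (Ligolo-NG)"          # network-native behavior needed
    if proxy_aware_toolchain:
        return "SOCKS (Chisel)"           # web-heavy + proxy-aware
    if ssh_reliable:
        return "transparent (sshuttle)"   # low-friction middle path
    return "TUN (Ligolo-NG)"              # stubborn tools, no reliable SSH

print(pick_pivot({"http", "https"}, True, False, True))      # → SOCKS (Chisel)
print(pick_pivot({"http", "smb", "rdp"}, False, True, True)) # → TUN (Ligolo-NG)
```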

Commercial reality check (neutral, not hype): OSCP/OSCP+ learners often touch tools like Nmap, Burp Suite, and Windows RDP clients in the same session. That "mixed tooling" reality is exactly why TUN approaches get so much love: they reduce the number of special cases you have to remember.


7) Common mistakes (the expensive kind)

Mistake #1: Picking a pivot tool before mapping your toolchain's proxy behavior

If you don't know which tools are proxy-aware, you're gambling with your time. And time is the only resource you can't "sudo" your way into later.

Mistake #2: Treating DNS as an afterthought (it becomes your "phantom bug")

DNS failures rarely look like DNS failures. They look like "service is down," "credentials are wrong," or "the target is flaky." In AD-heavy segments, name resolution is not a convenience; it's part of the environment's nervous system.

Mistake #3: Debugging the wrong layer (routing vs proxy vs service reachability)

When something fails, ask: is the problem path (routing), policy (firewall/egress), translation (proxy behavior), or identity (DNS/name resolution)? Picking the wrong layer is how you lose an hour.

Mistake #4: Pivot sprawl: too many hops, no notes, no rollback

I've watched smart people build a tunnel chain that worked… until it didn't… and then nobody knew which hop introduced the failure. If you can't explain your pivot in three sentences, it's already too complicated.

Micro-rule that saves lives: one pivot at a time, one verification pass, then proceed.


8) Don't do this: two traps that silently burn attempts

Trap #1: "It connects" ≠ "My tool works through it"

A tunnel can be "up" while your workflow is effectively blocked. Connection success only proves one thing: a connection succeeded. It does not prove your scanner, your client libraries, or your name resolution behaves correctly.

Trap #2: Mixing pivot methods without a plan (and losing observability)

Mixing approaches can be fine, if it's deliberate. It's chaos if you're switching out of frustration. The danger is losing observability: you stop knowing what traffic goes where, and debugging becomes guesswork.

Here's what no one tells you: your notes are part of the pivot

Your notes are not a diary. They're the control plane for your own thinking. When pivoting gets complex, your documentation is what keeps you from re-learning the same lesson three times in one afternoon.

Takeaway: Gather the right facts before you "compare tools."
  • Which intermediate hosts are allowed to run binaries/agents?
  • What outbound ports and protocols are permitted from the pivot point?
  • Which services must you reach (web, SMB, RDP/WinRM) and which are "nice to have"?

Apply in 60 seconds: Write a 3-line "constraints card" and keep it visible while you work.


9) Validation without chaos: prove your pivot is real (quick, repeatable)

If you take nothing else from this article, take this: validate pivots like a grown-up. Calmly. Repeatedly. With the same three proofs every time.

The 3 proofs: reachability, name resolution, service behavior

  • Reachability: can you reach the subnet and the host(s) you care about?
  • Name resolution: do names resolve the way the target environment expects?
  • Service behavior: does the actual protocol behave normally (not just "a port is open")?
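The three proofs can be scripted so you run them the same way every time. Below is a minimal sketch using only the standard library; the HTTP-flavored proof 3 is an example, so swap in whatever protocol check matters for your target (SMB, RDP, etc.), and treat host/port values as placeholders:

```python
# Sketch: the three proofs (reachability, DNS, service behavior) as code.
import socket

def proof_reachability(host, port, timeout=3):
    """Proof 1: can we complete a TCP handshake at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def proof_dns(name):
    """Proof 2: does the name resolve via your current resolver?
    (Where the answer comes from matters too - check your resolver config.)"""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

def proof_service_http(host, port, timeout=3):
    """Proof 3 (HTTP flavor): does the service actually speak its
    protocol, not just accept a connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(b"GET / HTTP/1.0\r\nHost: %b\r\n\r\n" % host.encode())
            return s.recv(12).startswith(b"HTTP/")
    except OSError:
        return False
```

Run all three against one representative host before you trust the pivot; a pass on proof 1 alone proves very little.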

Here's a small, honest number: when I started forcing myself to do these three proofs, I cut my pivot-related "mystery time" by roughly half in labs. Not because I got smarter, but because I stopped guessing.

Minimal "evidence pack" you should capture (for reports/exams)

  • One sentence: what pivot method you used and why (constraints)
  • One proof per category (reachability/DNS/service behavior)
  • Any assumptions (e.g., "DNS handled by remote resolver" vs "IP-only workflow")

When to switch approaches (a calm escalation ladder)

  • If proxy-aware apps work but non-proxy tools fail → consider moving toward TUN-like behavior
  • If everything is flaky under a "transparent" setup → re-check assumptions (egress, DNS) or simplify
  • If you're debugging for more than a reasonable window → switch method or reduce scope intentionally

Takeaway: A real pivot passes three proofs without excuses.
  • Reachability is necessary but not sufficient
  • DNS is the silent deal-breaker in many mixed environments
  • Service behavior is where truth shows up

Apply in 60 seconds: Add a "3 proofs" checklist to your notes template and refuse to skip it.


10) When to seek help / pause the test

If scope/authorization is unclear: stop and get it in writing

If you can't answer "am I allowed to pivot like this?" with a document, you're not stuck; you're at a boundary. Respect it.

If you're breaking client policy (agents, binaries, tunneling rules)

Some environments explicitly disallow certain tooling patterns. That's not a puzzle to solve. That's a rule to follow, and a conversation to have with stakeholders.

If your pivot destabilizes a production segment (roll back, notify, document)

Even authorized testing should be safe. If something you did causes instability, your job is to respond professionally: roll back, notify the right people, document what happened and what you changed.

If you're stuck: what to ask a mentor/teammate without oversharing sensitive data

  • State your constraints: egress rules, allowed tooling, target protocols
  • State your proofs: what passed, what failed (reachability/DNS/service behavior)
  • Ask a bounded question: "Given these constraints, which approach reduces proxy-ignorant tool issues?"

Timely, practical truth: teams that do this well usually have a lightweight methodology. Not because it's fancy, but because it keeps the work accountable and repeatable. That's why standards-oriented guidance (like NIST's) keeps showing up in real-world testing conversations: less drama, more traceability.


11) Next step (one concrete action)

Build a one-page "Target Mix → Pivot Choice" cheat sheet. Not a manifesto. One page.

Build a one-page "Target Mix → Pivot Choice" cheat sheet

  • Write your top 5 tools (the ones you actually use)
  • Mark whether each is proxy-aware (yes/no/unsure)
  • Map them to TUN / SOCKS / transparent proxy as your default

Neutral action line: Next time you pivot, use the sheet once, then update it with what broke.



FAQ

Is Ligolo-NG (TUN) better than Chisel (SOCKS) for OSCP/OSCP+ labs?

Often, yes: when your lab day is mixed (web + SMB/AD + RDP/WinRM) and you need broad tool compatibility. SOCKS can be faster when your workflow is intentionally limited to proxy-aware apps (especially web-heavy tasks).

When should I use SOCKS instead of a TUN interface?

Use SOCKS when you want app-by-app control, a smaller blast radius, and your key tools reliably respect proxy settings. It's excellent for browser-driven workflows and many HTTP-centric tasks.

Why do some tools ignore SOCKS/proxy settings?

Some tools operate below the application layer, use libraries that don't expose proxy controls cleanly, or spawn subprocesses that don't inherit your proxy environment. In those cases, a more network-native approach (often TUN) reduces friction.

Does sshuttle work for Windows targets, or is it Linux-only in practice?

In practice, sshuttle-style workflows are commonly used in Linux-heavy setups. Whether it "works for Windows targets" depends on what you mean: the targets can be Windows, but your pivoting approach still has assumptions about transport (often TCP) and how traffic is captured and forwarded. Always validate with the three proofs.

What's the easiest pivot method when egress is restricted to 80/443?

The easiest method is the one that matches the allowed transport and policy constraints. Under tight egress, reliability beats elegance. Plan a primary approach and a fallback, and avoid sprawling multi-hop improvisation.

How do I know if my pivot problem is DNS vs routing vs firewall?

Use the three proofs: if reachability fails, suspect routing/firewall. If IP works but names fail, suspect DNS. If ports look open but the actual protocol misbehaves, suspect service behavior issues (or tool/proxy mismatch).
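That triage rule is mechanical enough to encode as a quick helper. A sketch mirroring the answer above; the labels are mine:

```python
# Sketch: map three-proof outcomes to the layer you should blame first.
def triage(reachable, names_resolve, protocol_ok):
    if not reachable:
        return "suspect routing/firewall"    # path or policy problem
    if not names_resolve:
        return "suspect DNS"                 # IP works, names fail
    if not protocol_ok:
        return "suspect service behavior / tool-proxy mismatch"
    return "pivot looks healthy"

print(triage(True, False, False))  # → suspect DNS
```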

Can I chain multiple pivots safely, and what's the practical limit?

You can chain pivots in authorized work, but each hop adds failure modes and reduces observability. The practical limit is less about a number and more about whether you can still explain, validate, and roll back cleanly. If you can't describe your chain simply, simplify it.

What should I document during pivoting for a clean exam/report narrative?

Document the constraint ("why this pivot"), the method (high-level), and the three proofs (reachability/DNS/service behavior). Add any assumptions and a rollback note. This is the difference between "it worked" and "it's defensible."

Which pivot approach is most reliable for AD enumeration traffic?

Reliability often improves when the network experience is more native, so many people default toward TUN-like behavior for AD-heavy segments. But "most reliable" still depends on constraints: policy, egress, and allowed tooling.

What's the biggest mistake people make when switching pivot tools mid-test?

Switching out of frustration without re-validating the three proofs. If you change the pivot method, re-check reachability, DNS, and service behavior; otherwise you carry the same problem into a new tunnel and blame the tunnel for it.


Conclusion

Here's the loop we opened at the top: why does the browser work while everything else fails? Because traffic shape matters, and some pivot methods ask your tools to cooperate, while others make the network feel real enough that cooperation isn't required.

If you're time-poor (and most of us are), your highest-leverage move is simple: choose by constraints.

  • TUN when you need broad compatibility across mixed protocols and stubborn clients.
  • SOCKS when your workflow is intentionally proxy-aware and mostly web-heavy.
  • Transparent proxy when SSH is reliable and you want low friction, while staying honest about edge cases.

Pivot Choice in 30 Seconds (Infographic)

Step 1: Your traffic
  • Mostly web + proxy-aware apps?
  • Mostly AD/SMB/RDP + mixed tooling?

Step 2: Your constraints
  • Tight egress / strict policy?
  • Need scanning/discovery through pivot?

Decision
  • SOCKS → web-heavy + proxy-aware
  • TUN → mixed/AD-heavy + stubborn tools
  • Transparent → SSH-reliable + low-friction, validate carefully

Always finish with: Reachability → DNS → Service behavior.

Your 15-minute next step: open your lab logging / notes workflow and add a "3 proofs" block. Then write a one-line rule: "If DNS and service behavior don't pass in 10 minutes, I switch approaches." That single rule saves more attempts than any tool preference ever will.

Last reviewed: 2026-01.