Technology Apr 23, 2026 · 11 min read

Port Exhaustion, Context Switching, and Why "HttpClientFactory" Exists (.NET)

DEV Community
by Outdated Dev

Hello there!👋🧔‍♂️ When .NET apps call other HTTP APIs under load, production often teaches the same lesson in two different voices: cryptic socket errors such as “Only one usage of each socket address is normally permitted,” and slowdowns or instability that feel like “the network” but trace back to threads waiting on I/O the expensive way. The first story is mostly connection lifecycle (ports, pooling, TIME_WAIT). The second is mostly scheduling (blocking async work, thread-pool pressure, context switching).

What follows maps those symptoms onto System.Net.Http.HttpClient, explains why IHttpClientFactory from Microsoft.Extensions.Http (through AddHttpClient in ASP.NET Core and other generic host apps) is usually the right default, shows where Refit and similar libraries still sit on the same plumbing, and ends with dos and don’ts you can use in code review tomorrow.

The two problems in one sentence

Port exhaustion happens when the OS runs out of usable ephemeral ports (or related socket resources) because too many connections are opened, held, or left in TIME_WAIT. Context switching is the CPU cost of the OS pausing one thread and resuming another; under load, blocking threads on network I/O multiplies that cost and can starve the thread pool. Misusing HttpClient in .NET makes both worse.

Scope: TCP and the thread pool are general OS/runtime ideas, but the APIs, defaults, and examples here are .NET-specific (BCL + ASP.NET Core integration). If you use another language or framework, the symptoms may look the same; the fixes will use different types and lifetimes.

1. How HttpClient is tied to sockets and ports

HttpClient is not “a thin wrapper around a single HTTP call.” It owns (via an HttpMessageHandler, typically SocketsHttpHandler) a pool of connections to each host. Connections are reused when possible (Connection: keep-alive). That is good: fewer handshakes, fewer new local ports per request.

Problems appear when every request gets a new HttpClient (or a new handler) so the runtime cannot reuse connections cleanly, or when connections are not returned to the pool promptly.

Ephemeral ports and TIME_WAIT

When a TCP connection closes, the local endpoint can sit in TIME_WAIT for a while (often on the order of minutes, depending on OS tuning). While in that state, that (local IP, local port, remote IP, remote port) tuple cannot be reused for a new connection with the same identity. Under bursty traffic, opening many short-lived connections can exhaust the available ephemeral port range for a given destination, or stress socket tables, even if CPU and memory look fine.

So “port exhaustion” in discussions of HttpClient usually means ephemeral port / socket exhaustion or too many concurrent outbound connections, not literally “we ran out of port 443.”

The classic anti-pattern

// Anti-pattern: new client per call (or per request scope without proper disposal)
public async Task<string> GetAsync(string url)
{
    using var client = new HttpClient(); // new handler/socket setup underneath
    return await client.GetStringAsync(url);
}

Creating a new HttpClient for each call often creates a new handler chain and new connection behavior. Even with using, you pay setup cost every time and defeat connection pooling across calls. Under load, this pattern is a common contributor to socket pressure and DNS staleness (see below).

Static singleton HttpClient:

private static readonly HttpClient Shared = new();

public Task<string> GetAsync(string url) => Shared.GetStringAsync(url);

This reuses connections and avoids the per-call construction problem. But the default handler never recycles pooled connections, so a long-running process may never observe DNS changes (failover, blue/green swaps), and you lose per-logical-client configuration (base addresses, default headers) unless you layer it on carefully.
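If a hand-rolled singleton is what you have today, the usual guidance is to bound the handler's connection lifetime so DNS gets re-resolved periodically. A minimal sketch; the specific values here are illustrative, not prescriptive:

```csharp
using System;
using System.Net.Http;

// Build the handler once and reuse the client for the process lifetime.
var handler = new SocketsHttpHandler
{
    // Recycle each pooled connection after a bounded lifetime so a
    // long-running process picks up DNS changes; infinite is the default.
    PooledConnectionLifetime = TimeSpan.FromMinutes(2),

    // Drop idle connections sooner to keep the pool (and TIME_WAIT churn) small.
    PooledConnectionIdleTimeout = TimeSpan.FromSeconds(60)
};

var sharedClient = new HttpClient(handler); // create once, share everywhere
```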

That is where IHttpClientFactory comes in.

2. What IHttpClientFactory fixes

IHttpClientFactory ships in Microsoft.Extensions.Http; in ASP.NET Core (or any .NET generic host app) you register named or typed clients in DI via AddHttpClient. The factory:

  1. Pools HttpMessageHandler instances and manages their lifetime (default handler lifetime is about two minutes in modern ASP.NET Core setups; long enough to reuse connections, short enough to pick up DNS and cert changes reasonably).
  2. Centralizes configuration (base address, default headers, timeouts, primary HttpClient handler options).
  3. Lets you compose cross-cutting concerns: logging, resilience (for example with Polly), and tests with clear seams.

Example (ASP.NET Core):

// Program.cs
builder.Services.AddHttpClient("GitHub", client =>
{
    client.BaseAddress = new Uri("https://api.github.com/");
    client.DefaultRequestHeaders.UserAgent.ParseAdd("MyApp/1.0");
});

// A consumer
public class GitHubService(IHttpClientFactory httpClientFactory)
{
    public async Task<string> GetZenAsync(CancellationToken ct = default)
    {
        var client = httpClientFactory.CreateClient("GitHub");
        return await client.GetStringAsync("zen", ct);
    }
}

You still use async/await and one logical client per named registration, not a new physical client per operation.

Key idea: The factory is not magic; it coordinates handler lifetime and pooling so you do not accidentally create unbounded distinct handlers (and thus unbounded distinct connection pools) while still allowing multiple logical HTTP APIs in one process.

3. Other libraries and approaches (same plumbing, nicer surface)

On .NET, everything below ultimately rides on HttpClient and the same socket and threading realities. Pick the surface that matches your team’s style and how strongly typed you want the boundary to be.

Typed clients (first-party)

You inject a class that takes HttpClient in its constructor and register it with AddHttpClient<TClient>(). You get one configured client per type, full testability (mock the wrapper or use a test handler), and no stringly typed client names in business code.
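A typed-client version of the named client shown earlier, as a sketch (the class name GitHubZenClient is mine):

```csharp
// A typed client: DI injects a factory-managed, pre-configured HttpClient.
public class GitHubZenClient(HttpClient http)
{
    public Task<string> GetZenAsync(CancellationToken ct = default) =>
        http.GetStringAsync("zen", ct);
}

// Program.cs: one registration per logical API; the factory manages the handler.
builder.Services.AddHttpClient<GitHubZenClient>(client =>
{
    client.BaseAddress = new Uri("https://api.github.com/");
    client.DefaultRequestHeaders.UserAgent.ParseAdd("MyApp/1.0");
});
```

Consumers take GitHubZenClient in their constructors; nothing outside Program.cs mentions the base address or a string client name.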

Refit

Refit turns a C# interface into an HTTP client: you declare methods with attributes ([Get("/users/{id}")]). Refit generates the implementation and uses HttpClient under the hood. In ASP.NET Core you typically register with AddRefitClient<T>(), which plugs into IHttpClientFactory, so you keep handler pooling and DNS rotation while your code stays declarative. It shines when an API map is stable and you want compile-time checks instead of manual HttpRequestMessage assembly.

Small example (add the Refit.HttpClientFactory NuGet package):

using Refit;

// Contract
public interface IGitHubZenApi
{
    [Get("/zen")]
    Task<string> GetZenAsync(CancellationToken cancellationToken = default);
}

// Program.cs: Refit wires this client through the same HttpClientFactory pipeline
builder.Services.AddRefitClient<IGitHubZenApi>()
    .ConfigureHttpClient(c =>
    {
        c.BaseAddress = new Uri("https://api.github.com/");
        c.DefaultRequestHeaders.UserAgent.ParseAdd("MyApp/1.0");
    });

// Any service: inject the interface, not HttpClient
public class DemoService(IGitHubZenApi gitHub)
{
    public Task<string> FetchZenAsync(CancellationToken ct = default) =>
        gitHub.GetZenAsync(ct);
}

Flurl

Flurl is a fluent URL builder and HTTP wrapper built on HttpClient. People like it for readable one-off calls and chaining; you still want a single long-lived client strategy (often via FlurlHttp.Configure with a shared client or integration with your DI setup) so you do not recreate clients per call.

RestSharp

RestSharp (v107 and later) uses HttpClient internally. Its API is builder-style rather than interface-driven like Refit. The same rule applies: one client per logical API, and no per-request client construction if you care about scale.

Polly and resilience

Polly (or Microsoft.Extensions.Http.Resilience in newer stacks) adds retries, circuit breakers, and timeouts as delegating handlers. AddHttpClient has first-class hooks to add those handlers so retries do not accidentally multiply new sockets; they wrap the same pooled HttpClient pipeline.
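Both routes hook into AddHttpClient as delegating handlers. A sketch, assuming the Microsoft.Extensions.Http.Resilience and Microsoft.Extensions.Http.Polly packages respectively:

```csharp
// Newer stacks: the standard pipeline (retry, circuit breaker, timeouts).
builder.Services.AddHttpClient("GitHub", client =>
        client.BaseAddress = new Uri("https://api.github.com/"))
    .AddStandardResilienceHandler();

// Classic Polly: transient-error retry with exponential backoff.
builder.Services.AddHttpClient("Flaky")
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError() // 5xx, 408, HttpRequestException
        .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));
```

Because the policy wraps the pooled pipeline, a retry reuses the same connection pool instead of opening fresh sockets.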

When HTTP is the wrong tool

If you control both ends and need streaming, strong contracts, and efficient multiplexing, gRPC (Grpc.Net.Client) is worth a look. It is not "instead of HttpClient" in spirit (it still rides on HTTP/2), but it changes design and tooling. This post stays on REST-ish HttpClient usage; just remember that port and thread pressure also appear if you open too many parallel channels without backpressure.

4. Context switching: what it has to do with HTTP calls

OS thread context switches

When a thread is runnable, the scheduler may preempt it to run another thread on the same CPU. Each switch has a small but non-zero cost (saving registers, cache effects). Under heavy load, too many runnable threads can mean measurable CPU time spent just switching.

Blocking async code

If server code blocks on asynchronous work (.Result, .Wait(), GetAwaiter().GetResult()), you block thread-pool threads while I/O could have freed them:

// Bad in ASP.NET Core request path: blocks a thread
var json = httpClient.GetStringAsync(url).Result;

That can:

  • Reduce the pool’s ability to accept new requests.
  • Increase queueing and latency.
  • Increase context switching as the OS juggles more blocked and runnable threads than necessary.

The fix is to use await all the way and propagate CancellationToken so threads return to the pool while I/O is in flight.
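What "await all the way" plus a bounded timeout looks like in practice. The helper name is mine, but CreateLinkedTokenSource and CancelAfter are standard BCL:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Combine the caller's token with a per-operation timeout: the resulting
// token cancels when either the caller disconnects or the timeout fires.
static CancellationTokenSource LinkWithTimeout(CancellationToken caller, TimeSpan timeout)
{
    var cts = CancellationTokenSource.CreateLinkedTokenSource(caller);
    cts.CancelAfter(timeout);
    return cts;
}

// In a request path (sketch): no .Result, the thread returns to the pool.
// using var cts = LinkWithTimeout(requestAborted, TimeSpan.FromSeconds(5));
// var json = await httpClient.GetStringAsync(url, cts.Token);
```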

Async I/O is not “free,” but it scales better

await on true async I/O does not mean zero CPU work; there is still scheduling and continuation work. But it avoids holding a thread for every concurrent outbound call, which is exactly what you want when each HttpClient call may wait on the network.

So: proper HttpClient reuse addresses connection/socket pressure; proper async addresses thread-pool and context-switch pressure.

5. HttpClient dos and don’ts (best practices)

Do

  • Use IHttpClientFactory (named, typed, or Refit-generated clients) in ASP.NET Core services so handlers rotate and connections pool predictably.
  • Set timeouts: HttpClient.Timeout and/or per-operation CancellationTokenSource with a bounded delay for user-facing SLAs.
  • Pass CancellationToken from the request (or work item) into every outbound call so work cancels when the caller disconnects.
  • Stay async end-to-end; avoid sync-over-async in request paths.
  • Configure the primary handler when you need it: for example SocketsHttpHandler.PooledConnectionLifetime (and related settings) so long-lived processes refresh connections and DNS sensibly; Microsoft’s HttpClient guidelines spell out recommended values for server scenarios.
  • Treat one logical outbound API as one client registration (one name, one typed client, or one Refit interface), with base address and default headers in one place.
  • Log correlation IDs on outbound calls (via a delegating handler or HttpClient message handler) so distributed traces line up.
  • Test with HttpMessageHandler mocks or WebApplicationFactory for integration tests; avoid hitting real networks in unit tests.
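The delegating-handler approach from the logging bullet, sketched; the header name X-Correlation-Id is a common convention, not a standard:

```csharp
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Stamps every outbound request with a correlation ID so logs on both
// sides of the call can be joined during incident analysis.
public class CorrelationHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        if (!request.Headers.Contains("X-Correlation-Id"))
            request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString("N"));
        return base.SendAsync(request, cancellationToken);
    }
}

// Registration (sketch):
//   builder.Services.AddTransient<CorrelationHandler>();
//   builder.Services.AddHttpClient("GitHub").AddHttpMessageHandler<CorrelationHandler>();
```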

Don’t

  • Don’t do new HttpClient() per request (or per arbitrary using scope) in hot paths; the classic socket exhaustion and DNS foot-guns.
  • Don’t block on Task from HttpClient (.Result, .Wait(), GetAwaiter().GetResult()) on ASP.NET Core thread-pool threads; it wastes threads and worsens context switching under load.
  • Don’t dispose HttpClient instances returned by IHttpClientFactory.CreateClient(...); the factory owns the lifetime; disposing can break pooling (see official docs).
  • Don’t share mutable per-request state on a static HttpClient (for example default headers that change per user); prefer per-request headers on HttpRequestMessage or separate client registrations.
  • Don’t send unbounded parallel requests without throttling (a semaphore, batching, or a queue); you can still exhaust remote limits, local ports, or memory even with a correct singleton client.
  • Don’t disable TLS validation or ignore certificate errors in production; that is a security failure, not a performance tweak.
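The throttling bullet above, as a runnable sketch; Task.Delay stands in for the awaited HttpClient call, and the concurrency counters exist only to make the bound observable:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Allow at most 8 concurrent outbound calls, even with one shared client.
var gate = new SemaphoreSlim(8);
int inFlight = 0, observedMax = 0;
object sync = new();

async Task CallAsync()
{
    await gate.WaitAsync(); // queue here instead of opening another socket
    try
    {
        int now = Interlocked.Increment(ref inFlight);
        lock (sync) { if (now > observedMax) observedMax = now; }
        await Task.Delay(20); // stand-in for: await httpClient.GetAsync(...)
    }
    finally
    {
        Interlocked.Decrement(ref inFlight);
        gate.Release();
    }
}

// 50 requests, but never more than 8 in flight at once.
await Task.WhenAll(Enumerable.Range(0, 50).Select(_ => CallAsync()));
Console.WriteLine($"max observed concurrency: {observedMax}");
```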

6. Practical checklist

  • Too many connections / ports / TIME_WAIT: prefer IHttpClientFactory (or a carefully managed singleton with a correct handler lifetime); avoid new HttpClient() per call.
  • Stale DNS in long-running apps: factory-managed handler rotation helps; verify SocketsHttpHandler / PooledConnectionLifetime settings for your version.
  • Thread-pool starvation / high context switching: never block on Task-based APIs; use await, with ConfigureAwait(false) only where appropriate (libraries; ASP.NET Core apps usually omit it on the server).
  • Many distinct outbound bases: use named or typed clients (or Refit interfaces) so each logical API gets one stable configuration, not one new client type per request.

7. Closing thought

Port exhaustion and context switching are different mechanisms, but both show up when a .NET service makes lots of outbound HTTP calls under load. Misusing HttpClient amplifies socket and DNS issues; blocking on async amplifies thread-pool and scheduling issues. IHttpClientFactory is the supported way in ASP.NET Core (and the generic host) to keep handlers and connection pools healthy while you keep your code fully asynchronous, so your CPUs spend time on work, not on fighting the network stack and the scheduler. Libraries like Refit or typed clients sit on top of that .NET foundation; they do not remove the need to get the lifecycle right.


This article was originally published by DEV Community and written by Outdated Dev.
