I've spent the last few months building something that doesn't exist anywhere
else in open source: a 6G protocol stack where the architectural differences
from 5G are not theoretical — they're running code.
This post explains three architectural inversions I made in the core network,
plus one entirely new network function, that are impossible to implement in
Open5GS, free5GC, or any existing 5G SA stack — and why they matter for what
6G actually needs to be.
The repo: github.com/j143/6g
The problem with treating 6G as "faster 5G"
Most 6G discussion talks about terahertz spectrum, higher speeds, and lower
latency. Those are PHY-layer improvements — incremental. The architectural
changes are deeper and less discussed.
5G's core network was designed around three assumptions:
- Sessions are IP-based — every PDU session carries IP packets
- Control plane leads — AMF establishes the session, then UPF forwards
- Terrestrial-only mobility — tracking areas are ground-cell anchored
All three assumptions break for 6G. Here's how, and what I built instead.
Inversion 1: User-plane-first UPF
In 5G SA, the first packet from a UE cannot be forwarded until:
- AMF registers the UE (NAS Registration)
- SMF creates a PDU session
- UPF establishes the bearer
That chain takes 10–50ms of control-plane signaling before a single byte
of user data moves. This is fine for smartphones downloading Instagram.
It is catastrophic for a 6G industrial sensor that sends a 40-byte reading
every 2ms and cannot afford the setup overhead.
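To put numbers on that mismatch, here is the back-of-envelope arithmetic (the 40-byte/2ms sensor and the 10–50ms setup window are the figures above; the rest is division):

```rust
// How many sensor reports pile up while the 5G control-plane chain
// (registration → session → bearer) completes, before the first
// user byte can move.
fn main() {
    let report_interval_ms = 2.0; // one 40-byte reading every 2 ms
    for setup_ms in [10.0_f64, 50.0] {
        let missed = setup_ms / report_interval_ms;
        println!(
            "{setup_ms:>4} ms setup -> {missed:.0} readings buffered or dropped"
        );
    }
}
```

At the optimistic end of the range the sensor has already produced five readings before the session exists; at the pessimistic end, twenty-five.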
6G proposals (Qualcomm's "Rethinking the Control Plane" whitepaper series)
argue for inverting this: let the UPF forward speculatively, and lazily
trigger session establishment only if the flow continues.
Here's what that looks like in `6g-core/upf.rs`:

```rust
pub enum FlowAction {
    /// Forward immediately, notify control plane asynchronously
    ForwardAndNotify { qos_class: QosClass },
    /// Buffer for up to `wait_ms` while SMF establishes session
    Buffer { wait_ms: u32 },
    /// Reject — policy violation
    Reject { reason: RejectReason },
}

impl Upf {
    /// Called when a packet arrives with no matching session context.
    /// This is architecturally impossible in 5G — a 6G-only behavior.
    pub fn forward_unknown_flow(
        &mut self,
        packet: &Packet,
    ) -> FlowAction {
        // Heuristic: known UE + low-latency slice → forward immediately
        if self.ue_table.contains(&packet.source_ue_id)
            && packet.slice_id.is_urllc()
        {
            return FlowAction::ForwardAndNotify {
                qos_class: QosClass::Urllc,
            };
        }
        FlowAction::Buffer { wait_ms: 5 }
    }
}
```
The control plane becomes a thin adaptation layer, not the gatekeeper.
First-packet latency drops to sub-millisecond for known UEs on low-latency
slices.
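The "notify asynchronously" half of `ForwardAndNotify` isn't shown above. Here is a minimal sketch of one way it could work; the `SmfNotifier`, `SessionRequest`, and method names are my illustration, not the repo's API:

```rust
use std::collections::VecDeque;

/// Hypothetical: a pending session-establishment request queued toward the SMF.
#[derive(Debug, PartialEq)]
struct SessionRequest {
    ue_id: u64,
    slice_id: u16,
}

/// Hypothetical control-plane adapter: the UPF forwards first, then pushes a
/// request here; the SMF drains the queue on its own schedule. If the flow
/// dies before the SMF gets to it, the entry is simply dropped; no session
/// state was ever built for it.
#[derive(Default)]
struct SmfNotifier {
    pending: VecDeque<SessionRequest>,
}

impl SmfNotifier {
    /// Fast path: called after the packet has already been forwarded.
    fn notify(&mut self, ue_id: u64, slice_id: u16) {
        self.pending.push_back(SessionRequest { ue_id, slice_id });
    }

    /// Slow path: the SMF pulls the next flow to legitimize.
    fn next_request(&mut self) -> Option<SessionRequest> {
        self.pending.pop_front()
    }
}

fn main() {
    let mut notifier = SmfNotifier::default();
    notifier.notify(42, 7); // the packet is already on the wire at this point
    let req = notifier.next_request().unwrap();
    println!("SMF establishing session lazily for UE {}", req.ue_id);
}
```

The design point is that the queue decouples the data path from the control path: forwarding latency no longer depends on SMF responsiveness.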
Inversion 2: Semantic PDU sessions
5G's PDU session types fall into three families: IP (IPv4, IPv6, IPv4v6),
Ethernet, and Unstructured. All of them assume the network's job is to
deliver bits faithfully.
6G semantic communication research challenges this assumption entirely. The
network should deliver meaning — transmit enough for the receiving
application's task to succeed, not necessarily every bit. A video stream
for an object detection task only needs to transmit enough for the detector
to fire correctly — not every pixel.
I added a fourth PDU session type: Semantic(GoalSpec).
```rust
/// 6G-only: 4th PDU session type. No equivalent in 3GPP 5G specifications.
#[derive(Debug, Clone)]
pub enum PduSessionType {
    Ipv4,
    Ipv6,
    Ipv4v6,
    Ethernet,
    Unstructured,
    /// Semantic / goal-oriented session.
    /// The network knows the application task and optimises
    /// transmission for task success, not bit fidelity.
    Semantic(GoalSpec),
}

#[derive(Debug, Clone)]
pub struct GoalSpec {
    /// What the receiver is trying to accomplish
    pub task: SemanticTask,
    /// Maximum bits the sender is allowed to use
    pub max_bits: u32,
    /// Minimum task success rate the session must sustain
    pub min_task_success_rate: f32,
}

#[derive(Debug, Clone)]
pub enum SemanticTask {
    ImageClassification { top_k: u8 },
    SpeechRecognition { wer_threshold: f32 },
    ObjectDetection { iou_threshold: f32 },
    TextSummarization { rouge_threshold: f32 },
}
```
When the SMF creates a `Semantic(GoalSpec)` session, the UPF routes the
payload through `TextSemanticCodec` — a semantic encoder that compresses
2048 bytes to ~312 bytes by preserving task-relevant meaning rather than
exact bytes. The success metric is no longer bit error rate (BER); it's
task success rate.
This is a protocol-level change, not an application-layer optimization.
The network infrastructure — not the app — owns the semantic contract.
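To make the contract concrete, here is a toy sketch of the interface a semantic codec has to satisfy. The trait, the `TruncatingCodec` stand-in, and the method names are my invention for illustration; only `TextSemanticCodec` and the 2048→~312 byte figures come from the post:

```rust
/// Hypothetical contract: the session is judged on task success,
/// not on reproducing every bit.
trait SemanticCodec {
    fn encode(&self, payload: &[u8], max_bits: u32) -> Vec<u8>;
    fn decode(&self, encoded: &[u8]) -> Vec<u8>;
    /// Did the receiver's task succeed on the reconstruction?
    fn task_succeeded(&self, original: &[u8], reconstructed: &[u8]) -> bool;
}

/// Toy stand-in: pretend the "meaning" lives in the first `max_bits / 8` bytes.
/// A real codec would run a learned encoder instead.
struct TruncatingCodec;

impl SemanticCodec for TruncatingCodec {
    fn encode(&self, payload: &[u8], max_bits: u32) -> Vec<u8> {
        payload.iter().copied().take((max_bits / 8) as usize).collect()
    }
    fn decode(&self, encoded: &[u8]) -> Vec<u8> {
        encoded.to_vec()
    }
    fn task_succeeded(&self, original: &[u8], reconstructed: &[u8]) -> bool {
        !reconstructed.is_empty() && original.starts_with(reconstructed)
    }
}

fn main() {
    let codec = TruncatingCodec;
    let payload = vec![1u8; 2048];
    // Bit budget corresponding to GoalSpec.max_bits for a ~312-byte output.
    let encoded = codec.encode(&payload, 312 * 8);
    let ok = codec.task_succeeded(&payload, &codec.decode(&encoded));
    println!("sent {} of {} bytes, task ok: {}", encoded.len(), payload.len(), ok);
    // → sent 312 of 2048 bytes, task ok: true
}
```

The point of the trait shape: `min_task_success_rate` from `GoalSpec` can then be enforced by the UPF as a running average over `task_succeeded` outcomes, with no reference to BER anywhere.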
Inversion 3: NTN-native AMF
5G NR added NTN (Non-Terrestrial Networks) as a bolt-on in Release 17. The
AMF's tracking area concept was designed for terrestrial cells — a cell has
a fixed location, fixed propagation delay, and a static coverage boundary.
A LEO satellite at 550km altitude moves at roughly 7.5km/s relative to
Earth, so its ground track advances about 450km every minute and the
coverage boundary never stops moving. The propagation delay (~1.8ms
one-way to a 550km LEO at nadir) is a first-order effect on HARQ timing,
not an afterthought.
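The ~1.8ms figure follows directly from the geometry; a quick check for the straight-down (nadir) case:

```rust
fn main() {
    const C_M_PER_S: f64 = 299_792_458.0; // speed of light
    let altitude_m = 550_000.0; // LEO altitude from the text
    let one_way_ms = altitude_m / C_M_PER_S * 1000.0;
    println!("one-way nadir delay: {one_way_ms:.2} ms"); // → 1.83 ms
}
```

At low elevation angles the slant range is longer than the altitude, so this is a lower bound — which is exactly why the delay belongs in the tracking-area state rather than in a static timer constant.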
In 5G, the AMF has no model for this. In `6g-core/amf.rs`:

```rust
use std::time::Duration;

#[derive(Debug, Clone)]
pub enum TrackingArea {
    /// Conventional terrestrial cell — identical to 5G concept
    Terrestrial {
        tac: u32,
        cell_id: u64,
    },
    /// 6G-native NTN tracking area.
    /// The AMF reasons about beam-level mobility for LEO satellites.
    /// No equivalent in 3GPP 5G AMF specifications.
    Ntn {
        ntn_node_id: NtnNodeId,
        /// Beam index within the satellite's coverage pattern
        beam_id: u16,
        /// One-way propagation delay — required for HARQ timing adjustment
        propagation_delay: Duration,
        /// Expected beam dwell time before handover is needed
        beam_dwell_remaining: Duration,
    },
}

impl Amf {
    /// Handles mobility from terrestrial to NTN mid-session.
    /// In 5G, this path does not exist — NTN is a separate deployment.
    /// Callers pass the `TrackingArea::Ntn` variant as the target.
    pub fn handle_ntn_handover(
        &mut self,
        ue_id: UeId,
        target: TrackingArea,
    ) -> Result<HandoverDecision, CoreError> {
        // Adjust HARQ timing for new propagation delay
        // Re-evaluate QoS against NTN link budget
        // Preserve semantic session context across handover
        // ...
    }
}
```
The AMF now reasons about beam-level mobility as a first-class concept,
not as a network topology edge case.
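One consequence of carrying `beam_dwell_remaining` in the tracking area is that handover stops being purely measurement-triggered and becomes predictable from ephemeris. A sketch of that idea; the `NtnBeamState` type and its methods are my illustration, not the repo's API:

```rust
use std::time::Duration;

/// Minimal stand-in for the NTN tracking-area fields that matter here.
struct NtnBeamState {
    beam_id: u16,
    beam_dwell_remaining: Duration,
}

impl NtnBeamState {
    /// Advance time. Dwell shrinks deterministically because the satellite's
    /// motion is known from ephemeris; no signal measurement is required.
    fn tick(&mut self, elapsed: Duration) {
        self.beam_dwell_remaining = self.beam_dwell_remaining.saturating_sub(elapsed);
    }

    /// Trigger handover *before* the beam leaves, with margin for signaling.
    fn needs_handover(&self, signaling_margin: Duration) -> bool {
        self.beam_dwell_remaining <= signaling_margin
    }
}

fn main() {
    let mut beam = NtnBeamState {
        beam_id: 3,
        beam_dwell_remaining: Duration::from_secs(8),
    };
    beam.tick(Duration::from_secs(7));
    // 1 s of dwell left, inside the 2 s signaling margin → hand over now.
    println!("beam {}: handover? {}", beam.beam_id, beam.needs_handover(Duration::from_secs(2)));
}
```

A terrestrial AMF has nothing to tick: its cells don't move, so this kind of clock-driven mobility logic has no place to live in a 5G core.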
The fourth new thing: SDF (Sensing Data Function)
5G has no NF for sensing — because 5G doesn't do integrated sensing. In 6G,
the same waveform that carries data also functions as a radar. The base
station simultaneously communicates and senses its environment (ISAC —
Integrated Sensing and Communication).
This creates a new architectural requirement: sensing results generated
in the RAN need to be consumable by applications through the core network.
A factory automation application should be able to subscribe to "object
detected within 10 meters of cell X" the same way it subscribes to a UPF
traffic flow.
I built this as a new network function — SDF (Sensing Data Function):
```rust
use std::collections::{HashMap, VecDeque};
use std::time::Instant;

/// Sensing Data Function — a 6G-new NF with no 5G equivalent.
/// Exposes ISAC sensing results from the RAN as a core SBI subscription.
pub struct SensingDataFunction {
    subscriptions: HashMap<ApplicationId, SensingFilter>,
    sensing_results: VecDeque<SensingResult>,
}

#[derive(Debug, Clone)]
pub struct SensingResult {
    pub source_cell: CellId,
    pub timestamp: Instant,
    pub detections: Vec<Detection>,
}

#[derive(Debug, Clone)]
pub struct Detection {
    pub range_m: f32,
    pub azimuth_deg: f32,
    pub velocity_mps: f32,
    pub confidence: f32,
}
```
An application subscribes to SDF over the SBI (Service-Based Interface),
identical to how an application subscribes to NWDAF analytics in 5G.
The difference: the data originates from the radio waveform itself, not
from network KPIs.
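End to end, the subscription flow could look like the sketch below. `SensingFilter`'s shape isn't shown in the post, so the range-based filter and the `subscribe`/`publish` methods are my guesses at a plausible API, not the repo's (types are simplified to primitives to keep the sketch self-contained):

```rust
use std::collections::HashMap;

// Simplified stand-in for the post's Detection struct.
#[derive(Debug, Clone)]
struct Detection {
    range_m: f32,
    confidence: f32,
}

/// Hypothetical filter: "object detected within max_range_m of this cell".
struct SensingFilter {
    cell: u64,
    max_range_m: f32,
}

#[derive(Default)]
struct SensingDataFunction {
    subscriptions: HashMap<u32, SensingFilter>, // keyed by application id
}

impl SensingDataFunction {
    fn subscribe(&mut self, app: u32, filter: SensingFilter) {
        self.subscriptions.insert(app, filter);
    }

    /// RAN publishes an ISAC detection; return the apps whose filter matches.
    fn publish(&self, cell: u64, det: &Detection) -> Vec<u32> {
        self.subscriptions
            .iter()
            .filter(|(_, f)| f.cell == cell && det.range_m <= f.max_range_m)
            .map(|(&app, _)| app)
            .collect()
    }
}

fn main() {
    let mut sdf = SensingDataFunction::default();
    // The factory-automation example: "object within 10 m of cell 7".
    sdf.subscribe(1, SensingFilter { cell: 7, max_range_m: 10.0 });
    let hit = Detection { range_m: 4.2, confidence: 0.93 };
    println!("notify apps: {:?}", sdf.publish(7, &hit)); // → notify apps: [1]
}
```

The shape mirrors the UPF traffic-flow subscription the post compares it to: the application never talks to the RAN, only to the core NF.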
What it can simulate today
Run `cargo test --workspace` and the following behaviors are exercised:
- UPF lazy session establishment — `forward_unknown_flow()` route selection with QoS heuristics
- Semantic PDU session lifecycle — SMF creates `Semantic(GoalSpec)`, UPF routes through `TextSemanticCodec`, measures task success rate
- NTN AMF registration — UE registers with `TrackingArea::Ntn`, beam handover triggered on `beam_dwell_remaining = 0`
- SDF pub/sub — ISAC detection published by the `6g-isac` crate, delivered to subscriber via `SensingDataFunction`
- AI Q-learning MAC scheduler — compared against a Round Robin baseline on throughput fairness (Jain index)
- THz PHY — OTFS waveform BER vs Eb/N0, RIS phase-shift channel model
- Full 5G-AKA auth flow — AUSF + UDM subscriber credential management
The validation strategy is in ROADMAP.md — every experiment has a
real-system baseline to compare against (NIST path-loss tables, Vienna 5G
LLS, ns-3 NR, Liu et al. CRB results for ISAC).
Why Rust
Three reasons, none of them trendy:
1. Correctness in protocol state machines.
6G core network code manages sessions, bearers, and mobility state across
concurrent UEs. State machine bugs in C++ can be silent: a dangling pointer
or use-after-free in a 5G AMF won't necessarily crash; it can quietly
corrupt neighbor state. Rust's ownership model turns illegal state
transitions into compile errors. `TrackingArea::Ntn` cannot be constructed
without a `propagation_delay`; the type system enforces the protocol
invariant.
2. No garbage collector pauses.
A 6G scheduler operating at sub-millisecond HARQ timescales cannot afford
GC pauses. Rust gives deterministic memory behavior without the manual
memory management risk of C/C++.
3. Cargo workspace for protocol layering.
The 5G/6G stack is a strict layered architecture. Rust's workspace +
crate system maps perfectly: 6g-phy cannot accidentally import
6g-core types, because the dependency graph in Cargo.toml makes
layer violations a build error. In a C++ monorepo, nothing prevents a
PHY function from calling into the core. Rust makes the architecture
physically enforced.
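The layering enforcement is nothing more exotic than the direction of the `[dependencies]` edges. A sketch of what such a workspace could look like; crate names and paths here are illustrative, not the repo's actual manifests:

```toml
# --- root Cargo.toml: the workspace lists the layers ---
[workspace]
members = ["sixg-phy", "sixg-mac", "sixg-core"]

# --- sixg-core/Cargo.toml: dependency edges point strictly downward ---
[package]
name = "sixg-core"
version = "0.1.0"
edition = "2021"

[dependencies]
sixg-mac = { path = "../sixg-mac" }
# sixg-phy's manifest declares no dependency on sixg-core, so any
# `use sixg_core::...` in the PHY crate fails to resolve at build time.
```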
What I'm building next
- `examples/hello_6g.rs` — a single runnable scenario that exercises all four 6G-unique behaviors in sequence with readable terminal output
- Python bindings via `pyo3` — so ML researchers can drive the `6g-ai` channel estimator and scheduler from Python notebooks without writing Rust
- Scenario library — urban ISAC, LEO NTN handover, industrial URLLC — each a self-contained reproducible experiment
Looking for collaborators
If you work in any of these areas, I'd genuinely value your input:
- 6G researchers — is the architecture directionally correct against current proposals? Where am I wrong?
- Rust engineers — crate structure, API design, idiomatic Rust review
- Telecom engineers with 5GC experience — does the delta from 5G read as meaningful to you, or am I solving the wrong problems?
The repo: github.com/j143/6g
Open issues, open PRs, or find me here. All feedback welcome.
This article was originally published by DEV Community and written by Janardhan Pulivarthi.
