[PATCH nf-next 0/3] IPVS changes, part 4 of 4 - extras

To: Simon Horman <horms@xxxxxxxxxxxx>
Subject: [PATCH nf-next 0/3] IPVS changes, part 4 of 4 - extras
Cc: Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>, Florian Westphal <fw@xxxxxxxxx>, lvs-devel@xxxxxxxxxxxxxxx, netfilter-devel@xxxxxxxxxxxxxxx, Dust Li <dust.li@xxxxxxxxxxxxxxxxx>, Jiejian Wu <jiejian@xxxxxxxxxxxxxxxxx>
From: Julian Anastasov <ja@xxxxxx>
Date: Mon, 23 Mar 2026 18:25:20 +0200
        Hello,

        This patchset is part 4 of the changes that have accumulated
recently. It is for nf-next and should be applied after the
patches from parts 1-3 are already applied. It contains extras
for the per-net tables.

        All patches here come from the work
"ipvs: per-net tables and optimizations" last posted
on 19 Oct 2025 as v6, with the following changes:

Patch 1 comes from v6/patch 10 with added get_conn_tab_size() helper

Patch 2 comes from v6/patch 13 with added text for the commit

Patch 3 comes from v6/patch 14 with updated docs

        As a result, the following patches will:

* As the connection table no longer has a fixed size, show its current
  size to user space

* Add /proc/net/ip_vs_status to show current state of IPVS, per-net

cat /proc/net/ip_vs_status
Conns:  9401
Conn buckets:   524288 (19 bits, lfactor -5)
Conn buckets empty:     505633 (96%)
Conn buckets len-1:     18322 (98%)
Conn buckets len-2:     329 (1%)
Conn buckets len-3:     3 (0%)
Conn buckets len-4:     1 (0%)
Services:       12
Service buckets:        128 (7 bits, lfactor -3)
Service buckets empty:  116 (90%)
Service buckets len-1:  12 (100%)
Stats thread slots:     1 (max 16)
Stats chain max len:    16
Stats thread ests:      38400

It shows the table size, the load factor (2^n), how many buckets are
empty (as a percentage of all buckets), and the number of buckets
with chain length 1..7, where len-7 catches all chains with len >= 7
(zero values are not shown). The len-N percentages ignore the empty
buckets, so they are relative to all non-empty buckets. The output
shows that a smaller lfactor is needed for len-1 buckets to reach
~98%. Only real tests can show whether relying on len-1 buckets is
the better option, because the hash table becomes too large with
many connections. And as every table uses a random key, the services
may not avoid collisions in all cases.

* Add conn_lfactor and svc_lfactor sysctl vars, so that one can tune
  the connection/service hash table sizing


Julian Anastasov (3):
  ipvs: show the current conn_tab size to users
  ipvs: add ip_vs_status info
  ipvs: add conn_lfactor and svc_lfactor sysctl vars

 Documentation/networking/ipvs-sysctl.rst |  35 ++++
 net/netfilter/ipvs/ip_vs_ctl.c           | 247 ++++++++++++++++++++++-
 2 files changed, 278 insertions(+), 4 deletions(-)

-- 
2.53.0



