Kubernetes comes packed with built-in objects, such as Pod, Service, and DaemonSet, but you can also create your own. You describe them with a Custom Resource Definition, and the objects you create from that definition are Custom Resources.
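Here's a minimal sketch of what such a definition looks like. The `example.com` group and the `Widget` kind are made-up placeholders, not resources from any real project:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # Must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    singular: widget
    plural: widgets
  # The Widget objects live in namespaces,
  # but this definition itself is cluster-scoped
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```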
CRDs are paired with a custom controller called an operator. The operator subscribes to the lifecycle events of the Custom Resources defined by the CRD(s) and reconciles the cluster's actual state with the state they describe.
The magic happens in the operator. The Gateway API doesn't come with one out of the box; instead, each vendor provides its own implementation.
CRDs have a cluster-wide scope: even when the Custom Resources they define are namespaced, there is a single definition, and thus a single version, per cluster.
A cluster-wide CRD doesn't allow rolling upgrades. If one team needs a newer version of the Gateway API CRDs while another still relies on an older one, whoever applies the bundle last wins, for everybody.
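You can see both points from the command line. The snippet below assumes the standard Gateway API release artifacts; the versions are only there to illustrate the conflict:

```bash
# CRDs are cluster-scoped: the NAMESPACED column shows "false"
kubectl api-resources --api-group=apiextensions.k8s.io

# On a shared cluster, the last bundle applied wins for everyone
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
```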
The obvious solution is to have one cluster per team, but that comes at a steep cost in infrastructure and operations.
The ideal situation, as the initial quote of this post states, would be to have namespace-scoped CRDs. Unfortunately, it's not the path that Kubernetes chose.
The next best thing is to partition the real cluster into virtual ones layered on top of it: that's the promise of vCluster.
vCluster gives each virtual cluster its own lightweight API server, running as a regular workload on the host cluster. Hence, on a single physical cluster, you can deploy the v1.0 CRDs in one virtual cluster and the v1.2 CRDs in another without trouble.
CRDs are still cluster-wide resources, but the cluster they're registered in is now the virtual one: since each virtual cluster behaves like an isolated cluster, there's no conflict.
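As a rough sketch, assuming the vcluster CLI and the standard Gateway API release artifacts, the workflow looks like this; the team names and namespaces are placeholders:

```bash
# One virtual cluster per team on the same host cluster
vcluster create team-a --namespace team-a
vcluster disconnect
vcluster create team-b --namespace team-b
vcluster disconnect

# Each virtual cluster registers its own Gateway API CRD version
vcluster connect team-a --namespace team-a -- \
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
vcluster connect team-b --namespace team-b -- \
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.0/standard-install.yaml
```

Each apply lands in that team's virtual API server, so the host cluster never sees two competing definitions.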