
ServicePort -> DNS SRV does not allow services with same name but different protocol #97149

Open
frasertweedale opened this issue Dec 9, 2020 · 46 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/network Categorizes an issue or PR as relevant to SIG Network.

@frasertweedale

What happened:

Creating a service with two ports that have the same name, but different protocol, fails:

$ cat service-test.yaml 
apiVersion: v1
kind: Service
metadata:
  name: service-test
  labels:
    app: service-test
spec:
  selector:
    app: service-test
  clusterIP: None
  ports:
  - name: ldap
    protocol: TCP
    port: 389
  - name: kerberos
    protocol: TCP
    port: 88
  - name: kerberos
    protocol: UDP
    port: 88

$ oc replace -f service-test.yaml 
The Service "service-test" is invalid:
spec.ports[2].name: Duplicate value: "kerberos"

This is an important use case for some applications, and the limitation is surprising. More information in my blog post: https://frasertweedale.github.io/blog-redhat/posts/2020-12-08-k8s-srv-limitation.html#kubernetes-srv-limitation

What you expected to happen:

Such Service objects should be accepted, and should result in the creation of the corresponding DNS SRV records that have the same service name, but different protocol ID, e.g., from the object above:

_ldap._tcp.<service>.<ns>.svc.<zone>.     ...
_kerberos._tcp.<service>.<ns>.svc.<zone>. ...
_kerberos._udp.<service>.<ns>.svc.<zone>. ...

How to reproduce it (as minimally and precisely as possible):

As shown above.

Anything else we need to know?:

I'm happy to file a KEP if it is deemed necessary. However, all that is required conceptually is to relax the uniqueness check to name/protocol pairs instead of the port name alone. I hope it can be addressed without adding new fields, i.e. aligning the ServicePort semantics with the established semantics of SRV records.
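
For illustration, a minimal sketch of the relaxed check in Go (this is not the actual Kubernetes validation code; the type and error message are simplified stand-ins):

package main

import "fmt"

// ServicePort is a pared-down stand-in for the real core/v1 type,
// used only to illustrate the proposed (name, protocol) uniqueness rule.
type ServicePort struct {
    Name     string
    Protocol string
    Port     int32
}

// validatePortNames rejects duplicate (name, protocol) pairs instead of
// duplicate names alone.
func validatePortNames(ports []ServicePort) error {
    type key struct{ name, protocol string }
    seen := map[key]bool{}
    for i, p := range ports {
        k := key{p.Name, p.Protocol}
        if seen[k] {
            return fmt.Errorf("spec.ports[%d].name: Duplicate value: %q (protocol %s)", i, p.Name, p.Protocol)
        }
        seen[k] = true
    }
    return nil
}

func main() {
    ports := []ServicePort{
        {Name: "ldap", Protocol: "TCP", Port: 389},
        {Name: "kerberos", Protocol: "TCP", Port: 88},
        {Name: "kerberos", Protocol: "UDP", Port: 88}, // accepted: same name, different protocol
    }
    fmt.Println(validatePortNames(ports)) // prints: <nil>
}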

Environment:

  • Kubernetes version (use kubectl version):
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2+ad738ba", GitCommit:"ad738ba548b6d6b5cd2e83351951ccd7019afa4c", GitTreeState:"clean", BuildDate:"2020-11-25T00:18:44Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

  • OS (e.g: cat /etc/os-release):

sh-4.4# cat /etc/os-release
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="47.83.202011252347-0"
VERSION_ID="4.7"
OPENSHIFT_VERSION="4.7"
RHEL_VERSION="8.3"
PRETTY_NAME="Red Hat Enterprise Linux CoreOS 47.83.202011252347-0 (Ootpa)"
ID="rhcos"
ID_LIKE="rhel fedora"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::coreos"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.7"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.7"
OSTREE_VERSION='47.83.202011252347-0'
  • Kernel (e.g. uname -a):

  • Install tools:

  • Network plugin and version (if this is a network-related bug):

  • Others:

@frasertweedale frasertweedale added the kind/bug Categorizes issue or PR as related to a bug. label Dec 9, 2020
@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Dec 9, 2020
@frasertweedale
Author

/sig network

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 9, 2020
frasertweedale added a commit to frasertweedale/kubernetes that referenced this issue Dec 9, 2020
Creating a service with two ports that have the same name, but
different protocol, fails:

    $ cat service-test.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: service-test
      labels:
        app: service-test
    spec:
      selector:
        app: service-test
      clusterIP: None
      ports:
      - name: ldap
        protocol: TCP
        port: 389
      - name: kerberos
        protocol: TCP
        port: 88
      - name: kerberos
        protocol: UDP
        port: 88

    $ oc replace -f service-test.yaml
    The Service "service-test" is invalid:
    spec.ports[2].name: Duplicate value: "kerberos"

Some services operate over both TCP and UDP (e.g. DNS).
Furthermore, some of these also require DNS SRV records for both
_tcp and _udp with the same service name (e.g. Kerberos).  The
current ServicePort validation behaviour - `name` must be unique -
does not admit this use case.

Relax the uniqueness check, ensuring that name/protocol *pairs* are
unique.

Fixes: kubernetes#97149
@aojea
Member

aojea commented Dec 9, 2020

This is working as expected; names for ports must be unique:

// ServicePort contains information on service's port.
type ServicePort struct {
// The name of this port within the service. This must be a DNS_LABEL.
// All ports within a ServiceSpec must have unique names. When considering
// the endpoints for a Service, this must match the 'name' field in the
// EndpointPort.
// Optional if only one ServicePort is defined on this service.
// +optional
Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`

What's the problem with using this?

apiVersion: v1
kind: Service
metadata:
  name: service-test
  labels:
    app: service-test
spec:
  selector:
    app: service-test
  clusterIP: None
  ports:
  - name: ldap
    protocol: TCP
    port: 389
  - name: kerberos-tcp
    protocol: TCP
    port: 88
  - name: kerberos-udp
    protocol: UDP
    port: 88
kubectl get service/service-test
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                 AGE
service-test   ClusterIP   None         <none>        389/TCP,88/TCP,88/UDP   9s
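
For reference, the SRV owner names generated from this workaround follow the port names, per the Kubernetes DNS spec; with an illustrative default namespace and cluster.local zone they would be:

_ldap._tcp.service-test.default.svc.cluster.local.
_kerberos-tcp._tcp.service-test.default.svc.cluster.local.
_kerberos-udp._udp.service-test.default.svc.cluster.local.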

@frasertweedale
Author

@aojea doing that means you can create the service object. But the resulting SRV records will be wrong. They need to be _kerberos._tcp.<the-rest> and _kerberos._udp.<the-rest>. This is how SRV records work - mapping IANA-registered service names and IANA-registered transport protocols to target host/port pairs.
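
For reference, RFC 2782 defines the SRV owner name as _Service._Proto.Name; an illustrative (made-up) record looks like:

_kerberos._udp.example.com. 86400 IN SRV 0 5 88 kdc1.example.com.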

So yes, "names for ports must be unique" - in the current Kubernetes implementation. That behaviour is the bug - it obstructs this important use case.

@aojea
Member

aojea commented Dec 9, 2020

So yes, "names for ports must be unique" - in the current Kubernetes implementation. That behaviour is the bug - it obstructs this important use case.

I think that there are solutions to this problem, but they have to be backwards compatible
#97150 (comment)

To be fair, this "technically" doesn't prevent Kerberos from working; AFAIK it only needs one protocol (I'm not much into Kerberos, so I could be wrong here). It also seems that TCP is not enabled by default in some Kerberos implementations: https://web.mit.edu/kerberos/krb5-1.4/krb5-1.4.1/doc/krb5-admin/Hostnames-for-KDCs.html

_kerberos._udp
This is for contacting any KDC by UDP. This entry will be used the most often. Normally you should list port 88 on each of your KDCs.
_kerberos._tcp
This is for contacting any KDC by TCP. The MIT KDC by default will not listen on any TCP ports, so unless you've changed the configuration or you're running another KDC implementation, you should leave this unspecified. If you do enable TCP support, normally you should use port 88.

However, I do see that this is a real use case.
@robscott @thockin @chrisohaver is this something that the new field on service ports appProtocol can solve?

// The application protocol for this port.
// This field follows standard Kubernetes label syntax.
// Un-prefixed names are reserved for IANA standard service names (as per
// RFC-6335 and http://www.iana.org/assignments/service-names).
// Non-standard protocols should use prefixed names such as
// mycompany.com/my-custom-protocol.
// +optional
AppProtocol *string

https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/1507-app-protocol

IIUIC the current DNS specification uses the port name to create the SRV record: https://github.com/kubernetes/dns/blob/master/docs/specification.md#232---srv-records

can we use appProtocol instead, if it is defined, or both, or find a way to do it :) ?

@tiran
Contributor

tiran commented Dec 9, 2020

I think that there are solutions to this problem, but they have to be backwards compatible
#97150 (comment)

To be fair, this "technically" doesn't prevent Kerberos from working; AFAIK it only needs one protocol (I'm not much into Kerberos, so I could be wrong here). It also seems that TCP is not enabled by default in some Kerberos implementations: web.mit.edu/kerberos/krb5-1.4/krb5-1.4.1/doc/krb5-admin/Hostnames-for-KDCs.html

FreeIPA (our use case) requires both TCP and UDP for Kerberos. UDP is used by default for simple authentication requests. In some cases the payload does not fit into a UDP packet, and KRB5 automatically switches to TCP. We cannot just use TCP-only: some applications assume UDP as the primary protocol, and TCP would also slow down Kerberos. We really need both.

IIUIC the current DNS specification uses the port name to create the SRV record: https://github.com/kubernetes/dns/blob/master/docs/specification.md#232---srv-records

can we use appProtocol instead, if it is defined, or both, or find a way to do it :) ?

Can you change the code to use a combination of port and protocol? Something like port 88 alone is not a sufficient qualifier for a service; you always need the protocol. For example, https://man7.org/linux/man-pages/man5/services.5.html uses identifier pairs like 88/tcp and 88/udp.
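
For illustration, a typical services(5) file carries paired entries like these (exact aliases vary by distribution):

kerberos        88/tcp          kerberos5 krb5          # Kerberos v5
kerberos        88/udp          kerberos5 krb5          # Kerberos v5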

@frasertweedale
Author

frasertweedale commented Dec 9, 2020

From my POV, though I have not tested it (I really have no idea how to build and test Kubernetes locally with my own changes), relaxing the uniqueness constraint from the ServicePort "name" alone to the pair of "name" and "protocol" might be enough. It would seem to be backwards compatible, up to any other parts of Kubernetes, or third-party programs, not handling the possibility of ServicePorts with duplicate names. Certainly there would be no change in behaviour for any currently-accepted Service spec. If there are other parts of Kubernetes (including docs) that need changing to handle the possibility of duplicate port names, they can be changed.

But there might not even be much else that needs changing. EndpointPort handles duplicate names fine, as does the OpenShift cluster-dns-operator (or whatever is the component in OpenShift that turns Endpoints into SRV records). See https://frasertweedale.github.io/blog-redhat/posts/2020-12-08-k8s-srv-limitation.html#endpoints-do-not-have-the-limitation for proof.

@aojea
Member

aojea commented Dec 9, 2020

…relaxing the uniqueness constraint from the ServicePort "name" alone to the pair of "name" and "protocol" might be enough. It would seem to be backwards compatible, up to any other parts of Kubernetes, or third-party programs, not handling the possibility of ServicePorts with duplicate names.

I'm just highlighting the fact that the API explicitly says the name should be unique. Let's wait for @thockin; this requirement in the API comes from pre-1.0 Kubernetes versions, and I think he can explain it better than anybody...

can we use appProtocol instead, if it is defined, or both, or find a way to do it :) ?

Can you change the code to use a combination of port and protocol?

right now the SRV record is built like:
_<port>._<proto>.<service>.<ns>.svc.<zone>. <ttl> IN SRV <weight> <priority> <port-number> <service>.<ns>.svc.<zone>.

what I'm suggesting is an additional record based on the new field appProtocol:
_<appProtocol>._<proto>.<service>.<ns>.svc.<zone>. <ttl> IN SRV <weight> <priority> <port-number> <service>.<ns>.svc.<zone>.
the protocol is always obtained from the port spec
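
Concretely, if both 88/TCP and 88/UDP ports of the example Service set appProtocol: kerberos, that would yield additional records named (namespace and zone illustrative):

_kerberos._tcp.service-test.default.svc.cluster.local.
_kerberos._udp.service-test.default.svc.cluster.local.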

@frasertweedale
Author

what I'm suggesting is an additional record based on the new field appProtocol:
_<appProtocol>._<proto>.<service>.<ns>.svc.<zone>. <ttl> IN SRV <weight> <priority> <port-number> <service>.<ns>.svc.<zone>.
the protocol is always obtained from the port spec

Based on what I have read about the purpose of appProtocol, it is more about signalling to the Kubernetes network/ingress systems how requests should be transported through to the application. To me it seems conceptually wrong to base SRV records on it, and it may have unintended consequences. I could be wrong - I'd love to see more concrete examples of how appProtocol is used, which would help clarify whether it makes sense to use it for SRV records or not.

@chrisohaver
Contributor

As a one-off workaround for now, I think you could use CoreDNS's rewrite plugin to add -tcp/-udp to the port-name segment, based on the protocol segment. The rewrite plugin allows rewriting of incoming queries using regular expressions. The rewrite could look something like the following untested snip ...

rewrite name regex (_kerberos)\._(tcp|udp)\.(.*\.svc\.cluster\.local\.) {1}-{2}._{2}.{3}

which essentially would rewrite, for example ...
_kerberos._tcp.service.namespace.svc.cluster.local. to _kerberos-tcp._tcp.service.namespace.svc.cluster.local.

the above rewrite snip would also need a response rewrite to make it compatible with clients that are sensitive to name mismatches.
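
A combined request/response rewrite might look something like this untested sketch, assuming the rewrite plugin's block form with name and answer name rules (regexes illustrative):

rewrite stop {
    name regex (_kerberos)\._(tcp|udp)\.(.*\.svc\.cluster\.local\.) {1}-{2}._{2}.{3}
    answer name (_kerberos)-(tcp|udp)\._(tcp|udp)\.(.*\.svc\.cluster\.local\.) {1}._{2}.{4}
}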

@frasertweedale
Author

@chrisohaver you have it backwards. We require the service name in the SRV records to be _kerberos, but Kubernetes will not let us create the Service object that would yield those records.

Did you mean you could use a rewrite to remove the -tcp/-udp suffix?

@chrisohaver
Contributor

I think I got the direction correct...

  1. Assume a service is defined with two ports named kerberos-tcp and kerberos-udp
  2. A DNS query comes in to CoreDNS for _kerberos._tcp.service.namespace.svc.cluster.local.
  3. The rewrite plugin changes it to _kerberos-tcp._tcp.service.namespace.svc.cluster.local.
  4. kubernetes plugin looks up the modified query name _kerberos-tcp._tcp.service.namespace.svc.cluster.local., and creates response

@thockin
Member

thockin commented Dec 9, 2020

The problem here is compatibility. I swear I opened this same issue years ago, but I can't find it.

The way service ports are defined sort of predates the strategic merge capabilities. It ends up needing a compound key and we STILL have cases where various patch/merge cases do not work and can't really be fixed.

To allow this would be incompatible with all existing client-side merging (including apply).

I'm not HAPPY about this, but that's where it is.

Here's the part where I make suggestions that I don't like. We could consider adding a way to override the SRV name. E.g. a net-new field like ianaName which does not have the same uniqueness requirement. DNS could use that for SRV, if it is defined. I don't love the idea, so I'd really want to know that other avenues have been exhausted before we really go there.

@prameshj because she always has more state on DNS than me.

@thockin
Member

thockin commented Dec 9, 2020

#47249

@prameshj
Contributor

prameshj commented Dec 9, 2020

+1 to using the rewrite plugin workaround as @chrisohaver suggested

As a longer term fix, introducing a new "ianaName" or "dnsName" in the ServicePort seems like the best solution, imo.

@frasertweedale
Author

OK, I suppose my next step is to write a KEP to add a new ServicePort field then?

@thockin
Member

thockin commented Dec 10, 2020

I don't LIKE that idea, so I am eager to hear alternatives.

@thockin
Member

thockin commented Dec 10, 2020

Can we take the conversation to #47249 to keep it all together?

@thockin thockin closed this as completed Dec 10, 2020
@aojea
Member

aojea commented Dec 10, 2020

I don't LIKE that idea, so I am eager to hear alternatives.

@thockin I suggested one

what I'm suggesting is an additional record based on the new field appProtocol:
_<appProtocol>._<proto>.<service>.<ns>.svc.<zone>. <ttl> IN SRV <weight> <priority> <port-number> <service>.<ns>.svc.<zone>.
the protocol is always obtained from the port spec

Why isn't the current appProtocol field valid? It is already in the Service spec and the semantics are good, IMHO:

// The application protocol for this port.
// This field follows standard Kubernetes label syntax.
// Un-prefixed names are reserved for IANA standard service names (as per
// RFC-6335 and http://www.iana.org/assignments/service-names).
// Non-standard protocols should use prefixed names such as
// mycompany.com/my-custom-protocol.
// This is a beta field that is guarded by the ServiceAppProtocol feature
// gate and enabled by default.
// +optional
AppProtocol *string `json:"appProtocol,omitempty" protobuf:"bytes,6,opt,name=appProtocol"`

kerberos is already in the IANA Service Name and Transport Protocol Port Number Registry

https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=kerberos

@prameshj
Contributor

prameshj commented Dec 10, 2020

I don't LIKE that idea, so I am eager to hear alternatives.

@thockin I suggested one

what I'm suggesting is an additional record based on the new field appProtocol:
_<appProtocol>._<proto>.<service>.<ns>.svc.<zone>. <ttl> IN SRV <weight> <priority> <port-number> <service>.<ns>.svc.<zone>.
the protocol is always obtained from the port spec

why current field appProtocol field is not valid, it is already in the Service spec and the semantics are good IMHO

Is this suggesting reuse of appProtocol to include values that need not be valid protocols? In this specific example, the appProtocol value will be "kerberos", which is a valid protocol as you linked. But will it always be the case? Can someone not have "mytestport_tcp.mytestsvc.default.svc.cluster.local"?

Sorry for commenting on the closed issue. I am happy to re-comment in the other issue if we want to add this proposal there.


@aojea
Member

aojea commented Dec 10, 2020

Is this suggesting reuse of appProtocol to include values that need not be valid protocols? In this specific example, the appProtocol value will be "kerberos", which is a valid protocol as you linked. But will it always be the case? Can someone not have "mytestport_tcp.mytestsvc.default.svc.cluster.local"?

indeed, the suggestion is to use the appProtocol field in the Service spec to generate a DNS SRV record based on that field, because the field description sounds right to me.

But will it always be the case? Can someone not have "mytestport_tcp.mytestsvc.default.svc.cluster.local"?

they will have mytestport_tcp.mytestsvc.default.svc.cluster.local today if they set the port.Name to mytestport; there is not much difference with respect to what we have now, right?

Sorry for commenting on the closed issue. I am happy to re-comment in the other issue if we want to add this proposal there.

I think that the other issue is related, or this one is a consequence of it, but this one discusses the SRV record naming; the other, the service port name uniqueness.

@prameshj
Contributor

Is this suggesting reuse of appProtocol to include values that need not be valid protocols? In this specific example, the appProtocol value will be "kerberos", which is a valid protocol as you linked. But will it always be the case? Can someone not have "mytestport_tcp.mytestsvc.default.svc.cluster.local"?

indeed, the suggestion is to use the appProtocol field in the Service spec to generate a DNS SRV record based on that field, because the field description sounds right to me.

But will it always be the case? Can someone not have "mytestport_tcp.mytestsvc.default.svc.cluster.local"?

they will have mytestport_tcp.mytestsvc.default.svc.cluster.local today if they set the port.Name to mytestport; there is not much difference with respect to what we have now, right?

if they wanted to reuse the name "mytestport" for both the TCP and UDP protocols, the current implementation won't let them do it. I realise this is not a concrete use case, but if we are making an API change to support it, we should try to support all cases.

Sorry for commenting on the closed issue. I am happy to re-comment in the other issue if we want to add this proposal there.

I think that the other issue is related, or this one is a consequence of it, but this one discusses the SRV record naming; the other, the service port name uniqueness.

@aojea
Member

aojea commented Dec 10, 2020

if they wanted to reuse the name "mytestport" for both the TCP and UDP protocols, the current implementation won't let them do it. I realise this is not a concrete use case, but if we are making an API change to support it, we should try to support all cases.

sorry, I realise my comment wasn't clear: if they want to reuse the name on both ports, can they do it with the appProtocol field, without any additional API change?

the " what happens if the user sets a non-service name in the AppProtocol field", is something possible today, if a user want to general DNS SRV records with a non-service name, they can do it today with the Port Name field

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the /close command above.

@frasertweedale
Author

/reopen
/remove-lifecycle rotten

Resurrecting this issue as I have (yes, after a long time) submitted a KEP: kubernetes/enhancements#3242

@k8s-ci-robot
Contributor

@frasertweedale: Reopened this issue.

In response to the /reopen command above.

@k8s-ci-robot k8s-ci-robot reopened this Mar 16, 2022
@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2022
@thockin thockin removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 7, 2022
@thockin thockin self-assigned this Jul 7, 2022
@thockin
Member

thockin commented Aug 18, 2022

I was scrubbing my assigned issues and realized I lost track of this one. The KEP aged out. I apologize. Do you want to revive it?

A few thoughts as I paged it back in, some of which contradict past-me.

1

I do not think we can really make name non-unique, it would potentially break too many clients. Consider someone who has an older kubectl - it would merge the two port stanzas, silently, and potentially disrupt your service.

While it WOULD be a break, we could consider it. Why the change of heart here? Well, I was wrong - the merge keys for ports[] are either port (client-side SMP) or port + protocol (server-side). Neither considers name at all. There are some places where the uniqueness matters (e.g. gathering endpoints for a Service with a named targetPort) but we can disambiguate by protocol. E.g. MAYBE we could allow names to be duplicated if the only difference between the port stanzas is the protocol (and maybe nodePort? We don't co-allocate that, but maybe we should - we DO allow it).

e.g.

  ports:
  - name: dns
    port: 53
    protocol: UDP 
    targetPort: 53
  - name: dns
    port: 53
    protocol: TCP 
    targetPort: 53

There's still a risk that some other entity is relying on name uniqueness, so I am not convinced this path is best, but it would be nicest.

2

appProtocol is not appropriate for this (@frasertweedale had good rationale, but also that value can be a 'prefix.com/name'-style value).

I could maybe get behind a model where we said: if appProtocol is a "bare" name, we use it - either instead of or in addition to name. So given:

  ports:
  - name: krb-udp
    appProtocol: kerberos
    port: 88
    protocol: UDP 
    targetPort: 88
  - name: krb-tcp
    appProtocol: kerberos
    port: 88
    protocol: TCP 
    targetPort: 88

We would get DNS records:

_krb-udp._udp.ns.svc.cluster.local
_krb-tcp._tcp.ns.svc.cluster.local
_kerberos._udp.ns.svc.cluster.local
_kerberos._tcp.ns.svc.cluster.local

or even just:

_kerberos._udp.ns.svc.cluster.local
_kerberos._tcp.ns.svc.cluster.local

3

I could also maybe see formalizing the rewrite proposed above. E.g. if you specify your name as <something>-<protocol>, you get SRV records _<something>-<protocol>._<protocol>... AND _<something>._<protocol>... -- convention over configuration.
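
For example (illustrative), a port named kerberos-udp would then yield both:

_kerberos-udp._udp.<service>.<ns>.svc.<zone>.
_kerberos._udp.<service>.<ns>.svc.<zone>.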

The first idea seems scary, but the last 2 ideas both seem simpler than adding a new field, no?

@thockin thockin assigned thockin and unassigned thockin Aug 18, 2022
@aojea
Member

aojea commented Aug 26, 2022

you can start with option 2 right now by creating your own CoreDNS plugin ...

... and if there is traction it can be formalized

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 24, 2022
@thockin thockin removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 25, 2022
@k8s-triage-robot

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jan 19, 2024
@thockin thockin removed the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Mar 7, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 5, 2024
@thockin thockin removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 5, 2024