Federated MUC for Constrained Environments

Abstract: This document provides a protocol for reducing the bandwidth cost of local users contributing to a remote MUC over a constrained link through local proxying of the MUC room.

&LEGALNOTICE;

Number: 0289
Status: Experimental
Type: Standards Track
SIG: Standards
Approver: Council
Dependencies: XMPP Core, XEP-0045
Short Name: FMUC
Author: &ksmithisode;

Revision History:

0.1 (2010-11-29, psa)
    Initial published version.

0.0.1 (2010-05-24, kis)
    First draft.

Multi-User Chat (MUC) uses a lot of bandwidth, and server-to-server (s2s) traffic is sometimes heavily constrained. This protocol limits the amount of traffic going across s2s links by locally proxying remote MUC rooms. It requires no setup in advance, and needs no bandwidth for remote rooms without local occupants. The premise is that a proxy room joins another room and receives stanzas from the MUC just as another occupant would.

When appropriately configured, avoid bandwidth use that is not strictly necessary for message exchange.

Allow an endpoint to scale gracefully up to the usual full MUC chat service as bandwidth allows.

Each local representation has a different address for the federated MUC so that standard XMPP routing rules can be used, and servers do not need to be modified. To generate the JID through which a user can join a federated MUC, the joining client should apply the escaping rules of &xep0106; to the JID of the MUC and use the result as the node part of a JID whose domain is that of the mirroring service. For example, if a client is connected to the server 'remote.example.com', which has a mirroring service 'mirror.remote.example.com', and the user wants to join the MUC 'jabberchat@talk.example.com', their client would generate a federated MUC JID of 'jabberchat\40talk.example.com@mirror.remote.example.com' for them to use.

The following JIDs are used in this document.

The user kev@remote.example.com/Swift joins jabberchat@talk.example.com through the pre-known mirror.remote.example.com service. At this point mirror.remote.example.com knows nothing of the jabberchat@talk.example.com MUC, and no proxying is in place beyond mirror.remote.example.com being willing to proxy for kev@remote.example.com.

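A sketch of this join follows; the nickname 'Kev' is assumed for illustration. The client simply performs a standard XEP-0045 join directed at the federated MUC JID:

<!-- illustrative sketch; the nickname 'Kev' is assumed -->
<presence from='kev@remote.example.com/Swift'
          to='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'>
  <x xmlns='http://jabber.org/protocol/muc'/>
</presence>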

mirror.remote.example.com then un-escapes 'jabberchat\40talk.example.com' and joins jabberchat@talk.example.com (the master), indicating that it is a room mirror.

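A sketch of the slave's join; the <fmuc/> marker element and the 'urn:xmpp:tmp:fmuc' namespace are placeholders, since the final namespace has not yet been assigned (see the Registrar note below):

<presence from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          to='jabberchat@talk.example.com/Kev'>
  <x xmlns='http://jabber.org/protocol/muc'/>
  <!-- placeholder element name and namespace; not yet assigned -->
  <fmuc xmlns='urn:xmpp:tmp:fmuc'/>
</presence>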

jabberchat@talk.example.com recognises that the mirror service is now mirroring it, and performs the usual ACL checks as if kev@remote.example.com/Swift had joined directly, sending presence to all occupants as normal. For all in-room routing, the slave is now treated as an occupant, and the slave is expected to do the fan-out to its own users, as it is itself a MUC.

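For example, the master might deliver the new occupant's presence to the slave room as follows (one illustrative stanza; presence for the other occupants is delivered the same way, and status codes are omitted):

<!-- affiliation and role values are illustrative -->
<presence from='jabberchat@talk.example.com/Kev'
          to='jabberchat\40talk.example.com@mirror.remote.example.com'>
  <x xmlns='http://jabber.org/protocol/muc#user'>
    <item affiliation='none' role='participant'/>
  </x>
</presence>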

The slave then fans the presence out to its local users.

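For example (affiliation and role values are illustrative):

<presence from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          to='kev@remote.example.com/Swift'>
  <x xmlns='http://jabber.org/protocol/muc#user'>
    <item affiliation='none' role='participant'/>
  </x>
</presence>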

If the master does not allow the user to join, it sends the standard MUC error to the slave. Note that for stanzas sent to a user on the slave (such as this join error), the master sends to the full MUC JID of the user on the slave, not to the slave room as it does with most other stanzas.

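For example, a members-only rejection might look as follows; the specific error condition is illustrative, and any standard XEP-0045 error applies:

<presence from='jabberchat@talk.example.com/Kev'
          to='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          type='error'>
  <x xmlns='http://jabber.org/protocol/muc'/>
  <error type='auth'>
    <registration-required xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
  </error>
</presence>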

The proxy then delivers this error to the user.

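Continuing the illustrative rejection above:

<presence from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          to='kev@remote.example.com/Swift'
          type='error'>
  <x xmlns='http://jabber.org/protocol/muc'/>
  <error type='auth'>
    <registration-required xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
  </error>
</presence>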

Now when a user joins the master directly, the master performs the usual presence distribution to occupants (remembering that the slave is an occupant). Status codes are omitted from this example; see &xep0045; for those.

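A sketch of this flow, using a hypothetical direct occupant 'juliet@talk.example.com/Home' with nickname 'Juliet' (both invented for illustration):

<!-- Juliet joins the master directly -->
<presence from='juliet@talk.example.com/Home'
          to='jabberchat@talk.example.com/Juliet'>
  <x xmlns='http://jabber.org/protocol/muc'/>
</presence>

<!-- the master distributes her presence to the slave room, as it does to any occupant -->
<presence from='jabberchat@talk.example.com/Juliet'
          to='jabberchat\40talk.example.com@mirror.remote.example.com'>
  <x xmlns='http://jabber.org/protocol/muc#user'>
    <item affiliation='none' role='participant'/>
  </x>
</presence>

<!-- the slave fans the presence out to its local users -->
<presence from='jabberchat\40talk.example.com@mirror.remote.example.com/Juliet'
          to='kev@remote.example.com/Swift'>
  <x xmlns='http://jabber.org/protocol/muc#user'>
    <item affiliation='none' role='participant'/>
  </x>
</presence>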

The flow for a user leaving the proxy room is much the same as the flow for joining it:

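An illustrative sketch of Kev leaving (status codes and muc#user payloads are again omitted):

<!-- the user leaves the proxy room -->
<presence from='kev@remote.example.com/Swift'
          to='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          type='unavailable'/>

<!-- the slave relays the unavailable presence to the master -->
<presence from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
          to='jabberchat@talk.example.com/Kev'
          type='unavailable'/>

<!-- the master distributes the unavailable presence to the occupants,
     including the slave room, which fans it out locally as on join -->
<presence from='jabberchat@talk.example.com/Kev'
          to='jabberchat\40talk.example.com@mirror.remote.example.com'
          type='unavailable'/>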

When the master MUC receives a parting presence from the only user of the proxy, the proxy itself also leaves the room. This means that as long as no users of the proxy are in the room, the room causes no traffic on the s2s link.

Distribution of presence for users parting when connected directly to the MUC is identical to distribution of presence for users joining directly to the MUC.

Distribution of presence for users changing status is the same as that for joining and parting.

Messages sent to the room receive the normal fan-out, like presence. A user of the proxy sends a message to the proxy room, which relays it to the master.

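A sketch of a proxied user sending a message; the body text is taken from the original examples:

<!-- the user sends a groupchat message to the proxy room -->
<message from='kev@remote.example.com/Swift'
         to='jabberchat\40talk.example.com@mirror.remote.example.com'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>

<!-- the proxy relays it to the master, as any occupant would -->
<message from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
         to='jabberchat@talk.example.com'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>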

If the proxy is not using fire and forget mode (see below), it MUST NOT fan out this message to local users until it receives the message copy from the MUC.

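A sketch of the master's fan-out, reusing the hypothetical occupant Juliet from the presence examples:

<!-- the master distributes the message to its direct occupants... -->
<message from='jabberchat@talk.example.com/Kev'
         to='juliet@talk.example.com/Home'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>

<!-- ...and sends the copy back to the slave room -->
<message from='jabberchat@talk.example.com/Kev'
         to='jabberchat\40talk.example.com@mirror.remote.example.com'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>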

When it receives the message copy, the proxy MUST then distribute it to the proxied occupants.

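For example:

<message from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
         to='kev@remote.example.com/Swift'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>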

When dealing with very constrained s2s links, the extra round-trip involved with the MUC sending the message back to the proxy may be unacceptable. In this case, the proxy MAY include the <nomirror> element. If the MUC receives a message from a proxy with <nomirror>, it MUST NOT resend this message to the proxy during its usual fan-out, but MUST send it to other occupants as usual. If sending a message with <nomirror>, the proxy MUST perform fan-out as if the MUC had sent the message back to it.

Note that this introduces unfortunate side-effects, such as messages appearing in different orders depending on whether an occupant is connected directly to the MUC or through a proxy. Also, messages rejected by the MUC may already have been delivered to users on a proxy. As such, a proxy SHOULD only use <nomirror> in environments where these side-effects are understood.

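A sketch of a fire and forget submission; as above, the namespace on <nomirror/> is a placeholder, since none has yet been assigned:

<!-- the user sends the message to the proxy room as normal -->
<message from='kev@remote.example.com/Swift'
         to='jabberchat\40talk.example.com@mirror.remote.example.com'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>

<!-- the proxy relays it to the master, flagged so the master will not echo it back -->
<message from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
         to='jabberchat@talk.example.com'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
  <!-- placeholder namespace; not yet assigned -->
  <nomirror xmlns='urn:xmpp:tmp:fmuc'/>
</message>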

If the proxy is using fire and forget mode, it MUST fan out this message to local users now, instead of waiting until it receives the message copy from the MUC.

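For example:

<message from='jabberchat\40talk.example.com@mirror.remote.example.com/Kev'
         to='kev@remote.example.com/Swift'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>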

Because this is fire and forget mode, the MUC now MUST NOT send the message back to the proxy, but MUST send it to the other occupants.

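For example, to the hypothetical occupant Juliet:

<message from='jabberchat@talk.example.com/Kev'
         to='juliet@talk.example.com/Home'
         type='groupchat'>
  <body>[[Unclassified]] It's getting warm in here.</body>
</message>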

To perform administration of the MUC, connect directly to the MUC and follow the standard process.

This protocol allows a MUC mirror to proxy for another JID, so it should only be deployed in scenarios where either the proxy service is trusted, or it is known that the users of the proxy service are in the same security domain as the proxy service.

None.

Needs a namespace.

An XML schema will be provided when this specification advances.