<abstract>This document defines methods for distributing Multi-User Chat (MUC) rooms across multiple services.</abstract>
&LEGALNOTICE;
<number>XXXX</number>
<status>ProtoXEP</status>
<type>Standards Track</type>
<sig>Standards</sig>
<dependencies>
<spec>XEP-0045</spec>
<spec>XEP-0030</spec>
</dependencies>
<supersedes/>
<supersededby/>
<shortname>dmuc</shortname>
&stpeter;
<revision>
<version>0.0.2</version>
<date>2010-02-05</date>
<initials>psa</initials>
<remark><p>Simplified the protocol to use a master-slave approach; modified terminology.</p></remark>
</revision>
<revision>
<version>0.0.1</version>
<date>2007-06-01</date>
<initials>psa</initials>
<remark><p>First draft.</p></remark>
</revision>
</header>
<section1 topic='Introduction' anchor='intro'>
<section2 topic='Motivation' anchor='motivation'>
<p>&xep0045; defines a full-featured technology for multi-user text conferencing in XMPP. By design, <cite>XEP-0045</cite> assumes that a conference room is hosted at a single service, which can be accessed from any point on the network. However, this assumption introduces a single point of failure for the conference room, since if occupants at a using domain lose connectivity to the hosting domain then they also lose connectivity to the room. In some deployment scenarios (and even on the open Internet) this behavior is suboptimal. Therefore, this document attempts to define a technology for distributing MUC rooms across multiple services.</p>
<p>This specification addresses the following requirements:</p>
<ol>
<li>Enable distribution of a MUC room across multiple services.</li>
<li>Enable a service to determine which other services it will peer with.</li>
<li>Enable the room creator to specify if distribution is allowed.</li>
<li>Enable occupants to remain in an instance of the conference if connectivity is lost to other instances.</li>
<li>Enable syncing of history, configuration, and room rosters on reconnect.</li>
</ol>
</section2>
<section2 topic='Approach' anchor='approach'>
<p>The basic approach to distribution of MUC rooms is as follows:</p>
<ol>
<li>A user creates a room on a service and configures it as "distributed" (or the service assumes that the room is distributed based on local service policy); this first instance of the room is called a SOURCE and the service on which it is created is called a FIRSTHOST.</li>
<li>The firsthost can immediately request that other services (called PEERHOSTS) replicate the room by creating their own local instances (called SHADOWS); alternatively, the firsthost can wait to send the replication request until users from the peerhost have joined the room.</li>
<li>If a user from the peerhost attempts to join the source room after replication is established, the firsthost invites the user to join the shadow rather than the source by sending a direct invitation to the user.</li>
<li>As long as the peerhost and firsthost have connectivity, they share room messages, room rosters, and room configuration changes in real time. If any conflict arises, the firsthost's information rules since it is "first among equals".</li>
<li>If the peerhost loses connectivity to the firsthost, it maintains the shadow, including local room history, room roster, and room configuration, and if possible also maintains connectivity with other peerhosts.</li>
<li>Upon reconnecting to the firsthost, a peerhost exchanges room history and room rosters with the firsthost and receives room configuration data (if modified).</li>
</ol>
<p>The room IDs of source rooms SHOULD be opaque to users and unique across all possible peerhosts, for example by generating a UUID in accordance with &rfc4122; or by hashing the human-readable name of the room using the SHA-256 algorithm in accordance with &nistfips180-2;.</p>
<p>This document adds the following terms to those defined in <cite>XEP-0045</cite>:</p>
<ul>
<li>Firsthost -- The MUC service at which a room is created.</li>
<li>Peerhost -- Any MUC service (other than the firsthost) that hosts an instance of the room.</li>
<li>Shadow -- An instance of the room at a peerhost.</li>
<li>Source -- The canonical instance of the room at the firsthost.</li>
</ul>
</section2>
<section2 topic='Entities' anchor='terms-entities'>
<p>In this document, the examples use the following entities.</p>
<ul>
<li>Firsthost: firsthost.example.com</li>
<li>Peerhosts: peer1.example.net and peer2.example.org</li>
<li>Source: f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com</li>
<li>Shadows: f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net and f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer2.example.org</li>
</ul>
</section2>
<section2 topic='Creating a Distributed Room' anchor='create'>
<p>When the original room owner creates the room (or subsequently configures the room), the service MAY offer the option of making the room a "distributed room". This is done by including the 'muc#roomconfig_distributed' field in the room configuration form:</p>
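<example caption='Distribution Field in the Room Configuration Form (Illustrative Fragment)'><![CDATA[
<!-- fragment of a standard muc#owner configuration form;
     the label text and default value shown here are illustrative -->
<field label='Allow Room to be Distributed?'
       type='boolean'
       var='muc#roomconfig_distributed'>
  <value>0</value>
</field>
]]></example>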
<p>Alternatively, the firsthost can choose to perform room distribution in the background, rather than exposing the 'muc#roomconfig_distributed' option to the user.</p>
</section2>
<section2 topic='Replicating a Room' anchor='replication'>
<p>When a firsthost would like a peerhost to provide a shadow, it sends a replication request to the peerhost.</p>
<example caption='Firsthost Requests Replication of Room'><![CDATA[
<!-- illustrative sketch: the <replicate/> element and the
     'urn:xmpp:dmuc:0' namespace are placeholders -->
<iq from='firsthost.example.com'
    id='rep1'
    to='peer1.example.net'
    type='set'>
  <replicate xmlns='urn:xmpp:dmuc:0'
             room='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'/>
</iq>
]]></example>
<p>Several error cases are possible (the peerhost is resource constrained, the firsthost is forbidden to peer with the peerhost, etc.); these will be specified more fully in a future version of this specification.</p>
<p>Once the peerhost acknowledges that it is willing and able to replicate the room, four things happen:</p>
<ol>
<li>The source sends the room configuration to the shadow.</li>
<li>The source sends the room roster to the shadow.</li>
<li>The shadow optionally requests the room history from the source.</li>
<li>The firsthost informs other peerhosts about the new peerhost.</li>
</ol>
<example caption='Source Sends Room Configuration to Shadow'><![CDATA[
<!-- illustrative sketch: the configuration is shown here as a
     standard muc#roomconfig data form; most fields are omitted -->
<iq from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'
    id='config1'
    to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net'
    type='set'>
  <query xmlns='http://jabber.org/protocol/muc#owner'>
    <x xmlns='jabber:x:data' type='submit'>
      <field var='FORM_TYPE'>
        <value>http://jabber.org/protocol/muc#roomconfig</value>
      </field>
      <field var='muc#roomconfig_distributed'>
        <value>1</value>
      </field>
    </x>
  </query>
</iq>
]]></example>
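<p>The room roster can be shared using the same occupant-presence mechanism described under Joining a Room: for instance, the source might send one presence stanza per occupant to the shadow (the roomnick and the affiliation and role values shown here are illustrative):</p>
<example caption='Source Shares Room Roster with Shadow (Illustrative)'><![CDATA[
<presence from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com/SomeNick'
          to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net/SomeNick'>
  <x xmlns='http://jabber.org/protocol/muc#user'>
    <item affiliation='member' role='participant'/>
  </x>
</presence>
]]></example>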
<p>The new shadow SHOULD request the room history. This is done by sending an IQ-get from the shadow to the source, containing a &lt;history/&gt; element qualified by the 'http://jabber.org/protocol/muc' namespace (the syntax and semantics of this element are described in <cite>XEP-0045</cite>).</p>
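<p>For instance, a request for the complete room history might look as follows (attributes such as 'since' or 'maxstanzas' could be used to limit the request; the 'id' value is illustrative):</p>
<example caption='Shadow Requests Room History from Source (Illustrative)'><![CDATA[
<iq from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net'
    id='hist1'
    to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'
    type='get'>
  <history xmlns='http://jabber.org/protocol/muc'/>
</iq>
]]></example>
<p>Finally, the firsthost notifies the other peerhosts about the new peerhost. The notification might take a form such as the following (the &lt;peer/&gt; element and the 'urn:xmpp:dmuc:0' namespace are illustrative placeholders):</p>
<example caption='Firsthost Notifies Peerhost of New Peerhost (Illustrative)'><![CDATA[
<iq from='firsthost.example.com'
    id='ph1'
    to='peer2.example.org'
    type='set'>
  <peer xmlns='urn:xmpp:dmuc:0'
        jid='peer1.example.net'
        room='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'/>
</iq>
]]></example>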
<example caption='Peerhost Acknowledges Notification of New Peerhost'><![CDATA[
<iq from='peer2.example.org'
id='ph1'
to='firsthost.example.com'
type='result'/>
]]></example>
</section2>
<section2 topic='Joining a Room' anchor='join'>
<p>When a user attempts to join a source room, the firsthost determines if it will invite the user to join a shadow at a peerhost instead. The process for determining when to send invitations is implementation specific and might be subject to configuration at the firsthost (e.g., the firsthost might send invitations only to users of a domain associated with the peerhost and only after a certain number of such users have joined the room at the firsthost).</p>
<p>To begin, a user at the peerhost attempts to join the source room at the firsthost:</p>
<example caption='User Seeks to Join Source Room'><![CDATA[
<!-- 'user@example.net' and the roomnick 'UserNick' are illustrative
     placeholders for a user associated with the peerhost -->
<presence from='user@example.net/desktop'
          to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com/UserNick'>
  <x xmlns='http://jabber.org/protocol/muc'/>
</presence>
]]></example>
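<p>Based on its invitation policies, the firsthost can then redirect the user to the shadow at peer1.example.net, for example by sending a direct invitation as specified in &xep0249; (the invitation method shown here is illustrative and not mandated by this document):</p>
<example caption='Firsthost Invites User to Join Shadow (Illustrative)'><![CDATA[
<message from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'
         to='user@example.net'>
  <x xmlns='jabber:x:conference'
     jid='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net'/>
</message>
]]></example>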
<p>If the user accepts the invitation and joins the shadow, the shadow then informs the source (and any other shadows) of the user's presence; it does so by sending presence from the roomjid of the user at the shadow to a roomjid with the same roomnick at the source and shadow(s).</p>
<example caption='Shadow Informs Source of New Occupant'><![CDATA[
<!-- the roomnick 'UserNick' is an illustrative placeholder -->
<presence from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net/UserNick'
          to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com/UserNick'/>
]]></example>
<p>The source then delivers that presence stanza to its local users. (Note: The shadow needs to send only one presence stanza to the source, thus reducing the number of stanzas sent over the server-to-server link between the peerhost and the firsthost.)</p>
<p>Messages are handled in the same way: when an occupant at the shadow sends a message to the room, the shadow sends a single copy of the message stanza to the source, and the source then delivers that message stanza to its local users. (Note: The shadow needs to send only one message stanza to the source, thus reducing the number of stanzas sent over the server-to-server link between the peerhost and the firsthost.)</p>
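<p>For example (the roomnick and message body are illustrative):</p>
<example caption='Shadow Forwards Message to Source (Illustrative)'><![CDATA[
<message from='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@peer1.example.net/UserNick'
         to='f609923deb78718a125b93d32609bd5265dd927242ac93a99eb366109df2bd39@firsthost.example.com'
         type='groupchat'>
  <body>Greetings from an occupant at the shadow!</body>
</message>
]]></example>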