<?xml version="1.0" encoding="UTF-8"?>
  <?xml-stylesheet type="text/xsl" href="rfc2629.xslt" ?>
  <!-- generated by https://github.com/cabo/kramdown-rfc version 1.7.29 (Ruby 2.6.10) -->


<!DOCTYPE rfc  [
  <!ENTITY nbsp    "&#160;">
  <!ENTITY zwsp   "&#8203;">
  <!ENTITY nbhy   "&#8209;">
  <!ENTITY wj     "&#8288;">

]>


<rfc ipr="trust200902" docName="draft-song-rtgwg-din-usecases-requirements-00" category="info" consensus="true" submissionType="IETF" xml:lang="en" tocInclude="true" sortRefs="true" symRefs="true">
  <front>
    <title abbrev="DIN: Problem, Use Cases, Requirements">Distributed Inference Network (DIN) Problem Statement, Use Cases, and Requirements</title>

    <author initials="S." surname="Jian" fullname="Song Jian">
      <organization>China Mobile</organization>
      <address>
        <email>songjianyjy@chinamobile.com</email>
      </address>
    </author>
    <author initials="W." surname="Cheng" fullname="Weiqiang Cheng">
      <organization>China Mobile</organization>
      <address>
        <email>chengweiqiang@chinamobile.com</email>
      </address>
    </author>

    <date year="2025"/>

    
    <workgroup>rtgwg</workgroup>
    <keyword>DIN</keyword> <keyword>AI Inference</keyword>

    <abstract>


<?line 32?>

<t>This document describes the problem statement, use cases, and requirements for a "Distributed Inference Network" (DIN) in the era of pervasive AI. As AI inference services become widely deployed and are accessed by billions of users, applications, and devices, traditional centralized cloud-based inference architectures face challenges in scalability, latency, security, and efficiency. DIN aims to address these challenges by leveraging distributed edge-cloud collaboration, intelligent scheduling, and enhanced network security to support low-latency, high-concurrency, and secure AI inference services.</t>



    </abstract>



  </front>

  <middle>


<?line 37?>

<section anchor="introduction"><name>Introduction</name>

<t>AI inference is rapidly evolving into a fundamental service accessed by billions of users, applications, IoT devices, and AI agents.</t>

<t>The rapid advancement and widespread adoption of large AI models are introducing significant changes to internet usage patterns and service requirements. These changes present new challenges that existing networks need to address to effectively support the growing demands of AI inference services.</t>

<t>First, internet usage patterns are shifting from primarily content access to increasingly including AI model access.</t>

<t>Users and applications are interacting more frequently with AI models, generating distinct traffic patterns that differ from traditional web browsing or streaming. This shift requires networks to better support model inference as an important service type alongside conventional content delivery.</t>

<t>Second, the interaction modalities are diversifying from simple human-to-model conversations to include complex multi-modal interactions.</t>

<t>As AI inference costs decrease dramatically, applications, IoT devices, and autonomous systems are increasingly integrating AI capabilities through API calls and embedded model access. This expansion creates unprecedented demands for high-concurrency processing and predictable low-latency responses, as these systems often require real-time inference for critical functions including autonomous operations, industrial control, and interactive services.</t>

<t>Third, AI inference workloads introduce distinct traffic characteristics that impact network design.</t>

<t>Both north-south traffic between users and AI services, and east-west traffic among distributed AI components, are growing significantly. Moreover, the nature of AI inference communication, often organized around token generation and processing, introduces new considerations for traffic management, quality of service measurement, and resource optimization that complement traditional bit-oriented network metrics.</t>

<t>These developments collectively challenge current network infrastructures to adapt to the unique characteristics of AI inference workloads. Centralized approaches face limitations in supporting the distributed, latency-sensitive, and concurrent nature of modern AI services, particularly in scenarios requiring real-time performance, data privacy, and reliable service delivery.</t>

<t>This document outlines the problem statement, use cases, and functional requirements for a Distributed Inference Network (DIN) to enable scalable, efficient, and secure AI inference services that can address these emerging challenges.</t>

</section>
<section anchor="conventions-and-definitions"><name>Conventions and Definitions</name>
<dl>
  <dt>DIN:</dt>
  <dd>
    <t>Distributed Inference Network</t>
  </dd>
</dl>

<t>The key words "<bcp14>MUST</bcp14>", "<bcp14>MUST NOT</bcp14>", "<bcp14>REQUIRED</bcp14>", "<bcp14>SHALL</bcp14>", "<bcp14>SHALL
NOT</bcp14>", "<bcp14>SHOULD</bcp14>", "<bcp14>SHOULD NOT</bcp14>", "<bcp14>RECOMMENDED</bcp14>", "<bcp14>NOT RECOMMENDED</bcp14>",
"<bcp14>MAY</bcp14>", and "<bcp14>OPTIONAL</bcp14>" in this document are to be interpreted as
described in BCP 14 <xref target="RFC2119"/> <xref target="RFC8174"/> when, and only when, they
appear in all capitals, as shown here.</t>

<?line -18?>

</section>
<section anchor="problem-statement"><name>Problem Statement</name>

<t>The proliferation of AI inference services has exposed fundamental limitations in traditional centralized AI inference architectures.</t>

<t>Centralized inference deployments face severe scalability challenges when handling concurrent requests from the rapidly expanding ecosystem of users, applications, IoT devices, and AI agents. Service providers have experienced recurrent outages and performance degradation during peak loads, with concurrent inference requests projected to grow from millions to billions. The fundamental constraint of concentrating computational resources in limited geographical locations creates inherent bottlenecks that lead to service disruptions and degraded user experience under massive concurrent access.</t>

<t>While human-to-model conversations may tolerate moderate network latency, the emergence of diverse interaction patterns including application-to-model, device-to-model, and machine-to-model communications imposes stringent low-latency requirements that centralized architectures cannot meet.</t>

<t>Applications including industrial robots, autonomous systems, and real-time control platforms require low-latency responses that are fundamentally constrained by the unavoidable geographical dispersion between end devices and centralized inference facilities. This architectural limitation creates critical barriers for delay-sensitive operations across manufacturing, healthcare, transportation, and other domains where millisecond to sub-millisecond-level response times are essential.</t>

<t>Enterprise and industrial AI inference scenarios present unique security and compliance requirements that fundamentally conflict with centralized architectural approaches.</t>

<t>Sectors including finance, healthcare, and public services handle sensitive data subject to strict regulatory requirements that often mandate localized processing and data sovereignty. The transmission of confidential information, model parameters, and intermediate computational data across extended network paths to centralized inference pools creates unacceptable vulnerabilities and compliance violations. These fundamental constraints render centralized inference architectures unsuitable for numerous critical applications where data sovereignty, privacy protection, and regulatory compliance represent non-negotiable requirements.</t>

</section>
<section anchor="use-cases"><name>Use Cases</name>

<section anchor="enterprise-secure-inference-services"><name>Enterprise Secure Inference Services</name>
<t>Enterprises in regulated sectors such as finance, healthcare, industrial and public services require strict data governance while leveraging advanced AI capabilities. In this use case, inference servers are deployed at enterprise headquarters or private cloud environments, with branch offices and field devices accessing these services through heterogeneous network paths including dedicated lines, VPNs, and public internet connections.</t>

<t>The scenario encompasses various enterprise applications such as AIoT equipment inspection, intelligent manufacturing, and real-time monitoring systems that demand low-latency, high-reliability, and high-security inference services. Different network paths should provide appropriate levels of cryptographic assurance and quality of service while accommodating varying bandwidth and latency characteristics across the enterprise network topology.</t>

<t>The primary challenge involves maintaining data sovereignty and security across diverse network access scenarios while ensuring consistent low-latency performance for delay-sensitive industrial applications.</t>

</section>
<section anchor="edge-cloud-collaborative-model-training"><name>Edge-Cloud Collaborative Model Inference</name>
<t>Small and medium enterprises often need to scale AI inference capacity dynamically while facing capital constraints that preclude full-scale inference infrastructure deployment. This use case enables flexible resource allocation in which businesses maintain core computational resources on-premises and procure additional inference capacity from AI inference providers during demand peaks.</t>

<t>The hybrid deployment model allows sensitive data to remain within enterprise boundaries while leveraging elastic cloud resources for computationally intensive operations. As enterprise business requirements fluctuate, the ability to seamlessly integrate local and cloud-based inference resources becomes crucial for maintaining service quality while controlling operational costs.</t>

<t>The network should support efficient coordination between distributed computational nodes, ensuring stable performance during resource scaling operations and maintaining inference pipeline continuity despite variations in network conditions across different service providers.</t>
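<t>One possible admission policy for such a hybrid deployment is sketched below. This Python fragment is purely illustrative (the pool abstraction, names, and rejection behavior are hypothetical, not part of any DIN specification); it shows a dispatcher that prefers on-premises capacity, bursts only non-sensitive requests to a provider pool, and degrades gracefully when both pools are saturated:</t>

<sourcecode type="python"><![CDATA[
from dataclasses import dataclass

@dataclass
class InferencePool:
    """Hypothetical pool of inference capacity."""
    name: str
    capacity: int   # maximum concurrent sessions
    active: int = 0

    def has_room(self) -> bool:
        return self.active < self.capacity

def dispatch(sensitive: bool, local: InferencePool,
             cloud: InferencePool) -> str:
    """Prefer on-premises capacity; burst to the provider pool only for
    non-sensitive requests; reject when both pools are saturated."""
    if sensitive or local.has_room():
        pool = local        # sensitive data never leaves the enterprise
    elif cloud.has_room():
        pool = cloud        # elastic burst capacity
    else:
        return "rejected"   # graceful degradation under overload
    pool.active += 1
    return pool.name
]]></sourcecode>

<t>A real dispatcher would also account for network path characteristics and cost, but the ordering of preferences (local first, cloud burst second) follows directly from the data-sovereignty constraint described above.</t>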

</section>
<section anchor="dynamic-model-selection-and-coordination"><name>Dynamic Model Selection and Coordination</name>
<t>The transition from content access to model inference access necessitates intelligent model selection mechanisms that dynamically route requests to optimal computational resources. This use case addresses scenarios where applications should automatically select between different model sizes, specialized accelerators, and geographic locations based on real-time factors including network conditions, computational requirements, accuracy needs, and cost considerations.</t>

<t>The inference infrastructure should support real-time assessment of available resources, intelligent traffic steering based on application characteristics, and graceful degradation during resource constraints.</t>

<t>Key requirements include maintaining service continuity during model switching, optimizing the balance between response time and inference quality, and ensuring consistent user experience across varying operational conditions. This capability is particularly important for applications serving diverse user bases with fluctuating demand patterns and heterogeneous device capabilities.</t>
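<t>One minimal selection policy consistent with this use case is to filter candidates by a latency budget and an accuracy floor, choose the cheapest feasible option, and fall back to the fastest available model when nothing qualifies. The sketch below is illustrative only; the candidate tuples stand in for telemetry a real scheduler would obtain from the network and compute nodes:</t>

<sourcecode type="python"><![CDATA[
def select_model(candidates, latency_budget_ms, min_accuracy):
    """Pick the cheapest candidate meeting latency and accuracy needs.

    Each candidate is (name, est_latency_ms, accuracy, cost); all values
    are illustrative inputs, not a defined DIN data model."""
    feasible = [c for c in candidates
                if c[1] <= latency_budget_ms and c[2] >= min_accuracy]
    if not feasible:
        # Graceful degradation: fall back to the fastest model available.
        return min(candidates, key=lambda c: c[1])[0]
    return min(feasible, key=lambda c: c[3])[0]
]]></sourcecode>

<t>The fallback branch corresponds to the graceful-degradation behavior described above: when resource constraints make the preferred model infeasible, service continuity is favored over inference quality.</t>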

</section>
<section anchor="adaptive-inference-resource-scheduling-and-coordination"><name>Adaptive Inference Resource Scheduling and Coordination</name>
<t>The evolution from content access to model inference necessitates intelligent resource coordination across different computational paradigms. This use case addresses scenarios where inference workloads require adaptive resource allocation strategies to balance performance, cost, and efficiency across distributed environments.</t>

<t>Large-small model collaboration represents a key approach for balancing inference accuracy and response latency. In this pattern, large models handle complex reasoning tasks while small models provide efficient specialized processing, requiring the network to deliver low-latency connectivity and dynamic traffic steering between distributed model instances. The network should ensure efficient synchronization and coherent data exchange to maintain service quality across the collaborative ecosystem.</t>

<t>Prefill-decode separation architecture provides an optimized framework for streaming inference tasks. This pattern distributes computational stages across specialized nodes: the compute-intensive prefill phase and the memory-bandwidth-intensive decode phase each execute on resources optimized for their characteristics. The network should provide high-bandwidth connections for intermediate data transfer (such as KV-cache state) and reliable transport mechanisms to maintain processing pipeline continuity, enabling scalable handling of concurrent sessions while meeting real-time latency requirements.</t>
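<t>The two phases can be illustrated with a toy sketch (no real model is involved; the "kv_cache" field is merely a stand-in for the intermediate state whose transfer between prefill and decode nodes the network must support):</t>

<sourcecode type="python"><![CDATA[
def prefill(prompt_tokens):
    """Prefill phase: process the whole prompt in one pass and produce
    the intermediate state handed off to the decode node."""
    return {"kv_cache": list(prompt_tokens), "position": len(prompt_tokens)}

def decode(state, max_new_tokens):
    """Decode phase: generate tokens one at a time from the transferred
    state. Token choice here is a trivial placeholder, not a model."""
    out = []
    for _ in range(max_new_tokens):
        nxt = f"tok{state['position']}"
        state["kv_cache"].append(nxt)   # state grows with each token
        state["position"] += 1
        out.append(nxt)
    return out
]]></sourcecode>

<t>The handoff of the state dictionary between the two functions is the step that, in a distributed deployment, becomes an east-west network transfer with the bandwidth and reliability requirements noted above.</t>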

<t>The network infrastructure should support dynamic workload distribution, intelligent traffic steering, and efficient synchronization across distributed nodes. This comprehensive approach ensures optimal user experience while maximizing resource utilization efficiency across the inference ecosystem.</t>

</section>
<section anchor="privacy-preserving-split-inference"><name>Privacy-Preserving Split Inference</name>
<t>For applications requiring strict data privacy compliance, model partitioning techniques enable sensitive computational layers to execute on-premises while utilizing cloud resources for non-sensitive operations. This approach is particularly relevant for applications processing personal identifiable information, healthcare records, financial data, or proprietary business information subject to regulatory constraints.</t>

<t>The network should support efficient transmission of intermediate computational results between edge and cloud with predictable performance characteristics to maintain inference pipeline continuity. Challenges include maintaining inference quality despite network variations, managing computational dependencies across distributed nodes, and ensuring end-to-end security while maximizing resource utilization efficiency across the partitioned model architecture.</t>
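<t>Model partitioning can be sketched abstractly: the first k layers run on-premises, and only the intermediate activation, never the raw input, crosses the network to the remote layers. The layers below are toy functions standing in for a real model, and the split point is an assumed parameter:</t>

<sourcecode type="python"><![CDATA[
def run_split(layers, split_at, x):
    """Run the first `split_at` layers on premises and the remainder
    remotely; only the intermediate activation crosses the boundary."""
    h = x
    for f in layers[:split_at]:     # on-premises: raw input never leaves
        h = f(h)
    intermediate = h                # the value the network must carry
    for f in layers[split_at:]:     # remote: sees only the intermediate
        intermediate = f(intermediate)
    return intermediate
]]></sourcecode>

<t>Because composition is unchanged, the inference result is independent of where the split is placed; what the split point does change is which data is exposed to the network, which is exactly the privacy lever this use case exploits.</t>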

</section>
</section>
<section anchor="requirements"><name>Requirements</name>

<section anchor="scalability-and-elasticity-requirements"><name>Scalability and Elasticity Requirements</name>

<t>The Distributed Inference Network should support seamless scaling to accommodate billions of concurrent inference sessions while maintaining consistent performance levels. The network should provide mechanisms for dynamic discovery and integration of new inference nodes, with automatic load distribution across available resources. Elastic scaling should respond to diurnal patterns and sudden demand spikes without service disruption.</t>

</section>
<section anchor="performance-and-determinism-requirements"><name>Performance and Determinism Requirements</name>

<t>AI inference workloads require consistent and predictable network performance to ensure reliable service delivery. The network should provide strict Service Level Agreement (SLA) guarantees for latency, jitter, and packet loss to support various distributed inference scenarios. Bandwidth provisioning should accommodate bursty traffic patterns characteristic of model parameter exchanges and intermediate data synchronization, with performance isolation between different inference workloads.</t>

</section>
<section anchor="security-and-privacy-requirements"><name>Security and Privacy Requirements</name>

<t>Comprehensive security mechanisms should protect AI models, parameters, and data throughout their transmission across network links. Cryptographic protection should extend to physical layer transmissions without introducing significant overhead or latency degradation. Privacy-preserving techniques should prevent leakage of sensitive information through intermediate representations while supporting efficient distributed inference.</t>

</section>
<section anchor="identification-and-scheduling-requirements"><name>Identification and Scheduling Requirements</name>

<t>The network should support fine-grained identification of inference workloads to enable appropriate resource allocation and path selection. Application-aware networking capabilities should allow inference requests to be steered to optimal endpoints based on current load, network conditions, and computational requirements. Both centralized and distributed scheduling approaches should be supported to accommodate different deployment scenarios and organizational preferences.</t>

</section>
<section anchor="management-and-observability-requirements"><name>Management and Observability Requirements</name>

<t>The network should provide comprehensive telemetry for performance monitoring, fault detection, and capacity planning. Metrics should include inference-specific measurements such as token latency, throughput, and computational efficiency in addition to traditional network performance indicators. Management interfaces should support automated optimization and troubleshooting across the combined compute-network infrastructure.</t>
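<t>For example, token-oriented measurements such as time-to-first-token (TTFT) and inter-token latency can be derived from per-token timestamps. The helper below is an illustrative sketch of such a derivation, not a defined DIN interface:</t>

<sourcecode type="python"><![CDATA[
def token_metrics(request_ts, token_ts):
    """Derive token-oriented metrics from timestamps (in seconds):
    time-to-first-token, mean inter-token latency, and throughput."""
    if not token_ts:
        raise ValueError("no tokens observed")
    ttft = token_ts[0] - request_ts
    gaps = [b - a for a, b in zip(token_ts, token_ts[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    return {"ttft": ttft,
            "inter_token_latency": itl,
            "tokens_per_second": len(token_ts) / (token_ts[-1] - request_ts)}
]]></sourcecode>

<t>Such token-level measurements complement, rather than replace, the traditional bit-oriented indicators mentioned above.</t>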

</section>
</section>
<section anchor="security-considerations"><name>Security Considerations</name>

<t>This document highlights security as a fundamental requirement for DIN. The distributed nature of inference workloads creates new attack vectors including model extraction, data reconstruction from intermediate outputs, and adversarial manipulation of inference results. Security mechanisms should operate at multiple layers while maintaining the performance characteristics necessary for efficient inference. Physical layer encryption techniques show promise for protecting transmissions without the overhead of traditional cryptographic approaches.</t>

</section>
<section anchor="iana-considerations"><name>IANA Considerations</name>

<t>This document has no IANA actions.</t>

</section>


  </middle>

  <back>



    <references title='Normative References' anchor="sec-normative-references">



<reference anchor="RFC2119">
  <front>
    <title>Key words for use in RFCs to Indicate Requirement Levels</title>
    <author fullname="S. Bradner" initials="S." surname="Bradner"/>
    <date month="March" year="1997"/>
    <abstract>
      <t>In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="2119"/>
  <seriesInfo name="DOI" value="10.17487/RFC2119"/>
</reference>

<reference anchor="RFC8174">
  <front>
    <title>Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words</title>
    <author fullname="B. Leiba" initials="B." surname="Leiba"/>
    <date month="May" year="2017"/>
    <abstract>
      <t>RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.</t>
    </abstract>
  </front>
  <seriesInfo name="BCP" value="14"/>
  <seriesInfo name="RFC" value="8174"/>
  <seriesInfo name="DOI" value="10.17487/RFC8174"/>
</reference>




    </references>




<?line 177?>

<section numbered="false" anchor="acknowledgments"><name>Acknowledgments</name>

<t>The authors would like to thank the contributors from China Mobile Research Institute for their valuable inputs and discussions.</t>

</section>


  </back>


</rfc>

