Transmission Resource Management Parameter Description
Copyright Huawei Technologies Co., Ltd. 2009. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior
written consent of Huawei Technologies Co., Ltd.
Notice
The information in this document is subject to change without notice. Every effort has been made in
the preparation of this document to ensure accuracy of the contents, but all statements, information,
and recommendations in this document do not constitute a warranty of any kind, express or implied.
Contents
1 Change History
2 Introduction
3 TRM Algorithm Overview
3.1 Contents of TRM Algorithms
3.2 Requirements of TRM Algorithms
3.2.1 Networking Requirement
3.2.2 QoS Requirement
3.2.3 Capacity Requirement
3.2.4 Differentiated Service Requirement
Change History
The change history provides information on the changes in different document versions.
Document Version | RAN Version
01 (2009-03-30) | 11.0
Draft (2009-03-10) | 11.0
Draft (2009-01-15) | 11.0
01 (2009-03-30)
This is the document for the first commercial release of RAN11.0.
Compared with draft (2009-03-10) of RAN11.0, this issue incorporates the following changes:
Change Type | Change Description | Parameter Change
Feature change | None. | None.
Editorial change | None. | None.
Draft (2009-03-10)
This is the second draft of the document for RAN11.0.
Compared with draft (2009-01-15), draft (2009-03-10) optimizes the description.
Draft (2009-01-15)
This is the initial draft of the document for RAN11.0.
Compared with 02 (2008-07-30) of RAN10.0, draft (2009-01-15) incorporates the following
changes:
Change Type | Change Description | Parameter Change
Feature change | None. | None.
Editorial change | None. | None.
Introduction
Transmission Resource Management (TRM) is aimed at increasing the system capacity in
various networking scenarios without affecting the Quality of Service (QoS). In addition, TRM
provides differentiated services for Best Effort (BE) services to improve the data transmission
efficiency.
TRM involves management of the transmission resources on the Iub, Iur, and Iu interfaces.
Transmission resources are one type of resource that the UTRAN provides. Closely related to
TRM algorithms are Radio Resource Management (RRM) algorithms, such as the scheduling
algorithm and load control algorithm for the Uu interface. The TRM algorithm policies should be
consistent with the RRM algorithm policies.
Compared with transmission on the other interfaces, transmission on the Iub interface incurs
higher costs, involves more complex networking modes, and has a greater impact on system
performance. Therefore, this document describes only the TRM algorithms for the Iub interface.
Intended Audience
This document is intended for:
System operators who need a general understanding of transmission resource management.
Personnel working on Huawei products or systems.
Impact
Impact on system performance
None.
Impact on other features
None.
[Figure: network elements related to this feature (UE, NodeB, RNC, MSC Server, MGW, SGSN, GGSN,
HLR), with each element marked as involved or not involved]
UE = User Equipment, RNC = Radio Network Controller, MSC Server = Mobile Service Switching Center
Server, MGW = Media Gateway, SGSN = Serving GPRS Support Node, GGSN = Gateway GPRS Support
Node, HLR = Home Location Register
[Figure: Iub networking scenarios]
NB = NodeB, BW = bandwidth
Bandwidth being variable: The bandwidth on the transport network might be variable. For
example, the bandwidth of Asymmetric Digital Subscriber Line (ADSL) transmission is variable.
In this case, the TRM algorithms need to be able to detect the available bandwidth.
ATM&IP dual stack: ATM and IP transmission resources are available on one Iub interface at the
same time, thereby reducing the transmission cost.
Hybrid IP: High-QoS transmission (such as IP over E1) and low-QoS transmission (such as IP over
FE) are applicable to one Iub interface at the same time, enabling differentiated management
of services.
RAN sharing: Operators share the physical bandwidth. In this case, some bandwidth should be
reserved for each operator.
Table 3-1 lists the types of transport applicable to each interface.
Table 3-1 Types of transport applicable to each interface
[Table: applicability of ATM, IP, and hybrid IP transport to the Iub, Iur, Iu-CS, and Iu-PS
interfaces; the applicability marks were lost in extraction.]
QoS Requirement
The WCDMA system supports the following types of service:
Signaling, such as SRB, SIP, NCP, and CCP
Real-time (RT) service, such as conversational and streaming
Capacity Requirement
The capacity requirements are as follows:
With the QoS guaranteed, the network should allow access of users to the maximum extent. This
is mainly implemented by the load control algorithm.
When data needs to be transferred for NRT services with innate bursty characteristic, the
bandwidth should be fully utilized to ensure a high throughput and prevent congestion. This is
mainly implemented by the user plane congestion control algorithm.
Differentiated Service Requirement
Differentiated service requirement for the GBR of NRT services: For NRT services, the GBR is
configurable by running the SET USERGBR command according to the traffic class, user
priority, and bearer type (that is, DCH or HSPA) of the services.
Differentiated service requirement for the allocation of bandwidth for NRT services: The activity of
NRT services does not follow any obvious rule. When the demand from NRT services for the
transmission bandwidth exceeds the total available Iub bandwidth, the bandwidth needs to be
allocated to the services in a certain way. For High Speed Packet Access (HSPA) services,
when Uu resources are insufficient, the Uu resources are allocated to NRT services according to
the Scheduling Priority Indicator (SPI) weight. Accordingly, in the case of Iub transmission
resource shortage, the Iub transmission resources also need to be allocated to the NRT
services according to the SPI. For details, see section 7.3 "Congestion Control of Iub User
Plane."
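The SPI-weighted sharing of scarce Iub bandwidth described above can be sketched as follows. This is an illustrative model only, not Huawei's actual algorithm: the function name and numbers are invented, and leftover bandwidth from services that need less than their weighted share is not redistributed here.

```python
# Sketch: sharing scarce Iub bandwidth among NRT services in proportion
# to their SPI weights. Illustrative only; redistribution of unused
# shares is deliberately omitted to keep the idea visible.

def allocate_by_spi(total_kbps, demands):
    """demands: list of (spi_weight, requested_kbps).
    Returns the kbit/s granted to each service."""
    total_weight = sum(weight for weight, _ in demands)
    grants = []
    for weight, requested in demands:
        share = total_kbps * weight / total_weight  # weight-proportional share
        grants.append(min(requested, share))        # never grant more than asked
    return grants

# Three NRT flows compete for 1000 kbit/s of Iub bandwidth.
grants = allocate_by_spi(1000, [(4, 600), (2, 600), (2, 100)])
# grants == [500.0, 250.0, 100]
```

The high-SPI flow gets half of the bandwidth because it holds half of the total weight; the third flow is capped at what it actually requested.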
Transmission Resources
Transmission Resource Introduction
Transmission resources consist of ATM transmission resources and IP transmission resources.
ATM transmission resources are as follows:
Physical transmission resources: E1/T1, channelized STM-1, unchannelized STM-1, ATM
physical port (IMA, UNI, and fractional ATM)
Logical Port (LP) resources: ATM hub LP and ATM leaf LP
Path resources: AAL2 path, SAAL link, and IPoA PVC
Figure 4-1 shows the relation between the ATM transmission resources.
Figure 4-1 Relation between the ATM transmission resources
Board | Transmission Mode | VPI/VCI Range | Type of Service at the ATM Layer
AEUa | UNI, IMA, Fractional ATM, Fractional IMA, LP | VPI: 0 to 255; VCI: 32 to 65535 | CBR, RT-VBR, NRT-VBR, UBR, UBR+
— | UNI, IMA, LP | VPI: 0 to 255; VCI: 32 to 65535 | CBR, RT-VBR, NRT-VBR, UBR, UBR+
UOIa | NCOPT | VPI: 0 to 255; VCI: 32 to 65535 | CBR, RT-VBR, NRT-VBR, UBR, UBR+
The UOIa is applicable to the Iu-CS, Iu-PS, Iu-BC, Iur, and Iub interfaces.
Board | Description | Transmission Mode
PEUa | The PEUa refers to the RNC 32-port packet over E1/T1 interface unit (REV: a). | PPP, MCPPP, MLPPP
FG2a | The FG2a is applicable to the IP-based Iub, Iur, Iu-CS, and Iu-PS interfaces. | IP over Ethernet
GOUa | The GOUa is applicable to the IP-based Iub, Iur, Iu-CS, and Iu-PS interfaces. | IP over Ethernet
UOIa | — | PPP
POUa | — | PPP, MLPPP
LP Resources
LP Introduction
After the physical transmission resources and path resources are configured, the system can
start to operate and services can be established. There are problems, however, in the following
scenarios:
Transmission convergence
Transmission convergence can be performed either on the transport network (for example,
convergence of NB1 and NB2, as shown in Figure 4-3) or at the hub NodeB (for example,
convergence of NB3 and NB4 at NB1, as shown in Figure 4-3). If only physical transmission
resources and path resources are configured, the bandwidth constraints at the convergence
points are unavailable. As shown in Figure 4-3, the total available bandwidth BW0 is known, but
the values of BW1 through BW4 are unknown. Thus, the admission algorithm does not work
properly. For example, if the total reserved bandwidth at NB2 exceeds BW2, the total volume of
data sent to NB2 in the downlink may exceed BW2, causing congestion and packet loss.
Figure 4-3 Iub transmission convergence
RAN sharing
Operators share the bandwidth at one NodeB. In this case, the bandwidth needs to be
configured for each operator so that the bandwidth used by each operator does not exceed
their respective reserved bandwidth. If only physical transmission resources and path
resources are configured, such a requirement fails to be fulfilled.
To solve the preceding problems, the Logical Port (LP) concept is introduced to the TRM feature.
LPs are used for bandwidth configuration at transport nodes and for bandwidth admission and
traffic shaping, so as to prevent congestion.
An LP describes the bandwidth constraints between paths or between other LPs.
An LP can consist of only paths. Such an LP is called a leaf LP. A physical port can serve as a
leaf LP.
An LP can also consist of only other LPs. Such an LP is called a hub LP. A physical port can
serve as a hub LP.
One key characteristic of LPs is the bandwidth. For an LP, the uplink bandwidth can be different
from the downlink bandwidth.
LPs at the RNC can be classified into the following types:
ATM LP: used for bandwidth admission and traffic shaping. Multiple levels of ATM LPs are
supported.
IP LP: used for bandwidth admission and traffic shaping. Only one level of IP LP is supported.
Transmission resource group: used for admission only and applicable to ATM and IP transport.
Multiple levels of transmission resource groups are supported.
On the RNC side, LPs cannot contain transmission resource groups, and transmission resource
groups cannot contain LPs either.
LPs need to be configured on both the RNC and NodeB sides.
LPs are configured on the RNC side for the following purposes:
Admission control in convergence or RAN sharing scenario
Traffic shaping in the downlink
LPs are configured on the NodeB side for the following purposes:
Fairness between local data and forwarded data in convergence scenario
Traffic shaping in RAN sharing scenario
[Figure: example of LP configuration at a hub NodeB]
NB = NodeB, BW = bandwidth
The leaf LPs, that is, LP1, LP2, LP3, and LP4, have a one-to-one relation with the NodeBs. The
bandwidth of each leaf LP is equal to the Iub bandwidth of each corresponding NodeB.
The hub LP, that is, LP125, corresponds to the hub NodeB, and the LPs connected to the hub LP
correspond to the NodeBs on the network. The bandwidth of the hub LP is equal to the Iub
bandwidth of the hub NodeB.
The actual rate at a leaf LP is limited by the bandwidth of the leaf LP and the scheduling rate at
the hub LP and physical port.
In the Call Admission Control (CAC) algorithm, the reserved bandwidth of a leaf LP is limited by
not only the bandwidth of the leaf LP but also the bandwidth of the hub LP and the bandwidth of
the physical port. That is, the total reserved bandwidth of all the LPs under a hub LP cannot
exceed the bandwidth of the hub LP.
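The level-by-level CAC constraint described above can be sketched as follows, assuming a simple tree of leaf LPs under a hub LP under a physical port. The class and attribute names are illustrative, not Huawei's implementation.

```python
# Sketch of the CAC constraint described above: a reservation on a leaf LP
# must fit within the leaf LP, its hub LP, and the physical port.

class Lp:
    def __init__(self, bw_kbps, parent=None):
        self.bw_kbps = bw_kbps   # configured bandwidth of this node
        self.reserved = 0        # sum of admitted reservations (load)
        self.parent = parent     # hub LP or physical port, if any

    def chain(self):
        node = self
        while node:
            yield node
            node = node.parent

    def admit(self, kbps):
        # check every level before reserving at any level
        if any(n.reserved + kbps > n.bw_kbps for n in self.chain()):
            return False
        for n in self.chain():
            n.reserved += kbps
        return True

port = Lp(10000)                 # physical port
hub = Lp(6000, parent=port)      # hub LP (Iub bandwidth of the hub NodeB)
leaf1 = Lp(4000, parent=hub)
leaf2 = Lp(4000, parent=hub)

leaf1.admit(3500)   # admitted: fits leaf1, hub, and port
leaf2.admit(3000)   # rejected: fits leaf2, but hub would exceed 6000
```

The second request fails even though leaf2 itself has room, which is exactly the hub-LP constraint stated in the text.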
In RAN sharing scenario, an LP needs to be configured for each operator that uses the NodeB.
Table 4-3 describes the ATM LP capabilities of interface boards at the RNC.
Table 4-3 ATM LP capabilities of interface boards at the RNC
Board | Number of LPs | Level of LPs
AEUa | Five | Five
UOIa_ATM | Five | —
IP LP at the RNC
IP LPs have the functions of IP traffic shaping and bandwidth admission. They are configured on
IP interface boards by running the ADD IPLOGICPORT command. These LPs have the following
attributes:
Bandwidth: The downlink bandwidth is used for traffic shaping and bandwidth admission, and the
uplink bandwidth is used for bandwidth admission only.
Resource management mode, that is, SHARE or EXCLUSIVE: indicates whether operators in
RAN sharing scenario share the Iub transmission resources.
When the ADD IPPATH or ADD SCTPLNK command is executed to add an IP path or an SCTP
link respectively, the path or link can be set to join an LP.
IP LPs are similar to ATM LPs in terms of principles and application. The current version of RAN
supports only one level of IP LP.
Table 4-4 describes the IP LP capabilities of interface boards at the RNC.
Table 4-4 IP LP capabilities of interface boards at the RNC
Board | Number of LPs | Level of Shaping
PEUa | None | —
FG2a | 0 to 119 | —
GOUa | 0 to 119 | —
UOIa | 0 to 119 | —
POUa | None | —
IP LP at the NodeB
IP LPs at the NodeB have the function of IP traffic shaping. To configure an IP LP, run the ADD
RSCGRP command to add an IP resource group to the interface board at the NodeB. The LP has
attributes such as the TX bandwidth, RX bandwidth, bearing port type, and bearing port number.
The TX bandwidth is used for traffic shaping, and the RX bandwidth is used to calculate the
remaining bandwidth for backpressure. Then, when the ADD IPPATH command is executed to
add an IP path, that is, a path carrying the data traffic of the local NodeB, the path can be set to
join an LP; when the ADD IP2RSCGRP command is executed, the signaling traffic and the
forwarded data traffic can be set to join an LP.
IP LPs at the NodeB are mainly used to differentiate operators in RAN sharing scenario.
Each interface board of the NodeB supports a maximum of four IP LPs.
Path Resources
Path resources involve those on the control plane, user plane, and management plane. The paths
on the user plane, that is, AAL2 paths for ATM transport and IP paths for IP transport, are key
resources. The allocation and management of transmission resources are based on paths.
AAL2 Path
In ATM transport mode, the following types of AAL2 path can be configured:
CBR
RT-VBR
NRT-VBR
UBR
UBR+
When an AAL2 path is configured, the TXTRFX and RXTRFX parameters need to be set. They
determine the type of path. The traffic record indexes are configured by running the ADD
ATMTRF command.
IP Path
IP paths can be categorized into the following classes:
High-quality class
Low-quality class
The low-quality class, denoted LQ_xx, is applicable to only hybrid IP transport.
IP paths can be further classified into QoS path and non-QoS path.
The Per Hop Behavior (PHB) of QoS paths is determined by the TRM mapping configuration.
The PHB of non-QoS paths is determined by the type of path.
Table 4-5 lists the types of IP path.
Table 4-5 Types of IP path
Type | High-Quality Class | Low-Quality Class
QoS path | QoS | LQ_QoS
Non-QoS path | BE | LQ_BE
 | AF11 | LQ_AF11
 | AF12 | LQ_AF12
 | AF13 | LQ_AF13
 | AF21 | LQ_AF21
 | AF22 | LQ_AF22
 | AF23 | LQ_AF23
 | AF31 | LQ_AF31
 | AF32 | LQ_AF32
 | AF33 | LQ_AF33
 | AF41 | LQ_AF41
 | AF42 | LQ_AF42
 | AF43 | LQ_AF43
 | EF | LQ_EF
NOTE
On the Iu-PS interface, even if IPoA transport is used, IP paths still need to be configured.
HSDPA and HSUPA services can be carried on the same IP path, with HSDPA services in the downlink and
HSUPA services in the uplink.
Priorities
At each ATM port (such as an IMA, UNI, or fractional ATM port) or leaf LP of the RNC, there are five
types of service, as shown in Figure 4-5. The scheduling order is as follows: CBR > RT-VBR > MCR of
UBR+ > NRT-VBR > UBR > UBR+.
Figure 4-5 Priorities at each ATM port of the RNC
At each IP port (such as a PPP/MLPPP port) or LP of the RNC, there are six queues, as shown in
Figure 4-6. The default scheduling order is as follows: Queue1 > Queue2 > WRR (Queue3,
Queue4, Queue5, Queue6), where WRR refers to Weighted Round Robin.
At each ATM port (such as an IMA, UNI, or fractional ATM port) or LP of the NodeB, there are four
types of service, as shown in Figure 4-7. The scheduling order is as follows: CBR or MCR of UBR+ >
RT-VBR > NRT-VBR > UBR or UBR+.
Figure 4-7 Priorities at each ATM port of the NodeB
At each IP port (such as an Ethernet port or a PPP/MLPPP port) or LP of the NodeB, there are six
queues, as shown in Figure 4-8. The default scheduling order is as follows: Queue1 > WFQ
(Queue2, Queue3, Queue4, Queue5, Queue6), where WFQ refers to Weighted Fair Queuing.
Figure 4-8 Priorities at each IP port of the NodeB
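The default IP scheduling orders above combine one strict-priority queue with weighted sharing among the remaining queues. A minimal sketch, using WRR as a simple stand-in for WFQ (queue names, weights, and packet labels are illustrative):

```python
from collections import deque

# Sketch: one strict-priority queue ("Q1") served first, then a weighted
# round-robin pass over the remaining queues. WRR approximates the WFQ
# behavior described in the text; it is not the NodeB's actual scheduler.

def serve(queues, weights, rounds):
    """queues: dict name -> deque of packets; weights: dict of WRR weights
    for the non-strict queues."""
    sent = []
    for _ in range(rounds):
        if queues["Q1"]:                       # strict-priority queue first
            sent.append(queues["Q1"].popleft())
            continue
        for name, weight in weights.items():   # one WRR pass over the rest
            for _ in range(weight):
                if queues[name]:
                    sent.append(queues[name].popleft())
    return sent

queues = {"Q1": deque(["a"]), "Q2": deque(["b1", "b2"]), "Q3": deque(["c1"])}
serve(queues, {"Q2": 2, "Q3": 1}, rounds=2)   # ['a', 'b1', 'b2', 'c1']
```

As long as Q1 holds packets nothing else is served; once it drains, Q2 gets twice the service of Q3 per round, which is the weighted-sharing idea behind both WRR and WFQ.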
TRM Mapping
The transport network can provide differentiated QoS services, and the QoS requirements of
traffic vary according to the traffic types. TRMMAP refers to the mapping from traffic bearers to
transport bearers.
The RNC supports configuration of mapping to transport bearers according to the characteristics
of traffic.
Figure 5-1 shows the TRM mapping.
The RNC provides the following traffic classes that can be used in TRMMAP configuration:
Common channel
SRB
SIP
AMR speech
CS conversational
CS streaming
PS conversational
PS streaming
PS interactive
PS background
Type of radio bearer: R99, HSDPA, and HSUPA. R99 bearers have certain requirements for the
delay because of the time window mechanism. HSPA bearers, however, have relatively low
requirements for the delay because of the absence of the time window mechanism on the Iub
interface.
ARP: Even for traffic of the same type, the QoS requirements of different users vary. Thus, high-priority services may require high-QoS transport bearers at the transport layer.
THP: For interactive services, such as PS interactive services, THP parameters are available.
There are three classes of THP: high, medium, and low.
In summary, the inputs to TRMMAP are the traffic class, type of radio bearer, user priority and
ARP, and THP. That is, each combination of these inputs corresponds to one priority of transport
bearer.
Transport Bearer
Type of Path
Path types are defined to mask the differences between types of interface boards and between
traffic queues at the physical layer. The transport bearer service refers to the service of
transmitting traffic over paths of specific types. For path types, see section 4.4 "Path
Resources."
The DSCP mechanism employed at the RNC is as follows: The traffic carried on QoS paths uses
the DSCPs mapped from services, whereas the traffic carried on non-QoS paths uses the DSCPs
corresponding to the type of IP path, that is, PHB. The mapping from PHB to DSCP can be set by
running the SET PHBMAP command.
Value range of DSCP: 0 to 63. Each DSCP corresponds to a PHB attribute.
Value range of PHB: BE, AF11, AF12, AF13, AF21, AF22, AF23, AF31, AF32, AF33, AF41, AF42,
AF43, and EF, in ascending order of priority.
QoS paths are recommended because of their simple configuration and better support for
multiplexing, QoS guarantee, and service differentiation.
[Table: applicability of ATM, IP, ATM&IP, and hybrid IP transport to the Iub, Iur, Iu-CS, and
Iu-PS interfaces; the cell contents were lost in extraction.]
NOTE
The RNC-oriented default TRM mapping is not specific to operators or user priorities. If no adjacent-node-oriented mapping is configured, the RNC-oriented default TRM mapping applies.
PHB | DSCP (Binary) | DSCP (Decimal)
EF | 101110 | 46
AF43 | 100110 | 38
AF42 | 100100 | 36
AF41 | 100010 | 34
AF33 | 011110 | 30
AF32 | 011100 | 28
AF31 | 011010 | 26
AF23 | 010110 | 22
AF22 | 010100 | 20
AF21 | 010010 | 18
AF13 | 001110 | 14
AF12 | 001100 | 12
AF11 | 001010 | 10
If the mapping from PHB to DSCP is not configured by running the SET PHBMAP command, the
default mapping applies.
If the traffic is carried on a non-QoS IP path, the DSCP corresponding to the path type is used.
If the traffic is carried on a QoS IP path, the DSCP is determined by the mapping (that is, the
PHBMAP) from the PHB, and the PHB is in turn determined by the mapping (that is, the TRMMAP)
from traffic classes to QoS paths. Thus, the user needs to configure only one QoS path to
obtain diversified mappings from different traffic classes and user priorities to different
DSCPs.
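The DSCP selection rule above can be sketched as follows. PHB_TO_DSCP reproduces the default PHB-to-DSCP table above (BE is assumed to map to 0, which the extracted table does not show); the trmmap dictionary stands in for the TRMMAP configuration and is purely illustrative.

```python
# Sketch of the DSCP selection rule: non-QoS paths use the DSCP of the
# path type itself; QoS paths go traffic class -> PHB (TRMMAP) -> DSCP
# (PHBMAP, defaults below, overridable with SET PHBMAP).

PHB_TO_DSCP = {
    "EF": 46, "AF43": 38, "AF42": 36, "AF41": 34,
    "AF33": 30, "AF32": 28, "AF31": 26,
    "AF23": 22, "AF22": 20, "AF21": 18,
    "AF13": 14, "AF12": 12, "AF11": 10,
    "BE": 0,  # assumed default; not shown in the table above
}

def dscp_for(traffic_class, path_type, trmmap):
    if path_type != "QoS":            # non-QoS path: DSCP follows the
        return PHB_TO_DSCP[path_type]  # type of the path itself
    phb = trmmap[traffic_class]        # QoS path: TRMMAP gives the PHB
    return PHB_TO_DSCP[phb]

trmmap = {"AMR speech": "EF", "PS interactive": "AF21"}
dscp_for("AMR speech", "QoS", trmmap)      # 46
dscp_for("PS background", "AF11", trmmap)  # 10
```

This mirrors the point made in the text: with one QoS path, different traffic classes already reach different DSCPs through TRMMAP, whereas each non-QoS path pins all of its traffic to one DSCP.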
Adjacent-Node-Oriented Mapping
To provide better differentiated services, the RNC supports configuration of TRMMAP for adjacent
nodes and even for a specific operator and a specific user priority at a specific adjacent node.
This helps achieve flexible configuration of mapping from traffic bearers to transport bearers.
To configure the mapping for an adjacent node, perform the following steps:
Step 1 Run the ADD TRMMAP command to specify the mapping from the traffic classes of a
specific interface type and transport type to the transport bearers.
Step 2 Run the ADD ADJMAP command to reference the configured TRMMAP tables for the
adjacent node. In this step, the TRMMAP tables need to be individually specified for Gold, Silver,
and Copper users.
NOTE
In RAN sharing scenario, if the resource management mode is set to EXCLUSIVE, the operator index needs to
be set so as to specify the TRMMAP for the users of that operator at the adjacent node.
The related commands are ADD TRMMAP, MOD TRMMAP, ADD ADJMAP, and MOD ADJMAP.
----End
Load Control
The load control algorithm allocates transmission resources to services, manages the
transmission bandwidth, and controls the transmission load for the purpose of allowing access of
users to the maximum extent without affecting the QoS.
Definition of Load
The load control algorithm is implemented at the RNC, and therefore, the load is defined and
measured at the RNC. The definition of load is based on the reserved bandwidth. The load
control algorithm reserves bandwidth for each service. The load refers to the sum of bandwidth
reserved for all services. The uplink load and downlink load are calculated separately.
The load of each path and that of each LP (including leaf LP and hub LP) need to be calculated.
The load definitions are as follows:
Load of a path: sum of bandwidth reserved for all services on the path
Load of a leaf LP: total load of all paths carried on the LP
Load of a hub LP: total load of all LPs under the hub LP
If congestion occurs, user experience deteriorates and the Iub bandwidth usage decreases. To
solve the possible congestion problem, the Iub interface requires the related congestion control
algorithm. For details, see section 7.3 "Congestion Control of Iub User Plane."
The following bandwidth reservation policies apply:
RT services, including conversational and streaming services, are admitted at the Maximum Bit
Rate (MBR).
The bandwidth for RT services must be guaranteed. RT services do not allow packet loss or
large-volume data buffering.
The activity of RT services follows an obvious rule. When multiple services access the network,
the total actual traffic volume is relatively stable. The appropriate setting of activity factors can
help achieve correct admission of the services.
RT services should be admitted on the basis of the average actual traffic volume, so that the
number of users allowed to access the network can be increased to the maximum extent under
the condition that the QoS is guaranteed.
Reserved bandwidth for admission of an RT service = MBR x Activity factor, where the activity
factor needs to be set for each type of service.
NRT services, including interactive and background services, are admitted at the GBR.
NRT services do not have strict requirements for bandwidth guarantee. When resources are
insufficient, the traffic throughput can be lowered at the application layer through data buffering,
to which the application layer can be adaptive.
The activity of NRT services does not follow any obvious rule. When multiple services access
the network, the total actual traffic volume fluctuates greatly. Therefore, it is difficult to estimate
the exact bandwidth used by NRT services.
If a large number of users access the network, the bandwidth efficiency is improved to a certain
extent, but congestion and packet loss occur. If a small number of users access the network, the
bandwidth efficiency is low.
If no appropriate user plane congestion control algorithm is available for preventing congestion
and packet loss, the services should be admitted at the MBR multiplied by the activity factor.
The MBR, however, needs to be adjusted frequently in the interests of high bandwidth efficiency
and a large number of users accessing the network. Thus, a complicated user plane load
algorithm is required.
Huawei has developed a complete user plane congestion control algorithm, in which the only
condition of transmission admission is to provide GBR guarantee for users. The principle is to
allow access of users to the maximum extent under the condition that the GBR is guaranteed.
That is, the admission algorithm can reserve the bandwidth for users based on the GBR.
In terms of 3G signaling, SRB services can be admitted at either the GBR or 3.4 Kbit/s.
Admission at 3.4 Kbit/s: The bandwidth is fixed at 3.4 Kbit/s. This admission mode is applicable
to R99, HSDPA, and HSUPA services.
Admission at the GBR: For R99 services, if the bandwidth of a transport channel varies
between 3.4 Kbit/s and 13.6 Kbit/s, resource allocation and resource admission do not need to
be performed again.
In terms of common channels, EFACH services are admitted at the GBR, and other common
channel services are admitted at the MBR.
Because of the discontinuity of traffic, there are active periods, during which data is transmitted,
and inactive periods, during which data is not transmitted. Activity factors are used by the
admission control to achieve better utilization of transmission resources.
Activity factors are applicable to the Iub, Iur, Iu-CS, and Iu-PS interfaces. The number of users
that can access the network is related to the activity factors.
For common channels or SRBs, the activity factors are identical for all users, instead of varying
according to user priorities.
Activity factors can be configured for different types of service by running the ADD TRMFACTOR
command. Table 6-1 lists the default settings of activity factors for different types of service.
Table 6-1 Default settings of activity factors for different types of service
Type of Service | UL/DL | Activity Factor (%)
— | DL | 70
— | UL | 70
IMS SRB | DL | 15
IMS SRB | UL | 15
— | DL | 100
SRB | DL | 15
SRB | UL | 15
AMR voice | DL | 70
AMR voice | UL | 70
R99 CS conversational | DL | 100
R99 CS conversational | UL | 100
R99 CS streaming | DL | 100
R99 CS streaming | UL | 100
R99 PS conversational | DL | 70
R99 PS conversational | UL | 70
R99 PS streaming | DL | 100
R99 PS streaming | UL | 100
R99 PS interactive | DL | 100
R99 PS interactive | UL | 100
R99 PS background | DL | 100
R99 PS background | UL | 100
HSDPA SRB | DL | 50
— | DL | 15
HSDPA voice | DL | 70
HSDPA conversational | DL | 70
HSDPA streaming | DL | 100
HSDPA interactive | DL | 100
HSDPA background | DL | 100
HSUPA SRB | UL | 50
— | UL | 15
HSUPA voice | UL | 70
HSUPA conversational | UL | 70
HSUPA streaming | UL | 100
HSUPA interactive | UL | 100
HSUPA background | UL | 100
EFACH channel | DL | 20
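As a worked example of the reservation rule above (reserved bandwidth = MBR x activity factor) using the Table 6-1 defaults; the function name is illustrative:

```python
# The reservation rule in code form: an RT service is admitted at
# MBR x activity factor. The 70% factor below is the Table 6-1 default
# for AMR voice; the function name is invented for this sketch.

def reserved_kbps(mbr_kbps, activity_factor_percent):
    return mbr_kbps * activity_factor_percent / 100

reserved_kbps(12.2, 70)   # an AMR 12.2 kbit/s call reserves about 8.5 kbit/s
reserved_kbps(64, 100)    # a 100% factor reserves the full MBR
```

A lower activity factor therefore directly raises the number of calls that fit into a given Iub bandwidth, which is the trade-off the admission control is making.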
When the adjacent-node-oriented mapping is added or modified by running the ADD ADJMAP or
MOD ADJMAP command respectively, the activity factor table to be referenced can be specified
by the FTI parameter.
For BE services, the GBR can be set by running the SET USERGBR command. The associated
parameters are as follows:
TrafficClass
THPClass
BearType
UserPriority
UlGBR
DlGBR
Admission Control
Admission control is used to determine whether the system resources are sufficient for the
network to accept the access request of a new user. If the system resources are sufficient, the
access request is accepted; otherwise, the request is rejected.
Admission to a path:
Load of the path + Bandwidth required by the user < Total configured bandwidth of the path -
Congestion threshold
Admission to an LP (admission to LPs is performed level by level; the following requirement
applies to each level of LP):
Load of the LP + Bandwidth required by the user < Total bandwidth of the LP - Congestion
threshold
NOTE
For a path that belongs to a path group, admission control must be performed at both the path level and the path
group level.
For an IMA group or MLPPP group, the RNC automatically adjusts the maximum bandwidth available to the whole
group and uses the new admission threshold if the bandwidth of an IMA link or MLPPP link changes.
Load Balancing
In the admission control mechanism, load balancing is an algorithm used to achieve the load
balance between primary and secondary paths. A service is not always preferably admitted to the
primary path. If the load of the primary path exceeds its load threshold and the ratio of primary
path load to secondary path load is higher than the load ratio threshold, then the service is
preferably admitted to the secondary path, so as to improve the resource usage and user
experience.
The load of a path is calculated as follows:
PathLoad = PortUsed / PortAvailable x 100%
where:
PathLoad refers to the load of the path.
PortUsed refers to the total bandwidth of the admitted services at the physical port.
PortAvailable refers to the total available bandwidth at the physical port, including the used
bandwidth.
When the primary path for a type of service exists at more than one physical port, PortUsed and
PortAvailable refer to the sum of used bandwidth and the sum of available bandwidth at these
ports respectively.
Load balancing tables can be configured by running the ADD LOADEQ command. Each table
contains primary path load thresholds and primary-to-secondary path load ratio thresholds. The
combination of a primary path load threshold and a path load ratio threshold can vary depending
on the traffic type. In addition, the ARP needs to be taken into consideration. After the load
balancing tables are configured, they can be referenced when load balancing parameters need to
be set for ATM&IP- or hybrid-IP-based Iub adjacent nodes by running the ADD ADJMAP or MOD
ADJMAP command.
The load balancing application policy is similar to the TRMMAP policy. If the reference for load
balancing tables is not set for the adjacent node, the default load balancing table applies. The
table with the index 0 is the default one. It can only be queried by running the LST LOADEQ
command.
Table 6-2 lists the default settings of load and load ratio thresholds for different types of service.
Table 6-2 Default settings of load and load ratio thresholds for different types of service
[Table: per-service defaults for the primary path load threshold and the primary-to-secondary
path load ratio threshold; only the values 100 and 30 survive in this copy, and the per-service
row labels were lost in extraction.]
Admission Procedure
Primary and secondary paths are used in admission control. According to the mapping from traffic
types to transmission resources, the RNC calculates the load of the primary and secondary paths
and then determines whether to select the primary or secondary path as the preferred path for
admission based on the settings of the primary path load threshold and primary-to-secondary
path load ratio threshold. If the admission to the preferred path fails, then the admission to the
non-preferred path is performed. For details about the mapping from traffic types to transmission
resources, see chapter 5 "TRM Mapping."
For example, assume that secondary paths are available for new users, handover of users, and
rate upsizing of users and that the RNC selects primary paths as preferred paths for admission of
the new users and handover of users (the procedures of admission with secondary paths
preferred are the same). The following procedures describe the admission of these users on the
Iub interface respectively.
The admission procedure for a new user is as follows:
Step 1 The new user attempts to be admitted to available bandwidth 1 on the primary path, as
shown in Figure 6-1.
Step 2 If the user succeeds in applying for the resources on the primary path, the user is
admitted to the primary path.
Step 3 If the user fails to apply for the resources on the primary path, the user then attempts to
be admitted to available bandwidth 2 on the secondary path, as shown in Figure 6-1.
Step 4 If the user succeeds in applying for the resources on the secondary path, the user is
admitted to the secondary path. If the user fails, the bandwidth admission request of the user is
rejected.
----End
Figure 1-12 Admission procedure for a new user
Available bandwidth 1 = Total bandwidth of the primary path - Used bandwidth - Bandwidth reserved for handover
Available bandwidth 2 = Total bandwidth of the secondary path - Used bandwidth - Bandwidth reserved for handover
Available bandwidth 1 = Total bandwidth of the primary path - Used bandwidth - Bandwidth reserved against congestion
Available bandwidth 2 = Total bandwidth of the secondary path - Used bandwidth - Bandwidth reserved against congestion
NOTE
If no secondary paths are available for the users, the admission is performed only on the primary paths.
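The two-path admission decision described above can be sketched as follows. This is a simplified illustration, not the RNC implementation; the function names and the bandwidth figures are assumptions.

```python
def available_bw(total, used, reserved):
    """Available bandwidth = total - used - reserved (for handover or congestion)."""
    return total - used - reserved

def admit(required_bw, primary, secondary=None):
    """Try the preferred (primary) path first, then the non-preferred (secondary) path.

    Each path is a dict with 'total', 'used', and 'reserved' bandwidth (kbit/s).
    Returns the name of the admitting path, or None if the request is rejected.
    """
    for name, path in (("primary", primary), ("secondary", secondary)):
        if path is None:
            continue  # no secondary path: admission is performed on the primary only
        if available_bw(path["total"], path["used"], path["reserved"]) >= required_bw:
            path["used"] += required_bw
            return name
    return None

# A new user needing 2,000 kbit/s: the primary path is nearly full,
# so the user is admitted to the secondary path.
primary = {"total": 10_000, "used": 9_500, "reserved": 400}
secondary = {"total": 10_000, "used": 2_000, "reserved": 400}
print(admit(2_000, primary, secondary))  # secondary
```

The same function covers the primary-only case: when `secondary` is `None`, admission is attempted on the primary path only, matching the note above.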
FWDCONGCLRBW
BWDCONGCLRBW
These two parameters are used to determine whether the congestion is resolved.
Congestion detection can be triggered in any of the following conditions:
Bandwidth adjustment because of resource allocation, modification, or release
Change in the configured bandwidth or the congestion threshold
Fault in the physical link
Assume that the forward parameters of a port for congestion detection are defined as follows:
Configured bandwidth: AVE
Forward congestion threshold: CON
Forward congestion resolving threshold: CLEAR (Note that CLEAR is greater than CON.)
Used bandwidth: USED
Then, the mechanism of congestion detection for the port is as follows:
Congestion occurs on the port when CON + USED >= AVE.
Congestion disappears from the port when CLEAR + USED < AVE.
The congestion detection for a path or a resource group is similar to that for a port.
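The hysteresis mechanism above (congestion at CON + USED >= AVE, resolution at CLEAR + USED < AVE, with CLEAR > CON) can be sketched as follows; the function and the threshold values in the comments are illustrative only.

```python
def detect_congestion(used, ave, con, clear, congested):
    """Hysteresis congestion detection for a port.

    ave   : configured bandwidth (AVE)
    con   : congestion threshold (CON); congestion occurs when CON + USED >= AVE
    clear : congestion resolving threshold (CLEAR), CLEAR > CON;
            congestion resolves when CLEAR + USED < AVE
    Returns the new congestion state.
    """
    assert clear > con
    if not congested and con + used >= ave:
        return True   # congestion occurs
    if congested and clear + used < ave:
        return False  # congestion disappears
    return congested  # state unchanged inside the hysteresis band
```

Because CLEAR > CON, congestion sets in at USED >= AVE - CON but clears only once USED drops below AVE - CLEAR, which prevents the state from oscillating.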
Generally, congestion thresholds need to be set only for ports or resource groups. If different
types of AAL2 paths or IP paths require different congestion thresholds, the associated
parameters need to be set for the paths as required.
If ATM LPs or IP LPs are configured, congestion control is also applicable to the LPs. The
congestion detection mechanism for the LPs is the same as that for resource groups.
After the RNC receives a congestion message, the RNC triggers LDR actions. For details about
the LDR actions, see the Load Control Parameter Description.
After the RNC receives an overload message, the RNC triggers Overload Control (OLC) actions.
OLC triggers release of resources used by users in order of comprehensive priority.
Each LP performs the shaping function. The total data transmission rate does not exceed the
bandwidth configured for the LP.
The scheduling function is described as follows:
Scheduling in ATM transport mode: When there are multiple LPs or the hub NodeB needs to
transmit the uplink data of the lower-level NodeB, the physical port performs scheduling of all
the PVCs. The PVCs with high priority are dispatched preferentially. The PVCs with the same
priority are dispatched on the basis of the services carried on the PVCs.
Scheduling in IP transport mode: When there are multiple LPs, the IP physical port performs
Round Robin (RR) scheduling of all the LPs to guarantee fairness between the LPs.
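The Round Robin scheduling of LPs in IP transport mode can be sketched as follows; this is a minimal illustration of the fairness property, not the actual port scheduler.

```python
from collections import deque
from itertools import cycle

def rr_schedule(queues, n):
    """Dispatch up to n packets from the LP queues in Round Robin order,
    guaranteeing fairness between the LPs: each non-empty queue is served in turn.
    """
    order = cycle(range(len(queues)))
    dispatched = []
    while len(dispatched) < n and any(queues):
        i = next(order)
        if queues[i]:                      # skip LPs with nothing to send
            dispatched.append(queues[i].popleft())
    return dispatched

# Two LPs with unequal backlogs still alternate while both have packets.
lps = [deque(["a1", "a2", "a3"]), deque(["b1"])]
print(rr_schedule(lps, 4))  # ['a1', 'b1', 'a2', 'a3']
```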
The Iub congestion control algorithm must be implemented in the uplink and downlink directions.
It consists of the following algorithms:
RLC (Radio Link Control) retransmission rate-based downlink congestion control algorithm
Backpressure-based downlink congestion control algorithm
NodeB HSDPA-based adaptive downlink flow control
R99 single service downlink congestion control algorithm
NodeB backpressure-based uplink congestion control algorithm
Transport layer uplink congestion control algorithm
R99 single service uplink congestion control algorithm
Scenario
Service Type
HSDPA service
R99 service
The recommended configurations for the downlink congestion control algorithms are as follows:
The RLC retransmission rate-based congestion control algorithm switch is disabled. Other
algorithm switches are enabled.
In the convergence scenario, multiple-level LPs are configured if the configuration of multiple-level LPs is supported.
In the IP transport scenario, the IP PM is enabled if it is supported.
The relations between the four downlink congestion control algorithms are as follows:
Relation between the RNC backpressure-based congestion control algorithm and the RNC RLC
retransmission rate-based congestion control algorithm
Both the algorithms are implemented in the RNC. Therefore, they may take effect
simultaneously.
When the backpressure-based congestion control algorithm switch of a service is enabled, the
RLC retransmission rate-based congestion control algorithm switch is disabled automatically.
Relation between the RNC backpressure-based congestion control algorithm and the RNC R99
single service congestion control algorithm
Both the algorithms are implemented in the RNC. Therefore, they may take effect
simultaneously.
In the case that backpressure takes effect, the backpressure-based congestion control
algorithm ensures that no packet loss occurs in the RNC. The R99 single service congestion
control algorithm monitors packet loss and reduces the rate only when congestion occurs on the
transport network. Therefore, it has no impact on the backpressure-based congestion control
algorithm. It serves as the supplement in the case that backpressure does not take effect.
Relation between the RNC R99 single service congestion control algorithm and the RNC RLC
retransmission rate-based congestion control algorithm
Both the algorithms are implemented in the RNC. Therefore, they may take effect
simultaneously.
The R99 single service congestion control algorithm can take the place of the RLC
retransmission rate-based congestion control algorithm. Therefore, when the R99 single service
congestion control algorithm takes effect, the RLC retransmission rate-based congestion control
algorithm can be disabled.
Relation between the NodeB HSDPA flow control algorithm and the RNC backpressure-based
congestion control algorithm
The HSDPA flow control algorithm is implemented in the NodeB, and the backpressure-based
congestion control algorithm is implemented in the RNC. Therefore, they may take effect
simultaneously.
If the NodeB HSDPA flow control algorithm switch is set to NO_BW_SHAPING, then the two
algorithms do not conflict in the case that backpressure takes effect. The congestion problem on
the Iub interface cannot be solved in the case that backpressure does not take effect.
If the NodeB HSDPA flow control algorithm switch is set to DYNAMIC_BW_SHAPING, then the
two algorithms conflict in the case that backpressure takes effect. The NodeB HSDPA flow
control algorithm can independently solve the congestion problem of HSDPA users on the Iub
interface in the case that backpressure does not take effect.
If the NodeB HSDPA flow control algorithm switch is set to BW_SHAPING_ONOFF_TOGGLE,
then the NodeB flow control policy is automatically set to DYNAMIC_BW_SHAPING and can
independently solve the congestion problem of HSDPA users in the case that backpressure
does not take effect. The NodeB flow control policy is automatically set to NO_BW_SHAPING in
the case that backpressure takes effect.
Relation between the NodeB HSDPA flow control algorithm and the RNC RLC retransmission
rate-based congestion control algorithm
The NodeB HSDPA flow control algorithm is more effective; therefore, the RLC retransmission rate-based congestion control algorithm is not used for the HSDPA service.
When both the algorithms take effect simultaneously, one is applied to R99 services, and the
other is applied to HSDPA services. They do not conflict with each other. Generally, the priority
of R99 services is higher than that of HSDPA services. Therefore, the rate of HSDPA services is
reduced till the rate reaches the minimum value. In this case, the RLC retransmission rate-based congestion control algorithm takes effect to limit the rate of R99 services.
Relation between the NodeB HSDPA flow control algorithm and the RNC R99 single service
congestion control algorithm
The HSDPA flow control algorithm is implemented in the NodeB, and the R99 single service
congestion control algorithm is implemented in the RNC. Therefore, they may take effect
simultaneously.
When both the algorithms take effect simultaneously, one is applied to R99 services, and the
other is applied to HSDPA services. They do not conflict. The R99 single service congestion
control algorithm aids the NodeB HSDPA flow control algorithm in solving flow control problems
of R99 services.
Through flow control algorithm 1, the transmission rate of the RNC matches the bandwidth on the
Iub interface, as shown in Figure 7-2.
Figure 1-16 BE service flow control in the case of Iub congestion
----End
Step 7 When the buffer length of the queue is greater than the packet discarding threshold, the
RNC starts discarding data packets from the buffer.
The packet discarding thresholds are DROPPKTTHD0, DROPPKTTHD1, DROPPKTTHD2, DROPPKTTHD3,
DROPPKTTHD4, and DROPPKTTHD5.
The length of packets discarded from the queue is equal to the packet discarding threshold minus the congestion
threshold.
Step 8 When the buffer length of the queue is smaller than the congestion recovery threshold,
the queue leaves the congestion state. The port is recovered if all the queues on the port leave
the congestion state. The interface boards send congestion resolving signals to the associated
DPUb boards, and the DPUb boards restore the transmission rate of BE users on the port.
Step 9 After the BE users leave the congestion state, the RNC increases the transmission rate
every 10 ms according to the increasing step until the BE users reach the Maximum Bit Rate
(MBR). The value of MBR is carried on the Radio Access Bearer (RAB) from the Core Network
(CN).
The initial increasing step of the transmission rate is 2,000 bit/s x SPI, and the step is doubled at intervals of 200
ms.
----End
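The rate recovery described in Step 9 can be sketched as follows, with one adjustment every 10 ms, an initial step of 2,000 bit/s x SPI, and doubling every 200 ms; the function itself is an illustration, not RNC code.

```python
def ramp_up(rate, mbr, spi, duration_ms):
    """Post-congestion rate recovery for a BE user: every 10 ms the rate is
    raised by a step that starts at 2,000 bit/s x SPI and doubles every
    200 ms, capped at the MBR carried on the RAB from the CN.
    """
    step = 2_000 * spi
    for t in range(0, duration_ms, 10):
        if t and t % 200 == 0:
            step *= 2              # the step is doubled at intervals of 200 ms
        rate = min(mbr, rate + step)
    return rate

# In the first 200 ms (20 adjustments of 2,000 bit/s with SPI = 1),
# the rate grows by 40,000 bit/s.
print(ramp_up(0, 10**9, 1, 200))  # 40000
```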
The result of flow control algorithm 2 for the BE service is shown in Figure 7-3.
Figure 1-17 Result of flow control algorithm 2 for the BE service
The NodeB Iub flow control algorithm switch Switch is set as follows:
When the switch is set to DYNAMIC_BW_SHAPING, the NodeB adjusts the available
bandwidth for HSDPA users based on the delay and packet loss condition on the Iub interface.
Then, considering the rate on the air interface, the NodeB performs Iub shaping and distributes
flow to HSDPA users.
When the switch is set to NO_BW_SHAPING, the NodeB does not adjust the bandwidth based
on the delay and packet loss condition on the Iub interface. The NodeB reports the conditions on
the air interface to the RNC, and then the RNC performs bandwidth allocation.
When the switch is set to BW_SHAPING_ONOFF_TOGGLE, the flow control policy for the
ports of the NodeB is either DYNAMIC_BW_SHAPING or NO_BW_SHAPING in accordance
with the congestion detection mechanism of the NodeB.
This section describes the flow control policy used when Switch is set to
BW_SHAPING_ONOFF_TOGGLE. The algorithm architecture is shown in Figure 7-4.
Figure 1-18 Dynamic flow control algorithm architecture
Scenario
Service Type
R99 service
HSUPA service
The recommended configurations for the uplink congestion control algorithms are as follows:
All the algorithm switches are enabled.
In the IP transport scenario, the IP PM is enabled if it is supported.
The relations between the four uplink congestion control algorithms are as follows:
The NodeB backpressure-based uplink congestion control algorithm and the NodeB uplink
bandwidth adaptive adjustment algorithm are implemented in the NodeB. The RNC R99 single
service uplink congestion control algorithm is implemented in the RNC. These three algorithms
may take effect simultaneously.
The result (available bandwidth for LPs) of the NodeB uplink bandwidth adaptive adjustment
algorithm is the input for the NodeB backpressure-based uplink congestion control algorithm. If
the NodeB boards support the NodeB uplink bandwidth adaptive adjustment algorithm and the
NodeB backpressure-based uplink congestion control algorithm, both the algorithms can be
used together to solve the uplink Iub congestion problems (in direct connection and
convergence scenarios). This is the main scheme of the uplink flow control algorithm.
If the NodeB supports the NodeB backpressure-based uplink congestion control algorithm and
the NodeB uplink bandwidth adaptive adjustment algorithm, the RNC R99 single service uplink
congestion control algorithm can control the transmission rate of UEs based on the
backpressure flow control and rate limiting results. They do not conflict with each other.
Otherwise, the RNC R99 single service uplink congestion control algorithm independently
controls the transmission rate of UEs based on the FP congestion detection results.
If the NodeB supports the NodeB backpressure-based uplink congestion control algorithm and
the NodeB uplink bandwidth adaptive adjustment algorithm, the NodeB cross-Iur single HSUPA
service uplink congestion control algorithm can solve the packet loss problem due to Iur
interface congestion for HSUPA users.
Figure 7-5 shows the principle of the NodeB backpressure-based congestion control algorithm.
Figure 1-19 Principle of the NodeB backpressure-based uplink congestion control algorithm
If no congestion is detected on the port, the status of the queues must be checked on the basis
of the buffer data of the queues.
For IP transport based on the V2 platform: The algorithm directly checks whether congestion
occurs on the port based on the actually measured buffer usage on the port because LP
shaping is supported. If congestion is detected on the port, the rates of all the BE users on the
port are reduced.
Step 2 When the buffer data volume on the decoding DSP is larger than a certain threshold,
some data packets in the buffer are discarded.
For HSUPA users, the data can be buffered in the decoding DSP for 500 ms and will be discarded
after 500 ms.
For R99 users, the data can be buffered in the decoding DSP for 60 ms and will be discarded
after 60 ms.
Step 3 When the buffer data volume of the LPs and queues is smaller than the congestion
recovery threshold, congestion is resolved. The interface boards send the congestion resolving
signals to the DSP concerned. The BE users on the port leave the congestion state, and the
transmission rates are restored.
Step 4 After the BE users leave the congestion state, the decoding DSP increases the
transmission rate by a certain step every 10 ms until the transmission rate of the BE users
reaches the MBR.
The initial increasing step of the transmission rate is 2,000 bit/s x SPI, and the step is doubled at
intervals of 200 ms.
Step 5 The buffer data volume on the decoding DSP is the input for scheduling. The hybrid
service may consider the buffer conditions of several services on the decoding DSP.
----End
Figure 1-20 Frame structure of the congestion indication on the transport network
Congestion Status indicates the congestion status of the transport network. Its values are as
follows:
0: no TNL congestion
1: reserved for future use
2: TNL congestion detected by delay build-up
3: TNL congestion detected by frame loss
After receiving the non-cross-Iur congestion indication periodically measured on each LP, the
NodeB adjusts the exit bandwidth on the NodeB side according to the following principles:
If the NodeB receives the congestion indication in which the value of Congestion Status is 2 or 3
in a measurement period, it reduces the exit bandwidth of the LP by a certain step.
Otherwise, the NodeB increases the exit bandwidth of the LP by a certain step, and the changed
exit bandwidth does not exceed the configured bandwidth.
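The adjustment principle above can be sketched as follows; the step sizes are illustrative assumptions, not the actual NodeB values.

```python
def adjust_exit_bw(bw, status, cfg_bw, down_step=500, up_step=100):
    """One measurement period of exit bandwidth adjustment on an LP.

    status follows the congestion indication frame:
      0: no TNL congestion, 2: delay build-up, 3: frame loss.
    down_step and up_step (kbit/s) are illustrative only.
    """
    if status in (2, 3):              # congestion indicated: reduce the exit bandwidth
        return max(0, bw - down_step)
    return min(cfg_bw, bw + up_step)  # otherwise increase, never exceeding
                                      # the configured bandwidth
```

Reducing by a larger step than the increase gives the usual fast-back-off, slow-recovery behavior, but the actual ratio is not stated in this document.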
Step 2 If the FP of a service of a user detects uplink R99 congestion due to frame loss, the RNC proceeds as follows:
If the rate reducing period timer expires, the RNC reduces the rate of the uplink service by a
level and notifies the UE through the TFC Control signaling. The rate is not lower than the GBR.
Then, the rate reducing period timer and the congestion recovery timer are started.
If the rate reducing period timer does not expire, the rate cannot be reduced, and the
congestion recovery timer is restarted.
Step 3 If the congestion recovery timer expires and the current rate of the user does not reach
the MBR, the RNC increases the rate by a level and notifies the UE through the TFC CONTROL
signaling. Then, the congestion recovery timer is restarted.
----End
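The timer-gated rate control above can be sketched as the following state machine. The class name, the rate levels, and the timer handling are illustrative assumptions; only the decision logic follows the description.

```python
class R99UplinkRateControl:
    """Sketch of the TFC-signaled uplink rate control described above."""

    RATES = [64, 128, 256, 384]  # kbit/s rate levels (illustrative); top level = MBR

    def __init__(self, level, gbr_level, reduce_period, recovery_period):
        self.level = level                    # current rate level
        self.gbr_level = gbr_level            # the rate is never reduced below the GBR
        self.reduce_timer = 0                 # earliest time a reduction is allowed
        self.recovery_timer = None
        self.reduce_period = reduce_period
        self.recovery_period = recovery_period

    def on_frame_loss(self, now):
        """Congestion detected by the FP: reduce one level if the rate
        reducing period timer has expired; always restart the recovery timer."""
        if self.reduce_timer <= now:
            if self.level > self.gbr_level:
                self.level -= 1               # TFC Control: one level down
            self.reduce_timer = now + self.reduce_period
        self.recovery_timer = now + self.recovery_period

    def on_tick(self, now):
        """Congestion recovery timer expiry: one level up toward the MBR."""
        if self.recovery_timer is not None and now >= self.recovery_timer:
            if self.level < len(self.RATES) - 1:
                self.level += 1               # TFC Control: one level up
            self.recovery_timer = now + self.recovery_period

    @property
    def rate(self):
        return self.RATES[self.level]
```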
IP RAN FP-MUX: Frame protocol multiplexing (FP-MUX) encapsulates several small FP PDU
frames (also called subframes) into one UDP packet, thus improving the transmission
efficiency. The FP-MUX applies only to Iub user plane data based on the UDP/IP protocol.
IP RAN header compression: IP RAN header compression is performed to compress the protocol
header of the PPP frame to improve the bandwidth utilization.
FP silent mode: The FP silent mode is a mechanism of eliminating unused and null data on the
Iub/Iur interface.
IP RAN FP-MUX
The FP-MUX is used to encapsulate several small FP PDU frames (also called subframes) into a
UDP packet, thus improving the transmission efficiency.
The FP-MUX is applied only to Iub user plane data based on the UDP/IP protocol.
The FP-MUX can be applied to frames with the same priority, namely, frames with the same
DSCP value.
Figure 7-7 shows the format of the FP-MUX UDP/IP packet.
Figure 1-21 Format of the FP-MUX UDP/IP packet
To activate the FP-MUX, set the FPMUXSWITCH parameter to YES. SUBFRAMELEN indicates
the maximum length of a subframe, and MAXFRAMELEN indicates the maximum frame length of
the FP-MUX UDP/IP packet. When the timer set by FPTIME expires, the UDP packet is sent.
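FP multiplexing can be sketched as follows. The real subframe header format is not specified in this document, so a hypothetical 1-byte length header is assumed here purely for illustration.

```python
import struct

def fp_mux(pdus, max_subframe_len, max_frame_len):
    """Concatenate small FP PDUs into one UDP payload, each preceded by a
    1-byte length header (assumed format), up to the maximum frame length."""
    payload = b""
    for pdu in pdus:
        if len(pdu) > max_subframe_len:
            raise ValueError("PDU too large to be carried as a subframe")
        sub = struct.pack("B", len(pdu)) + pdu
        if len(payload) + len(sub) > max_frame_len:
            break  # frame full: remaining PDUs go into the next UDP packet
        payload += sub
    return payload

def fp_demux(payload):
    """Split a multiplexed UDP payload back into the original FP PDUs."""
    pdus, i = [], 0
    while i < len(payload):
        n = payload[i]
        pdus.append(payload[i + 1:i + 1 + n])
        i += 1 + n
    return pdus
```

One UDP/IP header is shared by all subframes in the packet, which is where the transmission efficiency gain comes from.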
Only the FG2a and GOUa support the FP-MUX. Each board supports 1,800 FP-MUX streams.
The QoS path occupies 14 FP-MUX streams for mapping, and the non-QoS path occupies only
one FP-MUX stream.
IP RAN Header Compression
IP RAN header compression is performed to compress the protocol header of the PPP frame to
improve the bandwidth utilization. The RNC and NodeB support the following header
compression methods.
ACFC
Address and Control Field Compression (ACFC) complies with RFC 1661. It is used to compress
the address and control fields of the PPP protocol. Generally, the address and control field values
are fixed values and need not be transferred each time. After the Link Control Protocol (LCP)
negotiation of the PPP link is complete, the address and control field of successive packets can
be compressed.
PFC
Protocol Field Compression (PFC) complies with RFC 1661. It is used to compress the 2-byte
protocol field to a 1-byte one. The structure of this field is consistent with the ISO 3309 extension
mechanism for the protocol field.
When the least significant bit of the first octet of the protocol field is 0, another octet follows,
and the protocol field contains two bytes.
When the least significant bit of the octet is 1, that octet is the last one, and the protocol field
contains one byte.
Most packets can be compressed because the assigned protocol field value is generally less than
256.
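The PFC rule can be sketched as follows: a protocol value below 0x100 is sent as a single octet, and the least significant bit of the last octet distinguishes the one-byte and two-byte forms. The function names are illustrative.

```python
def compress_protocol(proto):
    """PFC sketch: a 2-byte PPP protocol field whose value is below 0x100 is
    sent as a single byte. PPP protocol numbers have an odd low octet, so the
    least significant bit of the final octet being 1 marks the end of the field."""
    if proto < 0x100:
        return bytes([proto])         # compressed: one byte
    return proto.to_bytes(2, "big")   # uncompressed: two bytes

def decompress_protocol(data):
    """Read a (possibly compressed) protocol field from the start of a frame.
    Returns (protocol value, number of bytes consumed)."""
    if data[0] & 0x01:                # LSB 1: single-byte (compressed) field
        return data[0], 1
    return int.from_bytes(data[:2], "big"), 2
```

For example, IPv4 (0x0021) compresses to one byte, while LCP (0xC021) must stay at two bytes.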
IPHC
IP Header Compression (IPHC) complies with RFC 2507 and RFC 3544. It is used to compress
the IP/UDP header on the PPP link. IPHC improves the bandwidth utilization by using the
following methods:
The unchanged header fields in the IP/UDP header are not carried in each packet.
The header fields changed in a specified mode are replaced by the less significant bits.
When a packet with a full header is occasionally sent, the header context can be established at
both ends of the link. The original header can be restored according to the context and the
received compressed header.
The associated parameter on the RNC side is IPHC.
The associated parameter on the NodeB side is IPHC.
FP Silent Mode
The FP silent mode saves the transmission bandwidth of the uplink R99 service and improves the
uplink transmission efficiency.
Two modes, normal mode and silent mode, can be used in uplink transmission. When the
transport bearer is established and the NodeB is informed through the related control plane
procedure, the SRNC selects the transmission mode.
In normal mode, for the DCH, the NodeB continuously sends the UL DATA FRAME to the RNC.
In silent mode, when only one transport channel is transmitted on the transport bearer, the NodeB
does not send the UL DATA FRAME to the RNC after receiving a TFI indicating TB numbered 0
in a TTI period.
In silent mode, for all associated DCHs, the NodeB does not send the UL DATA FRAME to the
RNC after receiving a TFI indicating TB numbered 0.
In the current release, the transmission mode is permanently set to the normal mode.
IP PM
On the actual network, the bandwidth on the Iub interface may be variable. Based on the packet
loss and delay on the IP transport network detected by IP PM, the transmission bandwidth on the
Iub IP LP can be adjusted adaptively. The adjusted bandwidth can be used as the input for port
backpressure.
The IP PM solution is described as follows:
If backpressure is implemented on the LP, congestion and packet loss do not occur on the LP but
may occur on the transport network.
The RNC and NodeB implement IP PM in the following way to detect congestion and packet loss
on the transport network:
The transmitter sends a Forward Monitoring (FM) packet containing the count and timestamp of
the transmit packet to the receiver.
The receiver adds the count and timestamp of the receive packet to the FM packet to generate
a Backward Reporting (BR) packet and then sends it to the transmitter.
The transmitter adjusts the available bandwidth on the LP according to the FM and BR packets
and adjusts the rate on the LP according to the bandwidth adjustment result.
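From successive FM/BR counter pairs, the transmitter can compute the packet loss ratio over a measurement interval, which then drives the bandwidth adjustment. A minimal sketch follows (function name and interface are assumptions):

```python
def loss_ratio(fm_tx_count, br_rx_count, prev_tx_count, prev_rx_count):
    """Packet loss on the transport network over one measurement interval,
    computed by the transmitter from the FM (transmit) counter and the
    BR (receive) counter reported back by the receiver."""
    sent = fm_tx_count - prev_tx_count
    received = br_rx_count - prev_rx_count
    return (sent - received) / sent if sent else 0.0
```

The timestamps carried in the same packets allow the one-way delay to be estimated in the same manner.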
The dynamic adjustment of the bandwidth on the LP depends on the IP PM detection result. During the LP
configuration, if the BWADJ parameter is set to ON, IP PM must be activated for all IP paths on the LP. Then,
the system dynamically adjusts the bandwidth on the LP according to the Iub transmission quality information
obtained by IP PM.
The predicted available bandwidth is also applied to the access algorithm. For details, see section
6.3 "Admission Control."
If the BWADJ parameter is set to ON, MAXBW and MINBW must be configured.
If the BWADJ parameter is set to OFF, only one fixed bandwidth may be configured for the LP.
Only the FG2a and GOUa support IP PM. Each board supports 500 PM streams. The QoS Path needs to occupy
a maximum of 14 PM streams. The non-QoS Path occupies only one PM stream.
The ACT IPPM command is used to activate IP PM, and the DEA IPPM command is used to deactivate IP PM.
TRM Parameters
1.1 Description
Table 1-15 TRM parameter description
Parameter ID
Description
Beartype
BWADJ
BWDCONGBW
BWDCONGCLRBW
BWDHORSVBW
CONGCLRTHD0
When the buffering time of queue 0 is no more than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter specifies the recovery threshold of the CBR queue.
CONGCLRTHD1
When the buffering time of queue 1 is no more than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter specifies the recovery threshold of the RTVBR queue.
CONGCLRTHD2
When the buffering time of queue 2 is no more than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter specifies the recovery threshold of the NRTVBR queue.
CONGCLRTHD3
When the buffering time of queue 3 is no more than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter specifies the recovery threshold of the UBR queue.
CONGCLRTHD4
When the buffering time of queue 4 is no more than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter specifies the recovery threshold of the UBR+ queue.
CONGCLRTHD5
When the buffering time of queue 5 is no more than the value of this parameter, port flow control is canceled.
CONGTHD0
When the buffering time of queue 0 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter specifies the congestion threshold of the CBR queue.
CONGTHD1
When the buffering time of queue 1 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter specifies the congestion threshold of the RTVBR queue.
CONGTHD2
When the buffering time of queue 2 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter specifies the congestion threshold of the NRTVBR queue.
CONGTHD3
When the buffering time of queue 3 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter specifies the congestion threshold of the UBR queue.
CONGTHD4
When the buffering time of queue 4 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter specifies the congestion threshold of the UBR+ queue.
CONGTHD5
When the buffering time of queue 5 is no less than the value of this parameter, port flow control starts.
DLR99CONGCTRLSWITCH
When the switch is selected, congestion detection and control for the DL R99 service are supported.
DR
Discard Rate. The link is not congested when the frame loss ratio is
lower than or equal to this threshold.
DraSwitch
channel reconfiguration control algorithm is used for the RNC.
5) DRA_HSDPA_DL_FLOW_CONTROL_SWITCH: When the switch
is on, flow control is enabled for HSDPA services in AM mode.
6) DRA_HSDPA_STATE_TRANS_SWITCH: When the switch is on,
the status of the UE RRC carrying HSDPA services can be
changed to CELL_FACH at the RNC. If a PS BE service is carried
over the HS-DSCH, the switch PS_BE_STATE_TRANS_SWITCH
should be on simultaneously. If a PS real-time service is carried over
the HS-DSCH, the switch PS_NON_BE_STATE_TRANS_SWITCH
should be on simultaneously.
7) DRA_HSUPA_DCCC_SWITCH: When the switch is on, the DCCC
algorithm is used for HSUPA. The DCCC switch must be also on
before this switch takes effect.
8) DRA_HSUPA_STATE_TRANS_SWITCH: When the switch is on,
the status of the UE RRC carrying HSUPA services can be
changed to CELL_FACH at the RNC. If a PS BE service is carried
over the E-DCH, the switch PS_BE_STATE_TRANS_SWITCH
should be on simultaneously. If a PS real-time service is carried over
the E-DCH, the switch PS_NON_BE_STATE_TRANS_SWITCH
should be on simultaneously.
9) DRA_IU_QOS_RENEG_SWITCH: When the switch is on and the
Iu QoS RENEQ license is activated, the RNC supports renegotiation
of the maximum rate if the QoS of real-time services is not ensured
according to the cell status.
10) DRA_PS_BE_STATE_TRANS_SWITCH: When the switch is on,
UE RRC status transition (CELL_FACH/CELL_PCH/URA_PCH) is
allowed at the RNC.
11) DRA_PS_NON_BE_STATE_TRANS_SWITCH: When the switch
is on, the status of the UE RRC carrying real-time services can
be changed to CELL_FACH at the RNC.
12) DRA_R99_DL_FLOW_CONTROL_SWITCH: Under a poor radio
environment, the QoS of high speed services drops considerably and
the TX power is overly high. In this case, the RNC can set restrictions
on certain transmission formats based on the transmission quality,
thus lowering traffic speed and TX power. When the switch is on, the
Iub overbooking function is enabled.
13) DRA_THROUGHPUT_DCCC_SWITCH: When the switch is on,
the DCCC based on traffic statistics is supported over the DCH.
DROPPKTTHD0
When the buffering time of queue 0 is no less than the value of this parameter, packets start to be discarded from the buffer. When the port flow control type is ATM, this parameter specifies the packet discard threshold of the CBR queue.
DROPPKTTHD1
When the buffering time of queue 1 is no less than the value of this parameter, packets start to be discarded from the buffer. When the port flow control type is ATM, this parameter specifies the packet discard threshold of the RTVBR queue.
DROPPKTTHD2
When the buffering time of queue 2 is no less than the value of this parameter, packets start to be discarded from the buffer. When the port flow control type is ATM, this parameter specifies the packet discard threshold of the NRTVBR queue.
DROPPKTTHD3
When the buffering time of queue 3 is no less than the value of this parameter, packets start to be discarded from the buffer. When the port flow control type is ATM, this parameter specifies the packet discard threshold of the UBR queue.
DROPPKTTHD4
When the buffering time of queue 4 is no less than the value of this parameter, packets start to be discarded from the buffer. When the port flow control type is ATM, this parameter specifies the packet discard threshold of the UBR+ queue.
DROPPKTTHD5
When the buffering time of queue 5 is no less than the value of this parameter, packets start to be discarded from the buffer.
DSCP
This parameter specifies the DiffServ Code Point for the ping
command.
EventAThred
This parameter specifies the threshold of event A, that is, the upper
limit of RLC retransmission ratio.
EventBThred
This parameter specifies the threshold of event B, that is, the lower
limit of RLC retransmission ratio.
FCINDEX
FLOWCTRLSWITCH
FPMUXSWITCH
This parameter indicates whether to check the link of the IP path with FP-MUX. Only the
FG2a and GOUa boards support FP-MUX.
FTI
FWDCONGBW
FWDCONGCLRBW
FWDHORSVBW
IPHC
IPHC
MAXBW
MAXFRAMELEN
MINBW
MoniterPrd
NodeBLdcAlgoSwitch
algorithm): When the cell group level credit load is heavy, users are
assembled in priority order among all the NodeBs and some users
are selected for LDR action in order to reduce the cell group level
credit load.
IUB_OLC (Iub overload control algorithm): When the NodeB Iub load is overloaded, users are
assembled in priority order among all the NodeBs, and some users are selected for OLC
actions in order to reduce the NodeB Iub load.
To enable some of the algorithms above, select them. Otherwise,
they are disabled.
PendingTimeA
PendingTimeB
PQNUM
This parameter is valid only when the port flow control type is IP. The priority queue number for ATM is fixed at 2 and cannot be modified.
PT
Port Type
RXTRFX
SPI
SUBFRAMELEN
Switch
TD
Time Delay. The link is not congested when the delay is lower than
this threshold.
TimeToMoniter
This parameter specifies the delay time after the RLC entity is
established or reconfigured and before the retransmission ratio
monitoring is started.
TimeToTriggerA
TimeToTriggerB
TrafficClass
This parameter specifies the traffic class that the service belongs to. Based on Quality of
Service (QoS), there are two traffic classes: interactive and background.
TXTRFX
TX traffic record index at the port from which the IPoA PVC goes out
of the RNC. The TX traffic must have been configured.
UserPriority
Beartype
- R99: The service is carried on a non-HSPA channel.
- HSPA: The service is carried on an HSPA channel.
BWADJ
BWDCONGBW
BWDCONGCLRBW
BWDHORSVBW
CONGCLRTHD0
When the buffering time of queue 0 is no greater than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter indicates the recovery threshold of the CBR queue.
CONGCLRTHD1
When the buffering time of queue 1 is no greater than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter indicates the recovery threshold of the RTVBR queue.
CONGCLRTHD2
When the buffering time of queue 2 is no greater than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter indicates the recovery threshold of the NRTVBR queue.
CONGCLRTHD3
When the buffering time of queue 3 is no greater than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter indicates the recovery threshold of the UBR queue.
CONGCLRTHD4
When the buffering time of queue 4 is no greater than the value of this parameter, port flow control is canceled. When the port flow control type is ATM, this parameter indicates the recovery threshold of the UBR+ queue.
CONGCLRTHD5
When the buffering time of queue 5 is no greater than the value of this parameter, port flow control is canceled.
CONGTHD0
When the buffering time of queue 0 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter indicates the congestion threshold of the CBR queue.
CONGTHD1
When the buffering time of queue 1 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter indicates the congestion threshold of the RTVBR queue.
CONGTHD2
When the buffering time of queue 2 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter indicates the congestion threshold of the NRTVBR queue.
CONGTHD3
When the buffering time of queue 3 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter indicates the congestion threshold of the UBR queue.
CONGTHD4
When the buffering time of queue 4 is no less than the value of this parameter, port flow control starts. When the port flow control type is ATM, this parameter indicates the congestion threshold of the UBR+ queue.
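Taken together, each CONGTHDn/CONGCLRTHDn pair forms a hysteresis loop per queue: flow control starts when the queue's buffering time reaches the congestion threshold and stops only once it falls back to the (lower) clear threshold. A minimal sketch, using the documented defaults for queue 0 (CONGTHD0 = 25, CONGCLRTHD0 = 15); the controller class itself is illustrative, not the product's code:

```python
# Hysteresis flow control per queue: start when buffering time >= CONGTHD,
# cancel when it drops to <= CONGCLRTHD. Thresholds below are the
# documented defaults for queue 0 (CONGTHD0=25, CONGCLRTHD0=15).

class QueueFlowControl:
    def __init__(self, cong_thd=25, cong_clr_thd=15):
        assert cong_clr_thd < cong_thd, "clear threshold must be lower"
        self.cong_thd = cong_thd
        self.cong_clr_thd = cong_clr_thd
        self.active = False

    def update(self, buffering_time):
        """Return True while port flow control is active."""
        if not self.active and buffering_time >= self.cong_thd:
            self.active = True       # congestion: begin flow control
        elif self.active and buffering_time <= self.cong_clr_thd:
            self.active = False      # recovered: cancel flow control
        return self.active

fc = QueueFlowControl()
print([fc.update(t) for t in (10, 30, 20, 15, 40)])
# [False, True, True, False, True]
```

The gap between the two thresholds prevents flow control from toggling on and off rapidly when the buffering time hovers near a single threshold.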
Parameter ID | Default Value | GUI Value Range | Actual Value Range
Beartype | - | R99, HSPA | R99, HSPA
BWADJ | OFF | OFF, ON | OFF, ON
BWDCONGBW | - | 0~320000 | 0~320000
BWDCONGCLRBW | - | 0~320000 | 0~320000
BWDHORSVBW | - | 0~320000 | 0~320000
CONGCLRTHD0 | 15 | 10~150 | 10~150
CONGCLRTHD1 | 15 | 10~150 | 10~150
CONGCLRTHD2 | 15 | 10~150 | 10~150
CONGCLRTHD3 | 15 | 10~150 | 10~150
CONGCLRTHD4 | 25 | 10~150 | 10~150
CONGCLRTHD5 | 25 | 10~150 | 10~150
CONGTHD0 | 25 | 10~150 | 10~150
CONGTHD1 | 25 | 10~150 | 10~150
CONGTHD2 | 25 | 10~150 | 10~150
CONGTHD3 | 25 | 10~150 | 10~150
CONGTHD4 | 50 | 10~150 | 10~150
CONGTHD5 | 50 | 10~150 | 10~150
DLR99CONGCTRLSWITCH | - | OFF, ON | OFF, ON
DR | - | 0~1000 | 0~1000
DraSwitch | - | DRA_AQM_SWITCH, DRA_BE_EDCH_TTI_RECFG_SWITCH, DRA_BE_RATE_DOWN_BF_HO_SWITCH, DRA_DCCC_SWITCH, DRA_HSDPA_DL_FLOW_CONTROL_SWITCH, DRA_HSDPA_STATE_TRANS_SWITCH, DRA_HSUPA_DCCC_SWITCH, DRA_HSUPA_STATE_TRANS_SWITCH, DRA_IU_QOS_RENEG_SWITCH, DRA_PS_BE_STATE_TRANS_SWITCH, DRA_PS_NON_BE_STATE_TRANS_SWITCH, DRA_R99_DL_FLOW_CONTROL_SWITCH, DRA_THROUGHPUT_DCCC_SWITCH | Same as the GUI value range
DROPPKTTHD0 | 60 | 10~150 | 10~150
DROPPKTTHD1 | 60 | 10~150 | 10~150
DROPPKTTHD2 | 60 | 10~150 | 10~150
DROPPKTTHD3 | 60 | 10~150 | 10~150
DROPPKTTHD4 | 80 | 10~150 | 10~150
DROPPKTTHD5 | 80 | 10~150 | 10~150
DSCP | 0 (PING IP); - (SET PHBMAP, SET DSCPMAP); 62 (ADD SCTPLNK) | 0~63 | 0~63
EventAThred | 160 | 0~1000 | 0~1000
EventBThred | 80 | 0~1000 | 0~1000
FCINDEX | - | 0~1999 | 0~1999
FLOWCTRLSWITCH | - | OFF, ON | OFF, ON
FPMUXSWITCH | NO | NO, YES | NO, YES
FTI | - | 0~33 | 0~33
FWDCONGBW | - | 0~320000 | 0~320000
FWDCONGCLRBW | - | 0~320000 | 0~320000
FWDHORSVBW | - | 0~320000 | 0~320000
IPHC | UDP/IP_HC | No_HC, UDP/IP_HC | No_HC (disable header compression), UDP/IP_HC (enable header compression)
IPHC | ENABLE | DISABLE, ENABLE | DISABLE, ENABLE
MAXBW | - | 1~1000 | 64~64000, step 64
MAXFRAMELEN | 270 | 24~1031 | 24~1031
MINBW | - | 1~1000 | 64~64000, step 64
MoniterPrd | 1000 | 40~60000 | 40~60000
NodeBLdcAlgoSwitch | - | IUB_LDR, NODEB_CREDIT_LDR, LCG_CREDIT_LDR, IUB_OLC | IUB_LDR, LCG_CREDIT_LDR
PendingTimeA | - | 0~1000 | 0~1000
PendingTimeB | - | 0~1000 | 0~1000
PQNUM | - | 0~5 | 0~5
PT | - | BOOL(Boolean port |
RXTRFX | - | 100~1999 | 100~1999
SPI | - | 0~15 | 0~15
SUBFRAMELEN | 127 | 16~1023 | 16~1023
Switch | BW_SHAPING_ONOFF_TOGGLE | STATIC_BW_SHAP, DYNAMIC_BW_SH, BW_SHAPING_ON, 0~100 |
TimeToMoniter | 5000 | 0~500000 | 0~500000
TimeToTriggerA | - | 1~100 | 1~100
TimeToTriggerB | 14 | 1~100 | 1~100
TrafficClass | - | INTERACTIVE, BACKGROUND | INTERACTIVE, BACKGROUND
TXTRFX | - | 100~1999 | 100~1999
UserPriority | - | GOLD, SILVER, COPPER | GOLD, SILVER, COPPER
The Default Value column is valid only for the optional parameters.
The "-" symbol indicates no default value.
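When scripting parameter changes, the ranges above can be checked mechanically before issuing MML commands. A small validation sketch (only a few range entries are transcribed from the table; the checker itself is illustrative):

```python
# Validate candidate values against the documented value ranges.
# A few entries transcribed from the table above for illustration.

RANGES = {
    "CONGTHD0": range(10, 151),   # 10~150
    "CONGCLRTHD0": range(10, 151),
    "DSCP": range(0, 64),         # 0~63
    "PQNUM": range(0, 6),         # 0~5
}

def check(param, value):
    """Return True if value lies inside the documented range."""
    allowed = RANGES.get(param)
    if allowed is None:
        raise KeyError(f"unknown parameter: {param}")
    return value in allowed

print(check("DSCP", 62), check("CONGTHD0", 200))  # True False
```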
Appendix
Default TRMMAP Table for the ATM-Based Iub and Iur Interfaces
Table 1-17 Default TRMMAP table for the ATM-based Iub and Iur interfaces
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
Common channel | RT_VBR / None | RT_VBR / None | RT_VBR / None
SRB | RT_VBR / None | RT_VBR / None | RT_VBR / None
SIP | RT_VBR / None | RT_VBR / None | RT_VBR / None
AMR | RT_VBR / None | RT_VBR / None | RT_VBR / None
R99 CS conversational | RT_VBR / None | RT_VBR / None | RT_VBR / None
R99 CS streaming | RT_VBR / None | RT_VBR / None | RT_VBR / None
R99 PS conversational | RT_VBR / None | RT_VBR / None | RT_VBR / None
R99 PS streaming | RT_VBR / None | RT_VBR / None | RT_VBR / None
R99 PS high-priority interactive | NRT_VBR / None | NRT_VBR / None | NRT_VBR / None
R99 PS medium-priority interactive | NRT_VBR / None | NRT_VBR / None | NRT_VBR / None
R99 PS low-priority interactive | NRT_VBR / None | NRT_VBR / None | NRT_VBR / None
R99 PS background | NRT_VBR / None | NRT_VBR / None | NRT_VBR / None
HSDPA SRB | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSDPA SIP | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSDPA voice | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSDPA conversational | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSDPA streaming | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSDPA high-priority interactive | UBR / None | UBR / None | UBR / None
HSDPA medium-priority interactive | UBR / None | UBR / None | UBR / None
HSDPA low-priority interactive | UBR / None | UBR / None | UBR / None
HSDPA background | UBR / None | UBR / None | UBR / None
HSUPA SRB | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSUPA SIP | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSUPA voice | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSUPA conversational | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSUPA streaming | RT_VBR / None | RT_VBR / None | RT_VBR / None
HSUPA high-priority interactive | UBR / None | UBR / None | UBR / None
HSUPA medium-priority interactive | UBR / None | UBR / None | UBR / None
HSUPA low-priority interactive | UBR / None | UBR / None | UBR / None
HSUPA background | UBR / None | UBR / None | UBR / None
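The TRMMAP tables can be read as a lookup from (traffic type, user priority) to a primary transmission resource and an optional secondary one. A sketch of such a lookup, with only a few rows transcribed from the ATM-based table above (the full table remains authoritative; the lookup code itself is illustrative):

```python
# TRMMAP lookup sketch: (traffic type, user priority) -> (primary, secondary).
# Rows transcribed from the ATM-based default TRMMAP table; where the table
# does not distinguish Gold/Silver/Copper, the same mapping applies to all.

TRMMAP_ATM = {
    ("AMR", "Gold"): ("RT_VBR", None),
    ("AMR", "Silver"): ("RT_VBR", None),
    ("AMR", "Copper"): ("RT_VBR", None),
    ("R99 PS background", "Gold"): ("NRT_VBR", None),
    ("HSDPA background", "Gold"): ("UBR", None),
}

def trm_lookup(traffic_type, user_priority):
    """Return the (primary, secondary) transmission resource mapping."""
    try:
        return TRMMAP_ATM[(traffic_type, user_priority)]
    except KeyError:
        raise ValueError(f"no mapping for {traffic_type!r}/{user_priority!r}")

print(trm_lookup("HSDPA background", "Gold"))  # ('UBR', None)
```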
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
Common channel | EF / None | EF / None | EF / None
SRB | EF / None | EF / None | EF / None
SIP | EF / None | EF / None | EF / None
AMR | EF / None | EF / None | EF / None
R99 CS conversational | AF43 / None | AF43 / None | AF43 / None
R99 CS streaming | AF33 / None | AF33 / None | AF33 / None
R99 PS conversational | AF43 / None | AF43 / None | AF43 / None
R99 PS streaming | AF33 / None | AF33 / None | AF33 / None
R99 PS high-priority interactive | AF33 / None | AF33 / None | AF33 / None
R99 PS medium-priority interactive | AF33 / None | AF33 / None | AF33 / None
R99 PS low-priority interactive | AF33 / None | AF33 / None | AF33 / None
R99 PS background | AF13 / None | AF13 / None | AF13 / None
HSDPA SRB | EF / None | EF / None | EF / None
HSDPA SIP | EF / None | EF / None | EF / None
HSDPA voice | AF43 / None | AF43 / None | AF43 / None
HSDPA conversational | AF43 / None | AF43 / None | AF43 / None
HSDPA streaming | AF33 / None | AF33 / None | AF33 / None
HSDPA high-priority interactive | AF11 / None | AF11 / None | AF11 / None
HSDPA medium-priority interactive | AF11 / None | AF11 / None | AF11 / None
HSDPA low-priority interactive | AF11 / None | AF11 / None | AF11 / None
HSDPA background | BE / None | BE / None | BE / None
HSUPA SRB | EF / None | EF / None | EF / None
HSUPA SIP | EF / None | EF / None | EF / None
HSUPA voice | AF43 / None | AF43 / None | AF43 / None
HSUPA conversational | AF43 / None | AF43 / None | AF43 / None
HSUPA streaming | AF33 / None | AF33 / None | AF33 / None
HSUPA high-priority interactive | AF23 / None | AF23 / None | AF23 / None
HSUPA medium-priority interactive | AF23 / None | AF23 / None | AF23 / None
HSUPA low-priority interactive | AF23 / None | AF23 / None | AF23 / None
HSUPA background | AF13 / None | AF13 / None | AF13 / None
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
Common channel | RT_VBR / EF | RT_VBR / EF | RT_VBR / EF
SRB | RT_VBR / EF | RT_VBR / EF | RT_VBR / EF
SIP | RT_VBR / EF | RT_VBR / EF | RT_VBR / EF
AMR | RT_VBR / EF | RT_VBR / EF | RT_VBR / EF
R99 CS conversational | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
R99 CS streaming | RT_VBR / AF33 | RT_VBR / AF33 | RT_VBR / AF33
R99 PS conversational | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
R99 PS streaming | RT_VBR / AF33 | RT_VBR / AF33 | RT_VBR / AF33
R99 PS high-priority interactive | NRT_VBR / AF33 | NRT_VBR / AF33 | NRT_VBR / AF33
R99 PS medium-priority interactive | NRT_VBR / AF33 | NRT_VBR / AF33 | NRT_VBR / AF33
R99 PS low-priority interactive | NRT_VBR / AF33 | NRT_VBR / AF33 | NRT_VBR / AF33
R99 PS background | NRT_VBR / AF13 | NRT_VBR / AF13 | NRT_VBR / AF13
HSDPA SRB | EF / RT_VBR | EF / RT_VBR | EF / RT_VBR
HSDPA SIP | EF / RT_VBR | EF / RT_VBR | EF / RT_VBR
HSDPA voice | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
HSDPA conversational | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
HSDPA streaming | RT_VBR / AF33 | RT_VBR / AF33 | RT_VBR / AF33
HSDPA high-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / UBR
HSDPA medium-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / AF11
HSDPA low-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / AF11
HSDPA background | AF13 / UBR | AF13 / UBR | AF13 / UBR
HSUPA SRB | EF / RT_VBR | EF / RT_VBR | EF / RT_VBR
HSUPA SIP | EF / RT_VBR | EF / RT_VBR | EF / RT_VBR
HSUPA voice | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
HSUPA conversational | RT_VBR / AF43 | RT_VBR / AF43 | RT_VBR / AF43
HSUPA streaming | RT_VBR / AF33 | RT_VBR / AF33 | RT_VBR / AF33
HSUPA high-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / UBR
HSUPA medium-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / AF11
HSUPA low-priority interactive | AF23 / UBR | AF23 / UBR | AF23 / AF11
HSUPA background | AF13 / UBR | AF13 / UBR | AF13 / UBR
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
Common channel | EF / LQEF | EF / LQEF | EF / LQEF
SRB | EF / LQEF | EF / LQEF | EF / LQEF
SIP | EF / LQEF | EF / LQEF | EF / LQEF
AMR | EF / LQEF | EF / LQEF | EF / LQEF
R99 CS conversational | AF43 / LQAF43 | AF43 / LQAF43 | AF43 / LQAF43
R99 CS streaming | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
R99 PS conversational | AF43 / LQAF43 | AF43 / LQAF43 | AF43 / LQAF43
R99 PS streaming | AF43 / LQAF43 | AF43 / LQAF43 | AF43 / LQAF43
R99 PS high-priority interactive | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
R99 PS medium-priority interactive | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
R99 PS low-priority interactive | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
R99 PS background | AF13 / LQAF13 | AF13 / LQAF13 | AF13 / LQAF13
HSDPA SRB | EF / LQEF | EF / LQEF | EF / LQEF
HSDPA SIP | EF / LQEF | EF / LQEF | EF / LQEF
HSDPA voice | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSDPA conversational | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSDPA streaming | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSDPA high-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSDPA medium-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSDPA low-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSDPA background | AF13 / LQAF13 | AF13 / LQAF13 | AF13 / LQAF13
HSUPA SRB | EF / LQEF | EF / LQEF | EF / LQEF
HSUPA SIP | EF / LQEF | EF / LQEF | EF / LQEF
HSUPA voice | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSUPA conversational | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSUPA streaming | AF33 / LQAF33 | AF33 / LQAF33 | AF33 / LQAF33
HSUPA high-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSUPA medium-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSUPA low-priority interactive | AF23 / LQAF23 | AF23 / LQAF23 | AF23 / LQAF23
HSUPA background | AF13 / LQAF13 | AF13 / LQAF13 | AF13 / LQAF13
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
AMR | RT_VBR / None | RT_VBR / None | RT_VBR / None
CS conversational | RT_VBR / None | RT_VBR / None | RT_VBR / None
CS streaming | RT_VBR / None | RT_VBR / None | RT_VBR / None
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
AMR | EF / None | EF / None | EF / None
CS conversational | AF43 / None | AF43 / None | AF43 / None
CS streaming | AF33 / None | AF33 / None | AF33 / None
TC/THP | Gold (Primary/Secondary) | Silver (Primary/Secondary) | Copper (Primary/Secondary)
SIP | EF / None | EF / None | EF / None
PS conversational | AF43 / None | AF43 / None | AF43 / None
PS streaming | AF43 / None | AF43 / None | AF43 / None
PS high-priority interactive | AF33 / None | AF33 / None | AF33 / None
PS medium-priority interactive | AF33 / None | AF33 / None | AF33 / None
PS low-priority interactive | AF33 / None | AF33 / None | AF33 / None
PS background | AF13 / None | AF13 / None | AF13 / None