In this section we present new analyses. We examine the probability of underflow vs. mean playout rate or average latency for three different scenarios.
First, we present the channel model used throughout this analysis. We again use the basic
structure of the Markov model shown in figure 1. In the good state, packets arrive in each
time slot according to a Bernoulli trial with parameter $p$. In the bad state, packets
arrive according to the same distribution, but now with probability $P_e$ these packets
contain an error and must be retransmitted. Now let us call a channel instance
$C = (\lambda_g, T_g, T_b, P_e)$, where $\lambda_g$ is the mean number of packets that
arrive during a frame period when the channel is in the good state, $T_g$ is the average
duration of the good state, and $T_b$ is the average duration of the bad state. Note that
$T_g$ and $T_b$ are related to the state transition probabilities as they appear in
figure 1. Finally, $P_e$ is the probability that a packet arrives in error.
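This arrival process can be sketched in simulation. The code below is an illustrative sketch, not taken from the paper: the names (`simulate_channel`, `p`, `p_err`, `t_good`, `t_bad`) are ours, and modeling the state durations as geometric with the given means is an assumption consistent with a two-state Markov chain.

```python
import random

def simulate_channel(num_slots, p, p_err, t_good, t_bad, seed=0):
    """Sketch of the two-state Markov channel (names are ours).

    In each time slot a packet arrives with probability p (Bernoulli).
    In the bad state an arriving packet is errored with probability p_err.
    State durations are assumed geometric with means t_good and t_bad
    slots, so the per-slot exit probability is 1/t_good or 1/t_bad.
    Returns a 0/1 list of error-free arrivals per slot.
    """
    rng = random.Random(seed)
    good = True
    arrivals = []
    for _ in range(num_slots):
        arrived = rng.random() < p
        errored = arrived and not good and rng.random() < p_err
        arrivals.append(1 if (arrived and not errored) else 0)
        # Leave the current state with probability 1 / mean_duration.
        if rng.random() < (1.0 / t_good if good else 1.0 / t_bad):
            good = not good
    return arrivals

# Example: long good periods, shorter bad periods with frequent errors.
trace = simulate_channel(100_000, p=0.8, p_err=0.5, t_good=400, t_bad=100)
print(sum(trace) / len(trace))  # empirical error-free arrival rate per slot
```

With these parameters the channel spends about four fifths of its time in the good state, so the empirical rate should land near $0.8 \cdot 0.8 + 0.2 \cdot 0.8 \cdot 0.5 = 0.72$.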
In order to proceed with the scenarios, we make the following observation concerning the
relationship between packet errors and throughput. Assuming that packets in error are
retransmitted using selective-repeat ARQ, the resulting throughput is simply given by
$\eta = 1 - P_e$ in the ideal case when an infinite number of retransmission attempts are
possible [3]. In reality, the number of retransmission attempts is limited by the
remaining playout time in the buffer and $T_r$, the amount of time it takes for a
retransmission attempt. Given a buffer size of several seconds, a $T_r$ on the order of
several hundred milliseconds, and hence a sizeable number of possible retransmission
attempts, the probability that playout stops because of a failed retransmission is small.
Thus we can alternately characterize a channel as $C = (\lambda_g, \lambda_b, T_g, T_b)$,
where $\lambda_b = \lambda_g (1 - P_e)$.
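The simplification above amounts to scaling the bad-state arrival rate by the ideal selective-repeat throughput. A minimal numeric sketch, with function names of our own choosing:

```python
def effective_bad_rate(lam_good, p_err):
    """Fold the ideal selective-repeat ARQ throughput (1 - p_err)
    into the bad-state arrival rate (illustrative helper, names ours)."""
    return lam_good * (1.0 - p_err)

def max_retx_attempts(buffer_seconds, retx_seconds):
    """Retransmission attempts possible before the buffer drains."""
    return int(buffer_seconds // retx_seconds)

# With 30 packets per frame period in the good state and a 20% error
# rate in the bad state, the channel behaves like an error-free channel
# delivering 24 packets per frame period while in the bad state.
print(effective_bad_rate(30.0, 0.2))  # 24.0

# A 4 s buffer and 300 ms per retransmission attempt leave many attempts,
# which is why a stall due to a failed retransmission is unlikely.
print(max_retx_attempts(4.0, 0.3))  # 13
```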
Next we define an adaptation scheme. An adaptation scheme is a function of the buffer
state that specifies the rate at which frames are removed. Typically, playout is slowed
when the number of frames in the buffer falls below the target level, $N_T$, and sped up
when the fullness exceeds the target level. We call the true time it should take to play
out a frame $T_f$. When the buffer falls below $N_T$, the playout time per frame is
increased by a factor $s \ge 1$. Conversely, when the buffer is overfull we reduce the
per-frame playout time by a factor $f < 1$.
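The threshold rule above can be sketched as a small function. This is only a sketch under the stated assumptions: the names (`playout_time`, `target_level`, `t_frame`) and the particular values of the slow-down factor s and speed-up factor f are ours, chosen for illustration.

```python
def playout_time(buffer_level, target_level, t_frame, s=1.25, f=0.8):
    """Per-frame playout time under the threshold adaptation rule.

    Below the target level each frame takes s * t_frame (s >= 1),
    slowing playout so the buffer can refill. Above the target each
    frame takes f * t_frame (f < 1), draining the excess. At the
    target the frame plays at its true duration t_frame.
    """
    if buffer_level < target_level:
        return s * t_frame   # slow down: stretch each frame
    if buffer_level > target_level:
        return f * t_frame   # speed up: compress each frame
    return t_frame

# Example with a 40 ms frame and a target of 50 buffered frames:
print(playout_time(20, 50, 0.040))  # below target: slower than 40 ms
print(playout_time(80, 50, 0.040))  # above target: faster than 40 ms
print(playout_time(50, 50, 0.040))  # at target: true frame time
```

A single threshold with two fixed factors is the simplest instance of such a scheme; a smoother function of the buffer state would avoid abrupt rate changes at the threshold.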